Science.gov

Sample records for 3-d point cloud

  1. Vector quantization of 3-D point clouds

    NASA Astrophysics Data System (ADS)

    Sim, Jae-Young; Kim, Chang-Su; Lee, Sang-Uk

    2005-10-01

    A geometry compression algorithm for 3-D QSplat data using vector quantization (VQ) is proposed in this work. The positions of child spheres are transformed into a local coordinate system, which is determined by the parent-child relationship. The coordinate transform makes child positions more compactly distributed in 3-D space, facilitating effective quantization. Moreover, we develop a constrained encoding method for sphere radii, which guarantees hole-free surface rendering at the decoder side. Simulation results show that the proposed algorithm provides a faithful rendering quality even at low bitrates.
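
    As a rough illustration of the idea (not the authors' exact pipeline), the Python sketch below expresses child-sphere positions in a parent-relative frame and then quantizes them against a small codebook; the frame construction, codebook size and toy data are assumptions made for the example.

      import numpy as np
      from scipy.cluster.vq import kmeans2, vq

      def parent_local_frame(parent_center, parent_normal):
          # Build an orthonormal frame anchored at the parent sphere (assumed convention).
          z = parent_normal / np.linalg.norm(parent_normal)
          helper = np.array([1.0, 0.0, 0.0]) if abs(z[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
          x = np.cross(helper, z); x /= np.linalg.norm(x)
          y = np.cross(z, x)
          return np.stack([x, y, z]), parent_center   # rotation rows and origin

      def to_local(children, rotation, origin):
          # Transform child positions into the parent-relative coordinate system.
          return (children - origin) @ rotation.T

      # Toy data: child positions clustered near the parent sphere.
      rng = np.random.default_rng(0)
      parent_c = np.array([10.0, 5.0, 2.0])
      children = parent_c + rng.normal(scale=0.05, size=(200, 3))
      R, origin = parent_local_frame(parent_c, np.array([0.0, 0.0, 1.0]))
      local = to_local(children, R, origin)

      # Vector quantization: train a small codebook and encode each local position as an index.
      codebook, _ = kmeans2(local, 16, minit='points')
      indices, _ = vq(local, codebook)          # transmitted symbols
      reconstructed = codebook[indices]         # decoder side
      print("mean quantization error:", np.linalg.norm(local - reconstructed, axis=1).mean())

    The compact distribution of child positions in the local frame is what allows a small codebook to perform adequately.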

  2. Point Cloud Visualization in an Open Source 3d Globe

    NASA Astrophysics Data System (ADS)

    De La Calle, M.; Gómez-Deck, D.; Koehler, O.; Pulido, F.

    2011-09-01

    During the last years, the usage of 3D applications in GIS has become more popular. Since the appearance of Google Earth, users are familiar with 3D environments. On the other hand, computers with 3D acceleration are now common, broadband access is widespread, and the amount of public information that can be used in GIS clients able to consume data from the Internet is constantly increasing. There are currently several libraries suitable for this kind of application. Based on these facts, and using libraries that are already developed and connected to our own developments, we are working on the implementation of a real 3D GIS with analysis capabilities. Since such a 3D GIS can be very interesting for tasks like LiDAR or laser scanner point cloud rendering and analysis, special attention is given to optimal handling of very large datasets. Glob3 will be a multidimensional GIS in which 3D point clouds can be explored and analysed, even if they consist of several million points. The latest addition to our visualization libraries is the development of a point cloud server that works regardless of the cloud's size. The server receives and processes requests from a 3D client (for example Glob3, but it could be any other, such as one based on WebGL) and delivers the data in the form of pre-processed tiles, depending on the required level of detail.

  3. The Feasibility of 3d Point Cloud Generation from Smartphones

    NASA Astrophysics Data System (ADS)

    Alsubaie, N.; El-Sheimy, N.

    2016-06-01

    This paper proposes a new technique for increasing the accuracy of directly geo-referenced, image-based 3D point clouds generated from low-cost sensors in smartphones. The smartphone's motion sensors are used to directly acquire the Exterior Orientation Parameters (EOPs) of the captured images. These EOPs, along with the Interior Orientation Parameters (IOPs) of the camera/phone, are used to reconstruct the image-based 3D point cloud. However, because smartphone motion sensors suffer from poor GPS accuracy, accumulated drift and high signal noise, inaccurate 3D mapping solutions often result. Therefore, horizontal and vertical linear features, visible in each image, are extracted and used as constraints in the bundle adjustment procedure. These constraints correct the relative position and orientation of the 3D mapping solution. Once the enhanced EOPs are estimated, the semi-global matching (SGM) algorithm is used to generate the image-based dense 3D point cloud. Statistical analysis and assessment are implemented herein in order to demonstrate the feasibility of 3D point cloud generation from the consumer-grade sensors in smartphones.

  4. 3D Building Reconstruction Using Dense Photogrammetric Point Cloud

    NASA Astrophysics Data System (ADS)

    Malihi, S.; Valadan Zoej, M. J.; Hahn, M.; Mokhtarzade, M.; Arefi, H.

    2016-06-01

    Three-dimensional models of urban areas play an important role in city planning, disaster management, city navigation and other applications. Reconstruction of 3D building models is still a challenging issue in 3D city modelling. Point clouds generated from multi-view UAV images are a novel source of spatial data, which is used in this research for building reconstruction. The process starts with the segmentation of the roof and wall point clouds into planar groups. By generating the related surfaces and using geometrical constraints, plus considering symmetry, a 3D model of the building is reconstructed. In a refinement step, dormers are extracted and their models are reconstructed. The details of the reconstructed 3D model are at the LoD3 level, with respect to the modelling of eaves, roof fractions and dormers.

  5. A Fast Method for Measuring the Similarity Between 3d Model and 3d Point Cloud

    NASA Astrophysics Data System (ADS)

    Zhang, Zongliang; Li, Jonathan; Li, Xin; Lin, Yangbin; Zhang, Shanxin; Wang, Cheng

    2016-06-01

    This paper proposes a fast method for measuring the partial Similarity between a 3D Model and a 3D point Cloud (SimMC). Measuring SimMC is crucial for many point cloud-related applications such as 3D object retrieval and inverse procedural modelling. In our proposed method, the surface area of the model and the Distance from Model to point Cloud (DistMC) are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted average of the distances between points sampled from the model and the point cloud. Similarly, the Distance from point Cloud to Model (DistCM) is defined as the average of the distances between points in the point cloud and the model. In order to avoid the huge computational burden brought by the calculation of DistCM in some traditional methods, we define SimMC as the ratio of the weighted surface area of the model to DistMC. Compared to those traditional SimMC measuring methods that are only able to measure global similarity, our method is capable of measuring partial similarity by employing a distance-weighted strategy. Moreover, our method is faster than other partial similarity assessment methods. We demonstrate the superiority of our method on both synthetic data and laser scanning data.
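
    A minimal Python sketch of the model-to-cloud distance idea, assuming points have already been sampled from the model surface and using a plain (unweighted) average in place of the paper's weighting scheme:

      import numpy as np
      from scipy.spatial import cKDTree

      def dist_model_to_cloud(model_samples, cloud_points):
          # Average distance from points sampled on the model surface to their
          # nearest neighbours in the point cloud (unweighted stand-in for DistMC).
          tree = cKDTree(cloud_points)
          distances, _ = tree.query(model_samples, k=1)
          return distances.mean()

      def sim_model_cloud(model_surface_area, model_samples, cloud_points):
          # Similarity as the ratio of surface area to model-to-cloud distance:
          # larger values mean the cloud lies closer to the model.
          return model_surface_area / dist_model_to_cloud(model_samples, cloud_points)

      # Toy example: a unit square "model" sampled on a grid, compared against a noisy cloud.
      grid = np.stack(np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20)), -1).reshape(-1, 2)
      model_samples = np.c_[grid, np.zeros(len(grid))]
      cloud = model_samples + np.random.default_rng(1).normal(scale=0.01, size=model_samples.shape)
      print(sim_model_cloud(1.0, model_samples, cloud))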

  6. Performance testing of 3D point cloud software

    NASA Astrophysics Data System (ADS)

    Varela-González, M.; González-Jorge, H.; Riveiro, B.; Arias, P.

    2013-10-01

    LiDAR systems have been used widely in recent years for many applications in the engineering field: civil engineering, cultural heritage, mining, industry and environmental engineering. One of the most important limitations of this technology is the large computational requirement involved in data processing, especially for large mobile LiDAR datasets. Several software solutions for data management are available on the market, including open-source suites; however, users often lack methodologies to verify their performance properly. In this work a methodology for LiDAR software performance testing is presented and four different suites are studied: QT Modeler, VR Mesh, AutoCAD 3D Civil and the Point Cloud Library running in software developed at the University of Vigo (SITEGI). The software based on the Point Cloud Library shows better results in point cloud loading time and CPU usage. However, it is not as strong as the commercial suites in the working set and commit size tests.
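
    A minimal sketch of how the load time and memory footprint of a point cloud loading step might be measured in Python with psutil; this is a generic stand-in rather than the paper's test protocol, and the file name is hypothetical.

      import time
      import numpy as np
      import psutil

      def measure_load(path):
          # Time a point cloud load and report the growth of resident memory.
          proc = psutil.Process()
          rss_before = proc.memory_info().rss
          t0 = time.perf_counter()
          points = np.loadtxt(path)              # stand-in loader for an ASCII XYZ file
          elapsed = time.perf_counter() - t0
          rss_after = proc.memory_info().rss
          return elapsed, (rss_after - rss_before) / 2**20, len(points)

      # Usage (hypothetical file): returns seconds, MiB of extra resident memory, point count.
      # print(measure_load("scan.xyz"))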

  7. 3D scene reconstruction based on 3D laser point cloud combining UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Huiyun; Yan, Yangyang; Zhang, Xitong; Wu, Zhenzhen

    2016-03-01

    Capturing and modeling 3D information of the built environment is a big challenge. A number of techniques and technologies are now in use, including GPS, photogrammetric applications and remote sensing applications. The experiment uses multi-source data fusion technology for 3D scene reconstruction based on the principle of 3D laser scanning technology, with the laser point cloud data as the basis and a Digital Ortho-photo Map as an auxiliary, and uses 3ds Max software as a basic tool for three-dimensional scene reconstruction. The article covers data acquisition, data preprocessing and 3D scene construction. The results show that the 3D scene has good truthfulness, and that the accuracy of the scene meets the needs of 3D scene construction.

  8. Underwater 3d Modeling: Image Enhancement and Point Cloud Filtering

    NASA Astrophysics Data System (ADS)

    Sarakinou, I.; Papadimitriou, K.; Georgoula, O.; Patias, P.

    2016-06-01

    This paper examines the effects of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically, it evaluates the combined effects of manual editing of the images' radiometry (captured at shallow depths) and the selection of parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck and a small boat's wreck) captured at three different depths (3.5 m, 10 m and 14 m respectively). Four models have been created from the first dataset (seafloor) in order to evaluate the results of applying image enhancement techniques and point cloud filtering. The main process for this preliminary study included a) the definition of parameters for the point cloud filtering and the creation of a reference model, b) the radiometric editing of images, followed by the creation of three improved models, and c) the assessment of results by comparing the visual and geometric quality of the improved models against the reference one. Finally, the selected technique is tested on two other datasets in order to examine its appropriateness for different depths (at 10 m and 14 m) and different objects (part of a wreck and a small boat's wreck) in the context of ongoing research in the Laboratory of Photogrammetry and Remote Sensing.

  9. 3-D Object Recognition from Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Smith, W.; Walker, A. S.; Zhang, B.

    2011-09-01

    The market for real-time 3-D mapping includes not only traditional geospatial applications but also navigation of unmanned autonomous vehicles (UAVs). Massively parallel processes such as graphics processing unit (GPU) computing make real-time 3-D object recognition and mapping achievable. Geospatial technologies such as digital photogrammetry and GIS offer advanced capabilities to produce 2-D and 3-D static maps using UAV data. The goal is to develop real-time UAV navigation through increased automation. It is challenging for a computer to identify a 3-D object such as a car, a tree or a house, yet automatic 3-D object recognition is essential to increasing the productivity of geospatial data such as 3-D city site models. In the past three decades, researchers have used radiometric properties to identify objects in digital imagery with limited success, because these properties vary considerably from image to image. Consequently, our team has developed software that recognizes certain types of 3-D objects within 3-D point clouds. Although our software is developed for modeling, simulation and visualization, it has the potential to be valuable in robotics and UAV applications. The locations and shapes of 3-D objects such as buildings and trees are easily recognizable by a human from a brief glance at a representation of a point cloud such as terrain-shaded relief. The algorithms to extract these objects have been developed and require only the point cloud and minimal human inputs such as a set of limits on building size and a request to turn on a squaring option. The algorithms use both digital surface model (DSM) and digital elevation model (DEM), so software has also been developed to derive the latter from the former. The process continues through the following steps: identify and group 3-D object points into regions; separate buildings and houses from trees; trace region boundaries; regularize and simplify boundary polygons; construct complex roofs. Several case
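
    The extraction steps above start from the difference between the surface and terrain models; a minimal sketch of that first step (normalized DSM thresholding and region grouping) is given below, with the grid, height threshold and minimum region size assumed for illustration rather than taken from the software described.

      import numpy as np
      from scipy import ndimage

      def object_regions(dsm, dem, min_height=2.5, min_cells=20):
          # Label candidate 3-D object regions (buildings, trees) from gridded surface
          # and terrain models: threshold the normalized DSM, then group cells.
          ndsm = dsm - dem                           # height above ground per grid cell
          mask = ndsm > min_height                   # keep cells clearly above the terrain
          labels, n = ndimage.label(mask)            # group connected cells into regions
          sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
          keep = [i + 1 for i, s in enumerate(sizes) if s >= min_cells]
          labels[~np.isin(labels, keep)] = 0         # drop tiny regions as noise
          return ndsm, labels

      # Toy grids: flat terrain with one 10 m high "building" block.
      dem = np.zeros((50, 50)); dsm = dem.copy(); dsm[10:20, 10:25] = 10.0
      ndsm, labels = object_regions(dsm, dem)
      print("regions found:", labels.max())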

  10. Comparison of 3D interest point detectors and descriptors for point cloud fusion

    NASA Astrophysics Data System (ADS)

    Hänsch, R.; Weber, T.; Hellwich, O.

    2014-08-01

    The extraction and description of keypoints as salient image parts has a long tradition within the processing and analysis of 2D images. Nowadays, 3D data is gaining more and more importance. This paper discusses the benefits and limitations of keypoints for the task of fusing multiple 3D point clouds. For this goal, several combinations of 3D keypoint detectors and descriptors are tested. The experiments are based on 3D scenes with varying properties, including 3D scanner data as well as Kinect point clouds. The obtained results indicate that the specific method used to extract and describe keypoints in 3D data has to be carefully chosen. In many cases the accuracy suffers from a too strong reduction of the available points to keypoints.
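
    One concrete detector/descriptor combination, sketched with the Open3D library (ISS keypoints and FPFH descriptors); the API shown reflects recent Open3D versions, the radii are illustrative, and this is not necessarily one of the combinations evaluated in the paper.

      import open3d as o3d

      def keypoints_and_descriptors(pcd, normal_radius=0.1, feature_radius=0.25):
          # Estimate normals, detect ISS keypoints and describe them with FPFH histograms.
          pcd.estimate_normals(
              o3d.geometry.KDTreeSearchParamHybrid(radius=normal_radius, max_nn=30))
          keypoints = o3d.geometry.keypoint.compute_iss_keypoints(pcd)
          keypoints.estimate_normals(
              o3d.geometry.KDTreeSearchParamHybrid(radius=normal_radius, max_nn=30))
          fpfh = o3d.pipelines.registration.compute_fpfh_feature(
              keypoints,
              o3d.geometry.KDTreeSearchParamHybrid(radius=feature_radius, max_nn=100))
          return keypoints, fpfh

      # Usage (hypothetical file): descriptors of two clouds can then be matched, e.g. by
      # nearest-neighbour search in feature space, to estimate a fusing transformation.
      # kp, desc = keypoints_and_descriptors(o3d.io.read_point_cloud("scan.ply"))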

  11. Filtering method for 3D laser scanning point cloud

    NASA Astrophysics Data System (ADS)

    Liu, Da; Wang, Li; Hao, Yuncai; Zhang, Jun

    2015-10-01

    In recent years, with the rapid development of hardware and software for three-dimensional model acquisition, three-dimensional laser scanning technology has been utilized in various fields, especially in space exploration. Filtering the point cloud is very important before using the data. In this paper, considering both processing quality and computing speed, an improved mean-shift point cloud filtering method is proposed. Firstly, by analyzing the relevance of the normal vectors between the point being processed and its nearby points, the iterative neighborhood of the mean shift is selected dynamically, so that high-frequency noise is constrained. Secondly, the normal vector of the point being processed is updated accordingly. Finally, an updated position is calculated for each point, and each point is moved along its normal vector to the updated position. The experimental results show that large features are retained while small sharp features are also preserved for objects of different sizes and shapes, so the target feature information is protected precisely. The computational complexity of the proposed method is not high; it yields high-precision results at fast speed, so it is very suitable for space applications. It can also be utilized in civil applications such as large object measurement, industrial measurement, car navigation, etc. In the future, filtering with the help of point strength (intensity) will be further exploited.
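
    A rough, simplified Python sketch of normal-guided mean-shift smoothing (not the authors' exact update rules, and without their normal update step): each point is moved along its own normal toward the weighted mean of its neighbours, where neighbours with similar normals carry more weight.

      import numpy as np
      from scipy.spatial import cKDTree

      def mean_shift_filter(points, normals, radius=0.05, normal_sigma=0.3, iterations=3):
          filtered = points.copy()
          for _ in range(iterations):
              tree = cKDTree(filtered)
              updated = filtered.copy()
              for i, (p, n) in enumerate(zip(filtered, normals)):
                  idx = tree.query_ball_point(p, radius)                # local neighbourhood
                  nbrs = filtered[idx]
                  w = np.exp(-(1.0 - normals[idx] @ n) / normal_sigma)  # normal-vector relevance
                  mean = (w[:, None] * nbrs).sum(0) / w.sum()
                  shift = np.dot(mean - p, n)                           # move only along the normal
                  updated[i] = p + shift * n
              filtered = updated
          return filtered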

  12. Feature-Based Quality Evaluation of 3d Point Clouds - Study of the Performance of 3d Registration Algorithms

    NASA Astrophysics Data System (ADS)

    Ridene, T.; Goulette, F.; Chendeb, S.

    2013-08-01

    The production of realistic 3D map databases is continuously growing. We studied an approach to producing 3D mapping databases based on the fusion of heterogeneous 3D data. To this end, a rigid registration process was performed. Before starting the modeling process, we need to validate the quality of the registration results, and this is one of the most difficult and open research problems. In this paper, we suggest a new method for the evaluation of 3D point clouds based on feature extraction and comparison with a 2D reference model. This method is based on two metrics: binary and fuzzy.

  13. Edge features extraction from 3D laser point cloud based on corresponding images

    NASA Astrophysics Data System (ADS)

    Li, Xin-feng; Zhao, Zi-ming; Xu, Guo-qing; Geng, Yan-long

    2013-09-01

    An extraction method for edge features from a 3D laser point cloud based on corresponding images is proposed. After the registration of the point cloud and the corresponding image, sub-pixel edges are extracted from the image using a gray moment algorithm. The sub-pixel edges are then projected onto the point cloud within fitted scan-lines. Finally, the edge features are obtained by linking the crossing points. The experimental results demonstrate that the method guarantees accurate, fine extraction.

  14. Automated Mosaicking of Multiple 3d Point Clouds Generated from a Depth Camera

    NASA Astrophysics Data System (ADS)

    Kim, H.; Yoon, W.; Kim, T.

    2016-06-01

    In this paper, we propose a method for the automated mosaicking of multiple 3D point clouds generated from a depth camera. A depth camera generates depth data using the ToF (Time of Flight) method and intensity data from the intensity of the returned signal. The depth camera used in this paper was an SR4000 from MESA Imaging. This camera generates a depth map and an intensity map of 176 x 144 pixels. The generated depth map stores physical depth data with millimeter precision. The generated intensity map contains texture data with considerable noise. We used the intensity (texture) maps for extracting tiepoints, and the depth maps for assigning z coordinates to the tiepoints and for point cloud mosaicking. There are four steps in the proposed mosaicking method. In the first step, we acquired multiple 3D point clouds by rotating the depth camera and capturing data per rotation. In the second step, we estimated the 3D-3D transformation relationships between subsequent point clouds. For this, 2D tiepoints were extracted automatically from the corresponding two intensity maps. They were converted into 3D tiepoints using the depth maps. We used a 3D similarity transformation model for estimating the 3D-3D transformation relationships. In the third step, we converted the local 3D-3D transformations into a global transformation for all point clouds with respect to a reference one. In the last step, the extent of the single mosaicked depth map was calculated and depth values per mosaic pixel were determined by a ray tracing method. For the experiments, 8 depth maps and intensity maps were used. After the four steps, an output mosaicked depth map of 454 x 144 pixels was generated. It is expected that the proposed method will be useful for developing an effective 3D indoor mapping method in the future.
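
    A compact sketch of the second step, estimating a 3D similarity transformation (scale, rotation, translation) from corresponding 3D tiepoints; this is the standard Umeyama/Procrustes solution in Python, not necessarily the estimator used by the authors.

      import numpy as np

      def similarity_transform(src, dst):
          # Return s, R, t such that dst is approximately s * R @ src + t (least squares).
          mu_s, mu_d = src.mean(0), dst.mean(0)
          xs, xd = src - mu_s, dst - mu_d
          U, S, Vt = np.linalg.svd(xd.T @ xs / len(src))
          D = np.eye(3)
          if np.linalg.det(U @ Vt) < 0:        # guard against reflections
              D[2, 2] = -1
          R = U @ D @ Vt
          s = np.trace(np.diag(S) @ D) / (xs ** 2).sum() * len(src)
          t = mu_d - s * R @ mu_s
          return s, R, t

      # Self-check with synthetic tiepoints.
      rng = np.random.default_rng(0)
      src = rng.random((10, 3))
      R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
      if np.linalg.det(R_true) < 0:
          R_true[:, 0] *= -1                   # make it a proper rotation
      dst = 1.3 * src @ R_true.T + np.array([0.5, -0.2, 2.0])
      s, R, t = similarity_transform(src, dst)
      print(np.allclose(dst, s * src @ R.T + t, atol=1e-6))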

  15. Dense 3d Point Cloud Generation from UAV Images from Image Matching and Global Optimization

    NASA Astrophysics Data System (ADS)

    Rhee, S.; Kim, T.

    2016-06-01

    3D spatial information from unmanned aerial vehicle (UAV) images is usually provided in the form of 3D point clouds. For various UAV applications, it is important to generate dense 3D point clouds automatically over the entire extent of the UAV images. In this paper, we aim to apply image matching for the generation of local point clouds over a pair or group of images, and global optimization to combine the local point clouds over the whole region of interest. We applied two types of image matching, an object space-based matching technique and an image space-based matching technique, and compared the performance of the two. The object space-based matching used here sets a list of candidate height values for a fixed horizontal position in object space. For each height, its corresponding image point is calculated and similarity is measured by grey-level correlation. The image space-based matching used here is a modified relaxation matching. We devised a global optimization scheme for finding optimal pairs (or groups) to apply image matching, defining the local match region in image or object space, and merging the local point clouds into a global one. For optimal pair selection, tiepoints among images were extracted and a stereo coverage network was defined by forming a maximum spanning tree using the tiepoints. From the experiments, we confirmed that through image matching and global optimization, 3D point clouds were generated successfully. However, the results also revealed some limitations. In the case of the image-based matching results, we observed some blanks in the 3D point clouds. In the case of the object space-based matching results, we observed more blunders than with image-based matching, as well as noisy local height variations. We suspect these might be due to inaccurate orientation parameters. The work in this paper is still ongoing. We will further test our approach with more precise orientation parameters.
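
    A schematic Python sketch of the object-space matching idea: for a fixed horizontal position, candidate heights are tested and the height whose reprojected image patches correlate best is kept. The projection function, patch size and the restriction to an image pair are placeholders (they depend on the orientation parameters), not the authors' code.

      import numpy as np

      def ncc(a, b):
          # Normalized cross-correlation between two equally sized grey-level patches.
          a, b = a - a.mean(), b - b.mean()
          return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

      def best_height(x, y, z_candidates, project, images, half=5):
          # Pick the candidate height whose reprojected patches agree best across images.
          scores = []
          for z in z_candidates:
              patches = []
              for img, params in images:
                  u, v = project(x, y, z, params)            # hypothetical projection function
                  patches.append(img[v - half:v + half + 1, u - half:u + half + 1])
              scores.append(ncc(patches[0], patches[1]))     # grey-level correlation of the pair
          return z_candidates[int(np.argmax(scores))]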

  16. Unlocking the scientific potential of complex 3D point cloud dataset: new classification and 3D comparison methods

    NASA Astrophysics Data System (ADS)

    Lague, D.; Brodu, N.; Leroux, J.

    2012-12-01

    Ground-based lidar and photogrammetric techniques are increasingly used to track the evolution of natural surfaces in 3D at an unprecedented resolution and precision. The range of applications encompasses many types of natural surfaces with different geometries and roughness characteristics (landslides, cliff erosion, river beds, bank erosion, etc.). Unravelling surface change in these contexts requires comparing large point clouds in 2D or 3D. The most commonly used method in geomorphology is based on a 2D difference of the gridded point clouds. Yet this is hardly adapted to many 3D natural environments such as rivers (with horizontal beds and vertical banks), while gridding complex rough surfaces is a complex task. On the other hand, tools for performing 3D comparison are scarce and may require meshing the point clouds, which is difficult on rough natural surfaces. Moreover, existing 3D comparison tools do not provide an explicit calculation of confidence intervals that would factor in registration errors, roughness effects and instrument-related position uncertainties. To unlock this problem, we developed the first algorithm combining a 3D measurement of surface change directly on point clouds with an estimate of spatially variable confidence intervals (called M3C2). The method has two steps: (1) surface normal estimation and orientation in 3D at a scale consistent with the local roughness; (2) measurement of mean surface change along the normal direction with explicit calculation of a local confidence interval. Comparison with existing 3D methods based on a closest-point calculation demonstrates the higher precision of the M3C2 method when mm-scale changes need to be detected. The M3C2 method is also simple to use as it does not require surface meshing or gridding, and is not sensitive to missing data or changes in point density. We also present a 3D classification tool (CANUPO) for vegetation removal based on a new geometrical measure: the multi
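
    A much-simplified Python sketch of the M3C2 idea (the reference implementation uses cylindrical neighbourhoods, scale selection and a registration error term): the change at a core point is the difference of the mean positions of the two clouds projected onto the local normal, with a confidence interval derived from the local spread.

      import numpy as np
      from scipy.spatial import cKDTree

      def m3c2_like(core, normal, cloud1, cloud2, radius=0.5, reg_error=0.0):
          idx1 = cKDTree(cloud1).query_ball_point(core, radius)
          idx2 = cKDTree(cloud2).query_ball_point(core, radius)
          d1 = (cloud1[idx1] - core) @ normal        # signed distances along the normal, epoch 1
          d2 = (cloud2[idx2] - core) @ normal        # signed distances along the normal, epoch 2
          change = d2.mean() - d1.mean()
          # 95% confidence interval on the mean change, plus an optional registration term.
          ci = 1.96 * np.sqrt(d1.var() / len(d1) + d2.var() / len(d2)) + reg_error
          return change, ci, abs(change) > ci        # distance, interval, significance flag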

  17. Dense point-cloud creation using superresolution for a monocular 3D reconstruction system

    NASA Astrophysics Data System (ADS)

    Diskin, Yakov; Asari, Vijayan K.

    2012-05-01

    We present an enhanced 3D reconstruction algorithm designed to support an autonomously navigated unmanned aerial system (UAS). The algorithm focuses on the 3D reconstruction of a scene using only a single moving camera. In this way, the system can be used to construct a point cloud model of its unknown surroundings. The original reconstruction process, resulting in a point cloud, was computed based on feature matching and depth triangulation analysis. Although dense, this original model was hindered by its low disparity resolution. As feature points were matched from frame to frame, the resolution of the input images and the discrete nature of disparities limited the depth computations within a scene. With the recent addition of nonlinear super-resolution preprocessing steps, the accuracy of the point cloud, which relies on precise disparity measurement, has significantly increased. Using a pixel-by-pixel approach, the super-resolution technique computes the phase congruency of each pixel's neighborhood and produces nonlinearly interpolated high-resolution input frames. Thus, a feature point travels across more finely resolved discrete disparities. Also, the quantity of points within the 3D point cloud model is significantly increased, since the number of features is directly proportional to the resolution and high frequencies of the input image. The contribution of the newly added preprocessing steps is measured by evaluating the density and accuracy of the reconstructed point cloud for autonomous navigation and mapping tasks within unknown environments.
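
    A small sketch of why disparity resolution limits depth precision in this kind of triangulation: depth is inversely proportional to disparity, so the finer (sub-pixel) disparities enabled by super-resolved frames produce finer depth steps. The camera values below are illustrative only.

      def depth_from_disparity(disparity_px, focal_px, baseline_m):
          # Classical triangulation for a rectified pair: Z = f * B / d.
          return focal_px * baseline_m / disparity_px

      f, B = 800.0, 0.10                     # focal length [px] and camera translation [m]
      for d in (4.0, 4.5, 5.0):              # half-pixel disparity steps
          print(d, depth_from_disparity(d, f, B))
      # With integer-only disparities the representable depths jump coarsely;
      # sub-pixel disparities (e.g. after super-resolution) fill in the gaps.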

  18. Fast Probabilistic Fusion of 3d Point Clouds via Occupancy Grids for Scene Classification

    NASA Astrophysics Data System (ADS)

    Kuhn, Andreas; Huang, Hai; Drauschke, Martin; Mayer, Helmut

    2016-06-01

    High resolution consumer cameras on Unmanned Aerial Vehicles (UAVs) allow for cheap acquisition of highly detailed images, e.g., of urban regions. Via image registration by means of Structure from Motion (SfM) and Multi View Stereo (MVS) the automatic generation of huge amounts of 3D points with a relative accuracy in the centimeter range is possible. Applications such as semantic classification have a need for accurate 3D point clouds, but do not benefit from an extremely high resolution/density. In this paper, we, therefore, propose a fast fusion of high resolution 3D point clouds based on occupancy grids. The result is used for semantic classification. In contrast to state-of-the-art classification methods, we accept a certain percentage of outliers, arguing that they can be considered in the classification process when a per point belief is determined in the fusion process. To this end, we employ an octree-based fusion which allows for the derivation of outlier probabilities. The probabilities give a belief for every 3D point, which is essential for the semantic classification to consider measurement noise. For an example point cloud with half a billion 3D points (cf. Figure 1), we show that our method can reduce runtime as well as improve classification accuracy and offers high scalability for large datasets.
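
    A simplified Python sketch of occupancy-based fusion with a per-point belief: points are binned into voxels, and each point's belief is derived from the support (number of contributing points) of its voxel, so isolated outliers receive a low belief. The real method uses an octree and a probabilistic model; the voxel size and support threshold here are illustrative.

      import numpy as np
      from collections import Counter

      def fuse_with_belief(points, voxel=0.1, min_support=3):
          keys = [tuple(k) for k in np.floor(points / voxel).astype(int)]
          support = Counter(keys)                            # occupancy count per voxel
          belief = np.array([support[k] for k in keys], dtype=float)
          belief /= belief.max()                             # normalise beliefs to [0, 1]
          inliers = np.array([support[k] >= min_support for k in keys])
          return points[inliers], belief[inliers]            # fused cloud with per-point belief

      rng = np.random.default_rng(0)
      dense = rng.normal(size=(5000, 3)) * 0.2               # a dense blob of surface points
      outliers = rng.uniform(-5, 5, size=(50, 3))            # scattered outliers
      cloud, belief = fuse_with_belief(np.vstack([dense, outliers]))
      print(len(cloud), "points kept of", 5050)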

  19. Graph-Based Compression of Dynamic 3D Point Cloud Sequences.

    PubMed

    Thanou, Dorina; Chou, Philip A; Frossard, Pascal

    2016-04-01

    This paper addresses the problem of compression of 3D point cloud sequences that are characterized by moving 3D positions and color attributes. As temporally successive point cloud frames share some similarities, motion estimation is key to effective compression of these sequences. It, however, remains a challenging problem as the point cloud frames have varying numbers of points without explicit correspondence information. We represent the time-varying geometry of these sequences with a set of graphs, and consider 3D positions and color attributes of the point clouds as signals on the vertices of the graphs. We then cast motion estimation as a feature-matching problem between successive graphs. The motion is estimated on a sparse set of representative vertices using new spectral graph wavelet descriptors. A dense motion field is eventually interpolated by solving a graph-based regularization problem. The estimated motion is finally used for removing the temporal redundancy in the predictive coding of the 3D positions and the color characteristics of the point cloud sequences. Experimental results demonstrate that our method is able to accurately estimate the motion between consecutive frames. Moreover, motion estimation is shown to bring a significant improvement in terms of the overall compression performance of the sequence. To the best of our knowledge, this is the first paper that exploits both the spatial correlation inside each frame (through the graph) and the temporal correlation between the frames (through the motion estimation) to compress the color and the geometry of 3D point cloud sequences in an efficient way. PMID:26891486

  20. Towards 3D Matching of Point Clouds Derived from Oblique and Nadir Airborne Imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Ming

    Because of the inexpensive, highly efficient image collection process and the rich 3D and texture information present in the images, the combined use of 2D airborne nadir and oblique images to reconstruct 3D geometric scenes has a promising market for future commercial usage such as urban planning or first response. The methodology introduced in this thesis provides a feasible way towards fully automated 3D city modeling from oblique and nadir airborne imagery. In this thesis, the difficulty of matching 2D images with large disparity is avoided by grouping the images first and applying the 3D registration afterward. The procedure starts with the extraction of point clouds using a modified version of the RIT 3D Extraction Workflow. Then the point clouds are refined by noise removal and surface smoothing processes. Since the point clouds extracted from different image groups use independent coordinate systems, there are translation, rotation and scale differences between them. To figure out these differences, 3D keypoints and their features are extracted. For each pair of point clouds, an initial alignment and a more accurate registration are applied in succession. The final transform matrix contains the parameters describing the required translation, rotation and scale. The methodology presented in the thesis has been shown to behave well for test data. The robustness of this method is discussed by adding artificial noise to the test data. For Pictometry oblique aerial imagery, the initial alignment provides a rough alignment result, which contains a larger offset compared to that of the test data because of the low quality of the point clouds themselves, but it can be further refined through the final optimization. The accuracy of the final registration result is evaluated by comparing it to the result obtained from manual selection of matched points. Using the method introduced, point clouds extracted from different image groups could be combined with each other to build a

  1. Extension of RCC Topological Relations for 3d Complex Objects Components Extracted from 3d LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Xing, Xu-Feng; Mostafavi, Mir Abolfazl; Wang, Chen

    2016-06-01

    Topological relations are fundamental for the qualitative description, querying and analysis of a 3D scene. Although topological relations for 2D objects have been extensively studied and implemented in GIS applications, their direct extension to 3D is very challenging and they cannot be directly applied to represent relations between components of complex 3D objects represented by 3D B-Rep models in R3. Herein we present an extended Region Connection Calculus (RCC) model to express and formalize topological relations between planar regions for creating 3D models represented by the Boundary Representation model in R3. We propose a new dimension-extended 9-Intersection model to represent the basic relations among components of a complex object, including disjoint, meet and intersect. The last element in the 3x3 matrix records the details of connection through the common parts of two regions and the intersecting line of two planes. Additionally, this model can deal with the case of planar regions with holes. Finally, the geometric information is transformed into a list of strings consisting of topological relations between two planar regions and detailed connection information. The experiments show that the proposed approach helps to identify topological relations of planar segments of point clouds automatically.

  2. 3D campus modeling using LiDAR point cloud data

    NASA Astrophysics Data System (ADS)

    Kawata, Yoshiyuki; Yoshii, Satoshi; Funatsu, Yukihiro; Takemata, Kazuya

    2012-10-01

    The importance of having a 3D urban city model is recognized in many applications, such as risk and disaster management, city planning and development, and others. As an example of an urban model, we manually reconstructed the 3D KIT campus in this study by utilizing airborne LiDAR point cloud data. The automatic extraction of building shapes is left for future work.

  3. Adaptive noise suppression technique for dense 3D point cloud reconstructions from monocular vision

    NASA Astrophysics Data System (ADS)

    Diskin, Yakov; Asari, Vijayan K.

    2012-10-01

    Mobile vision-based autonomous vehicles use video frames from multiple angles to construct a 3D model of their environment. In this paper, we present a post-processing adaptive noise suppression technique to enhance the quality of the computed 3D model. Our near real-time reconstruction algorithm uses each pair of frames to compute the disparities of tracked feature points, translating the distance a feature has traveled within the frame in pixels into real-world depth values. As a result, these tracked feature points are plotted to form a dense and colorful point cloud. Due to the inevitable small vibrations of the camera and the mismatches within the feature tracking algorithm, the point cloud model contains a significant amount of misplaced points appearing as noise. The proposed noise suppression technique utilizes the spatial information of each point to unify points of similar texture and color into objects while simultaneously removing noise dissociated from any nearby objects. The noise filter combines all the points of similar depth into 2D layers throughout the point cloud model. By applying erosion and dilation techniques we are able to eliminate the unwanted floating points while retaining the points of larger objects. To reverse the compression process, we transform the 2D layers back into the 3D model, allowing points to return to their original position without the attached noise components. We evaluate the resulting noiseless point cloud by utilizing an unmanned ground vehicle to perform obstacle avoidance tasks. The contribution of the noise suppression technique is measured by evaluating the accuracy of the 3D reconstruction.

  4. Contextual Classification of Point Cloud Data by Exploiting Individual 3d Neighbourhoods

    NASA Astrophysics Data System (ADS)

    Weinmann, M.; Schmidt, A.; Mallet, C.; Hinz, S.; Rottensteiner, F.; Jutzi, B.

    2015-03-01

    The fully automated analysis of 3D point clouds is of great importance in photogrammetry, remote sensing and computer vision. For reliably extracting objects such as buildings, road inventory or vegetation, many approaches rely on the results of a point cloud classification, where each 3D point is assigned a respective semantic class label. Such an assignment, in turn, typically involves statistical methods for feature extraction and machine learning. Whereas the different components of the processing workflow have been extensively, but separately, investigated in recent years, their connection by sharing the results of crucial tasks across all components has not yet been addressed. This connection not only encapsulates the interrelated issues of neighborhood selection and feature extraction, but also the issue of how to involve spatial context in the classification step. In this paper, we present a novel and generic approach for 3D scene analysis which relies on (i) individually optimized 3D neighborhoods for (ii) the extraction of distinctive geometric features and (iii) the contextual classification of point cloud data. For a labeled benchmark dataset, we demonstrate the beneficial impact of involving contextual information in the classification process and show that using individual 3D neighborhoods of optimal size significantly increases the quality of the results for both pointwise and contextual classification.
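
    A brief sketch of how an individually optimized 3D neighborhood can be chosen per point, following the common eigenentropy-minimization idea used in this line of work: several neighborhood sizes k are tried and the one whose covariance eigenvalues are most "ordered" (lowest Shannon entropy) is kept. The candidate k values are illustrative.

      import numpy as np
      from scipy.spatial import cKDTree

      def optimal_k(points, index, candidates=(10, 20, 40, 80, 160)):
          tree = cKDTree(points)
          best_k, best_entropy = None, np.inf
          for k in candidates:
              _, idx = tree.query(points[index], k=k)
              evals = np.linalg.eigvalsh(np.cov(points[idx].T))   # local covariance eigenvalues
              p = np.clip(evals, 1e-12, None) / evals.sum()       # normalise to a distribution
              entropy = -(p * np.log(p)).sum()                    # eigenentropy of the neighborhood
              if entropy < best_entropy:
                  best_k, best_entropy = k, entropy
          return best_k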

  5. Image-Based Airborne LiDAR Point Cloud Encoding for 3d Building Model Retrieval

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Chen; Lin, Chao-Hung

    2016-06-01

    With the development of Web 2.0 and cyber city modeling, an increasing number of 3D models have become available on web-based model-sharing platforms, with many applications such as navigation, urban planning, and virtual reality. Based on the concept of data reuse, a 3D model retrieval system is proposed to retrieve building models similar to a user-specified query. The basic idea behind this system is to reuse existing 3D building models instead of reconstructing them from point clouds. To retrieve models efficiently, the models in databases are generally encoded compactly using a shape descriptor. However, most of the geometric descriptors in related works are applied to polygonal models. In this study, the input query of the model retrieval system is a point cloud acquired by Light Detection and Ranging (LiDAR) systems, because of their efficient scene scanning and spatial information collection. Using point clouds with sparse, noisy, and incomplete sampling as input queries is more difficult than using 3D models. Because the building roof is more informative than other parts of an airborne LiDAR point cloud, an image-based approach is proposed to encode both the point clouds from input queries and the 3D models in databases. The main goal of data encoding is that the models in the database and the input point clouds can be consistently encoded. Firstly, top-view depth images of buildings are generated to represent the geometric surface of a building roof. Secondly, geometric features are extracted from the depth images based on the height, edges and planes of the building. Finally, descriptors are extracted from spatial histograms and used in the 3D model retrieval system. For data retrieval, the models are retrieved by matching the encoding coefficients of point clouds and building models. In the experiments, a database including about 900,000 3D models collected from the Internet is used for the evaluation of data retrieval. The results of the proposed method show a clear superiority
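
    A small Python sketch of the top-view depth image step: points are binned on an XY grid and the highest z per cell is kept, which mostly captures the roof surface. The cell size is an assumed parameter, and the subsequent feature extraction on the image is not shown.

      import numpy as np

      def top_view_depth_image(points, cell=0.5):
          xy_min = points[:, :2].min(0)
          cols, rows = np.floor((points[:, :2] - xy_min) / cell).astype(int).T
          image = np.full((rows.max() + 1, cols.max() + 1), np.nan)
          for r, c, z in zip(rows, cols, points[:, 2]):
              if np.isnan(image[r, c]) or z > image[r, c]:
                  image[r, c] = z                      # keep the topmost return per cell
          return image

      # Toy roof at 8 m height over a 20 m x 20 m footprint.
      rng = np.random.default_rng(0)
      roof = np.c_[rng.uniform(0, 20, (2000, 2)), 8 + rng.normal(0, 0.1, 2000)]
      print(top_view_depth_image(roof).shape)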

  6. Facets: a CloudCompare Plugin to Extract Geological Planes from Unstructured 3d Point Clouds

    NASA Astrophysics Data System (ADS)

    Dewez, T. J. B.; Girardeau-Montaut, D.; Allanic, C.; Rohmer, J.

    2016-06-01

    Geological planar facets (stratification, faults, joints…) are key features for unravelling the tectonic history of a rock outcrop or assessing the stability of a hazardous rock cliff. Measuring their spatial attitude (dip and strike) is generally performed by hand with a compass/clinometer, which is time consuming, requires some degree of censoring (i.e. refusing to measure some features judged unimportant at the time), is not always possible for fractures higher up on the outcrop, and is somewhat hazardous. 3D virtual geological outcrops hold the potential to alleviate these issues. A tool for efficiently segmenting massive 3D point clouds into individual planar facets, inside a convenient software environment, was lacking. FACETS is a dedicated plugin within CloudCompare v2.6.2 (http://cloudcompare.org/) implemented to perform planar facet extraction, calculate their dip and dip direction (i.e. azimuth of steepest descent) and report the extracted data in interactive stereograms. Two algorithms perform the segmentation: Kd-Tree and Fast Marching. Both divide the point cloud into sub-cells, then compute elementary planar objects and aggregate them progressively, according to a planarity threshold, into polygons. The boundaries of the polygons are adjusted around the segmented points with a tension parameter, and the facet polygons can be exported as 3D polygon shapefiles towards third-party GIS software or simply as ASCII comma-separated files. One of the great features of FACETS is the capability to explore not only planar objects but also 3D points with normals with the stereogram tool. Poles can be readily displayed, queried and manually segmented interactively. The plugin blends seamlessly into CloudCompare to leverage all its other 3D point cloud manipulation features. A demonstration of the tool is presented to illustrate these different features. While designed for geological applications, FACETS could be more widely applied to any planar
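
    A minimal sketch of the dip / dip direction computation for a single facet: fit a plane to the facet's points by SVD and convert the unit normal into a dip angle and a dip direction (azimuth of steepest descent, measured clockwise from north = +y). This follows the usual structural geology convention, not FACETS' internal code.

      import numpy as np

      def dip_and_dip_direction(points):
          centered = points - points.mean(0)
          normal = np.linalg.svd(centered)[2][-1]     # right singular vector of the smallest value
          if normal[2] < 0:
              normal = -normal                        # orient the normal upwards
          dip = np.degrees(np.arccos(normal[2]))                    # angle from the horizontal
          dip_dir = np.degrees(np.arctan2(normal[0], normal[1])) % 360.0
          return dip, dip_dir

      # A synthetic facet dipping 30 degrees towards the east (090).
      x, y = np.meshgrid(np.linspace(0, 5, 15), np.linspace(0, 5, 15))
      z = -np.tan(np.radians(30)) * x
      print(dip_and_dip_direction(np.c_[x.ravel(), y.ravel(), z.ravel()]))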

  7. The Engelbourg's ruins: from 3D TLS point cloud acquisition to 3D virtual and historic models

    NASA Astrophysics Data System (ADS)

    Koehl, Mathieu; Berger, Solveig; Nobile, Sylvain

    2014-05-01

    The Castle of Engelbourg was built at the beginning of the 13th century, at the top of the Schlossberg. It is situated in the municipality of Thann (France), at the crossroads of Alsace and Lorraine, and dominates the outlet of the valley of the Thur. Its strategic position was one of the causes of its systematic destruction during the 17th century, and Louis XIV sealed its fate by ordering its demolition in 1673. Today only a few vestiges remain, among which a section of the main tower, about 7 m in diameter and 4 m wide, lying on its side, a unique characteristic in the regional castral landscape. It is visible from the valley, was named "the Eye of the witch", and became a key attraction of the region. The site, which extends over approximately one hectare, has for several years been the object of numerous archaeological studies and is today at the heart of a project to valorize the vestiges. It was indeed a key objective, among the numerous planned works, to realize a 3D model of the site in its current state, in other words a virtual "as-captured" model, exploitable from a cultural and tourist point of view as well as by scientists for archaeological research. The team of the ICube/INSA lab was responsible for the realization of this model, from the acquisition of the data to the delivery of the virtual model, using 3D TLS and topographic surveying methods. It was also planned to integrate into this 3D model 2D archive data stemming from series of former excavations. The objectives of this project were the following: • Acquisition of 3D digital data of the site and 3D modelling • Digitization of the 2D archaeological data and integration into the 3D model • Implementation of a database connected to the 3D model • Virtual visit of the site The obtained results allowed us to visualize every 3D object individually, in several forms (point clouds, 3D meshed objects and models, etc.) and at several levels of detail

  8. Comparison Between Two Generic 3d Building Reconstruction Approaches - Point Cloud Based VS. Image Processing Based

    NASA Astrophysics Data System (ADS)

    Dahlke, D.; Linkiewicz, M.

    2016-06-01

    This paper compares two generic approaches for the reconstruction of buildings. Synthesized and real oblique and vertical aerial imagery is transformed on the one hand into a dense photogrammetric 3D point cloud and on the other hand into photogrammetric 2.5D surface models depicting a scene from different cardinal directions. One approach evaluates the 3D point cloud statistically in order to extract the hull of structures, while the other approach makes use of salient line segments in the 2.5D surface models, so that the hull of 3D structures can be recovered. With orders of magnitude more analyzed 3D points, the point cloud based approach is an order of magnitude more accurate on the synthetic dataset than the lower-dimensional, but therefore orders of magnitude faster, image processing based approach. For real-world data the difference in accuracy between the two approaches is no longer significant. In both cases the reconstructed polyhedra supply information about their inherent semantics and can be used for subsequent and more differentiated semantic annotations through exploitation of texture information.

  9. Feature relevance assessment for the semantic interpretation of 3D point cloud data

    NASA Astrophysics Data System (ADS)

    Weinmann, M.; Jutzi, B.; Mallet, C.

    2013-10-01

    The automatic analysis of large 3D point clouds represents a crucial task in photogrammetry, remote sensing and computer vision. In this paper, we propose a new methodology for the semantic interpretation of such point clouds which involves feature relevance assessment in order to reduce both processing time and memory consumption. Given a standard benchmark dataset with 1.3 million 3D points, we first extract a set of 21 geometric 3D and 2D features. Subsequently, we apply a classifier-independent ranking procedure which involves a general relevance metric in order to derive compact and robust subsets of versatile features which are generally applicable for a large variety of subsequent tasks. This metric is based on 7 different feature selection strategies and thus addresses different intrinsic properties of the given data. For the example of semantically interpreting 3D point cloud data, we demonstrate the great potential of smaller subsets consisting of only the most relevant features with 4 different state-of-the-art classifiers. The results reveal that, instead of including as many features as possible in order to compensate for lack of knowledge, a crucial task such as scene interpretation can be carried out with only few versatile features and even improved accuracy.

  10. Accuracy evaluation of segmentation for high resolution imagery and 3D laser point cloud data

    NASA Astrophysics Data System (ADS)

    Ni, Nina; Chen, Ninghua; Chen, Jianyu

    2014-09-01

    High resolution satellite imagery and 3D laser point cloud data provide precise geometry, rich spectral information and clear texture of features. The segmentation of high resolution remote sensing images and 3D laser point clouds is the basis of object-oriented remote sensing image analysis, since the segmentation results directly influence the accuracy of the subsequent analysis and discrimination. Currently, there is still no common segmentation theory to support these algorithms. So when we face a specific problem, we should determine the applicability of the segmentation method through a segmentation accuracy assessment, and then determine an optimal segmentation. To date, the most common methods for evaluating the effectiveness of a segmentation method are subjective evaluation and supervised evaluation. To provide a more objective evaluation result, we carried out the following work. We analysed and compared previously proposed image segmentation accuracy evaluation methods, namely area-based metrics, location-based metrics and combined metrics. 3D point cloud data, gathered by a Riegl VZ-1000, was used to perform a two-dimensional transformation of the point cloud data. The object-oriented segmentation results of aquaculture farm, building and farmland polygons were used as test objects and adopted to evaluate segmentation accuracy.

  11. 3DVEM Software Modules for Efficient Management of Point Clouds and Photorealistic 3d Models

    NASA Astrophysics Data System (ADS)

    Fabado, S.; Seguí, A. E.; Cabrelles, M.; Navarro, S.; García-De-San-Miguel, D.; Lerma, J. L.

    2013-07-01

    Cultural heritage managers in general, and information users in particular, are not usually used to dealing with high-tech hardware and software. On the contrary, the information providers of metric surveys are most of the time applying the latest developments in real-life conservation and restoration projects. This paper addresses the software issue of handling and managing either 3D point clouds or (photorealistic) 3D models, to bridge the gap between information users and information providers as regards the management of information which users and providers share as a tool for decision-making, analysis, visualization and management. There are not many viewers specifically designed to easily handle, manage and create animations of architectural and/or archaeological 3D objects, monuments and sites, among others. The 3DVEM - 3D Viewer, Editor & Meter software will be introduced to the scientific community, as well as 3DVEM - Live and 3DVEM - Register. The advantages of managing projects with both sets of data, 3D point clouds and photorealistic 3D models, will be introduced. Different visualizations of real documentation projects in the fields of architecture, archaeology and industry will be presented. Emphasis will be placed on highlighting the features of the new user-friendly software for managing virtual projects. Furthermore, the ease of creating controlled interactive animations (both walk-through and fly-through) by the user, either on-the-fly or as a traditional movie file, will be demonstrated with 3DVEM - Live.

  12. 3D change detection at street level using mobile laser scanning point clouds and terrestrial images

    NASA Astrophysics Data System (ADS)

    Qin, Rongjun; Gruen, Armin

    2014-04-01

    Automatic change detection and geo-database updating in the urban environment are difficult tasks. There has been much research on detecting changes with satellite and aerial images, but studies have rarely been performed at street level, which is complex in its 3D geometry. Contemporary geo-databases include 3D street-level objects, which demand frequent data updating. Terrestrial images provide rich texture information for change detection, but change detection with terrestrial images from different epochs sometimes faces problems with illumination changes, perspective distortions and unreliable 3D geometry caused by the limited performance of automatic image matchers, while mobile laser scanning (MLS) data acquired from different epochs provide accurate 3D geometry for change detection, but are very expensive for periodic acquisition. This paper proposes a new method for change detection at street level using a combination of MLS point clouds and terrestrial images: the accurate but expensive MLS data acquired at an early epoch serve as the reference, and terrestrial images or photogrammetric images captured by an image-based mobile mapping system (MMS) at a later epoch are used to detect the geometrical changes between epochs. The method automatically marks the possible changes in each view, which provides a cost-efficient approach to frequent data updating. The methodology is divided into several steps. In the first step, the point clouds are recorded by the MLS system and processed, with the data cleaned and classified by semi-automatic means. In the second step, terrestrial images or mobile mapping images at a later epoch are taken and registered to the point cloud, and the point clouds are then projected onto each image by a weighted-window-based z-buffering method for view-dependent 2D triangulation. In the next step, stereo pairs of the terrestrial images are rectified and re-projected between each other to check the geometrical

  13. Octree-Based SIMD Strategy for Icp Registration and Alignment of 3d Point Clouds

    NASA Astrophysics Data System (ADS)

    Eggert, D.; Dalyot, S.

    2012-07-01

    Matching and fusion of 3D point clouds, such as close-range laser scans, is important for creating an integrated 3D model data infrastructure. The Iterative Closest Point (ICP) algorithm for the alignment of point clouds is one of the most commonly used algorithms for matching rigid bodies. Evidently, scans are acquired from different positions and might present different data characterization and accuracies, forcing complex data-handling issues. The growing demand for near real-time applications also introduces new computational requirements and constraints into such processes. This research proposes a methodology for solving the computational and processing complexities of the ICP algorithm by introducing specific performance enhancements that enable more efficient analysis and processing. An octree data structure, together with the caching of localized Delaunay-triangulation-based surface meshes, is implemented to increase computation efficiency and data handling. Parallelization of the ICP process is carried out using the Single Instruction, Multiple Data (SIMD) processing scheme - based on the divide-and-conquer multi-branched paradigm - enabling the same operation to be performed by multiple processing elements on multiple data independently and simultaneously. When compared to traditional non-parallel list processing, the octree-based SIMD strategy showed a sharp increase in computation performance and efficiency, together with a reliable and accurate alignment of large 3D point clouds, contributing to a qualitative and efficient application.

  14. Street curb recognition in 3d point cloud data using morphological operations

    NASA Astrophysics Data System (ADS)

    Rodríguez-Cuenca, Borja; Concepción Alonso-Rodríguez, María; García-Cortés, Silverio; Ordóñez, Celestino

    2015-04-01

    Accurate and automatic detection of cartographic entities saves a great deal of time and money when creating and updating cartographic databases. The current trend in remote sensing feature extraction is to develop methods that are as automatic as possible. The aim is to develop algorithms that can obtain accurate results with the least possible human intervention in the process. Non-manual curb detection is an important issue in the road maintenance, 3D urban modeling, and autonomous navigation fields. This paper focuses on the semi-automatic recognition of curbs and street boundaries using a 3D point cloud registered by a mobile laser scanner (MLS) system. The work is divided into four steps. First, a coordinate system transformation is carried out, moving from a global coordinate system to a local one. After that, and in order to simplify the calculations involved in the procedure, a rasterization based on the projection of the measured point cloud onto the XY plane is carried out, passing from the original 3D data to a 2D image. To determine the location of curbs in the image, different image processing techniques such as thresholding and morphological operations are applied. Finally, the upper and lower edges of the curbs are detected by an unsupervised classification algorithm applied to the curvature and roughness of the points that represent curbs. The proposed method is valid in both straight and curved road sections and is applicable to both laser scanner and stereo vision 3D data thanks to its independence of the scanning geometry. The method has been successfully tested with two datasets measured by different sensors. The first dataset corresponds to a point cloud measured by a TOPCON sensor in the Spanish town of Cudillero. That point cloud comprises more than 6,000,000 points and covers a 400-meter street. The second dataset corresponds to a point cloud measured by a RIEGL sensor in the Austrian town of Horn. That point cloud comprises 8,000,000 points and represents a
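
    A condensed Python sketch of the rasterize-then-morphology idea for curb candidates: project the cloud onto an XY grid of minimum heights, mark cells with curb-sized height jumps, and clean the mask with morphological opening and closing. The cell size and the height-jump range are illustrative, not the paper's calibrated values.

      import numpy as np
      from scipy import ndimage

      def curb_candidate_mask(points, cell=0.2, jump=(0.05, 0.30)):
          xy_min = points[:, :2].min(0)
          cols, rows = np.floor((points[:, :2] - xy_min) / cell).astype(int).T
          height = np.full((rows.max() + 1, cols.max() + 1), np.inf)
          np.minimum.at(height, (rows, cols), points[:, 2])        # lowest return per cell
          height[np.isinf(height)] = np.nan                        # empty cells
          gx, gy = np.gradient(height)
          step = np.hypot(gx, gy)                                  # local height difference
          mask = (step > jump[0]) & (step < jump[1])               # curb-sized steps only
          mask = ndimage.binary_closing(ndimage.binary_opening(mask))  # remove speckle, bridge gaps
          return mask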

  15. Evaluation of Partially Overlapping 3D Point Cloud's Registration by using ICP variant and CloudCompare.

    NASA Astrophysics Data System (ADS)

    Rajendra, Y. D.; Mehrotra, S. C.; Kale, K. V.; Manza, R. R.; Dhumal, R. K.; Nagne, A. D.; Vibhute, A. D.

    2014-11-01

    Terrestrial Laser Scanners (TLS) are used to obtain dense point samples of a large object's surface. TLS is a new and efficient method to digitize a large object or scene. The collected point samples come in different formats and coordinate systems. Different scans are required to cover a large object such as a heritage site. Point cloud registration is considered an important task to bring the different scans into a whole 3D model in one coordinate system. Point clouds can be registered by using one of three approaches or a combination of them: target based, feature extraction based, and point cloud based. For the present study we used the point cloud based registration approach. We collected partially overlapping 3D point cloud data of the Department of Computer Science & IT (DCSIT) building located in Dr. Babasaheb Ambedkar Marathwada University, Aurangabad. To get the complete point cloud information of the building we took 12 scans: 4 scans for the exterior and 8 scans for the interior façade data collection. There are various algorithms available in the literature, but Iterative Closest Point (ICP) is the most dominant one. Various researchers have developed variants of ICP for a better registration process. The ICP point cloud registration algorithm is based on the search for pairs of nearest points in two adjacent scans and calculates the transformation parameters between them; it provides the advantage that no artificial target is required for the registration process. We studied and implemented three variants of the ICP algorithm (Brute Force, KDTree, Partial Matching) in MATLAB. The results show that the implemented versions of the ICP algorithm and its variants give better results in terms of speed and accuracy of registration compared with the CloudCompare open source software.
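
    A minimal point-to-point ICP iteration using a KD-tree for the nearest-neighbour search, written as a Python sketch rather than the authors' MATLAB code; no outlier rejection, sub-sampling or convergence tests are included.

      import numpy as np
      from scipy.spatial import cKDTree

      def icp(source, target, iterations=30):
          # Iteratively align `source` to `target`; returns the moved source cloud, R and t.
          src = source.copy()
          tree = cKDTree(target)
          R_total, t_total = np.eye(3), np.zeros(3)
          for _ in range(iterations):
              _, idx = tree.query(src)                   # closest target point per source point
              matched = target[idx]
              mu_s, mu_t = src.mean(0), matched.mean(0)
              U, _, Vt = np.linalg.svd((matched - mu_t).T @ (src - mu_s))
              if np.linalg.det(U @ Vt) < 0:              # keep a proper rotation
                  U[:, -1] *= -1
              R = U @ Vt
              t = mu_t - R @ mu_s
              src = src @ R.T + t
              R_total, t_total = R @ R_total, R @ t_total + t
          return src, R_total, t_total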

  16. Sloped terrain segmentation for autonomous drive using sparse 3D point cloud.

    PubMed

    Cho, Seoungjae; Kim, Jonghyun; Ikram, Warda; Cho, Kyungeun; Jeong, Young-Sik; Um, Kyhyun; Sim, Sungdae

    2014-01-01

    A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous drive. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D) point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR) sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels) and overlapping data is eliminated. We reduce nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We execute ground segmentation in real time by proposing an approach to minimize the comparison between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed at about 19.31 ms per frame. PMID:25093204
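
    A toy Python sketch of the voxel and lowermost-heightmap idea described above: points are quantized into voxels, each vertical column is collapsed to its lowest voxel, and points close to that local minimum are labelled as ground. The voxel size and height tolerance are illustrative parameters, not those used in the paper.

      import numpy as np

      def segment_ground(points, voxel=0.2, tolerance=0.25):
          keys = np.floor(points / voxel).astype(int)
          lowest_per_column = {}                          # lowermost height per (x, y) column
          for cx, cy, cz in keys:
              z = cz * voxel
              if (cx, cy) not in lowest_per_column or z < lowest_per_column[(cx, cy)]:
                  lowest_per_column[(cx, cy)] = z
          lowest = np.array([lowest_per_column[(cx, cy)] for cx, cy, _ in keys])
          return points[:, 2] - lowest <= tolerance       # boolean ground mask per point

      rng = np.random.default_rng(0)
      ground = np.c_[rng.uniform(0, 10, (3000, 2)), rng.normal(0, 0.03, 3000)]
      obstacle = np.c_[rng.uniform(4, 5, (300, 2)), rng.uniform(0.5, 1.5, 300)]
      mask = segment_ground(np.vstack([ground, obstacle]))
      print("ground points:", int(mask.sum()), "of", len(mask))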

  17. Sloped Terrain Segmentation for Autonomous Drive Using Sparse 3D Point Cloud

    PubMed Central

    Cho, Seoungjae; Kim, Jonghyun; Ikram, Warda; Cho, Kyungeun; Sim, Sungdae

    2014-01-01

    A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous drive. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D) point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR) sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels) and overlapping data is eliminated. We reduce nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We execute ground segmentation in real time by proposing an approach to minimize the comparison between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed at about 19.31 ms per frame. PMID:25093204

  18. Towards 3D lidar point cloud registration improvement using optimal neighborhood knowledge

    NASA Astrophysics Data System (ADS)

    Gressin, Adrien; Mallet, Clément; Demantké, Jérôme; David, Nicolas

    2013-05-01

    Automatic 3D point cloud registration is a main issue in computer vision and remote sensing. One of the most commonly adopted solutions is the well-known Iterative Closest Point (ICP) algorithm. This standard approach performs a fine registration of two overlapping point clouds by iteratively estimating the transformation parameters, assuming a good a priori alignment is provided. A large body of literature has proposed many variations in order to improve each step of the process (namely selecting, matching, rejecting, weighting and minimizing). The aim of this paper is to demonstrate how knowledge of the shape that best fits the local geometry of each 3D point neighborhood can improve the speed and the accuracy of each of these steps. First we present the geometrical features that form the basis of this work. These low-level attributes describe the neighborhood shape around each 3D point. They allow the optimal neighborhood size for analysis at various scales to be retrieved, as well as the privileged local dimension (linear, planar, or volumetric). Several variations of each step of the ICP process are then proposed and analyzed by introducing these features. These variants are compared on real datasets with the original algorithm in order to identify the most efficient algorithm for the whole process. The method is then successfully applied to various 3D lidar point clouds from airborne, terrestrial, and mobile mapping systems. Improvement for two ICP steps has been noted, and we conclude that our features may not be relevant for very dissimilar object samplings.

  19. Laser point cloud diluting and refined 3D reconstruction fusing with digital images

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Zhang, Jianqing

    2007-06-01

    This paper presents a method that combines the image-based modeling technique and laser scanning data to rebuild a realistic 3D model. First, the image pair is used to build a relative 3D model of the object, and the relative model is then registered to the laser coordinate system. The laser points are projected onto one of the images and feature lines are extracted from that image. The projected 2D laser points are then fitted to lines in the image, and their corresponding 3D points are constrained to lines in 3D laser space so that the features of the model are preserved. A TIN is built and redundant points, which do not affect the curvature of their neighborhood areas, are removed. The thinned laser point cloud is used to reconstruct the geometric model of the object, onto which the texture of the corresponding image is projected. Experimental results show the process to be feasible and progressive. The final model is very similar to the real object. The method reduces the quantity of data while preserving the features of the model, and its effect is evident.

  20. Fast Semantic Segmentation of 3d Point Clouds with Strongly Varying Density

    NASA Astrophysics Data System (ADS)

    Hackel, Timo; Wegner, Jan D.; Schindler, Konrad

    2016-06-01

    We describe an effective and efficient method for point-wise semantic classification of 3D point clouds. The method can handle unstructured and inhomogeneous point clouds such as those derived from static terrestrial LiDAR or photogrammetric reconstruction; and it is computationally efficient, making it possible to process point clouds with many millions of points in a matter of minutes. The key issue, both to cope with strong variations in point density and to bring down computation time, turns out to be careful handling of neighborhood relations. By choosing appropriate definitions of a point's (multi-scale) neighborhood, we obtain a feature set that is both expressive and fast to compute. We evaluate our classification method both on benchmark data from a mobile mapping platform and on a variety of large, terrestrial laser scans with greatly varying point density. The proposed feature set outperforms the state of the art with respect to per-point classification accuracy, while at the same time being much faster to compute.

  1. Evaluation Model for Pavement Surface Distress on 3d Point Clouds from Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Aoki, K.; Yamamoto, K.; Shimamura, H.

    2012-07-01

    This paper proposes a methodology for evaluating pavement surface distress for road pavement maintenance planning using 3D point clouds from a Mobile Mapping System (MMS). Maintenance planning of road pavement requires scheduled rehabilitation activities for damaged pavement sections in order to keep a high level of service. The importance of basing this performance-based infrastructure asset management on actual inspection data is globally recognized. Inspection methodologies for the road pavement surface, such as semi-automatic measurement systems that use inspection vehicles to measure surface deterioration indexes like cracking, rutting and IRI, have already been introduced and are capable of continuously archiving pavement performance data. However, scheduled inspections using automatic measurement vehicles are costly, depending on the instruments' specifications and the inspection interval. Therefore, implementing road maintenance work, especially for local governments, is difficult in terms of cost-effectiveness. Against this background, this research proposes methodologies for a simplified evaluation of the pavement surface and an assessment of damaged pavement sections using the 3D point cloud data collected to build urban 3D models. The simplified evaluation results for the road surface were able to provide useful information for road administrators to identify pavement sections requiring a detailed examination or immediate repair work. In particular, the regularity of the sequence of 3D points was evaluated using Chow-test and F-test models, extracting the sections where a remarkable structural change in a coordinate value occurred. Finally, the validity of the current methodology was investigated by conducting a case study dealing with actual inspection data from local roads.
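
    The structural-change test mentioned above can be sketched as a standard Chow test: a regression is fitted to the whole sequence of coordinate values and to the two parts around a candidate break point, and the F statistic compares the pooled and split residuals. The linear-trend model and the split location are illustrative assumptions, not the authors' exact formulation.

    ```python
    import numpy as np
    from scipy import stats

    def _ssr(x, y):
        """Sum of squared residuals of an ordinary least-squares line fit."""
        A = np.column_stack([x, np.ones_like(x)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return float(((y - A @ coef) ** 2).sum())

    def chow_test(x, y, split, k=2):
        """Chow F statistic and p-value for a structural break at index `split` (k parameters per fit)."""
        s_pooled = _ssr(x, y)
        s_split = _ssr(x[:split], y[:split]) + _ssr(x[split:], y[split:])
        n = len(x)
        F = ((s_pooled - s_split) / k) / (s_split / (n - 2 * k))
        return F, 1.0 - stats.f.cdf(F, k, n - 2 * k)

    # usage: a small p-value flags a section whose surface profile changed structurally
    # x = np.arange(len(profile)); F, p = chow_test(x, profile, split=len(profile) // 2)
    ```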

  2. Automated extraction and analysis of rock discontinuity characteristics from 3D point clouds

    NASA Astrophysics Data System (ADS)

    Bianchetti, Matteo; Villa, Alberto; Agliardi, Federico; Crosta, Giovanni B.

    2016-04-01

    A reliable characterization of fractured rock masses requires an exhaustive geometrical description of discontinuities, including orientation, spacing, and size. These are required to describe discontinuum rock mass structure, perform Discrete Fracture Network and DEM modelling, or provide input for rock mass classification or equivalent continuum estimates of rock mass properties. Although several advanced methodologies have been developed in recent decades, a complete characterization of discontinuity geometry in practice is still challenging, due to the scale-dependent variability of fracture patterns and the difficult accessibility of large outcrops. Recent advances in remote survey techniques, such as terrestrial laser scanning and digital photogrammetry, allow a fast and accurate acquisition of dense 3D point clouds, which has promoted the development of several semi-automatic approaches to extract discontinuity features. Nevertheless, these often need user supervision of algorithm parameters, which can be difficult to assess. To overcome this problem, we developed an original Matlab tool allowing fast, fully automatic extraction and analysis of discontinuity features with no requirements on point cloud accuracy, density and homogeneity. The tool consists of a set of algorithms which: (i) process raw 3D point clouds, (ii) automatically characterize discontinuity sets, (iii) identify individual discontinuity surfaces, and (iv) analyse their spacing and persistence. The tool operates in either a supervised or unsupervised mode, starting from an automatic preliminary exploratory data analysis. The identification and geometrical characterization of discontinuity features are divided into steps. First, coplanar surfaces are identified in the whole point cloud using K-Nearest Neighbor and Principal Component Analysis algorithms optimized for the point cloud accuracy and a specified typical facet size. Then, discontinuity set orientation is calculated using Kernel Density Estimation and

  3. Parameter Estimation of Fossil Oysters from High Resolution 3D Point Cloud and Image Data

    NASA Astrophysics Data System (ADS)

    Djuricic, Ana; Harzhauser, Mathias; Dorninger, Peter; Nothegger, Clemens; Mandic, Oleg; Székely, Balázs; Molnár, Gábor; Pfeifer, Norbert

    2014-05-01

    A unique fossil oyster reef was excavated at Stetten in Lower Austria, which is also the highlight of the geo-edutainment park 'Fossilienwelt Weinviertel'. It provides the rare opportunity to study the Early Miocene flora and fauna of the Central Paratethys Sea. The site presents the world's largest fossil oyster biostrome, formed about 16.5 million years ago in a tropical estuary of the Korneuburg Basin. About 15,000 up to 80-cm-long shells of Crassostrea gryphoides cover an area of 400 m2. Our project 'Smart-Geology for the World's largest fossil oyster reef' combines methods of photogrammetry, geology and paleontology to document, evaluate and quantify the shell bed. This interdisciplinary approach will be applied to test hypotheses on the genesis of the taphocenosis (e.g. tsunami versus major storm) and to reconstruct pre- and post-event processes. Hence, we are focusing on using visualization technologies from photogrammetry in geology and paleontology in order to develop new methods for the automatic and objective evaluation of 3D point clouds. These will be studied on the basis of a very dense surface reconstruction of the oyster reef. 'Smart Geology', as an extension of the classic discipline, exploits massive data, automatic interpretation, and visualization. Photogrammetry provides the tools for surface acquisition and objective, automated interpretation. We also want to stress the economic aspect of using automatic shape detection in paleontology, which saves manpower and increases efficiency during the monitoring and evaluation process. Currently, there are many well-known algorithms for 3D shape detection of certain objects. We are using dense 3D laser scanning data from an instrument utilizing the phase shift measuring principle, which provides an accurate geometrical basis (< 3 mm). However, the situation is difficult in this multiple-object scenario, where more than 15,000 complete or fragmentary parts of an object with random orientation are found. The goal

  4. Deriving 3d Point Clouds from Terrestrial Photographs - Comparison of Different Sensors and Software

    NASA Astrophysics Data System (ADS)

    Niederheiser, Robert; Mokroš, Martin; Lange, Julia; Petschko, Helene; Prasicek, Günther; Oude Elberink, Sander

    2016-06-01

    Terrestrial photogrammetry nowadays offers a reasonably cheap, intuitive and effective approach to 3D modelling. However, the important choice of which sensor and which software to use is not straightforward and needs consideration, as it will affect the resulting 3D point cloud and its derivatives. We compare five different sensors as well as four different state-of-the-art software packages for a single application, the modelling of a vegetated rock face. The five sensors represent different resolutions, sensor sizes and price segments of the cameras. The software packages used are: (1) Agisoft PhotoScan Pro (1.16), (2) Pix4D (2.0.89), (3) a combination of Visual SFM (V0.5.22) and SURE (1.2.0.286), and (4) MicMac (1.0). We took photos of a vegetated rock face from identical positions with all sensors. Then we compared the results of the different software packages regarding the ease of the workflow, visual appeal, similarity and quality of the point cloud. While PhotoScan and Pix4D offer the most user-friendly workflows, they are also "black-box" programmes giving little insight into their processing. Unsatisfactory results may only be changed by modifying settings within a module. The combined workflow of Visual SFM, SURE and CloudCompare is just as simple but requires more user interaction. MicMac turned out to be the most challenging software as it is less user-friendly. However, MicMac offers the most possibilities to influence the processing workflow. The resulting point clouds of PhotoScan and MicMac are the most appealing.

  5. PointCloudXplore 2: Visual exploration of 3D gene expression

    SciTech Connect

    International Research Training Group Visualization of Large and Unstructured Data Sets, University of Kaiserslautern, Germany; Institute for Data Analysis and Visualization, University of California, Davis, CA; Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA; Genomics Division, LBNL; Computer Science Department, University of California, Irvine, CA; Computer Science Division, University of California, Berkeley, CA; Life Sciences Division, LBNL; Department of Molecular and Cellular Biology and the Center for Integrative Genomics, University of California, Berkeley, CA; Ruebel, Oliver; Rubel, Oliver; Weber, Gunther H.; Huang, Min-Yu; Bethel, E. Wes; Keranen, Soile V.E.; Fowlkes, Charless C.; Hendriks, Cris L. Luengo; DePace, Angela H.; Simirenko, L.; Eisen, Michael B.; Biggin, Mark D.; Hagen, Hans; Malik, Jitendra; Knowles, David W.; Hamann, Bernd

    2008-03-31

    To better understand how developmental regulatory networks are defined in the genome sequence, the Berkeley Drosophila Transcription Network Project (BDTNP) has developed a suite of methods to describe 3D gene expression data, i.e., the output of the network at cellular resolution for multiple time points. To allow researchers to explore these novel data sets we have developed PointCloudXplore (PCX). In PCX we have linked physical and information visualization views via the concept of brushing (cell selection). For each view dedicated operations for performing selection of cells are available. In PCX, all cell selections are stored in a central management system. Cells selected in one view can in this way be highlighted in any view, allowing further cell subset properties to be determined. Complex cell queries can be defined by combining different cell selections using logical operations such as AND, OR, and NOT. Here we provide an overview of PointCloudXplore 2 (PCX2), the latest publicly available version of PCX. PCX2 has been shown to be an effective tool for visual exploration of 3D gene expression data. We discuss (i) all views available in PCX2, (ii) different strategies to perform cell selection, (iii) the basic architecture of PCX2, and (iv) illustrate the usefulness of PCX2 using selected examples.
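
    The brush-combination idea (AND, OR, NOT over cell selections) reduces to boolean masks over the per-cell data. A toy sketch, with invented gene names and thresholds purely for illustration:

    ```python
    import numpy as np

    # hypothetical per-cell expression levels for two genes (one value per cell)
    rng = np.random.default_rng(0)
    eve, ftz = rng.random(6078), rng.random(6078)

    # each "brush" is simply a boolean mask over the cells
    brush_a = eve > 0.7              # cells with high eve expression
    brush_b = ftz > 0.5              # cells with moderate-to-high ftz expression

    # complex queries are logical combinations of brushes
    both = brush_a & brush_b
    a_only = brush_a & ~brush_b
    either = brush_a | brush_b

    print(both.sum(), "cells selected by the combined AND query")
    ```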

  6. Grammar-Supported 3d Indoor Reconstruction from Point Clouds for As-Built Bim

    NASA Astrophysics Data System (ADS)

    Becker, S.; Peter, M.; Fritsch, D.

    2015-03-01

    The paper presents a grammar-based approach for the robust automatic reconstruction of 3D interiors from raw point clouds. The core of the approach is a 3D indoor grammar which is an extension of our previously published grammar concept for the modeling of 2D floor plans. The grammar allows for the modeling of buildings whose horizontal, continuous floors are traversed by hallways providing access to the rooms, as is the case for most office buildings or public buildings like schools, hospitals or hotels. The grammar is designed in such a way that it can be embedded in an iterative automatic learning process, providing a seamless transition from LOD3 to LOD4 building models. Starting from an initial low-level grammar, automatically derived from the window representations of an available LOD3 building model, hypotheses about indoor geometries can be generated. The hypothesized indoor geometries are checked against observation data - here 3D point clouds - collected in the interior of the building. The verified and accepted geometries form the basis for an automatic update of the initial grammar. In this way, the knowledge content of the initial grammar is enriched, leading to a grammar with increased quality. This higher-level grammar can then be applied to predict realistic geometries for building parts where only sparse observation data are available. Thus, our approach allows for the robust generation of complete 3D indoor models whose quality can be improved continuously as soon as new observation data are fed into the grammar-based reconstruction process. The feasibility of our approach is demonstrated based on a real-world example.

  7. 3D modeling of building indoor spaces and closed doors from imagery and point clouds.

    PubMed

    Díaz-Vilariño, Lucía; Khoshelham, Kourosh; Martínez-Sánchez, Joaquín; Arias, Pedro

    2015-01-01

    3D models of indoor environments are increasingly gaining importance due to the wide range of applications to which they can be subjected: from redesign and visualization to monitoring and simulation. These models usually exist only for newly constructed buildings; therefore, the development of automatic approaches for reconstructing 3D indoors from imagery and/or point clouds can make the process easier, faster and cheaper. Among the constructive elements defining a building interior, doors are very common elements and their detection can be very useful either for knowing the environment structure, to perform an efficient navigation or to plan appropriate evacuation routes. The fact that doors are topologically connected to walls by being coplanar, together with the unavoidable presence of clutter and occlusions indoors, increases the inherent complexity of the automation of the recognition process. In this work, we present a pipeline of techniques used for the reconstruction and interpretation of building interiors based on point clouds and images. The methodology analyses the visibility problem of indoor environments and goes in depth with door candidate detection. The presented approach is tested in real data sets showing its potential with a high door detection rate and applicability for robust and efficient envelope reconstruction. PMID:25654723

  8. 3D Modeling of Building Indoor Spaces and Closed Doors from Imagery and Point Clouds

    PubMed Central

    Díaz-Vilariño, Lucía; Khoshelham, Kourosh; Martínez-Sánchez, Joaquín; Arias, Pedro

    2015-01-01

    3D models of indoor environments are increasingly gaining importance due to the wide range of applications to which they can be subjected: from redesign and visualization to monitoring and simulation. These models usually exist only for newly constructed buildings; therefore, the development of automatic approaches for reconstructing 3D indoors from imagery and/or point clouds can make the process easier, faster and cheaper. Among the constructive elements defining a building interior, doors are very common elements and their detection can be very useful either for knowing the environment structure, to perform an efficient navigation or to plan appropriate evacuation routes. The fact that doors are topologically connected to walls by being coplanar, together with the unavoidable presence of clutter and occlusions indoors, increases the inherent complexity of the automation of the recognition process. In this work, we present a pipeline of techniques used for the reconstruction and interpretation of building interiors based on point clouds and images. The methodology analyses the visibility problem of indoor environments and goes in depth with door candidate detection. The presented approach is tested in real data sets showing its potential with a high door detection rate and applicability for robust and efficient envelope reconstruction. PMID:25654723

  9. A method of 3D object recognition and localization in a cloud of points

    NASA Astrophysics Data System (ADS)

    Bielicki, Jerzy; Sitnik, Robert

    2013-12-01

    The method proposed in this article is designed for the analysis of data in the form of a cloud of points taken directly from 3D measurements. It is intended for end-user applications that can be integrated directly with 3D scanning software. The method utilizes locally calculated feature vectors (FVs) in point cloud data. Recognition is based on comparison of the analyzed scene with a reference object library. A global descriptor in the form of a set of spatially distributed FVs is created for each reference model. During the detection process, the correlation of subsets of reference FVs with FVs calculated in the scene is computed. Features utilized in the algorithm are based on parameters that qualitatively estimate mean and Gaussian curvatures. Replacing differentiation with averaging in the curvature estimation makes the algorithm more resistant to discontinuities and poor quality of the input data. Use of the FV subsets allows partially occluded and cluttered objects in the scene to be detected, while additional spatial information keeps the false positive rate at a reasonably low level.

  10. Indoor Navigation from Point Clouds: 3d Modelling and Obstacle Detection

    NASA Astrophysics Data System (ADS)

    Díaz-Vilariño, L.; Boguslawski, P.; Khoshelham, K.; Lorenzo, H.; Mahdjoubi, L.

    2016-06-01

    In recent years, indoor modelling and navigation has become a research topic of interest because many stakeholders require navigation assistance in various application scenarios. Navigational assistance for blind or wheelchair users, building crisis management such as fire protection, augmented reality for gaming or tourism, and the training of emergency assistance units are just some of the direct applications of indoor modelling and navigation. Navigational information is traditionally extracted from 2D drawings or layouts. The real state of the indoor environment, including the position and geometry of openings such as windows and doors and the presence of obstacles, is commonly ignored. In this work, a real indoor path-planning methodology based on 3D point clouds is developed. The value and originality of the approach consist in considering point clouds not only for reconstructing semantically rich 3D indoor models, but also for detecting potential obstacles during route planning and using these to readapt the routes according to the real state of the indoor environment depicted by the laser scanner.
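
    The route-readaptation idea can be sketched with a small grid planner: obstacles detected in the point cloud are marked in an occupancy grid and the route is re-planned around them. Breadth-first search is used here only as a stand-in for whatever planner the authors actually employ, and the grid is invented for illustration.

    ```python
    from collections import deque

    def plan_route(grid, start, goal):
        """Breadth-first search on a 2D occupancy grid (True = obstacle); returns a list of cells or None."""
        rows, cols = len(grid), len(grid[0])
        queue, came_from = deque([start]), {start: None}
        while queue:
            cell = queue.popleft()
            if cell == goal:
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = came_from[cell]
                return path[::-1]
            r, c = cell
            for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                nr, nc = nxt
                if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc] and nxt not in came_from:
                    came_from[nxt] = cell
                    queue.append(nxt)
        return None   # no obstacle-free route exists

    grid = [[False] * 5 for _ in range(5)]
    grid[2][1] = grid[2][2] = grid[2][3] = True   # cells occupied by a detected obstacle
    print(plan_route(grid, (0, 0), (4, 4)))
    ```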

  11. Biview learning for human posture segmentation from 3D points cloud.

    PubMed

    Qiao, Maoying; Cheng, Jun; Bian, Wei; Tao, Dacheng

    2014-01-01

    Posture segmentation plays an essential role in human motion analysis. The state-of-the-art method extracts sufficiently high-dimensional features from 3D depth images for each 3D point and learns an efficient body part classifier. However, high-dimensional features are memory-consuming and difficult to handle on large-scale training datasets. In this paper, we propose an efficient two-stage dimension reduction scheme, termed biview learning, to encode two independent views which are depth-difference features (DDF) and relative position features (RPF). Biview learning explores the complementary property of DDF and RPF, and uses two stages to learn a compact yet comprehensive low-dimensional feature space for posture segmentation. In the first stage, discriminative locality alignment (DLA) is applied to the high-dimensional DDF to learn a discriminative low-dimensional representation. In the second stage, canonical correlation analysis (CCA) is used to explore the complementary property of RPF and the dimensionality-reduced DDF. Finally, we train a support vector machine (SVM) over the output of CCA. We carefully validate the effectiveness of DLA and CCA utilized in the two-stage scheme on our 3D human point cloud dataset. Experimental results show that the proposed biview learning scheme significantly outperforms the state-of-the-art method for human posture segmentation. PMID:24465721
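
    The second stage of the scheme (fusing the two views with CCA and training an SVM on the result) can be sketched with scikit-learn. The synthetic arrays below merely stand in for the DDF and RPF views, and the number of CCA components is an arbitrary choice; the DLA stage is not reproduced here.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import CCA
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    n = 500
    labels = rng.integers(0, 4, n)                              # hypothetical body-part labels
    ddf = rng.normal(size=(n, 40)) + labels[:, None] * 0.3      # stands in for (reduced) depth-difference features
    rpf = rng.normal(size=(n, 10)) + labels[:, None] * 0.3      # stands in for relative position features

    # CCA projects both views into a shared low-dimensional space
    cca = CCA(n_components=8)
    ddf_c, rpf_c = cca.fit_transform(ddf, rpf)
    fused = np.hstack([ddf_c, rpf_c])

    clf = SVC(kernel="rbf").fit(fused, labels)                  # per-point body-part classifier
    print("training accuracy:", clf.score(fused, labels))
    ```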

  12. PointCloudXplore: a visualization tool for 3D gene expression data

    SciTech Connect

    Rubel, Oliver; Weber, Gunther H.; Keranen, Soile V.E.; Fowlkes, Charless C.; Luengo Hendriks, Cristian L.; Simirenko, Lisa; Shah, Nameeta Y.; Eisen, Michael B.; Biggin, Mark D.; Hagen, Hans; Sudar, Damir J.; Malik, Jitendra; Knowles, David W.; Hamann, Bernd

    2006-10-01

    The Berkeley Drosophila Transcription Network Project (BDTNP) has developed a suite of methods that support quantitative, computational analysis of three-dimensional (3D) gene expression patterns with cellular resolution in early Drosophila embryos, aiming at a more in-depth understanding of gene regulatory networks. We describe a new tool, called PointCloudXplore (PCX), that supports effective 3D gene expression data exploration. PCX is a visualization tool that uses the established visualization techniques of multiple views, brushing, and linking to support the analysis of high-dimensional datasets that describe many genes' expression. Each of the views in PointCloudXplore shows a different gene expression data property. Brushing is used to select and emphasize data associated with defined subsets of embryo cells within a view. Linking is used to show in additional views the expression data for a group of cells that have first been highlighted as a brush in a single view, allowing further data subset properties to be determined. In PCX, physical views of the data are linked to abstract data displays such as parallel coordinates. Physical views show the spatial relationships between different genes' expression patterns within an embryo. Abstract gene expression data displays, on the other hand, allow for an analysis of relationships between different genes directly in the gene expression space. We discuss parallel coordinates as one example of an abstract data view currently available in PCX. We have developed several extensions to standard parallel coordinates to facilitate brushing and the visualization of 3D gene expression data.

  13. A continuous surface reconstruction method on point cloud captured from a 3D surface photogrammetry system

    SciTech Connect

    Liu, Wenyang; Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J.; Sawant, Amit; Ruan, Dan

    2015-11-15

    achieved submillimeter reconstruction RMSE under different configurations, demonstrating quantitatively the fidelity of the proposed method in preserving local structural properties of the underlying surface in the presence of noise and missing measurements, and its robustness toward variations of such characteristics. On point clouds from the human subject, the proposed method successfully reconstructed all patient surfaces, filling regions where raw point coordinate readings were missing. Within two comparable regions of interest in the chest area, similar mean curvature distributions were acquired from both their reconstructed surface and CT surface, with mean and standard deviation of (μ_recon = −2.7 × 10⁻³ mm⁻¹, σ_recon = 7.0 × 10⁻³ mm⁻¹) and (μ_CT = −2.5 × 10⁻³ mm⁻¹, σ_CT = 5.3 × 10⁻³ mm⁻¹), respectively. The agreement of local geometry properties between the reconstructed surfaces and the CT surface demonstrated the ability of the proposed method in faithfully representing the underlying patient surface. Conclusions: The authors have integrated and developed an accurate level-set based continuous surface reconstruction method on point clouds acquired by a 3D surface photogrammetry system. The proposed method has generated a continuous representation of the underlying phantom and patient surfaces with good robustness against noise and missing measurements. It serves as an important first step for further development of motion tracking methods during radiotherapy.

  14. Evaluation of Methods for Coregistration and Fusion of Rpas-Based 3d Point Clouds and Thermal Infrared Images

    NASA Astrophysics Data System (ADS)

    Hoegner, L.; Tuttas, S.; Xu, Y.; Eder, K.; Stilla, U.

    2016-06-01

    This paper discusses the automatic coregistration and fusion of 3d point clouds generated from aerial image sequences and corresponding thermal infrared (TIR) images. Both RGB and TIR images have been taken from a RPAS platform with a predefined flight path where every RGB image has a corresponding TIR image taken from the same position and with the same orientation with respect to the accuracy of the RPAS system and the inertial measurement unit. To remove remaining differences in the exterior orientation, different strategies for coregistering RGB and TIR images are discussed: (i) coregistration based on 2D line segments for every single TIR image and the corresponding RGB image. This method implies a mainly planar scene to avoid mismatches; (ii) coregistration of both the dense 3D point clouds from RGB images and from TIR images by coregistering 2D image projections of both point clouds; (iii) coregistration based on 2D line segments in every single TIR image and 3D line segments extracted from intersections of planes fitted in the segmented dense 3D point cloud; (iv) coregistration of both the dense 3D point clouds from RGB images and from TIR images using both ICP and an adapted version based on corresponding segmented planes; (v) coregistration of both image sets based on point features. The quality is measured by comparing the differences of the back projection of homologous points in both corrected RGB and TIR images.

  15. Recognizing Objects in 3D Point Clouds with Multi-Scale Local Features

    PubMed Central

    Lu, Min; Guo, Yulan; Zhang, Jun; Ma, Yanxin; Lei, Yinjie

    2014-01-01

    Recognizing 3D objects from point clouds in the presence of significant clutter and occlusion is a highly challenging task. In this paper, we present a coarse-to-fine 3D object recognition algorithm. During the phase of offline training, each model is represented with a set of multi-scale local surface features. During the phase of online recognition, a set of keypoints are first detected from each scene. The local surfaces around these keypoints are further encoded with multi-scale feature descriptors. These scene features are then matched against all model features to generate recognition hypotheses, which include model hypotheses and pose hypotheses. Finally, these hypotheses are verified to produce recognition results. The proposed algorithm was tested on two standard datasets, with rigorous comparisons to the state-of-the-art algorithms. Experimental results show that our algorithm was fully automatic and highly effective. It was also very robust to occlusion and clutter. It achieved the best recognition performance on all of these datasets, showing its superiority compared to existing algorithms. PMID:25517694

  16. Recognizing objects in 3D point clouds with multi-scale local features.

    PubMed

    Lu, Min; Guo, Yulan; Zhang, Jun; Ma, Yanxin; Lei, Yinjie

    2014-01-01

    Recognizing 3D objects from point clouds in the presence of significant clutter and occlusion is a highly challenging task. In this paper, we present a coarse-to-fine 3D object recognition algorithm. During the phase of offline training, each model is represented with a set of multi-scale local surface features. During the phase of online recognition, a set of keypoints are first detected from each scene. The local surfaces around these keypoints are further encoded with multi-scale feature descriptors. These scene features are then matched against all model features to generate recognition hypotheses, which include model hypotheses and pose hypotheses. Finally, these hypotheses are verified to produce recognition results. The proposed algorithm was tested on two standard datasets, with rigorous comparisons to the state-of-the-art algorithms. Experimental results show that our algorithm was fully automatic and highly effective. It was also very robust to occlusion and clutter. It achieved the best recognition performance on all of these datasets, showing its superiority compared to existing algorithms. PMID:25517694

  17. Comparison of 3D point clouds produced by LIDAR and UAV photoscan in the Rochefort cave (Belgium)

    NASA Astrophysics Data System (ADS)

    Watlet, Arnaud; Triantafyllou, Antoine; Kaufmann, Olivier; Le Mouelic, Stéphane

    2016-04-01

    Amongst today's techniques that are able to produce 3D point clouds, LIDAR and UAV (Unmanned Aerial Vehicle) photogrammetry are probably the most commonly used. Both methods have their own advantages and limitations. LIDAR scans create high-resolution and high-precision 3D point clouds, but such methods are generally costly, especially for sporadic surveys. Compared to LIDAR, UAVs (e.g. drones) are cheap and flexible to use in different kinds of environments. Moreover, the photogrammetric processing workflow for digital images taken with UAVs has become easier with the rise of many affordable software packages (e.g. Agisoft, PhotoModeler3D, VisualSFM). We present here a challenging study made at the Rochefort Cave Laboratory (South Belgium) comprising surface and underground surveys. The site is located in the Belgian Variscan fold-and-thrust belt, a region that shows many karstic networks within Devonian limestone units. A LIDAR scan was acquired in the main chamber of the cave (~ 15000 m³) to produce a 3D point cloud of its inner walls and to infer geological beds and structures. Even though the use of the LIDAR instrument was not really comfortable in such a caving environment, the collected data showed remarkable precision when checked against a few control points. We also performed another challenging survey of the same cave chamber by modelling a 3D point cloud using photogrammetry of a set of DSLR camera pictures taken from the ground together with UAV pictures. The aim was to compare both techniques in terms of (i) implementation of data acquisition and processing, (ii) quality of the resulting 3D point clouds (point density, field vs. cloud recovery and point precision), and (iii) their application for geological purposes. Through the Rochefort case study, the main conclusions are that the LIDAR technique provides higher-density point clouds with slightly higher precision than the photogrammetry method. However, 3D data modeled by photogrammetry provide visible light spectral information

  18. Attribute-based point cloud visualization in support of 3-D classification

    NASA Astrophysics Data System (ADS)

    Zlinszky, András; Otepka, Johannes; Kania, Adam

    2016-04-01

    Despite the rich information available in LIDAR point attributes through full waveform recording, radiometric calibration and advanced texture metrics, LIDAR-based classification is mostly done in the raster domain. Point-based analyses such as noise removal or terrain filtering are often carried out without visual investigation of the point cloud attributes used. This is because point cloud visualization software usually handles only a limited number of pre-defined point attributes and only allows colorizing the point cloud with one of these at a time. Meanwhile, point cloud classification is rapidly evolving, and uses not only the individual attributes but combinations of these. In order to understand input data and output results better, more advanced methods for visualization are needed. Here we propose an algorithm of the OPALS software package that handles visualization of the point cloud together with its attributes. The algorithm is based on the .odm (OPALS data manager) file format that efficiently handles a large number of pre-defined point attributes and also allows the user to generate new ones. Attributes of interest can be visualized individually, by applying predefined or user-generated palettes in a simple .xml format. The colours of the palette are assigned to the points by setting the respective Red, Green and Blue attributes of the point to result in the colour pre-defined by the palette for the corresponding attribute value. The algorithm handles scaling and histogram equalization based on the distribution of the point attribute to be considered. Additionally, combinations of attributes can be visualized based on RGB colour mixing. The output dataset can be in any standard format where RGB attributes are supported and visualized with conventional point cloud viewing software. Viewing the point cloud together with its attributes allows efficient selection of filter settings and classification parameters. For already classified point clouds, a large
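
    The palette mechanism described above (scale or histogram-equalize an attribute, then look it up in a palette and write the result to the point's Red, Green and Blue attributes) can be sketched outside OPALS as follows; the palette, attribute and equalization method are illustrative.

    ```python
    import numpy as np

    def equalize(values):
        """Map attribute values to [0, 1] by histogram equalization (rank / count)."""
        ranks = np.argsort(np.argsort(values))
        return ranks / max(len(values) - 1, 1)

    def colorize(values, palette):
        """Assign each point an RGB triple by looking up its equalized attribute value in a palette."""
        t = equalize(values)
        palette = np.asarray(palette, dtype=float)
        stops = np.linspace(0.0, 1.0, len(palette))
        rgb = np.column_stack([np.interp(t, stops, palette[:, c]) for c in range(3)])
        return rgb.astype(np.uint8)

    # usage: colour points by, e.g., echo amplitude with a simple blue-green-red palette
    amplitude = np.random.default_rng(2).gamma(2.0, 10.0, 1000)
    colors = colorize(amplitude, palette=[(0, 0, 255), (0, 255, 0), (255, 0, 0)])
    ```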

  19. Automatic 3D Building Detection and Modeling from Airborne LiDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Sun, Shaohui

    Urban reconstruction, with an emphasis on man-made structure modeling, is an active research area with broad impact on several potential applications. Urban reconstruction combines photogrammetry, remote sensing, computer vision, and computer graphics. Even though a huge volume of work has been done, many problems still remain unsolved. Automation is one of the key focus areas in this research. In this work, a fast, completely automated method to create 3D watertight building models from airborne LiDAR (Light Detection and Ranging) point clouds is presented. The developed method analyzes the scene content and produces multi-layer rooftops, with complex rigorous boundaries and vertical walls, that connect rooftops to the ground. The graph cuts algorithm is used to separate vegetative elements from the rest of the scene content, based on a local analysis of the properties of the local implicit surface patch. The ground terrain and building rooftop footprints are then extracted, utilizing the developed strategy, a two-step hierarchical Euclidean clustering. The method presented here adopts a "divide-and-conquer" scheme. Once the building footprints are segmented from the terrain and vegetative areas, the whole scene is divided into individual independent processing units which represent potential points on the rooftop. For each individual building region, significant features on the rooftop are further detected using a specifically designed region-growing algorithm with surface smoothness constraints. The principal orientation of each building rooftop feature is calculated using a minimum bounding box fitting technique, and is used to guide the refinement of shapes and boundaries of the rooftop parts. Boundaries for all of these features are refined for the purpose of producing a strict description. Once the description of the rooftops is achieved, polygonal mesh models are generated by creating surface patches with outlines defined by detected

  20. Processing 3D flash LADAR point-clouds in real-time for flight applications

    NASA Astrophysics Data System (ADS)

    Craig, R.; Gravseth, I.; Earhart, R. P.; Bladt, J.; Barnhill, S.; Ruppert, L.; Centamore, C.

    2007-04-01

    Ball Aerospace & Technologies Corp. has demonstrated real-time processing of 3D imaging LADAR point-cloud data to produce the industry's first time-of-flight (TOF) 3D video capability. This capability is uniquely suited to the rigorous demands of space and airborne flight applications and holds great promise in the area of autonomous navigation. It will provide long-range, three-dimensional video information to autonomous flight software or pilots for immediate use in rendezvous and docking, proximity operations, landing, surface vision systems, and automatic target recognition and tracking. This is enabled by our new generation of FPGA-based "pixel-tube" processors, coprocessors and their associated algorithms which have led to a number of advancements in high-speed wavefront processing along with additional advances in dynamic camera control, and space laser designs based on Ball's CALIPSO LIDAR. This evolution in LADAR is made possible by moving the mechanical complexity required for a scanning system into the electronics, where production, integration, testing and life-cycle costs can be significantly reduced. This technique requires a state-of-the-art TOF read-out integrated circuit (ROIC) attached to a sensor array to collect high-resolution temporal data, which is then processed through FPGAs. The number of calculations required to process the data is greatly reduced thanks to the fact that all points are captured at the same time and thus correlated. This correlation allows extremely efficient FPGA processing. This capability has been demonstrated in prototype form at both Marshall Space Flight Center and Langley Research Center on targets that represent docking and landing scenarios. This report outlines many aspects of this work as well as aspects of our recent testing at Marshall's Flight Robotics Laboratory.

  1. Knowledge guided object detection and identification in 3D point clouds

    NASA Astrophysics Data System (ADS)

    Karmacharya, A.; Boochs, F.; Tietz, B.

    2015-05-01

    Modern instruments like laser scanners and 3D cameras, or image-based techniques like structure from motion, produce huge point clouds as a basis for further object analysis. This has considerably changed the way data are compiled, away from selective, manually guided processes towards automatic and computer-supported strategies. However, there is still a long way to go to achieve the quality and robustness of manual processes, as data sets are mostly very complex. Looking at existing strategies, 3D data processing for object detection and reconstruction relies heavily on either data-driven or model-driven approaches. These approaches are limited by their strong dependence on the nature of the data and their inability to handle deviations. Furthermore, the lack of capabilities to integrate other data or information between the processing steps further exposes their limitations. This restricts the approaches to execution with a strict predefined strategy and does not allow deviations when new, unexpected situations arise. We propose a solution that induces intelligence in the processing activities through the usage of semantics. The solution binds the objects, along with other related knowledge domains, to the numerical processing to facilitate the detection of geometries and then uses experts' inference rules to annotate them. The solution was tested within the prototypical application of the research project "Wissensbasierte Detektion von Objekten in Punktwolken für Anwendungen im Ingenieurbereich (WiDOP)" (knowledge-based detection of objects in point clouds for engineering applications). The flexibility of the solution is demonstrated through two entirely different use case scenarios: Deutsche Bahn (German Railway System) for the outdoor scenarios and Fraport (Frankfurt Airport) for the indoor scenarios. Apart from the difference in their environments, they provide different conditions which the solution needs to consider. While the locations of the objects at Fraport were known in advance, those of DB were not known at the beginning.

  2. Semi-automated extraction and delineation of 3D roads of street scene from mobile laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Fang, Lina; Li, Jonathan

    2013-05-01

    Accurate 3D road information is important for applications such as road maintenance and virtual 3D modeling. Mobile laser scanning (MLS) is an efficient technique for capturing dense point clouds that can be used to construct detailed road models for large areas. This paper presents a method for extracting and delineating roads from large-scale MLS point clouds. The proposed method partitions MLS point clouds into a set of consecutive "scanning lines", each of which consists of a road cross section. A moving window operator is used to filter out non-ground points line by line, and curb points are detected based on curb patterns. The detected curb points are tracked and refined so that they are both globally consistent and locally similar. To evaluate the validity of the proposed method, experiments were conducted using two types of street-scene point clouds captured by Optech's Lynx Mobile Mapper System. The completeness, correctness, and quality of the extracted roads are over 94.42%, 91.13%, and 91.3%, respectively, which shows that the proposed method is a promising solution for extracting 3D roads from MLS point clouds.

  3. Extraction and refinement of building faces in 3D point clouds

    NASA Astrophysics Data System (ADS)

    Pohl, Melanie; Meidow, Jochen; Bulatov, Dimitri

    2013-10-01

    In this paper, we present an approach to generate a 3D model of an urban scene from sensor data. The first milestone along the way is to classify the sensor data into the main parts of a scene, such as ground, vegetation, buildings and their outlines. This has already been accomplished in our previous work. Now, we propose a four-step algorithm to model the building structure, which is assumed to consist of several dominant planes. First, we extract small elevated objects, like chimneys, using a hot-spot detector and handle the detected regions separately. In order to model the variety of roof structures precisely, we split up complex building blocks into parts. Two different approaches are used: where underlying 2D ground polygons are available, we use geometric methods to divide them into sub-polygons; without polygons, we use morphological operations and segmentation methods. In the third step, extraction of dominant planes takes place, using either the RANSAC or the J-linkage algorithm. They operate on point clouds of sufficient confidence within the previously separated building parts and give robust results even with noisy, outlier-rich data. Last, we refine the previously determined plane parameters using geometric relations of the building faces. Due to noise, these expected properties of roofs and walls are not fulfilled. Hence, we enforce them as hard constraints and use the previously extracted plane parameters as initial values for an optimization method. To test the proposed workflow, we use several data sets, including noisy data from depth maps and data computed by laser scanning.
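
    The dominant-plane extraction in the third step relies on robust estimators such as RANSAC. A bare-bones RANSAC plane fit is sketched below; the iteration count and inlier threshold are illustrative, and the J-linkage alternative is not covered.

    ```python
    import numpy as np

    def ransac_plane(points, iters=500, thresh=0.05, seed=0):
        """Return the dominant plane (unit normal n, offset d with n.x + d = 0) and its inlier mask."""
        rng = np.random.default_rng(seed)
        best_mask, best_model = None, None
        for _ in range(iters):
            sample = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
            norm = np.linalg.norm(n)
            if norm < 1e-12:                      # degenerate (collinear) sample
                continue
            n = n / norm
            d = -n @ sample[0]
            mask = np.abs(points @ n + d) < thresh
            if best_mask is None or mask.sum() > best_mask.sum():
                best_mask, best_model = mask, (n, d)
        return best_model, best_mask

    # dominant planes are usually extracted greedily: fit, remove the inliers, repeat on the remainder
    ```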

  4. Surface-based matching of 3D point clouds with variable coordinates in source and target system

    NASA Astrophysics Data System (ADS)

    Ge, Xuming; Wunderlich, Thomas

    2016-01-01

    The automatic co-registration of point clouds, representing three-dimensional (3D) surfaces, is an important technique in 3D reconstruction and is widely applied in many different disciplines. An alternative approach is proposed here that estimates the transformation parameters of one or more 3D search surfaces with respect to a 3D template surface. The approach uses the nonlinear Gauss-Helmert model, minimizing the quadratically constrained least squares problem. This approach has the ability to match arbitrarily oriented 3D surfaces captured from a number of different sensors, on different time-scales and at different resolutions. In addition to the 3D surface-matching paths, the mathematical model allows the precision of the point clouds to be assessed after adjustment. The error behavior of surfaces can also be investigated based on the proposed approach. Some practical examples are presented and the results are compared with the iterative closest point and the linear least-squares approaches to demonstrate the performance and benefits of the proposed technique.

  5. 3D point cloud classification of complex natural scenes using a multi-scale dimensionality criterion: applications in geomorphology

    NASA Astrophysics Data System (ADS)

    Brodu, N.; Lague, D.

    2012-04-01

    3D point clouds derived from terrestrial laser scanners (TLS) and photogrammetry are now frequently used in geomorphology to achieve greater precision and completeness in surveying natural environments than was feasible a few years ago. Yet, scientific exploitation of these large and complex 3D data sets remains difficult and would benefit from automated classification procedures that could pre-process the raw point cloud data. Typical examples of applications are the separation of vegetation from ground or cliff outcrops, the distinction between fresh rock surfaces and rockfall, the classification of flat or rippled beds, and more generally the classification of 3D surfaces according to their morphology directly in the native point cloud data organization rather than after a sometimes cumbersome meshing or gridding phase. Yet developing such classification procedures remains difficult because of the 3D nature of the data generated from ground-based systems (as opposed to the 2.5D nature of aerial lidar data) and the heterogeneity and complexity of natural surfaces. We present a new software suite (CANUPO) that can classify raw point clouds in 3D based on a new geometrical measure: the multi-scale dimensionality. This method exploits the multi-resolution characteristics of high-resolution datasets covering scales ranging from a few centimeters to hundreds of meters. The dimensionality characterizes the local 3D organization of the point cloud within spheres centered on the measured points and varies from 1D (points set along a line) and 2D (points forming a plane) to the full 3D volume. By varying the diameter of the sphere, we track how the local cloud geometry behaves across scales (typically ranging from 5 cm to 1 m). We present the technique and illustrate its efficiency on two examples: separating riparian vegetation from ground, and classifying a steep mountain stream as vegetation, rock, gravel or water surface. In these two cases, separating the
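
    A sketch of the dimensionality measure described above: for each point, the covariance eigenvalues of its neighbours inside a sphere give the proportions of linear (1D), planar (2D) and volumetric (3D) behaviour, and stacking those proportions over several sphere radii yields a multi-scale descriptor. The eigenvalue formulation and radii below are one common convention, not necessarily CANUPO's exact definition, and the classifier itself is omitted.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def dimensionality(points, centers, radius):
        """Per-point proportions of 1D / 2D / 3D behaviour within a sphere of the given radius."""
        tree = cKDTree(points)
        feats = np.zeros((len(centers), 3))
        for i, c in enumerate(centers):
            nbrs = points[tree.query_ball_point(c, radius)]
            if len(nbrs) < 3:
                continue
            w = np.linalg.eigvalsh(np.cov(nbrs.T))[::-1]     # eigenvalues, largest first
            w = np.clip(w, 0.0, None)
            if w[0] <= 0:
                continue
            feats[i] = [(w[0] - w[1]) / w[0],                # linear (1D) proportion
                        (w[1] - w[2]) / w[0],                # planar (2D) proportion
                        w[2] / w[0]]                         # volumetric (3D) proportion
        return feats

    def multiscale_dimensionality(points, centers, radii=(0.05, 0.2, 0.5, 1.0)):
        """Concatenate the dimensionality proportions over several scales."""
        return np.hstack([dimensionality(points, centers, r) for r in radii])
    ```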

  6. Comparison of 3D point clouds obtained by photogrammetric UAVs and TLS to determine the attitude of dolerite outcrops discontinuities.

    NASA Astrophysics Data System (ADS)

    Duarte, João; Gonçalves, Gil; Duarte, Diogo; Figueiredo, Fernando; Mira, Maria

    2015-04-01

    Photogrammetric Unmanned Aerial Vehicles (UAVs) and Terrestrial Laser Scanners (TLS) are two emerging technologies that allow the production of dense 3D point clouds of the sensed topographic surfaces. Although image-based stereo-photogrammetric point clouds cannot, in general, compete with TLS point clouds in geometric quality, fully automated mapping solutions based on ultra-light UAVs (or drones) have recently become commercially available at very reasonable accuracy and cost for engineering and geological applications. The purpose of this paper is to compare the point clouds generated by these two technologies, in order to automate the manual tasks commonly used to detect and represent the attitude of discontinuities (stereographic projection: Schmidt net, equal area). To avoid the difficulties of access and to guarantee safe data survey conditions, this fundamental step in all geological/geotechnical studies applied to the extractive industry and engineering works has to be replaced by a more expeditious and reliable methodology. This methodology will allow the needs of rock mass evaluation to be answered in a clearer and more automated way, by mapping the structures present, which will considerably reduce the associated risks (investment, structure dimensioning, safety, etc.). A case study of a dolerite outcrop located in the center of Portugal (the dolerite outcrop is situated in the volcanic complex of Serra de Todo-o-Mundo, Casais Gaiola, intruded in Jurassic sandstones) is used to assess this methodology. The results obtained show that the 3D point cloud produced by the photogrammetric UAV platform has the appropriate geometric quality for extracting the parameters that define the discontinuities of the dolerite outcrops. Although they are comparable to the manually extracted parameters, their quality is inferior to the parameters extracted from the TLS point cloud.
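
    Once a plane has been fitted to a discontinuity facet in either point cloud, its attitude (dip and dip direction) follows directly from the unit normal. A small sketch of that conversion is given below; the axis convention (X = east, Y = north, Z = up, dip direction measured clockwise from north) is an assumption, not taken from the paper.

    ```python
    import numpy as np

    def attitude_from_normal(n):
        """Dip and dip direction (degrees) of a plane from its normal vector.
        Assumes X = east, Y = north, Z = up; dip direction is measured clockwise from north."""
        n = np.asarray(n, dtype=float)
        n = n / np.linalg.norm(n)
        if n[2] < 0:                                  # make the normal point upwards
            n = -n
        dip = np.degrees(np.arccos(n[2]))
        dip_direction = np.degrees(np.arctan2(n[0], n[1])) % 360.0
        return dip, dip_direction

    # a plane dipping 45 degrees towards the east:
    print(attitude_from_normal([np.sin(np.radians(45)), 0.0, np.cos(np.radians(45))]))   # (45.0, 90.0)
    ```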

  7. Road Signs Detection and Recognition Utilizing Images and 3d Point Cloud Acquired by Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Li, Y. H.; Shinohara, T.; Satoh, T.; Tachibana, K.

    2016-06-01

    High-definition and highly accurate road maps are necessary for the realization of automated driving, and road signs are among the most important elements in the road map. Therefore, a technique is necessary which can acquire information about all kinds of road signs automatically and efficiently. Due to the continuous technical advancement of Mobile Mapping Systems (MMS), it has become possible to acquire large numbers of images and 3D point clouds efficiently with highly precise position information. In this paper, we present an automatic road sign detection and recognition approach utilizing both images and 3D point clouds acquired by MMS. The proposed approach consists of three stages: 1) detection of road signs from images based on their color and shape features using an object-based image analysis method, 2) filtering out of over-detected candidates utilizing size and position information estimated from the 3D point cloud, the candidate regions and camera information, and 3) road sign recognition using a template matching method after shape normalization. The effectiveness of the proposed approach was evaluated on a test dataset acquired from more than 180 km of different types of roads in Japan. The results show a very high success rate in the detection and recognition of road signs, even under challenging conditions such as discoloration, deformation and partial occlusion.

  8. 3D granulometry: grain-scale shape and size distribution from point cloud dataset of river environments

    NASA Astrophysics Data System (ADS)

    Steer, Philippe; Lague, Dimitri; Gourdon, Aurélie; Croissant, Thomas; Crave, Alain

    2016-04-01

    The grain-scale morphology of river sediments and their size distribution are important factors controlling the efficiency of fluvial erosion and transport. In turn, constraining the spatial evolution of these two metrics offers deep insights into the dynamics of river erosion and sediment transport from hillslopes to the sea. However, the size distribution of river sediments is generally assessed using statistically biased field measurements, and determining the grain-scale shape of river sediments remains a real challenge in geomorphology. Here we determine, with new methodological approaches based on the segmentation and geomorphological fitting of 3D point cloud datasets, the size distribution and grain-scale shape of sediments located in river environments. Point cloud segmentation is performed using either machine-learning algorithms or geometrical criteria, such as local plane fitting or curvature analysis. Once the grains are individualized into several sub-clouds, each grain-scale morphology is determined using a 3D geometrical fitting algorithm applied to the sub-cloud. Although different geometrical models can be conceived and tested, only ellipsoidal models were used in this study. A phase of results checking is then performed to remove grains showing a best-fitting model with a low level of confidence. The main benefits of this automatic method are that it provides 1) an unbiased estimate of grain-size distribution on a large range of scales, from centimeters to tens of meters; 2) access to a very large amount of data, only limited by the number of grains in the point-cloud dataset; 3) access to the 3D morphology of grains, in turn allowing new metrics characterizing the size and shape of grains to be developed. The main limit of this method is that it is only able to detect grains with a characteristic size greater than the resolution of the point cloud. This new 3D granulometric method is then applied to river terraces both in the Poerua catchment in New Zealand and

  9. Geometric and topological feature extraction of linear segments from 2D cross-section data of 3D point clouds

    NASA Astrophysics Data System (ADS)

    Ramamurthy, Rajesh; Harding, Kevin; Du, Xiaoming; Lucas, Vincent; Liao, Yi; Paul, Ratnadeep; Jia, Tao

    2015-05-01

    Optical measurement techniques are often employed to digitally capture three dimensional shapes of components. The digital data density output from these probes ranges from a few discrete points to millions of points in the point cloud. The point cloud taken as a whole represents a discretized measurement of the actual 3D shape of the surface of the component inspected, to the measurement resolution of the sensor. Embedded within the measurement are the various features of the part that make up its overall shape. Part designers are often interested in the feature information, since these features relate directly to part function and to the analytical models used to develop the part design. Furthermore, tolerances are added to these dimensional features, making their extraction a requirement for the manufacturing quality plan of the product. The task of "extracting" these design features from the point cloud is a post-processing task. Due to measurement repeatability and cycle time requirements, automated feature extraction from measurement data is often required. The presence of non-ideal features such as high-frequency optical noise and surface roughness can significantly complicate this feature extraction process. This research describes a robust process for extracting linear and arc segments from general 2D point clouds, to a prescribed tolerance. The feature extraction process generates the topology, specifically the number of linear and arc segments, and the geometry equations of the linear and arc segments automatically from the input 2D point clouds. This general feature extraction methodology has been employed as an integral part of the automated post-processing algorithms of 3D data of fine features.
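
    As a rough illustration of extracting linear segments to a prescribed tolerance, the following recursive split-and-fit sketch (in the spirit of Douglas-Peucker) subdivides an ordered 2D cross-section until every chord satisfies the tolerance. It is illustrative only and omits the arc fitting and noise handling described in the paper.

      import numpy as np

      def split_into_lines(points, tol):
          """points: (N, 2) ordered cross-section points; tol: max point-to-chord distance."""
          p0, p1 = points[0], points[-1]
          chord = p1 - p0
          length = np.hypot(*chord)
          if length == 0 or len(points) <= 2:
              return [(p0, p1)]
          # Perpendicular distance of every point to the chord p0-p1.
          dist = np.abs(np.cross(points - p0, chord)) / length
          i = int(np.argmax(dist))
          if dist[i] <= tol:
              return [(p0, p1)]              # this chord already satisfies the tolerance
          # Otherwise split at the worst point and recurse on both halves.
          return split_into_lines(points[: i + 1], tol) + split_into_lines(points[i:], tol)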

  10. Evaluating the Potential of Rtk-Uav for Automatic Point Cloud Generation in 3d Rapid Mapping

    NASA Astrophysics Data System (ADS)

    Fazeli, H.; Samadzadegan, F.; Dadrasjavan, F.

    2016-06-01

    During disaster and emergency situations, 3D geospatial data can provide essential information for decision support systems. The utilization of geospatial data using digital surface models as a basic reference is mandatory to provide an accurate, quick emergency response in so-called rapid mapping activities. The trade-off between accuracy requirements and time restrictions is critical in these situations. UAVs, as alternative platforms for 3D point cloud acquisition, offer great potential because of their flexibility and practicality combined with low-cost implementation. Moreover, the high-resolution data collected from UAV platforms have the capability to provide a quick overview of the disaster area. The aim of this paper is to test and evaluate a low-cost system for the generation of point clouds using imagery collected from a low-altitude small autonomous UAV equipped with a customized single-frequency RTK module. A customized multi-rotor platform is used in this study. Moreover, electronic hardware is used to simplify user interaction with the UAV, such as RTK-GPS/camera synchronization; besides the synchronization, lever-arm calibration is performed. The platform is equipped with a Sony NEX-5N 16.1-megapixel camera as the imaging sensor. The lens attached to the camera is a ZEISS prime lens with an F1.8 maximum aperture and a 24 mm focal length, delivering outstanding images. All necessary calibrations are performed and the flight is carried out over the area of interest at a flight height of 120 m above ground level, resulting in a 2.38 cm GSD. Prior to image acquisition, 12 signalized GCPs and 20 check points were distributed in the study area and measured with dual-frequency GPS via the RTK technique, with a horizontal accuracy of σ = 1.5 cm and a vertical accuracy of σ = 2.3 cm. Results of direct georeferencing are compared to these points, and experimental results show that decimeter-level accuracy for the 3D point cloud is achievable with the proposed system, which is suitable
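
    As a quick sanity check of the reported GSD, the usual ground-sampling-distance relation GSD = pixel pitch x flight height / focal length reproduces the stated 2.38 cm; the pixel pitch of roughly 4.8 µm assumed below for the 16.1-megapixel APS-C sensor is an assumption, not a value given in the record.

      pixel_pitch = 4.76e-6     # m per pixel on the sensor (assumed)
      focal_length = 0.024      # m (24 mm prime lens, as stated)
      flight_height = 120.0     # m above ground level (as stated)

      gsd = pixel_pitch * flight_height / focal_length
      print(f"GSD = {gsd * 100:.2f} cm per pixel")   # about 2.38 cm, matching the abstract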

  11. What's the Point of a Raster ? Advantages of 3D Point Cloud Processing over Raster Based Methods for Accurate Geomorphic Analysis of High Resolution Topography.

    NASA Astrophysics Data System (ADS)

    Lague, D.

    2014-12-01

    High Resolution Topographic (HRT) datasets are predominantly stored and analyzed as 2D raster grids of elevations (i.e., Digital Elevation Models). Raster grid processing is common in GIS software and benefits from a large library of fast algorithms dedicated to geometrical analysis, drainage network computation and topographic change measurement. Yet, all instruments or methods currently generating HRT datasets (e.g., ALS, TLS, SFM, stereo satellite imagery) natively output 3D unstructured point clouds that are (i) non-regularly sampled, (ii) incomplete (e.g., submerged parts of river channels are rarely measured), and (iii) include 3D elements (e.g., vegetation, vertical features such as river banks or cliffs) that cannot be accurately described in a DEM. Interpolating the raw point cloud onto a 2D grid generally results in a loss of position accuracy, spatial resolution and in more or less controlled interpolation. Here I demonstrate how studying earth surface topography and processes directly on native 3D point cloud datasets offers several advantages over raster based methods: point cloud methods preserve the accuracy of the original data, can better handle the evaluation of uncertainty associated with topographic change measurements and are more suitable to study vegetation characteristics and steep features of the landscape. In this presentation, I will illustrate and compare Point Cloud based and Raster based workflows with various examples involving ALS, TLS and SFM for the analysis of bank erosion processes in bedrock and alluvial rivers, rockfall statistics (including rockfall volume estimates computed directly from point clouds) and the interaction of vegetation/hydraulics and sedimentation in salt marshes. These workflows use two recently published algorithms for point cloud classification (CANUPO) and point cloud comparison (M3C2), now implemented in the open source software CloudCompare.

  12. 3d Geological Outcrop Characterization: Automatic Detection of 3d Planes (Azimuth and Dip) Using LiDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Anders, K.; Hämmerle, M.; Miernik, G.; Drews, T.; Escalona, A.; Townsend, C.; Höfle, B.

    2016-06-01

    Terrestrial laser scanning constitutes a powerful method in spatial information data acquisition and allows for geological outcrops to be captured with high resolution and accuracy. A crucial aspect for numerous geologic applications is the extraction of rock surface orientations from the data. This paper focuses on the detection of planes in rock surface data by applying a segmentation algorithm directly to a 3D point cloud. Its performance is assessed considering (1) reduced spatial resolution of data and (2) smoothing in the course of data pre-processing. The methodology is tested on simulations of progressively reduced spatial resolution defined by varying point cloud density. Smoothing of the point cloud data is implemented by modifying the neighborhood criteria during normals estimation. The considerable alteration of resulting planes emphasizes the influence of smoothing on the plane detection prior to the actual segmentation. Therefore, the parameter needs to be set in accordance with individual purposes and respective scales of studies. Furthermore, it is concluded that the quality of segmentation results does not decline even when the data volume is significantly reduced down to 10%. The azimuth and dip values of individual segments are determined for planes fit to the points belonging to one segment. Based on these results, azimuth and dip as well as strike character of the surface planes in the outcrop are assessed. Thereby, this paper contributes to a fully automatic and straightforward workflow for a comprehensive geometric description of outcrops in 3D.
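
    Two ingredients of such a workflow - a PCA normal estimate whose neighborhood radius acts as the smoothing parameter discussed above, and the conversion of a plane normal to azimuth (dip direction) and dip - can be sketched as follows. This is a hedged illustration, not the paper's code; numpy and scipy are assumed and all names are illustrative.

      import numpy as np
      from scipy.spatial import cKDTree

      def estimate_normal(points, tree, idx, radius):
          """PCA normal of the neighborhood of points[idx]; a larger radius means more smoothing."""
          nbrs = points[tree.query_ball_point(points[idx], r=radius)]
          centered = nbrs - nbrs.mean(axis=0)
          # Eigenvector with the smallest eigenvalue of the covariance = surface normal.
          _, vecs = np.linalg.eigh(centered.T @ centered)
          return vecs[:, 0]

      def azimuth_dip(normal):
          n = normal if normal[2] >= 0 else -normal              # force an upward-pointing normal
          dip = np.degrees(np.arccos(n[2] / np.linalg.norm(n)))
          azimuth = np.degrees(np.arctan2(n[0], n[1])) % 360.0   # dip direction, measured from north
          return azimuth, dip

      # usage sketch:
      # tree = cKDTree(points)
      # az, dip = azimuth_dip(estimate_normal(points, tree, 0, radius=0.1))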

  13. 3D polygonal representation of dense point clouds by triangulation, segmentation, and texture projection

    NASA Astrophysics Data System (ADS)

    Tajbakhsh, Touraj

    2010-02-01

    A basic concern of computer graphics is the modeling and realistic representation of three-dimensional objects. In this paper we present our reconstruction framework, which determines a polygonal surface from a set of dense points such as those typically obtained from laser scanners. We deploy the concept of adaptive blobs to achieve a first volumetric representation of the object. In the next step we estimate a coarse surface using the marching cubes method. We propose to deploy a depth-first search segmentation algorithm traversing a graph representation of the obtained polygonal mesh in order to identify all connected components. A so-called supervised triangulation maps the coarse surfaces onto the dense point cloud. We optimize the mesh topology using edge exchange operations. For photo-realistic visualization of objects we finally synthesize optimal low-loss textures from available scene captures of different projections. We evaluate our framework on artificial data as well as real sensed data.
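
    The depth-first-search step mentioned above - finding connected components by traversing the mesh's adjacency graph - is simple enough to show directly. The sketch below uses an iterative DFS over vertex adjacency built from the triangle list; the names are illustrative, not the authors' code.

      from collections import defaultdict

      def connected_components(num_vertices, triangles):
          adj = defaultdict(set)
          for a, b, c in triangles:                  # build the vertex adjacency graph
              adj[a].update((b, c)); adj[b].update((a, c)); adj[c].update((a, b))
          seen, components = set(), []
          for start in range(num_vertices):
              if start in seen:
                  continue
              stack, comp = [start], []
              while stack:                           # iterative depth-first traversal
                  v = stack.pop()
                  if v in seen:
                      continue
                  seen.add(v)
                  comp.append(v)
                  stack.extend(adj[v] - seen)
              components.append(comp)
          return components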

  14. 3-D earthquake surface displacements from differencing pre- and post-event LiDAR point clouds

    NASA Astrophysics Data System (ADS)

    Krishnan, A. K.; Nissen, E.; Arrowsmith, R.; Saripalli, S.

    2012-12-01

    The explosion in aerial LiDAR surveying along active faults across the western United States and elsewhere provides a high-resolution topographic baseline against which to compare repeat LiDAR datasets collected after future earthquakes. We present a new method for determining 3-D coseismic surface displacements and rotations by differencing pre- and post-earthquake LiDAR point clouds using an adaptation of the Iterative Closest Point (ICP) algorithm, a point set registration technique widely used in medical imaging, computer vision and graphics. There is no need for any gridding or smoothing of the LiDAR data and the method works well even with large mismatches in the density of the two point clouds. To explore the method's performance, we simulate pre- and post-event point clouds using real ("B4") LiDAR data on the southern San Andreas Fault perturbed with displacements of known magnitude. For input point clouds with ~2 points per square meter, we are able to reproduce displacements with a 50 m grid spacing and with horizontal and vertical accuracies of ~20 cm and ~4 cm. In the future, finer grids and improved precisions should be possible with higher shot densities and better survey geo-referencing. By capturing near-fault deformation in 3-D, LiDAR differencing with ICP will complement satellite-based techniques such as InSAR which map only certain components of the surface deformation and which often break down close to surface faulting or in areas of dense vegetation. It will be especially useful for mapping shallow fault slip and rupture zone deformation, helping inform paleoseismic studies and better constrain fault zone rheology. Because ICP can image rotations directly, the technique will also help resolve the detailed kinematics of distributed zones of faulting where block rotations may be common.
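
    The core of the windowed point-cloud differencing described above is the rigid-body fit that ICP iterates on. A minimal, hedged sketch of one such iteration (nearest-neighbour matching followed by the SVD/Kabsch solution for rotation and translation) is given below; it is not the authors' implementation, and a full ICP loop would repeat these two steps until convergence.

      import numpy as np
      from scipy.spatial import cKDTree

      def best_fit_transform(src, dst):
          """src, dst: (N, 3) corresponding points. Returns rotation R (3x3) and translation t (3,)."""
          cs, cd = src.mean(axis=0), dst.mean(axis=0)
          H = (src - cs).T @ (dst - cd)
          U, _, Vt = np.linalg.svd(H)
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:                   # guard against a reflection solution
              Vt[-1, :] *= -1
              R = Vt.T @ U.T
          return R, cd - R @ cs

      def icp_step(pre_window, post_window):
          """One ICP iteration for a window: match nearest neighbours, then fit the rigid motion."""
          idx = cKDTree(post_window).query(pre_window)[1]
          return best_fit_transform(pre_window, post_window[idx])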

  15. Identification of damage in buildings based on gaps in 3D point clouds from very high resolution oblique airborne images

    NASA Astrophysics Data System (ADS)

    Vetrivel, Anand; Gerke, Markus; Kerle, Norman; Vosselman, George

    2015-07-01

    Point clouds generated from airborne oblique images have become a suitable source for detailed building damage assessment after a disaster event, since they provide the essential geometric and radiometric features of both the roof and façades of the building. However, they often contain gaps that result either from physical damage or from a range of image artefacts or data acquisition conditions. A clear understanding of those reasons, and accurate classification of gap type, are critical for 3D geometry-based damage assessment. In this study, a methodology was developed to delineate buildings from a point cloud and classify the present gaps. The building delineation process was carried out by identifying and merging the roof segments of single buildings from the pre-segmented 3D point cloud. This approach detected 96% of the buildings from a point cloud generated using airborne oblique images. The gap detection and classification methods were tested using two other data sets obtained from Unmanned Aerial Vehicle (UAV) images with a ground resolution of around 1-2 cm. The methods detected all significant gaps and correctly identified the gaps due to damage. The gaps due to damage were identified based on the surrounding damage pattern, applying Gabor wavelets and a histogram of gradient orientation features. Two learning algorithms - SVM and Random Forests - were tested for mapping the damaged regions based on radiometric descriptors. The learning model based on Gabor features with Random Forests performed best, identifying 95% of the damaged regions. The generalization performance of the supervised model, however, was less successful: quality measures decreased by around 15-30%.

  16. Calibration of an Outdoor Distributed Camera Network with a 3D Point Cloud

    PubMed Central

    Ortega, Agustín; Silva, Manuel; Teniente, Ernesto H.; Ferreira, Ricardo; Bernardino, Alexandre; Gaspar, José; Andrade-Cetto, Juan

    2014-01-01

    Outdoor camera networks are becoming ubiquitous in critical urban areas of the largest cities around the world. Although current applications of camera networks are mostly tailored to video surveillance, recent research projects are exploiting their use to aid robotic systems in people-assisting tasks. Such systems require precise calibration of the internal and external parameters of the distributed camera network. Despite the fact that camera calibration has been an extensively studied topic, the development of practical methods for user-assisted calibration that minimize user intervention time and maximize precision still poses significant challenges. These camera systems have non-overlapping fields of view, are subject to environmental stress, and are likely to suffer frequent recalibration. In this paper, we propose the use of a 3D map covering the area to support the calibration process and develop an automated method that allows quick and precise calibration of a large camera network. We present two case studies of the proposed calibration method: one is the calibration of the Barcelona Robot Lab camera network, which also includes direct mappings (homographies) between image coordinates and world points in the ground plane (walking areas) to support person and robot detection and localization algorithms. The second case consists of improving the GPS positioning of geo-tagged images taken with a mobile device in the Facultat de Matemàtiques i Estadística (FME) patio at the Universitat Politècnica de Catalunya (UPC). PMID:25076221

  17. Calibration of an outdoor distributed camera network with a 3D point cloud.

    PubMed

    Ortega, Agustín; Silva, Manuel; Teniente, Ernesto H; Ferreira, Ricardo; Bernardino, Alexandre; Gaspar, José; Andrade-Cetto, Juan

    2014-01-01

    Outdoor camera networks are becoming ubiquitous in critical urban areas of the largest cities around the world. Although current applications of camera networks are mostly tailored to video surveillance, recent research projects are exploiting their use to aid robotic systems in people-assisting tasks. Such systems require precise calibration of the internal and external parameters of the distributed camera network. Despite the fact that camera calibration has been an extensively studied topic, the development of practical methods for user-assisted calibration that minimize user intervention time and maximize precision still poses significant challenges. These camera systems have non-overlapping fields of view, are subject to environmental stress, and are likely to suffer frequent recalibration. In this paper, we propose the use of a 3D map covering the area to support the calibration process and develop an automated method that allows quick and precise calibration of a large camera network. We present two case studies of the proposed calibration method: one is the calibration of the Barcelona Robot Lab camera network, which also includes direct mappings (homographies) between image coordinates and world points in the ground plane (walking areas) to support person and robot detection and localization algorithms. The second case consists of improving the GPS positioning of geo-tagged images taken with a mobile device in the Facultat de Matemàtiques i Estadística (FME) patio at the Universitat Politècnica de Catalunya (UPC). PMID:25076221

  18. The 3D Hough Transform for plane detection in point clouds: A review and a new accumulator design

    NASA Astrophysics Data System (ADS)

    Borrmann, Dorit; Elseberg, Jan; Lingemann, Kai; Nüchter, Andreas

    2011-03-01

    The Hough Transform is a well-known method for detecting parameterized objects. It is the de facto standard for detecting lines and circles in 2-dimensional data sets. For 3D it has attracted little attention so far. Even for the 2D case, high computational costs have led to the development of numerous variations of the Hough Transform. In this article we evaluate different variants of the Hough Transform with respect to their applicability to detect planes in 3D point clouds reliably. Apart from computational costs, the main problem is the representation of the accumulator. Usual implementations favor geometrical objects with certain parameters due to uneven sampling of the parameter space. We present a novel approach to design the accumulator focusing on achieving the same size for each cell, and compare it to existing designs.
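
    To make the accumulator problem concrete, here is a toy plane Hough Transform in which every point votes, for each sampled normal direction (theta, phi), for the distance rho = p · n. This naive regular sampling of the sphere exhibits exactly the uneven-cell-size bias that the paper's accumulator design addresses; the sketch is illustrative only and all names are assumptions.

      import numpy as np

      def hough_planes(points, n_theta=45, n_phi=90, n_rho=200):
          thetas = np.linspace(0, np.pi / 2, n_theta)            # polar angle of the normal
          phis = np.linspace(0, 2 * np.pi, n_phi, endpoint=False)
          rho_max = np.linalg.norm(points, axis=1).max()
          acc = np.zeros((n_theta, n_phi, n_rho), dtype=np.int32)
          for i, th in enumerate(thetas):
              for j, ph in enumerate(phis):
                  n = np.array([np.sin(th) * np.cos(ph), np.sin(th) * np.sin(ph), np.cos(th)])
                  rho = points @ n                                # signed plane distances
                  bins = ((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
                  np.add.at(acc[i, j], np.clip(bins, 0, n_rho - 1), 1)   # cast votes
          return acc, thetas, phis, rho_max                       # maxima of acc = plane candidates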

  19. A 3D clustering approach for point clouds to detect and quantify changes at a rock glacier front

    NASA Astrophysics Data System (ADS)

    Micheletti, Natan; Tonini, Marj; Lane, Stuart N.

    2016-04-01

    Terrestrial Laser Scanners (TLS) are extensively used in geomorphology to remotely sense landforms and surfaces of any type and to derive digital elevation models (DEMs). Modern devices are able to collect many millions of points, so that working on the resulting dataset is often troublesome in terms of computational effort. Indeed, it is not unusual that raw point clouds are filtered prior to DEM creation, so that only a subset of points is retained and the interpolation process becomes less of a burden. Whilst this procedure is in many cases necessary, it entails a considerable loss of valuable information. First, and even without eliminating points, the common interpolation of points to a regular grid causes a loss of potentially useful detail. Second, it inevitably causes the transition from 3D information to only 2.5D data where each (x,y) pair must have a unique z-value. Vector-based DEMs (e.g. triangulated irregular networks) partially mitigate these issues, but still require a set of parameters to be set and a considerable burden in terms of calculation and storage. For the reasons above, being able to perform geomorphological research directly on point clouds would be beneficial. Here, we propose an approach to identify erosion and deposition patterns on a very active rock glacier front in the Swiss Alps to monitor sediment dynamics. The general aim is to set up a semi-automatic method to isolate mass movements using 3D feature identification directly from LiDAR data. An ultra-long range LiDAR RIEGL VZ-6000 scanner was employed to acquire point clouds during three consecutive summers. In order to isolate single clusters of erosion and deposition we applied the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm, previously successfully employed by Tonini and Abellan (2014) in a similar case for rockfall detection. DBSCAN requires two input parameters, strongly influencing the number, shape and size of the detected clusters: the minimum number of
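
    The clustering step itself is straightforward to sketch; the following hedged example (not the authors' code) applies scikit-learn's DBSCAN to the 3D coordinates of points already flagged as significant erosion or deposition, with eps and min_samples standing in for the two input parameters discussed above.

      import numpy as np
      from sklearn.cluster import DBSCAN

      def cluster_changes(change_points, eps=0.5, min_samples=20):
          """change_points: (N, 3) coordinates of significant-change points."""
          labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(change_points)
          clusters = [change_points[labels == k] for k in set(labels) if k != -1]   # -1 = noise
          return clusters, labels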

  20. A Comprehensive Automated 3D Approach for Building Extraction, Reconstruction, and Regularization from Airborne Laser Scanning Point Clouds

    PubMed Central

    Dorninger, Peter; Pfeifer, Norbert

    2008-01-01

    Three dimensional city models are necessary for supporting numerous management applications. For the determination of city models for visualization purposes, several standardized workflows do exist. They are either based on photogrammetry or on LiDAR or on a combination of both data acquisition techniques. However, the automated determination of reliable and highly accurate city models is still a challenging task, requiring a workflow comprising several processing steps. The most relevant are building detection, building outline generation, building modeling, and finally, building quality analysis. Commercial software tools for building modeling require, generally, a high degree of human interaction and most automated approaches described in literature stress the steps of such a workflow individually. In this article, we propose a comprehensive approach for automated determination of 3D city models from airborne acquired point cloud data. It is based on the assumption that individual buildings can be modeled properly by a composition of a set of planar faces. Hence, it is based on a reliable 3D segmentation algorithm, detecting planar faces in a point cloud. This segmentation is of crucial importance for the outline detection and for the modeling approach. We describe the theoretical background, the segmentation algorithm, the outline detection, and the modeling approach, and we present and discuss several actual projects.

  1. a Semi-Automated Point Cloud Processing Methodology for 3d Cultural Heritage Documentation

    NASA Astrophysics Data System (ADS)

    Kıvılcım, C. Ö.; Duran, Z.

    2016-06-01

    The preliminary phase in any architectural heritage project is to obtain metric measurements and documentation of the building and its individual elements. On the other hand, conventional measurement techniques require tremendous resources and lengthy project completion times for architectural surveys and 3D model production. Over the past two decades, the widespread use of laser scanning and digital photogrammetry has significantly altered the heritage documentation process. Furthermore, advances in these technologies have enabled robust data collection and reduced user workload for generating various levels of products, from single buildings to expansive cityscapes. More recently, the use of procedural modelling methods and BIM-relevant applications for historic building documentation purposes has become an active area of research; however, fully automated systems for cultural heritage documentation remain an open problem. In this paper, we present a semi-automated methodology for 3D façade modelling of cultural heritage assets based on parametric and procedural modelling techniques and using airborne and terrestrial laser scanning data. We present the contribution of our methodology, which we implemented in an open source software environment, using the example project of a 16th century early classical era Ottoman structure, Sinan the Architect's Şehzade Mosque in Istanbul, Turkey.

  2. Historical Buildings Models and Their Handling via 3d Survey: from Points Clouds to User-Oriented Hbim

    NASA Astrophysics Data System (ADS)

    Chiabrando, F.; Sammartano, G.; Spanò, A.

    2016-06-01

    This paper retraces some research activities and applications of 3D survey techniques and Building Information Modelling (BIM) in the environment of Cultural Heritage. It describes the diffusion, in recent years, of the as-built BIM approach in heritage asset management, the so-called Built Heritage Information Modelling/Management (BHIMM or HBIM), which is nowadays an important and sustainable perspective in the documentation and administration of historic buildings and structures. The work focuses on the documentation derived from 3D survey techniques, which can be understood as a significant and unavoidable knowledge base for BIM conception and modelling, in the perspective of a coherent and complete management and valorisation of CH. It examines the potential offered by integrated 3D survey techniques to acquire, productively and quite easily, a large amount of 3D information, not only geometrical but also radiometric attributes, helping the recognition, interpretation and characterization of the state of conservation and degradation of architectural elements. From these data, increasingly descriptive models can be derived, corresponding to the geometrical complexity of buildings or aggregates, in the well-known 5D (3D + time and cost dimensions). Point clouds derived from 3D survey acquisition (aerial and terrestrial photogrammetry, LiDAR and their integration) are reality-based models that can be used in a semi-automatic way to manage, interpret, and moderately simplify the geometrical shapes of historical buildings, which are, as is well known, examples of non-regular and complex geometry, in contrast to modern constructions with simple and regular ones. In the paper, some of these issues are addressed and analyzed through experiences regarding the creation and management of HBIM projects on historical heritage at different scales, using different platforms and various workflows. The paper focuses on LiDAR data handling with the aim of managing and extracting geometrical information; on

  3. Combination of Tls Point Clouds and 3d Data from Kinect v2 Sensor to Complete Indoor Models

    NASA Astrophysics Data System (ADS)

    Lachat, E.; Landes, T.; Grussenmeyer, P.

    2016-06-01

    The combination of data coming from multiple sensors is more and more often applied to remote sensing issues (multi-sensor imagery) but also in cultural heritage or robotics, since it often results in increased robustness and accuracy of the final data. In this paper, the reconstruction of building elements such as window frames or door jambs scanned with a low-cost 3D sensor (Kinect v2) is presented. Their combination within a global point cloud of an indoor scene acquired with a terrestrial laser scanner (TLS) is considered. While the added elements acquired with the Kinect sensor make it possible to reach a better level of detail in the final model, an adapted acquisition protocol may also provide several benefits, such as time savings. The paper aims at analyzing whether the two measurement techniques can be complementary in this context. The limitations encountered during the acquisition and reconstruction steps are also investigated.

  4. Incremental Refinement of FAÇADE Models with Attribute Grammar from 3d Point Clouds

    NASA Astrophysics Data System (ADS)

    Dehbi, Y.; Staat, C.; Mandtler, L.; Plümer, L.

    2016-06-01

    Data acquisition using unmanned aerial vehicles (UAVs) has received more and more attention over the last years. Especially in the field of building reconstruction, the incremental interpretation of such data is a demanding task. In this context formal grammars play an important role for the top-down identification and reconstruction of building objects. Up to now, the available approaches expect offline data in order to parse an a-priori known grammar. For mapping on demand, an on-the-fly reconstruction based on UAV data is required. An incremental interpretation of the data stream is inevitable. This paper presents an incremental parser of grammar rules for an automatic 3D building reconstruction. The parser enables a model refinement based on new observations with respect to a weighted attribute context-free grammar (WACFG). The falsification or rejection of hypotheses is supported as well. The parser can deal with and adapt available parse trees acquired from previous interpretations or predictions. Parse trees derived so far are updated in an iterative way using transformation rules. A diagnostic step searches for mismatches between current and new nodes. Prior knowledge on façades is incorporated. It is given by probability densities as well as architectural patterns. Since we cannot always assume normal distributions, the derivation of location and shape parameters of building objects is based on a kernel density estimation (KDE). While the level of detail is continuously improved, the geometrical, semantic and topological consistency is ensured.

  5. Structural analysis of San Leo (RN, Italy) east and north cliffs using 3D point clouds

    NASA Astrophysics Data System (ADS)

    Spreafico, Margherita Cecilia; Bacenetti, Marco; Borgatti, Lisa; Cignetti, Martina; Giardino, Marco; Perotti, Luigi

    2013-04-01

    The town of San Leo, like many others in the historical region of Montefeltro (Northern Apennines, Italy), was built in the medieval period, for defense purposes, on a calcarenite and sandstone slab bordered by subvertical and overhanging cliffs up to 100 m high. The slab and the underlying clayey substratum show widespread landslide phenomena: the former is tectonized and crossed by joints and faults, and is affected by lateral spreading with associated rock falls, topples and tilting. Moreover, the underlying clayey substratum is involved in plastic movements, like earth flows and slides. The main cause of instability in the area, which brings about these movements, is the high deformability contrast between the plate and the underlying clays. The aim of our research is to set up a numerical model that can properly describe the processes and take into account the different factors that influence the evolution of the movements. One of these factors is certainly the structural setting of the slab, characterized by several joints and faults; in order to better identify and detect the main joint sets affecting the study area, a structural analysis was performed. To date, a series of scans of the San Leo cliff taken in 2008 and 2011 with a Riegl Z420i has been analyzed. Initially, we chose a test area, located on the east side of the cliff, in which analyses were performed using two different software packages: COLTOP 3D and Polyworks. We repeated the analysis using COLTOP for the whole east wall and for part of the north wall, including an area affected by a rock fall in 2006. In the test area we identified five sets with different dips and dip directions. The analysis of the east and north walls allowed the identification of eight sets of discontinuities (seven plus the bedding). We compared these results with previous ones from surveys taken by other authors in some areas and with some preliminary data from a traditional geological survey of the whole area. With traditional methods only a

  6. Semi-automatic characterization of fractured rock masses using 3D point clouds: discontinuity orientation, spacing and SMR geomechanical classification

    NASA Astrophysics Data System (ADS)

    Riquelme, Adrian; Tomas, Roberto; Abellan, Antonio; Cano, Miguel; Jaboyedoff, Michel

    2015-04-01

    Investigation of fractured rock masses for different geological applications (e.g. fractured reservoir exploitation, rock slope instability, rock engineering, etc.) requires a deep geometric understanding of the discontinuity sets affecting rock exposures. Recent advances in 3D data acquisition using photogrammetric and/or LiDAR techniques currently allow a quick and accurate characterization of rock mass discontinuities. This contribution presents a methodology for: (a) the use of 3D point clouds for the identification and analysis of planar surfaces outcropping in a rocky slope; (b) the calculation of the spacing between different discontinuity sets; (c) the semi-automatic calculation of the parameters that play a key role in the Slope Mass Rating geomechanical classification. As for part (a) (discontinuity orientation), our proposal identifies and defines the algebraic equations of the different discontinuity sets of the rock slope surface by applying an analysis based on a coplanarity test of neighbouring points. Additionally, the procedure finds principal orientations by Kernel Density Estimation and identifies clusters (Riquelme et al., 2014). As a result of this analysis, each point is classified with a discontinuity set and with an outcrop plane (cluster). Regarding part (b) (discontinuity spacing), our proposal utilises the previously classified point cloud to investigate how different outcropping planes are linked in space. Discontinuity spacing is calculated for each pair of linked clusters within the same discontinuity set, and the spacing values are then analysed by calculating their statistics. Finally, as for part (c), the previous results are used to calculate the parameters F1, F2 and F3 of the Slope Mass Rating geomechanical classification. This analysis is carried out for each discontinuity set using their respective orientations extracted in part (a). The open access tool SMRTool (Riquelme et al., 2014) is then used to calculate F1 to F3 correction

  7. See-Through Imaging of Laser-Scanned 3d Cultural Heritage Objects Based on Stochastic Rendering of Large-Scale Point Clouds

    NASA Astrophysics Data System (ADS)

    Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.

    2016-06-01

    We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10^7 or 10^8 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.

  8. Detecting and Analyzing Corrosion Spots on the Hull of Large Marine Vessels Using Colored 3d LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Aijazi, A. K.; Malaterre, L.; Tazir, M. L.; Trassoudaine, L.; Checchin, P.

    2016-06-01

    This work presents a new method that automatically detects and analyzes surface defects, such as corrosion spots of different shapes and sizes, on large ship hulls. In the proposed method, several scans from different positions and viewing angles around the ship are registered together to form a complete 3D point cloud. The R, G, B values associated with each scan, obtained with the help of an integrated camera, are converted into HSV space to separate out the illumination-invariant color component from the intensity. Using this color component, different surface defects such as corrosion spots of different shapes and sizes are automatically detected, within a selected zone, using two different methods depending upon the level of corrosion/defects. The first method relies on a histogram-based distribution, whereas the second relies on adaptive thresholds. The detected corrosion spots are then analyzed and quantified to help better plan and estimate the cost of repair and maintenance. Results are evaluated on real data using different standard evaluation metrics to demonstrate the efficacy as well as the technical strength of the proposed method.
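
    A minimal sketch of the colour-space idea described above - thresholding an illumination-invariant component rather than intensity - is shown below. The hue range and the percentile-based saturation threshold are illustrative stand-ins for the paper's histogram-based and adaptive methods, not the authors' actual values.

      import numpy as np
      from matplotlib.colors import rgb_to_hsv

      def corrosion_mask(rgb, hue_range=(0.0, 0.1), sat_percentile=75):
          """rgb: (H, W, 3) float image in [0, 1]; returns a boolean corrosion mask."""
          hsv = rgb_to_hsv(rgb)
          hue, sat = hsv[..., 0], hsv[..., 1]
          sat_thr = np.percentile(sat, sat_percentile)      # simple adaptive threshold
          return (hue >= hue_range[0]) & (hue <= hue_range[1]) & (sat > sat_thr)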

  9. Automatic reconstruction of 3D urban landscape by computing connected regions and assigning them an average altitude from LiDAR point cloud image

    NASA Astrophysics Data System (ADS)

    Kawata, Yoshiyuki; Koizumi, Kohei

    2014-10-01

    The demand for 3D city modeling has been increasing in many applications, such as urban planning, computer gaming with realistic city environments, car navigation systems showing 3D city maps, virtual city tourism inviting future visitors to a virtual city walkthrough, and others. We propose a simple method for reconstructing a 3D urban landscape from airborne LiDAR point cloud data. The automatic reconstruction of the 3D urban landscape was implemented by integrating all connected regions, which were extracted and extruded from altitude mask images. These mask images were generated from the gray-scale LiDAR image using altitude threshold ranges. In this study we successfully demonstrated the approach on a Kanazawa city center scene by applying the proposed method to the airborne LiDAR point cloud data.
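
    The altitude-mask and connected-region idea lends itself to a short sketch: slice a gridded height image into altitude bands, label the connected regions in each band, and assign each region its mean altitude as the extrusion height. This is an illustrative reading of the abstract, not the authors' code; scipy's ndimage.label performs the region extraction.

      import numpy as np
      from scipy import ndimage

      def extract_blocks(height_img, band_edges):
          """height_img: 2D gridded LiDAR heights; band_edges: increasing altitude thresholds."""
          blocks = []
          for lo, hi in zip(band_edges[:-1], band_edges[1:]):
              mask = (height_img >= lo) & (height_img < hi)     # altitude mask image
              labels, n = ndimage.label(mask)
              for k in range(1, n + 1):
                  region = labels == k
                  blocks.append((region, float(height_img[region].mean())))
          return blocks   # list of (footprint mask, average altitude) pairs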

  10. [An automatic extraction algorithm for individual tree crown projection area and volume based on 3D point cloud data].

    PubMed

    Xu, Wei-Heng; Feng, Zhong-Ke; Su, Zhi-Fang; Xu, Hui; Jiao, You-Quan; Deng, Ou

    2014-02-01

    fixed angles to estimate crown projections, and (2) different regular volume formulas to simulate crown volume according to the tree crown shapes. Based on the high-resolution 3D LIDAR point cloud data of individual trees, the tree crown structure was reconstructed rapidly and with high accuracy, and the crown projection area and volume of individual trees were extracted by this automatic, non-contact method, which can provide a reference for tree crown structure studies and is worth popularizing in the field of precision forestry. PMID:24822422

  11. Terrestrial laser scanning point clouds time series for the monitoring of slope movements: displacement measurement using image correlation and 3D feature tracking

    NASA Astrophysics Data System (ADS)

    Bornemann, Pierrick; Malet, Jean-Philippe; Stumpf, André; Puissant, Anne; Travelletti, Julien

    2016-04-01

    Dense multi-temporal point clouds acquired with terrestrial laser scanning (TLS) have proved useful for the study of the structure and kinematics of slope movements. Most of the existing deformation analysis methods rely on the use of interpolated data. Approaches that use multiscale image correlation provide a precise and robust estimation of the observed movements; however, for non-rigid motion patterns, these methods tend to underestimate all the components of the movement. Further, for rugged surface topography, interpolated data introduce a bias and a loss of information in some local places where the point cloud information is not sufficiently dense. Those limits can be overcome by deformation analysis that directly exploits the original 3D point clouds, assuming some hypotheses on the deformation (e.g. the classic ICP algorithm requires an initial guess by the user of the expected displacement patterns). The objective of this work is therefore to propose a deformation analysis method applied to a series of 20 3D point clouds covering the period October 2007 - October 2015 at the Super-Sauze landslide (South East French Alps). The dense point clouds have been acquired with a terrestrial long-range Optech ILRIS-3D laser scanning device from the same base station. The time series are analyzed using two approaches: 1) a method of correlation of gradient images, and 2) a method of feature tracking in the raw 3D point clouds. The estimated surface displacements are then compared with GNSS surveys on reference targets. Preliminary results tend to show that the image correlation method provides a good estimation of the displacement fields at first order, but shows limitations such as the inability to track some deformation patterns, and the use of a perspective projection that does not maintain original angles and distances in the correlated images. Results obtained with 3D point cloud comparison algorithms (C2C, ICP, M3C2) bring additional information on the

  12. A new methodology in fast and accurate matching of the 2D and 3D point clouds extracted by laser scanner systems

    NASA Astrophysics Data System (ADS)

    Torabi, M.; Mousavi G., S. M.; Younesian, D.

    2015-03-01

    Registration of point clouds is a common challenge in computer vision-related applications. As an application, the matching of train wheel profiles extracted from two viewpoints is studied in this paper. The registration problem is formulated as an optimization problem. An error minimization function for the registration of two partially overlapping point clouds is presented. The error function is defined as the sum of the squared distances between the source points and their corresponding pairs, which should be minimized. The corresponding pairs are obtained through Iterative Closest Point (ICP) variants. Here, a point-to-plane ICP variant is employed. Principal Component Analysis (PCA) is used to obtain tangent planes. It is thus shown that minimization of the proposed objective function reduces to the point-to-plane ICP variant. We utilized this algorithm to register point clouds of two partially overlapping profiles of a train wheel extracted from two viewpoints in 2D. In addition, a number of synthetic point clouds and a number of real point clouds in 3D are studied to evaluate the reliability and convergence rate of our method compared with other registration methods.

  13. Uav-Based Acquisition of 3d Point Cloud - a Comparison of a Low-Cost Laser Scanner and Sfm-Tools

    NASA Astrophysics Data System (ADS)

    Mader, D.; Blaskow, R.; Westfeld, P.; Maas, H.-G.

    2015-08-01

    The Project ADFEX (Adaptive Federative 3D Exploration of Multi Robot System) pursues the goal of developing a time- and cost-efficient system for exploration and monitoring tasks of unknown areas or buildings. A fleet of unmanned aerial vehicles equipped with appropriate sensors (laser scanner, RGB camera, near-infrared camera, thermal camera) was designed and built. A typical operational scenario may include the exploration of the object or area of investigation by a UAV equipped with a laser scanning range finder to generate a rough point cloud in real time, providing an overview of the object on a ground station as well as an obstacle map. The data about the object enable path planning for the robot fleet. Subsequently, the object is captured by an RGB camera mounted on a second flying robot for the generation of a dense and accurate 3D point cloud using structure-from-motion techniques. In addition, the detailed image data serve as the basis for visual damage detection on the investigated building. This paper focuses on our experience with the use of a low-cost, light-weight Hokuyo laser scanner onboard a UAV. The hardware components for laser scanner based 3D point cloud acquisition are discussed, problems are demonstrated and analyzed, and a quantitative analysis of the accuracy potential is shown, together with a comparison against structure-from-motion tools.

  14. Study of texture stitching in 3D modeling of lidar point cloud based on per-pixel linear interpolation along loop line buffer

    NASA Astrophysics Data System (ADS)

    Xu, Jianxin; Liang, Hong

    2013-07-01

    Terrestrial laser scanning creates a point cloud composed of thousands or millions of 3D points. Through pre-processing, generating TINs, and mapping texture, a 3D model of a real object is obtained. When the object is too large, it is separated into several parts. This paper mainly focuses on the problem of uneven gray levels at the intersection of two adjacent textures. A new algorithm is presented in the paper: per-pixel linear interpolation along a loop line buffer. The experimental data derive from a point cloud of the stone lion situated in front of the west gate of Henan Polytechnic University. The modeling flow is composed of three parts. First, the large object is separated into two parts; then each part is modeled; finally, the whole 3D model of the stone lion is composed from the two part models. When the two part models are combined, there is an obvious fissure line in the overlapping section of the two adjacent textures of the two models. Some researchers decrease the brightness values of all pixels of the two adjacent textures using various algorithms; however, such algorithms are not always effective and the fissure line may still exist. The gray unevenness of the two adjacent textures is handled by the algorithm presented in this paper: the fissure line in the overlapping section of the textures is eliminated, and the gray transition in the overlapping section becomes smoother.
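
    The per-pixel linear interpolation idea can be illustrated with a simple blend across the overlap strip: the weight ramps linearly from one texture to the other so that brightness changes gradually instead of jumping at a fissure line. This is a generic sketch, not the algorithm's exact loop-line-buffer formulation.

      import numpy as np

      def blend_overlap(tex_a, tex_b):
          """tex_a, tex_b: (H, W, 3) overlapping texture strips of equal size."""
          h, w = tex_a.shape[:2]
          alpha = np.linspace(0.0, 1.0, w)[None, :, None]   # 0 at A's side, 1 at B's side
          return (1.0 - alpha) * tex_a + alpha * tex_b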

  15. Documenting a Complex Modern Heritage Building Using Multi Image Close Range Photogrammetry and 3d Laser Scanned Point Clouds

    NASA Astrophysics Data System (ADS)

    Vianna Baptista, M. L.

    2013-07-01

    Integrating different technologies and areas of expertise helps fill gaps when optimizing the documentation of complex buildings. Described below is the process used in the first part of a restoration project, the architectural survey of the Theatre Guaira Cultural Centre in Curitiba, Brazil. To reduce fieldwork time, the two-person field survey team had to work, over three days, around the continuous artistic activities and the performers' intense schedule. Both technologies (high-definition laser scanning and close-range photogrammetry) were used to record all details in the least amount of time without disturbing the artists' rehearsals and performances. Laser scanning was ideal for recording the monumental stage structure with all of its existing platforms, light fixtures, scenery walls and curtains. Although scanned in high definition, parts of the exterior façades were also recorded using close-range photogrammetry. Tiny cracks on the marble plaques and mosaic tiles, not visible in the point clouds, could then be precisely documented in order to create the exterior façade textures and damage mapping drawings. The combination of technologies and the expertise of the service providers, knowing how and what to document and what to deliver to the client, enabled maximum benefit for the subsequent restoration project.

  16. Registration of overlapping 3D point clouds using extracted line segments. (Polish Title: Rejestracja chmur punktów 3D w oparciu o wyodrębnione krawędzie)

    NASA Astrophysics Data System (ADS)

    Poręba, M.; Goulette, F.

    2014-12-01

    The registration of 3D point clouds collected from different scanner positions is necessary in order to avoid occlusions, ensure full coverage of areas, and collect useful data for analyzing and documenting the surrounding environment. This procedure involves three main stages: 1) choosing appropriate features which can be reliably extracted; 2) matching conjugate primitives; 3) estimating the transformation parameters. Currently, points and spheres are most frequently chosen as the registration features. However, due to limited point cloud resolution, proper identification and precise measurement of a common point within the overlapping laser data is almost impossible. One possible solution to this problem may be a registration process based on the Iterative Closest Point (ICP) algorithm or one of its variants. Alternatively, planar and linear feature-based registration techniques can also be applied. In this paper, we propose the use of line segments obtained from intersecting planes modelled within individual scans. Such primitives can be easily extracted even from low-density point clouds. Working with synthetic data, several existing line-based registration methods are evaluated according to their robustness to noise and the precision of the estimated transformation parameters. For the purpose of quantitative assessment, an accuracy criterion based on a modified Hausdorff distance is defined. Since automated matching of segments is a challenging task that influences the correctness of the transformation parameters, a correspondence-finding algorithm is developed. The tests show that our matching algorithm provides correct pairing with an accuracy of at least 99%, with about 8% of line pairs omitted.
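
    For reference, a modified Hausdorff distance of the kind used as the accuracy criterion can be written in a few lines; the exact modification adopted in the paper may differ from this common mean-of-nearest-neighbour form, so treat the sketch as illustrative.

      import numpy as np
      from scipy.spatial import cKDTree

      def modified_hausdorff(A, B):
          """A, B: (N, 3) and (M, 3) points sampled along two sets of line segments."""
          d_ab = cKDTree(B).query(A)[0].mean()   # mean nearest-neighbour distance A -> B
          d_ba = cKDTree(A).query(B)[0].mean()   # and B -> A
          return max(d_ab, d_ba)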

  17. Disentangling the history of complex multi-phased shell beds based on the analysis of 3D point cloud data

    NASA Astrophysics Data System (ADS)

    Harzhauser, Mathias; Djuricic, Ana; Mandic, Oleg; Dorninger, Peter; Nothegger, Clemens; Székely, Balázs; Molnár, Gábor; Pfeifer, Norbert

    2015-04-01

    Shell beds are key features in sedimentary records throughout the Phanerozoic. The interplay between burial rates and population productivity is reflected in distinct degrees of shelliness. Consequently, shell beds may provide information on various physical processes which led to the accumulation and preservation of hard parts. Many shell beds pass through a complex history of formation, being shaped by more than one factor. In shallow marine settings, the composition of shell beds is often strongly influenced by winnowing, reworking and transport. These processes may cause considerable time averaging and the accumulation of specimens that lived thousands of years apart. In the best case, the environment remained stable during that time span and the mixing does not mask the overall composition. A major obstacle for the interpretation of shell beds, however, is the amalgamation of shell beds of several depositional units into a single concentration, as is typical for tempestites and tsunamites. Disentangling such mixed assemblages requires a deep understanding of the ecological requirements of the taxa involved - which is achievable for geologically young shell beds with living relatives - and a statistical approach to quantify the contribution of the various death assemblages. Furthermore, it requires an understanding of the sedimentary processes potentially involved in their formation. Here we present the first attempt to describe and decipher such a multi-phase shell bed based on a high-resolution digital surface model (1 mm) combined with ortho-photos with a resolution of 0.5 mm per pixel. Documenting the oyster reef requires precisely georeferenced data; owing to the high redundancy of the point cloud, an accuracy of a few mm was achieved. The shell accumulation covers an area of 400 m^2 with thousands of specimens, which were excavated during a three-month campaign at Stetten in Lower Austria. Formed in an Early Miocene estuary of the Paratethys Sea, it is mainly composed

  18. 3-D Deformation Field Of The 2010 El Mayor-Cucapah (Mexico) Earthquake From Matching Before To After Aerial Lidar Point Clouds

    NASA Astrophysics Data System (ADS)

    Hinojosa-Corona, A.; Nissen, E.; Arrowsmith, R.; Krishnan, A. K.; Saripalli, S.; Oskin, M. E.; Arregui, S. M.; Limon, J. F.

    2012-12-01

    The Mw 7.2 El Mayor-Cucapah earthquake (EMCE) of 4 April 2010 generated a ~110 km long, NW-SE trending rupture, with normal and right-lateral slip on the order of 2-3 m in the Sierra Cucapah, the northern half of the rupture, where the surface rupture has its most outstanding expression. Vertical and horizontal surface displacements produced by the EMCE have been addressed separately by other authors with a variety of aerial and satellite remote sensing techniques. Slip variation along the fault and post-seismic scarp erosion and diffusion have been estimated in other studies using terrestrial LiDAR (TLS) on segments of the rupture. To complement these other studies, we computed the 3D deformation field by comparing pre- to post-event point clouds from aerial LiDAR surveys. The pre-event LiDAR, with a lower point density (0.013-0.033 pts m^-2), required filtering and post-processing before comparison with the denser (9-18 pts m^-2), more accurate post-event dataset. The 3-dimensional surface displacement field was determined using an adaptation of the Iterative Closest Point (ICP) algorithm, implemented in the open source Point Cloud Library (PCL). The LiDAR datasets are first split into a grid of windows, and for each one, ICP iteratively converges on the rigid body transformation (comprising a translation and a rotation) that best aligns the pre- to post-event points. Testing on synthetic datasets perturbed with displacements of known magnitude showed that windows with dimensions of 100-200 m gave the best results for datasets with these densities. Here we present the deformation field with detailed displacements in segments of the surface rupture where its expression was recognized by ICP from the point cloud matching, mainly the sparsely vegetated Sierra Cucapah, with the Borrego and Paso Superior fault segments the most outstanding, where we are able to compare our results with values measured in the field and with TLS results reported in other works. EMC simulated displacement field for a

  19. NIF Ignition Target 3D Point Design

    SciTech Connect

    Jones, O; Marinak, M; Milovich, J; Callahan, D

    2008-11-05

    We have developed an input file for running 3D NIF hohlraums that is optimized such that it can be run in 1-2 days on parallel computers. We have incorporated increasing levels of automation into the 3D input file: (1) Configuration controlled input files; (2) Common file for 2D and 3D, different types of capsules (symcap, etc.); and (3) Can obtain target dimensions, laser pulse, and diagnostics settings automatically from NIF Campaign Management Tool. Using 3D Hydra calculations to investigate different problems: (1) Intrinsic 3D asymmetry; (2) Tolerance to nonideal 3D effects (e.g. laser power balance, pointing errors); and (3) Synthetic diagnostics.

  20. Fragmentary area repairing on the edge of 3D laser point cloud based on edge extracting of images and LS-SVM

    NASA Astrophysics Data System (ADS)

    Zhao, Ziming; Hao, Xiangyang; Liu, Songlin; Zhao, Song

    2011-06-01

    In the process of hole-repairing in point clouds, it is difficult to repair fragmentary areas at the edge of the point cloud because their boundaries are indeterminate. In view of this, the article proposes a method for repairing fragmentary areas at the edge of a point cloud based on image edge extraction and LS-SVM. After registration of the point cloud and the corresponding image, the sub-pixel edge can be extracted from the image. The training points and the sub-pixel edge are then projected onto a constructed characteristic plane to determine the boundary and positions for re-sampling. Finally, the equation of the fragmentary area is obtained using Least-Squares Support Vector Machines to accomplish the repair. The experimental results demonstrate that the method achieves an accurate, fine repair.

  1. Accurate 3D point cloud comparison and volumetric change analysis of Terrestrial Laser Scan data in a hard rock coastal cliff environment

    NASA Astrophysics Data System (ADS)

    Earlie, C. S.; Masselink, G.; Russell, P.; Shail, R.; Kingston, K.

    2013-12-01

    Our understanding of the evolution of hard rock coastlines is limited due to the episodic nature and 'slow' rate at which changes occur. High-resolution surveying techniques, such as Terrestrial Laser Scanning (TLS), have just begun to be adopted as a method of obtaining detailed point cloud data to monitor topographical changes over short periods of time (weeks to months). However, the difficulties involved in comparing consecutive point cloud data sets in a complex three-dimensional plane, such as occlusion due to surface roughness and the positioning of the data capture point in a constantly changing environment (a beach profile), mean that comparing data sets can lead to errors in the region of 10 - 20 cm. Meshing techniques are often used for point cloud data analysis of simple surfaces, but for surfaces such as rocky cliff faces this technique has been found to be ineffective. Recession rates of hard rock coastlines in the UK are typically determined using aerial photography or airborne LiDAR data, yet the detail of the important changes occurring to the cliff face and toe are missed using such techniques. In this study we apply an algorithm (M3C2 - Multiscale Model to Model Cloud Comparison), initially developed for analysing fluvial morphological change, that directly compares the point cloud data using surface normals that are consistent with surface roughness and measures the change that occurs along the normal direction (Lague et al., 2013). The surface changes are analysed using a set of user-defined scales based on surface roughness and registration error. Once the correct parameters are defined, the volumetric cliff face changes are calculated by integrating the mean distance between the point clouds. The analysis has been undertaken at two hard rock sites identified for their active erosion, located on the UK's south west peninsula at Porthleven in south west Cornwall and Godrevy in north Cornwall. Alongside TLS point cloud data, in
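
    A much simplified sketch of the normal-direction comparison that M3C2 performs is given below: for each core point, a normal is estimated from one cloud, and the change is the offset between the two clouds measured along that normal within a local neighbourhood. The real algorithm uses a projection cylinder and propagates uncertainty; this spherical-neighbourhood version is illustrative only and all names are assumptions.

      import numpy as np
      from scipy.spatial import cKDTree

      def m3c2_like_distance(core, cloud1, cloud2, normal_radius=0.5, proj_radius=0.25):
          t1, t2 = cKDTree(cloud1), cKDTree(cloud2)
          nbrs = cloud1[t1.query_ball_point(core, normal_radius)]
          centered = nbrs - nbrs.mean(axis=0)
          normal = np.linalg.eigh(centered.T @ centered)[1][:, 0]   # smallest-eigenvalue axis
          d1 = (cloud1[t1.query_ball_point(core, proj_radius)] - core) @ normal
          d2 = (cloud2[t2.query_ball_point(core, proj_radius)] - core) @ normal
          return d2.mean() - d1.mean()   # change along the normal (sign follows the normal's orientation)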

  2. 3D reconstruction of tropospheric cirrus clouds by stereovision system

    NASA Astrophysics Data System (ADS)

    Nadjib Kouahla, Mohamed; Moreels, Guy; Seridi, Hamid

    2016-07-01

    A stereo imaging method is applied to measure the altitude of cirrus clouds and provide a 3D map of the altitude of the layer centroid. These clouds are located in the high troposphere and sometimes in the lower stratosphere, between 6 and 10 km high. Two simultaneous images of the same scene are taken with Canon 400D cameras at two sites 37 km apart. Each image is processed in order to invert the perspective effect and provide a satellite-type view of the layer. Pairs of matched points that correspond to a physical emissive point in the common area are identified by calculating a correlation coefficient (ZNCC, Zero-mean Normalized Cross-Correlation, or ZSSD, Zero-mean Sum of Squared Differences). This method is suitable for obtaining 3D representations in the case of low-contrast objects. An observational campaign was conducted in June 2014 in France. The images were taken simultaneously at Marnay (47°17'31.5" N, 5°44'58.8" E; altitude 275 m), 25 km northwest of Besancon, and at Mont Poupet (46°58'31.5" N, 5°52'22.7" E; altitude 600 m), 43 km southwest of Besancon. 3D maps of natural cirrus clouds and of artificial ones such as aircraft trails are retrieved. They are compared with pseudo-relief intensity maps of the same region. The mean altitude of the cirrus barycenter is located at 8.5 ± 1 km on June 11.
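
    A minimal sketch of the ZNCC matching score used to pair points between the two views is given below (Python/numpy). The exhaustive search over a window and the fixed patch size are simplifications for illustration and are not details taken from the paper.

        import numpy as np

        def zncc(patch_a, patch_b):
            """Zero-mean normalized cross-correlation between two equally sized patches."""
            a = patch_a.astype(float) - patch_a.mean()
            b = patch_b.astype(float) - patch_b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            return (a * b).sum() / denom if denom > 0 else 0.0

        def best_match(patch, search_img, size):
            """Slide 'patch' over 'search_img' and return the (row, col) with the highest ZNCC."""
            h, w = search_img.shape
            best, pos = -1.0, (0, 0)
            for r in range(h - size + 1):
                for c in range(w - size + 1):
                    score = zncc(patch, search_img[r:r + size, c:c + size])
                    if score > best:
                        best, pos = score, (r, c)
            return pos, best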

  3. Cloud Property Retrieval and 3D Radiative Transfer

    NASA Technical Reports Server (NTRS)

    Cahalan, Robert F.

    2003-01-01

    Cloud thickness and photon mean-free-path together determine the scale of "radiative smoothing" of cloud fluxes and radiances. This scale is observed as a change in the spatial spectrum of cloud radiances, and also as the "halo size" seen by off-beam lidars such as THOR and WAIL. Such off-beam lidar returns are now being used to retrieve cloud layer thickness and the vertical scattering extinction profile. We illustrate with recent measurements taken at the Oklahoma ARM site, comparing these to time-dependent 3D simulations. These and other measurements sensitive to 3D transfer in clouds, coupled with Monte Carlo and other 3D transfer methods, are providing a better understanding of the dependence of radiation on cloud inhomogeneity, and suggest new retrieval algorithms appropriate for inhomogeneous clouds. The international "Intercomparison of 3D Radiation Codes" (I3RC) program is coordinating and evaluating the variety of 3D radiative transfer methods now available, and making them more widely available. Information is on the Web at: http://i3rc.gsfc.nasa.gov/. Input consists of selected cloud fields derived from data sources such as radar, microwave and satellite, and from models involved in the GEWEX Cloud System Studies. Output is selected radiative quantities that characterize the large-scale properties of the fields of radiative fluxes and heating. Several example cloud fields will be used to illustrate. I3RC is currently implementing an "open source" 3D code capable of solving the baseline cases. Maintenance of this effort is one of the goals of a new 3DRT Working Group under the International Radiation Commission. It is hoped that the 3DRT WG will include active participation by land and ocean modelers as well, such as the 3D vegetation modelers participating in RAMI.

  4. 3D Viewer Platform of Cloud Clustering Management System: Google Map 3D

    NASA Astrophysics Data System (ADS)

    Choi, Sung-Ja; Lee, Gang-Soo

    A new management system framework for cloud environments is needed as platforms converge in response to changing computing environments. It is hard for an ISV or a small business to adopt the management system platforms offered by large enterprises. This article proposes a clustering management system for cloud computing environments aimed at ISVs and small-business enterprises. It applies a 3D viewer adapted from Google Map 3D and Google Earth, and is called 3DV_CCMS as an extension of the CCMS [1].

  5. Automatic determination of trunk diameter, crown base and height of scots pine (Pinus Sylvestris L.) Based on analysis of 3D point clouds gathered from multi-station terrestrial laser scanning. (Polish Title: Automatyczne okreslanie srednicy pnia, podstawy korony oraz wysokosci sosny zwyczajnej (Pinus Silvestris L.) Na podstawie analiz chmur punktow 3D pochodzacych z wielostanowiskowego naziemnego skanowania laserowego)

    NASA Astrophysics Data System (ADS)

    Ratajczak, M.; Wężyk, P.

    2015-12-01

    The rapid development of terrestrial laser scanning (TLS) in recent years has resulted in its recognition and implementation in many industries, including forestry and nature conservation. The use of 3D TLS point clouds in the inventory of trees and stands, as well as in the determination of their biometric features (trunk diameter, tree height, crown base, trunk shape), tree and timber size (volume of trees), is slowly becoming practice. In addition to the measurement precision, the primary added value of TLS is the ability to automate the processing of 3D point clouds towards the extraction of selected features of trees and stands. The paper presents original software (GNOM) for the automatic measurement of selected tree features, based on point clouds obtained with a FARO terrestrial laser scanner. With the developed GNOM algorithms, the locations of tree trunks on the circular research plot were determined and measurements were performed, covering the DBH (at 1.3 m), further trunk diameters at different heights of the tree trunk, the base of the tree crown and the volume of the tree trunk (the selection measurement method), as well as the tree crown. Research work was performed in the Niepolomice Forest in an unmixed pine stand (Pinus sylvestris L.) on a circular plot with a radius of 18 m, within which there were 16 pine trees (14 of them were later cut down). The stand had a two-storey, even-aged structure (147 years old) and was devoid of undergrowth. Ground scanning was performed just before harvesting. The DBH of the 16 pine trees was determined fully automatically using the GNOM algorithm with an accuracy of +2.1% compared to the reference measurement by a DBH measurement device. The mean absolute measurement error in the point cloud - using the semi-automatic methods "PIXEL" (between points) and "PIPE" (cylinder fitting) in FARO Scene 5.x
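
    A common way to automate a DBH measurement of this kind is to fit a circle (or cylinder) to a thin horizontal slice of trunk points taken at breast height. The sketch below (Python/numpy) uses a simple algebraic circle fit and illustrative slice parameters; it shows the general idea only and is not the GNOM implementation.

        import numpy as np

        def fit_circle(xy):
            """Algebraic least-squares circle fit (Kasa method) to 2D points."""
            x, y = xy[:, 0], xy[:, 1]
            A = np.column_stack([x, y, np.ones_like(x)])
            b = x ** 2 + y ** 2
            (a0, b0, c0), *_ = np.linalg.lstsq(A, b, rcond=None)
            cx, cy = a0 / 2.0, b0 / 2.0
            radius = np.sqrt(c0 + cx ** 2 + cy ** 2)
            return (cx, cy), radius

        def dbh_from_cloud(trunk_points, ground_z, slice_height=1.3, slice_thickness=0.1):
            """Estimate DBH from a thin slice of trunk points around 1.3 m above ground."""
            z = trunk_points[:, 2] - ground_z
            sel = np.abs(z - slice_height) < slice_thickness / 2.0
            _, radius = fit_circle(trunk_points[sel, :2])
            return 2.0 * radius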

  6. 3D Modeling By Consolidation Of Independent Geometries Extracted From Point Clouds - The Case Of The Modeling Of The Turckheim's Chapel (Alsace, France)

    NASA Astrophysics Data System (ADS)

    Koehl, M.; Fabre, Ph.; Schlussel, B.

    2014-06-01

    Turckheim is a small town located in Alsace, in the north-east of France. In the heart of the Alsatian vineyards, this town has many historical monuments, including its old church. To understand the relevance of the project described in this paper, it is important to look at the history of this church. Indeed, many historical events explain its renovation and even its partial reconstruction. The first mention of a Christian sanctuary in Turckheim dates back to 898. It was replaced in the 12th century by a Romanesque church (chapel), of which the bell tower subsists today. Struck by lightning in 1661, the tower was subsequently raised. In 1736, it was repaired following damage sustained in a tornado. In 1791, the town installed an organ in the church. As a last milestone, the church was destroyed by fire in 1978. The organ, like the heart of the church, then had to be restored once again (1983) with a simplified architecture. From this eventful and rich past, unfortunately and as is often the case, only very few documents and little information remain, apart from facts stated in some sporadic writings. With regard to the geometry, positioning and physical characteristics of the initial building, there is very little indication. Some assumptions about positions and footprints have indeed been put forward by different historians and archaeologists. The acquisition and 3D modeling project must therefore provide the current state of the edifice to serve as the basis for new investigations and for the generation of new hypotheses on the locations and historical shapes of this church and its original chapel (Fig. 1)

  7. Registration of 3D spectral OCT volumes using 3D SIFT feature point matching

    NASA Astrophysics Data System (ADS)

    Niemeijer, Meindert; Garvin, Mona K.; Lee, Kyungmoo; van Ginneken, Bram; Abràmoff, Michael D.; Sonka, Milan

    2009-02-01

    The recent introduction of next-generation spectral OCT scanners has enabled routine acquisition of high resolution, 3D cross-sectional volumetric images of the retina. 3D OCT is used in the detection and management of serious eye diseases such as glaucoma and age-related macular degeneration. For follow-up studies, image registration is a vital tool to enable more precise, quantitative comparison of disease states. This work presents a registration method based on a recently introduced extension of the 2D Scale-Invariant Feature Transform (SIFT) framework [1] to 3D [2]. The SIFT feature extractor locates minima and maxima in the difference-of-Gaussian scale space to find salient feature points. It then uses histograms of the local gradient directions around each found extremum in 3D to characterize them in a 4096-element feature vector. Matching points are found by comparing the distance between feature vectors. We apply this method to the rigid registration of optic nerve head (ONH)- and macula-centered 3D OCT scans of the same patient that have only limited overlap. Three OCT data set pairs with known deformation were used for quantitative assessment of the method's robustness and accuracy when deformations of rotation and scaling were considered. A three-dimensional registration accuracy of 2.0 ± 3.3 voxels was observed. The accuracy was assessed as the average voxel distance error in N = 1572 matched locations. The registration method was applied to 12 3D OCT scans (200 x 200 x 1024 voxels) of 6 normal eyes imaged in vivo to demonstrate the clinical utility and robustness of the method in a real-world environment.
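
    Matching points "by comparing the distance between feature vectors" can be sketched as a nearest-neighbour search with a ratio test (Python/scipy). The k-d tree and the ratio threshold are assumptions for illustration and are not stated details of the authors' method.

        import numpy as np
        from scipy.spatial import cKDTree

        def match_descriptors(desc_a, desc_b, ratio=0.8):
            """Match feature vectors (e.g. 4096-element 3D SIFT descriptors) by nearest
            neighbour with a ratio test; returns index pairs (i in A, j in B)."""
            tree = cKDTree(desc_b)
            dist, idx = tree.query(desc_a, k=2)          # two nearest neighbours in B
            keep = dist[:, 0] < ratio * dist[:, 1]       # keep only unambiguous matches
            return np.column_stack([np.nonzero(keep)[0], idx[keep, 0]])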

  8. The application of iterative closest point (ICP) registration to improve 3D terrain mapping estimates using the flash 3D ladar system

    NASA Astrophysics Data System (ADS)

    Woods, Jack; Armstrong, Ernest E.; Armbruster, Walter; Richmond, Richard

    2010-04-01

    The primary purpose of this research was to develop an effective means of creating a 3D terrain map image (point cloud) in GPS-denied regions from a sequence of co-boresighted visible and 3D LADAR images. Both the visible and 3D LADAR cameras were hard-mounted to a vehicle. The vehicle was then driven around the streets of an abandoned village used as a training facility by the German Army, and imagery was collected. The visible and 3D LADAR images were then fused and 3D registration performed using a variation of the Iterative Closest Point (ICP) algorithm. The ICP algorithm is widely used for spatial and geometric alignment of 3D imagery, producing a set of rotation and translation transformations between two 3D images. The ICP rotation and translation information obtained from registering the fused visible and 3D LADAR imagery was then used to calculate the x-y plane, range and intensity (xyzi) coordinates of various structures (buildings, vehicles, trees, etc.) along the driven path. The xyzi coordinate information was then combined to create a 3D terrain map (point cloud). In this paper, we describe the development and application of 3D imaging techniques (most specifically the ICP algorithm) used to improve spatial, range and intensity estimates of imagery collected during urban terrain mapping using a co-boresighted, commercially available digital video camera with a focal plane of 640×480 pixels and a 3D FLASH LADAR. Various representations of the reconstructed point clouds for the drive-through data will also be presented.
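
    The ICP step that recovers the rotation and translation between successive 3D frames can be sketched in its basic point-to-point form (Python with numpy/scipy). The nearest-neighbour correspondences, SVD-based rigid fit, iteration count and convergence threshold below are generic choices for illustration, not the specific ICP variant used by the authors.

        import numpy as np
        from scipy.spatial import cKDTree

        def best_rigid_transform(src, dst):
            """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
            cs, cd = src.mean(axis=0), dst.mean(axis=0)
            H = (src - cs).T @ (dst - cd)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:                     # guard against reflections
                Vt[-1] *= -1.0
                R = Vt.T @ U.T
            return R, cd - R @ cs

        def icp(source, target, iters=30, tol=1e-6):
            """Basic point-to-point ICP: returns accumulated R, t and the aligned source cloud."""
            tree = cKDTree(target)
            src = source.copy()
            R_total, t_total = np.eye(3), np.zeros(3)
            prev_err = np.inf
            for _ in range(iters):
                dist, idx = tree.query(src)              # closest-point correspondences
                R, t = best_rigid_transform(src, target[idx])
                src = src @ R.T + t
                R_total, t_total = R @ R_total, R @ t_total + t
                err = dist.mean()
                if abs(prev_err - err) < tol:
                    break
                prev_err = err
            return R_total, t_total, src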

  9. A 3D Cloud-Construction Algorithm for the EarthCARE Satellite Mission

    NASA Technical Reports Server (NTRS)

    Barker, H. W.; Jerg, M. P.; Wehr, T.; Kato, S.; Donovan, D. P.; Hogan, R. J.

    2011-01-01

    This article presents and assesses an algorithm that constructs 3D distributions of cloud from passive satellite imagery and collocated 2D nadir profiles of cloud properties inferred synergistically from lidar, cloud radar and imager data.

  10. Iterative closest normal point for 3D face recognition.

    PubMed

    Mohammadzade, Hoda; Hatzinakos, Dimitrios

    2013-02-01

    The common approach to 3D face recognition is to register a probe face to each of the gallery faces and then calculate the sum of the distances between their points. This approach is computationally expensive and sensitive to facial expression variation. In this paper, we introduce the iterative closest normal point method for finding the corresponding points between a generic reference face and every input face. The proposed correspondence finding method samples a set of points for each face, denoted as the closest normal points. These points are effectively aligned across all faces, enabling effective application of discriminant analysis methods for 3D face recognition. As a result, the expression variation problem is addressed by minimizing the within-class variability of the face samples while maximizing the between-class variability. As an important conclusion, we show that the surface normal vectors of the face at the sampled points contain more discriminatory information than the coordinates of the points. We have performed comprehensive experiments on the Face Recognition Grand Challenge database, which is presently the largest available 3D face database. We have achieved verification rates of 99.6 and 99.2 percent at a false acceptance rate of 0.1 percent for the all-versus-all and ROC III experiments, respectively, which, to the best of our knowledge, correspond to error rates seven and four times lower, respectively, than the best existing methods on this database. PMID:22585097

  11. Automated Identification of Fiducial Points on 3D Torso Images

    PubMed Central

    Kawale, Manas M; Reece, Gregory P; Crosby, Melissa A; Beahm, Elisabeth K; Fingeret, Michelle C; Markey, Mia K; Merchant, Fatima A

    2013-01-01

    Breast reconstruction is an important part of the breast cancer treatment process for many women. Recently, 2D and 3D images have been used by plastic surgeons for evaluating surgical outcomes. Distances between different fiducial points are frequently used as quantitative measures for characterizing breast morphology. Fiducial points can be directly marked on subjects for direct anthropometry, or can be manually marked on images. This paper introduces novel algorithms to automate the identification of fiducial points in 3D images. Automating the process will make measurements of breast morphology more reliable, reducing the inter- and intra-observer bias. Algorithms to identify three fiducial points, the nipples, sternal notch, and umbilicus, are described. The algorithms used for localization of these fiducial points are formulated using a combination of surface curvature and 2D color information. Comparison of the 3D co-ordinates of automatically detected fiducial points and those identified manually, and geodesic distances between the fiducial points are used to validate algorithm performance. The algorithms reliably identified the location of all three of the fiducial points. We dedicate this article to our late colleague and friend, Dr. Elisabeth K. Beahm. Elisabeth was both a talented plastic surgeon and physician-scientist; we deeply miss her insight and her fellowship. PMID:25288903

  12. Scanning Cloud Radar Observations at Azores: Preliminary 3D Cloud Products

    SciTech Connect

    Kollias, P.; Johnson, K.; Jo, I.; Tatarevic, A.; Giangrande, S.; Widener, K.; Bharadwaj, N.; Mead, J.

    2010-03-15

    The deployment of the Scanning W-Band ARM Cloud Radar (SWACR) during the AMF campaign at Azores signals the first deployment of an ARM Facility-owned scanning cloud radar and offers a prelude for the type of 3D cloud observations that ARM will have the capability to provide at all the ARM Climate Research Facility sites by the end of 2010. The primary objective of the deployment of Scanning ARM Cloud Radars (SACRs) at the ARM Facility sites is to map continuously (operationally) the 3D structure of clouds and shallow precipitation and to provide 3D microphysical and dynamical retrievals for cloud life cycle and cloud-scale process studies. This is a challenging task, never attempted before, and requires significant research and development efforts in order to understand the radar's capabilities and limitations. At the same time, we need to look beyond the radar meteorology aspects of the challenge and ensure that the hardware and software capabilities of the new systems are utilized for the development of 3D data products that address the scientific needs of the new Atmospheric System Research (ASR) program. The SWACR observations at Azores provide a first look at such observations and the challenges associated with their analysis and interpretation. The set of scan strategies applied during the SWACR deployment and their merit is discussed. The scan strategies were adjusted for the detection of marine stratocumulus and shallow cumulus that were frequently observed at the Azores deployment. Quality control procedures for the radar reflectivity and Doppler products are presented. Finally, preliminary 3D-Active Remote Sensing of Cloud Locations (3D-ARSCL) products on a regular grid will be presented, and the challenges associated with their development discussed. In addition to data from the Azores deployment, limited data from the follow-up deployment of the SWACR at the ARM SGP site will be presented. This effort provides a blueprint for the effort required for the

  13. Cloud4Psi: cloud computing for 3D protein structure similarity searching

    PubMed Central

    Mrozek, Dariusz; Małysiak-Mrozek, Bożena; Kłapciński, Artur

    2014-01-01

    Summary: Popular methods for 3D protein structure similarity searching, especially those that generate high-quality alignments such as Combinatorial Extension (CE) and Flexible structure Alignment by Chaining Aligned fragment pairs allowing Twists (FATCAT) are still time consuming. As a consequence, performing similarity searching against large repositories of structural data requires increased computational resources that are not always available. Cloud computing provides huge amounts of computational power that can be provisioned on a pay-as-you-go basis. We have developed the cloud-based system that allows scaling of the similarity searching process vertically and horizontally. Cloud4Psi (Cloud for Protein Similarity) was tested in the Microsoft Azure cloud environment and provided good, almost linearly proportional acceleration when scaled out onto many computational units. Availability and implementation: Cloud4Psi is available as Software as a Service for testing purposes at: http://cloud4psi.cloudapp.net/. For source code and software availability, please visit the Cloud4Psi project home page at http://zti.polsl.pl/dmrozek/science/cloud4psi.htm. Contact: dariusz.mrozek@polsl.pl PMID:24930141

  14. 3D Cloud Effects in OCO-2 Observations - Evidence and Mitigation

    NASA Astrophysics Data System (ADS)

    Schmidt, Sebastian; Massie, Steven; Iwabuchi, Hironobu; Okamura, Rintaro; Crisp, David

    2016-04-01

    In July 2014, the NASA Orbiting Carbon Observatory (OCO-2) satellite was inserted into the 705-km Afternoon Constellation (A-Train). OCO-2 provides estimates of column-averaged CO2 dry air mixing ratios (XCO2), based on high spectral resolution radiance observations of reflected sunlight in the O2 A-band and in the weak and strong absorption CO2 bands at 1.6 and 2.1 μm. The accuracy requirement for OCO-2 XCO2 retrievals is 1 ppmv on regional scales (> 1000 km). At the single sounding level, inhomogeneous clouds, surface albedo, and aerosols introduce wavelength-dependent perturbations into the sensed radiance fields, affecting the retrieval products. Scattering and shadowing by clouds outside of the field of view (FOV) may be a leading source of error for clear-sky XCO2 retrievals in partially cloudy regions. To understand these effects, we developed a 3D OCO-2 simulator, which uses observations by MODIS (also in the A-Train) and other scene information as input to simulate OCO-2 radiance spectra at the full wavelength resolution of the three bands. It is based on MCARaTS (Monte Carlo Atmospheric Radiative Transfer Simulator) as the 3D radiative transfer solver. The OCO-2 3D simulator was applied to an observed scene near a Total Carbon Column Observing Network (TCCON) station. The 3D calculations reproduced the OCO-2 radiances, including the perturbations due to clouds, at the single sounding level. The analysis further suggests that clouds near an OCO-2 footprint leave systematic spectral imprints on the radiances, which could be parameterized to be included in the retrieval state vector. If successful, this new state vector element could account for 3D effects without the need for operational 3D radiative transfer calculations. This may be the starting point not only for the improved screening of low-level broken boundary layer clouds, but also for mitigating the effects of nearby clouds at the radiance level, thus improving the accuracy of retrievals in

  15. Evaluation of terrestrial photogrammetric point clouds derived from thermal imagery

    NASA Astrophysics Data System (ADS)

    Metcalf, Jeremy P.; Olsen, Richard C.

    2016-05-01

    Computer vision and photogrammetric techniques have been widely applied to digital imagery producing high density 3D point clouds. Using thermal imagery as input, the same techniques can be applied to infrared data to produce point clouds in 3D space, providing surface temperature information. The work presented here is an evaluation of the accuracy of 3D reconstruction of point clouds produced using thermal imagery. An urban scene was imaged over an area at the Naval Postgraduate School, Monterey, CA, viewing from above as with an airborne system. Terrestrial thermal and RGB imagery were collected from a rooftop overlooking the site using a FLIR SC8200 MWIR camera and a Canon T1i DSLR. In order to spatially align each dataset, ground control points were placed throughout the study area using Trimble R10 GNSS receivers operating in RTK mode. Each image dataset is processed to produce a dense point cloud for 3D evaluation.

  16. 3D Atmospheric Radiative Transfer for Cloud System-Resolving Models: Forward Modelling and Observations

    SciTech Connect

    Howard Barker; Jason Cole

    2012-05-17

    Utilization of cloud-resolving models and multi-dimensional radiative transfer models to investigate the importance of 3D radiation effects on the numerical simulation of cloud fields and their properties.

  17. Compression of point-texture 3D motion sequences

    NASA Astrophysics Data System (ADS)

    Song, In-Wook; Kim, Chang-Su; Lee, Sang-Uk

    2005-10-01

    In this work, we propose two compression algorithms for PointTexture 3D sequences: the octree-based scheme and the motion-compensated prediction scheme. The first scheme represents each PointTexture frame hierarchically using an octree. The geometry information in the octree nodes is encoded by the predictive partial matching (PPM) method. The encoder supports the progressive transmission of the 3D frame by transmitting the octree nodes in a top-down manner. The second scheme adopts the motion-compensated prediction to exploit the temporal correlation in 3D sequences. It first divides each frame into blocks, and then estimates the motion of each block using the block matching algorithm. In contrast to the motion-compensated 2D video coding, the prediction residual may take more bits than the original signal. Thus, in our approach, the motion compensation is used only for the blocks that can be replaced by the matching blocks. The other blocks are PPM-encoded. Extensive simulation results demonstrate that the proposed algorithms provide excellent compression performances.
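
    The octree-based geometry representation can be sketched as follows (Python/numpy): each node stores an 8-bit mask of occupied children, and the masks form the stream to be entropy coded. The sketch uses a depth-first traversal and omits the PPM entropy coding and the top-down ordering that the paper uses for progressive transmission; those choices are simplifications for illustration.

        import numpy as np

        def build_octree(points, lo, hi, depth):
            """Recursively encode point occupancy as a list of 8-bit child masks (depth-first)."""
            if depth == 0 or len(points) == 0:
                return []
            mid = (lo + hi) / 2.0
            octant = ((points >= mid) * np.array([1, 2, 4])).sum(axis=1)   # child index 0..7
            mask = 0
            for child in range(8):
                if np.any(octant == child):
                    mask |= 1 << child
            stream = [mask]
            for child in range(8):
                sel = points[octant == child]
                if len(sel) == 0:
                    continue
                bits = np.array([child & 1, child & 2, child & 4]) > 0
                clo = np.where(bits, mid, lo)
                chi = np.where(bits, hi, mid)
                stream += build_octree(sel, clo, chi, depth - 1)
            return stream

        # Hypothetical use on a unit-cube point set, 6 subdivision levels deep.
        pts = np.random.rand(1000, 3)
        occupancy_stream = build_octree(pts, np.zeros(3), np.ones(3), depth=6)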

  18. Point Cloud Server (pcs) : Point Clouds In-Base Management and Processing

    NASA Astrophysics Data System (ADS)

    Cura, R.; Perret, J.; Paparoditis, N.

    2015-08-01

    In addition to traditional Geographic Information System (GIS) data such as images and vectors, point cloud data has become more available. It is appreciated for its precision and true three-dimensional (3D) nature. However, managing point clouds can be difficult due to scaling problems and the specificities of this data type. Several methods exist but are usually fairly specialised and solve only one aspect of the management problem. In this work, we propose a complete and efficient point cloud management system based on a database server that works on groups of points rather than individual points. This system is specifically designed to solve all the needs of point cloud users: fast loading, compressed storage, powerful filtering, easy data access and exporting, and integrated processing. Moreover, the system fully integrates metadata (like sensor position) and can conjointly use point clouds with images, vectors, and other point clouds. The system also offers in-base processing for easy prototyping and parallel processing, and can scale well. Lastly, the system is built on open source technologies; therefore it can be easily extended and customised. We test the system with several billion points from Lidar (aerial and terrestrial) and stereo-vision point clouds. We demonstrate a loading speed of about 400 million pts/h, user-transparent compression ratios greater than 2:1 to 4:1, filtering in the approximately 50 ms range, and output of about a million pts/s, along with classical processing such as object detection.

  19. Ground point filtering of UAV-based photogrammetric point clouds

    NASA Astrophysics Data System (ADS)

    Anders, Niels; Seijmonsbergen, Arie; Masselink, Rens; Keesstra, Saskia

    2016-04-01

    Unmanned Aerial Vehicles (UAVs) have proved invaluable for generating high-resolution and multi-temporal imagery. Based on photographic surveys, 3D surface reconstructions can be derived photogrammetrically, producing point clouds, orthophotos and surface models. For geomorphological or ecological applications it may be necessary to separate ground points from vegetation points. Existing filtering methods are designed for point clouds derived using other methods, e.g. laser scanning. The purpose of this paper is to test three filtering algorithms for the extraction of ground points from point clouds derived from low-altitude aerial photography. Three subareas were selected from a single flight, representing different scenarios: 1) a low-relief, sparsely vegetated area, 2) a low-relief, moderately vegetated area, and 3) a medium-relief, moderately vegetated area. The three filtering methods classify ground points in different ways, based on 1) RGB color values from training samples, 2) TIN densification as implemented in LAStools, and 3) an iterative surface lowering algorithm. Ground points are then interpolated into a digital terrain model using inverse distance weighting. The results suggest that different landscapes require different filtering methods for optimal ground point extraction. While iterative surface lowering and TIN densification are fully automated, color-based classification requires fine-tuning in order to optimize the filtering results. Finally, we conclude that filtering photogrammetric point clouds could provide a cheap alternative to laser scan surveys for creating digital terrain models in sparsely vegetated areas.
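
    The final step, interpolating the classified ground points into a digital terrain model with inverse distance weighting, can be sketched as follows (Python with numpy/scipy). The cell size, neighbour count and distance power are illustrative parameters, not values taken from the paper.

        import numpy as np
        from scipy.spatial import cKDTree

        def idw_dtm(ground_pts, cell=1.0, k=8, power=2.0):
            """Interpolate classified ground points (x, y, z) onto a regular DTM grid using
            inverse distance weighting of the k nearest ground points per grid cell."""
            xy, z = ground_pts[:, :2], ground_pts[:, 2]
            xmin, ymin = xy.min(axis=0)
            xmax, ymax = xy.max(axis=0)
            xs = np.arange(xmin, xmax, cell)
            ys = np.arange(ymin, ymax, cell)
            gx, gy = np.meshgrid(xs, ys)
            cells = np.column_stack([gx.ravel(), gy.ravel()])
            dist, idx = cKDTree(xy).query(cells, k=k)
            dist = np.maximum(dist, 1e-6)                # guard against zero distances
            w = 1.0 / dist ** power
            dtm = (w * z[idx]).sum(axis=1) / w.sum(axis=1)
            return dtm.reshape(gy.shape), (xs, ys)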

  20. Registration of point cloud data for HDD stamped base inspection

    NASA Astrophysics Data System (ADS)

    Suh, Sungho; Cho, Hansang

    2015-09-01

    As part of the HDD manufacturing process, the HDD stamped base, an exterior container, is one of the most essential components; various parts are assembled in it to compose a hard disk drive (HDD). Height errors caused by pressing, breaking or cracking can occur on the base, because it is produced by a stamping method. In order to detect these height errors, an inspection process is essential in the production fields. In the current industry, the CMM (Coordinate Measurement Machine) is one of the representative machines that inspect certain regions of the product. The machine probes points designated by an operator and judges defects by comparing the height of each point to the originally designed height. However, the method takes a long time to inspect each designated point, resulting in a total of 17 minutes. In order to reduce the total inspection time, we propose an inspection method using 3D point cloud data acquired from a holographic sensor. To compare the heights from the acquired 3D point cloud data with those from the originally designed CAD data, exact point cloud registration is important. There are differences between 2D image registration and 3D point cloud registration, such as translation on each plane, rotation, tilt, and nonlinear transformations. The relationship between the acquired 3D point cloud data and the originally designed CAD data can be obtained by a projective transformation. If the projective transformation matrix between the two is obtained, 3D point cloud data registration can be performed. In order to calculate the 3D projective transformation matrix, corresponding points between the 3D point cloud data and the CAD data are required. To find the corresponding points, we use a height map which is obtained by projecting the 3D point cloud data onto the XY plane. In the height map, the pixel intensity is derived from the height value of each point. If the height maps from the 3D point cloud data and the CAD data are matched, corresponding points can be estimated. As one of the
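
    The height-map projection used to find correspondences can be sketched as a simple rasterization (Python/numpy). Taking the maximum z per cell and the chosen cell size are assumptions for illustration; the paper only states that pixel intensity is derived from the point heights.

        import numpy as np

        def height_map(points, cell=0.1):
            """Project a 3D point cloud onto the XY plane as a 2D height map.

            Each pixel stores the maximum z of the points falling into that cell;
            empty cells remain NaN. 'cell' is the pixel size in point-cloud units."""
            xy = points[:, :2]
            origin = xy.min(axis=0)
            ij = np.floor((xy - origin) / cell).astype(int)
            shape = ij.max(axis=0) + 1
            hmap = np.full(shape, np.nan)
            for (i, j), z in zip(ij, points[:, 2]):
                if np.isnan(hmap[i, j]) or z > hmap[i, j]:
                    hmap[i, j] = z
            return hmap, origin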

  1. Coupled fvGCM-GCE Modeling System, 3D Cloud-Resolving Model and Cloud Library

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo

    2005-01-01

    Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that cloud-resolving models (CRMs) agree with observations better than traditional single-column models in simulating various types of clouds and cloud systems from different geographic locations. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. It requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF) to use these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems. The use of a GCM will enable global coverage, and the use of a CRM will allow for better and more sophisticated physical parameterization. NASA satellite and field campaign cloud-related datasets can provide initial conditions as well as validation for both the MMF and CRMs. A seed fund is available at NASA Goddard to build an MMF based on the 2D Goddard Cumulus Ensemble (GCE) model and the Goddard finite volume general circulation model (fvGCM). A prototype MMF is being developed and production runs will be conducted at the beginning of 2005. In this talk, I will present: (1) a brief review of the GCE model and its applications to precipitation processes, (2) the Goddard MMF and the major differences between the two existing MMFs (CSU MMF and Goddard MMF), (3) a cloud library generated by the Goddard MMF and the 3D GCE model, and (4) a brief discussion of the GCE model in developing a global cloud simulator.

  2. Secure 3D watermarking algorithm based on point set projection

    NASA Astrophysics Data System (ADS)

    Liu, Quan; Zhang, Xiaomei

    2007-11-01

    3D digital models greatly facilitate the distribution and storage of information, while their copyright protection attracts more and more research interest. A novel secure digital watermarking algorithm for 3D models is proposed in this paper. In order to survive most attacks like rotation, cropping, smoothing, adding noise, etc., the projection of the model's point set is chosen as the carrier of the watermark in the presented algorithm; the watermark contains copyright information such as logos or text. Projections of the model's point set onto the x, y and z planes are calculated respectively. Before the watermark embedding process, the original watermark is scrambled by a key. Each projection is decomposed by singular value decomposition (SVD), and the scrambled watermark is embedded into the SVD domain of the x, y and z projections respectively. The watermarked x, y and z projections are then used to recover the vertices of the model, and the watermarked model is attained. Only the legal user can remove the watermark from the watermarked models using the private key. Experiments are presented in the paper to show that the proposed algorithm has good performance against various malicious attacks.
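
    A generic additive SVD-domain embedding can be sketched as follows (Python/numpy). Treating a projection as a matrix, the embedding strength alpha, and the extraction by comparison with the original data are assumptions for illustration; the paper's exact scheme, including scrambling the watermark with a private key, is not reproduced here.

        import numpy as np

        def embed_watermark(P, watermark, alpha=0.01):
            """Embed a (scrambled) watermark vector into the singular values of matrix P."""
            U, S, Vt = np.linalg.svd(P, full_matrices=False)
            S_marked = S + alpha * watermark[:len(S)]    # additive embedding in the SVD domain
            return U @ np.diag(S_marked) @ Vt

        def extract_watermark(P_marked, P_original, alpha=0.01):
            """Recover the watermark by comparing singular values against the original matrix."""
            S_marked = np.linalg.svd(P_marked, compute_uv=False)
            S = np.linalg.svd(P_original, compute_uv=False)
            return (S_marked - S) / alpha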

  3. 3D Radiative Aspects of the Increased Aerosol Optical Depth Near Clouds

    NASA Technical Reports Server (NTRS)

    Marshak, Alexander; Wen, Guoyong; Remer, Lorraine; Cahalan, Robert; Coakley, Jim

    2007-01-01

    To characterize aerosol-cloud interactions it is important to correctly retrieve aerosol optical depth in the vicinity of clouds. It is well reported in the literature that aerosol optical depth increases with cloud cover. Part of the increase comes from real physics, such as humidification; another part, however, comes from 3D cloud effects in the remote sensing retrievals. In many cases it is hard to say whether the retrieved increased values of aerosol optical depth are remote sensing artifacts or real. In the presentation, we will discuss how the 3D cloud effects can be mitigated. We will demonstrate a simple model that can assess the enhanced illumination of cloud-free columns in the vicinity of clouds. This model is based on the assumption that the enhancement in the cloud-free column radiance comes from the enhanced Rayleigh scattering due to the presence of surrounding clouds. A stochastic cloud model of broken cloudiness is used to simulate the upward flux.

  4. Do Fractal Models of Clouds Produce the Right 3D Radiative Effects?

    NASA Technical Reports Server (NTRS)

    Varnai, Tamas; Marshak, Alexander; Einaudi, Franco (Technical Monitor)

    2001-01-01

    Stochastic fractal models of clouds are often used to study 3D radiative effects and their influence on the remote sensing of cloud properties. Since it is important that the cloud models produce a correct radiative response, some researchers require the model parameters to match observed cloud properties such as scale-independent optical thickness variability. Unfortunately, matching these properties does not necessarily imply that the cloud models will cause the right 3D radiative effects. First, the matched properties alone only influence the 3D effects but do not completely determine them. Second, in many cases the retrieved cloud properties have already been biased by 3D radiative effects, and so the models may not match the real clouds. Finally, the matched cloud properties cannot be considered independent of the scales at which they have been retrieved. This paper proposes an approach that helps ensure that fractal cloud models are realistic and produce the right 3D effects. The technique compares the results of radiative transfer simulations for the model clouds to new direct observations of 3D radiative effects in satellite images.

  5. Momentum Transport: 2D and 3D Cloud Resolving Model Simulations

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo

    2001-01-01

    The major objective of this study is to investigate the momentum budgets associated with several convective systems that developed during the TOGA COARE IOP (west Pacific warm pool region) and GATE (east Atlantic region). The tool for this study is the improved Goddard Cumulus Ensemble (GCE) model, which includes a 3-class ice-phase microphysical scheme, explicit cloud-radiative interactive processes and air-sea interactive surface processes. The model domain contains 256 x 256 grid points (with 2 km resolution) in the horizontal and 38 grid points (to a depth of 22 km) in the vertical. The 2D domain has 1024 grid points. The simulations were performed over a 7-day time period (December 19-26, 1992, for TOGA COARE and September 1-7, 1994 for GATE). Cyclic lateral boundary conditions are required for this type of long-term integration. Two well-organized squall systems (TOGA COARE, February 22, 1993, and GATE, September 12, 1994) were also simulated using the 3D GCE model. Only 9 h simulations were required to cover the lifetime of the squall systems, and the lateral boundary conditions were open for these two squall system simulations. The following will be examined: (1) the momentum budgets in the convective and stratiform regions, (2) the relationship between momentum transport and cloud organization (i.e., well-organized squall lines versus less organized convection), (3) the differences and similarities in momentum transport between 2D and 3D simulated convective systems, and (4) the differences and similarities in momentum budgets between cloud systems simulated with open and cyclic lateral boundary conditions. Preliminary results indicate that there are only small differences between 2D and 3D simulated momentum budgets. Major differences occur, however, between momentum budgets associated with squall systems simulated using different lateral boundary conditions.

  6. Coupled fvGCM-GCE Modeling System, 3D Cloud-Resolving Model and Cloud Library

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo

    2005-01-01

    Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that cloud-resolving models (CRMs) agree with observations better than traditional single-column models in simulating various types of clouds and cloud systems from different geographic locations. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. It requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF) to use these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems. The use of a GCM will enable global coverage, and the use of a CRM will allow for better and more sophisticated physical parameterization. NASA satellite and field campaign cloud-related datasets can provide initial conditions as well as validation for both the MMF and CRMs. A seed fund is available at NASA Goddard to build an MMF based on the 2D Goddard Cumulus Ensemble (GCE) model and the Goddard finite volume general circulation model (fvGCM). A prototype MMF is being developed and production runs will be conducted at the beginning of 2005. In this talk, I will present: (1) a brief review of the GCE model and its applications to precipitation processes, (2) the Goddard MMF and the major differences between the two existing MMFs (CSU MMF and Goddard MMF), (3) a cloud library generated by the Goddard MMF and the 3D GCE model, and (4) a brief discussion of the GCE model in developing a global cloud simulator.

  7. 3D modeling of clouds in GJ1214b's atmosphere

    NASA Astrophysics Data System (ADS)

    Charnay, Benjamin; Meadows, Victoria; Leconte, Jérémy; Misra, Amit; Arney, Giada

    2015-11-01

    GJ1214b is a warm mini-Neptune/waterworld and one of the few low-mass exoplanets whose atmosphere is characterizable by current telescopes. Recent observations indicated a flat transit spectrum in the near-infrared, which has been interpreted as the presence of high and thick condensate clouds of KCl or ZnS, or of photochemical hazes. However, the formation of such high clouds/hazes would require a strong vertical mixing linked to the atmospheric circulation. In order to understand the transport, distribution and observational implications of such clouds/hazes, we studied the atmospheric circulation and cloud formation on GJ1214b for H-dominated and water-dominated atmospheres using the Generic LMDZ GCM. Firstly, we analyzed cloud-free atmospheres. We showed that the zonal mean meridional circulation corresponds to an anti-Hadley circulation in most of the atmosphere, with upwelling at midlatitude and downwelling at the equator. This circulation should strongly impact cloud formation and distribution, leading to a minimum of cloud at the equator. We also derived 1D equivalent eddy diffusion coefficients. The corresponding values should favor an efficient formation of photochemical haze in the upper atmosphere of GJ1214b. Secondly, we simulated cloudy atmospheres including latent heat release and radiative effects for KCl and ZnS clouds. We analyzed their impacts on the thermal structure. In particular, we found that ZnS clouds may lead to the formation of a stratospheric thermal inversion. We showed that flat transit spectra consistent with HST observations are possible for cloud particle radii around 0.5 microns. Using the outputs of our GCM, we also generated emission and reflection spectra and phase curves. Finally, our results suggest that primary and secondary eclipses and phase curves observed by JWST should provide strong constraints on the nature of GJ1214b's atmosphere and clouds.

  8. Parameterization and analysis of 3-D radiative transfer in clouds

    SciTech Connect

    Varnai, Tamas

    2012-03-16

    This report provides a summary of major accomplishments from the project. The project examines the impact of radiative interactions between neighboring atmospheric columns, for example clouds scattering extra sunlight toward nearby clear areas. While most current cloud models don't consider these interactions and instead treat sunlight in each atmospheric column separately, the resulting uncertainties have remained unknown. This project has provided the first estimates of the way average solar heating is affected by interactions between nearby columns. These estimates have been obtained by combining several years of cloud observations at three DOE Atmospheric Radiation Measurement (ARM) Climate Research Facility sites (in Alaska, Oklahoma, and Papua New Guinea) with simulations of solar radiation around the observed clouds. The importance of radiative interactions between atmospheric columns was evaluated by contrasting simulations that included the interactions with those that did not. This study provides lower-bound estimates for radiative interactions: it cannot consider interactions in the cross-wind direction, because it uses two-dimensional vertical cross-sections through clouds that were observed by instruments looking straight up as clouds drifted aloft. Data from new DOE scanning radars will allow future radiative studies to consider the full three-dimensional nature of radiative processes. The results reveal that two-dimensional radiative interactions increase overall day-and-night average solar heating by about 0.3, 1.2, and 4.1 watts per square meter at the three sites, respectively. This increase grows further if one considers that most large-domain cloud simulations have resolutions that cannot resolve small-scale cloud variability. For example, the increases in solar heating mentioned above roughly double for a fairly typical model resolution of 1 km. The study also examined the factors that shape radiative interactions between atmospheric columns and

  9. 3D modeling of clouds in GJ1214b's atmosphere

    NASA Astrophysics Data System (ADS)

    Charnay, Benjamin; Meadows, Victoria; Leconte, Jérémy; Misra, Amit; Arney, Giada

    2015-12-01

    GJ1214b is a warm mini-Neptune/waterworld and one of the few low-mass exoplanets whose atmosphere is characterizable by current telescopes. Recent observations indicated a flat transit spectrum in the near-infrared, which has been interpreted as the presence of high and thick condensate clouds of KCl or ZnS, or of photochemical hazes [1]. However, the formation of such high clouds/hazes would require a strong vertical mixing linked to the atmospheric circulation [2]. In order to understand the transport, distribution and observational implications of such clouds/hazes, we studied the atmospheric circulation and cloud formation on GJ1214b for H-dominated and water-dominated atmospheres using the Generic LMDZ GCM. Firstly, we analyzed cloud-free atmospheres [3]. We showed that the zonal mean meridional circulation corresponds to an anti-Hadley circulation in most of the atmosphere, with upwelling at midlatitude and downwelling at the equator. This circulation should strongly impact cloud formation and distribution, leading to a minimum of cloud at the equator. We also derived 1D equivalent eddy diffusion coefficients. The corresponding values should favor an efficient formation of photochemical haze in the upper atmosphere of GJ1214b. Secondly, we simulated cloudy atmospheres including latent heat release and radiative effects for KCl and ZnS clouds [4]. We analyzed their distribution and their impacts on the thermal structure. In particular, a stratospheric thermal inversion should likely be formed by absorption of stellar radiation by ZnS clouds. We showed that flat transit spectra consistent with HST observations are possible for cloud particle radii around 0.5 microns. Using the outputs of our GCM, we also generated emission and reflection spectra and phase curves. Finally, our results suggest that primary and secondary eclipses and phase curves observed by JWST should provide strong constraints on the nature of GJ1214b's atmosphere and clouds. References: [1] Kreidberg et al

  10. Evaluating point cloud accuracy of static three-dimensional laser scanning based on point cloud error ellipsoid model

    NASA Astrophysics Data System (ADS)

    Chen, Xijiang; Hua, Xianghong; Zhang, Guang; Wu, Hao; Xuan, Wei; Li, Moxiao

    2015-01-01

    Evaluation of static three-dimensional (3-D) laser scanning point cloud accuracy has become a topical research issue. Point cloud accuracy is typically estimated by comparing terrestrial laser scanning data related to a finite number of check point coordinates against those obtained by an independent source of higher accuracy. These methods can only estimate the point accuracy but not the point cloud accuracy, which is influenced by the positional error and sampling interval. It is proposed that the point cloud error ellipsoid is favorable for inspecting the point cloud accuracy, which is determined by the individual point error ellipsoid volume. The kernel of this method is the computation of the point cloud error ellipsoid volume and the determination of the functional relationship between the error ellipsoid and accuracy. The proposed point cloud accuracy evaluation method is particularly suited for small sampling intervals when there exists an intersection of two error ellipsoids, and is suited not only for planar but also for nonplanar target surfaces. The performance of the proposed method (PM) is verified using both planar and nonplanar board point clouds. The results demonstrate that the proposed evaluation method significantly outperforms the existing methods when the target surface is nonplanar or there exists an intersection of two error ellipsoids. The PM therefore has the potential for improving the reliability of point cloud digital elevation models and static 3-D laser scanning-based deformation monitoring.
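
    A minimal sketch of the quantity at the heart of the method, the individual point error ellipsoid volume computed from a 3x3 point covariance matrix, is given below (Python/numpy). A 1-sigma ellipsoid is assumed; the paper's treatment of sampling interval, ellipsoid intersections and the mapping from volume to accuracy is not reproduced here.

        import numpy as np

        def error_ellipsoid_volume(cov):
            """Volume of the 1-sigma error ellipsoid of a 3x3 point covariance matrix.

            The ellipsoid semi-axes are the square roots of the eigenvalues, so
            V = (4/3) * pi * sqrt(l1 * l2 * l3) = (4/3) * pi * sqrt(det(cov))."""
            return 4.0 / 3.0 * np.pi * np.sqrt(np.linalg.det(cov))

        def mean_cloud_ellipsoid_volume(covariances):
            """Average the individual point error ellipsoid volumes over a point cloud."""
            return float(np.mean([error_ellipsoid_volume(c) for c in covariances]))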

  11. Extending 3D Near-Cloud Corrections from Shorter to Longer Wavelengths

    NASA Technical Reports Server (NTRS)

    Marshak, Alexander; Evans, K. Frank; Varnai, Tamas; Wen, Guoyong

    2014-01-01

    Satellite observations have shown a positive correlation between cloud amount and aerosol optical thickness (AOT) that can be explained by the humidification of aerosols near clouds, and/or by cloud contamination by sub-pixel size clouds and the cloud adjacency effect. The last effect may substantially increase reflected radiation in cloud-free columns, leading to overestimates in the retrieved AOT. For clear-sky areas near boundary layer clouds the main contribution to the enhancement of clear sky reflectance at shorter wavelengths comes from the radiation scattered into clear areas by clouds and then scattered to the sensor by air molecules. Because of the wavelength dependence of air molecule scattering, this process leads to a larger reflectance increase at shorter wavelengths, and can be corrected using a simple two-layer model. However, correcting only for molecular scattering skews spectral properties of the retrieved AOT. Kassianov and Ovtchinnikov proposed a technique that uses spectral reflectance ratios to retrieve AOT in the vicinity of clouds; they assumed that the cloud adjacency effect influences the spectral ratio between reflectances at two wavelengths less than it influences the reflectances themselves. This paper combines the two approaches: It assumes that the 3D correction for the shortest wavelength is known with some uncertainties, and then it estimates the 3D correction for longer wavelengths using a modified ratio method. The new approach is tested with 3D radiances simulated for 26 cumulus fields from Large-Eddy Simulations, supplemented with 40 aerosol profiles. The results showed that (i) for a variety of cumulus cloud scenes and aerosol profiles over ocean the 3D correction due to cloud adjacency effect can be extended from shorter to longer wavelengths and (ii) the 3D corrections for longer wavelengths are not very sensitive to unbiased random uncertainties in the 3D corrections at shorter wavelengths.

  12. A multi-resolution fractal additive scheme for blind watermarking of 3D point data

    NASA Astrophysics Data System (ADS)

    Rahmes, Mark; Wilder, Kathy; Fox, Kevin

    2013-05-01

    We present a fractal feature space for 3D point watermarking to make geospatial systems more secure. By exploiting the self-similar nature of fractals, hidden information can be spatially embedded in point cloud data in an acceptable manner, as described within this paper. Our method utilizes a blind scheme which provides automatic retrieval of the watermark payload without the need for the original cover data. Our method for locating similar patterns and encoding information in LiDAR point cloud data relies on a look-up table or code book. The watermark is then merged into the point cloud data itself, resulting in low distortion effects. With current advancements in computing technologies, such as GPGPUs, fractal processing is now applicable to the processing of big data, which is present in geospatial as well as other systems. The watermarking technique described within this paper can be important for systems where point data is handled by numerous aerial collectors, including analysts' use of systems such as a National LiDAR Data Layer.

  13. Use of the ARM Measurement of Spectral Zenith Radiance For Better Understanding Of 3D Cloud-Radiation Processes and Aerosol-Cloud Interaction

    SciTech Connect

    Chiu, Jui-Yuan

    2010-10-19

    Our proposal focuses on cloud-radiation processes in a general 3D cloud situation, with particular emphasis on cloud optical depth and effective particle size. We also focus on zenith radiance measurements, both active and passive. The proposal has three main parts. Part One exploits the "solar-background" mode of ARM lidars to allow them to retrieve cloud optical depth not just for thin clouds but for all clouds. This also enables the study of aerosol cloud interactions with a single instrument. Part Two exploits the large number of new wavelengths offered by ARM's zenith-pointing ShortWave Spectrometer (SWS), especially during CLASIC, to develop better retrievals not only of cloud optical depth but also of cloud particle size. We also propose to take advantage of the SWS's 1 Hz sampling to study the "twilight zone" around clouds where strong aerosol-cloud interactions are taking place. Part Three involves continuing our cloud optical depth and cloud fraction retrieval research with ARM's 2NFOV instrument by, first, analyzing its data from the AMF-COPS/CLOWD deployment, and second, making our algorithms part of ARM's operational data processing.

  14. Automatic Classification of Trees from Laser Scanning Point Clouds

    NASA Astrophysics Data System (ADS)

    Sirmacek, B.; Lindenbergh, R.

    2015-08-01

    The development of laser scanning technologies has promoted tree monitoring studies to a new level, as laser scanning point clouds enable accurate 3D measurements in a fast and environmentally friendly manner. In this paper, we introduce a probability matrix computation based algorithm for automatically classifying laser scanning point clouds into 'tree' and 'non-tree' classes. Our method uses the 3D coordinates of the laser scanning points as input and generates a new point cloud which holds a label for each point indicating if it belongs to the 'tree' or 'non-tree' class. To do so, a grid surface is assigned to the lowest height level of the point cloud. The grid cells are filled with probability values which are calculated by checking the point density above each cell. Since the tree trunk locations appear with very high values in the probability matrix, selecting the local maxima of the grid surface helps to detect the tree trunks. Further points are assigned to tree trunks if they appear in close proximity to the trunks. Since heavy mathematical computations (such as point cloud organization, detailed 3D shape detection methods, or graph network generation) are not required, the proposed algorithm works very fast compared to existing methods. The tree classification results are found to be reliable even on point clouds of cities containing many different objects. As the most significant weakness, false detection of light poles, traffic signs and other objects close to trees cannot be prevented. Nevertheless, the experimental results on mobile and airborne laser scanning point clouds indicate the possible usage of the algorithm as an important step for tree growth observation, tree counting and similar applications. While the laser scanning point cloud makes it possible to classify even very small trees, the accuracy of the results is reduced in low point density areas further away from the scanning location. These advantages and disadvantages of two laser scanning point
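
    The density grid and local maxima idea can be sketched as follows (Python with numpy/scipy). The cell size, maximum-filter window and count threshold are illustrative assumptions; the sketch only shows trunk candidate detection, not the full probability matrix classification.

        import numpy as np
        from scipy.ndimage import maximum_filter

        def trunk_candidates(points, cell=0.5, window=5, min_count=20):
            """Rasterize point density on a ground-level grid and return the grid indices of
            local maxima as candidate tree trunk locations."""
            xy = points[:, :2]
            origin = xy.min(axis=0)
            ij = np.floor((xy - origin) / cell).astype(int)
            shape = ij.max(axis=0) + 1
            density = np.zeros(shape)
            np.add.at(density, (ij[:, 0], ij[:, 1]), 1)      # number of points above each cell
            peaks = (density == maximum_filter(density, size=window)) & (density >= min_count)
            return np.argwhere(peaks), density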

  15. Robotic Online Path Planning on Point Cloud.

    PubMed

    Liu, Ming

    2016-05-01

    This paper deals with the path-planning problem for mobile wheeled or tracked robots which drive in 2.5-D environments, where the traversable surface is usually considered as a 2-D manifold embedded in a 3-D ambient space. Specifically, we aim at solving the 2.5-D navigation problem using raw point clouds as input. The proposed method is independent of traditional surface parametrization or reconstruction methods, such as a meshing process, which generally have high computational complexity. Instead, we utilize the output of the 3-D tensor voting framework on the raw point clouds. The computation of tensor voting is accelerated by an optimized implementation on a graphics processing unit. Based on the tensor voting results, a novel local Riemannian metric is defined using the saliency components, which helps the modeling of the latent traversable surface. Using the proposed metric, we show by experiments that the geodesic in the 3-D tensor space leads to rational path-planning results. Compared to traditional methods, the results reveal the advantages of the proposed method in terms of smoothing the robot maneuver while considering the minimum travel distance. PMID:26011876

  16. Parameterization and Analysis of 3-D Solar Radiative Transfer in Clouds: Final Report

    SciTech Connect

    Jerry Y. Harrington

    2012-09-21

    This document reports on the research that we have done over the course of our two-year project. The report also covers the research done on this project during a 1-year no-cost extension of the grant. Our work has had two main, inter-related thrusts. The first thrust was to characterize the response of stratocumulus cloud structure and dynamics to systematic changes in cloud infrared radiative cooling and solar heating using one-dimensional radiative transfer models. The second was to couple a three-dimensional (3-D) solar radiative transfer model to the Large Eddy Simulation (LES) model that we use to simulate stratocumulus. The purpose of the studies with 3-D radiative transfer was to examine the possible influences of 3-D photon transport on the structure, evolution, and radiative properties of stratocumulus. While 3-D radiative transport has been examined in static cloud environments, few studies have attempted to examine whether the 3-D nature of radiative absorption and emission influences the structure and evolution of stratocumulus. We undertook this dual approach because only a small number of LES simulations with the 3-D radiative transfer model are possible due to the high computational costs. Consequently, LES simulations with a 1-D radiative transfer solver were used in order to examine the portions of stratocumulus parameter space that may be most sensitive to perturbations in the radiative fields. The goal was then to explore these sensitive regions with LES using full 3-D radiative transfer. Our overall goal was to discover whether 3-D radiative processes alter cloud structure and evolution, and whether this may have any indirect implications for cloud radiative properties. In addition, we collaborated with Dr. Tamas Varnai, providing model output fields for his attempt at parameterizing 3-D radiative effects for cloud models.

  17. Dynamic 3-D chemical agent cloud mapping using a sensor constellation deployed on mobile platforms

    NASA Astrophysics Data System (ADS)

    Cosofret, Bogdan R.; Konno, Daisei; Rossi, David; Marinelli, William J.; Seem, Pete

    2014-05-01

    The need for standoff detection technology to provide early Chem-Bio (CB) threat warning is well documented. Much of the information obtained by a single passive sensor is limited to bearing and angular extent of the threat cloud. In order to obtain absolute geo-location, range to threat, 3-D extent and detailed composition of the chemical threat, fusion of information from multiple passive sensors is needed. A capability that provides on-the-move chemical cloud characterization is key to the development of real-time Battlespace Awareness. We have developed, implemented and tested algorithms and hardware to perform the fusion of information obtained from two mobile LWIR passive hyperspectral sensors. The implementation of the capability is driven by current Nuclear, Biological and Chemical Reconnaissance Vehicle operational tactics and represents a mission focused alternative of the already demonstrated 5-sensor static Range Test Validation System (RTVS).1 The new capability consists of hardware for sensor pointing and attitude information which is made available for streaming and aggregation as part of the data fusion process for threat characterization. Cloud information is generated using 2-sensor data ingested into a suite of triangulation and tomographic reconstruction algorithms. The approaches are amenable to using a limited number of viewing projections and unfavorable sensor geometries resulting from mobile operation. In this paper we describe the system architecture and present an analysis of results obtained during the initial testing of the system at Dugway Proving Ground during BioWeek 2013.

  18. Evaluating Voxel Enabled Scalable Intersection of Large Point Clouds

    NASA Astrophysics Data System (ADS)

    Wang, J.; Lindenbergh, R.; Menenti, M.

    2015-08-01

    Laser scanning has become a well established surveying solution for obtaining 3D geo-spatial information on objects and the environment. Nowadays scanners acquire up to millions of points per second, which makes point clouds huge. Laser scanning is widely applied from airborne, carborne and stable platforms, resulting in point clouds obtained at different attitudes and with different extents. Working with such different large point clouds makes the determination of their overlapping area necessary but often time consuming. In this paper, a scalable point cloud intersection determination method is presented based on voxels. The method takes two overlapping point clouds as input. It consecutively resamples the input point clouds according to a preset voxel cell size. For all non-empty cells the center of gravity of the points it contains is computed. Subsequently, for those centers it is checked whether they lie in a voxel cell of the other point cloud. The same process is repeated after interchanging the roles of the two point clouds. The quality of the results is evaluated by the distance to the points from the other data set. Also computation time and quality of the results are compared for different voxel cell sizes. The results are demonstrated by determining the intersection between an airborne and a carborne laser point cloud and show that the proposed method takes 0.10%, 0.15%, 1.26% and 14.35% of the computation time of the classic method when using cell sizes of 10, 8, 5 and 3 meters respectively.
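
    The core of the voxel approach can be sketched compactly: bin both clouds into a common voxel grid and keep the points of one cloud that fall into cells occupied by the other. For brevity the sketch checks membership per point rather than per cell centroid, so it only approximates the method described above; the 5 m default cell size is an assumption.

        import numpy as np

        def voxel_keys(points, cell, origin):
            return np.floor((points - origin) / cell).astype(np.int64)  # integer voxel indices

        def voxel_intersection(cloud_a, cloud_b, cell=5.0):
            origin = np.minimum(cloud_a.min(axis=0), cloud_b.min(axis=0))
            keys_a = voxel_keys(cloud_a, cell, origin)
            keys_b = voxel_keys(cloud_b, cell, origin)
            occupied_a = {tuple(k) for k in np.unique(keys_a, axis=0)}
            occupied_b = {tuple(k) for k in np.unique(keys_b, axis=0)}
            # Points of A lying in cells that B also occupies, and vice versa
            in_b = np.array([tuple(k) in occupied_b for k in keys_a])
            in_a = np.array([tuple(k) in occupied_a for k in keys_b])
            return cloud_a[in_b], cloud_b[in_a]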

  19. Reconstructing 3D coastal cliffs from airborne oblique photographs without ground control points

    NASA Astrophysics Data System (ADS)

    Dewez, T. J. B.

    2014-05-01

    Coastal cliff collapse hazard assessment requires measuring cliff face topography at regular intervals. Terrestrial laser scanner techniques have proven useful so far but are expensive to use, either through purchasing the equipment or through survey subcontracting. In addition, terrestrial laser surveys take time, which is sometimes incompatible with the period during which the beach is accessible at low tide. By comparison, structure from motion (SFM) techniques are much less costly to implement, and if airborne, acquisition of several kilometers of coastline can be done in a matter of minutes. In this paper, the potential of GPS-tagged oblique airborne photographs and SFM techniques is examined to reconstruct dense 3D point clouds of chalk cliffs without Ground Control Points (GCP). The focus is put on comparing the relative 3D points of view reconstructed by Visual SFM with their synchronous Solmeta Geotagger Pro2 GPS locations using robust estimators. With a set of 568 oblique photos, shot from the open door of an airplane with a triplet of synchronized Nikon D7000 cameras, GPS and SFM-determined viewpoint coordinates converge to X: ±31.5 m; Y: ±39.7 m; Z: ±13.0 m (LE66). Uncertainty in GPS position affects the model scale, the angular attitude of the reference frame (the shoreline ends up tilted by 2°) and the absolute positioning. Ground Control Points therefore cannot be avoided when orienting such models.

  20. 3D Aerosol-Cloud Radiative Interaction Observed in Collocated MODIS and ASTER Images of Cumulus Cloud Fields

    NASA Technical Reports Server (NTRS)

    Wen, Guoyong; Marshak, Alexander; Cahalan, Robert F.; Remer, Lorraine A.; Kleidman, Richard G.

    2007-01-01

    3D aerosol-cloud interaction is examined by analyzing two images containing cumulus clouds in biomass burning regions in Brazil. The research consists of two parts. The first part focuses on identifying 3D cloud impacts on the reflectance of pixels selected for the MODIS aerosol retrieval based purely on observations. The second part of the research combines the observations with radiative transfer computations to identify key parameters in 3D aerosol-cloud interaction. We found that the 3D cloud-induced enhancement depends on the optical properties of nearby clouds as well as on wavelength. The enhancement is too large to be ignored. The associated bias in 1D aerosol optical thickness retrievals ranges from 50% to 140%, depending on wavelength and the optical properties of nearby clouds as well as on aerosol optical thickness. We caution the community to be prudent when applying 1D approximations in computing solar radiation in clear regions adjacent to clouds or when using traditionally retrieved aerosol optical thickness in aerosol indirect effect research.

  1. Modeling the Impact of Drizzle and 3D Cloud Structure on Remote Sensing of Effective Radius

    NASA Technical Reports Server (NTRS)

    Platnick, Steven; Zinner, Tobias; Ackerman, S.

    2008-01-01

    Remote sensing of cloud particle size with passive sensors like MODIS is an important tool for cloud microphysical studies. As a measure of the radiatively relevant droplet size, effective radius can be retrieved with different combinations of visible through shortwave infrared channels. MODIS observations sometimes show significantly larger effective radii in marine boundary layer cloud fields derived from the 1.6 and 2.1 μm channel observations than for 3.7 μm retrievals. Possible explanations range from 3D radiative transport effects and sub-pixel cloud inhomogeneity to the impact of drizzle formation on the droplet distribution. To investigate the potential influence of these factors, we use LES boundary layer cloud simulations in combination with 3D Monte Carlo simulations of MODIS observations. LES simulations of warm cloud spectral microphysics for cases of marine stratus and broken stratocumulus, each for two different values of cloud condensation nuclei density, produce cloud structures comprising droplet size distributions with and without drizzle-size drops. In this study, synthetic MODIS observations will be generated for each scene from 3D radiative transport simulations that consider the full droplet size distribution. The operational MODIS effective radius retrievals will then be applied to the simulated reflectances and the results compared with the LES microphysics.

  2. Virtual and Printed 3D Models for Teaching Crystal Symmetry and Point Groups

    ERIC Educational Resources Information Center

    Casas, Lluís; Estop, Eugènia

    2015-01-01

    Both virtual and printed 3D crystal models can help students and teachers deal with chemical education topics such as symmetry and point groups. In the present paper, two freely downloadable tools (interactive PDF files and a mobile app) are presented as examples of the application of 3D design to study point symmetry. The use of 3D printing to…

  3. An Empirical Point Error Model for Tls Derived Point Clouds

    NASA Astrophysics Data System (ADS)

    Ozendi, Mustafa; Akca, Devrim; Topan, Hüseyin

    2016-06-01

    The random error pattern of point clouds has a significant effect on the quality of the final 3D model. The magnitude and distribution of random errors should be modelled numerically. This work aims at developing such an anisotropic point error model, specifically for terrestrial laser scanner (TLS) acquired 3D point clouds. A priori precisions of the basic TLS observations, which are the range, horizontal angle and vertical angle, are determined by predefined and practical measurement configurations, performed in real-world test environments. The a priori precisions of the horizontal (σθ) and vertical (σα) angles are constant for each point of a data set, and can directly be determined through repetitive scanning of the same environment. In our practical tests, the precisions of the horizontal and vertical angles were found to be σθ = ±36.6 cc and σα = ±17.8 cc, respectively. On the other hand, the a priori precision of the range observation (σρ) is assumed to be a function of the range, the incidence angle of the incoming laser ray, and the reflectivity of the object surface. Hence, it is a variable, and is computed for each point individually by employing an empirically developed formula varying as σρ = ±2-12 mm for a FARO Focus X330 laser scanner. This procedure was followed by the computation of the error ellipsoid of each point using the law of variance-covariance propagation. The direction and size of the error ellipsoids were computed by the principal components transformation. The usability and feasibility of the model was investigated in real world scenarios. These investigations validated the suitability and practicality of the proposed method.
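
    The variance-covariance propagation step can be sketched generically: map the per-observation precisions (σρ, σθ, σα) through the Jacobian of the spherical-to-Cartesian conversion and take the principal components of the resulting covariance. This is a standard illustration of the propagation law, not the authors' exact formulas; angles are assumed to be in radians and the range in metres.

        import numpy as np

        def error_ellipsoid(rho, theta, alpha, s_rho, s_theta, s_alpha):
            ca, sa = np.cos(alpha), np.sin(alpha)
            ct, st = np.cos(theta), np.sin(theta)
            # Jacobian of (x, y, z) = (rho*ca*ct, rho*ca*st, rho*sa) w.r.t. (rho, theta, alpha)
            J = np.array([[ca * ct, -rho * ca * st, -rho * sa * ct],
                          [ca * st,  rho * ca * ct, -rho * sa * st],
                          [sa,       0.0,            rho * ca     ]])
            C_obs = np.diag([s_rho**2, s_theta**2, s_alpha**2])
            C_xyz = J @ C_obs @ J.T                 # propagated point covariance
            vals, vecs = np.linalg.eigh(C_xyz)      # principal components transformation
            return np.sqrt(vals), vecs              # ellipsoid semi-axis lengths and directions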

  4. CAST: Effective and Efficient User Interaction for Context-Aware Selection in 3D Particle Clouds.

    PubMed

    Yu, Lingyun; Efstathiou, Konstantinos; Isenberg, Petra; Isenberg, Tobias

    2016-01-01

    We present a family of three interactive Context-Aware Selection Techniques (CAST) for the analysis of large 3D particle datasets. For these datasets, spatial selection is an essential prerequisite to many other analysis tasks. Traditionally, such interactive target selection has been particularly challenging when the data subsets of interest were implicitly defined in the form of complicated structures of thousands of particles. Our new techniques SpaceCast, TraceCast, and PointCast improve usability and speed of spatial selection in point clouds through novel context-aware algorithms. They are able to infer a user's subtle selection intention from gestural input, can deal with complex situations such as partially occluded point clusters or multiple cluster layers, and can all be fine-tuned after the selection interaction has been completed. Together, they provide an effective and efficient tool set for the fast exploratory analysis of large datasets. In addition to presenting CAST, we report on a formal user study that compares our new techniques not only to each other but also to existing state-of-the-art selection methods. Our results show that CAST family members are virtually always faster than existing methods without tradeoffs in accuracy. In addition, qualitative feedback shows that PointCast and TraceCast were strongly favored by our participants for intuitiveness and efficiency. PMID:26390474

  5. Line segment extraction for large scale unorganized point clouds

    NASA Astrophysics Data System (ADS)

    Lin, Yangbin; Wang, Cheng; Cheng, Jun; Chen, Bili; Jia, Fukai; Chen, Zhonggui; Li, Jonathan

    2015-04-01

    Line segment detection in images is already a well-investigated topic, although it has received considerably less attention in 3D point clouds. Benefiting from current LiDAR devices, large-scale point clouds are becoming increasingly common. Most human-made objects have flat surfaces. Line segments that occur where pairs of planes intersect give important information regarding the geometric content of point clouds, which is especially useful for automatic building reconstruction and segmentation. This paper proposes a novel method that is capable of accurately extracting plane intersection line segments from large-scale raw scan points. The 3D line-support region, namely, a point set near a straight linear structure, is extracted simultaneously. The 3D line-support region is fitted by our Line-Segment-Half-Planes (LSHP) structure, which provides a geometric constraint for a line segment, making the line segment more reliable and accurate. We demonstrate our method on the point clouds of large-scale, complex, real-world scenes acquired by LiDAR devices. We also demonstrate the application of 3D line-support regions and their LSHP structures on urban scene abstraction.

  6. Processing Uav and LIDAR Point Clouds in Grass GIS

    NASA Astrophysics Data System (ADS)

    Petras, V.; Petrasova, A.; Jeziorska, J.; Mitasova, H.

    2016-06-01

    Today's methods of acquiring Earth surface data, namely lidar and unmanned aerial vehicle (UAV) imagery, non-selectively collect or generate large amounts of points. Point clouds from different sources vary in their properties such as number of returns, density, or quality. We present a set of tools with applications for different types of point clouds obtained by a lidar scanner, the structure from motion technique (SfM), and a low-cost 3D scanner. To take advantage of the vertical structure of multiple return lidar point clouds, we demonstrate tools to process them using 3D raster techniques which allow, for example, the development of custom vegetation classification methods. Dense point clouds obtained from UAV imagery, often containing redundant points, can be decimated using various techniques before further processing. We implemented and compared several decimation techniques in regard to their performance and the final digital surface model (DSM). Finally, we describe the processing of a point cloud from a low-cost 3D scanner, namely Microsoft Kinect, and its application for interaction with physical models. All the presented tools are open source and integrated in GRASS GIS, a multi-purpose open source GIS with remote sensing capabilities. The tools integrate with other open source projects, specifically the Point Data Abstraction Library (PDAL), the Point Cloud Library (PCL), and the OpenKinect libfreenect2 library, to benefit from the open source point cloud ecosystem. The implementation in GRASS GIS ensures long term maintenance and reproducibility not only by the scientific community but also by the original authors themselves.
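
    As a flavour of what decimation does, the sketch below keeps one point per grid cell. GRASS GIS offers several decimation modes, so this is only a generic illustration with an assumed cell size, not the implementation of any particular module.

        import numpy as np

        def grid_decimate(points, cell=0.05):
            # Keep the first point encountered in each occupied cell of a regular grid
            keys = np.floor((points - points.min(axis=0)) / cell).astype(np.int64)
            _, first_idx = np.unique(keys, axis=0, return_index=True)
            return points[np.sort(first_idx)]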

  7. Influence of 3D Effects on 1D Aerosol Retrievals in Synthetic, Partially Clouded Scenes

    NASA Astrophysics Data System (ADS)

    Stap, F. A.; Hasekamp, O. P.; Emde, C.

    2014-12-01

    Most satellite measurements of the microphysical and radiative properties of aerosol near clouds are either strictly screened for, or hindered by, sub-pixel cloud contamination. This may change with the advent of a new generation of aerosol retrieval algorithms, intended for multi-angle, multi-wavelength photo-polarimetric instruments such as POLDER3 on board PARASOL, which show the ability to separate aerosol and cloud particles. In order to obtain the required computational efficiency, these algorithms typically make use of 1D radiative transfer models and are thus unable to account for the 3D effects that occur in actual, partially clouded scenes. Here, we apply an aerosol retrieval algorithm, which employs a 1D radiative transfer code and the independent pixel approximation, to synthetic, 3D, partially clouded scenes calculated with the Monte Carlo radiative transfer code MYSTIC. The influence of the 3D effects due to clouds on the retrieved microphysical and optical aerosol properties is presented, and the ability of the algorithm to retrieve these properties in partially clouded scenes will be discussed.

  8. 3D Land Cover Classification Based on Multispectral LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Zou, Xiaoliang; Zhao, Guihua; Li, Jonathan; Yang, Yuanxi; Fang, Yong

    2016-06-01

    A multispectral lidar system can emit simultaneous laser pulses at different wavelengths. The reflected multispectral energy is captured through a receiver of the sensor, and the return signal together with the position and orientation information of the sensor is recorded. These recorded data are combined with GNSS/IMU data in further post-processing, forming high density multispectral 3D point clouds. As the first commercial multispectral airborne lidar sensor, the Optech Titan system is capable of collecting point cloud data in all three channels: at 532 nm visible (green), at 1064 nm near infrared (NIR) and at 1550 nm intermediate infrared (IR). It has become a new source of data for 3D land cover classification. The paper presents an Object Based Image Analysis (OBIA) approach that uses only multispectral lidar point cloud datasets for 3D land cover classification. The approach consists of three steps. Firstly, multispectral intensity images are segmented into image objects on the basis of multi-resolution segmentation integrating different scale parameters. Secondly, intensity objects are classified into nine categories by using customized classification index features and a combination of the multispectral reflectance with the vertical distribution of object features. Finally, accuracy assessment is conducted by comparing random reference sample points from Google imagery tiles with the classification results. The classification results show high overall accuracy for most of the land cover types. An overall accuracy of over 90% is achieved using multispectral lidar point clouds for 3D land cover classification.

  9. Accuracy assessment of building point clouds automatically generated from iphone images

    NASA Astrophysics Data System (ADS)

    Sirmacek, B.; Lindenbergh, R.

    2014-06-01

    Low-cost sensor generated 3D models can be useful for quick 3D urban model updating, yet the quality of the models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method using multi-view iPhone images or an iPhone video file as an input. We register such an automatically generated point cloud onto a TLS point cloud of the same object to discuss the accuracy, advantages and limitations of the iPhone generated point clouds. For the chosen example showcase, we have classified 1.23% of the iPhone point cloud points as outliers, and calculated the mean of the point-to-point distances to the TLS point cloud as 0.11 m. Since a TLS point cloud might also include measurement errors and noise, we computed local noise values for the point clouds from both sources. Mean (μ) and standard deviation (σ) of the roughness histograms are calculated as (μ1 = 0.44 m, σ1 = 0.071 m) and (μ2 = 0.025 m, σ2 = 0.037 m) for the iPhone and TLS point clouds respectively. Our experimental results indicate possible usage of the proposed automatic 3D model generation framework for 3D urban map updating, fusion and detail enhancement, and quick, real-time change detection purposes. However, further insights should be obtained first on the circumstances that are needed to guarantee a successful point cloud generation from smartphone images.
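
    The point-to-point evaluation against the TLS reference can be sketched with a nearest-neighbour query; the 0.5 m outlier threshold below is an assumed value for illustration, not the one used in the study.

        import numpy as np
        from scipy.spatial import cKDTree

        def cloud_to_cloud_stats(test_cloud, reference_cloud, outlier_thresh=0.5):
            # Distance from every test point (e.g. iPhone cloud) to its nearest reference point (e.g. TLS)
            d, _ = cKDTree(reference_cloud).query(test_cloud, k=1)
            outlier_percent = np.mean(d > outlier_thresh) * 100.0
            mean_dist = d[d <= outlier_thresh].mean()
            return mean_dist, outlier_percent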

  10. Knowledge-Based Object Detection in Laser Scanning Point Clouds

    NASA Astrophysics Data System (ADS)

    Boochs, F.; Karmacharya, A.; Marbs, A.

    2012-07-01

    Object identification and object processing in 3D point clouds have always posed challenges in terms of effectiveness and efficiency. In practice, this process is highly dependent on human interpretation of the scene represented by the point cloud data, as well as the set of modeling tools available for use. Such modeling algorithms are data-driven and concentrate on specific features of the objects, being accessible to numerical models. We present an approach that brings the human expert knowledge about the scene, the objects inside, and their representation by the data and the behavior of algorithms to the machine. This "understanding" enables the machine to assist human interpretation of the scene inside the point cloud. Furthermore, it allows the machine to understand possibilities and limitations of algorithms and to take this into account within the processing chain. This not only assists the researchers in defining optimal processing steps, but also provides suggestions when certain changes or new details emerge from the point cloud. Our approach benefits from the advancement in knowledge technologies within the Semantic Web framework. This advancement has provided a strong base for applications based on knowledge management. In the article we will present and describe the knowledge technologies used for our approach such as Web Ontology Language (OWL), used for formulating the knowledge base and the Semantic Web Rule Language (SWRL) with 3D processing and topologic built-ins, aiming to combine geometrical analysis of 3D point clouds, and specialists' knowledge of the scene and algorithmic processing.

  11. A 3D view of the outflow in the Orion Molecular Cloud 1 (OMC-1)

    NASA Astrophysics Data System (ADS)

    Nissen, H. D.; Cunningham, N. J.; Gustafsson, M.; Bally, J.; Lemaire, J.-L.; Favre, C.; Field, D.

    2012-04-01

    Context. Stars whose mass is an order of magnitude greater than the Sun play a prominent role in the evolution of galaxies, exploding as supernovae, triggering bursts of star formation and spreading heavy elements about their host galaxies. A fundamental aspect of star formation is the creation of an outflow. The fast outflow emerging from a region associated with massive star formation in the Orion Molecular Cloud 1 (OMC-1), located behind the Orion Nebula, appears to have been set in motion by an explosive event. Aims: We study the structure and dynamics of outflows in OMC-1. We combine radial velocity and proper motion data for near-IR emission of molecular hydrogen to obtain the first 3-dimensional (3D) structure of the OMC-1 outflow. Our work illustrates a new diagnostic tool for studies of star formation that will be exploited in the near future with the advent of high spatial resolution spectro-imaging in particular with data from the Atacama Large Millimeter Array (ALMA). Methods: We used published radial and proper motion velocities obtained from the shock-excited vibrational emission in the H2 v = 1-0 S(1) line at 2.122 μm obtained with the GriF instrument on the Canada-France-Hawaii Telescope, the Apache Point Observatory, the Anglo-Australian Observatory, and the Subaru Telescope. Results: These data give the 3D velocity of ejecta yielding a 3D reconstruction of the outflows. This allows one to view the material from different vantage points in space giving considerable insight into the geometry. Our analysis indicates that the ejection occurred ≲720 years ago from a distorted ring-like structure of ~15″ (6000 AU) in diameter centered on the proposed point of close encounter of the stars BN, source I and maybe also source n. We propose a simple model involving curvature of shock trajectories in magnetic fields through which the origin of the explosion and the center defined by extrapolated proper motions of BN, I and n may be brought into spatial

  12. Crop height determination with UAS point clouds

    NASA Astrophysics Data System (ADS)

    Grenzdörffer, G. J.

    2014-11-01

    The accurate determination of the height of agricultural crops helps to predict yield, biomass, etc. These relationships are of great importance not only for crop production but also in grassland management, because the available biomass and food quality are valuable information. However, no cost-efficient and automatic system for crop height determination is available. 3D point clouds generated from high resolution UAS imagery offer a new alternative. Two different approaches for crop height determination are presented. The "difference method", where the canopy height is determined by taking the difference between a current UAS surface model and an existing digital terrain model (DTM), is the most suitable and most accurate method. In situ measurements, vegetation indices and yield observations correlate well with the determined UAS crop heights.
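
    In code, the "difference method" amounts to a cell-wise subtraction of two co-registered rasters; the sketch below assumes the UAS surface model and the DTM are already aligned on the same grid.

        import numpy as np

        def crop_height(dsm, dtm, nodata=np.nan):
            # Canopy height model: UAS surface model minus terrain model, negative values clipped
            chm = dsm - dtm
            return np.where(np.isfinite(chm), np.clip(chm, 0.0, None), nodata)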

  13. Retrieval of cloud microphysical parameters from INSAT-3D: a feasibility study using radiative transfer simulations

    NASA Astrophysics Data System (ADS)

    Jinya, John; Bipasha, Paul S.

    2016-05-01

    Clouds strongly modulate the Earth's energy balance and its atmosphere through their interaction with solar and terrestrial radiation. They interact with radiation in various ways, such as scattering, emission and absorption. By observing the resulting changes in radiation at different wavelengths, cloud properties can be estimated. Cloud properties are of utmost importance in studying different weather and climate phenomena. At present, no satellite provides cloud microphysical parameters over the Indian region with high temporal resolution. INSAT-3D imager observations in 6 spectral channels from a geostationary platform offer the opportunity to study continuous cloud properties over the Indian region. Visible (0.65 μm) and shortwave-infrared (1.67 μm) channel radiances can be used to retrieve cloud microphysical parameters such as cloud optical thickness (COT) and cloud effective radius (CER). In this paper, we have carried out a feasibility study with the objective of cloud microphysics retrieval. For this, an inter-comparison of 15 globally available radiative transfer models (RTMs) was carried out with the aim of generating a Look-up Table (LUT). The SBDART model was chosen for the simulations. The sensitivity of each spectral channel to different cloud properties was investigated. The inputs to the RT model were configured over our study region (50°S - 50°N and 20°E - 130°E) and a large number of simulations were carried out using random input vectors to generate the LUT. The determination of cloud optical thickness and cloud effective radius from spectral reflectance measurements constitutes the inverse problem and is typically solved by comparing the measured reflectances with entries in the LUT and searching for the combination of COT and CER that gives the best fit. The products are available on the website www.mosdac.gov.in
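
    The LUT inversion described above can be sketched as a best-fit search over pre-computed reflectances; the array layout and names below are illustrative assumptions, not the operational product's interface.

        import numpy as np

        def retrieve_cot_cer(measured_refl, lut_refl, lut_cot, lut_cer):
            # lut_refl: (n_entries, n_channels) simulated reflectances; lut_cot/lut_cer: (n_entries,)
            residuals = np.sum((lut_refl - measured_refl) ** 2, axis=1)  # per-entry misfit
            best = np.argmin(residuals)
            return lut_cot[best], lut_cer[best]  # COT, CER pair giving the best fit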

  14. Precipitation Processes Developed During ARM (1997), TOGA COARE (1992) GATE (1974), SCSMEX (1998), and KWAJEX (1999): Consistent 3D, Semi-3D and 3D Cloud Resolving Model Simulations

    NASA Technical Reports Server (NTRS)

    Tao, W.-K.; Hou, A.; Atlas, R.; Starr, D.; Sud, Y.

    2003-01-01

    Real clouds and cloud systems are inherently three-dimensional (3D). Because of the limitations in computer resources, however, most cloud-resolving models (CRMs) today are still two-dimensional (2D) and have been used to study the response of clouds to large-scale forcing. In earlier 3D simulations, the model domain was small, and the integration time was 6 hours. Only recently have 3D experiments been performed for multi-day periods for tropical cloud systems with large horizontal domains at the National Center for Atmospheric Research (NCAR) and at NASA Goddard Space Flight Center. At Goddard, a 3D Goddard Cumulus Ensemble (GCE) model was used to simulate periods during TOGA COARE, GATE, SCSMEX, ARM, and KWAJEX using a 512 by 512 km domain (with 2-km resolution). The results indicate that surface precipitation and latent heating profiles are very similar between the 2D and 3D GCE model simulations. The major objectives of this paper are: (1) to assess the performance of the super-parameterization technique, (2) to calculate and examine the surface energy (especially radiation) and water budgets, and (3) to identify the differences and similarities in the organization and entrainment rates of convection between simulated 2D and 3D cloud systems.

  15. Evolving point-cloud features for gender classification

    NASA Astrophysics Data System (ADS)

    Keen, Brittany; Fouts, Aaron; Rizki, Mateen; Tamburino, Louis; Mendoza-Schrock, Olga L.

    2011-06-01

    In this paper we explore the use of histogram features extracted from 3D point clouds of human subjects for gender classification. Experiments are conducted using point clouds drawn from the CAESAR anthropometric database provided by the Air Force Research Laboratory (AFRL) Human Effectiveness Directorate and SAE International. This database contains approximately 4400 high resolution LIDAR whole body scans of carefully posed human subjects. Features are extracted from each point cloud by embedding the cloud in a series of cylindrical shapes and computing a point count for each cylinder that characterizes a region of the subject. These measurements define rotationally invariant histogram features that are processed by a classifier to label the gender of each subject. Preliminary results using cylinder sizes defined by human experts demonstrate that gender can be predicted with 98% accuracy for the type of high density point cloud found in the CAESAR database. When point cloud densities are reduced to levels that might be obtained using stand-off sensors, gender classification accuracy degrades. We introduce an evolutionary algorithm to optimize the number and size of the cylinders used to define histogram features. The objective of this optimization process is to identify a set of cylindrical features that reduces the error rate when predicting gender from low density point clouds. A wrapper approach is used to interleave feature selection with classifier evaluation to train the evolutionary algorithm. Results of classification accuracy achieved using the evolved features are compared to the baseline feature set defined by human experts.
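
    A rotation-invariant cylindrical histogram can be sketched as counts per radial shell and vertical slice around the body axis; the radii and slice count below are assumptions for illustration, not the expert-defined or evolved cylinder sizes from the study.

        import numpy as np

        def cylinder_histogram(points, radii=(0.1, 0.2, 0.3, 0.5), n_slices=10):
            p = points - points.mean(axis=0)               # centre the subject
            r = np.hypot(p[:, 0], p[:, 1])                 # radial distance from the vertical axis
            z = p[:, 2]
            z_edges = np.linspace(z.min(), z.max(), n_slices + 1)
            r_edges = np.concatenate(([0.0], np.asarray(radii), [r.max() + 1.0]))
            hist, _, _ = np.histogram2d(z, r, bins=[z_edges, r_edges])
            return (hist / len(points)).ravel()            # feature vector for a classifier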

  16. Alignment of Point Cloud DSMs from Tls and Uav Platforms

    NASA Astrophysics Data System (ADS)

    Persad, R. A.; Armenakis, C.

    2015-08-01

    The co-registration of 3D point clouds has received considerable attention from various communities, particularly those in photogrammetry, computer graphics and computer vision. Although significant progress has been made, various challenges such as coarse alignment using multi-sensory data with different point densities and minimal overlap still exist. There is a need to address such data integration issues, particularly with the advent of new data collection platforms such as unmanned aerial vehicles (UAVs). In this study, we propose an approach to align 3D point clouds derived photogrammetrically from approximately vertical UAV images with point clouds measured by terrestrial laser scanners (TLS). The method begins by automatically extracting 3D surface keypoints from both point cloud datasets. Afterwards, regions of interest around each keypoint are established to facilitate the computation of scale-invariant descriptors for each of them. We use the popular SURF descriptor for matching the keypoints. In our experiments, we report the accuracies of the automatically derived transformation parameters in comparison to manually-derived reference parameter data.

  17. Small-scale effects of underwater bubble clouds on ocean reflectance: 3-D modeling results.

    PubMed

    Piskozub, Jacek; Stramski, Dariusz; Terrill, Eric; Melville, W Kendall

    2009-07-01

    We examined the effect of individual bubble clouds on remote-sensing reflectance of the ocean with a 3-D Monte Carlo model of radiative transfer. The concentrations and size distribution of bubbles were defined based on acoustical measurements of bubbles in the surface ocean. The light scattering properties of bubbles for various void fractions were calculated using Mie scattering theory. We show how the spatial pattern, magnitude, and spectral behavior of remote-sensing reflectance produced by modeled bubble clouds change due to variations in their geometric and optical properties as well as the background optical properties of the ambient water. We also determined that for realistic sizes of bubble clouds, a plane-parallel horizontally homogeneous geometry (1-D radiative transfer model) is inadequate for modeling water-leaving radiance above the cloud. PMID:19582089

  18. Long-term monitoring of structures through point cloud analysis

    NASA Astrophysics Data System (ADS)

    Jafari, Bahman; Khaloo, Ali; Lattanzi, David

    2016-04-01

    Modern remote sensing technologies have enabled the creation of high-resolution 3D point clouds of infrastructure systems. In particular, photogrammetric reconstructions using dense Structure-from-Motion algorithms can now yield point clouds with the necessary resolution to capture small-strain displacements. By tracking changes in these point clouds over time, displacements can be measured, leading to strain and stress estimates for long-term structural evaluations. This study determines the accuracy of a comparative point cloud analysis technique for measuring deflections in high-resolution point clouds of structural elements. Utilizing a combination of a recently developed point cloud generation process and localized nearest-neighbors cloud comparisons, the analytical technique is designed for long-term field scenarios and requires no artificial tracking targets or camera calibrations. A series of flexural laboratory experiments were performed in order to test the approach. The results indicate sub-millimeter accuracy in measuring the vertical deflection, making it suitable for the small-displacement analysis of a variety of large-scale infrastructure systems. Ongoing work seeks to extend this technique for comparison with as-built and finite element models.

  19. Equisolid Fisheye Stereovision Calibration and Point Cloud Computation

    NASA Astrophysics Data System (ADS)

    Moreau, J.; Ambellouis, A.; Ruichek, Y.

    2013-10-01

    This paper deals with dense 3D point cloud computation of urban environments around a vehicle. The idea is to use two fisheye views to get 3D coordinates of the surrounding scene's points. The first contribution of this paper is the adaptation of an omnidirectional stereovision self-calibration algorithm to an equisolid fisheye projection model. The second contribution is the description of a new epipolar matching based on a scan-circle principle and a dynamic programming technique adapted for fisheye images. The method is validated using both synthetic images for which ground truth is available and real images of an urban scene.

  20. Influence of 3D Radiative Effects on Satellite Retrievals of Cloud Properties

    NASA Technical Reports Server (NTRS)

    Varnai, Tamas; Marshak, Alexander; Einaudi, Franco (Technical Monitor)

    2001-01-01

    When cloud properties are retrieved from satellite observations, the calculations apply 1D theory to the 3D world: they only consider vertical structures and ignore horizontal cloud variability. This presentation discusses how big the resulting errors can be in the operational retrievals of cloud optical thickness. A new technique was developed to estimate the magnitude of potential errors by analyzing the spatial patterns of visible and infrared images. The proposed technique was used to set error bars for optical depths retrieved from new MODIS measurements. Initial results indicate that the 1 km resolution retrievals are subject to abundant uncertainties. Averaging over 50 by 50 km areas reduces the errors, but does not remove them completely; even in the relatively simple case of high sun (30 degree zenith angle), about a fifth of the examined areas had biases larger than ten percent. As expected, errors increase substantially for more oblique illumination.

  1. Accelerating 3D radiative transfer for realistic OCO-2 cloud-aerosol scenes

    NASA Astrophysics Data System (ADS)

    Schmidt, S.; Massie, S. T.; Platnick, S. E.; Song, S.

    2014-12-01

    The recently launched NASA OCO-2 satellite is expected to provide important information about the carbon dioxide distribution in the troposphere down to Earth's surface. Among the challenges in accurately retrieving CO2 concentration from the hyperspectral observations in each of the three OCO-2 bands are cloud and aerosol impacts on the observed radiances. Preliminary studies based on idealized cloud fields have shown that they can lead to spectrally dependent radiance perturbations which differ from band to band and may lead to biases in the derived products. Since OCO-2 was inserted into the A-Train, it is only natural to capitalize on sensor synergies with other instruments, in this case on the cloud and aerosol scene context that is provided by MODIS and CALIOP. Our approach is to use cloud imagery (especially for inhomogeneous scenes) for predicting the hyperspectral observations within a collocated OCO-2 footprint and comparing with the observations, which allows a systematic assessment of the causes for biases in the retrievals themselves, and their manifestation in spectral residuals for various different cloud types and distributions. Simulating a large number of cases with line-by-line calculations using a 3D code is computationally prohibitive even on large parallel computers. Therefore, we developed a number of acceleration approaches. In this contribution, we will analyze them in terms of their speed and accuracy, using cloud fields from airborne imagery collected during a recent NASA field experiment (SEAC4RS) as proxy for different types of inhomogeneous cloud fields. The broader goal of this effort is to improve OCO-2 retrievals in the vicinity of cloud fields, and to extend the range of conditions under which the instrument will provide useful results.

  2. Automatic Classification of Point Clouds Extracted from Ultracam Stereo Images

    NASA Astrophysics Data System (ADS)

    Modiri, M.; Masumi, M.; Eftekhari, A.

    2015-12-01

    Automatic extraction of building roofs, streets and vegetation is a prerequisite for many GIS (Geographic Information System) applications, such as urban planning and 3D building reconstruction. Nowadays, with advances in image processing and image matching, dense point clouds can be generated by combining feature-based and template-based matching techniques. Point cloud classification is an important step in automatic feature extraction. Therefore, in this study, the classification of point clouds based on color and shape features is implemented. We use two images with suitable overlap acquired by an UltraCam-X camera. The images cover Yasouj in Iran, a semi-urban area with buildings of different heights. Our goal is to classify buildings and vegetation in these point clouds. In this article, an algorithm is developed based on the color characteristics of the point cloud, an appropriate DEM (Digital Elevation Model) and a point clustering method. Firstly, trees and high vegetation are classified using the points' color characteristics and a vegetation index. Then, a bare-earth DEM is used to separate ground and non-ground points. Non-ground points are then divided into clusters based on height and local neighborhood. One or more clusters are initialized based on the maximum height of the points and then each cluster is extended by applying height and neighborhood constraints. Finally, planar roof segments are extracted from each cluster of points following a region-growing technique.

  3. Estimating Aircraft Heading Based on Laserscanner Derived Point Clouds

    NASA Astrophysics Data System (ADS)

    Koppanyi, Z.; Toth, C. K.

    2015-03-01

    Using LiDAR sensors for tracking and monitoring an operating aircraft is a new application. In this paper, we present data processing methods to estimate the heading of a taxiing aircraft using laser point clouds. During the data acquisition, a Velodyne HDL-32E laser scanner tracked a moving Cessna 172 airplane. The point clouds captured at different times were used for heading estimation. After addressing the problem and specifying the equation of motion to reconstruct the aircraft point cloud from the consecutive scans, three methods are investigated here. The first requires a reference model to estimate the relative angle from the captured data by fitting different cross-sections (horizontal profiles). In the second approach, the iterative closest point (ICP) method is used between the consecutive point clouds to determine the horizontal translation of the captured aircraft body. Regarding the ICP, three different versions were compared, namely, the ordinary 3D, the 3-DoF 3D and the 2-DoF 3D ICP. It was found that the 2-DoF 3D ICP provides the best performance. Finally, the last algorithm searches for the unknown heading and velocity parameters by minimizing the volume of the reconstructed plane. The three methods were compared using three types of test data, which are distinguished by object-sensor distance, heading and velocity. We found that the ICP algorithm fails at long distances and when the aircraft motion direction is perpendicular to the scan plane, but the first and the third methods give robust and accurate results at a 40 m object distance and at ~12 knots for a small Cessna airplane.
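
    A 2-DoF ICP of the kind compared above can be sketched as estimating only a horizontal (x, y) translation between consecutive scans; the details below are illustrative, not the authors' implementation.

        import numpy as np
        from scipy.spatial import cKDTree

        def icp_2dof(source, target, n_iter=30, tol=1e-4):
            # Estimate the horizontal translation that best aligns source onto target
            tree = cKDTree(target[:, :2])
            moved = source.astype(float).copy()
            t = np.zeros(2)
            for _ in range(n_iter):
                _, idx = tree.query(moved[:, :2], k=1)            # closest-point pairing
                delta = (target[idx, :2] - moved[:, :2]).mean(axis=0)
                t += delta
                moved[:, :2] += delta
                if np.linalg.norm(delta) < tol:
                    break
            heading = np.degrees(np.arctan2(t[1], t[0]))          # motion direction from translation
            return t, heading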

  4. A Multiscale Constraints Method Localization of 3D Facial Feature Points

    PubMed Central

    Li, Hong-an; Zhang, Yongxin; Li, Zhanli; Li, Huilin

    2015-01-01

    It is an important task to locate facial feature points due to the widespread application of 3D human face models in medical fields. In this paper, we propose a 3D facial feature point localization method that combines the relative angle histograms with multiscale constraints. Firstly, the relative angle histogram of each vertex in a 3D point distribution model is calculated; then the cluster set of the facial feature points is determined using the cluster algorithm. Finally, the feature points are located precisely according to multiscale integral features. The experimental results show that the feature point localization accuracy of this algorithm is better than that of the localization method using the relative angle histograms. PMID:26539244

  5. Quality Assessment and Comparison of Smartphone and Leica C10 Laser Scanner Based Point Clouds

    NASA Astrophysics Data System (ADS)

    Sirmacek, Beril; Lindenbergh, Roderik; Wang, Jinhu

    2016-06-01

    3D urban models are valuable for urban map generation, environment monitoring, safety planning and educational purposes. For 3D measurement of urban structures, generally airborne laser scanning sensors or multi-view satellite images are used as a data source. However, close-range sensors (such as terrestrial laser scanners) and low-cost cameras (which can generate point clouds based on photogrammetry) can provide denser sampling of 3D surface geometry. Unfortunately, terrestrial laser scanning sensors are expensive and trained persons are needed to use them for point cloud acquisition. A potentially effective 3D model can instead be generated with a low-cost smartphone sensor. Herein, we show examples of using smartphone camera images to generate 3D models of urban structures. We compare a smartphone based 3D model of an example structure with a terrestrial laser scanning point cloud of the structure. This comparison gives us the opportunity to discuss the differences in terms of geometrical correctness, as well as the advantages, disadvantages and limitations in data acquisition and processing. We also discuss how smartphone based point clouds can help to solve further problems with 3D urban model generation in a practical way. We show that terrestrial laser scanning point clouds which do not have color information can be colored using smartphones. The experiments, discussions and scientific findings might be insightful for future studies in the fast, easy and low-cost 3D urban model generation field.

  6. Automatic Detection of Building Points from LIDAR and Dense Image Matching Point Clouds

    NASA Astrophysics Data System (ADS)

    Maltezos, E.; Ioannidis, C.

    2015-08-01

    This study aims to automatically detect building points: (a) from a LIDAR point cloud, using simple filtering techniques that enhance the geometric properties of each point, and (b) from a point cloud which is extracted by applying dense image matching to high resolution colour-infrared (CIR) digital aerial imagery using the semi-global matching (SGM) stereo method. As a first step, the vegetation is removed. For the LIDAR point cloud, two different methods are implemented and evaluated, using initially the normals and afterwards the roughness values: (1) the proposed scan line smooth filtering and a thresholding process, and (2) a bilateral filtering and a thresholding process. For the case of the CIR point cloud, a variation of the normalized differential vegetation index (NDVI) is computed for the same purpose. Afterwards, the bare earth is extracted using a morphological operator and removed from the rest of the scene so as to retain the building points. The results of the buildings extracted by each approach over an urban area in northern Greece are evaluated using an existing orthoimage as reference; also, the results are compared with the corresponding classified buildings extracted by two commercial software packages. Finally, in order to verify the utility and functionality of the extracted building points that achieved the best accuracy, 3D models in terms of Level of Detail 1 (LoD 1) and a 3D building change detection process are indicatively produced for a sub-region of the overall scene.
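
    The bare-earth step can be illustrated with a simple morphological filter on a rasterised surface: a grey-scale opening approximates the ground, and cells rising well above it are treated as non-ground candidates. The window size and height threshold are assumptions, not the authors' settings.

        import numpy as np
        from scipy.ndimage import grey_opening

        def bare_earth_mask(dsm, window=15, height_thresh=2.0):
            ground = grey_opening(dsm, size=(window, window))    # approximate terrain surface
            non_ground = (dsm - ground) > height_thresh          # building/vegetation candidates
            return ~non_ground                                   # True where bare earth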

  7. Distributed Network, Wireless and Cloud Computing Enabled 3-D Ultrasound; a New Medical Technology Paradigm

    PubMed Central

    Meir, Arie; Rubinsky, Boris

    2009-01-01

    Medical technologies are indispensable to modern medicine. However, they have become exceedingly expensive and complex and are not available to the economically disadvantaged majority of the world population in underdeveloped as well as developed parts of the world. For example, according to the World Health Organization about two thirds of the world population does not have access to medical imaging. In this paper we introduce a new medical technology paradigm centered on wireless technology and cloud computing that was designed to overcome the problems of increasing health technology costs. We demonstrate the value of the concept with an example; the design of a wireless, distributed network and central (cloud) computing enabled three-dimensional (3-D) ultrasound system. Specifically, we demonstrate the feasibility of producing a 3-D high end ultrasound scan at a central computing facility using the raw data acquired at the remote patient site with an inexpensive low end ultrasound transducer designed for 2-D, through a mobile device and wireless connection link between them. Producing high-end 3D ultrasound images with simple low-end transducers reduces the cost of imaging by orders of magnitude. It also removes the requirement of having a highly trained imaging expert at the patient site, since the need for hand-eye coordination and the ability to reconstruct a 3-D mental image from 2-D scans, which is a necessity for high quality ultrasound imaging, is eliminated. This could enable relatively untrained medical workers in developing nations to administer imaging and a more accurate diagnosis, effectively saving the lives of people. PMID:19936236

  8. Distributed network, wireless and cloud computing enabled 3-D ultrasound; a new medical technology paradigm.

    PubMed

    Meir, Arie; Rubinsky, Boris

    2009-01-01

    Medical technologies are indispensable to modern medicine. However, they have become exceedingly expensive and complex and are not available to the economically disadvantaged majority of the world population in underdeveloped as well as developed parts of the world. For example, according to the World Health Organization about two thirds of the world population does not have access to medical imaging. In this paper we introduce a new medical technology paradigm centered on wireless technology and cloud computing that was designed to overcome the problems of increasing health technology costs. We demonstrate the value of the concept with an example; the design of a wireless, distributed network and central (cloud) computing enabled three-dimensional (3-D) ultrasound system. Specifically, we demonstrate the feasibility of producing a 3-D high end ultrasound scan at a central computing facility using the raw data acquired at the remote patient site with an inexpensive low end ultrasound transducer designed for 2-D, through a mobile device and wireless connection link between them. Producing high-end 3D ultrasound images with simple low-end transducers reduces the cost of imaging by orders of magnitude. It also removes the requirement of having a highly trained imaging expert at the patient site, since the need for hand-eye coordination and the ability to reconstruct a 3-D mental image from 2-D scans, which is a necessity for high quality ultrasound imaging, is eliminated. This could enable relatively untrained medical workers in developing nations to administer imaging and a more accurate diagnosis, effectively saving the lives of people. PMID:19936236

  9. Motion Estimation System Utilizing Point Cloud Registration

    NASA Technical Reports Server (NTRS)

    Chen, Qi (Inventor)

    2016-01-01

    A system and method of estimating motion of a machine is disclosed. The method may include determining a first point cloud and a second point cloud corresponding to an environment in a vicinity of the machine. The method may further include generating a first extended gaussian image (EGI) for the first point cloud and a second EGI for the second point cloud. The method may further include determining a first EGI segment based on the first EGI and a second EGI segment based on the second EGI. The method may further include determining a first two dimensional distribution for points in the first EGI segment and a second two dimensional distribution for points in the second EGI segment. The method may further include estimating motion of the machine based on the first and second two dimensional distributions.

  10. Mobile viewer system for virtual 3D space using infrared LED point markers and camera

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Taneji, Shoto

    2006-09-01

    The authors have developed a 3D workspace system using collaborative imaging devices. A stereoscopic display enables this system to project 3D information. In this paper, we describe the position detecting system for a see-through 3D viewer. A 3D display system is a useful technology for virtual reality, mixed reality and augmented reality. We have researched spatial imaging and interaction systems. We have previously proposed 3D displays using a slit as a parallax barrier, a lenticular screen and holographic optical elements (HOEs) for displaying active images 1)2)3)4). The purpose of this paper is to propose an interactive system using these 3D imaging technologies. The observer can view virtual images in the real world when the user watches the screen of a see-through 3D viewer. The goal of our research is to build the display system as follows: when users see the real world through the mobile viewer, the display system gives users virtual 3D images which float in the air, and observers can touch these floating images and interact with them, for example so that children can model play clay. The key technologies of this system are the position recognition system and the spatial imaging display. The 3D images are presented by the improved parallax barrier 3D display. Here the authors discuss the measuring method of the mobile viewer using infrared LED point markers and a camera in the 3D workspace (augmented reality world). The authors show the geometric analysis of the proposed measuring method, which is the simplest method using a single camera rather than a stereo camera, and the results of our viewer system.

  11. Progress in Understanding the Impacts of 3-D Cloud Structure on MODIS Cloud Property Retrievals for Marine Boundary Layer Clouds

    NASA Technical Reports Server (NTRS)

    Zhang, Zhibo; Werner, Frank; Miller, Daniel; Platnick, Steven; Ackerman, Andrew; DiGirolamo, Larry; Meyer, Kerry; Marshak, Alexander; Wind, Galina; Zhao, Guangyu

    2016-01-01

    Theory: A novel framework based on a 2-D Taylor expansion for quantifying the uncertainty in MODIS retrievals caused by sub-pixel reflectance inhomogeneity (Zhang et al. 2016). How cloud vertical structure influences MODIS LWP retrievals (Miller et al. 2016). Observation: Analysis of failed MODIS cloud property retrievals (Cho et al. 2015). Cloud property retrievals from 15 m resolution ASTER observations (Werner et al. 2016). Modeling: LES-satellite observation simulator (Zhang et al. 2012, Miller et al. 2016).

  12. Human Body 3D Posture Estimation Using Significant Points and Two Cameras

    PubMed Central

    Juang, Chia-Feng; Chen, Teng-Chang; Du, Wei-Chin

    2014-01-01

    This paper proposes a three-dimensional (3D) human posture estimation system that locates 3D significant body points based on 2D body contours extracted from two cameras without using any depth sensors. The 3D significant body points that are located by this system include the head, the center of the body, the tips of the feet, the tips of the hands, the elbows, and the knees. First, a linear support vector machine- (SVM-) based segmentation method is proposed to distinguish the human body from the background in red, green, and blue (RGB) color space. The SVM-based segmentation method uses not only normalized color differences but also included angle between pixels in the current frame and the background in order to reduce shadow influence. After segmentation, 2D significant points in each of the two extracted images are located. A significant point volume matching (SPVM) method is then proposed to reconstruct the 3D significant body point locations by using 2D posture estimation results. Experimental results show that the proposed SVM-based segmentation method shows better performance than other gray level- and RGB-based segmentation approaches. This paper also shows the effectiveness of the 3D posture estimation results in different postures. PMID:24883422

  13. Generalized recovery algorithm for 3D super-resolution microscopy using rotating point spread functions.

    PubMed

    Shuang, Bo; Wang, Wenxiao; Shen, Hao; Tauzin, Lawrence J; Flatebo, Charlotte; Chen, Jianbo; Moringo, Nicholas A; Bishop, Logan D C; Kelly, Kevin F; Landes, Christy F

    2016-01-01

    Super-resolution microscopy with phase masks is a promising technique for 3D imaging and tracking. Due to the complexity of the resultant point spread functions, generalized recovery algorithms are still missing. We introduce a 3D super-resolution recovery algorithm that works for a variety of phase masks generating 3D point spread functions. A fast deconvolution process generates initial guesses, which are further refined by least squares fitting. Overfitting is suppressed using a machine learning determined threshold. Preliminary results on experimental data show that our algorithm can be used to super-localize 3D adsorption events within a porous polymer film and is useful for evaluating potential phase masks. Finally, we demonstrate that parallel computation on graphics processing units can reduce the processing time required for 3D recovery. Simulations reveal that, through desktop parallelization, the ultimate limit of real-time processing is possible. Our program is the first open source recovery program for generalized 3D recovery using rotating point spread functions. PMID:27488312

  14. Generalized recovery algorithm for 3D super-resolution microscopy using rotating point spread functions

    NASA Astrophysics Data System (ADS)

    Shuang, Bo; Wang, Wenxiao; Shen, Hao; Tauzin, Lawrence J.; Flatebo, Charlotte; Chen, Jianbo; Moringo, Nicholas A.; Bishop, Logan D. C.; Kelly, Kevin F.; Landes, Christy F.

    2016-08-01

    Super-resolution microscopy with phase masks is a promising technique for 3D imaging and tracking. Due to the complexity of the resultant point spread functions, generalized recovery algorithms are still missing. We introduce a 3D super-resolution recovery algorithm that works for a variety of phase masks generating 3D point spread functions. A fast deconvolution process generates initial guesses, which are further refined by least squares fitting. Overfitting is suppressed using a machine learning determined threshold. Preliminary results on experimental data show that our algorithm can be used to super-localize 3D adsorption events within a porous polymer film and is useful for evaluating potential phase masks. Finally, we demonstrate that parallel computation on graphics processing units can reduce the processing time required for 3D recovery. Simulations reveal that, through desktop parallelization, the ultimate limit of real-time processing is possible. Our program is the first open source recovery program for generalized 3D recovery using rotating point spread functions.

  15. Generalized recovery algorithm for 3D super-resolution microscopy using rotating point spread functions

    PubMed Central

    Shuang, Bo; Wang, Wenxiao; Shen, Hao; Tauzin, Lawrence J.; Flatebo, Charlotte; Chen, Jianbo; Moringo, Nicholas A.; Bishop, Logan D. C.; Kelly, Kevin F.; Landes, Christy F.

    2016-01-01

    Super-resolution microscopy with phase masks is a promising technique for 3D imaging and tracking. Due to the complexity of the resultant point spread functions, generalized recovery algorithms are still missing. We introduce a 3D super-resolution recovery algorithm that works for a variety of phase masks generating 3D point spread functions. A fast deconvolution process generates initial guesses, which are further refined by least squares fitting. Overfitting is suppressed using a machine learning determined threshold. Preliminary results on experimental data show that our algorithm can be used to super-localize 3D adsorption events within a porous polymer film and is useful for evaluating potential phase masks. Finally, we demonstrate that parallel computation on graphics processing units can reduce the processing time required for 3D recovery. Simulations reveal that, through desktop parallelization, the ultimate limit of real-time processing is possible. Our program is the first open source recovery program for generalized 3D recovery using rotating point spread functions. PMID:27488312

  16. Nonrigid point registration for 2D curves and 3D surfaces and its various applications

    NASA Astrophysics Data System (ADS)

    Wang, Hesheng; Fei, Baowei

    2013-06-01

    A nonrigid B-spline-based point-matching (BPM) method is proposed to match dense surface points. The method solves both the point correspondence and nonrigid transformation without feature extraction. The registration method integrates a motion model, which combines a global transformation and a B-spline-based local deformation, into a robust point-matching framework. The point correspondence and deformable transformation are estimated simultaneously by fuzzy correspondence and by a deterministic annealing technique. Prior information about global translation, rotation and scaling is incorporated into the optimization. A local B-spline motion model decreases the degrees of freedom for optimization and thus enables the registration of a larger number of feature points. The performance of the BPM method has been demonstrated and validated using synthesized 2D and 3D data, mouse MRI and micro-CT images. The proposed BPM method can be used to register feature point sets, 2D curves, 3D surfaces and various image data.

  17. Brute Force Matching Between Camera Shots and Synthetic Images from Point Clouds

    NASA Astrophysics Data System (ADS)

    Boerner, R.; Kröhnert, M.

    2016-06-01

    3D point clouds, acquired by state-of-the-art terrestrial laser scanning (TLS) techniques, provide spatial information with accuracies of up to several millimetres. Unfortunately, common TLS data carry no spectral information about the covered scene. However, matching TLS data with images is important for monoplotting purposes and point cloud colouration. Well-established methods solve this issue by matching close-range images with point cloud data, either by fitting optical camera systems on top of the laser scanners or by using ground control points. The approach addressed in this paper aims at matching 2D images with 3D point cloud data from a freely moving camera within an environment covered by a large 3D point cloud, e.g. a 3D city model. This free movement is a key advantage for augmented reality applications and real-time measurements. To this end, a so-called real image, captured by a smartphone camera, is matched with a so-called synthetic image, which is generated by projecting the 3D point cloud data back to a synthetic projection centre whose exterior orientation parameters match those of the real image, assuming an ideal distortion-free camera.
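
    A minimal sketch of the reverse-projection step that produces such a synthetic image, assuming an ideal distortion-free pinhole camera with known pose (R, t) and focal length in pixels; the one-point-per-pixel depth-buffer splatting is a simplification.

      import numpy as np

      def synthetic_image(points, colours, R, t, f_px, width, height):
          """points: (N, 3) world coordinates, colours: (N, 3); R, t: world-to-camera pose."""
          cam = (R @ points.T).T + t                       # transform into the camera frame
          keep = cam[:, 2] > 0                             # keep points in front of the camera
          cam, colours = cam[keep], colours[keep]
          u = f_px * cam[:, 0] / cam[:, 2] + width / 2.0
          v = f_px * cam[:, 1] / cam[:, 2] + height / 2.0
          img = np.zeros((height, width, 3), dtype=np.uint8)
          depth = np.full((height, width), np.inf)
          for (x, y, z), colour in zip(np.column_stack([u, v, cam[:, 2]]), colours):
              xi, yi = int(round(x)), int(round(y))
              if 0 <= xi < width and 0 <= yi < height and z < depth[yi, xi]:
                  depth[yi, xi] = z                        # keep only the closest point per pixel
                  img[yi, xi] = colour
          return img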

  18. Non-Iterative Rigid 2D/3D Point-Set Registration Using Semidefinite Programming

    NASA Astrophysics Data System (ADS)

    Khoo, Yuehaw; Kapoor, Ankur

    2016-07-01

    We describe a convex programming framework for pose estimation in 2D/3D point-set registration with unknown point correspondences. We give two mixed-integer nonlinear program (MINP) formulations of the 2D/3D registration problem when there are multiple 2D images, and propose convex relaxations for both of the MINPs to semidefinite programs (SDP) that can be solved efficiently by interior point methods. Our approach to the 2D/3D registration problem is non-iterative in nature as we jointly solve for pose and correspondence. Furthermore, these convex programs can readily incorporate feature descriptors of points to enhance registration results. We prove that the convex programs exactly recover the solution to the original nonconvex 2D/3D registration problem under noiseless condition. We apply these formulations to the registration of 3D models of coronary vessels to their 2D projections obtained from multiple intra-operative fluoroscopic images. For this application, we experimentally corroborate the exact recovery property in the absence of noise and further demonstrate robustness of the convex programs in the presence of noise.

  19. Database guided detection of anatomical landmark points in 3D images of the heart

    NASA Astrophysics Data System (ADS)

    Karavides, Thomas; Esther Leung, K. Y.; Paclik, Pavel; Hendriks, Emile A.; Bosch, Johan G.

    2010-03-01

    Automated landmark detection may prove invaluable in the analysis of real-time three-dimensional (3D) echocardiograms. By detecting 3D anatomical landmark points, the standard anatomical views can be extracted automatically in apically acquired 3D ultrasound images of the left ventricle, for better standardization of visualization and objective diagnosis. Furthermore, the landmarks can serve as an initialization for other analysis methods, such as segmentation. The described algorithm applies landmark detection in perpendicular planes of the 3D dataset. The landmark detection exploits a large database of expert-annotated images, using an extensive set of Haar features for fast classification. The detection is performed using two cascades of Adaboost classifiers in a coarse-to-fine scheme. The method is evaluated by measuring the distance between detected and manually indicated landmark points in 25 patients. The method can detect landmarks accurately in the four-chamber view (apex: 7.9±7.1 mm, septal mitral valve point: 5.6±2.7 mm, lateral mitral valve point: 4.0±2.6 mm) and the two-chamber view (apex: 7.1±6.7 mm, anterior mitral valve point: 5.8±3.5 mm, inferior mitral valve point: 4.5±3.1 mm). The results compare well to those reported by others.

  20. Melting points and chemical bonding properties of 3d transition metal elements

    NASA Astrophysics Data System (ADS)

    Takahara, Wataru

    2014-08-01

    The melting points of 3d transition metal elements show an unusual local minimum at manganese across Period 4 of the periodic table. The chemical bonding properties of scandium, titanium, vanadium, chromium, manganese, iron, cobalt, nickel and copper are investigated by the DV-Xα cluster method. The melting points are found to correlate with the bond overlap populations. The chemical bonding nature therefore appears to be the primary factor governing the melting points.

  1. Point Cloud Based Change Detection - an Automated Approach for Cloud-based Services

    NASA Astrophysics Data System (ADS)

    Collins, Patrick; Bahr, Thomas

    2016-04-01

    The fusion of stereo photogrammetric point clouds with LiDAR data or terrain information derived from SAR interferometry has significant potential for 3D topographic change detection. The present case study uses the latest point cloud generation and analysis capabilities to examine a landslide that occurred in the village of Malin in Maharashtra, India, on 30 July 2014, and affected an area of ca. 44,000 m². It focuses on Pléiades high-resolution satellite imagery and the Airbus DS WorldDEM™ as a product of the TanDEM-X mission. The case study was performed using the COTS software package ENVI 5.3. Integration of custom processes and automation is supported by IDL (Interactive Data Language); thus, ENVI analytics runs via the object-oriented, IDL-based ENVITask API. The pre-event topography is represented by the WorldDEM™ product, delivered on a 12 m x 12 m raster and based on the EGM2008 geoid (called the pre-DEM). For the post-event situation, a Pléiades 1B stereo image pair of the affected AOI was obtained. The ENVITask "GeneratePointCloudsByDenseImageMatching" was used to extract passive point clouds in LAS format from the panchromatic stereo datasets: • A dense image-matching algorithm identifies corresponding points in the two images. • A block adjustment refines the 3D coordinates that describe the scene geometry. • Additionally, the WorldDEM™ was input to constrain the range of heights in the matching area, and subsequently the length of the epipolar line. The "PointCloudFeatureExtraction" task was then executed to generate the post-event digital surface model (the post-DEM) from the photogrammetric point clouds. Post-processing consisted of the following steps: • Adding the geoid component (EGM 2008) to the post-DEM. • Reprojecting the pre-DEM to the UTM Zone 43N (WGS-84) coordinate system and resizing it. • Subtracting the pre-DEM from the post-DEM. • Filtering and threshold-based classification of
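
    A minimal sketch of the DEM-differencing and threshold classification steps listed above, assuming the pre- and post-event DEMs are already co-registered on the same grid and vertical datum; the threshold value and the synthetic grids are illustrative only.

      import numpy as np

      def classify_change(pre_dem, post_dem, threshold=2.0):
          """Return the elevation difference and a -1/0/+1 (loss / no change / gain) map."""
          diff = post_dem - pre_dem                        # elevation change in metres
          change = np.zeros_like(diff, dtype=np.int8)
          change[diff > threshold] = 1                     # deposition / accumulation
          change[diff < -threshold] = -1                   # erosion / depletion
          return diff, change

      # Illustrative use on synthetic grids with a simulated landslide scar.
      rng = np.random.default_rng(1)
      pre = rng.normal(600.0, 5.0, size=(200, 200))
      post = pre.copy()
      post[80:120, 60:140] -= 8.0
      elevation_change, change_map = classify_change(pre, post)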

  2. Congruence analysis of point clouds from unstable stereo image sequences

    NASA Astrophysics Data System (ADS)

    Jepping, C.; Bethmann, F.; Luhmann, T.

    2014-06-01

    This paper deals with the correction of exterior orientation parameters of stereo image sequences over deformed free-form surfaces without control points. Such an imaging situation can occur, for example, during photogrammetric car crash test recordings where onboard high-speed stereo cameras are used to measure 3D surfaces. As a result of such measurements, 3D point clouds of deformed surfaces are generated for a complete stereo sequence. The first objective of this research focusses on the development and investigation of methods for the detection of corresponding spatial and temporal tie points within the stereo image sequences (by stereo image matching and 3D point tracking) that are robust enough for a reliable handling of occlusions and other disturbances that may occur. The second objective of this research is the analysis of object deformations in order to detect stable areas (congruence analysis). For this purpose a RANSAC-based method for congruence analysis has been developed. This process is based on the sequential transformation of randomly selected point groups from one epoch to another by using a 3D similarity transformation. The paper gives a detailed description of the congruence analysis. The approach has been tested successfully on synthetic and real image data.

  3. Robust sharp features infer from point clouds

    NASA Astrophysics Data System (ADS)

    Cao, Juming; Wushour, Slam; Yao, Xinhui; Li, NaiQian; Liang, Jin; Liang, Xinhe; Liu, Jianwei

    2011-07-01

    A novel sharp feature extraction method is proposed in this paper. First, we calculate the displacement between each point and its local weighted average position, label points with salient values as candidate sharp feature points, and estimate the normal direction of those candidates by means of local PCA. The estimated normals are then refined by inferring the orientation of points near the candidate sharp feature region and by bilateral filtering in the normal field of the point cloud. Finally, we project the displacement between each point and its local weighted average position onto the normal direction and use the value of this projection as the criterion for whether a point is labeled a sharp feature. The extracted discrete sharp feature points are represented as piecewise B-spline lines. Experiments on both real scanner point clouds and synthesized point clouds show that our sharp feature extraction method is simple to implement, efficient in both space and time, and robust to the noise, outliers and uneven sampling that are inherent in point clouds.
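
    A minimal sketch of the core criterion described above: the displacement of each point from its local weighted average, projected onto a PCA-estimated normal. The neighbourhood size, Gaussian weighting and score threshold are illustrative assumptions, and the normal refinement and B-spline fitting stages are omitted.

      import numpy as np
      from scipy.spatial import cKDTree

      def sharp_feature_scores(points, k=20, sigma=0.05):
          """Score = |(p - local weighted mean) . normal|; large scores mark sharp features."""
          tree = cKDTree(points)
          dists, idx = tree.query(points, k=k + 1)         # neighbour 0 is the point itself
          scores = np.empty(len(points))
          for i, (d, nb) in enumerate(zip(dists[:, 1:], idx[:, 1:])):
              w = np.exp(-(d ** 2) / (2.0 * sigma ** 2))   # Gaussian distance weights
              local_mean = (w[:, None] * points[nb]).sum(0) / w.sum()
              cov = np.cov((points[nb] - local_mean).T)
              normal = np.linalg.eigh(cov)[1][:, 0]        # eigenvector of the smallest eigenvalue
              scores[i] = abs(np.dot(points[i] - local_mean, normal))
          return scores

      pts = np.random.default_rng(2).random((500, 3))      # illustrative point set
      candidates = np.nonzero(sharp_feature_scores(pts) > 0.01)[0]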

  4. Examination about Influence for Precision of 3d Image Measurement from the Ground Control Point Measurement and Surface Matching

    NASA Astrophysics Data System (ADS)

    Anai, T.; Kochi, N.; Yamada, M.; Sasaki, T.; Otani, H.; Sasaki, D.; Nishimura, S.; Kimoto, K.; Yasui, N.

    2015-05-01

    As 3D image measurement software is now widely used thanks to recent developments in computer-vision technology, 3D measurement from images has expanded its application field from desktop objects to topographic surveys of large geographical areas. In particular, the orientation, which used to be a complicated process in image measurement, can now be performed automatically simply by taking many pictures around the object. For fully textured objects, the 3D measurement of surface features is carried out fully automatically from the oriented images, which has greatly facilitated the acquisition of dense, high-precision 3D point clouds from images. Against this background, all-around 3D measurement of small and medium-sized objects can now be performed with a single off-the-shelf digital camera, and we have also developed technology for topographic measurement from airborne images taken by a small UAV [1~5]. In the present study, we examine the accuracy of surface measurement (matching) for small objects using experimental data. For topographic measurement, we examine the influence of the GCP distribution on accuracy, again using experimental data. In addition, we examine the differences in the analytical results of several 3D image measurement software packages. This document reviews the processing flow of orientation and 3D measurement in each software package and explains its features. To verify the precision of stereo matching, we measured a test plane and a test sphere of known form and assessed the results. For the topographic measurement we used airborne image data photographed at the test field in Yadorigi, Matsuda City, Kanagawa Prefecture, Japan. Ground control points were established and measured by RTK-GPS and total station, and we show the results of the analysis made

  5. Reconstruction of 3D Shapes of Opaque Cumulus Clouds from Airborne Multiangle Imaging: A Proof-of-Concept

    NASA Astrophysics Data System (ADS)

    Davis, A. B.; Bal, G.; Chen, J.

    2015-12-01

    Operational remote sensing of microphysical and optical cloud properties is invariably predicated on the assumption of plane-parallel slab geometry for the targeted cloud. The sole benefit of this often-questionable assumption about the cloud is that it leads to one-dimensional (1D) radiative transfer (RT)---a textbook, computationally tractable model. We present new results as evidence that, thanks to converging advances in 3D RT, inverse problem theory, algorithm implementation, and computer hardware, we are at the dawn of a new era in cloud remote sensing where we can finally go beyond the plane-parallel paradigm. Granted, the plane-parallel/1D RT assumption is reasonable for spatially extended stratiform cloud layers, as well as for smoothly distributed background aerosol layers. However, these 1D RT-friendly scenarios exclude cases that are critically important for climate physics. 1D RT---whence operational cloud remote sensing---fails catastrophically for cumuliform clouds that have fully 3D outer shapes and internal structures driven by shallow or deep convection. For these situations, the first order of business in a robust characterization by remote sensing is to abandon the slab geometry framework and determine the 3D geometry of the cloud, as a first step toward bona fide 3D cloud tomography. With this specific goal in mind, we deliver a proof-of-concept for an entirely new kind of remote sensing applicable to 3D clouds. It is based on highly simplified 3D RT and exploits multi-angular suites of cloud images at high spatial resolution. Airborne sensors like AirMSPI readily acquire such data. The key element of the reconstruction algorithm is a sophisticated solution of the nonlinear inverse problem via linearization of the forward model and an iteration scheme supported, where necessary, by adaptive regularization. Currently, the demo uses a 2D setting to show how either vertical profiles or horizontal slices of the cloud can be accurately reconstructed.

  6. LIVAS: a 3-D multi-wavelength aerosol/cloud database based on CALIPSO and EARLINET

    NASA Astrophysics Data System (ADS)

    Amiridis, V.; Marinou, E.; Tsekeri, A.; Wandinger, U.; Schwarz, A.; Giannakaki, E.; Mamouri, R.; Kokkalis, P.; Binietoglou, I.; Solomos, S.; Herekakis, T.; Kazadzis, S.; Gerasopoulos, E.; Proestakis, E.; Kottas, M.; Balis, D.; Papayannis, A.; Kontoes, C.; Kourtidis, K.; Papagiannopoulos, N.; Mona, L.; Pappalardo, G.; Le Rille, O.; Ansmann, A.

    2015-07-01

    We present LIVAS (LIdar climatology of Vertical Aerosol Structure for space-based lidar simulation studies), a 3-D multi-wavelength global aerosol and cloud optical database, optimized to be used for future space-based lidar end-to-end simulations of realistic atmospheric scenarios as well as retrieval algorithm testing activities. The LIVAS database provides averaged profiles of aerosol optical properties for the potential spaceborne laser operating wavelengths of 355, 532, 1064, 1570 and 2050 nm and of cloud optical properties at the wavelength of 532 nm. The global database is based on CALIPSO observations at 532 and 1064 nm and on aerosol-type-dependent backscatter- and extinction-related Ångström exponents, derived from EARLINET (European Aerosol Research Lidar Network) ground-based measurements for the UV and scattering calculations for the IR wavelengths, using a combination of input data from AERONET, suitable aerosol models and recent literature. The required spectral conversions are calculated for each of the CALIPSO aerosol types and are applied to CALIPSO backscatter and extinction data corresponding to the aerosol type retrieved by the CALIPSO aerosol classification scheme. A cloud optical database based on CALIPSO measurements at 532 nm is also provided, neglecting wavelength conversion due to approximately neutral scattering behavior of clouds along the spectral range of LIVAS. Averages of particle linear depolarization ratio profiles at 532 nm are provided as well. Finally, vertical distributions for a set of selected scenes of specific atmospheric phenomena (e.g., dust outbreaks, volcanic eruptions, wild fires, polar stratospheric clouds) are analyzed and spectrally converted so as to be used as case studies for spaceborne lidar performance assessments. The final global data set includes 4-year (1 January 2008-31 December 2011) time-averaged CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations) data on a uniform grid of 1
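
    A minimal sketch of the wavelength conversion described above, transferring a 532 nm backscatter profile to another wavelength with a backscatter-related Ångström exponent; the exponent value and the example profile are illustrative (in LIVAS the exponents are aerosol-type dependent).

      import numpy as np

      def convert_backscatter(beta_532, target_wavelength_nm, angstrom_exponent):
          """beta(lambda2) = beta(lambda1) * (lambda1 / lambda2) ** angstrom_exponent."""
          return beta_532 * (532.0 / target_wavelength_nm) ** angstrom_exponent

      beta_532 = np.array([2.5e-3, 1.8e-3, 0.9e-3])        # km^-1 sr^-1, illustrative profile
      beta_355 = convert_backscatter(beta_532, 355.0, angstrom_exponent=1.4)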

  7. 3D Cloud Radiative Effects on Aerosol Optical Thickness Retrievals in Cumulus Cloud Fields in the Biomass Burning Region in Brazil

    NASA Technical Reports Server (NTRS)

    Wen, Guo-Yong; Marshak, Alexander; Cahalan, Robert F.

    2004-01-01

    Aerosol amount in clear regions of a cloudy atmosphere is a critical parameter in studying the interaction between aerosols and clouds. Since the global cloud cover is about 50%, cloudy scenes are often encountered in satellite images. Aerosols are more or less transparent, while clouds are extremely reflective in the visible spectrum of solar radiation. The radiative transfer in mixed clear-cloudy conditions is highly three-dimensional (3D). This paper focuses on estimating the 3D effects on aerosol optical thickness retrievals using Monte Carlo simulations. An ASTER image of cumulus cloud fields in the biomass burning region in Brazil is simulated in this study. The MODIS products (i.e., cloud optical thickness, particle effective radius, cloud top pressure, surface reflectance, etc.) are used to construct the cloud property and surface reflectance fields. To estimate the cloud 3-D effects, we assume a plane-parallel stratification of aerosol properties in the 60 km x 60 km ASTER image. The simulated solar radiation at the top of the atmosphere is compared with plane-parallel calculations. Furthermore, the 3D cloud radiative effects on aerosol optical thickness retrieval are estimated.

  8. Approximate registration of point clouds with large scale differences

    NASA Astrophysics Data System (ADS)

    Novak, D.; Schindler, K.

    2013-10-01

    3D reconstruction of objects is a basic task in many fields, including surveying, engineering, entertainment and cultural heritage. The task is nowadays often accomplished with a laser scanner, which produces dense point clouds, but lacks accurate colour information, and lacks per-point accuracy measures. An obvious solution is to combine laser scanning with photogrammetric recording. In that context, the problem arises to register the two datasets, which feature large scale, translation and rotation differences. The absence of approximate registration parameters (3D translation, 3D rotation and scale) precludes the use of fine-registration methods such as ICP. Here, we present a method to register realistic photogrammetric and laser point clouds in a fully automated fashion. The proposed method decomposes the registration into a sequence of simpler steps: first, two rotation angles are determined by finding dominant surface normal directions, then the remaining parameters are found with RANSAC followed by ICP and scale refinement. These two steps are carried out at low resolution, before computing a precise final registration at higher resolution.

  9. Status of the phenomena representation, 3D modeling, and cloud-based software architecture development

    SciTech Connect

    Smith, Curtis L.; Prescott, Steven; Kvarfordt, Kellie; Sampath, Ram; Larson, Katie

    2015-09-01

    Early in 2013, researchers at the Idaho National Laboratory outlined a technical framework to support the implementation of state-of-the-art probabilistic risk assessment to predict the safety performance of advanced small modular reactors. From that vision of the advanced framework for risk analysis, specific tasks have been underway in order to implement the framework. This report discusses the current development of several tasks related to the framework implementation, including a discussion of a 3D physics engine that represents the motion of objects (including collision and debris modeling), cloud-based analysis tools such as a Bayesian-inference engine, and scenario simulations. These tasks were performed during 2015 as part of the technical work associated with the Advanced Reactor Technologies Program.

  10. Self-Consistent 3D Modeling of Electron Cloud Dynamics and Beam Response

    SciTech Connect

    Furman, Miguel; Furman, M.A.; Celata, C.M.; Kireeff-Covo, M.; Sonnad, K.G.; Vay, J.-L.; Venturini, M.; Cohen, R.; Friedman, A.; Grote, D.; Molvik, A.; Stoltz, P.

    2007-04-02

    We present recent advances in the modeling of beam electron-cloud dynamics, including surface effects such as secondary electron emission, gas desorption, etc., and volumetric effects such as ionization of residual gas and charge-exchange reactions. Simulations for the HCX facility with the code WARP/POSINST will be described and their validity demonstrated by benchmarks against measurements. The code models a wide range of physical processes and uses a number of novel techniques, including a large-timestep electron mover that smoothly interpolates between direct orbit calculation and guiding-center drift equations, and a new computational technique, based on a Lorentz transformation to a moving frame, that allows the cost of a fully 3D simulation to be reduced to that of a quasi-static approximation.

  11. Interactive Cosmetic Makeup of a 3D Point-Based Face Model

    NASA Astrophysics Data System (ADS)

    Kim, Jeong-Sik; Choi, Soo-Mi

    We present an interactive system for cosmetic makeup of a point-based face model acquired by 3D scanners. We first enhance the texture of a face model in 3D space using low-pass Gaussian filtering, median filtering, and histogram equalization. The user is provided with a stereoscopic display and haptic feedback, and can perform simulated makeup tasks including the application of foundation, color makeup, and lip gloss. Fast rendering is achieved by processing surfels using the GPU, and we use a BSP tree data structure and a dynamic local refinement of the facial surface to provide interactive haptics. We have implemented a prototype system and evaluated its performance.

  12. Precipitation processes developed during TOGA COARE (1992), GATE (1974), SCSMEX (1998), and KWAJEX (1999): 3D Cloud Resolving Model Simulation

    NASA Technical Reports Server (NTRS)

    Tao, W.-K.

    2006-01-01

    Real clouds and cloud systems are inherently three-dimensional (3D). Because of the limitations in computer resources, however, most cloud-resolving models (CRMs) today are still two-dimensional (2D). A few 3D CRMs have been used to study the response of clouds to large-scale forcing. In these 3D simulations, the model domain was small, and the integration time was 6 hours. Only recently have 3D experiments been performed for multi-day periods for tropical cloud systems with large horizontal domains at the National Center for Atmospheric Research (NCAR), NOAA GFDL, the U.K. Met. Office, Colorado State University and NASA Goddard Space Flight Center. An improved 3D Goddard Cumulus Ensemble (GCE) model was recently used to simulate periods during TOGA COARE (December 19-27, 1992), GATE (September 1-7, 1974), SCSMEX (May 18-26, June 2-11, 1998) and KWAJEX (August 7-13, August 18-21, and August 29-September 12, 1999) using a 512 by 512 km domain and 41 vertical layers. The major objectives of this paper are: (1) to identify the differences and similarities in the simulated precipitation processes and their associated surface and water energy budgets in TOGA COARE, GATE, KWAJEX, and SCSMEX, and (2) to assess the impact of microphysics, radiation budget and surface fluxes on the organization of convection in the tropics.

  13. LiDAR Segmentation using Suitable Seed Points for 3D Building Extraction

    NASA Astrophysics Data System (ADS)

    Abdullah, S. M.; Awrangjeb, M.; Lu, G.

    2014-08-01

    Effective building detection and roof reconstruction are in high demand in the remote sensing research community. In this paper, we present a new automatic LiDAR point cloud segmentation method using suitable seed points for building detection and roof plane extraction. Firstly, the LiDAR point cloud is separated into "ground" and "non-ground" points based on an analysis of the DEM with a height threshold. Each non-ground point is marked as coplanar or non-coplanar based on a coplanarity analysis. Commencing from the maximum LiDAR point height towards the minimum, all the LiDAR points on each height level are extracted and separated into several groups based on 2D distance. From each group, lines are extracted and the coplanar point nearest to the midpoint of each line is considered a seed point. This seed point and its neighbouring points are utilised to generate the plane equation. The plane is grown in a region growing fashion until no new points can be added. A robust rule-based tree removal method is applied subsequently to remove planar segments on trees. Four different rules are applied in this method. Finally, the boundary of each object is extracted from the segmented LiDAR point cloud. The method is evaluated with six different data sets consisting of hilly and densely vegetated areas. The experimental results indicate that the proposed method offers high building detection and roof plane extraction rates compared to a recently proposed method.
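
    A minimal sketch of the seed-based region growing step: a plane is fitted to the seed point's neighbourhood and grown breadth-first while the point-to-plane distance stays within a tolerance. Seed selection, coplanarity analysis and the rule-based tree removal are omitted, and the parameters are illustrative assumptions.

      import numpy as np
      from collections import deque
      from scipy.spatial import cKDTree

      def grow_plane(points, seed_idx, k=15, tol=0.15):
          """Return indices of the planar segment grown from one seed point."""
          tree = cKDTree(points)
          nb = tree.query(points[seed_idx], k=k)[1]
          centroid = points[nb].mean(axis=0)
          normal = np.linalg.svd(points[nb] - centroid)[2][-1]     # smallest right singular vector
          segment, queue, visited = {seed_idx}, deque([seed_idx]), {seed_idx}
          while queue:
              current = queue.popleft()
              for j in tree.query(points[current], k=k)[1]:
                  if j in visited:
                      continue
                  visited.add(j)
                  if abs(np.dot(points[j] - centroid, normal)) < tol:
                      segment.add(j)
                      queue.append(j)
          return np.array(sorted(segment))

      roof = np.random.default_rng(3).random((1000, 3)) * [10.0, 10.0, 0.05]  # noisy planar patch
      segment_idx = grow_plane(roof, seed_idx=0)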

  14. Comparative analysis of video processing and 3D rendering for cloud video games using different virtualization technologies

    NASA Astrophysics Data System (ADS)

    Bada, Adedayo; Alcaraz-Calero, Jose M.; Wang, Qi; Grecos, Christos

    2014-05-01

    This paper describes a comprehensive empirical performance evaluation of 3D video processing employing the physical/virtual architecture implemented in a cloud environment. Different virtualization technologies, virtual video cards and various 3D benchmark tools have been utilized in order to analyse the optimal performance in the context of 3D online gaming applications. This study highlights 3D video rendering performance under each type of hypervisor, and other factors including network I/O, disk I/O and memory usage. Comparisons of these factors under well-known virtual display technologies such as VNC, Spice and Virtual 3D adaptors reveal the strengths and weaknesses of the various hypervisors with respect to 3D video rendering and streaming.

  15. Curb-Based Street Floor Extraction from Mobile Terrestrial LIDAR Point Cloud

    NASA Astrophysics Data System (ADS)

    Ibrahim, S.; Lichti, D.

    2012-07-01

    Mobile terrestrial laser scanners (MTLS) produce huge 3D point clouds describing the terrestrial surface, from which objects such as different types of street furniture can be extracted. Extraction and modelling of the street curb and the street floor from MTLS point clouds is important for many applications such as right-of-way asset inventory, road maintenance and city planning. The proposed pipeline for the curb and street floor extraction consists of a sequence of five steps: organizing the 3D point cloud and nearest neighbour search; 3D density-based segmentation to segment the ground; morphological analysis to refine out the ground segment; derivative of Gaussian filtering to detect the curb; solving the travelling salesman problem to form a closed polygon of the curb and a point-in-polygon test to extract the street floor. Two mobile laser scanning datasets of different scenes are tested with the proposed pipeline. The results of the extracted curb and street floor are evaluated against truth data. The obtained detection rates for the extracted street floor for the datasets are 95% and 96.53%. This study presents a novel approach to the detection and extraction of the road curb and the street floor from unorganized 3D point clouds captured by MTLS. It utilizes only the 3D coordinates of the point cloud.
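
    A minimal sketch of the derivative-of-Gaussian curb detection on a single road cross-section: the elevation profile is filtered with a first-order Gaussian derivative and the strong responses mark height jumps. The profile, the sigma and the 0.15 m curb step are illustrative; the segmentation, travelling-salesman and point-in-polygon stages are omitted.

      import numpy as np
      from scipy.ndimage import gaussian_filter1d

      x = np.linspace(0.0, 10.0, 500)                      # metres across the street
      z = np.where(x < 7.0, 0.0, 0.15)                     # flat road, then a 15 cm curb
      z = z + np.random.default_rng(4).normal(0.0, 0.005, z.size)

      response = gaussian_filter1d(z, sigma=5, order=1)    # derivative-of-Gaussian filtering
      curb_x = x[np.abs(response) > 0.5 * np.abs(response).max()]
      print(f"curb detected near x = {curb_x.mean():.2f} m")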

  16. 3D multiple-point statistics simulation using 2D training images

    NASA Astrophysics Data System (ADS)

    Comunian, A.; Renard, P.; Straubhaar, J.

    2012-03-01

    One of the main issues in the application of multiple-point statistics (MPS) to the simulation of three-dimensional (3D) blocks is the lack of a suitable 3D training image. In this work, we compare three methods of overcoming this issue using information coming from bidimensional (2D) training images. One approach is based on the aggregation of probabilities. The other approaches are novel. One relies on merging the lists obtained using the impala algorithm from diverse 2D training images, creating a list of compatible data events that is then used for the MPS simulation. The other (s2Dcd) is based on sequential simulations of 2D slices constrained by the conditioning data computed at the previous simulation steps. These three methods are tested on the reproduction of two 3D images that are used as references, and on a real case study where two training images of sedimentary structures are considered. The tests show that it is possible to obtain 3D MPS simulations with at least two 2D training images. The simulations obtained, in particular those obtained with the s2Dcd method, are close to the references, according to a number of comparison criteria. The CPU time required to simulate with the method s2Dcd is from two to four orders of magnitude smaller than the one required by a MPS simulation performed using a 3D training image, while the results obtained are comparable. This computational efficiency and the possibility of using MPS for 3D simulation without the need for a 3D training image facilitates the inclusion of MPS in Monte Carlo, uncertainty evaluation, and stochastic inverse problems frameworks.

  17. Pre-Processing of Point-Data from Contact and Optical 3D Digitization Sensors

    PubMed Central

    Budak, Igor; Vukelić, Djordje; Bračun, Drago; Hodolič, Janko; Soković, Mirko

    2012-01-01

    Contemporary 3D digitization systems employed in reverse engineering (RE) feature ever-growing scanning speeds with the ability to generate a large quantity of points in a unit of time. Although advantageous for the quality and efficiency of RE modelling, the huge volume of point data can turn into a serious practical problem later on, when the CAD model is generated. In addition, 3D digitization processes are very often plagued by measuring errors, which can be attributed to the very nature of measuring systems, various characteristics of the digitized objects and subjective errors by the operator, all of which contribute to problems in the CAD model generation process. This paper presents an integral system for the pre-processing of point data, i.e., filtering, smoothing and reduction, based on a cross-sectional RE approach. In the course of the proposed system development, major emphasis was placed on the module for point data reduction, which was designed according to a novel approach with integrated deviation analysis and fuzzy logic reasoning. The developed system was verified through its application on three case studies, on point data from objects of versatile geometries obtained by contact and laser 3D digitization systems. The obtained results demonstrate the effectiveness of the system. PMID:22368513

  18. First Prismatic Building Model Reconstruction from Tomosar Point Clouds

    NASA Astrophysics Data System (ADS)

    Sun, Y.; Shahzad, M.; Zhu, X.

    2016-06-01

    This paper demonstrates for the first time the potential of explicitly modelling the individual roof surfaces to reconstruct 3-D prismatic building models using spaceborne tomographic synthetic aperture radar (TomoSAR) point clouds. The proposed approach is modular and works as follows: it first extracts the buildings via DSM generation and cutting-off the ground terrain. The DSM is smoothed using BM3D denoising method proposed in (Dabov et al., 2007) and a gradient map of the smoothed DSM is generated based on height jumps. Watershed segmentation is then adopted to oversegment the DSM into different regions. Subsequently, height and polygon complexity constrained merging is employed to refine (i.e., to reduce) the retrieved number of roof segments. Coarse outline of each roof segment is then reconstructed and later refined using quadtree based regularization plus zig-zag line simplification scheme. Finally, height is associated to each refined roof segment to obtain the 3-D prismatic model of the building. The proposed approach is illustrated and validated over a large building (convention center) in the city of Las Vegas using TomoSAR point clouds generated from a stack of 25 images using Tomo-GENESIS software developed at DLR.

  19. Precipitation Processes developed during ARM (1997), TOGA COARE (1992), GATE (1974), SCSMEX (1998), and KWAJEX (1999), Consistent 2D, semi-3D and 3D Cloud Resolving Model Simulations

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Hou, A.; Atlas, R.; Starr, D.; Sud, Y.

    2003-01-01

    Real clouds and cloud systems are inherently three-dimensional (3D). Because of the limitations in computer resources, however, most cloud-resolving models (CRMs) today are still two-dimensional (2D). A few 3D CRMs have been used to study the response of clouds to large-scale forcing. In these 3D simulations, the model domain was small, and the integration time was 6 hours. The major objectives of this paper are: (1) to assess the performance of the super-parameterization technique (i.e. is 2D or semi-3D CRM appropriate for the super-parameterization?); (2) calculate and examine the surface energy (especially radiation) and water budgets; (3) identify the differences and similarities in the organization and entrainment rates of convection between simulated 2D and 3D cloud systems.

  20. Architecture of web services in the enhancement of real-time 3D video virtualization in cloud environment

    NASA Astrophysics Data System (ADS)

    Bada, Adedayo; Wang, Qi; Alcaraz-Calero, Jose M.; Grecos, Christos

    2016-04-01

    This paper proposes a new approach to improving the application of 3D video rendering and streaming by jointly exploring and optimizing both cloud-based virtualization and web-based delivery. The proposed web service architecture firstly establishes a software virtualization layer based on QEMU (Quick Emulator), an open-source virtualization software that has been able to virtualize system components except for 3D rendering, which is still in its infancy. The architecture then explores the cloud environment to boost the speed of the rendering at the QEMU software virtualization layer. The capabilities and inherent limitations of Virgil 3D, which is one of the most advanced 3D virtual Graphics Processing Unit (GPU) available, are analyzed through benchmarking experiments and integrated into the architecture to further speed up the rendering. Experimental results are reported and analyzed to demonstrate the benefits of the proposed approach.

  1. 3D Point Correspondence by Minimum Description Length in Feature Space.

    PubMed

    Chen, Jiun-Hung; Zheng, Ke Colin; Shapiro, Linda G

    2010-01-01

    Finding point correspondences plays an important role in automatically building statistical shape models from a training set of 3D surfaces. For the point correspondence problem, Davies et al. [1] proposed a minimum-description-length-based objective function to balance the training errors and generalization ability. A recent evaluation study [2] that compares several well-known 3D point correspondence methods for modeling purposes shows that the MDL-based approach [1] is the best method. We adapt the MDL-based objective function for a feature space that can exploit nonlinear properties in point correspondences, and propose an efficient optimization method to minimize the objective function directly in the feature space, given that the inner product of any vector pair can be computed in the feature space. We further employ a Mercer kernel [3] to define the feature space implicitly. A key aspect of our proposed framework is the generalization of the MDL-based objective function to kernel principal component analysis (KPCA) [4] spaces and the design of a gradient-descent approach to minimize such an objective function. We compare the generalized MDL objective function on KPCA spaces with the original one and evaluate their abilities in terms of reconstruction errors and specificity. From our experimental results on different sets of 3D shapes of human body organs, the proposed method performs significantly better than the original method. PMID:25328917

  2. Cloud-point determination for crude oils

    SciTech Connect

    Kruka, V.R.; Cadena, E.R.; Long, T.E.

    1995-08-01

    The cloud point represents the temperature at which wax or paraffin begins to precipitate from a hydrocarbon solution. Conventional American Soc. for Testing and Materials (ASTM) procedures for cloud-point determination are not applicable to dark crude oils and also do not account for potential subcooling of the wax. A review of possible methods and testing with several crude oils indicate that a reliable method consists of determining the temperature at which wax deposits begin to form on a cooled surface exposed to warm, flowing oil. A concurrent thermal analysis of the waxy hydrocarbon can indicate the presence of possible multiple wax-precipitation temperature regions in the solution.

  3. LIDAR, Point Clouds, and their Archaeological Applications

    SciTech Connect

    White, Devin A

    2013-01-01

    It is common in contemporary archaeological literature, in papers at archaeological conferences, and in grant proposals to see heritage professionals use the term LIDAR to refer to high spatial resolution digital elevation models and the technology used to produce them. The goal of this chapter is to break that association and introduce archaeologists to the world of point clouds, in which LIDAR is only one member of a larger family of techniques to obtain, visualize, and analyze three-dimensional measurements of archaeological features. After describing how point clouds are constructed, there is a brief discussion on the currently available software and analytical techniques designed to make sense of them.

  4. Isotropic 3D Super-resolution Imaging with a Self-bending Point Spread Function

    PubMed Central

    Jia, Shu; Vaughan, Joshua C.; Zhuang, Xiaowei

    2014-01-01

    Airy beams maintain their intensity profiles over a large propagation distance without substantial diffraction and exhibit lateral bending during propagation [1-5]. This unique property has been exploited for micromanipulation of particles [6], generation of plasma channels [7] and guidance of plasmonic waves [8], but has not been explored for high-resolution optical microscopy. Here, we introduce a self-bending point spread function (SB-PSF) based on Airy beams for three-dimensional (3D) super-resolution fluorescence imaging. We designed a side-lobe-free SB-PSF and implemented a two-channel detection scheme to enable unambiguous 3D localization of fluorescent molecules. The lack of diffraction and the propagation-dependent lateral bending make the SB-PSF well suited for precise 3D localization of molecules over a large imaging depth. Using this method, we obtained super-resolution imaging with isotropic 3D localization precision of 10-15 nm over a 3 μm imaging depth from ∼2000 photons per localization. PMID:25383090

  5. A formal classification of 3D medial axis points and their local geometry.

    PubMed

    Giblin, Peter; Kimia, Benjamin B

    2004-02-01

    This paper proposes a novel hypergraph skeletal representation for 3D shape based on a formal derivation of the generic structure of its medial axis. By classifying each skeletal point by its order of contact, we show that, generically, the medial axis consists of five types of points, which are then organized into sheets, curves, and points: 1) sheets (manifolds with boundary) which are the locus of bitangent spheres with regular tangency A1(2) (Ak(n) notation means n distinct k-fold tangencies of the sphere of contact, as explained in the text); two types of curves, 2) the intersection curve of three sheets and the locus of centers of tritangent spheres, A1(3), and 3) the boundary of sheets, which are the locus of centers of spheres whose radius equals the larger principal curvature, i.e., higher order contact A3 points; and two types of points, 4) centers of quad-tangent spheres, A1(4), and 5) centers of spheres with one regular tangency and one higher order tangency, A1A3. The geometry of the 3D medial axis thus consists of sheets (A1(2)) bounded by one type of curve (A3) on their free end, which corresponds to ridges on the surface, and attached to two other sheets at another type of curve (A1(3)), which support a generalized cylinder description. The A3 curves can only end in A1A3 points where they must meet an A1(3) curve. The A1(3) curves meet together in fours at an A1(4) point. This formal result leads to a compact representation for 3D shape, referred to as the medial axis hypergraph representation consisting of nodes (A1(4) and A1A3 points), links between pairs of nodes (A1(3) and A3 curves) and hyperlinks between groups of links (A1(2) sheets). The description of the local geometry at nodes by itself is sufficient to capture qualitative aspects of shapes, in analogy to 2D. We derive a pointwise reconstruction formula to reconstruct a surface from this medial axis hypergraph together with the radius function. Thus, this information completely

  6. Cloud 3D Effects Evidenced in Landsat Power Spectra and Autocorrelation Functions

    NASA Technical Reports Server (NTRS)

    Oreopoulos, Lazaros; Marshak, Alexander; Cahalan, Robert F.; Wen, Guoyong

    1999-01-01

    the spectral signatures of decorrelation between reflectance and optical depth at large scales becoming stronger as the magnitude of cloud top variations increase. Finally, the usefulness of power spectral analysis in evaluating the skill of novel optical depth retrieval techniques in removing 3D radiative effects is demonstrated. New techniques using inverse Non-local Independent Pixel Approximation (NIPA) and Normalized Difference of Nadir Reflectivity (NDNR) yield optical depth fields which better match the scale-by-scale variability of the true optical depth field.

  7. Segment based shape matching in terrestrial laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Bremer, Magnus; Rutzinger, Martin; Wichmann, Volker

    2013-04-01

    Change detection of dynamic surface elements is an important application in geomorphological analysis. In order to be able to investigate such changes, the high spatial resolution and accuracy of the laser scanning technology is exploited. Dealing with laser scanning data, most change detection approaches are aiming at the assessment of volumetric changes due to erosion and deposition by geomorphologic processes. In these cases the areas of erosion and deposition are spatially separated and can be investigated in a cut-and-fill analysis. Where slow changes are controlled by interior deformation of material mixtures due to gravity, surface changes are mostly due to slight movements of objects and not to absolute material losses and gains. In complex terrain an object-based approach for the reconstruction of 3D change vectors is required. Depending on the level of scale, terrain can be subdivided into a large number of small planar patches. Using 3D point cloud data from terrestrial laser scanning, this can be done by a planar segmentation procedure grouping laser points of flat surfaces. Rotating each point cloud segment into its best fit plane, its 2D footprint shows specific local surface characteristics. Thus, each surface patch has a unique fingerprint that can be described by a variety of segment features. In an experimental framework we test the capability of shape based matching for the derivation of change vectors on dynamic surfaces. To consider different data characteristics such as varying point densities and scan perspectives, terrestrial laser scans of a rock glacier are acquired from three positions with an Optech ILRIS3D terrestrial laser scanner. Additionally, the point density is manipulated in order to simulate three different levels of point density. For the matching of surface patches, we test various non-metric shape features such as roundness, concavity and elongation. Besides, we use metric shape features such as patch area, perimeter and the
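
    A minimal sketch of turning one planar point-cloud segment into a 2D footprint and a few shape descriptors: the segment is rotated into its best-fit plane via PCA and the convex hull provides area, perimeter and elongation. The full descriptor set and the matching of patches between epochs are omitted; the descriptor names and the toy patch are illustrative assumptions.

      import numpy as np
      from scipy.spatial import ConvexHull

      def footprint_features(segment_points):
          """Project a near-planar 3D segment into its best-fit plane and describe its footprint."""
          centroid = segment_points.mean(axis=0)
          _, _, vt = np.linalg.svd(segment_points - centroid)
          footprint = (segment_points - centroid) @ vt[:2].T       # 2D coordinates in the plane
          hull = ConvexHull(footprint)
          extents = np.ptp(footprint, axis=0)
          return {
              "area": hull.volume,         # for 2D hulls, 'volume' is the area
              "perimeter": hull.area,      # and 'area' is the perimeter
              "elongation": extents.max() / max(extents.min(), 1e-9),
          }

      rng = np.random.default_rng(5)
      patch = np.c_[rng.random(200) * 4.0, rng.random(200) * 1.5, rng.normal(0.0, 0.01, 200)]
      print(footprint_features(patch))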

  8. Error analysis in stereo vision for location measurement of 3D point

    NASA Astrophysics Data System (ADS)

    Li, Yunting; Zhang, Jun; Tian, Jinwen

    2015-12-01

    Location measurement of a 3D point in stereo vision is subject to different sources of uncertainty that propagate to the final result. Most current methods of error analysis are based on an ideal intersection model that calculates the uncertainty region of the point location by intersecting the two pixel fields of view, which may produce loose bounds. Moreover, only a few sources of error, such as pixel error or camera position, are taken into account in the analysis. In this paper we present a straightforward and practical method for estimating the location error that takes most sources of error into account. We consolidate and simplify all the input errors into five parameters by a rotation transformation. We then use the fast midpoint method to derive the mathematical relationships between the target point and these parameters. Thus, the expectation and covariance matrix of the 3D point location are obtained, which constitute the uncertainty region of the point location. Afterwards, we trace the propagation of the primitive input errors through the stereo system and through the whole analysis process, from primitive input errors to localization error. Our method has the same level of computational complexity as the state-of-the-art method. Finally, extensive experiments are performed to verify the performance of our method.
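
    A minimal sketch of first-order error propagation for a triangulated 3D point, using a simpler parametrization than the paper's five rotated error parameters: a rectified-stereo triangulation is wrapped as a function of the noisy pixel measurements and a numerical Jacobian maps their covariance to the 3D location covariance. Camera parameters and noise levels are illustrative assumptions.

      import numpy as np

      f, b, cx, cy = 1200.0, 0.3, 640.0, 360.0             # illustrative rectified-stereo parameters

      def triangulate(m):
          """m = (u1, v1, u2): matched pixel coordinates in the left and right images."""
          u1, v1, u2 = m
          z = f * b / (u1 - u2)                            # depth from disparity
          return np.array([z * (u1 - cx) / f, z * (v1 - cy) / f, z])

      def propagate_covariance(m, cov_m, eps=1e-4):
          jac = np.empty((3, len(m)))
          for i in range(len(m)):                          # central-difference Jacobian
              dm = np.zeros(len(m))
              dm[i] = eps
              jac[:, i] = (triangulate(m + dm) - triangulate(m - dm)) / (2.0 * eps)
          return jac @ cov_m @ jac.T                       # first-order covariance of the 3D point

      m = np.array([700.0, 400.0, 640.0])
      cov_3d = propagate_covariance(m, np.diag([0.5, 0.5, 0.5]) ** 2)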

  9. Lacunarity analysis of raster datasets and 1D, 2D, and 3D point patterns

    NASA Astrophysics Data System (ADS)

    Dong, Pinliang

    2009-10-01

    Spatial scale plays an important role in many fields. As a scale-dependent measure for spatial heterogeneity, lacunarity describes the distribution of gaps within a set at multiple scales. In Earth science, environmental science, and ecology, lacunarity has been increasingly used for multiscale modeling of spatial patterns. This paper presents the development and implementation of a geographic information system (GIS) software extension for lacunarity analysis of raster datasets and 1D, 2D, and 3D point patterns. Depending on the application requirement, lacunarity analysis can be performed in two modes: global mode or local mode. The extension works for: (1) binary (1-bit) and grey-scale datasets in any raster format supported by ArcGIS and (2) 1D, 2D, and 3D point datasets as shapefiles or geodatabase feature classes. For more effective measurement of lacunarity for different patterns or processes in raster datasets, the extension allows users to define an area of interest (AOI) in four different ways, including using a polygon in an existing feature layer. Additionally, directionality can be taken into account when grey-scale datasets are used for local lacunarity analysis. The methodology and graphical user interface (GUI) are described. The application of the extension is demonstrated using both simulated and real datasets, including Brodatz texture images, a Spaceborne Imaging Radar (SIR-C) image, simulated 1D points on a drainage network, and 3D random and clustered point patterns. The options of lacunarity analysis and the effects of polyline arrangement on lacunarity of 1D points are also discussed. Results from sample data suggest that the lacunarity analysis extension can be used for efficient modeling of spatial patterns at multiple scales.
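
    A minimal sketch of gliding-box lacunarity for a binary raster, the standard formulation such an extension builds on: Lambda(r) is the ratio of the second moment to the squared first moment of the box masses at box size r. The AOI, grey-scale and directional options described above are omitted, and the test pattern is illustrative.

      import numpy as np

      def lacunarity(binary, box_size):
          """Gliding-box lacunarity: <M^2> / <M>^2 over all box masses M of a given size."""
          rows, cols = binary.shape
          masses = np.asarray([
              binary[i:i + box_size, j:j + box_size].sum()
              for i in range(rows - box_size + 1)
              for j in range(cols - box_size + 1)
          ], dtype=float)
          return masses.var() / masses.mean() ** 2 + 1.0   # <M^2>/<M>^2 = var/mean^2 + 1

      pattern = (np.random.default_rng(6).random((128, 128)) < 0.2).astype(int)
      print([round(lacunarity(pattern, r), 3) for r in (2, 4, 8, 16)])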

  10. Street environment change detection from mobile laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Xiao, Wen; Vallet, Bruno; Brédif, Mathieu; Paparoditis, Nicolas

    2015-09-01

    Mobile laser scanning (MLS) has become a popular technique for road inventory, building modelling, infrastructure management, mobility assessment, etc. Meanwhile, due to the high mobility of MLS systems, it is easy to revisit areas of interest. However, change detection using MLS data of street environments has seldom been studied. In this paper, an approach that combines occupancy grids and a distance-based method for change detection from MLS point clouds is proposed. Unlike conventional occupancy grids, our occupancy-based method models space based on scanning rays and local point distributions in 3D without voxelization. A local cylindrical reference frame is presented for the interpolation of occupancy between rays according to the scanning geometry. The Dempster-Shafer theory (DST) is utilized for both intra-data evidence fusion and inter-data consistency assessment. Occupancy of the reference point cloud is fused at the locations of the target points, and the consistency is then evaluated directly on the points. A point-to-triangle (PTT) distance-based method is combined with the occupancy-based method to improve it, because the distance-based method is robust to penetrable objects, e.g. vegetation, which cause self-conflicts when modelling occupancy. The combined method tackles irregular point density and occlusion problems, and also eliminates false detections on penetrable objects.
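
    A minimal sketch of a point-to-triangle distance of the kind used in the distance-based component: the plane distance is used when the projected point falls inside the triangle, otherwise the closest edge distance. This is a standard geometric formulation, not necessarily the authors' exact implementation.

      import numpy as np

      def point_segment_distance(p, a, b):
          ab = b - a
          t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
          return np.linalg.norm(p - (a + t * ab))

      def point_triangle_distance(p, a, b, c):
          n = np.cross(b - a, c - a)
          n = n / np.linalg.norm(n)
          proj = p - np.dot(p - a, n) * n                  # projection of p onto the triangle plane
          v0, v1, v2 = c - a, b - a, proj - a              # barycentric test for the projection
          d00, d01, d11 = np.dot(v0, v0), np.dot(v0, v1), np.dot(v1, v1)
          d20, d21 = np.dot(v2, v0), np.dot(v2, v1)
          denom = d00 * d11 - d01 * d01
          u = (d11 * d20 - d01 * d21) / denom
          v = (d00 * d21 - d01 * d20) / denom
          if u >= 0.0 and v >= 0.0 and u + v <= 1.0:
              return abs(np.dot(p - a, n))                 # projection lies inside the triangle
          return min(point_segment_distance(p, a, b),
                     point_segment_distance(p, b, c),
                     point_segment_distance(p, c, a))

      print(point_triangle_distance(np.array([0.5, 0.5, 1.0]), np.zeros(3),
                                    np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])))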

  11. Detectability limitations with 3-D point reconstruction algorithms using digital radiography

    SciTech Connect

    Lindgren, Erik

    2015-03-31

    The estimated impact of pores in clusters on component fatigue will be highly conservative when based on 2-D rather than 3-D pore positions. Positioning and sizing defects in 3-D using digital radiography and 3-D point reconstruction algorithms generally requires less inspection time than X-ray computed tomography and in some cases works better with planar geometries. However, the increase in prior assumptions about the object and the defects will increase the intrinsic uncertainty in the resulting nondestructive evaluation output. In this paper, the uncertainty arising when detecting pore defect clusters with point reconstruction algorithms is quantified using simulations. The simulation model is compared to and mapped to experimental data. The main issue with the uncertainty is the possible masking (detectability zero) of smaller defects around some other slightly larger defect. In addition, the uncertainty is explored in connection with the expected effects on component fatigue life and for different amounts of prior object-defect assumptions.

  12. Overview of 3D-TRACE, a NASA Initiative in Three-Dimensional Tomography of the Aerosol-Cloud Environment

    NASA Astrophysics Data System (ADS)

    Davis, Anthony; Diner, David; Yanovsky, Igor; Garay, Michael; Xu, Feng; Bal, Guillaume; Schechner, Yoav; Aides, Amit; Qu, Zheng; Emde, Claudia

    2013-04-01

    Remote sensing is a key tool for sorting cloud ensembles by dynamical state, aerosol environments by source region, and establishing causal relationships between aerosol amounts, type, and cloud microphysics, the so-called indirect aerosol climate impacts, and one of the main sources of uncertainty in current climate models. Current satellite imagers use data processing approaches that invariably start with cloud detection/masking to isolate aerosol air-masses from clouds, and then rely on one-dimensional (1D) radiative transfer (RT) to interpret the aerosol and cloud measurements in isolation. Not only does this lead to well-documented biases for the estimates of aerosol radiative forcing and cloud optical depths in current missions, but it is fundamentally inadequate for future missions such as EarthCARE where capturing the complex, three-dimensional (3D) interactions between clouds and aerosols is a primary objective. In order to advance the state of the art, the next generation of satellite information processing systems must incorporate technologies that will enable the treatment of the atmosphere as a fully 3D environment, represented more realistically as a continuum. At one end, there is an optically thin background dominated by aerosols and molecular scattering that is strongly stratified and relatively homogeneous in the horizontal. At the other end, there are optically thick embedded elements, clouds and aerosol plumes, which can be more or less uniform and quasi-planar or else highly 3D with boundaries in all directions; in both cases, strong internal variability may be present. To make this paradigm shift possible, we propose to combine the standard models for satellite signal prediction physically grounded in 1D and 3D RT, both scalar and vector, with technologies adapted from biomedical imaging, digital image processing, and computer vision. This will enable us to demonstrate how the 3D distribution of atmospheric constituents, and their associated

  13. Impacts of 3-D radiative effects on satellite cloud detection and their consequences on cloud fraction and aerosol optical depth retrievals

    NASA Astrophysics Data System (ADS)

    Yang, Yuekui; di Girolamo, Larry

    2008-02-01

    We present the first examination of how 3-D radiative transfer impacts satellite cloud detection that uses a single visible channel threshold. The 3-D radiative transfer through predefined heterogeneous cloud fields embedded in a range of horizontally homogeneous aerosol fields has been carried out to generate synthetic nadir-viewing satellite images at a wavelength of 0.67 μm. The finest spatial resolution of the cloud field is 30 m. We show that 3-D radiative effects cause significant histogram overlap between the radiance distributions of clear and cloudy pixels, the degree of which depends on many factors (resolution, solar zenith angle, surface reflectance, aerosol optical depth (AOD), cloud top variability, etc.). This overlap precludes the existence of a threshold that can correctly separate all clear pixels from cloudy pixels. The region of clear/cloud radiance overlap includes moderately large (up to 5 in our simulations) cloud optical depths. Purpose-driven cloud masks, defined by different thresholds, are applied to the simulated images to examine their impact on retrieving cloud fraction and AOD. Large (up to 100s of %) systematic errors were observed that depended on the type of cloud mask and the factors that influence the clear/cloud radiance overlap, with a strong dependence on solar zenith angle. Different strategies for computing domain-averaged AOD were tested, showing that the domain-averaged BRF from all clear pixels produced the smallest AOD biases with the weakest (but still large) dependence on solar zenith angle. The large dependence of the bias on solar zenith angle has serious implications for climate research that uses satellite cloud and aerosol products.

  14. Saturation point structure of marine stratocumulus clouds

    NASA Technical Reports Server (NTRS)

    Boers, Reinout; Betts, Alan K.

    1988-01-01

    An investigation of the microstructure of a Pacific stratocumulus capped boundary layer is presented. A complex structure of three branches, identified using conserved variable diagrams, is found to correspond well to a conceptual model for the unstable, radiatively cooled cloud topped boundary layer. A simple conditional sampling method was used to identify saturation point pairs for ascending and descending branches of the internal boundary layer circulation. Results indicate a primary circulation scale of 5 km and provide a reasonable cloud top entrainment rate of 1 cm/s.

  15. Articulated Non-Rigid Point Set Registration for Human Pose Estimation from 3D Sensors

    PubMed Central

    Ge, Song; Fan, Guoliang

    2015-01-01

    We propose a generative framework for 3D human pose estimation that is able to operate on both individual point sets and sequential depth data. We formulate human pose estimation as a point set registration problem, where we propose three new approaches to address several major technical challenges in this research. First, we integrate two registration techniques that have a complementary nature to cope with non-rigid and articulated deformations of the human body under a variety of poses. This unique combination allows us to handle point sets of complex body motion and large pose variation without any initial conditions, as required by most existing approaches. Second, we introduce an efficient pose tracking strategy to deal with sequential depth data, where the major challenge is the incomplete data due to self-occlusions and view changes. We introduce a visible point extraction method to initialize a new template for the current frame from the previous frame, which effectively reduces the ambiguity and uncertainty during registration. Third, to support robust and stable pose tracking, we develop a segment volume validation technique to detect tracking failures and to re-initialize pose registration if needed. The experimental results on both benchmark 3D laser scan and depth datasets demonstrate the effectiveness of the proposed framework when compared with state-of-the-art algorithms. PMID:26131673

  16. Adapting histogram for automatic noise data removal in building interior point cloud data

    NASA Astrophysics Data System (ADS)

    Shukor, S. A. Abdul; Rushforth, E. J.

    2015-05-01

    3D point cloud data is now preferred by researchers for generating 3D models. These models can be used in a variety of applications, including 3D building interior models. The rise of Building Information Modeling (BIM) for Architectural, Engineering, Construction (AEC) applications has recently given 3D interior modelling more attention. To generate a 3D model representing a building interior, a laser scanner is used to collect the point cloud data. However, this data often comes with noise, due to several factors including the surrounding objects, lighting and the specifications of the laser scanner. This paper highlights the use of histograms to remove the noise data. Histograms, common in statistics and probability, are regularly used in a number of applications such as image processing, where a histogram can represent the number of pixels in an image at each intensity level. Here, histograms represent the number of points recorded at range distance intervals in various projections. As unwanted noise data has a sparser cloud density compared to the required data and is usually situated at a notable distance from the required data, the noise data will have lower frequencies in the histogram. By defining an acceptable range using the average frequency, points below this range can be removed. This research has shown that these histograms are capable of automatically removing unwanted data from 3D point cloud data representing building interiors. This feature will aid the data preprocessing step in producing an ideal 3D model from the point cloud data.
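
    As a rough illustration of the range-histogram idea (a sketch under simplified assumptions, not the authors' code), the Python snippet below drops points that fall into range-distance bins whose frequency is below the average; the bin width and variable names are hypothetical.

      import numpy as np

      def remove_sparse_points(points, scanner_origin, bin_width=0.25):
          """points: (N, 3) array; scanner_origin: (3,) array; returns filtered points."""
          ranges = np.linalg.norm(points - scanner_origin, axis=1)
          edges = np.arange(ranges.min(), ranges.max() + bin_width, bin_width)
          counts, edges = np.histogram(ranges, bins=edges)
          keep_bin = counts >= counts.mean()  # sparse bins are treated as noise
          bin_index = np.clip(np.digitize(ranges, edges) - 1, 0, len(counts) - 1)
          return points[keep_bin[bin_index]]

      # Hypothetical usage with an indoor scan:
      # cleaned = remove_sparse_points(scan_xyz, scanner_origin=np.array([0.0, 0.0, 1.5]))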

  17. 3D Cloud Tomography, Followed by Mean Optical and Microphysical Properties, with Multi-Angle/Multi-Pixel Data

    NASA Astrophysics Data System (ADS)

    Davis, A. B.; von Allmen, P. A.; Marshak, A.; Bal, G.

    2010-12-01

    The geometrical assumption in all operational cloud remote sensing algorithms is that clouds are plane-parallel slabs, which applies relatively well to the most uniform stratus layers. Its benefit is to justify using classic 1D radiative transfer (RT) theory, where angular details (solar, viewing, azimuthal) are fully accounted for and precise phase functions can be used, to generate the look-up tables used in the retrievals. Unsurprisingly, these algorithms catastrophically fail when applied to cumulus-type clouds, which are highly 3D. This is unfortunate for the cloud-process modeling community that may thrive on in situ airborne data, but would very much like to use satellite data for more than illustrations in their presentations and publications. So, how can we obtain quantitative information from space-based observations of finite aspect ratio clouds? Cloud base/top heights, vertically projected area, mean liquid water content (LWC), and volume-averaged droplet size would be a good start. Motivated by this science need, we present a new approach suitable for sparse cumulus fields where we turn the tables on the standard procedure in cloud remote sensing. We make no a priori assumption about cloud shape, save an approximately flat base, but use brutal approximations about the RT that is necessarily 3D. Indeed, the first order of business is to roughly determine the cloud's outer shape in one of two ways, which we will frame as competing initial guesses for the next phase of shape refinement and volume-averaged microphysical parameter estimation. Both steps use multi-pixel/multi-angle techniques amenable to MISR data, the latter adding a bi-spectral dimension using collocated MODIS data. One approach to rough cloud shape determination is to fit the multi-pixel/multi-angle data with a geometric primitive such as a scalene hemi-ellipsoid with 7 parameters (translation in 3D space, 3 semi-axes, 1 azimuthal orientation); for the radiometry, a simple radiosity

  18. An investigation of pointing postures in a 3D stereoscopic environment.

    PubMed

    Lin, Chiuhsiang Joe; Ho, Sui-Hua; Chen, Yan-Jyun

    2015-05-01

    Many object pointing and selecting techniques for large screens have been proposed in the literature. There is a lack of quantitative evidence suggesting proper pointing postures for interacting with stereoscopic targets in immersive virtual environments. The objective of this study was to explore users' performances and experiences of using different postures while interacting with 3D targets remotely in an immersive stereoscopic environment. Two postures, hand-directed and gaze-directed pointing methods, were compared in order to investigate the postural influences. Two stereo parallaxes, negative and positive parallaxes, were compared for exploring how target depth variances would impact users' performances and experiences. Fifteen participants were recruited to perform two interactive tasks, tapping and tracking tasks, to simulate interaction behaviors in the stereoscopic environment. Hand-directed pointing is suggested for both tapping and tracking tasks due to its significantly better overall performance, less muscle fatigue, and better usability. However, a gaze-directed posture is probably a better alternative than hand-directed pointing for tasks with high accuracy requirements in home-in phases. Additionally, it is easier for users to interact with targets with negative parallax than with targets with positive parallax. Based on the findings of this research, future applications involving different pointing techniques should consider both pointing performances and postural effects as a result of pointing task precision requirements and potential postural fatigue. PMID:25683543

  19. Point Cloud Oriented Shoulder Line Extraction in Loess Hilly Area

    NASA Astrophysics Data System (ADS)

    Min, Li; Xin, Yang; Liyang, Xiong

    2016-06-01

    Shoulder line is a significant terrain line in the hilly area of the Loess Plateau in China, dividing the surface into positive and negative terrain (P-N terrains). Because the point cloud vegetation removal methods for P-N terrains differ, there is an imperative need for shoulder line extraction. In this paper, we propose an automatic shoulder line extraction method based on point clouds. The workflow is as follows: (i) ground points are selected using a grid filter in order to remove most of the noisy points. (ii) Based on a DEM interpolated from those ground points, slope is mapped and classified into two classes (P-N terrains) using the Natural Breaks classification method. (iii) The common boundary between the two slope classes is extracted as the shoulder line candidate. (iv) The filter grid size is adjusted and steps i-iii repeated until the shoulder line candidate matches its real location. (v) The shoulder line of the whole area is generated. The test area is located in Madigou, Jingbian County of Shaanxi Province, China. A total of 600 million points were acquired in the 0.23 km2 test area, using a Riegl VZ400 3D laser scanner in August 2014. Due to limited computing performance, the test area was divided into 60 blocks, and 13 of them around the shoulder line were selected for optimizing the filter grid size. The experimental results show that the optimal filter grid size varies across sample areas, and a power function relation exists between filter grid size and point density. The optimal grid size was determined from this relation and the shoulder lines of the 60 blocks were then extracted. Compared with the manual interpretation results, the accuracy of the whole result reaches 85%. This method can be applied to shoulder line extraction in hilly areas, which is crucial for point cloud denoising and high accuracy DEM generation.
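
    A compact sketch of step (i), keeping the lowest point per grid cell as a ground candidate, is given below. It is our own illustration under assumed parameter values; the cell-hashing shortcut is not from the paper.

      import numpy as np

      def grid_filter_lowest(points, cell_size=1.0):
          """points: (N, 3) array of x, y, z; returns ground-candidate points."""
          ij = np.floor(points[:, :2] / cell_size).astype(np.int64)
          keys = ij[:, 0] * 1_000_003 + ij[:, 1]    # hash each cell
          order = np.lexsort((points[:, 2], keys))  # sort by cell, then by height
          sorted_keys = keys[order]
          first_in_cell = np.ones(len(points), dtype=bool)
          first_in_cell[1:] = sorted_keys[1:] != sorted_keys[:-1]
          return points[order[first_in_cell]]       # lowest point of every cell

      # ground = grid_filter_lowest(lidar_xyz, cell_size=0.5)   # hypothetical usage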

  20. Points based reconstruction and rendering of 3D shapes from large volume dataset

    NASA Astrophysics Data System (ADS)

    Zhao, Mingchang; Tian, Jie; He, Huiguang; Li, Guangming

    2003-05-01

    In the field of medical imaging, researchers often need to visualize many 3D datasets to extract the information they contain. However, the huge volumes of data generated by modern medical imaging devices constantly challenge real-time processing and rendering algorithms. Spurred by the great achievements of Points Based Rendering (PBR) in computer graphics for rendering very large meshes, we propose a new algorithm that uses points as the basic primitive of surface reconstruction and rendering to interactively reconstruct and render very large volume datasets. By utilizing the special characteristics of medical image datasets, we obtain a fast and efficient points-based reconstruction and rendering algorithm on a common PC. The experimental results show that this algorithm is feasible and efficient.

  1. Molecular surface point environments for virtual screening and the elucidation of binding patterns (MOLPRINT 3D).

    PubMed

    Bender, Andreas; Mussa, Hamse Y; Gill, Gurprem S; Glen, Robert C

    2004-12-16

    A novel method (MOLPRINT 3D) for virtual screening and the elucidation of ligand-receptor binding patterns is introduced that is based on environments of molecular surface points. The descriptor uses points relative to the molecular coordinates, thus it is translationally and rotationally invariant. Due to its local nature, conformational variations cause only minor changes in the descriptor. If surface point environments are combined with the Tanimoto coefficient and applied to virtual screening, they achieve retrieval rates comparable to that of two-dimensional (2D) fingerprints. The identification of active structures with minimal 2D similarity ("scaffold hopping") is facilitated. In combination with information-gain-based feature selection and a naive Bayesian classifier, information from multiple molecules can be combined and classification performance can be improved. Selected features are consistent with experimentally determined binding patterns. Examples are given for angiotensin-converting enzyme inhibitors, 3-hydroxy-3-methylglutaryl-coenzyme A reductase inhibitors, and thromboxane A2 antagonists. PMID:15588092

  2. Precipitation Processes Developed During ARM (1997), TOGA COARE (1992), GATE (1974), SCSMEX (1998), and KWAJEX (1999): Consistent 2D, Semi-3D and 3D Cloud Resolving Model Simulations

    NASA Technical Reports Server (NTRS)

    Tao, W-K.

    2003-01-01

    Real clouds and cloud systems are inherently three-dimensional (3D). Because of the limitations in computer resources, however, most cloud-resolving models (CRMs) today are still two-dimensional (2D). A few 3D CRMs have been used to study the response of clouds to large-scale forcing. In these 3D simulations, the model domain was small, and the integration time was 6 hours. Only recently have 3D experiments been performed for multi-day periods for tropical cloud systems with large horizontal domains at the National Center for Atmospheric Research (NCAR) and at NASA Goddard Space Flight Center. At Goddard, a 3D Goddard Cumulus Ensemble (GCE) model was used to simulate periods during TOGA COARE, SCSMEX and KWAJEX using a 512 by 512 km domain (with 2 km resolution). The results indicate that surface precipitation and latent heating profiles are very similar between the 2D and 3D GCE model simulations. The reason for the strong similarity between the 2D and 3D CRM simulations is that the same observed large-scale advective tendencies of potential temperature, water vapor mixing ratio, and horizontal momentum were used as the main forcing in both the 2D and 3D models. Interestingly, the 2D and 3D versions of the CRM used at CSU showed significant differences in the rainfall and cloud statistics for three ARM cases. The major objectives of this paper are: (1) to assess the performance of the super-parameterization technique, (2) to calculate and examine the surface energy (especially radiation) and water budgets, and (3) to identify the differences and similarities in the organization and entrainment rates of convection between simulated 2D and 3D cloud systems.

  3. Use of the ARM Measurements of Spectral Zenith Radiance for Better Understanding of 3D Cloud-Radiation Processes & Aerosol-Cloud Interaction

    SciTech Connect

    Alexander Marshak; Warren Wiscombe; Yuri Knyazikhin; Christine Chiu

    2011-05-24

    We proposed a variety of tasks centered on the following question: what can we learn about 3D cloud-radiation processes and aerosol-cloud interaction from rapid-sampling ARM measurements of spectral zenith radiance? These ARM measurements offer spectacular new and largely unexploited capabilities in both the temporal and spectral domains. Unlike most other ARM instruments, which average over many seconds or take samples many seconds apart, the new spectral zenith radiance measurements are fast enough to resolve natural time scales of cloud change and cloud boundaries as well as the transition zone between cloudy and clear areas. In the case of the shortwave spectrometer, the measurements offer high time resolution and high spectral resolution, allowing new discovery-oriented science which we intend to pursue vigorously. Research objectives are, for convenience, grouped under three themes:
    • Understand the radiative signature of the transition zone between cloud-free and cloudy areas using data from ARM shortwave radiometers, which has major climatic consequences in both aerosol direct and indirect effect studies.
    • Provide cloud property retrievals from the ARM sites and the ARM Mobile Facility for studies of aerosol-cloud interactions.
    • Assess the impact of 3D cloud structures on aerosol properties using passive and active remote sensing techniques from both ARM and satellite measurements.

  4. Use of the ARM Measurements of Spectral Zenith Radiance for Better Understanding of 3D Cloud-Radiation Processes & Aerosol-Cloud Interaction

    SciTech Connect

    Chiu, Jui-Yuan Christine

    2014-04-10

    This project focuses on cloud-radiation processes in a general three-dimensional cloud situation, with particular emphasis on cloud optical depth and effective particle size. The proposal has two main parts. Part one exploits the large number of new wavelengths offered by the Atmospheric Radiation Measurement (ARM) zenith-pointing ShortWave Spectrometer (SWS) to develop better retrievals not only of cloud optical depth but also of cloud particle size. We also take advantage of the SWS’ high sampling resolution to study the “twilight zone” around clouds where strong aerosol-cloud interactions are taking place. Part two involves continuing our cloud optical depth and cloud fraction retrieval research with ARM’s 2-channel narrow field-of-view radiometer and sunphotometer instrument by, first, analyzing its data from the ARM Mobile Facility deployments, and second, making our algorithms part of ARM’s operational data processing.

  5. Reconstruction, Quantification, and Visualization of Forest Canopy Based on 3d Triangulations of Airborne Laser Scanning Point Data

    NASA Astrophysics Data System (ADS)

    Vauhkonen, J.

    2015-03-01

    Reconstruction of three-dimensional (3D) forest canopy is described and quantified using airborne laser scanning (ALS) data with densities of 0.6-0.8 points m-2 and field measurements aggregated at resolutions of 400-900 m2. The reconstruction was based on computational geometry, topological connectivity, and numerical optimization. More precisely, triangulations and their filtrations, i.e. ordered sets of simplices belonging to the triangulations, based on the point data were analyzed. Triangulating the ALS point data corresponds to subdividing the underlying space of the points into weighted simplicial complexes with weights quantifying the (empty) space delimited by the points. Reconstructing the canopy volume populated by biomass will thus likely require filtering to exclude that volume from canopy voids. The approaches applied for this purpose were (i) to optimize the degree of filtration with respect to the field measurements, and (ii) to predict this degree by means of analyzing the persistent homology of the obtained triangulations, which is applied for the first time for vegetation point clouds. When derived from optimized filtrations, the total tetrahedral volume had a high degree of determination (R2) with the stem volume considered, both alone (R2=0.65) and together with other predictors (R2=0.78). When derived by analyzing the topological persistence of the point data and without any field input, the R2 were lower, but the predictions still showed a correlation with the field-measured stem volumes. Finally, producing realistic visualizations of a forested landscape using the persistent homology approach is demonstrated.
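
    To make the volumetric quantity concrete, the brief Python sketch below computes the total tetrahedral volume of a 3D Delaunay triangulation of canopy points. It is an illustration only; the filtration and optimization against field measurements described above are omitted.

      import numpy as np
      from scipy.spatial import Delaunay

      def total_tetra_volume(points):
          """points: (N, 3) canopy points; returns the summed tetrahedral volume."""
          tri = Delaunay(points)             # tetrahedra over the 3D point set
          tetra = points[tri.simplices]      # shape (M, 4, 3)
          a, b, c, d = (tetra[:, i] for i in range(4))
          # The scalar triple product gives six times the volume of each tetrahedron.
          vols = np.abs(np.einsum('ij,ij->i', a - d, np.cross(b - d, c - d))) / 6.0
          return vols.sum()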

  6. Existence of two MHD reconnection modes in a solar 3D magnetic null point topology

    NASA Astrophysics Data System (ADS)

    Pariat, Etienne; Antiochos, Spiro; DeVore, C. Richard; Dalmasse, Kévin

    2012-07-01

    Magnetic topologies with a 3D magnetic null point are common in the solar atmosphere and occur at different spatial scales: such structures can be associated with some solar eruptions, with the so-called pseudo-streamers, and with numerous coronal jets. We have recently developed a series of numerical experiments that model magnetic reconnection in such configurations in order to study and explain the properties of jet-like features. Our model uses our state-of-the-art adaptive-mesh MHD solver ARMS. Energy is injected in the system by line-tied motion of the magnetic field lines in a corona-like configuration. We observe that, in the MHD framework, two reconnection modes eventually appear in the course of the evolution of the system. A very impulsive one, associated with a highly dynamic and fully 3D current sheet, is associated with the energetic generation of a jet. Before and after the generation of the jet, a quasi-steady reconnection mode, more similar to the standard 2D Sweet-Parker model, presents a lower global reconnection rate. We show that the geometry of the magnetic configuration influences the trigger of one or the other mode. We argue that this result carries important implications for the observed link between observational features such as solar jets, solar plumes, and the emission of coronal bright points.

  7. Unconventional superconductivity at mesoscopic point contacts on the 3D Dirac semimetal Cd3As2.

    PubMed

    Aggarwal, Leena; Gaurav, Abhishek; Thakur, Gohil S; Haque, Zeba; Ganguli, Ashok K; Sheet, Goutam

    2016-01-01

    Three-dimensional (3D) Dirac semimetals exist close to topological phase boundaries which, in principle, should make it possible to drive them into exotic new phases, such as topological superconductivity, by breaking certain symmetries. A practical realization of this idea has, however, hitherto been lacking. Here we show that the mesoscopic point contacts between pure silver (Ag) and the 3D Dirac semimetal Cd3As2 (ref. ) exhibit unconventional superconductivity with a critical temperature (onset) greater than 6 K whereas neither Cd3As2 nor Ag are superconductors. A gap amplitude of 6.5 meV is measured spectroscopically in this phase that varies weakly with temperature and survives up to a remarkably high temperature of 13 K, indicating the presence of a robust normal-state pseudogap. The observations indicate the emergence of a new unconventional superconducting phase that exists in a quantum mechanically confined region under a point contact between a Dirac semimetal and a normal metal. PMID:26524131

  8. 3D shape descriptors for face segmentation and fiducial points detection: an anatomical-based analysis

    NASA Astrophysics Data System (ADS)

    Salazar, Augusto E.; Cerón, Alexander; Prieto, Flavio A.

    2011-03-01

    The behavior of nine 3D shape descriptors, computed on the surface of 3D face models, is studied. The set of descriptors includes six curvature-based ones, SPIN images, Folded SPIN Images, and Finger prints. Instead of defining clusters of vertices based on the value of a given primitive surface feature, a face template composed of 28 anatomical regions is used to segment the models and to extract the location of different landmarks and fiducial points. Vertices are grouped by: region, region boundaries, and subsampled versions of them. The aim of this study is to analyze the discriminant capacity of each descriptor to characterize regions and to identify key points on the facial surface. The experiment includes testing with data from neutral faces and faces showing expressions. Also, in order to assess the usefulness of the bending-invariant canonical form (BICF) for handling variations due to facial expressions, the descriptors are computed both directly from the surface and from its BICF. In the results, the values, distributions, and relevance indexes of each set of vertices were analyzed.

  9. Inter-point procrustes: identifying regional and large differences in 3D anatomical shapes.

    PubMed

    Lekadir, Karim; Frangi, Alejandro F; Yang, Guang-Zhong

    2012-01-01

    This paper presents a new approach for the robust alignment and interpretation of 3D anatomical structures with large and localized shape differences. In such situations, existing techniques based on the well-known Procrustes analysis can be significantly affected due to the introduced non-Gaussian distribution of the residuals. In the proposed technique, influential points that induce large dissimilarities are identified and displaced with the aim of obtaining an intermediate template with an improved distribution of the residuals. The key element of the algorithm is the use of pose-invariant shape variables to robustly guide both the influential point detection and displacement steps. The intermediate template is then used as the basis for the estimation of the final pose parameters between the source and destination shapes, making it possible to effectively highlight the regional differences of interest. The validation using synthetic and real datasets of different morphologies demonstrates robustness up to 50% regional differences and potential for shape classification. PMID:23286119

  10. 3D Printed Molecules and Extended Solid Models for Teaching Symmetry and Point Groups

    ERIC Educational Resources Information Center

    Scalfani, Vincent F.; Vaid, Thomas P.

    2014-01-01

    Tangible models help students and researchers visualize chemical structures in three dimensions (3D). 3D printing offers a unique and straightforward approach to fabricate plastic 3D models of molecules and extended solids. In this article, we prepared a series of digital 3D design files of molecular structures that will be useful for teaching…

  11. 3D Modeling of GJ1214b’s Atmosphere: Formation of Inhomogeneous High Clouds and Observational Implications

    NASA Astrophysics Data System (ADS)

    Charnay, B.; Meadows, V.; Misra, A.; Leconte, J.; Arney, G.

    2015-11-01

    The warm sub-Neptune GJ1214b has a featureless transit spectrum that may be due to the presence of high and thick clouds or haze. Here, we simulate the atmosphere of GJ1214b with a 3D General Circulation Model for cloudy hydrogen-dominated atmospheres, including cloud radiative effects. We show that the atmospheric circulation is strong enough to transport micrometric cloud particles to the upper atmosphere and generally leads to a minimum of cloud at the equator. By scattering stellar light, clouds increase the planetary albedo to 0.4-0.6 and cool the atmosphere below 1 mbar. However, the heating by ZnS clouds leads to the formation of a stratospheric thermal inversion above 10 mbar, with temperatures potentially high enough on the dayside to evaporate KCl clouds. We show that flat transit spectra consistent with Hubble Space Telescope observations are possible if cloud particle radii are around 0.5 μm, and that such clouds should be optically thin at wavelengths >3 μm. Using simulated cloudy atmospheres that fit the observed spectra we generate transit, emission, and reflection spectra and phase curves for GJ1214b. We show that a stratospheric thermal inversion would be readily accessible in near- and mid-infrared atmospheric spectral windows. We find that the amplitude of the thermal phase curves is strongly dependent on metallicity, but only slightly impacted by clouds. Our results suggest that primary and secondary eclipses and phase curves observed by the James Webb Space Telescope in the near- to mid-infrared should provide strong constraints on the nature of GJ1214b's atmosphere and clouds.

  12. Traffic sign detection in MLS acquired point clouds for geometric and image-based semantic inventory

    NASA Astrophysics Data System (ADS)

    Soilán, Mario; Riveiro, Belén; Martínez-Sánchez, Joaquín; Arias, Pedro

    2016-04-01

    Nowadays, mobile laser scanning has become a valid technology for infrastructure inspection. This technology permits collecting accurate 3D point clouds of urban and road environments, and the geometric and semantic analysis of these data has become an active research topic in recent years. This paper focuses on the detection of vertical traffic signs in 3D point clouds acquired by a LYNX Mobile Mapper system, comprised of laser scanning and RGB cameras. Each traffic sign is automatically detected in the LiDAR point cloud, and its main geometric parameters can be automatically extracted, thereby aiding the inventory process. Furthermore, the 3D positions of the traffic signs are reprojected onto the 2D images, which are spatially and temporally synced with the point cloud. Image analysis allows for recognizing the traffic sign semantics using machine learning approaches. The presented method was tested in road and urban scenarios in Galicia (Spain). The recall results for traffic sign detection are close to 98%, and existing false positives can be easily filtered after point cloud projection. Finally, the lack of a large, publicly available Spanish traffic sign database is pointed out.

  13. Commissioning a small-field biological irradiator using point, 2D, and 3D dosimetry techniques

    PubMed Central

    Newton, Joseph; Oldham, Mark; Thomas, Andrew; Li, Yifan; Adamovics, John; Kirsch, David G.; Das, Shiva

    2011-01-01

    Purpose: To commission a small-field biological irradiator, the XRad225Cx from Precision x-Ray, Inc., for research use. The system produces a 225 kVp x-ray beam and is equipped with collimating cones that produce both square and circular radiation fields ranging in size from 1 to 40 mm. This work incorporates point, 2D, and 3D measurements to determine output factors (OF), percent-depth-dose (PDD) and dose profiles at multiple depths. Methods: Three independent dosimetry systems were used: ion-chambers (a farmer chamber and a micro-ionisation chamber), 2D EBT2 radiochromic film, and a novel 3D dosimetry system (DLOS/PRESAGE®). Reference point dose rates and output factors were determined from in-air ionization chamber measurements for fields down to ∼13 mm using the formalism of TG61. PDD, profiles, and output factors at three separate depths (0, 0.5, and 2 cm), were determined for all field sizes from EBT2 film measurements in solid water. Several film PDD curves required a scaling correction, reflecting the challenge of accurate film alignment in very small fields. PDDs, profiles, and output factors were also determined with the 3D DLOS/PRESAGE® system which generated isotropic 0.2 mm data, in scan times of 20 min. Results: Surface output factors determined by ion-chamber were observed to gradually drop by ∼9% when the field size was reduced from 40 to 13 mm. More dramatic drops were observed for the smallest fields as determined by EBT∼18% and ∼42% for the 2.5 mm and 1 mm fields, respectively. PRESAGE® and film output factors agreed well for fields <20 mm (where 3D data were available) with mean deviation of 2.2% (range 1%–4%). PDD values at 2 cm depth varied from ∼72% for the 40 mm field, down to ∼55% for the 1 mm field. EBT and PRESAGE® PDDs agreed within ∼3% in the typical therapy region (1–4 cm). At deeper depths the EBT curves were slightly steeper (2.5% at 5 cm). These results indicate good overall consistency between ion-chamber, EBT

  14. Commissioning a small-field biological irradiator using point, 2D, and 3D dosimetry techniques

    SciTech Connect

    Newton, Joseph; Oldham, Mark; Thomas, Andrew; Li Yifan; Adamovics, John; Kirsch, David G.; Das, Shiva

    2011-12-15

    Purpose: To commission a small-field biological irradiator, the XRad225Cx from Precision x-Ray, Inc., for research use. The system produces a 225 kVp x-ray beam and is equipped with collimating cones that produce both square and circular radiation fields ranging in size from 1 to 40 mm. This work incorporates point, 2D, and 3D measurements to determine output factors (OF), percent-depth-dose (PDD) and dose profiles at multiple depths. Methods: Three independent dosimetry systems were used: ion-chambers (a farmer chamber and a micro-ionisation chamber), 2D EBT2 radiochromic film, and a novel 3D dosimetry system (DLOS/PRESAGE®). Reference point dose rates and output factors were determined from in-air ionization chamber measurements for fields down to ∼13 mm using the formalism of TG61. PDD, profiles, and output factors at three separate depths (0, 0.5, and 2 cm), were determined for all field sizes from EBT2 film measurements in solid water. Several film PDD curves required a scaling correction, reflecting the challenge of accurate film alignment in very small fields. PDDs, profiles, and output factors were also determined with the 3D DLOS/PRESAGE® system which generated isotropic 0.2 mm data, in scan times of 20 min. Results: Surface output factors determined by ion-chamber were observed to gradually drop by ∼9% when the field size was reduced from 40 to 13 mm. More dramatic drops were observed for the smallest fields as determined by EBT∼18% and ∼42% for the 2.5 mm and 1 mm fields, respectively. PRESAGE® and film output factors agreed well for fields <20 mm (where 3D data were available) with mean deviation of 2.2% (range 1%-4%). PDD values at 2 cm depth varied from ∼72% for the 40 mm field, down to ∼55% for the 1 mm field. EBT and PRESAGE® PDDs agreed within ∼3% in the typical therapy region (1-4 cm). At deeper depths the EBT curves were slightly steeper (2.5% at 5 cm

  15. A Voxel-Based Approach for Imaging Voids in Three-Dimensional Point Clouds

    NASA Astrophysics Data System (ADS)

    Salvaggio, Katie N.

    Geographically accurate scene models have enormous potential for automated scene generation, well beyond simple visualization. In recent years, thanks to ever increasing computational efficiencies, there has been significant growth in both the computer vision and photogrammetry communities pertaining to automatic scene reconstruction from multiple-view imagery. The result of these algorithms is a three-dimensional (3D) point cloud which can be used to derive a final model using surface reconstruction techniques. However, the fidelity of these point clouds has not been well studied, and voids often exist within the point cloud. Voids exist in texturally difficult areas, as well as areas where multiple views were not obtained during collection, constant occlusion existed due to collection angles or overlapping scene geometry, or in regions that failed to triangulate accurately. It may be possible to fill in small voids in the scene using surface reconstruction or hole-filling techniques, but this is not the case with larger, more complex voids, and attempting to reconstruct them using only the knowledge of the incomplete point cloud is neither accurate nor aesthetically pleasing. A method is presented for identifying voids in point clouds by using a voxel-based approach to partition the 3D space. By using collection geometry and information derived from the point cloud, it is possible to detect unsampled voxels such that voids can be identified. This analysis takes into account the location of the camera and the 3D points themselves to capitalize on the idea of free space, such that voxels that lie on the ray between the camera and point are devoid of obstruction, as a clear line of sight is a necessary requirement for reconstruction. Using this approach, voxels are classified into three categories: occupied (contains points from the point cloud), free (rays from the camera to the point passed through the voxel), and unsampled (does not contain points
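
    The occupied/free/unsampled labelling can be pictured with the Python sketch below. It uses a simplified uniform ray stepping rather than an exact voxel traversal, assumes one camera position per point, and all names and parameters are hypothetical.

      import numpy as np

      UNSAMPLED, FREE, OCCUPIED = 0, 1, 2

      def classify_voxels(points, cameras, origin, voxel_size, grid_shape):
          """points/cameras: matched (N, 3) arrays; returns a labelled voxel grid."""
          grid = np.full(grid_shape, UNSAMPLED, dtype=np.uint8)

          def to_index(p):
              idx = np.floor((p - origin) / voxel_size).astype(int)
              return tuple(np.clip(idx, 0, np.array(grid_shape) - 1))

          for cam, pt in zip(cameras, points):
              # Sample along the camera-to-point ray and mark traversed voxels as free.
              n_steps = max(int(np.linalg.norm(pt - cam) / (0.5 * voxel_size)), 1)
              for t in np.linspace(0.0, 1.0, n_steps, endpoint=False):
                  idx = to_index(cam + t * (pt - cam))
                  if grid[idx] != OCCUPIED:
                      grid[idx] = FREE
              grid[to_index(pt)] = OCCUPIED
          return grid   # voxels still labelled UNSAMPLED are the void candidates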

  16. Extracting roads from dense point clouds in large scale urban environment

    NASA Astrophysics Data System (ADS)

    Boyko, Aleksey; Funkhouser, Thomas

    2011-12-01

    This paper describes a method for extracting roads from a large scale unstructured 3D point cloud of an urban environment consisting of many superimposed scans taken at different times. Given a road map and a point cloud, our system automatically separates road surfaces from the rest of the point cloud. Starting with an approximate map of the road network given in the form of 2D intersection locations connected by polylines, we first produce a 3D representation of the map by optimizing Cardinal splines to minimize the distances to points of the cloud under continuity constraints. We then divide the road network into independent patches, making it feasible to process a large point cloud with a small in-memory working set. For each patch, we fit a 2D active contour to an attractor function with peaks at small vertical discontinuities to predict the locations of curbs. Finally, we output a set of labeled points, where points lying within the active contour are tagged as "road" and the others are not. During experiments with a LIDAR point set containing almost a billion points spread over six square kilometers of a city center, our method provides 86% correctness and 94% completeness.

  17. Automatic Roof Plane Detection and Analysis in Airborne Lidar Point Clouds for Solar Potential Assessment

    PubMed Central

    Jochem, Andreas; Höfle, Bernhard; Rutzinger, Martin; Pfeifer, Norbert

    2009-01-01

    A relative height threshold is defined to separate potential roof points from the point cloud, followed by a segmentation of these points into homogeneous areas fulfilling the defined constraints of roof planes. The normal vector of each laser point is an excellent feature to decompose the point cloud into segments describing planar patches. An object-based error assessment is performed to determine the accuracy of the presented classification. It results in 94.4% completeness and 88.4% correctness. Once all roof planes are detected in the 3D point cloud, solar potential analysis is performed for each point. Shadowing effects of nearby objects are taken into account by calculating the horizon of each point within the point cloud. Effects of cloud cover are also considered by using data from a nearby meteorological station. As a result the annual sum of the direct and diffuse radiation for each roof plane is derived. The presented method uses the full 3D information for both feature extraction and solar potential analysis, which offers a number of new applications in fields where natural processes are influenced by the incoming solar radiation (e.g., evapotranspiration, distribution of permafrost). The presented method detected fully automatically a subset of 809 out of 1,071 roof planes where the arithmetic mean of the annual incoming solar radiation is more than 700 kWh/m2. PMID:22346695
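
    As a hedged sketch of two ingredients mentioned above, a relative height threshold and per-point normal vectors, the Python fragment below estimates normals by local PCA. The neighbourhood size and threshold are illustrative assumptions, not values from the paper.

      import numpy as np
      from scipy.spatial import cKDTree

      def roof_candidates(points, ground_height, min_rel_height=2.5, k=10):
          """points: (N, 3); ground_height: (N,) terrain height under each point."""
          candidates = points[(points[:, 2] - ground_height) > min_rel_height]
          tree = cKDTree(candidates)
          _, nn = tree.query(candidates, k=k)
          normals = np.empty_like(candidates)
          for i, idx in enumerate(nn):
              nbrs = candidates[idx] - candidates[idx].mean(axis=0)
              # The right singular vector of the smallest singular value is the normal.
              _, _, vt = np.linalg.svd(nbrs, full_matrices=False)
              normals[i] = vt[-1]
          return candidates, normals   # the normals feed the planar-patch segmentation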

  18. Simulated KWAJEX Convective Systems Using a 2D and 3D Cloud Resolving Model and Their Comparisons with Radar Observations

    NASA Technical Reports Server (NTRS)

    Shie, Chung-Lin; Tao, Wei-Kuo; Simpson, Joanne

    2003-01-01

    The 1999 Kwajalein Atoll field experiment (KWAJEX), one of several major TRMM (Tropical Rainfall Measuring Mission) field experiments, successfully obtained a wealth of information and observational data on tropical convective systems over the western Central Pacific region. In this paper, clouds and convective systems that developed during three active periods (Aug 7-12, Aug 17-21, and Aug 29-Sep 13) around the Kwajalein Atoll site are simulated using both 2D and 3D Goddard Cumulus Ensemble (GCE) models. Based on the numerical results, the clouds and cloud systems are generally unorganized and short lived. These features are validated by radar observations that support the model results. Both the 2D and 3D simulated rainfall amounts and their stratiform contributions, as well as the heat, water vapor, and moist static energy budgets, are examined for the three convective episodes. Rainfall amounts are quantitatively similar between the two simulations, but the stratiform contribution is considerably larger in the 2D simulation. Regardless of dimension, for all three cases the large-scale forcing and net condensation are the two major physical processes that account for the evolution of the budgets, with surface latent heat flux and net radiation (solar and long-wave radiation) being secondary processes. Quantitative budget differences between 2D and 3D, as well as between the various episodes, will be detailed. Moreover, simulated radar signatures and Q1/Q2 fields from the three simulations are compared to each other and with radar and sounding observations.

  19. Point clouds segmentation as base for as-built BIM creation

    NASA Astrophysics Data System (ADS)

    Macher, H.; Landes, T.; Grussenmeyer, P.

    2015-08-01

    In this paper, a three-step segmentation approach is proposed in order to create 3D models from point clouds acquired by TLS inside buildings. The three scales of segmentation are floors, rooms and the planes composing the rooms. First, floor segmentation is performed based on an analysis of the point distribution along the Z axis. Then, for each floor, room segmentation is achieved by considering a slice of the point cloud at ceiling level. Finally, planes are segmented for each room, and planes corresponding to ceilings and floors are identified. Results of each step are analysed and potential improvements are proposed. Based on the segmented point clouds, the creation of as-built BIM is considered in a future work section. Not only is the classification of planes into several categories proposed, but the potential use of point clouds acquired outside buildings is also considered.
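
    A minimal sketch of the first step, floor segmentation from the point distribution along the Z axis, is shown below; the bin width and peak criterion are assumptions, not values from the paper.

      import numpy as np

      def floor_slab_heights(points, bin_width=0.05, min_fraction=0.02):
          """points: (N, 3) TLS points; returns heights of dense horizontal slabs."""
          z = points[:, 2]
          edges = np.arange(z.min(), z.max() + bin_width, bin_width)
          counts, edges = np.histogram(z, bins=edges)
          dense = counts > min_fraction * len(z)     # floors/ceilings give dense bins
          centres = 0.5 * (edges[:-1] + edges[1:])
          return centres[dense]

      # slabs = floor_slab_heights(tls_xyz)   # hypothetical indoor TLS point cloud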

  20. The point-source method for 3D reconstructions for the Helmholtz and Maxwell equations

    NASA Astrophysics Data System (ADS)

    Ben Hassen, M. F.; Erhard, K.; Potthast, R.

    2006-02-01

    We use the point-source method (PSM) to reconstruct a scattered field from its associated far field pattern. The reconstruction scheme is described and numerical results are presented for three-dimensional acoustic and electromagnetic scattering problems. We give new proofs of the algorithms, based on the Green and Stratton-Chu formulae, which are more general than with the former use of the reciprocity relation. This allows us to handle the case of limited aperture data and arbitrary incident fields. Both for 3D acoustics and electromagnetics, numerical reconstructions of the field for different settings and with noisy data are shown. For shape reconstruction in acoustics, we develop an appropriate strategy to identify areas with good reconstruction quality and combine different such regions into one joint function. Then, we show how shapes of unknown sound-soft scatterers are found as level curves of the total reconstructed field.

  1. Roof Modelling Potential of Unmanned Air Vehicle Point Clouds with Respect to Terrestrial Laser Scanning

    NASA Astrophysics Data System (ADS)

    Karakis, Serkan; Gunes Sefercik, Umut; Atalay, Can

    2016-07-01

    In parallel with the improvement of laser scanning technologies, dense point clouds, which provide a detailed description of terrain and non-terrain objects, have become indispensable for remote sensing data users. Owing to this large demand, point clouds have also started to be derived from photogrammetric images, in addition to laser scanning. Unmanned air vehicle (UAV) images are among the most preferred data for creating dense point clouds owing to their low cost and rapid, periodic acquisition. In this study, we assess the roof modelling potential of UAV point clouds by comparing three-dimensional (3D) roof models produced from UAV and terrestrial laser scanning (TLS) point clouds. In the study, the very popular low-cost action camera SJ4000 and a Faro Laser Scanner Focus3D X 330 were used to provide point clouds, and the roof of the Bulent Ecevit University Civil Aviation Academy building was used as the test object. For the validation of horizontal and vertical geolocation accuracies, standard deviation was used as the main indicator. The visual results demonstrate that the UAV roof model is largely coherent with the TLS roof model after filtering-based refinement of noisy pixels and systematic bias correction. Moreover, the horizontal geolocation accuracy is approx. 5 cm in both the X and Y directions, and the bias-corrected vertical geolocation accuracy is approx. 17 cm for zero roof slope.

  2. 3D cloud detection and tracking system for solar forecast using multiple sky imagers

    SciTech Connect

    Peng, Zhenzhou; Yu, Dantong; Huang, Dong; Heiser, John; Yoo, Shinjae; Kalb, Paul

    2015-06-23

    We propose a system for forecasting short-term solar irradiance based on multiple total sky imagers (TSIs). The system utilizes a novel method of identifying and tracking clouds in three-dimensional space and an innovative pipeline for forecasting surface solar irradiance based on the image features of clouds. First, we develop a supervised classifier to detect clouds at the pixel level and output cloud mask. In the next step, we design intelligent algorithms to estimate the block-wise base height and motion of each cloud layer based on images from multiple TSIs. Thus, this information is then applied to stitch images together into larger views, which are then used for solar forecasting. We examine the system’s ability to track clouds under various cloud conditions and investigate different irradiance forecast models at various sites. We confirm that this system can 1) robustly detect clouds and track layers, and 2) extract the significant global and local features for obtaining stable irradiance forecasts with short forecast horizons from the obtained images. Finally, we vet our forecasting system at the 32-megawatt Long Island Solar Farm (LISF). Compared with the persistent model, our system achieves at least a 26% improvement for all irradiance forecasts between one and fifteen minutes.

  3. 3D cloud detection and tracking system for solar forecast using multiple sky imagers

    DOE PAGESBeta

    Peng, Zhenzhou; Yu, Dantong; Huang, Dong; Heiser, John; Yoo, Shinjae; Kalb, Paul

    2015-06-23

    We propose a system for forecasting short-term solar irradiance based on multiple total sky imagers (TSIs). The system utilizes a novel method of identifying and tracking clouds in three-dimensional space and an innovative pipeline for forecasting surface solar irradiance based on the image features of clouds. First, we develop a supervised classifier to detect clouds at the pixel level and output cloud mask. In the next step, we design intelligent algorithms to estimate the block-wise base height and motion of each cloud layer based on images from multiple TSIs. Thus, this information is then applied to stitch images together into larger views, which are then used for solar forecasting. We examine the system’s ability to track clouds under various cloud conditions and investigate different irradiance forecast models at various sites. We confirm that this system can 1) robustly detect clouds and track layers, and 2) extract the significant global and local features for obtaining stable irradiance forecasts with short forecast horizons from the obtained images. Finally, we vet our forecasting system at the 32-megawatt Long Island Solar Farm (LISF). Compared with the persistent model, our system achieves at least a 26% improvement for all irradiance forecasts between one and fifteen minutes.

  4. Integration of Libration Point Orbit Dynamics into a Universal 3-D Autonomous Formation Flying Algorithm

    NASA Technical Reports Server (NTRS)

    Folta, David; Bauer, Frank H. (Technical Monitor)

    2001-01-01

    The autonomous formation flying control algorithm developed by the Goddard Space Flight Center (GSFC) for the New Millennium Program (NMP) Earth Observing-1 (EO-1) mission is investigated for applicability to libration point orbit formations. In the EO-1 formation-flying algorithm, control is accomplished via linearization about a reference transfer orbit with a state transition matrix (STM) computed from state inputs. The effect of libration point orbit dynamics on this algorithm architecture is explored via computation of STMs using the flight proven code, a monodromy matrix developed from a N-body model of a libration orbit, and a standard STM developed from the gravitational and coriolis effects as measured at the libration point. A comparison of formation flying Delta-Vs calculated from these methods is made to a standard linear quadratic regulator (LQR) method. The universal 3-D approach is optimal in the sense that it can be accommodated as an open-loop or closed-loop control using only state information.

  5. Historical relics visualization by fusing terrestrial laser point-clouds and aerial orthophoto

    NASA Astrophysics Data System (ADS)

    Yan, Li; Zhao, Xu; Xiang, Xin; Lu, Tie Ding; Yi, Xue Feng

    2009-10-01

    There are many large-scale historical relics in China's stone-desert districts, and all relics of this sort are threatened by wind deflation. To document the cultural heritage in detail and avoid long working times in formidable desert conditions, an accurate and fast way to record the relics' 3D information is needed. In recent years, laser scanners have offered various applications in the conservation of cultural heritage, such as static surveying, precise modeling and visualization for data acquisition purposes. Point clouds generated by terrestrial laser scanners and aerial images are both valuable data sources for the reconstruction of objects' 3D models. This study develops an approach for recording relic data with a long-range terrestrial laser scanner (Optech ILRIS-3D) and fusing the scan data with gray-level information for visualization tasks. Because capturing the full scene requires several scans, each scan is transformed individually into one global reference frame to decrease the accumulated registration errors of common ICP approaches, and the 3D point-cloud model of the historical relics is built in this way. To add texture information to the point-cloud model, the corresponding aerial orthophoto is fused with the model, because the objects have very similar texture information in the desert. Results show the efficiency and feasibility of the approach.

  6. Automatic registration of Iphone images to LASER point clouds of the urban structures using shape features

    NASA Astrophysics Data System (ADS)

    Sirmacek, B.; Lindenbergh, R. C.; Menenti, M.

    2013-10-01

    Fusion of 3D airborne laser (LIDAR) data and terrestrial optical imagery can be applied in 3D urban modeling and model updating. The most challenging aspect of the fusion procedure is registering the terrestrial optical images on the LIDAR point clouds. In this article, we propose an approach for registering these two kinds of data from different sensor sources. We use iPhone camera images, taken in front of the urban structure of interest by the application user, and high-resolution LIDAR point clouds acquired by an airborne laser sensor. After finding the photo capture position and orientation from the iPhone photograph metafile, we automatically select the area of interest in the point cloud and transform it into a range image whose grayscale intensity levels encode the distance from the image acquisition position. We benefit from local features for registering the iPhone image to the generated range image. In this article, we have applied a registration process based on local feature extraction and graph matching. Finally, the registration result is used for facade texture mapping on the 3D building surface mesh which is generated from the LIDAR point cloud. Our experimental results indicate the possible usage of the proposed algorithm framework for 3D urban map updating and enhancement purposes.
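
    The point-cloud-to-range-image step can be sketched as a simple spherical projection, as below; this is our own simplified illustration under assumed image dimensions, not the authors' implementation.

      import numpy as np

      def make_range_image(points, cam_pos, width=512, height=256):
          """Project points around cam_pos onto an azimuth/elevation grid of ranges."""
          d = points - cam_pos
          r = np.linalg.norm(d, axis=1)
          az = np.arctan2(d[:, 1], d[:, 0])                 # azimuth in [-pi, pi]
          el = np.arcsin(np.clip(d[:, 2] / r, -1.0, 1.0))   # elevation
          u = ((az + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
          v = ((el + np.pi / 2) / np.pi * (height - 1)).astype(int)
          image = np.full((height, width), np.inf)
          np.minimum.at(image, (v, u), r)                   # keep the nearest range per pixel
          image[np.isinf(image)] = 0.0
          return image   # gray levels proportional to distance, as described above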

  7. From Point Clouds to Architectural Models: Algorithms for Shape Reconstruction

    NASA Astrophysics Data System (ADS)

    Canciani, M.; Falcolini, C.; Saccone, M.; Spadafora, G.

    2013-02-01

    The use of terrestrial laser scanners in architectural survey applications has become more and more common. Raw data complexity, as produced by scanner restitution, leads to several problems in design and 3D modelling starting from point clouds. In this context we present a study on architectural sections and mathematical algorithms for their shape reconstruction, according to known or definite geometrical rules, focusing on shapes of different complexity. Each step of the semi-automatic algorithm has been developed using Mathematica software and CAD, integrating both programs in order to reconstruct a geometrical CAD model of the object. Our study is motivated by the fact that, for architectural survey, most three-dimensional modelling procedures concerning point clouds produce superabundant, but often unnecessary, information and are also very expensive in terms of CPU time, requiring increasingly sophisticated hardware and software. On the contrary, it is important to simplify/decimate the point cloud in order to recognize a particular form out of some definite geometric/architectonic shapes. Such a process consists of several steps: first, the definition of plane sections and characterization of their architecture; secondly, the construction of a continuous plane curve depending on some parameters. In the third step we allow the selection on the curve of some nodal points with given specific characteristics (symmetry, tangency conditions, shadowing exclusion, corners, …). The fourth and last step is the construction of a best shape defined by comparison with an abacus of known geometrical elements, such as moulding profiles, leading to a precise architectonical section. The algorithms have been developed and tested in very different situations and are presented in a case study of complex geometries such as some moulding profiles in the Church of San Carlo alle Quattro Fontane.

  8. Airborne Lidar Point Cloud Density Indices

    NASA Astrophysics Data System (ADS)

    Shih, P. T.; Huang, C.-M.

    2006-12-01

    Airborne lidar is useful for collecting a large volume and high density of points with three-dimensional coordinates. Among these points are terrain points, as well as points located above ground. For DEM production, the density of the terrain points is an important quality index. While the penetration rate of laser points depends on the surface type characteristics, there are also different ways to present the point density. Namely, the point density could be measured by subdividing the surveyed area into cells and then computing the ratio of the number of points in each cell to its area. In this case, there will be one density value for each cell. The other method is to construct the TIN and count the number of triangles in the cell, divided by the area of the cell. Aside from counting the number of triangles, the area of the largest triangle, or of the triangle at the 95% rank, could be used as an index as well. The TIN could also be replaced by Voronoi diagrams (Thiessen polygons), and a polygon with even density could be derived from human interpretation. The nature of these indices is discussed later in this paper. Examples of different land cover types (bare earth, built-up, low vegetation, low-density forest, and high-density forest) are extracted from point clouds collected in 2005 by ITRI under a contract from the Ministry of the Interior. It is found that all these indices are capable of reflecting the differences between land cover types. However, further investigation is necessary to determine which one is the most descriptive.
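
    The first of these indices, points per unit area per cell, can be written in a few lines of Python; the cell size and names below are illustrative only.

      import numpy as np

      def cell_density(points, cell_size=10.0):
          """points: (N, 3) terrain points; returns a dict mapping cell -> points per m^2."""
          ij = np.floor(points[:, :2] / cell_size).astype(int)
          cells, counts = np.unique(ij, axis=0, return_counts=True)
          area = cell_size ** 2
          return {tuple(c): n / area for c, n in zip(cells, counts)}

      # densities = cell_density(ground_points, cell_size=25.0)   # hypothetical usage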

  9. 3D Modeling of interactions between Jupiter’s ammonia clouds and large anticyclones

    NASA Astrophysics Data System (ADS)

    Palotai, Csaba; Dowling, Timothy E.; Fletcher, Leigh N.

    2014-04-01

    The motions of Jupiter’s tropospheric jets and vortices are made visible by its outermost clouds, which are expected to be largely composed of ammonia ice. Several groups have demonstrated that much of this dynamics can be reproduced in the vorticity fields of high-resolution models that, surprisingly, do not contain any clouds. While this reductionist approach is valuable, it has natural limitations. Here we report on numerical simulations that use the EPIC Jupiter model with a realistic ammonia-cloud microphysics module, focusing on how observable ammonia clouds interact with the Great Red Spot (GRS) and Oval BA. Maps of column-integrated ammonia-cloud density in the model resemble visible-band images of Jupiter and potential-vorticity maps. On the other hand, vertical cross sections through the model vortices reveal considerable heterogeneity in cloud density values between pressure levels in the vicinity of large anticyclones, and interestingly, ammonia snow appears occasionally. Away from the vortices, the ammonia clouds form at the levels expected from traditional one-dimensional models, and inside the vortices, the clouds are elevated and thick, in agreement with Galileo NIMS observations. However, rather than gathering slowly into place as a result of Jupiter’s weak secondary circulation, the ammonia clouds instead form high and thick inside the large anticyclones as soon as the cloud microphysics module is enabled. This suggests that any weak secondary circulation that might be present in Jupiter’s anticyclones, such as may arise because of radiative damping of their temperature anomalies, may have little or no direct effect on the altitude or thickness of the ammonia clouds. Instead, clouds form at those locations because the top halves of large anticyclones must be cool for the vortex to be able to fit under the tropopause, which is a primary-circulation, thermal-wind-shear effect of the stratification, not a secondary-circulation thermal feature

  10. Geometric Point Quality Assessment for the Automated, Markerless and Robust Registration of Unordered Tls Point Clouds

    NASA Astrophysics Data System (ADS)

    Weinmann, M.; Jutzi, B.

    2015-08-01

    The faithful 3D reconstruction of urban environments is an important prerequisite for tasks such as city modeling, scene interpretation or urban accessibility analysis. Typically, a dense and accurate 3D reconstruction is acquired with terrestrial laser scanning (TLS) systems by capturing several scans from different locations, and the respective point clouds have to be aligned correctly in a common coordinate frame. In this paper, we present an accurate and robust method for a keypoint-based registration of unordered point clouds via projective scan matching. Thereby, we involve a consistency check which removes unreliable feature correspondences and thus increases the ratio of inlier correspondences which, in turn, leads to a faster convergence of the RANSAC algorithm towards a suitable solution. This consistency check is fully generic and it not only favors geometrically smooth object surfaces, but also those object surfaces with a reasonable incidence angle. We demonstrate the performance of the proposed methodology on a standard TLS benchmark dataset and show that a highly accurate and robust registration may be achieved in a fully automatic manner without using artificial markers.

  11. 3D Monte Carlo simulation of solar radiance in the clear-sky and low-cloud atmosphere for retrieval of aerosol and cloud characteristics

    NASA Astrophysics Data System (ADS)

    Zhuravleva, Tatiana; Bedareva, Tatiana; Nasrtdinov, Ilmir

    As is well known, the spectral measurements of direct and diffuse solar radiation can be used to retrieve the optical and microphysical characteristics of atmospheric aerosol and clouds. Most radiation calculation methods used to solve the inverse problems are implemented under the assumption of horizontal homogeneity of the atmosphere (clear-sky and overcast conditions). However, it is recognized that the 3D effects of clouds have a significant impact on the transfer of solar radiation in the atmosphere, which can be a source of errors in the retrieval of aerosol and cloud properties. In this work, we present Monte Carlo algorithms for calculating the angular structure of diffuse radiation in the molecular-aerosol atmosphere and in the presence of an isolated cloud. The simulation of radiative characteristics with a specified spectral resolution is performed in a spherical model of the atmosphere for the conditions of observations at the Earth’s surface and at the top of the atmosphere. The cloud is approximated by an inverted paraboloid. Molecular absorption is accounted for on the basis of an approximation of the transmission function by short exponential series (k-distribution method). The specific features of the radiative transfer caused by the 3D effects of clouds are considered depending on the cloud's location and size, the sensing scheme, and the illumination conditions. The simulation results for the brightness fields in the clear sky and in the presence of an isolated cloud are compared. This work was supported in part by the Russian Fund for Basic Research (through grant no. 12-05-00169).

  12. Comparison of clinical bracket point registration with 3D laser scanner and coordinate measuring machine

    PubMed Central

    Nouri, Mahtab; Farzan, Arash; Baghban, Ali Reza Akbarzadeh; Massudi, Reza

    2015-01-01

    OBJECTIVE: The aim of the present study was to assess the diagnostic value of a laser scanner developed to determine the coordinates of clinical bracket points and to compare the results with those of a coordinate measuring machine (CMM). METHODS: This diagnostic experimental study was conducted on maxillary and mandibular orthodontic study casts of 18 adults with normal Class I occlusion. First, the coordinates of the bracket points were measured on all casts by a CMM. Then, the three-dimensional coordinates (X, Y, Z) of the bracket points were measured on the same casts by a 3D laser scanner designed at Shahid Beheshti University, Tehran, Iran. The validity and reliability of each system were assessed by means of the intraclass correlation coefficient (ICC) and Dahlberg's formula. RESULTS: The difference between the mean dimension and the actual value for the CMM was 0.0066 mm (95% CI: 69.98340, 69.99140). The mean difference for the laser scanner was 0.107 ± 0.133 mm (95% CI: -0.002, 0.24). In each method, differences were not significant. The ICC comparing the two methods was 0.998 for the X coordinate and 0.996 for the Y coordinate; the mean difference for coordinates recorded in the entire arch and for each tooth was 0.616 mm. CONCLUSION: The accuracy of clinical bracket point coordinates measured by the laser scanner was equal to that of the CMM. The mean difference in measurements was within the range of operator errors. PMID:25741826

  13. Dense point-cloud representation of a scene using monocular vision

    NASA Astrophysics Data System (ADS)

    Diskin, Yakov; Asari, Vijayan

    2015-03-01

    We present a three-dimensional (3-D) reconstruction system designed to support various autonomous navigation applications. The system presented focuses on the 3-D reconstruction of a scene using only a single moving camera. Utilizing video frames captured at different points in time allows us to determine the depths of a scene. In this way, the system can be used to construct a point-cloud model of its unknown surroundings. We present the step-by-step methodology and analysis used in developing the 3-D reconstruction technique. We present a reconstruction framework that generates a primitive point cloud, which is computed based on feature matching and depth triangulation analysis. To populate the reconstruction, we utilized optical flow features to create an extremely dense representation model. With the third algorithmic modification, we introduce the addition of the preprocessing step of nonlinear single-image super resolution. With this addition, the depth accuracy of the point cloud, which relies on precise disparity measurement, has significantly increased. Our final contribution is an additional postprocessing step designed to filter noise points and mismatched features unveiling the complete dense point-cloud representation (DPR) technique. We measure the success of DPR by evaluating the visual appeal, density, accuracy, and computational expense and compare with two state-of-the-art techniques.
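
    A minimal sketch of the depth-triangulation step is shown below, assuming two hypothetical 3 x 4 projection matrices and one matched pixel per frame; the linear (DLT) solution is a standard way to recover the 3D point, not necessarily the authors' exact formulation.

      import numpy as np

      def triangulate(P1, P2, x1, x2):
          """Linear (DLT) triangulation of one matched point from two views.
          P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel coordinates."""
          A = np.vstack([
              x1[0] * P1[2] - P1[0],
              x1[1] * P1[2] - P1[1],
              x2[0] * P2[2] - P2[0],
              x2[1] * P2[2] - P2[1],
          ])
          _, _, vt = np.linalg.svd(A)
          X = vt[-1]
          return X[:3] / X[3]                      # dehomogenize

      # toy usage: identity camera and a camera translated along x
      K = np.diag([800.0, 800.0, 1.0]); K[0, 2] = 320; K[1, 2] = 240
      P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
      P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])
      Xtrue = np.array([0.3, -0.1, 4.0, 1.0])
      x1 = P1 @ Xtrue; x1 = x1[:2] / x1[2]
      x2 = P2 @ Xtrue; x2 = x2[:2] / x2[2]
      print(triangulate(P1, P2, x1, x2))           # ~ [0.3, -0.1, 4.0]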

  14. From Point Cloud to Bim: a Survey of Existing Approaches

    NASA Astrophysics Data System (ADS)

    Hichri, N.; Stefani, C.; De Luca, L.; Veron, P.; Hamon, G.

    2013-07-01

    In order to handle projects of restoration, documentation and maintenance of historical buildings more efficiently, it is essential to rely on a 3D enriched model of the building. Today, the concept of Building Information Modelling (BIM) is widely adopted for the semantization of digital mockups, yet little research has focused on the value of this concept in the field of cultural heritage. In addition, historical buildings already exist, so it is necessary to develop an effective approach, based on a first step of building survey, to produce a semantically enriched digital model. For these reasons, this paper focuses on the chain starting with a point cloud and leading to a well-structured final BIM, and proposes an analysis and a survey of existing approaches on the topics of acquisition, segmentation and BIM creation. It also presents a critical analysis of the application of this chain in the field of cultural heritage.

  15. Segmentation and Reconstruction of Buildings with Aerial Oblique Photography Point Clouds

    NASA Astrophysics Data System (ADS)

    Liu, P.; Li, Y. C.; Hu, W.; Ding, X. B.

    2015-06-01

    Oblique photography, as an excellent method for 3-D city model construction, has gained wide recognition. Oblique and vertical images with high overlaps and different viewing angles can produce large volumes of dense matching point cloud data with spectral information. This paper presents a method for building reconstruction with stereo matching dense point clouds from aerial oblique images, which includes segmentation of buildings and reconstruction of building roofs. We summarize the characteristics of stereo matching point clouds from aerial oblique images and outline the problems with existing methods. Then we present the method for segmentation of building roofs, which is based on colors and geometric derivatives such as normals and curvature. Finally, a building reconstruction approach is developed based on the geometric relationships. The experiment and analysis show that the methods are effective for building reconstruction with stereo matching point clouds from aerial oblique images.

  16. 3D Point Cloud Model Colorization by Dense Registration of Digital Images

    NASA Astrophysics Data System (ADS)

    Crombez, N.; Caron, G.; Mouaddib, E.

    2015-02-01

    Architectural heritage is a historic and artistic property which has to be protected, preserved, restored and shown to the public. Modern tools like 3D laser scanners are increasingly used in heritage documentation. Most of the time, the 3D laser scanner is complemented by a digital camera which is used to enrich the accurate geometric information with the scanned objects' colors. However, the photometric quality of the acquired point clouds is generally rather low because of several problems presented below. We propose an accurate method for registering digital images acquired from any viewpoint on point clouds, which is a crucial step for a good colorization by color projection. We express this image-to-geometry registration as a pose estimation problem. The camera pose is computed using the entire image intensities under a photometric visual and virtual servoing (VVS) framework. The camera extrinsic and intrinsic parameters are automatically estimated. Because we estimate the intrinsic parameters, we do not need any information about the camera that took the digital image. Finally, when the point cloud model and the digital image are correctly registered, we project the 3D model into the digital image frame and assign new colors to the visible points. The performance of the approach is demonstrated in simulations and real experiments on indoor and outdoor datasets of the cathedral of Amiens, which highlight the success of our method, leading to point clouds with better photometric quality and resolution.
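
    Once the pose is known, the colour-projection step amounts to projecting each visible 3D point into the registered image and sampling the pixel colour. A minimal sketch is given below, assuming a hypothetical pinhole camera with intrinsics K and pose (R, t) and ignoring occlusion/visibility handling, which the full method must address.

      import numpy as np

      def colorize(points, image, K, R, t):
          """Assign an RGB colour to each 3D point by projecting it into `image`.
          points: Nx3 world coordinates; image: HxWx3; K: 3x3 intrinsics; R, t: world-to-camera pose."""
          cam = points @ R.T + t                   # world -> camera coordinates
          in_front = cam[:, 2] > 0
          uvw = cam @ K.T
          uv = uvw[:, :2] / uvw[:, 2:3]            # perspective division
          u = np.round(uv[:, 0]).astype(int)
          v = np.round(uv[:, 1]).astype(int)
          h, w = image.shape[:2]
          valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
          colors = np.zeros((len(points), 3), dtype=image.dtype)
          colors[valid] = image[v[valid], u[valid]]
          return colors, valid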

  17. Automatic registration of terrestrial point clouds based on panoramic reflectance images and efficient BaySAC

    NASA Astrophysics Data System (ADS)

    Kang, Zhizhong

    2013-10-01

    This paper presents a new approach to automatic registration of terrestrial laser scanning (TLS) point clouds utilizing a novel robust estimation method, an efficient BaySAC (BAYes SAmpling Consensus). The proposed method directly generates reflectance images from 3D point clouds, and then extracts keypoints with the SIFT algorithm to identify corresponding image points. The 3D corresponding points, from which transformation parameters between point clouds are computed, are acquired by mapping the 2D ones onto the point cloud. To remove falsely accepted correspondences, we implement a conditional sampling method that selects the n data points with the highest inlier probabilities as a hypothesis set and updates the inlier probability of each data point using a simplified Bayes' rule, for the purpose of improving computational efficiency. The prior probability is estimated by verifying the distance invariance between correspondences. The proposed approach is tested on four data sets acquired by three different scanners. The results show that, compared with RANSAC, BaySAC requires fewer iterations and lower computational cost when the hypothesis set is contaminated with more outliers. The registration results also indicate that the proposed algorithm achieves high registration accuracy on all experimental datasets.

  18. Status report on the 'Merging' of the Electron-Cloud Code POSINST with the 3-D Accelerator PIC CODE WARP

    SciTech Connect

    Vay, J.-L.; Furman, M.A.; Azevedo, A.W.; Cohen, R.H.; Friedman, A.; Grote, D.P.; Stoltz, P.H.

    2004-04-19

    We have integrated the electron-cloud code POSINST [1] with WARP [2]--a 3-D parallel Particle-In-Cell accelerator code developed for Heavy Ion Inertial Fusion--so that the two can interoperate. Both codes are run in the same process, communicate through a Python interpreter (already used in WARP), and share certain key arrays (so far, particle positions and velocities). Currently, POSINST provides primary and secondary sources of electrons, beam bunch kicks, a particle mover, and diagnostics. WARP provides the field solvers and diagnostics. Secondary emission routines are provided by the Tech-X package CMEE.

  19. Simultaneous colour visualizations of multiple ALS point cloud attributes for land cover and vegetation analysis

    NASA Astrophysics Data System (ADS)

    Zlinszky, András; Schroiff, Anke; Otepka, Johannes; Mandlburger, Gottfried; Pfeifer, Norbert

    2014-05-01

    LIDAR point clouds hold valuable information for land cover and vegetation analysis, not only in the spatial distribution of the points but also in their various attributes. However, LIDAR point clouds are rarely used for visual interpretation, since for most users, the point cloud is difficult to interpret compared to passive optical imagery. Meanwhile, point cloud viewing software is available allowing interactive 3D interpretation, but typically only one attribute at a time. This results in a large number of points with the same colour, crowding the scene and often obscuring detail. We developed a scheme for mapping information from multiple LIDAR point attributes to the Red, Green, and Blue channels of a widely used LIDAR data format, which are otherwise mostly used to add information from imagery to create "photorealistic" point clouds. The possible combinations of parameters are therefore represented in a wide range of colours, but relative differences in individual parameter values of points can be well understood. The visualization was implemented in OPALS software, using a simple and robust batch script, and is viewer independent since the information is stored in the point cloud data file itself. In our case, the following colour channel assignment delivered best results: Echo amplitude in the Red, echo width in the Green and normalized height above a Digital Terrain Model in the Blue channel. With correct parameter scaling (but completely without point classification), points belonging to asphalt and bare soil are dark red, low grassland and crop vegetation are bright red to yellow, shrubs and low trees are green and high trees are blue. Depending on roof material and DTM quality, buildings are shown from red through purple to dark blue. Erroneously high or low points, or points with incorrect amplitude or echo width usually have colours contrasting from terrain or vegetation. This allows efficient visual interpretation of the point cloud in planar
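
    A minimal sketch of such a colour-channel assignment is shown below, assuming per-point arrays of echo amplitude, echo width and normalized height above the DTM are already available; the scaling bounds are placeholders to be tuned per dataset, not the values used by the authors.

      import numpy as np

      def attributes_to_rgb(amplitude, echo_width, height_above_dtm,
                            amp_range=(0.0, 400.0),
                            width_range=(1.0, 10.0),
                            height_range=(0.0, 30.0)):
          """Map three LiDAR point attributes to 8-bit R, G, B channels."""
          def scale(x, lo, hi):
              return np.clip((x - lo) / (hi - lo), 0.0, 1.0) * 255.0
          r = scale(amplitude, *amp_range)              # echo amplitude -> Red
          g = scale(echo_width, *width_range)           # echo width -> Green
          b = scale(height_above_dtm, *height_range)    # height above DTM -> Blue
          return np.column_stack((r, g, b)).astype(np.uint8)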

  20. Observation of superconductivity induced by a point contact on 3D Dirac semimetal Cd3As2 crystals.

    PubMed

    Wang, He; Wang, Huichao; Liu, Haiwen; Lu, Hong; Yang, Wuhao; Jia, Shuang; Liu, Xiong-Jun; Xie, X C; Wei, Jian; Wang, Jian

    2016-01-01

    Three-dimensional (3D) Dirac semimetals, which possess 3D linear dispersion in the electronic structure as a bulk analogue of graphene, have lately generated widespread interest in both materials science and condensed matter physics. Recently, crystalline Cd3As2 has been proposed and proved to be a 3D Dirac semimetal that can survive in the atmosphere. Here, by using point contact spectroscopy measurements, we observe exotic superconductivity around the point contact region on the surface of Cd3As2 crystals. The zero-bias conductance peak (ZBCP) and double conductance peaks (DCPs) symmetric around zero bias suggest p-wave-like unconventional superconductivity. Considering the topological properties of 3D Dirac semimetals, our findings may indicate that Cd3As2 crystals under certain conditions could be topological superconductors, which are predicted to support Majorana zero modes or gapless Majorana edge/surface modes in the boundary depending on the dimensionality of the material. PMID:26524129

  1. LIVAS: a 3-D multi-wavelength aerosol/cloud climatology based on CALIPSO and EARLINET

    NASA Astrophysics Data System (ADS)

    Amiridis, V.; Marinou, E.; Tsekeri, A.; Wandinger, U.; Schwarz, A.; Giannakaki, E.; Mamouri, R.; Kokkalis, P.; Binietoglou, I.; Solomos, S.; Herekakis, T.; Kazadzis, S.; Gerasopoulos, E.; Balis, D.; Papayannis, A.; Kontoes, C.; Kourtidis, K.; Papagiannopoulos, N.; Mona, L.; Pappalardo, G.; Le Rille, O.; Ansmann, A.

    2015-01-01

    We present LIVAS, a 3-dimensional multi-wavelength global aerosol and cloud optical climatology, optimized for use in future space-based lidar end-to-end simulations of realistic atmospheric scenarios as well as in retrieval algorithm testing activities. The LIVAS database provides averaged profiles of aerosol optical properties for the potential space-borne laser operating wavelengths of 355, 532, 1064, 1570 and 2050 nm and of cloud optical properties at the wavelength of 532 nm. The global climatology is based on CALIPSO observations at 532 and 1064 nm and on aerosol-type-dependent spectral conversion factors for backscatter and extinction, derived from EARLINET ground-based measurements for the UV and from scattering calculations for the IR wavelengths, using a combination of input data from AERONET, suitable aerosol models and recent literature. The required spectral conversion factors are calculated for each of the CALIPSO aerosol types and are applied to CALIPSO extinction and backscatter data according to the aerosol type retrieved by the CALIPSO aerosol classification scheme. A cloud climatology based on CALIPSO measurements at 532 nm is also provided, neglecting wavelength conversion because of the approximately neutral scattering behavior of clouds across the spectral range of LIVAS. Averages of particle linear depolarization ratio profiles at 532 nm are provided as well. Finally, vertical distributions for a set of selected scenes of specific atmospheric phenomena (e.g., dust outbreaks, volcanic eruptions, wild fires, polar stratospheric clouds) are analyzed and spectrally converted so as to be used as case studies for space-borne lidar performance assessments. The final global climatology includes 4 years (1 January 2008-31 December 2011) of time-averaged CALIPSO data on a uniform grid of 1×1 degree with the original high vertical resolution of CALIPSO, in order to ensure realistic simulations of the atmospheric variability in lidar end-to-end simulations.

  2. 3D dust clouds (Yukawa Balls) in strongly coupled dusty plasmas

    SciTech Connect

    Melzer, A.; Passvogel, M.; Miksch, T.; Ikkurthi, V. R.; Schneider, R.; Block, D.; Piel, A.

    2010-06-16

    Three-dimensional finite systems of charged dust particles confined to concentric spherical shells in a dusty plasma, so-called 'Yukawa balls', have been studied with respect to their static and dynamic properties. Here, we review the charging of particles in a dusty plasma discharge by computer simulations and the respective particle arrangements. The normal mode spectrum of Yukawa balls is measured from the 3D thermal Brownian motion of the dust particles around their equilibrium positions.

  3. 3D Modeling of Components of a Garden by Using Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Kumazakia, R.; Kunii, Y.

    2016-06-01

    Laser measurement is currently applied to several tasks such as plumbing management, road investigation through mobile mapping systems, and elevation model utilization through airborne LiDAR. Effective laser measurement methods have been well-documented in civil engineering, but few attempts have been made to establish equally effective methods in landscape engineering. By using point cloud data acquired through laser measurement, the aesthetic landscaping of Japanese gardens can be enhanced. This study focuses on simple landscape simulations for pruning and rearranging trees as well as rearranging rocks, lanterns, and other garden features by using point cloud data. However, such simulations lack concreteness. Therefore, this study considers the construction of a library of garden features extracted from point cloud data. The library would serve as a resource for creating new gardens and simulating gardens prior to conducting repairs. Extracted garden features are imported as 3ds Max objects, and realistic 3D models are generated by using a material editor system. As further work toward the publication of a 3D model library, file formats for tree crowns and trunks should be adjusted. Moreover, reducing the size of created models is necessary. Models created using point cloud data are informative because simply shaped garden features such as trees are often seen in the 3D industry.

  4. Voxelization algorithms for geospatial applications: Computational methods for voxelating spatial datasets of 3D city models containing 3D surface, curve and point data models.

    PubMed

    Nourian, Pirouz; Gonçalves, Romulo; Zlatanova, Sisi; Ohori, Ken Arroyo; Vu Vo, Anh

    2016-01-01

    Voxel representations have been used for years in scientific computation and medical imaging. The main focus of our research is to provide easy access to methods for making large-scale voxel models of the built environment for environmental modelling studies, while ensuring they are spatially correct, meaning they correctly represent topological and semantic relations among objects. In this article, we present algorithms that generate voxels (volumetric pixels) out of point cloud, curve, or surface objects. The algorithms for voxelization of surfaces and curves are a customization of the topological voxelization approach [1]; we additionally provide an extension of this method for voxelization of point clouds. The developed software has the following advantages: • it provides easy management of connectivity levels in the resulting voxels; • it is not dependent on any external library except for primitive types and constructs, therefore it is easy to integrate in any application; • one of the algorithms is implemented in C++ and C for platform independence and efficiency. PMID:27408832
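
    A minimal sketch of point-cloud voxelization (the extension described in the article) is given below, assuming an N x 3 NumPy array and a uniform voxel size; the topological-voxelization details for curves and surfaces are not reproduced here.

      import numpy as np

      def voxelize_points(points, voxel_size=0.5):
          """Return the integer indices of occupied voxels and a boolean occupancy grid."""
          origin = points.min(axis=0)
          idx = np.floor((points - origin) / voxel_size).astype(int)
          idx = np.unique(idx, axis=0)                     # one entry per occupied voxel
          grid = np.zeros(idx.max(axis=0) + 1, dtype=bool)
          grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
          return idx, grid, origin

      # toy usage
      pts = np.random.rand(10000, 3) * 10.0
      occupied, grid, origin = voxelize_points(pts, voxel_size=1.0)
      print(occupied.shape[0], "occupied voxels out of", grid.size)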

  5. Integration of Point Clouds Originated from Laser Scanner and Photogrammetric Images for Visualization of Complex Details of Historical Buildings

    NASA Astrophysics Data System (ADS)

    Altuntas, C.

    2015-02-01

    Three-dimensional (3D) models of historical buildings are created for their documentation and virtual presentation. Laser scanning and photogrammetry are extensively used for these purposes. The selection of the method to be used in a three-dimensional modelling study depends on the scale and shape of the object, and also on the applicability of the method. Laser scanners are high-cost instruments, whereas cameras are low-cost instruments, and off-the-shelf cameras can be used for taking the photogrammetric images. The camera images the object details hand-held, while the laser scanner makes ground-based measurements. A laser scanner collects high-density spatial data from the measurement area in a short time. On the other hand, image-based 3D (IB3D) measurement uses images to create 3D point cloud data; the image matching and the creation of the point cloud can be done automatically. Historical buildings contain many complex details. Thus, not all details can be measured by a terrestrial laser scanner (TLS), because the details occlude one another. In particular, artefacts with complex shapes cannot be measured in full detail and cause occlusions in the point cloud model. However, it is possible to record photogrammetric images and create an IB3D point cloud for these areas. Thus an occlusion-free 3D model is created by the integration of point clouds originating from the TLS and photogrammetric images. In this study, the usability of laser scanning in conjunction with image-based modelling for the creation of an occlusion-free three-dimensional point cloud model of a historical building was evaluated. The IB3D point cloud was created in the areas that could not be measured by TLS. Then the laser scanning and IB3D point clouds were integrated in a common coordinate system. The registration of the point clouds was performed with the iterative closest point (ICP) and georeferencing methods. Accuracy of the registration was evaluated by convergency and its

  6. Comparison of ZEB1 and Leica C10 Indoor Laser Scanning Point Clouds

    NASA Astrophysics Data System (ADS)

    Sirmacek, Beril; Shen, Yueqian; Lindenbergh, Roderik; Zlatanova, Sisi; Diakite, Abdoulaye

    2016-06-01

    We present a comparison of point cloud generation and data quality for the Zebedee (Zeb1) and Leica C10 devices used in the same building interior. The two sensors come with different practical and technical advantages, and, as can be expected, these advantages come with some drawbacks. Therefore, depending on the requirements of the project, it is important to know what to expect from different sensors. In this paper, we provide a detailed analysis of the point clouds of the same room interior acquired with the Zeb1 and Leica C10 sensors. First, it is visually assessed how different features appear in both the Zeb1 and Leica C10 point clouds. Next, a quantitative analysis is given by comparing local point density, local noise level and the stability of local normals. Finally, a simple 3D room plan is extracted from both the Zeb1 and the Leica C10 point clouds, and the lengths of constructed line segments connecting corners of the room are compared. The results show that the Zeb1 is far superior in ease of data acquisition: no heavy handling, hardly any measurement planning and no point cloud registration are required from the operator. The resulting point cloud has a quality in the order of centimeters, which is fine for generating a 3D interior model of a building. Our results also clearly show that fine details of, for example, ornaments are invisible in the Zeb1 data. If point clouds with a quality in the order of millimeters are required, a high-end laser scanner like the Leica C10 is still needed, in combination with a more sophisticated, time-consuming and elaborate data acquisition and processing approach.
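
    Two of the quantitative measures compared here can be sketched as follows, assuming a hypothetical fixed-radius neighbourhood: local point density as neighbours per sphere volume, and local noise level as the RMS distance of neighbours to a locally fitted plane. This is only one reasonable definition, not necessarily the one used by the authors.

      import numpy as np
      from scipy.spatial import cKDTree

      def local_density_and_noise(points, radius=0.05):
          """Per-point density (neighbours / sphere volume) and planar-fit noise estimate."""
          tree = cKDTree(points)
          volume = 4.0 / 3.0 * np.pi * radius ** 3
          density = np.empty(len(points))
          noise = np.full(len(points), np.nan)
          for i, nbrs in enumerate(tree.query_ball_point(points, r=radius)):
              density[i] = len(nbrs) / volume
              if len(nbrs) >= 4:
                  nb = points[nbrs] - points[nbrs].mean(axis=0)
                  # smallest eigenvalue of the covariance = variance normal to the local plane
                  eigvals = np.linalg.eigvalsh(nb.T @ nb / len(nbrs))
                  noise[i] = np.sqrt(eigvals[0])
          return density, noise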

  7. Registration algorithm of point clouds based on multiscale normal features

    NASA Astrophysics Data System (ADS)

    Lu, Jun; Peng, Zhongtao; Su, Hang; Xia, GuiHua

    2015-01-01

    The point cloud registration technology for obtaining a three-dimensional digital model is widely applied in many areas. To improve the accuracy and speed of point cloud registration, a registration method based on multiscale normal vectors is proposed. The proposed registration method mainly includes three parts: the selection of key points, the calculation of feature descriptors, and the determination and optimization of correspondences. First, key points are selected from the point cloud based on the changes in magnitude of multiscale curvatures obtained by using principal component analysis. Then the feature descriptor of each key point is proposed, which consists of 21 elements based on multiscale normal vectors and curvatures. The correspondences in a pair of point clouds are determined according to the descriptor similarity of key points in the source point cloud and target point cloud. Correspondences are optimized by using a random sampling consistency algorithm and clustering technology. Finally, singular value decomposition is applied to the optimized correspondences so that the rigid transformation matrix between the two point clouds is obtained. Experimental results show that the proposed point cloud registration algorithm has a faster calculation speed, higher registration accuracy, and better antinoise performance.
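
    The key-point selection step can be sketched as below, using the PCA surface-variation measure at several neighbourhood radii as a curvature proxy and keeping the points whose value changes most across scales. This is one plausible reading of the criterion, not the authors' exact 21-element descriptor; the radii and the retained fraction are assumptions.

      import numpy as np
      from scipy.spatial import cKDTree

      def surface_variation(points, radius):
          """lambda_min / (lambda_0 + lambda_1 + lambda_2) per point (a curvature proxy)."""
          tree = cKDTree(points)
          var = np.zeros(len(points))
          for i, nbrs in enumerate(tree.query_ball_point(points, r=radius)):
              if len(nbrs) >= 4:
                  nb = points[nbrs] - points[nbrs].mean(axis=0)
                  eig = np.linalg.eigvalsh(nb.T @ nb / len(nbrs))   # ascending eigenvalues
                  var[i] = eig[0] / eig.sum()
          return var

      def select_keypoints(points, radii=(0.05, 0.1, 0.2), top_fraction=0.05):
          """Keep points whose curvature proxy changes most across the chosen scales."""
          curv = np.stack([surface_variation(points, r) for r in radii])
          change = curv.max(axis=0) - curv.min(axis=0)
          threshold = np.quantile(change, 1.0 - top_fraction)
          return np.flatnonzero(change >= threshold)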

  8. SSM-HPC: Front View Gait Recognition Using Spherical Space Model with Human Point Clouds

    NASA Astrophysics Data System (ADS)

    Ryu, Jegoon; Kamata, Sei-Ichiro; Ahrary, Alireza

    In this paper, we propose a novel gait recognition framework - Spherical Space Model with Human Point Clouds (SSM-HPC) - to recognize the front view of human gait. A new gait representation - Marching in Place (MIP) gait - is also introduced, which preserves the spatiotemporal characteristics of an individual's gait manner. In comparison with previous studies on gait recognition, which usually use human silhouette images from image sequences, this research applies three-dimensional (3D) point cloud data of the human body obtained from a stereo camera. The proposed framework exhibits gait recognition rates superior to those of other gait recognition methods.

  9. Imaging open-path Fourier transform infrared spectrometer for 3D cloud profiling

    NASA Astrophysics Data System (ADS)

    Rentz Dupuis, Julia; Mansur, David J.; Vaillancourt, Robert; Carlson, David; Evans, Thomas; Schundler, Elizabeth; Todd, Lori; Mottus, Kathleen

    2009-05-01

    OPTRA is developing an imaging open-path Fourier transform infrared (I-OP-FTIR) spectrometer for 3D profiling of chemical and biological agent simulant plumes released into test ranges and chambers. An array of I-OP-FTIR instruments positioned around the perimeter of the test site, in concert with advanced spectroscopic algorithms, enables real time tomographic reconstruction of the plume. The approach is intended as a referee measurement for test ranges and chambers. This Small Business Technology Transfer (STTR) effort combines the instrumentation and spectroscopic capabilities of OPTRA, Inc. with the computed tomographic expertise of the University of North Carolina, Chapel Hill.

  10. The algorithm to generate color point-cloud with the registration between panoramic image and laser point-cloud

    NASA Astrophysics Data System (ADS)

    Zeng, Fanyang; Zhong, Ruofei

    2014-03-01

    A laser point cloud contains only intensity information, so for visual interpretation it is necessary to obtain color information from another sensor. Cameras can provide texture, color, and other information about the corresponding object. Points with the color information of corresponding pixels in digital images can be used to generate a color point-cloud, which aids the visualization, classification and modeling of point-clouds. Different types of digital cameras are used in different Mobile Measurement Systems (MMS), and the principles and processes for generating color point-clouds in different systems are not the same. The most prominent feature of panoramic images is their 360-degree field of view in the horizontal direction, which captures as much of the image information around the camera as possible. In this paper, we introduce a method to generate a color point-cloud from a panoramic image and a laser point-cloud, and deduce the equation of the correspondence between points in panoramic images and laser point-clouds. The fusion of the panoramic image and the laser point-cloud is based on the collinearity of three points (the center of the omnidirectional multi-camera system, the image point on the sphere, and the object point). The experimental results show that the proposed algorithm and formulae are correct.
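
    For an ideal equirectangular panorama, the collinearity mapping reduces to converting the point direction into longitude/latitude and then into pixel coordinates. The sketch below assumes the point is already expressed in the panorama's local frame and that the panorama is a perfect equirectangular image; real multi-camera heads need the calibrated per-camera geometry derived in the paper.

      import numpy as np

      def point_to_panorama_pixel(point_local, pano_width, pano_height):
          """Map a 3D point (in the panorama's local frame) to equirectangular pixel coordinates."""
          x, y, z = point_local
          r = np.sqrt(x * x + y * y + z * z)
          lon = np.arctan2(y, x)                    # azimuth, -pi..pi
          lat = np.arcsin(z / r)                    # elevation, -pi/2..pi/2
          u = (lon / (2 * np.pi) + 0.5) * pano_width
          v = (0.5 - lat / np.pi) * pano_height
          return u, v

      # toy usage for an 8000 x 4000 panorama
      print(point_to_panorama_pixel(np.array([10.0, 5.0, 2.0]), 8000, 4000))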

  11. Visualisation of Complex 3d City Models on Mobile Webbrowsers Using Cloud-Based Image Provisioning

    NASA Astrophysics Data System (ADS)

    Christen, M.; Nebiker, S.

    2015-08-01

    Rendering large city models with high polygon counts and a vast amount of textures at interactive frame rates is a difficult, sometimes impossible, task, as it depends heavily on the client hardware, which is often insufficient even when out-of-core rendering techniques and level-of-detail approaches are used. Rendering complex city models on mobile devices is even more challenging. An approach for rendering and caching very large city models in the cloud using ray-tracing-based image provisioning is introduced. This allows large scenes, including those viewed on mobile devices, to be rendered efficiently. With this approach, it is possible to render cities with a nearly unlimited number of polygons and textures.

  12. 3D Building Reconstruction from LIDAR Point Clouds by Adaptive Dual Contouring

    NASA Astrophysics Data System (ADS)

    Orthuber, E.; Avbelj, J.

    2015-03-01

    This paper presents a novel workflow for data-driven building reconstruction from Light Detection and Ranging (LiDAR) point clouds. The method comprises building extraction, a detailed roof segmentation using region growing with adaptive thresholds, segment boundary creation, and a structural 3D building reconstruction approach using adaptive 2.5D Dual Contouring. First, a 2D-grid is overlain on the segmented point cloud. Second, in each grid cell 3D vertices of the building model are estimated from the corresponding LiDAR points. Then, the number of 3D vertices is reduced in a quad-tree collapsing procedure, and the remaining vertices are connected according to their adjacency in the grid. Roof segments are represented by a Triangular Irregular Network (TIN) and are connected to each other by common vertices or - at height discrepancies - by vertical walls. Resulting 3D building models show a very high accuracy and level of detail, including roof superstructures such as dormers. The workflow is tested and evaluated for two data sets, using the evaluation method and test data of the "ISPRS Test Project on Urban Classification and 3D Building Reconstruction" (Rottensteiner et al., 2012). Results show that the proposed method is comparable with the state of the art approaches, and outperforms them regarding undersegmentation and completeness of the scene reconstruction.

  13. Imaging open-path Fourier transform infrared spectrometer for 3D cloud profiling

    NASA Astrophysics Data System (ADS)

    Rentz Dupuis, Julia; Mansur, David J.; Vaillancourt, Robert; Carlson, David; Evans, Thomas; Schundler, Elizabeth; Todd, Lori; Mottus, Kathleen

    2010-04-01

    OPTRA has developed an imaging open-path Fourier transform infrared (I-OP-FTIR) spectrometer for 3D profiling of chemical and biological agent simulant plumes released into test ranges and chambers. An array of I-OP-FTIR instruments positioned around the perimeter of the test site, in concert with advanced spectroscopic algorithms, enables real time tomographic reconstruction of the plume. The approach is intended as a referee measurement for test ranges and chambers. This Small Business Technology Transfer (STTR) effort combines the instrumentation and spectroscopic capabilities of OPTRA, Inc. with the computed tomographic expertise of the University of North Carolina, Chapel Hill. In this paper, we summarize the design and build and detail system characterization and test of a prototype I-OP-FTIR instrument. System characterization includes radiometric performance and spectral resolution. Results from a series of tomographic reconstructions of sulfur hexafluoride plumes in a laboratory setting are also presented.

  14. Imaging open-path Fourier transform infrared spectrometer for 3D cloud profiling

    NASA Astrophysics Data System (ADS)

    Rentz Dupuis, Julia; Mansur, David J.; Engel, James R.; Vaillancourt, Robert; Todd, Lori; Mottus, Kathleen

    2008-04-01

    OPTRA and University of North Carolina are developing an imaging open-path Fourier transform infrared (I-OP-FTIR) spectrometer for 3D profiling of chemical and biological agent simulant plumes released into test ranges and chambers. An array of I-OP-FTIR instruments positioned around the perimeter of the test site, in concert with advanced spectroscopic algorithms, enables real time tomographic reconstruction of the plume. The approach will be considered as a candidate referee measurement for test ranges and chambers. This Small Business Technology Transfer (STTR) effort combines the instrumentation and spectroscopic capabilities of OPTRA, Inc. with the computed tomographic expertise of the University of North Carolina, Chapel Hill. In this paper, we summarize progress to date and overall system performance projections based on the instrument, spectroscopy, and tomographic reconstruction accuracy. We then present a preliminary optical design of the I-OP-FTIR.

  15. Comparison of different techniques in optical trap for generating picokelvin 3D atom cloud in microgravity

    NASA Astrophysics Data System (ADS)

    Yao, Hepeng; Luan, Tian; Li, Chen; Zhang, Yin; Ma, Zhaoyuan; Chen, Xuzong

    2016-01-01

    Pursuing ultralow temperature 3D atom gas under microgravity conditions is one of the popular topics in the field of ultracold research. Many groups around the world are using, or are planning to use, delta-kick cooling (DKC) in microgravity. Our group has also proposed a two-stage crossed beam cooling (TSCBC) method that also provides a path to picokelvin temperatures. In this paper, we compare the characteristics of TSCBC and DKC for producing a picokelvin system in microgravity. Using a direct simulation Monte Carlo (DSMC) method, we simulate the cooling process of 87Rb using the two different cooling techniques. Under the same initial conditions, 87Rb can reach 7 pK in 15 s using TSCBC and 75 pK in 5.1 s with DKC. The simulation results show that TSCBC can reach lower temperatures compared with DKC, but needs more time and a more stable laser.

  16. Trees Detection from Laser Point Clouds Acquired in Dense Urban Areas by a Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Monnier, F.; Vallet, B.; Soheilian, B.

    2012-07-01

    3D reconstruction of trees is of great interest in large-scale 3D city modelling. Laser scanners provide geometrically accurate 3D point clouds that are very useful for object recognition in complex urban scenes. Trees often cause important occlusions on building façades. Their recognition can lead to occlusion maps that are useful for many façade-oriented applications such as vision-based localisation and automatic image tagging. This paper proposes a pipeline to detect trees in point clouds acquired in dense urban areas using only laser information (x, y, z coordinates and intensity). It is based on local geometric descriptors computed for each laser point over a determined neighbourhood. These descriptors describe the local shape of objects around every 3D laser point. A projection of these values onto a 2D horizontal accumulation space, followed by a combination of morphological filters, provides individual tree clusters. The pipeline is evaluated and the results are presented on a set of one million laser points using a manually produced ground truth.

  17. Classification-based scene modeling for urban point clouds

    NASA Astrophysics Data System (ADS)

    Hao, Wen; Wang, Yinghui

    2014-03-01

    The three-dimensional modeling of urban scenes is an important topic with various applications. We present a comprehensive strategy to reconstruct a scene from urban point clouds. First, the urban point clouds are classified into ground points, planar points on the ground, and nonplanar points on the ground by using the support vector machine algorithm, which takes several differential-geometry properties as features. Second, the planar points and nonplanar points on the ground are segmented into patches by using different segmentation methods. A collection of characteristics of the point cloud segments, such as height, size, topological relationship, and the ratio between width and length, is used to extract different objects after removing the unwanted segments. Finally, the buildings, ground, and trees in the scene are reconstructed, resulting in a hybrid model representing the urban scene. Experimental results demonstrate that the proposed method is a robust way to reconstruct a scene from massive point clouds.
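
    A minimal sketch of the first step (classifying points with an SVM from differential-geometry features) is shown below; the feature names, the random training data and the class labels are placeholders standing in for the features and labelled samples the method actually uses.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      # placeholder features per point: [height_above_ground, planarity, verticality]
      # placeholder labels: 0 = ground, 1 = planar-on-ground, 2 = nonplanar-on-ground
      rng = np.random.default_rng(1)
      train_features = rng.random((300, 3))
      train_labels = rng.integers(0, 3, 300)

      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
      clf.fit(train_features, train_labels)

      all_features = rng.random((10000, 3))        # features of the whole cloud
      point_classes = clf.predict(all_features)    # per-point class labels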

  18. Scan Profiles Based Method for Segmentation and Extraction of Planar Objects in Mobile Laser Scanning Point Clouds

    NASA Astrophysics Data System (ADS)

    Nguyen, Hoang Long; Belton, David; Helmholz, Petra

    2016-06-01

    The demand for accurate spatial data has been increasing rapidly in recent years. Mobile laser scanning (MLS) systems have become a mainstream technology for measuring 3D spatial data. In an MLS point cloud, the point density of the captured features can vary: it can be sparse and heterogeneous, or it can be dense. This is caused by several factors such as the speed of the carrier vehicle and the specifications of the laser scanner(s). The MLS point cloud data need to be processed to extract meaningful information; for example, segmentation can be used to find meaningful features (planes, corners, etc.) that serve as inputs for many processing steps (e.g. registration, modelling) that are more difficult when using the raw point cloud alone. Planar features dominate in man-made environments and are widely used in point cloud registration and calibration processes. There are several approaches for the segmentation and extraction of planar objects, but the existing methods do not focus on properly segmenting MLS point clouds automatically while taking the varying point densities into account. This research presents an extension of a segmentation method based on the planarity of the features. The proposed method was verified using both simulated and real MLS point cloud datasets. The results show that planar objects in MLS point clouds can be properly segmented and extracted by the proposed segmentation method.

  19. Automated 3D Motion Tracking using Gabor Filter Bank, Robust Point Matching, and Deformable Models

    PubMed Central

    Wang, Xiaoxu; Chung, Sohae; Metaxas, Dimitris; Axel, Leon

    2013-01-01

    Tagged Magnetic Resonance Imaging (tagged MRI or tMRI) provides a means of directly and noninvasively displaying the internal motion of the myocardium. Reconstruction of the motion field is needed to quantify important clinical information, e.g., the myocardial strain, and detect regional heart functional loss. In this paper, we present a three-step method for this task. First, we use a Gabor filter bank to detect and locate tag intersections in the image frames, based on local phase analysis. Next, we use an improved version of the Robust Point Matching (RPM) method to sparsely track the motion of the myocardium, by establishing a transformation function and a one-to-one correspondence between grid tag intersections in different image frames. In particular, the RPM helps to minimize the impact on the motion tracking result of: 1) through-plane motion, and 2) relatively large deformation and/or relatively small tag spacing. In the final step, a meshless deformable model is initialized using the transformation function computed by RPM. The model refines the motion tracking and generates a dense displacement map, by deforming under the influence of image information, and is constrained by the displacement magnitude to retain its geometric structure. The 2D displacement maps in short and long axis image planes can be combined to drive a 3D deformable model, using the Moving Least Square method, constrained by the minimization of the residual error at tag intersections. The method has been tested on a numerical phantom, as well as on in vivo heart data from normal volunteers and heart disease patients. The experimental results show that the new method has a good performance on both synthetic and real data. Furthermore, the method has been used in an initial clinical study to assess the differences in myocardial strain distributions between heart disease (left ventricular hypertrophy) patients and the normal control group. The final results show that the proposed method

  20. Qualification of Point Clouds Measured by SFM Software

    NASA Astrophysics Data System (ADS)

    Oda, K.; Hattori, S.; Saeki, H.; Takayama, T.; Honma, R.

    2015-05-01

    This paper proposes a qualification method for point clouds created by SfM (Structure-from-Motion) software. Recently, SfM software has become popular for creating point clouds. Point clouds created by SfM software seem to be correct, but in many cases the result does not have the correct scale, or does not have correct coordinates in a reference coordinate system, and in these cases it is hard to evaluate the quality of the point clouds. To evaluate this correctness, we propose to use the difference between point clouds built from different sets of source images. If the shapes of the point clouds built from different image sources are correct, the two shapes should be almost the same. To compare two or more point cloud shapes, the iterative closest point (ICP) algorithm is implemented: transformation parameters (rotation and translation) are iteratively calculated so as to minimize the sum of squared distances. This paper describes the evaluation procedure and some test results.
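
    A minimal point-to-point ICP sketch in the spirit described here is given below, assuming two roughly pre-aligned N x 3 point clouds; production tools add outlier rejection, subsampling and convergence tests that this sketch omits.

      import numpy as np
      from scipy.spatial import cKDTree

      def icp(source, target, iterations=30):
          """Iteratively align `source` to `target`; returns (R, t, rms)."""
          R, t = np.eye(3), np.zeros(3)
          src = source.copy()
          tree = cKDTree(target)
          for _ in range(iterations):
              dist, idx = tree.query(src)              # closest-point correspondences
              matched = target[idx]
              cs, ct = src.mean(axis=0), matched.mean(axis=0)
              H = (src - cs).T @ (matched - ct)        # cross-covariance of the pairs
              U, _, Vt = np.linalg.svd(H)
              D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
              Ri = Vt.T @ D @ U.T                      # incremental rotation (reflection-safe)
              ti = ct - Ri @ cs
              src = src @ Ri.T + ti                    # apply the increment
              R, t = Ri @ R, Ri @ t + ti               # accumulate the total transform
          rms = np.sqrt(np.mean(dist ** 2))
          return R, t, rms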

  1. Precipitation Processes developed during ARM (1997), TOGA COARE (1992), GATE (1974), SCSMEX (1998) and KWAJEX (1999): Consistent 2D and 3D Cloud Resolving Model Simulations

    NASA Technical Reports Server (NTRS)

    Tao, W.-K.; Shie, C.-H.; Simpson, J.; Starr, D.; Johnson, D.; Sud, Y.

    2003-01-01

    Real clouds and cloud systems are inherently three-dimensional (3D). Because of the limitations in computer resources, however, most cloud-resolving models (CRMs) today are still two-dimensional (2D). A few 3D CRMs have been used to study the response of clouds to large-scale forcing. In these 3D simulations, the model domain was small, and the integration time was 6 hours. Only recently have 3D experiments been performed for multi-day periods for tropical cloud systems with large horizontal domains at the National Center for Atmospheric Research. The results indicate that surface precipitation and latent heating profiles are very similar between the 2D and 3D simulations of these same cases. The reason for the strong similarity between the 2D and 3D CRM simulations is that the observed large-scale advective tendencies of potential temperature, water vapor mixing ratio, and horizontal momentum were used as the main forcing in both the 2D and 3D models. Interestingly, the 2D and 3D versions of the CRMs used at CSU and the U.K. Met Office showed significant differences in the rainfall and cloud statistics for three ARM cases. The major objectives of this project are to calculate and examine: (1) the surface energy and water budgets, (2) the precipitation processes in the convective and stratiform regions, (3) the cloud upward and downward mass fluxes in the convective and stratiform regions, (4) cloud characteristics such as size, updraft intensity and lifetime, and (5) the entrainment and detrainment rates associated with clouds and cloud systems that developed in TOGA COARE, GATE, SCSMEX, ARM and KWAJEX. Of special note is that the analyzed (model-generated) data sets are all produced by the same current version of the GCE model, i.e. with consistent model physics and configurations. Trajectory analyses and inert tracer calculations will be conducted to identify the differences and similarities in the organization of convection between simulated 2D and 3D cloud systems.

  2. Analysis of Point Cloud Generation from UAS Images

    NASA Astrophysics Data System (ADS)

    Ostrowski, S.; Jóźków, G.; Toth, C.; Vander Jagt, B.

    2014-11-01

    Unmanned Aerial Systems (UAS) allow for the collection of low-altitude aerial images, along with other geospatial information from a variety of companion sensors. The images can then be processed using sophisticated algorithms from the Computer Vision (CV) field, guided by the traditional and established procedures of photogrammetry. Based on highly overlapped images, new software packages which were specifically developed for UAS technology can easily create ground models, such as Point Clouds (PC), Digital Surface Models (DSM), orthoimages, etc. The goal of this study is to compare the performance of three different software packages, focusing on the accuracy of the 3D products they produce. Using a Nikon D800 camera installed on an octocopter UAS platform, images were collected during subsequent field tests conducted over the Olentangy River, north of the Ohio State University campus. Two areas around bike bridges on the Olentangy River Trail were selected because of the challenge the packages would have in creating accurate products; matching pixels over the river and the dense canopy on the shore presents difficult scenarios to model. Ground Control Points (GCP) were gathered at each site to tie the models to a local coordinate system and to help assess the absolute accuracy of each package. In addition, the models were also compared relative to each other using their PCs.

  3. Orientation of Airborne Laser Scanning Point Clouds with Multi-View, Multi-Scale Image Blocks

    PubMed Central

    Rönnholm, Petri; Hyyppä, Hannu; Hyyppä, Juha; Haggrén, Henrik

    2009-01-01

    Comprehensive 3D modeling of our environment requires the integration of terrestrial and airborne data, which is collected, preferably, using laser scanning and photogrammetric methods. However, the integration of these multi-source data requires accurate relative orientations. In this article, two methods for solving relative orientation problems are presented. The first method includes registration by minimizing the distances between an airborne laser point cloud and a 3D model. The 3D model was derived from photogrammetric measurements and terrestrial laser scanning points. The first method was used as a reference and for validation. Having completed registration in the object space, the relative orientation between images and laser point cloud is known. The second method utilizes an interactive orientation method between a multi-scale image block and a laser point cloud. The multi-scale image block includes both aerial and terrestrial images. Experiments with the multi-scale image block revealed that the accuracy of the relative orientation increased when more images were included in the block. The orientations from the first and second methods were compared. The comparison showed that correct rotations were the most difficult to detect accurately with the interactive method. Because the interactive method forces the laser scanning data to fit the images, inaccurate rotations cause corresponding shifts in the image positions. However, in a test case in which the orientation differences included only shifts, the interactive method could solve the relative orientation of an aerial image and airborne laser scanning data repeatedly within a couple of centimeters. PMID:22454569

  4. Satellite and Surface Data Synergy for Developing a 3D Cloud Structure and Properties Characterization Over the ARM SGP. Stage 1: Cloud Amounts, Optical Depths, and Cloud Heights Reconciliation

    NASA Technical Reports Server (NTRS)

    Genkova, I.; Long, C. N.; Heck, P. W.; Minnis, P.

    2003-01-01

    One of the primary Atmospheric Radiation Measurement (ARM) Program objectives is to obtain measurements applicable to the development of models for a better understanding of radiative processes in the atmosphere. We address this goal by building a three-dimensional (3D) characterization of the cloud structure and properties over the ARM Southern Great Plains (SGP). We take the approach of juxtaposing the cloud properties retrieved from independent satellite and ground-based retrievals and examining the statistics of the cloud field properties. Once these retrievals are well understood, they will be used to populate the 3D characterization database. As a first step, we determine the relationship between surface fractional sky cover and the satellite viewing-angle-dependent cloud fraction (CF). We elaborate on the agreement by intercomparing optical depth (OD) datasets from satellite and ground obtained with available retrieval algorithms, in relation to the CF, cloud height, multi-layer cloud presence, and solar zenith angle (SZA). For the SGP Central Facility, where output from the active remote sensing cloud layer (ARSCL) value-added product (VAP) is available, we study the uncertainty of satellite-estimated cloud heights and evaluate the impact of this uncertainty on radiative studies.

  5. Towards automatic indoor reconstruction of cluttered building rooms from point clouds

    NASA Astrophysics Data System (ADS)

    Previtali, M.; Barazzetti, L.; Brumana, R.; Scaioni, M.

    2014-05-01

    Terrestrial laser scanning is increasingly used in architecture and building engineering for as-built modelling of large and medium-size civil structures. However, raw point clouds derived from a laser scanning survey are generally not directly ready for the generation of such models: a manual modelling phase has to be undertaken to edit and complete the 3D models, which may cover indoor or outdoor environments. This paper presents an automated procedure to turn raw point clouds into semantically enriched models of building interiors. The developed method mainly copes with the geometric complexity typical of indoor scenes, where planar surfaces such as walls, floors and ceilings prevail. A characteristic aspect of indoor modelling is the large amount of clutter and occlusion that may affect the point clouds. For this reason the developed reconstruction pipeline was designed to recover and complete missing parts in a plausible way. The accuracy of the presented method was evaluated against traditional manual modelling and showed comparable results.
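
    Pipelines of this kind typically start by detecting the dominant planar surfaces. A minimal RANSAC plane-detection sketch is given below, assuming an N x 3 NumPy array and a distance threshold; the paper's full pipeline additionally labels the surfaces semantically and completes occluded parts, which is not shown here.

      import numpy as np

      def ransac_plane(points, threshold=0.02, iterations=500, rng=None):
          """Return (normal, d, inlier_mask) of the best-supported plane n.x + d = 0."""
          rng = np.random.default_rng(rng)
          best_inliers = np.zeros(len(points), dtype=bool)
          best_plane = None
          for _ in range(iterations):
              p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
              n = np.cross(p2 - p1, p3 - p1)
              norm = np.linalg.norm(n)
              if norm < 1e-12:
                  continue                              # degenerate (collinear) sample
              n /= norm
              d = -n @ p1
              inliers = np.abs(points @ n + d) < threshold
              if inliers.sum() > best_inliers.sum():
                  best_inliers, best_plane = inliers, (n, d)
          return best_plane[0], best_plane[1], best_inliers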

  6. Hail formation and growth in a 3D cloud model with hail-bin microphysics

    NASA Astrophysics Data System (ADS)

    Guo, Xueliang; Huang, Meiyuan

    The hailstorm of 22 July 1976 in Colorado was studied using a three-dimensional compressible nonhydrostatic cloud model with hail-bin microphysics and parameterized bulk hail microphysics. Results show that observed storm features, such as long-lasting, transient weak-echo vaults and a pronounced forward overhang structure, can be better simulated in the model with hail-bin microphysics. The role of a feeder updraft in forming graupel and transferring it into the main updraft is analyzed using three-dimensional information on hail and graupel locations and the corresponding wind fields from the simulations with hail-bin microphysics. It is found that the formation of a feeder cell with a weaker updraft along the side of the main cell plays two important roles in the formation of hail in the simulated multicellular hailstorm. The first is to efficiently transfer graupel that descended along the edge of the main updraft, or from the massive forward overhang region, back into the main updraft, by preventing the rapid fall of graupel to the surface and by lifting the low-level inflow through which graupel can be advected into the main updraft. The second is to evolve into a daughter cell in which hail from the decaying old cell can continue to grow. Based on this study, the primary role of a feeder cell is to return hail embryos originally formed in the main cell to the main cell, rather than to generate initial hail embryos as proposed by previous studies.

  7. Point cloud vs drawing on archaeological site

    NASA Astrophysics Data System (ADS)

    Alby, E.

    2015-08-01

    Archaeology is a discipline closely related to the representation of the objects that are at the centre of its concerns. At different times in the archaeological method, the representation approach takes different forms. It takes place on the archaeological excavation during the exploration, or later in the warehouse, object after object, and at different drawing scales. The use of topographical positioning techniques has had its place in the stratigraphic process for decades; plans and sections are thus readjusted to each other on the excavation site. These techniques have been available to the archaeologist for a long time, and most of the time a qualified member of the team performs these simple topographical operations himself. The two issues raised in this article are: can three-dimensional acquisition techniques, first, find their place in the same way on the excavation site, and is it conceivable that they could serve to support the representation? The drawing during the excavations is a very time-consuming phase; does it still have its place on site? Currently, drawing is part of the archaeological stratigraphy method. It helps document the different layers, which are gradually destroyed during the exploration. Without systematic documentation, no scientific reasoning can be carried out retrospectively and the conclusions would lack supporting evidence. Is it possible to imagine another way to document these phases without loss compared to drawing? Laser scanning and photogrammetry are established acquisition techniques. What more can they bring to what is already done for archaeologists? Archaeological practice can be seen as divided into two parts: preventive archaeology and classical archaeology. The first has largely adopted the techniques that provide point clouds, to save valuable time on site. Everything that is not destroyed by the archaeological approach will be destroyed by the building construction that triggered the excavations. The

  8. Building a LiDAR point cloud simulator: Testing algorithms for high resolution topographic change

    NASA Astrophysics Data System (ADS)

    Carrea, Dario; Abellán, Antonio; Derron, Marc-Henri; Jaboyedoff, Michel

    2014-05-01

    Terrestrial laser scanning (TLS) is becoming a common tool in geosciences, with applications ranging from the generation of high-resolution 3D models to the monitoring of unstable slopes and the quantification of morphological changes. Nevertheless, like every measurement technique, TLS still has some limitations that are not clearly understood and that affect the accuracy of the dataset (point cloud). A challenge in LiDAR research is to understand the influence of instrumental parameters on measurement errors during LiDAR acquisition. Indeed, several critical parameters affect scan quality at different ranges: the existence of shadow areas, the spatial resolution (point density), the diameter of the laser beam, the incidence angle and the single-point accuracy. The objective of this study is to test the main limitations of different algorithms usually applied to point cloud data treatment, from alignment to monitoring. To this end, we built a LiDAR point cloud simulator in the MATLAB environment able to recreate the multiple sources of error related to instrumental settings that we normally observe in real datasets. In a first step we characterized the error from a single laser pulse by modelling the influence of range and incidence angle on single-point accuracy. In a second step, we simulated the scanning part of the system in order to analyse the shifting and angular error effects. Other parameters have been added to the point cloud simulator, such as point spacing, acquisition window, etc., in order to create point clouds of simple and/or complex geometries. We tested the influence of point density and varying point of view on the Iterative Closest Point (ICP) alignment and also on some deformation tracking algorithms with the same point cloud geometry, in order to determine alignment and deformation detection thresholds. We also generated a series of high resolution point clouds in order to model small changes on different environments

  9. 3D hybrid simulations of the interaction of a magnetic cloud with a bow shock

    NASA Astrophysics Data System (ADS)

    Turc, L.; Fontaine, D.; Savoini, P.; Modolo, R.

    2015-08-01

    In this paper, we investigate the interaction of a magnetic cloud (MC) with a planetary bow shock using hybrid simulations. To our knowledge, this is the first time that this interaction has been studied using kinetic simulations that self-consistently include both the ion foreshock and the shock wave dynamics. We show that when the shock is in a quasi-perpendicular configuration, the MC's magnetic structure in the magnetosheath remains similar to that in the solar wind, whereas it is strongly altered downstream of a quasi-parallel shock. The latter can result in a reversal of the north-south component of the magnetic field in some parts of the magnetosheath. We also investigate how the MC affects in turn the outer parts of the planetary environment, i.e., from the foreshock to the magnetopause. We find the following: (i) The decrease of the Alfvén Mach number at the MC's arrival causes an attenuation of the foreshock region because of the weakening of the bow shock. (ii) The foreshock moves along the bow shock's surface, following the rotation of the MC's magnetic field. (iii) Owing to the low plasma beta, asymmetric flows arise inside the magnetosheath because the magnetic tension force accelerates particles in some parts of the magnetosheath and slows them down in others. (iv) The quasi-parallel region forms a depression in the shock's surface. Other deformations of the magnetopause and the bow shock are also highlighted. All these effects can contribute to significantly modifying the solar wind/magnetosphere coupling during MC events.

  10. A 3-D numerical study of pinhole diffraction to predict the accuracy of EUV point diffraction interferometry

    SciTech Connect

    Goldberg, K.A.; Tejnil, E.; Bokor, J.

    1995-12-01

    A 3-D electromagnetic field simulation is used to model the propagation of extreme ultraviolet (EUV), 13-nm, light through sub-1500 Å diameter pinholes in a highly absorptive medium. Deviations of the diffracted wavefront phase from an ideal sphere are studied within 0.1 numerical aperture, to predict the accuracy of EUV point diffraction interferometers used in at-wavelength testing of nearly diffraction-limited EUV optical systems. Aberration magnitudes are studied for various 3-D pinhole models, including cylindrical and conical pinhole bores.

  11. Automatic Procedure for the Registration of Thermographic Images with Point Clouds

    NASA Astrophysics Data System (ADS)

    Lagüela, S.; Armesto, J.; Arias, P.; Zakhor, A.

    2012-07-01

    This paper presents a procedure for the automatic registration of thermographies with laser scanning point clouds. Given the heterogeneous nature of the two modalities, we propose a feature-based approach, satisfying the requisite that extracted features have to be invariant not only to rotation, translation and scale but also to changes in illumination and dimensionality. As speed and minimum operator interaction are prerequisites for the viability of the process in the building industry, our automatic registration procedure includes automatic feature extraction with no human intervention. With this aim, a line segment detector is used to extract 2D lines from thermographies, and 3D lines are extracted through segmentation of the point cloud. Feature-matching and the relative pose between thermographies and point cloud are obtained from an iterative procedure applied to detect and reject outliers; this includes rotation matrix and translation vector calculation and the application of the RANSAC algorithm to find a consistent set of matches. An automatically textured thermographic 3D model is the expected result of these procedures once the point cloud is filtered and triangulated.

  12. Novel volumetric 3D display based on point light source optical reconstruction using multi focal lens array

    NASA Astrophysics Data System (ADS)

    Lee, Jin su; Lee, Mu young; Kim, Jun oh; Kim, Cheol joong; Won, Yong Hyub

    2015-03-01

    Generally, volumetric 3D displays produce volume-filling three-dimensional images. This paper discusses a volumetric 3D display based on the reconstruction of periodic point light sources (PLSs) using a multi-focal lens array (MFLA). The voxels of discrete 3D images are formed in the air via the construction of point light sources produced by the multi-focal lens array. The system consists of a parallel beam, a spatial light modulator (SLM), a lens array, and a polarizing filter. The multi-focal lens array is fabricated by controlling UV-adhesive polymer droplets with a dispensing machine. The MFLA consists of a 20 × 20 circular lens array, and each lens aperture of the MFLA measures about 300 µm on average. The polarizing filter is placed after the SLM and the MFLA to operate the SLM in a phase-mostly mode. According to the point spread function, the PLSs of the system are located at the focal length of each lens of the MFLA. The display also provides motion parallax and relatively high resolution; however, the viewing angle is limited and crosstalk arises from the properties of the individual lenses. In our experiment, we present the letters 'C', 'O', 'DE' and a ball's surface at different depth locations, which could be seen clearly as the CCD camera was moved along the transverse axis of the display system. From our results, we expect that varifocal lenses such as EWOD and LC lenses can be applied to real-time volumetric 3D display systems.

  13. Accuracy Assessment of Mobile Mapping Point Clouds Using the Existing Environment as Terrestrial Reference

    NASA Astrophysics Data System (ADS)

    Hofmann, S.; Brenner, C.

    2016-06-01

    Mobile mapping data is widely used in various applications, which makes it especially important for data users to get a statistically verified quality statement on the geometric accuracy of the acquired point clouds or their processed products. The accuracy of point clouds can be divided into an absolute and a relative quality, where the absolute quality describes the position of the point cloud in a world coordinate system such as WGS84 or UTM, whereas the relative accuracy describes the accuracy within the point cloud itself. Furthermore, the quality of processed products such as segmented features depends on the global accuracy of the point cloud but mainly on the quality of the processing steps. Several data sources with different characteristics and quality can be considered as potential reference data, such as cadastral maps, orthophotos, artificial control objects or terrestrial surveys using a total station. In this work a test field in a selected residential area was acquired as reference data in a terrestrial survey using a total station. In order to reach high accuracy, the stationing of the total station was based on a newly established geodetic network with a local accuracy of less than 3 mm. The global position of the network was determined using a long-term GNSS survey reaching an accuracy of 8 mm. Based on this geodetic network, a 3D test field with facades and street profiles was measured with a total station, each point with a two-dimensional position and altitude. In addition, the surfaces of poles of street lights, traffic signs and trees were acquired using the scanning mode of the total station. By comparing this reference data to the mobile mapping point clouds acquired in several measurement campaigns, a detailed quality statement on the accuracy of the point cloud data is made. Additionally, the advantages and disadvantages of the described reference data source concerning availability, cost, accuracy and applicability are discussed.

  14. Scan Line Based Road Marking Extraction from Mobile LiDAR Point Clouds.

    PubMed

    Yan, Li; Liu, Hua; Tan, Junxiang; Li, Zan; Xie, Hong; Chen, Changjun

    2016-01-01

    Mobile Mapping Technology (MMT) is one of the most important 3D spatial data acquisition technologies. State-of-the-art mobile mapping systems equipped with laser scanners, known as Mobile LiDAR Scanning (MLS) systems, have been widely used in a variety of areas, especially in road mapping and road inventory. With the commercialization of Advanced Driving Assistance Systems (ADASs) and self-driving technology, there will be a great demand for lane-level detailed 3D maps, and MLS is the most promising technology to generate them. Road markings and road edges are necessary information for creating such lane-level detailed 3D maps. This paper proposes a scan line based method to extract road markings from mobile LiDAR point clouds in three steps: (1) preprocessing; (2) road points extraction; (3) road markings extraction and refinement. In the preprocessing step, isolated LiDAR points in the air are removed from the LiDAR point clouds and the point clouds are organized into scan lines. In the road points extraction step, seed road points are first extracted by the Height Difference (HD) between trajectory data and the road surface, then the full set of road points is extracted from the point clouds by moving least squares line fitting. In the road markings extraction and refinement step, the intensity values of road points in a scan line are first smoothed by a dynamic window median filter to suppress intensity noise, then road markings are extracted by the Edge Detection and Edge Constraint (EDEC) method, and Fake Road Marking Points (FRMPs) are eliminated from the detected road markings by a segment- and dimensionality-feature-based refinement. The performance of the proposed method is evaluated on three data samples, and the experimental results indicate that road points are well extracted from MLS data and road markings are well extracted from road points by the applied method. A quantitative study shows that the proposed method achieves an average
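
    The intensity-smoothing and edge-bracketing idea behind the road-marking extraction step can be illustrated with a short sketch. The following Python fragment is a simplified stand-in for the dynamic-window median filter and the EDEC method described above, not the authors' implementation; the fixed window size and the `jump` threshold are hypothetical parameters.

```python
import numpy as np

def median_smooth_intensity(intensity, half_window=5):
    """Sliding median filter over the intensity values of one scan line.

    Simplified stand-in for the dynamic-window median filter: here the
    window size is fixed instead of adapting to the point spacing.
    """
    n = len(intensity)
    smoothed = np.empty(n, dtype=float)
    for i in range(n):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        smoothed[i] = np.median(intensity[lo:hi])
    return smoothed

def marking_candidates(intensity, jump=15.0):
    """Flag points lying between a rising and a falling intensity edge.

    Road paint is more reflective than asphalt, so a marking appears as a
    bright plateau bracketed by two opposite intensity jumps; 'jump' is a
    hypothetical reflectance-difference threshold.
    """
    smoothed = median_smooth_intensity(intensity)
    diff = np.diff(smoothed)
    rising = np.where(diff > jump)[0]
    falling = np.where(diff < -jump)[0]
    mask = np.zeros(len(intensity), dtype=bool)
    for r in rising:
        later = falling[falling > r]
        if later.size:
            mask[r + 1:later[0] + 1] = True
    return mask
```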

  15. Examining In-Cloud Convective Turbulence in Relation to Total Lightning and the 3D Wind Field of Severe Thunderstorms

    NASA Astrophysics Data System (ADS)

    Al-Momar, S. A.; Deierling, W.; Williams, J. K.; Hoffman, E. G.

    2014-12-01

    Convectively induced turbulence (CIT) is commonly listed as a cause or factor in weather-related commercial aviation accidents. In-cloud CIT is generated in part by shear between convective updrafts and downdrafts. Total lightning is also dependent on a robust updraft and the resulting storm electrification. The relationship between total lightning and turbulence could prove useful in operational aviation settings with the use of future measurements from the Geostationary Lightning Mapper (GLM) onboard the GOES-R satellite. Providing nearly hemispheric coverage of total lightning, the GLM could help identify CIT in otherwise data-sparse locations. For a severe thunderstorm case on 7 June 2012 in northeast Colorado, in-cloud eddy dissipation rate estimates from the NCAR/NEXRAD Turbulence Detection Algorithm were compared with cloud electrification data from the Colorado Lightning Mapping Array and radar products from the Denver, Colorado WSR-88D. These comparisons showed that high concentrations of very high frequency (VHF) source densities emitted by lightning occurred near and downstream of the storm's convective core. Severe turbulence was also shown to occur near this area, extending to near the melting level of the storm and spreading upward and outward. Additionally, increases and decreases in VHF sources and turbulence volumes occurred within a few minutes of each other, although light turbulence was shown to increase near one storm's dissipation. This may be due to increased shear from the now downdraft-dominated storm. The 3D wind field from this case, obtained by either a dual-Doppler or a Variational Doppler Radar Assimilation System (VDRAS) analysis, will also be examined to further study the relationships between total lightning and thunderstorm kinematics. If these results prove to be robust, lightning may serve as a strong indicator of the location of moderate or greater turbulence.

  16. Automatic Registration of Approximately Leveled Point Clouds of Urban Scenes

    NASA Astrophysics Data System (ADS)

    Moussa, A.; Elsheimy, N.

    2015-08-01

    Registration of point clouds is a necessary step to obtain a complete overview of scanned objects of interest. The majority of current registration approaches target the general case, where the full registration parameter search space is assumed and searched. It is very common when scanning urban objects to have leveled point clouds with small roll and pitch angles and small height differences. For such scenarios the registration search problem can be handled faster to obtain a coarse registration of two point clouds. In this paper, a fully automatic approach is proposed for the registration of approximately leveled point clouds. The proposed approach estimates a coarse registration based on three registration parameters and then conducts a fine registration step using the iterative closest point (ICP) approach. The approach has been tested on three data sets of different areas and the achieved registration results validate the significance of the proposed approach.
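
    As a rough illustration of the two-stage idea, the sketch below builds a rigid transform from the three parameters relevant for approximately leveled scans (a planimetric shift and a heading angle) and refines it with point-to-point ICP. It assumes the Open3D library and is not the authors' implementation; the coarse parameters and the correspondence threshold are placeholders.

```python
import numpy as np
import open3d as o3d  # assumed third-party library providing ICP

def leveled_transform(tx, ty, yaw):
    """4x4 rigid transform built from the three parameters that matter for
    approximately leveled scans: a planimetric shift and a heading angle."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    T[0, 3], T[1, 3] = tx, ty
    return T

def register_leveled(source, target, coarse_params, threshold=0.5):
    """Refine a 3-parameter coarse alignment with point-to-point ICP.

    source/target are open3d.geometry.PointCloud objects; 'threshold' is the
    maximum correspondence distance (a placeholder value in metres).
    """
    init = leveled_transform(*coarse_params)
    result = o3d.pipelines.registration.registration_icp(
        source, target, threshold, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation
```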

  17. Urban Road Detection in Airborne Laser Scanning Point Cloud Using Random Forest Algorithm

    NASA Astrophysics Data System (ADS)

    Kaczałek, B.; Borkowski, A.

    2016-06-01

    The objective of this research is to detect points that describe a road surface in an unclassified point cloud from airborne laser scanning (ALS). For this purpose we use the Random Forest learning algorithm. The proposed methodology consists of two stages: preparation of features and supervised point cloud classification. In this approach we consider only ALS points representing the last echo. For these points the RGB values, intensity, normal vectors, their mean values and standard deviations are provided. Moreover, local and global height variations are taken into account as components of the feature vector. The feature vectors are calculated on the basis of a 3D Delaunay triangulation. The proposed methodology was tested on point clouds with an average point density of 12 pts/m² that represent a large urban scene. A significance level of 15% was set for the decision trees of the learning algorithm. As a result of the Random Forest classification we obtained two subsets of ALS points, one of which represents points belonging to the road network. The classification evaluation showed an overall classification accuracy of about 90%. Finally, the ALS points representing roads were merged and simplified into road network polylines using morphological operations.
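
    A minimal sketch of the supervised classification stage, assuming scikit-learn and pre-computed per-point feature vectors (the file names and the road/non-road label encoding are hypothetical, not from the paper):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical inputs: one row per last-echo ALS point, with columns such as
# RGB, intensity, normal vector components and local/global height variation.
X_train = np.load("train_features.npy")   # placeholder file names
y_train = np.load("train_labels.npy")     # e.g. 1 = road surface, 0 = other
X_test = np.load("test_features.npy")

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
road_mask = clf.predict(X_test) == 1      # boolean mask of predicted road points
```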

  18. Interior Reconstruction Using the 3d Hough Transform

    NASA Astrophysics Data System (ADS)

    Dumitru, R.-C.; Borrmann, D.; Nüchter, A.

    2013-02-01

    Laser scanners are often used to create accurate 3D models of buildings for civil engineering purposes, but the process of manually vectorizing a 3D point cloud is time consuming and error-prone (Adan and Huber, 2011). Therefore, the need arises to characterize and quantify complex environments in an automatic fashion, posing challenges for data analysis. This paper presents a system for 3D modelling that detects planes in 3D point clouds, based on which the scene is reconstructed at a high architectural level by automatically removing clutter and foreground data. The implemented software detects openings, such as windows and doors, and completes the 3D model by inpainting.
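
    To make the plane-detection step concrete, here is a brute-force accumulator sketch of the 3D Hough transform idea in Python with NumPy. It is only a didactic simplification (practical implementations use randomized or hierarchical variants for efficiency), and the bin counts and step size are arbitrary placeholders.

```python
import numpy as np

def hough_dominant_plane(points, n_theta=45, n_phi=90, rho_step=0.05):
    """Brute-force accumulator voting for the dominant plane in a point set.

    A plane is parameterised by a unit normal (theta, phi) and its signed
    distance rho to the origin; every point votes, for every normal cell,
    for the rho bin it would imply.  Note the (N x cells) distance matrix:
    fine for small demos, too large for full building scans.
    """
    thetas = np.linspace(0.0, np.pi, n_theta)
    phis = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    t, p = np.meshgrid(thetas, phis, indexing="ij")
    normals = np.stack([np.sin(t) * np.cos(p),
                        np.sin(t) * np.sin(p),
                        np.cos(t)], axis=-1).reshape(-1, 3)
    rho = points @ normals.T                       # (N, cells) signed distances
    idx = np.round(rho / rho_step).astype(int)
    offset = idx.min()
    idx -= offset                                  # shift bins to start at zero
    acc = np.zeros((normals.shape[0], idx.max() + 1), dtype=int)
    cell_idx = np.broadcast_to(np.arange(normals.shape[0]), idx.shape)
    np.add.at(acc, (cell_idx, idx), 1)             # accumulate the votes
    best_cell, best_bin = np.unravel_index(acc.argmax(), acc.shape)
    return normals[best_cell], (best_bin + offset) * rho_step
```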

  19. Lidar point cloud representation of canopy structure for biomass estimation

    NASA Astrophysics Data System (ADS)

    Neuenschwander, A. L.; Krofcheck, D. J.; Litvak, M. E.

    2014-12-01

    Laser mapping systems (lidar) have become an essential remote sensing tool for determining local and regional estimates of biomass. Lidar data (possibly in conjunction with optical imagery) can be used to segment the landscape into either individual trees or clusters of trees. Canopy characteristics (e.g. maximum and mean height) for a segmented tree are typically derived from a rasterized canopy height model (CHM) and subsequently used in a regression model to estimate biomass. The process of rasterizing the lidar point cloud into a CHM, however, reduces the amount of information about the tree structure. Here, we compute statistics for each segmented tree from the raw lidar point cloud rather than a rasterized CHM. Working directly from the lidar point cloud enables a more accurate representation of the canopy structure. Biomass estimates from the point cloud method are compared against biomass estimates derived from a CHM for a juniper savanna in New Mexico.
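
    The per-tree statistics described above can be computed directly from the point cloud once each return carries a segment label. A minimal NumPy sketch follows; the segmentation step that produces `labels` is assumed to exist already, and the specific statistics are illustrative rather than those used in the study.

```python
import numpy as np

def tree_canopy_stats(points, labels):
    """Per-tree canopy statistics computed straight from the lidar returns.

    points : (N, 3) array of x, y, z coordinates of the returns.
    labels : (N,) segment id per point, produced by a (hypothetical)
             tree-segmentation step; statistics are aggregated per id.
    """
    stats = {}
    for tree_id in np.unique(labels):
        z = points[labels == tree_id, 2]
        stats[tree_id] = {
            "max_height": float(z.max()),
            "mean_height": float(z.mean()),
            "p90_height": float(np.percentile(z, 90)),
        }
    return stats
```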

  20. Ice Formation in Arctic Mixed-Phase Clouds: Insights from a 3-D Cloud-Resolving Model with Size-Resolved Aerosol and Cloud Microphysics

    SciTech Connect

    Fan, Jiwen; Ovtchinnikov, Mikhail; Comstock, Jennifer M.; McFarlane, Sally A.; Khain, Alexander

    2009-02-27

    The single-layer mixed-phase clouds observed during the Atmospheric Radiation Measurement (ARM) program's Mixed-Phase Arctic Cloud Experiment (MPACE) are simulated with a three-dimensional cloud-resolving model, the System for Atmospheric Modeling (SAM), coupled with an explicit bin microphysics scheme and a radar-lidar simulator. Two possible ice enhancement mechanisms - activation of droplet evaporation residues by condensation followed by freezing, and droplet freezing by inside-out contact freezing - are scrutinized through extensive comparisons with aircraft, radar and lidar measurements. The locations of ice initiation associated with each mechanism and the role of ice nuclei (IN) in the evolution of mixed-phase clouds are the main focus. Simulations with either mechanism agree well with the in-situ and remote sensing measurements of ice microphysical properties, but liquid water content is slightly underpredicted. The two mechanisms give very similar cloud microphysical, macrophysical, dynamical, and radiative properties, although the ice nucleation properties (rate, frequency and location) are completely different. Ice nucleation from the activation of evaporation nuclei is most efficient near cloud top, concentrated on the edges of updrafts, while ice initiation from the drop freezing process has no significant location preference (it occurs anywhere that droplet evaporation is significant). Both enhanced nucleation mechanisms contribute dramatically to ice formation, with ice particle concentrations 10-15 times higher than in the simulation without either of them. The contribution of IN recycling from ice particle evaporation to the IN and ice particle concentrations is found to be very significant in this case. The cloud can be very sensitive to IN initially, forming a non-equilibrium transition state, but becomes much less sensitive as it evolves to a steady mixed-phase condition. The parameterization of Meyers et al. [1992] with the observed

  1. Observation of superconductivity induced by a point contact on 3D Dirac semimetal Cd3 As2 crystals

    NASA Astrophysics Data System (ADS)

    Wang, He; Wang, Huichao; Liu, Haiwen; Lu, Hong; Yang, Wuhao; Jia, Shuang; Liu, Xiongjun; Xie, Xincheng; Wei, Jian; Wang, Jian

    The 3D Dirac semimetal state is located at the topological phase boundary and can potentially be driven into other topological phases, including topological insulator, topological metal and the long-sought topological superconductor states. Crystalline Cd3As2 has been proposed and proven to be one of the 3D Dirac semimetals that are stable in atmosphere. By precisely controlled point contact (PC) measurements, we observe exotic superconductivity in the vicinity of the point contact region on the surface of a Cd3As2 crystal, which might be induced by the local pressure in the out-of-plane direction from the metallic tip used for the PC. The observation of a zero bias conductance peak (ZBCP) and double conductance peaks (DCPs) symmetric about zero bias further reveals p-wave-like unconventional superconductivity in Cd3As2. Considering the special topological properties of the 3D Dirac semimetal, our findings may indicate that the Cd3As2 crystal under certain conditions is a candidate topological superconductor, which is predicted to support Majorana zero modes or gapless Majorana edge/surface modes on the boundary, depending on the dimensionality of the material. This work was financially supported by the National Basic Research Program of China (Grant No. 2012CB927400).

  2. 3D modelling of the early martian climate under a denser CO2 atmosphere: Temperatures and CO2 ice clouds

    NASA Astrophysics Data System (ADS)

    Forget, F.; Wordsworth, R.; Millour, E.; Madeleine, J.-B.; Kerber, L.; Leconte, J.; Marcq, E.; Haberle, R. M.

    2013-01-01

    On the basis of geological evidence, it is often stated that the early martian climate was warm enough for liquid water to flow on the surface thanks to the greenhouse effect of a thick atmosphere. We present 3D global climate simulations of the early martian climate performed assuming a faint young Sun and a CO2 atmosphere with surface pressure between 0.1 and 7 bars. The model includes a detailed radiative transfer model using revised CO2 gas collision-induced absorption properties, and a parameterisation of the CO2 ice cloud microphysical and radiative properties. A wide range of possible climates is explored using various values of obliquity, orbital parameters, cloud microphysics parameters, atmospheric dust loading, and surface properties. Unlike on present-day Mars, for pressures higher than a fraction of a bar, surface temperatures vary with altitude because of the adiabatic cooling and warming of the atmosphere as it moves vertically. In most simulations, CO2 ice clouds cover a major part of the planet. Previous studies had suggested that they could have warmed the planet thanks to their scattering greenhouse effect. However, even assuming parameters that maximize this effect, it does not exceed +15 K. Combined with the revised CO2 spectroscopy and the impact of surface CO2 ice on the planetary albedo, we find that a CO2 atmosphere could not have raised the annual mean temperature above 0 °C anywhere on the planet. The collapse of the atmosphere into permanent CO2 ice caps is predicted for pressures higher than 3 bar, or conversely at pressures lower than 1 bar if the obliquity is low enough. Summertime diurnal mean surface temperatures above 0 °C (a condition which could have allowed rivers and lakes to form) are predicted for obliquities larger than 40° at high latitudes, but not in locations where most valley networks or layered sedimentary units are observed. In the absence of other warming mechanisms, our climate model results are thus consistent

  3. Effects of cyclone diameter on performance of 1D3D cyclones: Cut point and slope

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Cyclones are a commonly used air pollution abatement device for separating particulate matter (PM) from air streams in industrial processes. Several mathematical models have been proposed to predict the cut point of cyclones as cyclone diameter varies. The objective of this research was to determine...

  4. Segmentation and Crown Parameter Extraction of Individual Trees in AN Airborne Tomosar Point Cloud

    NASA Astrophysics Data System (ADS)

    Shahzad, M.; Schmitt, M.; Zhu, X. X.

    2015-03-01

    The analysis of individual trees is an important field of research in the forest remote sensing community. While the current state of the art mostly focuses on the exploitation of optical imagery and airborne LiDAR data, modern SAR sensors have not yet met the interest of the research community in that regard. This paper describes how several critical parameters of individual deciduous trees can be extracted from airborne multi-aspect TomoSAR point clouds: first, the point cloud is segmented by unsupervised mean shift clustering; then ellipsoid models are fitted to the points of each cluster; finally, from these 3D ellipsoids the geometrical tree parameters location, height and crown radius are extracted. Evaluation with respect to a manually derived reference dataset proves that almost 86% of all trees are localized, thus providing a promising perspective for further research towards individual tree recognition from SAR data.

  5. Using DOE-ARM and Space-Based Assets to Assess the Quality of Air Force Weather 3D Cloud Analysis and Forecast Products

    NASA Astrophysics Data System (ADS)

    Nobis, T. E.

    2015-12-01

    Air Force Weather (AFW) has documented requirements for global cloud analysis and forecasting to support DoD missions around the world. To meet these needs, AFW utilizes a number of cloud products. Cloud analyses are constructed using 17 different near-real-time satellite sources. Products include analyses of the individual satellite transmissions at native satellite resolution and an hourly global merge of all 17 sources on a 24 km grid. AFW has also recently started creating a time-delayed global cloud reanalysis to produce a 'best possible' analysis for climatology and verification purposes. Forecast cloud products include global short-range cloud forecasts created using advection techniques as well as statistically post-processed cloud forecast products derived from various global and regional numerical weather forecast models. All of these cloud products cover different spatial and temporal resolutions and are produced on a number of different grid projections. The longer-term vision of AFW is to consolidate these various approaches into a uniform global numerical weather modeling (NWM) system, using advanced cloudy-data assimilation processes to construct the analysis and a licensed version of UKMO's Unified Model to produce the various cloud forecast products. In preparation for this evolution in cloud modeling support, AFW has started to aggressively benchmark the performance of its current capabilities. Cloud information collected from so-called 'active' sensors on the ground at the DOE-ARM sites and from space by such instruments as CloudSat, CALIPSO and CATS is being utilized to characterize the performance of AFW products derived largely by passive means. The goal is to understand the performance of today's 3D cloud analysis and forecast products to help shape the requirements and standards for the future NWM-driven system. This presentation will present selected results from these benchmarking efforts and highlight insights and observations

  6. Tunnel Point Cloud Filtering Method Based on Elliptic Cylindrical Model

    NASA Astrophysics Data System (ADS)

    Zhu, Ningning; Jia, Yonghong; Luo, Lun

    2016-06-01

    The large number of bolts and screws attached to the subway shield ring plates, along with the many metal supports and electrical equipment accessories mounted on the tunnel walls, cause the laser point cloud data to include many non-tunnel-section points (hereinafter referred to as non-points), affecting the accuracy of modelling and deformation monitoring. This paper proposes a filtering method for the point cloud based on an elliptic cylindrical model. The original laser point cloud data is first projected onto a horizontal plane, and a search algorithm is used to extract the edge points of both sides, which are then used to fit the tunnel central axis. Along the axis the point cloud is segmented regionally and then fitted to a smooth elliptic cylindrical surface by iteration. This processing enables the automatic filtering of the non-points on the inner wall. Two groups of experiments showed consistent results: the elliptic cylindrical model based method can effectively filter out the non-points and meet the accuracy requirements for subway deformation monitoring. The method provides a new approach to the periodic monitoring of the all-around deformation of tunnel sections during routine subway operation and maintenance.

  7. Fast and Robust Segmentation and Classification for Change Detection in Urban Point Clouds

    NASA Astrophysics Data System (ADS)

    Roynard, X.; Deschaud, J.-E.; Goulette, F.

    2016-06-01

    Change detection is an important issue in city monitoring, used to analyse street furniture, road works, car parking, etc. For example, parking surveys are needed but are currently a laborious task involving sending operators into the streets to identify changes in car locations. In this paper, we propose a method that performs fast and robust segmentation and classification of urban point clouds, which can be used for change detection. We apply this method to detect cars, as a particular object class, in order to perform parking surveys automatically. A recently proposed method already addresses the need for fast segmentation and classification of urban point clouds using elevation images. The appeal of working on images is that processing is much faster, proven and robust. However, there may be a loss of information in complex 3D cases: for example, when objects are one above the other, typically a car under a tree or a pedestrian under a balcony. In this paper we propose a method that retains the three-dimensional information while preserving fast computation times and improving segmentation and classification accuracy. It is based on fast region growing using an octree for the segmentation, and on specific descriptors with a Random Forest for the classification. Experiments have been performed on large urban point clouds acquired by Mobile Laser Scanning. They show that the method is as fast as the state of the art, and that it gives more robust results in complex 3D cases.

  8. Design point variation of 3-D loss and deviation for axial compressor middle stages

    NASA Technical Reports Server (NTRS)

    Roberts, William B.; Serovy, George K.; Sandercock, Donald M.

    1988-01-01

    The available data on middle-stage research compressors operating near design point are used to derive simple empirical models for the spanwise variation of three-dimensional viscous loss coefficients for middle-stage axial compressor blading. The models make it possible to quickly estimate the total loss and deviation across the blade span when the three-dimensional distribution is superimposed on the two-dimensional variation calculated for each blade element. It is noted that extrapolated estimates should be used with caution since the correlations have been derived from a limited data base.

  9. An exact solution for the 3D MHD stagnation-point flow of a micropolar fluid

    NASA Astrophysics Data System (ADS)

    Borrelli, A.; Giantesio, G.; Patria, M. C.

    2015-01-01

    The influence of a non-uniform external magnetic field on the steady three dimensional stagnation-point flow of a micropolar fluid over a rigid uncharged dielectric at rest is studied. The total magnetic field is parallel to the velocity at infinity. It is proved that this flow is possible only in the axisymmetric case. The governing nonlinear partial differential equations are reduced to a system of ordinary differential equations by a similarity transformation, before being solved numerically. The effects of the governing parameters on the fluid flow and on the magnetic field are illustrated graphically and discussed.

  10. Real-time estimation of FLE statistics for 3-D tracking with point-based registration.

    PubMed

    Wiles, Andrew D; Peters, Terry M

    2009-09-01

    Target registration error (TRE) has become a widely accepted error metric in point-based registration since the metric was introduced in the 1990s. It is particularly prominent in image-guided surgery (IGS) applications, where point-based registration is used in both image registration and optical tracking. In point-based registration, the TRE is a function of the fiducial marker geometry, the location of the target and the fiducial localizer error (FLE). While the first two items are easily obtained, the FLE is usually estimated using an a priori technique and applied without any knowledge of real-time information. However, if the FLE can be estimated in real time, particularly as it pertains to optical tracking, then the TRE can be estimated more robustly. In this paper, a method is presented in which the FLE statistics are estimated from the latest measurement of the fiducial registration error (FRE) statistics. The solution is obtained by solving a linear system of equations of the form Ax = b for each marker at each time frame, where x contains the six independent FLE covariance parameters and b the six independent estimated FRE covariance parameters. The A matrix is only a function of the tool geometry, so its inverse can be computed a priori and applied whenever an FLE estimate is required, minimizing the computation at each frame. When using a good estimate of the FRE statistics, Monte Carlo simulations demonstrate that the root mean square of the FLE can be computed within a range of 70-90 µm. Robust estimation of the TRE for an optically tracked tool, using a good estimate of the FLE, will provide two enhancements in IGS. First, better patient-to-image registration will be obtained by using the TRE of the optical tool as a weighting factor in the point-based registration used to map the patient to image space. Second, the directionality of the TRE can be relayed back to the surgeon giving the surgeon the option
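
    Since A depends only on the tool geometry, the per-frame work reduces to a single precomputed solve. A minimal NumPy sketch of that step (the variable names are illustrative, not taken from the paper):

```python
import numpy as np

def estimate_fle_covariance(A, fre_cov_params):
    """Recover the six independent FLE covariance parameters x from the six
    estimated FRE covariance parameters b by solving A x = b.

    A depends only on the tool geometry, so in a real-time tracker its
    inverse (or a factorisation) would be precomputed once per tool and
    reused at every frame; it is solved directly here for brevity.
    """
    return np.linalg.solve(A, np.asarray(fre_cov_params, dtype=float))
```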

  11. Simulations of 3D Magnetic Merging: Resistive Scalings for Null Point and QSL Reconnection

    NASA Astrophysics Data System (ADS)

    Effenberger, Frederic; Craig, I. J. D.

    2016-01-01

    Starting from an exact, steady-state, force-free solution of the magnetohydrodynamic (MHD) equations, we investigate how resistive current layers are induced by perturbing line-tied three-dimensional magnetic equilibria. This is achieved by the superposition of a weak perturbation field in the domain, in contrast to studies where the boundary is driven by slow motions, like those present in photospheric active regions. Our aim is to quantify how the current structures are altered by the contribution of so-called quasi-separatrix layers (QSLs) as the null point is shifted outside the computational domain. Previous studies based on magneto-frictional relaxation have indicated that despite the severe field line gradients of the QSL, the presence of a null is vital in maintaining fast reconnection. Here, we explore this notion using highly resolved simulations of the full MHD evolution. We show that for the null-point configuration, the resistive scaling of the peak current density is close to J ~ η^{-1}, while the scaling is much weaker, i.e. J ~ η^{-0.4}, when only the QSL connectivity gradients provide a site for the current accumulation.

  12. Hinode observations and 3D magnetic structure of an X-ray bright point

    NASA Astrophysics Data System (ADS)

    Alexander, C. E.; Del Zanna, G.; Maclean, R. C.

    2011-02-01

    Aims: We present complete Hinode Solar Optical Telescope (SOT), X-Ray Telescope (XRT) and EUV Imaging Spectrometer (EIS) observations of an X-ray bright point (XBP) observed on 10-11 October 2007 over its entire lifetime (~12 h). We aim to show how the measured plasma parameters of the XBP change over time and what similarities the X-ray emission has to a potential magnetic field model. Methods: Information from all three instruments on board Hinode was used to study its entire evolution. XRT data were used to investigate the structure of the bright point and to measure the X-ray emission. The EIS instrument was used to measure various plasma parameters over the entire lifetime of the XBP. Lastly, the SOT was used to measure the magnetic field strength and provide a basis for potential field extrapolations of the photospheric fields. These were performed and then compared to the observed coronal features. Results: The XBP measured ~15″ in size and was found to be formed directly above an area of merging and cancelling magnetic flux on the photosphere. A good correlation between the rate of X-ray emission and the decrease in total magnetic flux was found. The magnetic fragments of the XBP were found to vary on very short timescales (minutes); however, the global quasi-bipolar structure remained throughout the lifetime of the XBP. The potential field extrapolations were a good visual fit to the observed coronal loops in most cases, meaning that the magnetic field was not far from a potential state. Electron density measurements were obtained using a line ratio of Fe XII, and the average density was found to be 4.95 × 10^9 cm^-3, with the volumetric plasma filling factor calculated to have an average value of 0.04. Emission measure loci plots were then used to infer a steady temperature of log Te [K] ~ 6.1. The calculated Fe XII Doppler shifts show velocity changes in and around the bright point of ±15 km s^-1 which are observed to change

  13. Interactive PDF files with embedded 3D designs as support material to study the 32 crystallographic point groups

    NASA Astrophysics Data System (ADS)

    Arribas, Victor; Casas, Lluís; Estop, Eugènia; Labrador, Manuel

    2014-01-01

    Crystallography and X-ray diffraction techniques are essential topics in geosciences and other solid-state sciences. Their fundamentals, which include point symmetry groups, are taught in the corresponding university courses. In-depth meaningful learning of symmetry concepts is difficult and requires capacity for abstraction and spatial vision. Traditionally, wooden crystallographic models are used as support material. In this paper, we describe a new interactive tool, freely available, inspired in such models. Thirty-two PDF files containing embedded 3D models have been created. Each file illustrates a point symmetry group and can be used to teach/learn essential symmetry concepts and the International Hermann-Mauguin notation of point symmetry groups. Most interactive computer-aided tools devoted to symmetry deal with molecular symmetry and disregard crystal symmetry so we have developed a tool that fills the existing gap.

  14. Absence of Critical Points of Solutions to the Helmholtz Equation in 3D

    NASA Astrophysics Data System (ADS)

    Alberti, Giovanni S.

    2016-05-01

    The focus of this paper is to show the absence of critical points for solutions to the Helmholtz equation in a bounded domain $\Omega \subset \mathbb{R}^3$, given by $\mathrm{div}(a \nabla u_\omega^g) - \omega q\, u_\omega^g = 0$ in $\Omega$, with $u_\omega^g = g$ on $\partial\Omega$. We prove that for an admissible $g$ there exists a finite set of frequencies $K$ in a given interval and an open cover $\overline{\Omega} = \cup_{\omega \in K} \Omega_\omega$ such that $|\nabla u_\omega^g(x)| > 0$ for every $\omega \in K$ and $x \in \Omega_\omega$. The set $K$ is explicitly constructed. If the spectrum of this problem is simple, which is true for a generic domain $\Omega$, the admissibility condition on $g$ is a generic property.

  15. Well log analysis to assist the interpretation of 3-D seismic data at Milne Point, north slope of Alaska

    USGS Publications Warehouse

    Lee, Myung W.

    2005-01-01

    In order to assess the resource potential of gas hydrate deposits in the North Slope of Alaska, 3-D seismic and well data at Milne Point were obtained from BP Exploration (Alaska), Inc. The well-log analysis has three primary purposes: (1) estimate gas hydrate or gas saturations from the well logs; (2) predict P-wave velocity where there is no measured P-wave velocity in order to generate synthetic seismograms; and (3) edit P-wave velocities where degraded borehole conditions, such as washouts, affected the P-wave measurement significantly. Edited/predicted P-wave velocities were needed to map the gas-hydrate-bearing horizons in the complexly faulted upper part of the 3-D seismic volume. The estimated gas-hydrate/gas saturations from the well logs were related to seismic attributes in order to map the regional distribution of gas hydrate inside the 3-D seismic grid. The P-wave velocities were predicted using the modified Biot-Gassmann theory, herein referred to as BGTL, with gas-hydrate saturations estimated from the resistivity logs, porosity, and clay volume content. The effect of gas on velocities was modeled using the classical Biot-Gassmann theory (BGT) with parameters estimated from BGTL.

  16. Robust approximation of the Medial Axis Transform of LiDAR point clouds as a tool for visualisation

    NASA Astrophysics Data System (ADS)

    Peters, Ravi; Ledoux, Hugo

    2016-05-01

    Governments and companies around the world collect point clouds (datasets containing elevation points) because these are useful for many applications, e.g. to reconstruct 3D city models, to understand and predict the impact of floods, and to monitor dikes. We address in this paper the visualisation of point clouds, which is perhaps the most essential instrument a practitioner or a scientist has to analyse and understand such datasets. We argue that it is currently hampered by two main problems: (1) point clouds are often massive (several billion points); (2) the viewer's perception of depth and structure is often lost (because of the sparse and unstructured points). We propose solving both problems by using the Medial Axis Transform (MAT) and its properties. This allows us (1) to smartly simplify a point cloud in a geometry-dependent way (to preserve only significant features), and (2) to render splats whose radii are adaptive to the distribution of points (and thus obtain fewer "holes" in the surface). Our main contribution is a series of heuristics that allows us to compute the MAT robustly for noisy real-world LiDAR point clouds, and to compute the MAT for point clouds that do not fit into main memory. We have implemented our algorithms, we report on experiments made with point clouds (of more than one billion points), and we demonstrate that we are able to render scenes with far fewer points than in the original point cloud (we preserve around 10%) while retaining good depth perception and a sense of structure at close viewing distances.

  17. 3D imaging with the light sword optical element and deconvolution of distance-dependent point spread functions

    NASA Astrophysics Data System (ADS)

    Makowski, Michal; Petelczyc, Krzysztof; Kolodziejczyk, Andrzej; Jaroszewicz, Zbigniew; Ducin, Izabela; Kakarenko, Karol; Siemion, Agnieszka; Siemion, Andrzej; Suszek, Jaroslaw; Sypek, Maciej; Wojnowski, Dariusz

    2010-12-01

    We present an experimental demonstration of a blind deconvolution method on an imaging system with a Light Sword optical element (LSOE) used instead of a lens. Trial-and-error deconvolution of known point spread functions (PSFs) from an input image captured on a single CCD camera is performed. By identifying the PSF that provides the best contrast of the optotypes seen in a frame, one can determine the defocus parameter and hence the object distance. Therefore, with a single exposure on a standard CCD camera we gain information on the depth of a 3-D scene. Exemplary results are presented for a simple scene containing three optotypes at three distances from the imaging element.

  18. a Semi-Automatic Procedure for Texturing of Laser Scanning Point Clouds with Google Streetview Images

    NASA Astrophysics Data System (ADS)

    Lichtenauer, J. F.; Sirmacek, B.

    2015-08-01

    We introduce a method to texture 3D urban models with photographs that even works for Google Streetview images and can be done with currently available free software. This allows realistic texturing, even when it is not possible or cost-effective to (re)visit a scanned site to take textured scans or photographs. Mapping a photograph onto a 3D model requires knowledge of the intrinsic and extrinsic camera parameters. The common way to obtain intrinsic parameters of a camera is by taking several photographs of a calibration object with a priori known structure. The extra challenge of using images from a database such as Google Streetview, rather than your own photographs, is that it does not allow for any controlled calibration. To overcome this limitation, we propose to calibrate the panoramic viewer of Google Streetview using Structure from Motion (SfM) on any structure of which Google Streetview offers views from multiple angles. After this, the extrinsic parameters for any other view can be calculated from 3 or more tie points between the image from Google Streetview and a 3D model of the scene. These point correspondences can either be obtained automatically or selected by manual annotation. We demonstrate how this procedure provides realistic 3D urban models in an easy and effective way, by using it to texture a publicly available point cloud from a terrestrial laser scan made in Bremen, Germany, with a screenshot from Google Streetview, after estimating the focal length from views from Paris, France.
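
    The step of recovering the extrinsic parameters from tie points between a Streetview frame and the 3D model is a standard pose-estimation (PnP) problem. Below is a minimal sketch assuming OpenCV; the function and variable names are illustrative, the intrinsics K are those recovered by the SfM calibration described above, and in practice at least four well-spread tie points are advisable for the default solver.

```python
import numpy as np
import cv2  # OpenCV, assumed available

def streetview_extrinsics(points_3d, points_2d, K):
    """Estimate the pose of a Streetview view from tie points to the 3D model.

    points_3d : (N, 3) model coordinates of the tie points.
    points_2d : (N, 2) pixel coordinates in the Streetview image.
    K         : 3x3 intrinsic matrix obtained beforehand (e.g. via SfM).
    """
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(points_3d, dtype=np.float64),
        np.asarray(points_2d, dtype=np.float64),
        np.asarray(K, dtype=np.float64), None)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)   # rotation vector -> 3x3 rotation matrix
    return R, tvec               # model-to-camera rotation and translation
```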

  19. The Neighboring Column Approximation (NCA) - A fast approach for the calculation of 3D thermal heating rates in cloud resolving models

    NASA Astrophysics Data System (ADS)

    Klinger, Carolin; Mayer, Bernhard

    2016-01-01

    Due to computational costs, radiation is usually neglected or solved in a plane-parallel 1D approximation in today's numerical weather forecast and cloud resolving models. We present a fast and accurate method to calculate 3D heating and cooling rates in the thermal spectral range that can be used in cloud resolving models. The parameterization considers net fluxes across horizontal box boundaries in addition to the top and bottom boundaries. Since the largest heating and cooling rates occur inside the cloud, close to the cloud edge, the method needs to first approximation only the information of whether a grid box is at the edge of a cloud or not. Therefore, in order to calculate the heating or cooling rate of a specific grid box, only the directly neighboring columns are used. Our so-called Neighboring Column Approximation (NCA) is an analytical consideration of cloud side effects which can be considered a convolution of a 1D radiative transfer result with a kernel of radius 1 grid box (5-point stencil), and which does not usually break the parallelization of a cloud resolving model. The NCA can easily be applied to any cloud resolving model that includes a 1D radiation scheme. Due to the neglect of horizontal transport of radiation further away than one model column, the NCA works best for model resolutions of about 100 m or larger. In this paper we describe the method and show a set of applications to LES cloud field snapshots. Correction terms, gains and restrictions of the NCA are described. Comprehensive comparisons to the 3D Monte Carlo model MYSTIC and to a 1D solution are shown. In realistic cloud fields, the full 3D simulation with MYSTIC shows cooling rates of up to -150 K/d (100 m resolution) while the 1D solution shows maximum cooling of only -100 K/d. The NCA is capable of reproducing the larger 3D cooling rates. The spatial distribution of the heating and cooling is improved considerably. Computational costs are only a factor of 1.5-2 higher compared to a 1D

  20. Automatic Creation of Structural Models from Point Cloud Data: the Case of Masonry Structures

    NASA Astrophysics Data System (ADS)

    Riveiro, B.; Conde-Carnero, B.; González-Jorge, H.; Arias, P.; Caamaño, J. C.

    2015-08-01

    One of the fields where 3D modelling has an important role is the application of 3D models for structural engineering purposes. The literature shows intense activity on the conversion of 3D point cloud data to detailed structural models, which has special relevance for masonry structures, where geometry plays a key role. In the work presented in this paper, color data (from the intensity attribute) is used to automatically segment masonry structures with the aim of isolating masonry blocks and defining interfaces in an automatic manner using a 2.5D approach. An algorithm for the automatic processing of laser scanning data based on an improved marker-controlled watershed segmentation was proposed, and successful results were found. The geometric accuracy and resolution of the point cloud are constrained by the scanning instruments, giving accuracy levels of a few millimetres in the case of static instruments and a few centimetres in the case of mobile systems. In any case, the algorithm is not significantly sensitive to low quality images, because acceptable segmentation results were found in cases where blocks could not be visually segmented.
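
    The core of the marker-controlled watershed idea can be sketched in a few lines, assuming scikit-image and SciPy and a 2.5D intensity raster of the wall. This is only a simplified stand-in: the `mortar_threshold` is a hypothetical intensity cut, and the marker selection is much cruder than the improved scheme described above.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def segment_blocks(intensity_image, mortar_threshold):
    """Marker-controlled watershed on a 2.5D intensity raster of a wall.

    Dark mortar joints act as ridges separating the stone blocks; markers
    are seeded at pixels far from any joint.
    """
    stone = intensity_image > mortar_threshold        # likely stone pixels
    distance = ndi.distance_transform_edt(stone)      # distance to joints
    markers, _ = ndi.label(distance > 0.5 * distance.max())
    return watershed(-distance, markers, mask=stone)  # one label per block
```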

  1. Partial difference operators on weighted graphs for image processing on surfaces and point clouds.

    PubMed

    Lozes, Francois; Elmoataz, Abderrahim; Lezoray, Olivier

    2014-09-01

    Partial difference equations (PDEs) and variational methods for image processing on Euclidean domains are very well established because they make it possible to solve a large range of real computer vision problems. With the recent advent of many 3D sensors, there is a growing interest in transposing and solving PDEs on surfaces and point clouds. In this paper, we propose a simple method to solve such PDEs using the framework of PDEs on graphs. This approach enables us to transcribe, for surfaces and point clouds, many models and algorithms designed for image processing. To illustrate our proposal, three problems are considered: (1) p-Laplacian restoration and inpainting; (2) PDE-based mathematical morphology; and (3) active contour segmentation. PMID:25020095

  2. Automatic registration of optical aerial imagery to a LiDAR point cloud for generation of city models

    NASA Astrophysics Data System (ADS)

    Abayowa, Bernard O.; Yilmaz, Alper; Hardie, Russell C.

    2015-08-01

    This paper presents a framework for the automatic registration of both the optical and 3D structural information extracted from oblique aerial imagery to a Light Detection and Ranging (LiDAR) point cloud, without prior knowledge of an initial alignment. The framework employs a coarse-to-fine strategy in the estimation of the registration parameters. First, a dense 3D point cloud and the associated relative camera parameters are extracted from the optical aerial imagery using a state-of-the-art 3D reconstruction algorithm. Next, a digital surface model (DSM) is generated from both the LiDAR and the optical imagery-derived point clouds. Coarse registration parameters are then computed from salient features extracted from the LiDAR and optical imagery-derived DSMs. The registration parameters are further refined using the iterative closest point (ICP) algorithm to minimize the global error between the registered point clouds. The novelty of the proposed approach lies in the computation of salient features from the DSMs and the selection of matching salient features using geometric invariants coupled with Normalized Cross Correlation (NCC) match validation. The feature extraction and matching process enables the automatic estimation of the coarse registration parameters required for initializing the fine registration process. The registration framework is tested on a simulated scene and on aerial datasets acquired in real urban environments. Results demonstrate the robustness of the framework for registering optical and 3D structural information extracted from aerial imagery to a LiDAR point cloud when initial registration parameters are unavailable.

  3. The effect of load on torques in point-to-point arm movements: a 3D model.

    PubMed

    Tibold, Robert; Laczko, Jozsef

    2012-01-01

    A dynamic, three-dimensional model was developed to simulate slightly restricted (pronation-supination was not allowed) point-to-point movements of the upper limb under different external loads, which were modeled using three objects of distinct masses held in the hand. The model considered structural and biomechanical properties of the arm and measured coordinates of joint positions. The model predicted the muscle torques needed to produce the measured rotations in the shoulder and elbow joints. The effects of different object masses on torque profiles, magnitudes, and directions were studied. Correlation analysis showed that torque profiles in the shoulder and elbow joints are load invariant. The shape of the torque magnitude-time curve is load invariant, but it is scaled with the mass of the load. Objects with larger masses are associated with a lower deflection of the elbow torque with respect to the sagittal plane. The torque direction-time curve is likewise load invariant, scaled with the mass of the load. The authors propose that the load invariance of the torque magnitude-time curve and torque direction-time curve holds for object-transporting arm movements not restricted to a plane. PMID:22938084

  4. Automatic 3D Building Detection and Modeling from Airborne LiDAR Point Clouds

    ERIC Educational Resources Information Center

    Sun, Shaohui

    2013-01-01

    Urban reconstruction, with an emphasis on man-made structure modeling, is an active research area with broad impact on several potential applications. Urban reconstruction combines photogrammetry, remote sensing, computer vision, and computer graphics. Even though there is a huge volume of work that has been done, many problems still remain…

  5. Coarse Point Cloud Registration by Egi Matching of Voxel Clusters

    NASA Astrophysics Data System (ADS)

    Wang, Jinhu; Lindenbergh, Roderik; Shen, Yueqian; Menenti, Massimo

    2016-06-01

    Laser scanning samples the surface geometry of objects efficiently and records versatile information as point clouds. However, often more scans are required to fully cover a scene. Therefore, a registration step is required that transforms the different scans into a common coordinate system. The registration of point clouds is usually conducted in two steps, i.e. coarse registration followed by fine registration. In this study an automatic marker-free coarse registration method for pair-wise scans is presented. First, the two input point clouds are re-sampled as voxels and dimensionality features of the voxels are determined by principal component analysis (PCA). Then voxel cells with the same dimensionality are clustered. Next, the Extended Gaussian Image (EGI) descriptors of those voxel clusters are constructed using the significant eigenvectors of each voxel in the cluster. Correspondences between clusters in the source and target data are obtained according to the similarity between their EGI descriptors. The random sample consensus (RANSAC) algorithm is employed to remove outlying correspondences until a coarse alignment is obtained. If necessary, a fine registration is performed in a final step. This new method is illustrated on scan data sampling two indoor scenarios. The results of the tests are evaluated by computing the point-to-point distance between the two input point clouds. The two presented tests resulted in mean distances of 7.6 mm and 9.5 mm respectively, which are adequate for fine registration.
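
    A minimal sketch of the voxel dimensionality step (PCA of the points inside each voxel and classification into linear, planar or scattered cells) is shown below; the voxel size, thresholds and function name are assumptions, not the paper's implementation.

```python
import numpy as np
from collections import defaultdict

def voxel_dimensionality(points, voxel_size=0.5):
    """Label each voxel as 1 (linear), 2 (planar) or 3 (scattered) from PCA eigenvalues."""
    keys = np.floor(points / voxel_size).astype(int)
    cells = defaultdict(list)
    for p, k in zip(points, map(tuple, keys)):
        cells[k].append(p)
    labels = {}
    for k, pts in cells.items():
        pts = np.asarray(pts)
        if len(pts) < 4:                              # too few points for stable PCA
            continue
        lam = np.sort(np.linalg.eigvalsh(np.cov(pts.T)))[::-1]   # eigenvalues, largest first
        lam = np.maximum(lam, 1e-12)
        a_1d = (lam[0] - lam[1]) / lam[0]             # linearity
        a_2d = (lam[1] - lam[2]) / lam[0]             # planarity
        a_3d = lam[2] / lam[0]                        # scattering
        labels[k] = 1 + int(np.argmax([a_1d, a_2d, a_3d]))
    return labels
```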

  6. A quantitative study of 3D-scanning frequency and Δd of tracking points on the tooth surface

    PubMed Central

    Li, Hong; Lyu, Peijun; Sun, Yuchun; Wang, Yong; Liang, Xiaoyue

    2015-01-01

    Micro-movement of human jaws in the resting state might influence the accuracy of direct three-dimensional (3D) measurement. Providing a reference for the sampling frequency settings of intraoral scanning systems to overcome this influence is important. In this study, we measured micro-movement, or change in distance (∆d), as the change in position of a single tracking point from one sampling time point to another in five human subjects. ∆d of tracking points on incisors at 7 sampling frequencies was judged against the clinical accuracy requirement to select proper sampling frequency settings. A curve equation was then fitted between the median ∆d and the sampling frequency to predict the trend of ∆d with increasing frequency. The differences in ∆d among the subjects and between upper and lower incisor feature points of the same subject were analyzed by a non-parametric test (α = 0.05). Significant differences of incisor feature points were noted among different subjects and between upper and lower jaws of the same subject (P < 0.01). Overall, ∆d decreased with increasing frequency. When the frequency was 60 Hz, ∆d nearly reached the clinical accuracy requirement. Frequencies higher than 60 Hz did not significantly decrease Δd further. PMID:26400112

  7. The Iqmulus Urban Showcase: Automatic Tree Classification and Identification in Huge Mobile Mapping Point Clouds

    NASA Astrophysics Data System (ADS)

    Böhm, J.; Bredif, M.; Gierlinger, T.; Krämer, M.; Lindenberg, R.; Liu, K.; Michel, F.; Sirmacek, B.

    2016-06-01

    Current 3D data capture, as implemented on, for example, airborne or mobile laser scanning systems, can efficiently sample the surface of a city with billions of unselective points during one working day. What is still difficult is to extract and visualize meaningful information hidden in these point clouds with the same efficiency. This is where the FP7 IQmulus project enters the scene. IQmulus is an interactive facility for processing and visualizing big spatial data. In this study the potential of IQmulus is demonstrated on a laser mobile mapping point cloud of 1 billion points sampling ~ 10 km of street environment in Toulouse, France. After the data is uploaded to the IQmulus Hadoop Distributed File System, a workflow is defined by the user consisting of retiling the data followed by a PCA-driven local dimensionality analysis, which runs efficiently on the IQmulus cloud facility using a Spark implementation. Points scattering in 3 directions are clustered into the tree class and are then separated into individual trees. Five hours of processing on the 12-node computing cluster results in the automatic identification of 4000+ urban trees. Visualization of the results in the IQmulus fat client helps users to appreciate the results, and developers to identify remaining flaws in the processing workflow.

  8. Continuum Limit of Total Variation on Point Clouds

    NASA Astrophysics Data System (ADS)

    García Trillos, Nicolás; Slepčev, Dejan

    2016-04-01

    We consider point clouds obtained as random samples of a measure on a Euclidean domain. A graph representing the point cloud is obtained by assigning weights to edges based on the distance between the points they connect. Our goal is to develop mathematical tools needed to study the consistency, as the number of available data points increases, of graph-based machine learning algorithms for tasks such as clustering. In particular, we study when the cut capacity, and more generally total variation, on these graphs is a good approximation of the perimeter (total variation) in the continuum setting. We address this question in the setting of Γ-convergence. We obtain almost optimal conditions on the scaling, as the number of points increases, of the size of the neighborhood over which the points are connected by an edge for the Γ-convergence to hold. Taking the limit is enabled by a transportation-based metric which allows us to suitably compare functionals defined on different point clouds.
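
    For orientation, the discrete graph total variation studied in this line of work can be written, under the stated graph construction, roughly as

\[
\mathrm{GTV}_{n,\varepsilon}(u) \;=\; \frac{1}{\varepsilon\, n^{2}} \sum_{i,j=1}^{n} \eta\!\left(\frac{|x_i - x_j|}{\varepsilon}\right) \bigl| u(x_i) - u(x_j) \bigr|,
\]

    where x_1, ..., x_n are the sample points, \eta is the edge-weight kernel and \varepsilon the connectivity length scale. This is a hedged paraphrase; the exact normalization and the admissible scalings of \varepsilon with n are those stated in the paper.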

  9. From cloud-of-point coordinates to three-dimensional virtual environment: the data conversion system

    NASA Astrophysics Data System (ADS)

    Sitnik, Robert; Kujawinska, Malgorzata

    2002-02-01

    The sequential steps of conversion of data gathered by a full-field 3-D shape measurement optical system into CAD/CAM and multimedia environments are discussed. The complete triangulation algorithm, which automatically creates the triangle mesh from the input cloud of points, is described. Each block of this algorithm is explained in detail, with special attention paid to the parameters controlling the quality of the data conversion process. The adaptive process of reducing the number of triangles based on a second derivative of the local curvature of an object's surface is explained. The error analysis is discussed at each step of the cloud data processing as a function of the algorithm's initial parameters. Three algorithms that process additional color information (R,G,B) into the texture mapped onto the triangle mesh are presented. The usefulness of the complete conversion process is demonstrated by the manufacturing of an exemplary object, the export of a human 3-D face to the Internet, and the export of an example object into a 3-D virtual environment.

  10. The Complete (3-D) Co-Seismic Displacements Using Point-Like Targets Tracking With Ascending And Descending SAR Data

    NASA Astrophysics Data System (ADS)

    Hu, Xie; Wang, Teng; Liao, Mingsheng

    2013-12-01

    SAR Interferometry (InSAR) has unique advantages, e.g., all-weather/time accessibility, cm-level accuracy and large spatial coverage; however, it can only obtain a one-dimensional measurement along the line-of-sight (LOS) direction. Offset tracking is an important complement to measure large and rapid displacements in both the azimuth and range directions. Here we perform offset tracking on detected point-like targets (PT) by calculating the cross-correlation with a sinc-like template. A complete 3-D displacement field can then be derived using PT offset tracking from a pair of ascending and descending data. The presented case study on the 2010 M7.2 El Mayor-Cucapah earthquake helps us better understand the rupture details.
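
    Combining LOS and azimuth offsets from ascending and descending geometries yields four projections of the three-component displacement, which can be solved in a least-squares sense. The sketch below illustrates this; the unit vectors and offset values are purely hypothetical numbers, not from the study.

```python
import numpy as np

def offsets_to_3d(unit_vectors, offsets):
    """Least-squares east/north/up displacement from projected offset measurements."""
    A = np.asarray(unit_vectors, dtype=float)          # one row per measurement direction
    d = np.asarray(offsets, dtype=float)
    x, *_ = np.linalg.lstsq(A, d, rcond=None)
    return x                                            # (east, north, up)

# hypothetical unit vectors for ascending/descending LOS and azimuth directions
directions = [[-0.61,  0.11, 0.78],    # ascending LOS
              [ 0.17,  0.98, 0.00],    # ascending azimuth
              [ 0.60,  0.12, 0.79],    # descending LOS
              [ 0.19, -0.98, 0.00]]    # descending azimuth
measured = [0.12, -0.40, -0.05, -0.38]  # metres, purely illustrative
print(offsets_to_3d(directions, measured))
```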

  11. 3-D seismic over the Fausse Pointe Field: A case history of acquisition in a harsh environment

    SciTech Connect

    Duncan, P.M.; Nester, D.C.; Martin, J.A.; Moles, J.R.

    1995-12-31

    A 50 square mile 3D seismic survey was successfully acquired over Fausse Pointe Field in the latter half of 1994. The geophysical and logistical challenges of this project were immense. The steep dips and extensive range of target depths required a large shoot area with a relatively fine sampling interval. The surface, while essentially flat, included areas of cane field, crawfish ponds, thick brush, swamp, open lakes and deep canals -- all typical of southern Louisiana. Planning and permitting of the survey began in late 1993. Field operations began in June 1994 and were completed in January 1995. Field personnel numbered 150 at the peak of operations. More than 19,000 crew hours were required to complete the job at a cost of over $5,000,000. The project was completed on time and on budget. The resulting images of the salt dome and surrounding rocks are not only beautiful but are revealing many opportunities for new hydrocarbon development.

  12. Genomic cloud computing: legal and ethical points to consider

    PubMed Central

    Dove, Edward S; Joly, Yann; Tassé, Anne-Marie; Burton, Paul; Chisholm, Rex; Fortier, Isabel; Goodwin, Pat; Harris, Jennifer; Hveem, Kristian; Kaye, Jane; Kent, Alistair; Knoppers, Bartha Maria; Lindpaintner, Klaus; Little, Julian; Riegman, Peter; Ripatti, Samuli; Stolk, Ronald; Bobrow, Martin; Cambon-Thomsen, Anne; Dressler, Lynn; Joly, Yann; Kato, Kazuto; Knoppers, Bartha Maria; Rodriguez, Laura Lyman; McPherson, Treasa; Nicolás, Pilar; Ouellette, Francis; Romeo-Casabona, Carlos; Sarin, Rajiv; Wallace, Susan; Wiesner, Georgia; Wilson, Julia; Zeps, Nikolajs; Simkevitz, Howard; De Rienzo, Assunta; Knoppers, Bartha M

    2015-01-01

    The biggest challenge in twenty-first century data-intensive genomic science is developing vast computer infrastructure and advanced software tools to perform comprehensive analyses of genomic data sets for biomedical research and clinical practice. Researchers are increasingly turning to cloud computing both as a solution to integrate data from genomics, systems biology and biomedical data mining and as an approach to analyze data to solve biomedical problems. Although cloud computing provides several benefits such as lower costs and greater efficiency, it also raises legal and ethical issues. In this article, we discuss three key 'points to consider' (data control; data security, confidentiality and transfer; and accountability) based on a preliminary review of several publicly available cloud service providers' Terms of Service. These 'points to consider' should be borne in mind by genomic research organizations when negotiating legal arrangements to store genomic data on a large commercial cloud service provider's servers. Diligent genomic cloud computing means leveraging security standards and evaluation processes as a means to protect data and entails many of the same good practices that researchers should always consider in securing their local infrastructure. PMID:25248396

  13. Genomic cloud computing: legal and ethical points to consider.

    PubMed

    Dove, Edward S; Joly, Yann; Tassé, Anne-Marie; Knoppers, Bartha M

    2015-10-01

    The biggest challenge in twenty-first century data-intensive genomic science is developing vast computer infrastructure and advanced software tools to perform comprehensive analyses of genomic data sets for biomedical research and clinical practice. Researchers are increasingly turning to cloud computing both as a solution to integrate data from genomics, systems biology and biomedical data mining and as an approach to analyze data to solve biomedical problems. Although cloud computing provides several benefits such as lower costs and greater efficiency, it also raises legal and ethical issues. In this article, we discuss three key 'points to consider' (data control; data security, confidentiality and transfer; and accountability) based on a preliminary review of several publicly available cloud service providers' Terms of Service. These 'points to consider' should be borne in mind by genomic research organizations when negotiating legal arrangements to store genomic data on a large commercial cloud service provider's servers. Diligent genomic cloud computing means leveraging security standards and evaluation processes as a means to protect data and entails many of the same good practices that researchers should always consider in securing their local infrastructure. PMID:25248396

  14. Classification of mobile terrestrial laser point clouds using semantic constraints

    NASA Astrophysics Data System (ADS)

    Pu, Shi; Zhan, Qingming

    2009-08-01

    With mobile terrestrial laser scanning, laser point clouds of large urban areas can be acquired rapidly while driving at normal speed. Classification of the laser points is beneficial for city reconstruction from laser point clouds, but a manual classification process can be rather time-consuming due to the huge number of laser points. Although the pulse return is often used to automate classification, it only makes it possible to distinguish a limited set of types such as vegetation and ground. In this paper we present a new method which classifies mobile terrestrial laser point clouds using only coordinate information. First, the point cloud of a whole urban scene is segmented, and geometric properties of each segment are computed. Then semantic constraints for several object types are derived from observation and knowledge. These constraints concern not only geometric properties of the semantic objects, but also regulate the topological and hierarchical relations between objects. A search tree is formulated from the semantic constraints and applied to the laser segments for interpretation. A 2D map can provide the approximate locations of the buildings and roads as well as the roads' dominant directions, so it is integrated to reduce the search space. The applicability of this method is demonstrated with a Lynx dataset of the city of Enschede and a Streetmapper dataset of the city of Esslingen. Four object types (ground, road, building façade, and traffic symbols) are classified in these datasets.

  15. Automatic Extraction and Regularization of Building Outlines from Airborne LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Albers, Bastian; Kada, Martin; Wichmann, Andreas

    2016-06-01

    Building outlines are needed for various applications like urban planning, 3D city modelling and cadastre updating. Their automatic reconstruction, e.g. from airborne laser scanning data, as regularized shapes is therefore of high relevance. Today's airborne laser scanning technology can produce dense 3D point clouds with high accuracy, which makes it an eligible data source for reconstructing 2D building outlines or even 3D building models. In this paper, we propose an automatic building outline extraction and regularization method that implements a trade-off between enforcing strict shape restrictions and allowing flexible angles using an energy minimization approach. The proposed procedure can be summarized for each building as follows: (1) an initial building outline is created from a given set of building points with the alpha shape algorithm; (2) a Hough transform is used to determine the main directions of the building and to extract line segments which are oriented accordingly; (3) the alpha shape boundary points are then repositioned to both follow these segments and respect their original location, favoring long line segments and certain angles. The energy function that guides this trade-off is evaluated with the Viterbi algorithm.
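
    Step (1) of the procedure uses the alpha shape of the building points. A compact way to obtain a 2-D alpha shape boundary is to filter Delaunay triangles by circumradius and keep edges belonging to exactly one retained triangle, as sketched below; this is a generic stand-in, not the authors' implementation, and the alpha value is an assumed parameter.

```python
import numpy as np
from collections import Counter
from scipy.spatial import Delaunay

def alpha_shape_boundary(points_2d, alpha):
    """Boundary edges of a 2-D alpha shape: keep Delaunay triangles whose
    circumradius is below alpha; edges used by exactly one kept triangle form the outline."""
    tri = Delaunay(points_2d)
    edge_count = Counter()
    for ia, ib, ic in tri.simplices:
        a, b, c = points_2d[ia], points_2d[ib], points_2d[ic]
        la, lb, lc = np.linalg.norm(b - c), np.linalg.norm(a - c), np.linalg.norm(a - b)
        s = 0.5 * (la + lb + lc)
        area = max(np.sqrt(max(s * (s - la) * (s - lb) * (s - lc), 0.0)), 1e-12)  # Heron
        if la * lb * lc / (4.0 * area) < alpha:        # circumradius test
            for e in ((ia, ib), (ib, ic), (ia, ic)):
                edge_count[tuple(sorted(e))] += 1
    return [e for e, n in edge_count.items() if n == 1]
```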

  16. A Coupled fcGCM-GCE Modeling System: A 3D Cloud Resolving Model and a Regional Scale Model

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo

    2005-01-01

    Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that cloud-resolving models (CRMs) agree with observations better than traditional single-column models in simulating various types of clouds and cloud systems from different geographic locations. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. Using these satellite data to improve the understanding of the physical processes that are responsible for the variation in global and regional climate and hydrological systems requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF). The use of a GCM enables global coverage, and the use of a CRM allows for better and more sophisticated physical parameterization. NASA satellite and field campaign cloud-related datasets can provide initial conditions as well as validation for both the MMF and CRMs. The Goddard MMF is based on the 2D Goddard Cumulus Ensemble (GCE) model and the Goddard finite volume general circulation model (fvGCM), and it has started production runs with two years of results (1998 and 1999). Also, at Goddard, we have implemented several Goddard microphysical schemes (2ICE, several 3ICE), Goddard radiation (including explicitly calculated cloud optical properties), and the Goddard Land Information System (LIS, which includes the CLM and NOAH land surface models) into a next-generation regional scale model, WRF. In this talk, I will present: (1) a brief review of the GCE model and its applications to precipitation processes (microphysical and land processes), (2) the Goddard MMF, the major differences between the two existing MMFs (CSU MMF and Goddard MMF), and preliminary results (the comparison with traditional GCMs), (3) a discussion of the Goddard WRF version (its developments and applications), and (4) the characteristics of the four-dimensional cloud data

  17. Segmentation-Based Ground Points Detection from Mobile Laser Scanning Point Cloud

    NASA Astrophysics Data System (ADS)

    Lin, X.; Zhang, J.

    2015-06-01

    In most Mobile Laser Scanning (MLS) applications, filtering is a necessary step. In this paper, a segmentation-based filtering method is proposed for MLS point clouds, where a segment rather than an individual point is the basic processing unit. In particular, the MLS point cloud is divided into blocks and clustered into segments by a surface growing algorithm, and then the object segments are detected and removed. A segment-based filtering method is employed to detect the ground segments. Two MLS point cloud datasets are used to evaluate the proposed method. Experiments indicate that, compared with the classic progressive TIN (Triangulated Irregular Network) densification algorithm, the proposed method is capable of reducing the omission error, the commission error and the total error by 3.62%, 7.87% and 5.54% on average, respectively.

  18. Production of Lightning NO(x) and its Vertical Distribution Calculated from 3-D Cloud-scale Chemical Transport Model Simulations

    NASA Technical Reports Server (NTRS)

    Ott, Lesley; Pickering, Kenneth; Stenchikov, Georgiy; Allen, Dale; DeCaria, Alex; Ridley, Brian; Lin, Ruei-Fong; Lang, Steve; Tao, Wei-Kuo

    2009-01-01

    A 3-D cloud scale chemical transport model that includes a parameterized source of lightning NO(x) based on observed flash rates has been used to simulate six midlatitude and subtropical thunderstorms observed during four field projects. Production per intracloud (P(sub IC)) and cloud-to-ground (P(sub CG)) flash is estimated by assuming various values of P(sub IC) and P(sub CG) for each storm and determining which production scenario yields NO(x) mixing ratios that compare most favorably with in-cloud aircraft observations. We obtain a mean P(sub CG) value of 500 moles NO (7 kg N) per flash. The results of this analysis also suggest that on average, P(sub IC) may be nearly equal to P(sub CG), which is contrary to the common assumption that intracloud flashes are significantly less productive of NO than are cloud-to-ground flashes. This study also presents vertical profiles of the mass of lightning NO(x) after convection based on 3-D cloud-scale model simulations. The results suggest that following convection, a large percentage of lightning NO(x) remains in the middle and upper troposphere where it originated, while only a small percentage is found near the surface. The results of this work differ from profiles calculated from 2-D cloud-scale model simulations with a simpler lightning parameterization, which were peaked near the surface and in the upper troposphere (referred to as a "C-shaped" profile). The new model results (a backward C-shaped profile) suggest that chemical transport models that assume a C-shaped vertical profile of lightning NO(x) mass may place too much mass near the surface and too little in the middle troposphere.

  19. Correction and Densification of Uas-Based Photogrammetric Thermal Point Cloud

    NASA Astrophysics Data System (ADS)

    Akcay, O.; Erenoglu, R. C.; Erenoglu, O.

    2016-06-01

    Photogrammetric processing algorithms can suffer problems due to either the initial image quality (noise, low radiometric quality, shadows and so on) or certain surface materials (shiny or textureless objects). This can result in noisy point clouds and/or difficulties in feature extraction. Specifically, dense point clouds generated photogrammetrically from a lightweight thermal camera are noisier and sparser than the point clouds of high-resolution digital camera images. In this paper, a new method which produces a more reliable and dense thermal point cloud from the sparse thermal point cloud and a high-resolution digital point cloud was considered. Thermal and digital images were obtained with a UAS (Unmanned Aerial System) carrying a lightweight Optris PI 450 thermal camera and a Canon EOS 605D digital camera. Thermal and digital point clouds, and orthophotos, were produced using photogrammetric methods. The problematic thermal point cloud was transformed into a high-density thermal point cloud using image processing methods such as rasterizing, registering, interpolation and filling. The results showed that the obtained thermal point cloud - up to the chosen processing parameters - was 87% denser than the original point cloud. The second improvement was in the height accuracy of the thermal point cloud. The new densified point cloud has a more consistent elevation model, while the original thermal point cloud shows serious deviations from the expected surface model.
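
    One simple way to realize the interpolation-and-filling idea described here is to resample the sparse thermal attribute onto the planimetric positions of the denser optical point cloud, as in the hedged sketch below; the function name, input layout and the choice of linear interpolation with nearest-neighbour hole filling are assumptions, not the paper's exact pipeline.

```python
import numpy as np
from scipy.interpolate import griddata

def densify_thermal(sparse_xy, sparse_temp, dense_xy):
    """Interpolate sparse thermal values onto the planimetric positions of a
    denser (e.g. RGB-derived) point cloud; nearest neighbour fills the remaining holes."""
    t = griddata(sparse_xy, sparse_temp, dense_xy, method="linear")
    holes = np.isnan(t)
    if holes.any():
        t[holes] = griddata(sparse_xy, sparse_temp, dense_xy[holes], method="nearest")
    return t
```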

  20. Analysis of shallow landslides by morphometry parameters derived from terrestrial laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Mayr, A.; Rutzinger, M.; Bremer, M.; Wiegand, C.; Kringer, K.; Geitner, C.

    2012-04-01

    Erosion by shallow landslides is a widespread and growing phenomenon in mountainous areas. The major consequences are loss of soil and regolith, damage to infrastructure, and the provision of unconsolidated material for secondary processes such as mudflows. In this study we present a concept for extracting morphometry parameters from terrestrial laser scanning (TLS) point clouds in order to investigate the relation between slope surface structure and regolith depth. TLS is used to collect high-resolution point cloud data of an affected slope in the Schmirn Valley (Tyrol, Austria). Regolith depth is considered to be one of the important factors for the development of shallow landslides. However, direct field measurements are labour- and time-consuming. We therefore developed an approach to investigate the relation between regolith depth and surface morphometry parameters. The reference regolith depth information is derived from lightweight dynamic cone penetrometer tests (DCPT) within the test site. The suggested approach integrates spatial analysis in Geographic Information Systems and point cloud processing algorithms. It will help to enhance the prediction of shallow landslide occurrence by (i) deriving high-resolution 3D morphometric parameters and (ii) determining regolith depth with a reasonable effort due to automation. In the future we want to contribute with this concept to the detailed modelling of shallow landslide susceptibility on alpine slopes.

  1. Road traffic sign detection and classification from mobile LiDAR point clouds

    NASA Astrophysics Data System (ADS)

    Weng, Shengxia; Li, Jonathan; Chen, Yiping; Wang, Cheng

    2016-03-01

    Traffic signs are important roadway assets that provide valuable information about the road, helping drivers behave more safely and easily. Due to the development of mobile mapping systems that can efficiently acquire dense point clouds along the road, automated detection and recognition of road assets has become an important research issue. This paper deals with the detection and classification of traffic signs in outdoor environments using mobile light detection and ranging (LiDAR) and inertial navigation technologies. The proposed method contains two main steps. It starts with an initial detection of traffic signs based on the intensity attributes of the point clouds, as traffic signs are always painted with highly reflective materials. Then, the classification of traffic signs is achieved based on the geometric shape and the pairwise 3D shape context. Results and performance analyses are provided to show the effectiveness and limits of the proposed method. The experimental results demonstrate the feasibility and effectiveness of the proposed method in detecting and classifying traffic signs from mobile LiDAR point clouds.

  2. Automated Point Cloud Correspondence Detection for Underwater Mapping Using AUVs

    NASA Technical Reports Server (NTRS)

    Hammond, Marcus; Clark, Ashley; Mahajan, Aditya; Sharma, Sumant; Rock, Stephen

    2015-01-01

    An algorithm for automating correspondence detection between point clouds composed of multibeam sonar data is presented. This allows accurate initialization for point cloud alignment techniques even in cases where accurate inertial navigation is not available, such as iceberg profiling or vehicles with low-grade inertial navigation systems. Techniques from computer vision literature are used to extract, label, and match keypoints between "pseudo-images" generated from these point clouds. Image matches are refined using RANSAC and information about the vehicle trajectory. The resulting correspondences can be used to initialize an iterative closest point (ICP) registration algorithm to estimate accumulated navigation error and aid in the creation of accurate, self-consistent maps. The results presented use multibeam sonar data obtained from multiple overlapping passes of an underwater canyon in Monterey Bay, California. Using strict matching criteria, the method detects 23 between-swath correspondence events in a set of 155 pseudo-images with zero false positives. Using less conservative matching criteria doubles the number of matches but introduces several false positive matches as well. Heuristics based on known vehicle trajectory information are used to eliminate these.
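
    The sketch below shows the general flavour of keypoint matching between two pseudo-images followed by RANSAC filtering. It uses ORB descriptors and a homography model as freely available stand-ins; the paper's actual feature type, geometric model and trajectory-based heuristics may differ.

```python
import cv2
import numpy as np

def match_pseudo_images(img_a, img_b, min_matches=10):
    """Detect, match and RANSAC-filter keypoints between two sonar pseudo-images."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    if len(matches) < min_matches:
        return None
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
    _, inlier_mask = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 3.0)
    if inlier_mask is None:
        return None
    keep = inlier_mask.ravel().astype(bool)
    return pts_a[keep], pts_b[keep]                    # matched pixel coordinates
```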

  3. An automated method to register airborne and terrestrial laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Zang, Yufu; Dong, Zhen; Huang, Ronggang

    2015-11-01

    Laser scanning techniques have been widely used to capture three-dimensional (3D) point clouds of various scenes (e.g. urban scenes). In particular, airborne laser scanning (ALS), mobile laser scanning (MLS) and terrestrial laser scanning (TLS) are effective for capturing point clouds from top or side views. Registering the complementary point clouds captured by ALS and MLS/TLS provides an aligned data source for many purposes (e.g. 3D reconstruction). Among these, MLS can be directly geo-referenced to ALS using the onboard positioning systems. For small scanning areas or dense building areas, TLS is used instead of MLS. However, registering ALS and TLS datasets suffers from poor automation and robustness because of the few overlapping areas and sparse corresponding geometric features. A robust method for the registration of TLS and ALS datasets is proposed, which has four key steps: (1) extract building outlines from the TLS and ALS datasets independently; (2) obtain the potential matching pairs of outlines according to the geometric constraints between building outlines; (3) construct the Laplacian matrices of the extracted building outlines to model the topology between the geometric features; (4) calculate the correlation coefficients of the extracted geometric features by decomposing the Laplacian matrices into the spectral space, providing correspondences between the extracted features for coarse registration. Finally, a multi-line adjustment strategy is employed for the fine registration. The robustness and accuracy of the proposed method are verified using field data, demonstrating a reliable and stable solution to accurately register ALS and TLS datasets.

  4. Object-Based Analysis of Aerial Photogrammetric Point Cloud and Spectral Data for Land Cover Mapping

    NASA Astrophysics Data System (ADS)

    Debella-Gilo, M.; Bjørkelo, K.; Breidenbach, J.; Rahlf, J.

    2013-04-01

    The acquisition of 3D point data by both aerial laser scanning (ALS) and matching of aerial stereo images, coupled with advances in image processing algorithms in the past years, provides opportunities to map land cover types with better precision than before. The present study applies Object-Based Image Analysis (OBIA) to 3D point cloud data obtained from matching of stereo aerial images together with spectral data to map land cover types of the Nord-Trøndelag county of Norway. The multi-resolution segmentation algorithm of the Definiens eCognition™ software is used to segment the scenes into homogeneous objects. The objects are then classified into different land cover types using rules created based on the definitions given for each land cover type by the Norwegian Forest and Landscape Institute. The quality of the land cover map was evaluated using data collected in the field as part of the Norwegian National Forest Inventory. The results show that the classification has an overall accuracy of about 80% and a kappa index of about 0.65. OBIA is found to be a suitable method for utilizing 3D remote sensing data for land cover mapping in an effort to replace manual delineation methods.

  5. Octree-based region growing for point cloud segmentation

    NASA Astrophysics Data System (ADS)

    Vo, Anh-Vu; Truong-Hong, Linh; Laefer, Debra F.; Bertolotto, Michela

    2015-06-01

    This paper introduces a novel, region-growing algorithm for the fast surface patch segmentation of three-dimensional point clouds of urban environments. The proposed algorithm is composed of two stages based on a coarse-to-fine concept. First, a region-growing step is performed on an octree-based voxelized representation of the input point cloud to extract major (coarse) segments. The output is then passed through a refinement process. As part of this, there are two competing factors related to voxel size selection. To balance the constraints, an adaptive octree is created in two stages. Empirical studies on real terrestrial and airborne laser scanning data for complex buildings and an urban setting show the proposed approach to be at least an order of magnitude faster when compared to a conventional region growing method and able to incorporate semantic-based feature criteria, while achieving precision, recall, and fitness scores of at least 75% and as much as 95%.
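
    A much-simplified, single-resolution version of voxel-based region growing is sketched below: points are binned into voxels, each voxel gets a mean normal, and 6-connected voxels with similar normals are grouped. The adaptive octree, refinement stage and semantic criteria of the paper are not reproduced; names and thresholds are illustrative.

```python
import numpy as np
from collections import defaultdict, deque

def voxel_region_growing(points, normals, voxel_size=0.5, angle_deg=15.0):
    """Coarse segmentation: grow regions over 6-connected voxels with similar mean normals."""
    keys = np.floor(points / voxel_size).astype(int)
    cells = defaultdict(list)
    for i, k in enumerate(map(tuple, keys)):
        cells[k].append(i)
    mean_n = {}
    for k, idx in cells.items():
        n = normals[idx].mean(axis=0)
        mean_n[k] = n / (np.linalg.norm(n) + 1e-12)
    cos_thr = np.cos(np.radians(angle_deg))
    neighbours = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    label, labels = 0, {}
    for seed in cells:
        if seed in labels:
            continue
        label += 1
        labels[seed] = label
        queue = deque([seed])
        while queue:
            v = queue.popleft()
            for off in neighbours:
                nb = (v[0] + off[0], v[1] + off[1], v[2] + off[2])
                if nb in cells and nb not in labels and \
                        abs(float(np.dot(mean_n[v], mean_n[nb]))) > cos_thr:
                    labels[nb] = label
                    queue.append(nb)
    return {i: labels[k] for k, idx in cells.items() for i in idx}   # point index -> segment id
```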

  6. Rapid Inspection of Pavement Markings Using Mobile LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Zhang, Haocheng; Li, Jonathan; Cheng, Ming; Wang, Cheng

    2016-06-01

    This study aims at building a robust semi-automated pavement marking extraction workflow based on the use of mobile LiDAR point clouds. The proposed workflow consists of three components: preprocessing, extraction, and classification. In preprocessing, the mobile LiDAR point clouds are converted into a radiometrically corrected intensity image of the road surface. Then the pavement markings are automatically extracted from the intensity image using a set of algorithms, including Otsu's thresholding, neighbor-counting filtering, and region growing. Finally, the extracted pavement markings are classified with geometric parameters using a manually defined decision tree. Case studies are conducted using the mobile LiDAR dataset acquired in Xiamen (Fujian, China) over different road environments by the RIEGL VMX-450 system. The results demonstrate that the proposed workflow and our software tool can achieve 93% in completeness, 95% in correctness, and 94% in F-score when using the Xiamen dataset.
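
    The extraction step centres on Otsu's thresholding of the intensity image plus a neighbour-counting cleanup. A hedged sketch of that idea using OpenCV is given below; the kernel size, count threshold and function name are assumptions rather than the authors' settings.

```python
import numpy as np
import cv2

def extract_marking_mask(intensity_image):
    """Binary marking mask from a radiometrically corrected intensity image of the road."""
    img = cv2.normalize(intensity_image, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)   # Otsu's threshold
    # neighbour-counting style cleanup: discard isolated bright pixels
    counts = cv2.boxFilter((mask > 0).astype(np.float32), -1, (5, 5), normalize=False)
    mask[counts < 6] = 0
    return mask
```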

  7. Mapping with Small UAS: A Point Cloud Accuracy Assessment

    NASA Astrophysics Data System (ADS)

    Toth, Charles; Jozkow, Grzegorz; Grejner-Brzezinska, Dorota

    2015-12-01

    Interest in using inexpensive Unmanned Aerial System (UAS) technology for topographic mapping has recently increased significantly. Small UAS platforms equipped with consumer-grade cameras can easily acquire high-resolution aerial imagery allowing for dense point cloud generation, followed by surface model creation and orthophoto production. In contrast to conventional airborne mapping systems, UAS has limited ground coverage due to low flying height and limited flying time, yet it offers an attractive alternative to high performance airborne systems, as the cost of the sensors and platform, and the flight logistics, is relatively low. In addition, UAS is better suited for small-area data acquisitions and for acquiring data in difficult-to-access areas, such as urban canyons or densely built-up environments. The main question with respect to the use of UAS is whether the inexpensive consumer sensors installed in UAS platforms can provide geospatial data quality comparable to that provided by conventional systems. This study aims at the performance evaluation of the current practice of UAS-based topographic mapping by reviewing the practical aspects of sensor configuration, georeferencing and point cloud generation, including comparisons between sensor types and processing tools. The main objective is to provide accuracy characterization and practical information for selecting and using UAS solutions in general mapping applications. The analysis is based on statistical evaluation as well as visual examination of experimental data acquired by a Bergen octocopter with three different image sensor configurations, including a GoPro HERO3+ Black Edition, a Nikon D800 DSLR and a Velodyne HDL-32. In addition, georeferencing data of varying quality were acquired and evaluated. The optical imagery was processed by using three commercial point cloud generation tools. Comparing point clouds created by active and passive sensors by using different quality sensors, and finally

  8. Effect of Clouds on Optical Imaging of the Space Shuttle During the Ascent Phase: A Statistical Analysis Based on a 3D Model

    NASA Technical Reports Server (NTRS)

    Short, David A.; Lane, Robert E., Jr.; Winters, Katherine A.; Madura, John T.

    2004-01-01

    Clouds are highly effective in obscuring optical images of the Space Shuttle taken during its ascent by ground-based and airborne tracking cameras. Because the imagery is used for quick-look and post-flight engineering analysis, the Columbia Accident Investigation Board (CAIB) recommended the return-to-flight effort include an upgrade of the imaging system to enable it to obtain at least three useful views of the Shuttle from lift-off to at least solid rocket booster (SRB) separation (NASA 2003). The lifetimes of individual cloud elements capable of obscuring optical views of the Shuttle are typically 20 minutes or less. Therefore, accurately observing and forecasting cloud obscuration over an extended network of cameras poses an unprecedented challenge for the current state of observational and modeling techniques. In addition, even the best numerical simulations based on real observations will never reach "truth." In order to quantify the risk that clouds would obscure optical imagery of the Shuttle, a 3D model to calculate probabilistic risk was developed. The model was used to estimate the ability of a network of optical imaging cameras to obtain at least N simultaneous views of the Shuttle from lift-off to SRB separation in the presence of an idealized, randomized cloud field.

  9. Rapid high-fidelity visualisation of multispectral 3D mapping

    NASA Astrophysics Data System (ADS)

    Tudor, Philip M.; Christy, Mark

    2011-06-01

    Mobile LIDAR scanning typically provides captured 3D data in the form of 3D 'Point Clouds'. Combined with colour imagery these data produce coloured point clouds or, if further processed, polygon-based 3D models. The use of point clouds is simple and rapid, but visualisation can appear ghostly and diffuse. Textured 3D models provide high fidelity visualisation, but their creation is time consuming, difficult to automate and can modify key terrain details. This paper describes techniques for the visualisation of fused multispectral 3D data that approach the visual fidelity of polygon-based models with the rapid turnaround and detail of 3D point clouds. The general approaches to data capture and data fusion are identified as well as the central underlying mathematical transforms, data management and graphics processing techniques used to support rapid, interactive visualisation of very large multispectral 3D datasets. Performance data with respect to real-world 3D mapping as well as illustrations of visualisation outputs are included.

  10. Determining Stand Parameters from Uas-Based Point Clouds

    NASA Astrophysics Data System (ADS)

    Yilmaz, V.; Serifoglu, C.; Gungor, O.

    2016-06-01

    In Turkey, forest management plans are produced by terrestrial surveying techniques for 10- or 20-year periods, which can be considered quite long for maintaining the sustainability of forests. For a successful forest management plan, it is necessary to collect accurate information about the stand parameters and store them in dynamic and robust databases. The position, number, height and closure of trees are among the most important stand parameters required for a forest management plan. Determining the position of each single tree is challenging in an area consisting of many interlocking trees. Hence, in this study, an object-based tree detection methodology has been developed in the MATLAB programming language to determine the position of each tree top in a highly closed area. The developed algorithm uses the Canopy Height Model (CHM), which is computed from the Digital Terrain Model (DTM) and Digital Surface Model (DSM) generated by using the point cloud extracted from the images taken from a UAS (Unmanned Aerial System). The heights of trees have been determined by using the CHM. The closure of the trees has been determined with the MATLAB script. The results show that the developed tree detection methodology detected more than 70% of the trees successfully. It can also be concluded that the stand parameters may be determined by using UAS-based point clouds, depending on the characteristics of the study area. In addition, determination of the stand parameters by using point clouds reduces the time needed to produce forest management plans.
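
    The core of the tree-top detection is a local-maxima search on the canopy height model CHM = DSM - DTM. A minimal raster-based sketch (in Python rather than the MATLAB used by the authors) is shown below; the window size and minimum height are illustrative parameters.

```python
import numpy as np
from scipy import ndimage

def detect_tree_tops(dsm, dtm, min_height=2.0, window=5):
    """Tree tops as local maxima of the canopy height model CHM = DSM - DTM."""
    chm = dsm - dtm
    is_local_max = ndimage.maximum_filter(chm, size=window) == chm
    tops = np.argwhere(is_local_max & (chm > min_height))      # (row, col) grid indices
    heights = chm[tops[:, 0], tops[:, 1]]
    return tops, heights
```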

  11. Micelle Mediated Trace Level Sulfide Quantification through Cloud Point Extraction

    PubMed Central

    Devaramani, Samrat; Malingappa, Pandurangappa

    2012-01-01

    A simple cloud point extraction protocol has been proposed for the quantification of sulfide at trace level. The method is based on the reduction of iron (III) to iron (II) by the sulfide and the subsequent complexation of the metal ion with nitroso-R salt in alkaline medium. The resulting green-colored complex was extracted through cloud point formation using a cationic surfactant, cetylpyridinium chloride, and the obtained surfactant phase was homogenized by ethanol before its absorbance measurement at 710 nm. The effects of reaction variables such as metal ion, ligand and surfactant concentrations and medium pH on the cloud point extraction of the metal-ligand complex have been optimized. The interference effect of the common anions and cations was studied. The proposed method has been successfully applied to quantify trace-level sulfide in leachate samples of a landfill and water samples from bore wells and ponds. The validity of the proposed method has been studied by spiking the samples with known quantities of sulfide as well as comparing with the results obtained by the standard method. PMID:22619597

  12. Roughness Estimation from Point Clouds - A Comparison of Terrestrial Laser Scanning and Image Matching by Unmanned Aerial Vehicle Acquisitions

    NASA Astrophysics Data System (ADS)

    Rutzinger, Martin; Bremer, Magnus; Ragg, Hansjörg

    2013-04-01

    Recently, terrestrial laser scanning (TLS) and matching of images acquired by unmanned aerial vehicles (UAV) have become operationally used for 3D geodata acquisition in geoscience applications. However, the two systems cover different application domains in terms of acquisition conditions and data properties, i.e. accuracy and line of sight. In this study we investigate the major differences between the two platforms for terrain roughness estimation. Terrain roughness is an important input for various applications such as morphometry studies, geomorphologic mapping, and natural process modeling (e.g. rockfall, avalanche, and hydraulic modeling). Data have been collected simultaneously by TLS using an Optech ILRIS3D and a rotary UAV (an octocopter from twins.nrn) for a 900 m² test site located in a riverbed in Tyrol, Austria (Judenbach, Mieming). The TLS point cloud has been acquired from three scan positions. These have been registered using the iterative closest point algorithm and a target-based referencing approach. For registration, geometric targets (spheres) with a diameter of 20 cm were used. These targets were measured with dGPS for absolute georeferencing. The TLS point cloud has an average point density of 19,000 pts/m², which represents a point spacing of about 5 mm. 15 images were acquired by the UAV at a height of 20 m using a calibrated camera with a focal length of 18.3 mm. A 3D point cloud containing RGB attributes was derived using the APERO/MICMAC software, by a direct georeferencing approach based on the aircraft IMU data. The point cloud is finally co-registered with the TLS data to guarantee an optimal preparation for performing the analysis. The UAV point cloud has an average point density of 17,500 pts/m², which represents a point spacing of 7.5 mm. After registration and georeferencing, the level of detail of the roughness representation in both point clouds has been compared considering elevation differences, roughness and representation of different grain

  13. Thick fibrous composite reinforcements behave as special second-gradient materials: three-point bending of 3D interlocks

    NASA Astrophysics Data System (ADS)

    Madeo, Angela; Ferretti, Manuel; dell'Isola, Francesco; Boisse, Philippe

    2015-08-01

    In this paper, we propose to use a second gradient, 3D orthotropic model for the characterization of the mechanical behavior of thick woven composite interlocks. Such second-gradient theory is seen to directly account for the out-of-plane bending rigidity of the yarns at the mesoscopic scale which is, in turn, related to the bending stiffness of the fibers composing the yarns themselves. The yarns' bending rigidity evidently affects the macroscopic bending of the material and this fact is revealed by presenting a three-point bending test on specimens of composite interlocks. These specimens differ one from the other for the different relative direction of the yarns with respect to the edges of the sample itself. Both types of specimens are independently seen to take advantage of a second-gradient modeling for the correct description of their macroscopic bending modes. The results presented in this paper are essential for the setting up of a correct continuum framework suitable for the mechanical characterization of composite interlocks. The few second-gradient parameters introduced by the present model are all seen to be associated with peculiar deformation modes of the mesostructure (bending of the yarns) and are determined by inverse approach. Although the presented results undoubtedly represent an important step toward the complete characterization of the mechanical behavior of fibrous composite reinforcements, more complex hyperelastic second-gradient constitutive laws must be conceived in order to account for the description of all possible mesostructure-induced deformation patterns.

  14. Iso-sciatic point: novel approach to distinguish shadowing 3-D mask effects from scanner aberrations in extreme ultraviolet lithography

    NASA Astrophysics Data System (ADS)

    Leunissen, Leonardus H. A.; Gronheid, Roel; Gao, Weimin

    2006-06-01

    Extreme ultraviolet lithography (EUVL) uses a reflective mask with a multilayer coating. Therefore, the illumination is an off-axis ring field system that is non-telecentric on the mask side. This non-zero angle of incidence combined with the three-dimensional mask topography results in the so-called "shadowing effect". The shadowing causes the printed CD to depend on the orientation as well as on the position in the slit and it will significantly influence the image formation [1,2]. In addition, simulations show that the Bossung curves are asymmetrical due to 3-D mask effects and their best focus depends on the shadowing angle [3]. Such tilts in the Bossung curves are usually associated with aberrations in the optical system. In this paper, we describe an approach in which both properties can be disentangled. Bossung curve simulations with varying effective angles of incidence (between 0 and 6 degrees) show that at discrete defocus offsets, the printed linewidth is independent of the incident angle (and thus independent of the shadowing effect), the so-called iso-sciatic (constant shadowing) point. For an ideal optical system this means that the size of a printed feature with a given mask-CD and orientation does not change through slit. With a suitable test structure it is possible to use this effect to distinguish between mask topography and imaging effects from aberrations through slit. Simulations for the following aberrations tested the approach: spherical, coma and astigmatism.

  15. Generating synthetic 3D density fluctuation data to verify two-point measurement of parallel correlation length

    NASA Astrophysics Data System (ADS)

    Kim, Jaewook; Ghim, Young-Chul; Nuclear Fusion and Plasma Lab Team

    2014-10-01

    A BES (beam emission spectroscopy) system and an MIR (Microwave Imaging Reflectometer) system installed in KSTAR measure 2D (radial and poloidal) density fluctuations at two different toroidal locations. This opens the possibility of measuring the parallel correlation length of ion-scale turbulence in KSTAR. Due to the lack of measurement points in the toroidal direction and a separation distance between the diagnostics that is shorter than the expected parallel correlation length, it is necessary to confirm whether a conventional statistical method, i.e., using a cross-correlation function, is valid for measuring the parallel correlation length. For this reason, we generated synthetic 3D density fluctuation data following a Gaussian random field in a toroidal coordinate system that mimics real density fluctuation data. We measure the correlation length of the synthetic data by fitting a Gaussian function to the cross-correlation function. We observe that there is disagreement between the measured and actual correlation lengths, and the degree of disagreement is a function of, at least, the correlation length, correlation time and advection velocity of the synthetic data. We identify the cause of the disagreement and propose an appropriate method to measure the correct correlation length.
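
    With only two toroidal measurement locations, a simple estimator assumes a Gaussian correlation function and inverts the measured zero-lag cross-correlation for the correlation length, as sketched below; this is a schematic illustration of the two-point idea, not the authors' analysis code.

```python
import numpy as np

def parallel_corr_length(sig_a, sig_b, separation):
    """Two-point correlation length, assuming C(dz) = exp(-dz^2 / L^2)."""
    a = (sig_a - sig_a.mean()) / sig_a.std()
    b = (sig_b - sig_b.mean()) / sig_b.std()
    c = float(np.mean(a * b))                    # zero-lag cross-correlation coefficient
    c = np.clip(c, 1e-6, 1.0 - 1e-9)             # keep the logarithm well defined
    return separation / np.sqrt(-np.log(c))
```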

  16. a Data Driven Method for Building Reconstruction from LiDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Sajadian, M.; Arefi, H.

    2014-10-01

    Airborne laser scanning, commonly referred to as LiDAR, is a superior technology for three-dimensional data acquisition from Earth's surface with high speed and density. Building reconstruction is one of the main applications of LiDAR and is the one considered in this study. For a 3D reconstruction of the buildings, the building points should first be separated from other points such as ground and vegetation. In this paper, a multi-agent strategy has been proposed for simultaneous extraction and segmentation of buildings from LiDAR point clouds. Height values, number of returned pulses, length of triangles, direction of normal vectors, and area are the five criteria utilized in this step. Next, the building edge points are detected using a new method named "Grid Erosion". A RANSAC-based technique has been employed for edge line extraction. Regularization constraints are applied to achieve the final lines. Finally, by modelling the roofs and walls, the 3D building model is reconstructed. The results indicate that the proposed method could successfully extract the buildings from LiDAR data and generate the building models automatically. A qualitative and quantitative assessment of the proposed method is then provided.
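
    The edge-line extraction relies on a RANSAC-style search for dominant lines among the detected edge points. A minimal 2-D RANSAC line fit is sketched below; the iteration count, distance threshold and function name are assumptions, not the paper's settings.

```python
import numpy as np

def ransac_line(points_2d, n_iter=500, dist_thr=0.1, seed=0):
    """Dominant 2-D line through edge points; returns (point, direction, inlier mask)."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points_2d), dtype=bool)
    best_p = best_d = None
    for _ in range(n_iter):
        i, j = rng.choice(len(points_2d), size=2, replace=False)
        p, q = points_2d[i], points_2d[j]
        d = q - p
        if np.linalg.norm(d) < 1e-9:                 # degenerate sample, skip
            continue
        d = d / np.linalg.norm(d)
        r = points_2d - p
        dist = np.abs(r[:, 0] * d[1] - r[:, 1] * d[0])   # perpendicular distance to the line
        inliers = dist < dist_thr
        if inliers.sum() > best.sum():
            best, best_p, best_d = inliers, p, d
    return best_p, best_d, best
```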

  17. Automatic Single Tree Detection in Plantations using UAV-based Photogrammetric Point clouds

    NASA Astrophysics Data System (ADS)

    Kattenborn, T.; Sperlich, M.; Bataua, K.; Koch, B.

    2014-08-01

    For reasons of documentation, management and certification, there is high interest in efficient inventories of palm plantations at the single-plant level. Recent developments in unmanned aerial vehicle (UAV) technology facilitate spatially and temporally flexible acquisition of high-resolution 3D data. Common single tree detection approaches are based on Very High Resolution (VHR) satellite or Airborne Laser Scanning (ALS) data. However, VHR data is often limited by clouds and does commonly not allow for height measurements. VHR and in particular ALS data are characterized by relatively high acquisition costs. Sperlich et al. (2013) already demonstrated the high potential of UAV-based photogrammetric point clouds for single tree detection using pouring algorithms. This approach was adjusted and improved for an application on a palm plantation. The 9.4 ha test site on Tarawa, Kiribati, comprised densely scattered growing palms as well as abundant undergrowth and trees. Using a standard consumer-grade camera mounted on an octocopter, two flight campaigns at 70 m and 100 m altitude were performed to evaluate the effect of Ground Sampling Distance (GSD) and image overlap. To avoid commission errors and improve the terrain interpolation, the point clouds were classified based on the geometric characteristics of the classes, i.e. (1) palm, (2) other vegetation, and (3) ground. The mapping accuracy amounts to 86.1% for the entire study area and 98.2% for densely growing palm stands. We conclude that this flexible and automatic approach has high potential for operational use.

  18. Initial Self-Consistent 3D Electron-Cloud Simulations of the LHC Beam with the Code WARP+POSINST

    SciTech Connect

    Vay, J; Furman, M A; Cohen, R H; Friedman, A; Grote, D P

    2005-10-11

    We present initial results for the self-consistent beam-cloud dynamics simulations for a sample LHC beam, using a newly developed set of modeling capability based on a merge [1] of the three-dimensional parallel Particle-In-Cell (PIC) accelerator code WARP [2] and the electron-cloud code POSINST [3]. Although the storage ring model we use as a test bed to contain the beam is much simpler and shorter than the LHC, its lattice elements are realistically modeled, as is the beam and the electron cloud dynamics. The simulated mechanisms for generation and absorption of the electrons at the walls are based on previously validated models available in POSINST [3, 4].

  19. Gridless, pattern-driven point cloud completion and extension

    NASA Astrophysics Data System (ADS)

    Gravey, Mathieu; Mariethoz, Gregoire

    2016-04-01

    While satellites offer Earth observation with wide coverage, other remote sensing techniques such as terrestrial LiDAR can acquire very high-resolution data over an area that is limited in extent and often discontinuous due to shadow effects. Here we propose a numerical approach to merge these two types of information, thereby reconstructing high-resolution data over a continuous large area. It is based on a pattern matching process that completes the areas where only low-resolution data is available, using bootstrapped high-resolution patterns. Currently, the most common approach to pattern matching is to interpolate the point data onto a grid. While this approach is computationally efficient, it presents major drawbacks for point cloud processing because a significant part of the information is lost in the point-to-grid resampling, and a prohibitive amount of memory is needed to store large grids. To address these issues, we propose a gridless method that compares point cloud subsets without the need to use a grid. On-the-fly interpolation involves a heavy computational load, which is met by using a highly optimized GPU implementation and a hierarchical pattern searching strategy. The method is illustrated using data from the Val d'Arolla, Swiss Alps, where high-resolution terrestrial LiDAR data are fused with lower-resolution Landsat and WorldView-3 acquisitions, such that the density of points is homogenized (data completion) and extended to a larger area (data extension).

  20. Exploring point-cloud features from partial body views for gender classification

    NASA Astrophysics Data System (ADS)

    Fouts, Aaron; McCoppin, Ryan; Rizki, Mateen; Tamburino, Louis; Mendoza-Schrock, Olga

    2012-06-01

    In this paper we extend a previous exploration of histogram features extracted from 3D point cloud images of human subjects for gender discrimination. Feature extraction used a collection of concentric cylinders to define volumes for counting 3D points. The histogram features are characterized by a rotational axis and a selected set of volumes derived from the concentric cylinders. The point cloud images are drawn from the CAESAR anthropometric database provided by the Air Force Research Laboratory (AFRL) Human Effectiveness Directorate and SAE International. This database contains approximately 4400 high resolution LIDAR whole body scans of carefully posed human subjects. Success in our previous investigation was based on extracting features from full body coverage, which required integration of multiple camera images. With full body coverage, the central vertical body axis and orientation are readily obtainable; however, this is not the case with a one-camera view providing less than one half body coverage. Assuming that the subjects are upright, we need to determine or estimate the position of the vertical axis and the orientation of the body about this axis relative to the camera. In past experiments the vertical axis was located through the center of mass of torso points projected on the ground plane and the body orientation derived using principal component analysis. In a natural extension of our previous work to partial body views, the absence of rotational invariance about the cylindrical axis greatly increases the difficulty of gender classification. Even the problem of estimating the axis is no longer simple. We describe some simple feasibility experiments that use partial image histograms. Here, the cylindrical axis is assumed to be known. We also discuss experiments with full body images that explore the sensitivity of classification accuracy relative to displacements of the cylindrical axis. Our initial results provide the basis for further
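
    The concentric-cylinder histogram features can be illustrated with the short sketch below, which bins points by radial shell and height along an assumed body axis; the shell radii, bin counts and function name are illustrative, not the feature definition used in the paper.

```python
import numpy as np

def cylinder_histogram(points, axis_point, axis_dir, radii, n_height_bins=20):
    """Counts of points in concentric cylindrical shells and height bins around a body axis."""
    d = axis_dir / np.linalg.norm(axis_dir)
    rel = points - axis_point
    h = rel @ d                                          # signed height along the axis
    radial = np.linalg.norm(rel - np.outer(h, d), axis=1)
    shell = np.digitize(radial, radii)                   # which concentric shell
    h_edges = np.linspace(h.min(), h.max(), n_height_bins)
    h_bin = np.digitize(h, h_edges)                      # which height bin
    hist = np.zeros((len(radii) + 1, n_height_bins + 1))
    np.add.at(hist, (shell, h_bin), 1)
    return hist
```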

  1. Terrestrial and unmanned aerial system imagery for deriving photogrammetric three-dimensional point clouds and volume models of mass wasting sites

    NASA Astrophysics Data System (ADS)

    Hämmerle, Martin; Schütt, Fabian; Höfle, Bernhard

    2016-04-01

    Three-dimensional (3-D) geodata of mass wasting sites are important to model surfaces, volumes, and their changes over time. With a photogrammetric approach commonly known as structure from motion, 3-D point clouds can be derived from image collections in a straightforward way. The quality of point clouds covering a quarry dump derived from terrestrial and aerial imagery is compared and assessed. A comprehensive set of quality indicators is calculated and compared to surveyed reference data and to a terrestrial LiDAR point cloud. The examined indicators are completeness of coverage, point density, vertical accuracy, multiscale point cloud distance, scaling accuracy, and dump volume. It is found that the photogrammetric datasets generally represent the examined dump well with, for example, an area coverage of up to 90% and 100% in the case of terrestrial and aerial imagery, respectively, a maximum scaling difference of 0.62%, and volume estimations reaching up to 100% of the LiDAR reference. Combining the advantages of 3-D geodata derived from terrestrial (high detail, accurate volume calculation even with a small number of input images) and aerial images (high coverage) can be a promising method to further improve the quality of 3-D geodata derived with low-cost approaches.

  2. Studies of 3D-cloud optical depth from small to very large values, and of the radiation and remote sensing impacts of larger-drop clustering

    SciTech Connect

    Wiscombe, Warren; Marshak, Alexander; Knyazikhin, Yuri; Chiu, Christine

    2007-05-04

    We have basically completed all the goals stated in the previous proposal and published or submitted journal papers thereon, the only exception being First-Principles Monte Carlo which has taken more time than expected. We finally finished the comprehensive book on 3D cloud radiative transfer (edited by Marshak and Davis and published by Springer), with many contributions by ARM scientists; this book was highlighted in the 2005 ARM Annual Report. We have also completed (for now) our pioneering work on new models of cloud drop clustering based on ARM aircraft FSSP data, with applications both to radiative transfer and to rainfall. This clustering work was highlighted in the FY07 “Our Changing Planet” (annual report of the US Climate Change Science Program). Our group published 22 papers, one book, and 5 chapters in that book, during this proposal period. All are listed at the end of this section. Below, we give brief highlights of some of those papers.

  3. a Robust Registration Algorithm for Point Clouds from Uav Images for Change Detection

    NASA Astrophysics Data System (ADS)

    Al-Rawabdeh, A.; Al-Gurrani, H.; Al-Durgham, K.; Detchev, I.; He, F.; El-Sheimy, N.; Habib, A.

    2016-06-01

    Landslides are among the major threats to urban landscape and manmade infrastructure. They often cause economic losses, property damages, and loss of lives. Temporal monitoring data of landslides from different epochs enables the evaluation of landslide progression. Alignment of overlapping surfaces from two or more epochs is crucial for the proper analysis of landslide dynamics. The traditional methods for point-cloud-based landslide monitoring rely on using a variation of the Iterative Closest Point (ICP) registration procedure to align any reconstructed surfaces from different epochs to a common reference frame. However, sometimes the ICP-based registration can fail or may not provide sufficient accuracy. For example, point clouds from different epochs might converge to local minima due to lack of geometrical variability within the data. Also, manual interaction is required to exclude any non-stable areas from the registration process. In this paper, a robust image-based registration method is introduced for the simultaneous evaluation of all registration parameters. This includes the Interior Orientation Parameters (IOPs) of the camera and the Exterior Orientation Parameters (EOPs) of the involved images from all available observation epochs via a bundle block adjustment with self-calibration. Next, a semi-global dense matching technique is implemented to generate dense 3D point clouds for each epoch using the images captured in a particular epoch separately. The normal distances between any two consecutive point clouds can then be readily computed, because the point clouds are already effectively co-registered. A low-cost DJI Phantom II Unmanned Aerial Vehicle (UAV) was customised and used in this research for temporal data collection over an active soil creep area in Lethbridge, Alberta, Canada. The customisation included adding a GPS logger and a Large-Field-Of-View (LFOV) action camera which facilitated capturing high-resolution geo-tagged images in two epochs
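
    The normal-distance computation between co-registered epochs can be illustrated with a simplified, single-scale stand-in (not the authors' implementation): fit a local plane by PCA around each point of the first epoch and measure the offset of the nearby second-epoch points along that plane normal. The search radius and the synthetic data are illustrative:

```python
# Signed "normal distances" between two co-registered point cloud epochs.
import numpy as np
from scipy.spatial import cKDTree

def normal_distances(epoch1, epoch2, radius=0.5):
    t1, t2 = cKDTree(epoch1), cKDTree(epoch2)
    out = np.full(len(epoch1), np.nan)
    for i, p in enumerate(epoch1):
        nb1 = epoch1[t1.query_ball_point(p, radius)]
        nb2 = epoch2[t2.query_ball_point(p, radius)]
        if len(nb1) < 3 or len(nb2) < 1:
            continue
        # plane normal = eigenvector of the smallest eigenvalue of the local covariance
        cov = np.cov((nb1 - nb1.mean(axis=0)).T)
        normal = np.linalg.eigh(cov)[1][:, 0]
        out[i] = np.dot(nb2.mean(axis=0) - nb1.mean(axis=0), normal)
    return out

rng = np.random.default_rng(2)
a = np.c_[rng.uniform(0, 10, (2000, 2)), np.zeros(2000)]
b = a + [0, 0, 0.1]                                   # epoch 2 shifted 10 cm along the normal
print(np.nanmean(np.abs(normal_distances(a, b))))     # ~0.1
```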

  4. Object-Based Point Cloud Analysis of Full-Waveform Airborne Laser Scanning Data for Urban Vegetation Classification

    PubMed Central

    Rutzinger, Martin; Höfle, Bernhard; Hollaus, Markus; Pfeifer, Norbert

    2008-01-01

    Airborne laser scanning (ALS) is a remote sensing technique well-suited for 3D vegetation mapping and structure characterization because the emitted laser pulses are able to penetrate small gaps in the vegetation canopy. The backscattered echoes from the foliage, woody vegetation, the terrain, and other objects are detected, leading to a cloud of points. Higher echo densities (>20 echoes/m2) and additional classification variables from full-waveform (FWF) ALS data, namely echo amplitude, echo width and information on multiple echoes from one shot, offer new possibilities in classifying the ALS point cloud. Currently FWF sensor information is hardly used for classification purposes. This contribution presents an object-based point cloud analysis (OBPA) approach, combining segmentation and classification of the 3D FWF ALS points designed to detect tall vegetation in urban environments. The definition of tall vegetation includes trees and shrubs, but excludes grassland and herbage. In the applied procedure FWF ALS echoes are segmented by a seeded region growing procedure. All echoes, sorted in descending order of their surface roughness, are used as seed points. Segments are grown based on echo width homogeneity. Next, segment statistics (mean, standard deviation, and coefficient of variation) are calculated by aggregating echo features such as amplitude and surface roughness. For classification a rule base is derived automatically from a training area using a statistical classification tree. To demonstrate our method we present data of three sites with around 500,000 echoes each. The accuracy of the classified vegetation segments is evaluated for two independent validation sites. In a point-wise error assessment, where the classification is compared with manually classified 3D points, completeness and correctness better than 90% are reached for the validation sites. In comparison to many other algorithms the proposed 3D point classification works on the original measurements

  5. Density of point clouds in mobile laser scanning. (Polish Title: Gestosc chmury punktow pochodzacej z mobilnego skanowania laserowego)

    NASA Astrophysics Data System (ADS)

    Warchoł, A.

    2015-12-01

    The LiDAR (Light Detection And Ranging) technology is becoming a more and more popular method to collect spatial information. The acquisition of 3D data by means of one or several laser scanners mounted on a mobile platform (car) could quickly provide large volumes of dense data with centimeter-level accuracy. This is, therefore, the ideal solution to obtain information about objects with elongated shapes (corridors), and their surroundings. Point clouds used by specific applications must fulfill certain quality criteria, such as quantitative and qualitative indicators (i.e. precision, accuracy, density, completeness). Usually, the client fixes some parameter values that must be achieved. In terms of the precision, this parameter is well described, whereas in the case of point cloud density the discussion is still open. Due to the specificities of the MLS (Mobile Laser Scanning), the solution from ALS (Airborne Laser Scanning) cannot be directly applied. Hence, the density of the final point clouds, calculated as the number of points divided by "flat" surface area, is inappropriate. We present in this article three different ways of determining and interpreting point cloud density on three different test fields. The first method divides the number of points by the "flat" area, the second by the "three-dimensional" area, and the last one refers to a voxel approach. The most reliable method seems to be the voxel method, which in addition to the local density values also presents their spatial distribution.
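
    A minimal sketch of the voxel-based density measure discussed above: count points per occupied voxel instead of dividing the total count by a planimetric area. The voxel edge length and the synthetic cloud are illustrative:

```python
# Local point density per occupied voxel (points per cubic metre).
import numpy as np

def voxel_density(points, voxel=0.5):
    idx = np.floor(points / voxel).astype(np.int64)
    _, counts = np.unique(idx, axis=0, return_counts=True)
    return counts / voxel**3              # one density value per occupied voxel

rng = np.random.default_rng(3)
cloud = rng.uniform(0, 20, size=(100000, 3))
d = voxel_density(cloud, voxel=1.0)
print(d.mean(), d.std())                  # local density values and their spread
```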

  6. A Robust Linear Feature-Based Procedure for Automated Registration of Point Clouds

    PubMed Central

    Poreba, Martyna; Goulette, François

    2015-01-01

    With the variety of measurement techniques available on the market today, fusing multi-source complementary information into one dataset is a matter of great interest. Target-based, point-based and feature-based methods are some of the approaches used to place data in a common reference frame by estimating its corresponding transformation parameters. This paper proposes a new linear feature-based method to perform accurate registration of point clouds, either in 2D or 3D. A two-step fast algorithm called Robust Line Matching and Registration (RLMR), which combines coarse and fine registration, was developed. The initial estimate is found from a triplet of conjugate line pairs, selected by a RANSAC algorithm. Then, this transformation is refined using an iterative optimization algorithm. Conjugates of linear features are identified with respect to a similarity metric representing a line-to-line distance. The efficiency and robustness to noise of the proposed method are evaluated and discussed. The algorithm is valid and ensures valuable results when pre-aligned point clouds with the same scale are used. The studies show that the matching accuracy is at least 99.5%. The transformation parameters are also estimated correctly. The error in rotation is better than 2.8% full scale, while the translation error is less than 12.7%. PMID:25594589

  7. Reconstruction of forest geometries from terrestrial laser scanning point clouds for canopy radiative transfer modelling

    NASA Astrophysics Data System (ADS)

    Bremer, Magnus; Schmidtner, Korbinian; Rutzinger, Martin

    2015-04-01

    The architecture of forest canopies is a key parameter for forest ecological issues helping to model the variability of wood biomass and foliage in space and time. In order to understand the nature of subpixel effects of optical space-borne sensors with coarse spatial resolution, hypothetical 3D canopy models are widely used for the simulation of radiative transfer in forests. Thereby, radiation is traced through the atmosphere and canopy geometries until it reaches the optical sensor. For a realistic simulation scene we decompose terrestrial laser scanning point cloud data of leaf-off larch forest plots in the Austrian Alps and reconstruct detailed model ready input data for radiative transfer simulations. The point clouds are pre-classified into primitive classes using Principal Component Analysis (PCA) on scale-adapted radius neighbourhoods. Elongated point structures are extracted as tree trunks. The tree trunks are used as seeds for a Dijkstra-growing procedure, in order to obtain single tree segmentation in the interlinked canopies. For the optimized reconstruction of branching architectures as vector models, point cloud skeletonisation is used in combination with an iterative Dijkstra-growing and by applying distance constraints. This allows conducting a hierarchical reconstruction preferring the tree trunk and higher order branches and avoiding over-skeletonization effects. Based on the reconstructed branching architectures, larch needles are modelled based on the hierarchical level of branches and the geometrical openness of the canopy. For radiative transfer simulations, branch architectures are used as mesh geometries representing branches as cylindrical pipes. Needles are either used as meshes or as voxel-turbids. The presented workflow allows an automatic classification and single tree segmentation in interlinked canopies. The iterative Dijkstra-growing using distance constraints generated realistic reconstruction results. As the mesh representation
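
    The PCA-based pre-classification step can be sketched with the standard eigenvalue features (linearity, planarity, scatter) computed in a fixed-radius neighbourhood; this is a simplified stand-in for the paper's scale-adaptive scheme, and the radius, threshold and data are illustrative:

```python
# Eigenvalue-based dimensionality features for each point of a cloud.
import numpy as np
from scipy.spatial import cKDTree

def dimensionality_features(points, radius=0.3):
    tree = cKDTree(points)
    feats = np.zeros((len(points), 3))
    for i, p in enumerate(points):
        nb = points[tree.query_ball_point(p, radius)]
        if len(nb) < 4:
            continue
        lam = np.sort(np.linalg.eigvalsh(np.cov(nb.T)))[::-1]      # l1 >= l2 >= l3
        l1, l2, l3 = np.maximum(lam, 1e-12)
        feats[i] = [(l1 - l2) / l1, (l2 - l3) / l1, l3 / l1]       # linearity, planarity, scatter
    return feats

# a vertical, trunk-like string of points: strongly linear neighbourhoods
pts = np.c_[np.zeros((500, 2)), np.linspace(0, 5, 500)] + np.random.default_rng(13).normal(0, 0.01, (500, 3))
f = dimensionality_features(pts)
print(f[:, 0].mean())        # close to 1; e.g. feats[:, 0] > 0.8 could flag trunk candidates
```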

  8. Electronic and magnetic structure of 3d-transition-metal point defects in silicon calculated from first principles

    NASA Astrophysics Data System (ADS)

    Beeler, F.; Andersen, O. K.; Scheffler, M.

    1990-01-01

    We describe spin-unrestricted self-consistent linear muffin-tin-orbital (LMTO) Green-function calculations for Sc, Ti, V, Cr, Mn, Fe, Co, Ni, and Cu transition-metal impurities in crystalline silicon. Both defect sites of tetrahedral symmetry are considered. All possible charge states with their spin multiplicities, magnetization densities, and energy levels are discussed and explained with a simple physical picture. The early transition-metal interstitial and late transition-metal substitutional 3d ions are found to have low spin. This is in conflict with the generally accepted crystal-field model of Ludwig and Woodbury, but not with available experimental data. For the interstitial 3d ions, the calculated deep donor and acceptor levels reproduce all experimentally observed transitions. For substitutional 3d ions, a large number of predictions is offered to be tested by future experimental studies.

  9. Hierarchical Higher Order Crf for the Classification of Airborne LIDAR Point Clouds in Urban Areas

    NASA Astrophysics Data System (ADS)

    Niemeyer, J.; Rottensteiner, F.; Soergel, U.; Heipke, C.

    2016-06-01

    We propose a novel hierarchical approach for the classification of airborne 3D lidar points. Spatial and semantic context is incorporated via a two-layer Conditional Random Field (CRF). The first layer operates on a point level and utilises higher order cliques. Segments are generated from the labelling obtained in this way. They are the entities of the second layer, which incorporates larger scale context. The classification result of the segments is introduced as an energy term for the next iteration of the point-based layer. This framework iterates and mutually propagates context to improve the classification results. Potentially wrong decisions can be revised at later stages. The output is a labelled point cloud as well as segments roughly corresponding to object instances. Moreover, we present two new contextual features for the segment classification: the distance and the orientation of a segment with respect to the closest road. It is shown that the classification benefits from these features. In our experiments the hierarchical framework improves the overall accuracies by 2.3% on a point-based level and by 3.0% on a segment-based level, respectively, compared to a purely point-based classification.

  10. Augmented reality system using lidar point cloud data for displaying dimensional information of objects on mobile phones

    NASA Astrophysics Data System (ADS)

    Gupta, S.; Lohani, B.

    2014-05-01

    Mobile augmented reality system is the next generation technology to visualise 3D real world intelligently. The technology is expanding at a fast pace to upgrade the status of a smart phone to an intelligent device. The research problem identified and presented in the current work is to view actual dimensions of various objects that are captured by a smart phone in real time. The methodology proposed first establishes correspondence between the LiDAR point cloud, which is stored on a server, and the image that is captured by a mobile phone. This correspondence is established using the exterior and interior orientation parameters of the mobile camera and the coordinates of LiDAR data points which lie in the viewshed of the mobile camera. A pseudo intensity image is generated using LiDAR points and their intensity. The mobile image and the pseudo intensity image are then registered using the SIFT image registration method, thereby generating a pipeline to locate a point in the point cloud corresponding to a point (pixel) on the mobile image. The second part of the method uses the point cloud data to compute dimensional information corresponding to the pairs of points selected on the mobile image and renders the dimensions on top of the image. This paper describes all steps of the proposed method. The paper uses an experimental setup to mimic the mobile phone and server system and presents some initial but encouraging results.
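
    Locating which LiDAR points fall in the camera's viewshed rests on a standard pinhole projection with the exterior and interior orientation parameters. A hedged sketch with made-up parameter values (not the paper's calibration):

```python
# Project geo-referenced LiDAR points into an image frame given EOPs and IOPs.
import numpy as np

def project_points(lidar_xyz, R, C, f_px, cx, cy):
    cam = (R @ (lidar_xyz - C).T).T                 # world -> camera coordinates
    in_front = cam[:, 2] > 0                        # keep points in front of the camera
    u = f_px * cam[in_front, 0] / cam[in_front, 2] + cx
    v = f_px * cam[in_front, 1] / cam[in_front, 2] + cy
    return np.c_[u, v], in_front

R = np.eye(3)                                       # camera looking along +Z, no rotation
C = np.array([0.0, 0.0, -10.0])                     # camera 10 m behind the scene
pts = np.random.default_rng(4).uniform(-2, 2, (1000, 3))
uv, mask = project_points(pts, R, C, f_px=1500.0, cx=960.0, cy=540.0)
print(uv[:3])
```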

  11. Time Resolved 3-D Mapping of Atmospheric Aerosols and Clouds During the Recent ARM Water Vapor IOP

    NASA Technical Reports Server (NTRS)

    Schwemmer, Geary; Miller, David; Wilkerson, Thomas; Andrus, Ionio; Starr, David OC. (Technical Monitor)

    2001-01-01

    The HARLIE lidar was deployed at the ARM SGP site in north central Oklahoma and recorded over 100 hours of data on 16 days between 17 September and 6 October 2000 during the recent Water Vapor Intensive Operating Period (IOP). Placed in a ground-based trailer for upward looking scanning measurements of clouds and aerosols, HARLIE provided a unique record of time-resolved atmospheric backscatter at 1 micron wavelength. The conical scanning lidar images atmospheric backscatter along the surface of an inverted 90 degree (full angle) cone up to an altitude of 20 km. 360 degree scans having spatial resolutions of 20 meters in the vertical and 1 degree in azimuth were obtained every 36 seconds. Various boundary layer and cloud parameters are derived from the lidar data, as well as atmospheric wind vectors where there is sufficiently resolved structure that can be traced moving through the surface described by the scanning laser beam. Comparison of HARLIE measured winds with radiosonde measured winds validates the accuracy of this new technique for remotely measuring atmospheric winds without Doppler information.

  12. Simple computation of reaction–diffusion processes on point clouds

    PubMed Central

    Macdonald, Colin B.; Merriman, Barry; Ruuth, Steven J.

    2013-01-01

    The study of reaction–diffusion processes is much more complicated on general curved surfaces than on standard Cartesian coordinate spaces. Here we show how to formulate and solve systems of reaction–diffusion equations on surfaces in an extremely simple way, using only the standard Cartesian form of differential operators, and a discrete unorganized point set to represent the surface. Our method decouples surface geometry from the underlying differential operators. As a consequence, it becomes possible to formulate and solve rather general reaction–diffusion equations on general surfaces without having to consider the complexities of differential geometry or sophisticated numerical analysis. To illustrate the generality of the method, computations for surface diffusion, pattern formation, excitable media, and bulk-surface coupling are provided for a variety of complex point cloud surfaces. PMID:23690616

  13. Bayesian Multiscale Modeling of Closed Curves in Point Clouds

    PubMed Central

    Gu, Kelvin; Pati, Debdeep; Dunson, David B.

    2014-01-01

    Modeling object boundaries based on image or point cloud data is frequently necessary in medical and scientific applications ranging from detecting tumor contours for targeted radiation therapy, to the classification of organisms based on their structural information. In low-contrast images or sparse and noisy point clouds, there is often insufficient data to recover local segments of the boundary in isolation. Thus, it becomes critical to model the entire boundary in the form of a closed curve. To achieve this, we develop a Bayesian hierarchical model that expresses highly diverse 2D objects in the form of closed curves. The model is based on a novel multiscale deformation process. By relating multiple objects through a hierarchical formulation, we can successfully recover missing boundaries by borrowing structural information from similar objects at the appropriate scale. Furthermore, the model’s latent parameters help interpret the population, indicating dimensions of significant structural variability and also specifying a ‘central curve’ that summarizes the collection. Theoretical properties of our prior are studied in specific cases and efficient Markov chain Monte Carlo methods are developed, evaluated through simulation examples and applied to panorex teeth images for modeling teeth contours and also to a brain tumor contour detection problem. PMID:25544786

  14. Towards Object Driven Floor Plan Extraction from Laser Point Cloud

    NASA Astrophysics Data System (ADS)

    Babacan, K.; Jung, J.; Wichmann, A.; Jahromi, B. A.; Shahbazi, M.; Sohn, G.; Kada, M.

    2016-06-01

    During the last years, the demand for indoor models has increased for various purposes. As a provisional step to proceed towards higher dimensional indoor models, powerful and flexible floor plans can be utilised. Therefore, several methods have been proposed that provide automatically generated floor plans from laser point clouds. The prevailing methodology seeks to attain semantic enhancement of a model (e.g. the identification and labelling of its components) built upon already reconstructed (a priori) geometry. In contrast, this paper demonstrates preliminary research on the possibility to directly incorporate semantic knowledge, which is itself derived from the raw data during the extraction, into the geometric modelling process. In this regard, we propose a new method to automatically extract floor plans from raw point clouds. It is based on a hierarchical space partitioning of the data, integrated with primitive selection actuated by object detection. First, planar primitives corresponding to vertical architectural structures are extracted using M-estimator SAmple and Consensus (MSAC). The set of the resulting line segments are refined by a selection process through a novel door detection algorithm, considering optimization of prior information and fitness to the data. The selected lines are used as hyperlines to partition the space into enclosed areas. Finally, a floor plan is extracted from these partitions by Minimum Description Length (MDL) hypothesis ranking. The algorithm is applied on a real mobile laser scanner dataset and the results are evaluated both in terms of door detection and consecutive floor plan extraction.
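
    The planar-primitive extraction step can be illustrated with a basic RANSAC plane fit; the paper uses MSAC, which scores hypotheses by residuals rather than by counting inliers. Thresholds, iteration count and the synthetic data below are illustrative:

```python
# RANSAC-style extraction of one dominant plane from a point cloud.
import numpy as np

def ransac_plane(points, n_iter=500, tol=0.03, seed=5):
    rng = np.random.default_rng(seed)
    best_inliers, best_model = np.array([], dtype=int), None
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:                # degenerate (collinear) sample
            continue
        n = n / np.linalg.norm(n)
        dist = np.abs((points - p0) @ n)
        inliers = np.flatnonzero(dist < tol)
        if len(inliers) > len(best_inliers):
            best_inliers, best_model = inliers, (n, p0)
    return best_model, best_inliers

plane = np.c_[np.random.default_rng(14).uniform(0, 5, (2000, 2)), np.zeros(2000)]
noise = np.random.default_rng(15).uniform(0, 5, (500, 3))
model, inliers = ransac_plane(np.vstack([plane, noise]))
print(model[0], len(inliers))          # normal close to +-[0, 0, 1], ~2000 inliers
```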

  15. Automatic registration of large-scale urban scene point clouds based on semantic feature points

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Dong, Zhen; Liang, Fuxun; Liu, Yuan

    2016-03-01

    Point clouds collected by terrestrial laser scanning (TLS) from large-scale urban scenes contain a wide variety of objects (buildings, cars, pole-like objects, and others) with symmetric and incomplete structures, and relatively low-textured surfaces, all of which pose great challenges for automatic registration between scans. To address the challenges, this paper proposes a registration method to provide marker-free and multi-view registration based on the semantic feature points extracted. First, the method detects the semantic feature points within a detection scheme, which includes point cloud segmentation, vertical feature lines extraction and semantic information calculation and finally takes the intersections of these lines with the ground as the semantic feature points. Second, the proposed method matches the semantic feature points using geometrical constraints (3-point scheme) as well as semantic information (category and direction), resulting in exhaustive pairwise registration between scans. Finally, the proposed method implements multi-view registration by constructing a minimum spanning tree of the fully connected graph derived from exhaustive pairwise registration. Experiments have demonstrated that the proposed method performs well in various urban environments and indoor scenes with the accuracy at the centimeter level and improves the efficiency, robustness, and accuracy of registration in comparison with the feature plane-based methods.
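
    The final multi-view step reduces to a minimum spanning tree over the fully connected graph of pairwise registrations. A small sketch with a made-up edge-weight matrix (for example, pairwise registration errors between four scans):

```python
# Keep only the cheapest chain of pairwise registrations via a minimum spanning tree.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

pairwise_error = np.array([[0.0, 0.02, 0.10, 0.30],
                           [0.02, 0.0, 0.04, 0.25],
                           [0.10, 0.04, 0.0, 0.03],
                           [0.30, 0.25, 0.03, 0.0]])
mst = minimum_spanning_tree(pairwise_error).toarray()
edges = np.argwhere(mst > 0)           # scan pairs whose transforms are actually chained
print(edges)
```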

  16. Automatic Method for Building Indoor Boundary Models from Dense Point Clouds Collected by Laser Scanners

    PubMed Central

    Valero, Enrique; Adán, Antonio; Cerrada, Carlos

    2012-01-01

    In this paper we present a method that automatically yields Boundary Representation Models (B-rep) for indoors after processing dense point clouds collected by laser scanners from key locations through an existing facility. Our objective is particularly focused on providing single models which contain the shape, location and relationship of primitive structural elements of inhabited scenarios such as walls, ceilings and floors. We propose a discretization of the space in order to accurately segment the 3D data and generate complete B-rep models of indoors in which faces, edges and vertices are coherently connected. The approach has been tested in real scenarios with data coming from laser scanners yielding promising results. We have deeply evaluated the results by analyzing how reliably these elements can be detected and how accurately they are modeled. PMID:23443369

  17. Octree-based segmentation for terrestrial LiDAR point cloud data in industrial applications

    NASA Astrophysics Data System (ADS)

    Su, Yun-Ting; Bethel, James; Hu, Shuowen

    2016-03-01

    Automated and efficient algorithms to perform segmentation of terrestrial LiDAR data are critical for exploitation of 3D point clouds, where the ultimate goal is CAD modeling of the segmented data. In this work, a novel segmentation technique is proposed, starting with octree decomposition to recursively divide the scene into octants or voxels, followed by a novel split and merge framework that uses graph theory and a series of connectivity analyses to intelligently merge components into larger connected components. The connectivity analysis, based on a combination of proximity, orientation, and curvature connectivity criteria, is designed for the segmentation of pipes, vessels, and walls from terrestrial LiDAR data of piping systems at industrial sites, such as oil refineries, chemical plants, and steel mills. The proposed segmentation method is exercised on two terrestrial LiDAR datasets of a steel mill and a chemical plant, demonstrating its ability to correctly reassemble and segregate features of interest.
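
    A much-reduced stand-in for the decomposition-and-merge idea: occupy a regular voxel grid (the paper uses an octree) and merge occupied voxels into 26-connected components; the real method additionally tests orientation and curvature before merging. Voxel size and the synthetic "pipes" are illustrative:

```python
# Voxelize a cloud and group occupied voxels into connected components.
import numpy as np
from scipy import ndimage

def voxel_components(points, voxel=0.2):
    ijk = np.floor((points - points.min(axis=0)) / voxel).astype(int)
    grid = np.zeros(ijk.max(axis=0) + 1, dtype=bool)
    grid[tuple(ijk.T)] = True
    labels, n = ndimage.label(grid, structure=np.ones((3, 3, 3)))   # 26-connectivity
    return labels[tuple(ijk.T)], n       # component id per point, number of components

rng = np.random.default_rng(6)
pipe_a = np.c_[rng.uniform(0, 5, 3000), np.zeros(3000), np.zeros(3000)] + rng.normal(0, 0.03, (3000, 3))
pipe_b = pipe_a + [0, 3.0, 0]            # a second, well-separated pipe
ids, n = voxel_components(np.vstack([pipe_a, pipe_b]))
print(n)                                 # expected: 2 connected components
```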

  18. Sharing Clouds: Showing, Distributing, and Sharing Large Point Datasets

    NASA Astrophysics Data System (ADS)

    Grigsby, S.

    2012-12-01

    Sharing large data sets with colleagues and the general public presents a unique technological challenge for scientists. In addition to large data volumes, there are significant challenges in representing data that is often irregular, multidimensional and spatial in nature. For derived data products, additional challenges exist in displaying and providing provenance data. For this presentation, several open source technologies are demonstrated for the remote display and access of large irregular point data sets. These technologies and techniques include the remote viewing of point data using HTML5 and OpenGL, which provides a highly accessible preview of the data sets for a range of audiences. Intermediate levels of accessibility and high levels of interactivity are accomplished with technologies such as WebDAV, which allows collaborators to run analysis on local clients, using data stored and administered on remote servers. Remote processing and analysis, including provenance tracking, will be discussed at the workgroup level. The data sets used for this presentation include data acquired from the NSF funded National Center for Airborne Laser Mapping (NCALM), and data acquired for research and instructional use in NASA's Student Airborne Research Program (SARP). These datasets include Light Detection And Ranging (LiDAR) point clouds ranging in size from several hundred thousand to several hundred million data points; the techniques and technologies discussed are applicable to other forms of irregular point data.

  19. 3-D Surface Visualization of pH Titration "Topos": Equivalence Point Cliffs, Dilution Ramps, and Buffer Plateaus

    ERIC Educational Resources Information Center

    Smith, Garon C.; Hossain, Md Mainul; MacCarthy, Patrick

    2014-01-01

    3-D topographic surfaces ("topos") can be generated to visualize how pH behaves during titration and dilution procedures. The surfaces are constructed by plotting computed pH values above a composition grid with volume of base added in one direction and overall system dilution on the other. What emerge are surface features that…
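
    A minimal worked example of how such a surface can be computed, assuming the simplest case of a strong monoprotic acid titrated with NaOH (the article treats general acid-base systems): pH follows from the charge balance [H+] - Kw/[H+] = C_acid - C_base at each node of the (volume of base, dilution) grid. All concentrations and grid ranges are illustrative:

```python
# pH "topo" surface over a titration/dilution composition grid (strong acid + NaOH).
import numpy as np

Kw, Ca, Va, Cb = 1e-14, 0.10, 50.0, 0.10          # Kw; acid conc (M), acid volume (mL), base conc (M)
Vb = np.linspace(0.0, 100.0, 201)                 # mL of NaOH added (titration axis)
dilution = np.logspace(0, 3, 61)                  # overall dilution factor (dilution axis)
VB, D = np.meshgrid(Vb, dilution)

Vtot = (Va + VB) * D
delta = (Ca * Va - Cb * VB) / Vtot                # excess strong acid (mol/L), may be negative
H = (delta + np.sqrt(delta**2 + 4.0 * Kw)) / 2.0  # positive root of the charge balance
pH = -np.log10(H)
print(pH.min(), pH.max())                         # cliff near VB = 50 mL, ramps with dilution
```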

  20. Visualization of Buffer Capacity with 3-D "Topo" Surfaces: Buffer Ridges, Equivalence Point Canyons and Dilution Ramps

    ERIC Educational Resources Information Center

    Smith, Garon C.; Hossain, Md Mainul

    2016-01-01

    BufCap TOPOS is free software that generates 3-D topographical surfaces ("topos") for acid-base equilibrium studies. It portrays pH and buffer capacity behavior during titration and dilution procedures. Topo surfaces are created by plotting computed pH and buffer capacity values above a composition grid with volume of NaOH as the x axis…

  1. Post-Earthquake Geology in the ERA of Ubiquitous Point Clouds

    NASA Astrophysics Data System (ADS)

    Oskin, M. E.; Arrowsmith, R.; Nissen, E.; Morelan, A. E., III; Trexler, C. C.; Gold, P. O.; Elliott, A. J.; Crosby, C. J.; Kellogg, L. H.

    2015-12-01

    High-precision 3D imaging with lidar and structure-from-motion photogrammetry is revolutionizing the collection of post-earthquake displacement information. Massive point-cloud datasets, and their differences epoch to epoch, provide valuable information for scientific, engineering, and emergency response, and also pose challenges to process, handle, analyze, share, and visualize. In the physical world, earthquake surface ruptures and secondary deformation features are ephemeral, subject to natural degradation by erosion, or to repair of the built environment. Post-earthquake 3D imaging overcomes this limitation by virtually archiving the primary surface expression of deformation. This allows geologists to make precise, repeatable measurements, and to assess subtle, distributed deformation often missed by traditional field methods. Generally, the more local and inexpensive the technique, the quicker that a response can be organized: ground and drone-based SfM (hours), terrestrial laser scanning (days), to airborne lidar (weeks). With the growth of high-resolution topography along fault zones and for other mapping purposes, it is increasingly likely that a large earthquake will coincide with an existing data set. Such an event holds the exciting promise of point cloud differencing to develop a high-resolution, fully three dimensional displacement and rotation field. Existing paired airborne lidar data sets from Japan, New Zealand, Mexico, and California reveal new and informative features of earthquake-induced near-field deformation, but also illustrate that significant challenges impede the separation of a tectonic signal from noise and uncertainty within lidar data. In future earthquakes, there will be great opportunities, and soon enough, an imperative, to measure deformation at sub-meter resolution over entire cities, and along faults hundreds of kilometers in length. As a community, we stand at a threshold, watching this oncoming deluge of repeat and ubiquitous

  2. Successful gas hydrate prospecting using 3D seismic - A case study for the Mt. Elbert prospect, Milne Point, North Slope Alaska

    USGS Publications Warehouse

    Inks, T.L.; Agena, W.F.

    2008-01-01

    In February 2007, the Mt. Elbert Prospect stratigraphic test well, Milne Point, North Slope Alaska encountered thick methane gas hydrate intervals, as predicted by 3D seismic interpretation and modeling. Methane gas hydrate-saturated sediment was found in two intervals, totaling more than 100 ft., identified and mapped based on seismic character and wavelet modeling.

  3. Uav-Based Photogrammetric Point Clouds - Tree STEM Mapping in Open Stands in Comparison to Terrestrial Laser Scanner Point Clouds

    NASA Astrophysics Data System (ADS)

    Fritz, A.; Kattenborn, T.; Koch, B.

    2013-08-01

    In both ecology and forestry, there is a high demand for structural information of forest stands. Forest structures, due to their heterogeneity and density, are often difficult to assess. Hence, a variety of technologies are being applied to account for this "difficult to come by" information. Common techniques are aerial images or ground- and airborne-Lidar. In the present study we evaluate the potential use of unmanned aerial vehicles (UAVs) as a platform for tree stem detection in open stands. A flight campaign over a test site near Freiburg, Germany covering a target area of 120 × 75 [m2] was conducted. The dominant tree species of the site is oak (Quercus robur) with almost no understory growth. Over 1000 images with a tilt angle of 45° were shot. The flight pattern applied consisted of two antipodal staggered flight routes at a height of 55 [m] above the ground. We used a Panasonic G3 consumer camera equipped with a 14-42 [mm] standard lens and a 16.6 megapixel sensor. The data collection took place in leaf-off state in April 2013. The area was prepared with artificial ground control points for transformation of the structure-from-motion (SFM) point cloud into real world coordinates. After processing, the results were compared with a terrestrial laser scanner (TLS) point cloud of the same area. In the 0.9 [ha] test area, 102 individual trees above 7 [cm] diameter at breast height were located in the TLS cloud. We chose the software CMVS/PMVS-2 since its algorithms are developed with a focus on dense reconstruction. The processing chain for the UAV-acquired images consists of six steps: a. cleaning the data: removing of blurry, under- or overexposed and off-site images; b. applying the SIFT operator [Lowe, 2004]; c. image matching; d. bundle adjustment; e. clustering; and f. dense reconstruction. In total, 73 stems were considered as reconstructed and located within one meter of the reference trees. In general, stems were far less accurate and complete than

  4. Detection of Slope Movement by Comparing Point Clouds Created by SFM Software

    NASA Astrophysics Data System (ADS)

    Oda, Kazuo; Hattori, Satoko; Takayama, Toko

    2016-06-01

    This paper proposes a movement detection method between point clouds created by SfM software, without setting any onsite georeferenced points. SfM software, like Smart3DCapture, PhotoScan, and Pix4D, is convenient for non-professional operators of photogrammetry, because these systems simply require the specification of a sequence of photos and output point clouds with a colour index which corresponds to the colour of the original image pixel onto which the point is projected. SfM software can execute aerial triangulation and create dense point clouds fully automatically. This is useful when monitoring the motion of unstable slopes, or of loose rocks on slopes along roads or railroads. Most existing methods, however, use mesh-based DSMs for comparing point clouds before/after movement, and they cannot be applied in cases where part of the slope forms overhangs. And in some cases the movement is smaller than the precision of the ground control points, so registering two point clouds with GCPs is not appropriate. The change detection method in this paper adopts the CCICP (Classification and Combined ICP) algorithm for registering point clouds before/after movement. The CCICP algorithm is a type of ICP (Iterative Closest Point) which minimizes point-to-plane and point-to-point distances simultaneously, and also rejects incorrect correspondences based on point classification by PCA (Principal Component Analysis). Precision tests show that the CCICP method can register two point clouds up to the order of 1 pixel size in the original images. Ground control points set in the site are useful for the initial alignment of the two point clouds. If there are no GCPs in the slope site, the initial alignment is achieved by measuring feature points as ground control points in the point cloud before movement, and creating the point cloud after movement with these ground control points. When the motion is a rigid transformation, as in the case of a loose rock moving on a slope, the motion including rotation can be analysed by executing CCICP for the loose rock and
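
    The core building block inside any ICP variant, including CCICP, is the closed-form rigid-transform solve from closest-point correspondences. Below is a hedged single-iteration sketch using the SVD (Kabsch) solution for the point-to-point term only; CCICP's point classification, point-to-plane term and outlier rejection are not reproduced. The synthetic displacement is illustrative:

```python
# One point-to-point ICP iteration: match closest points, solve R and t via SVD.
import numpy as np
from scipy.spatial import cKDTree

def icp_iteration(source, target):
    _, idx = cKDTree(target).query(source)          # closest-point correspondences
    src_c, tgt_c = source.mean(axis=0), target[idx].mean(axis=0)
    H = (source - src_c).T @ (target[idx] - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                        # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    return (R @ source.T).T + t, R, t

rng = np.random.default_rng(7)
before = rng.uniform(0, 10, (2000, 3))
after = before + [0.05, -0.02, 0.01]                # small rigid slope displacement
moved, R, t = icp_iteration(before, after)
print(t)                                            # approximately recovers the imposed displacement
```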

  5. Semi-automatic extraction of sectional view from point clouds - The case of Ottmarsheim's abbey-church

    NASA Astrophysics Data System (ADS)

    Landes, T.; Bidino, S.; Guild, R.

    2014-06-01

    Today, elevations or sectional views of buildings are often produced from terrestrial laser scanning. However, due to the amount of data to process and because usually 2D maps are required by customers, the 3D point cloud is often degraded into 2D slices. In a sectional view, not only the portions of the object which are intersected by the cutting plane but also the edges and contours of other parts of the object which are visible behind the cutting plane are represented. To avoid the tedious manual drawing, the aim of this work is to propose a semi-automatic approach for creating sectional views by point cloud processing. The extraction of sectional views requires in a first step the segmentation of the point cloud into planar and non-planar entities. Since arches, vaults and columns can be found in cultural heritage buildings, the position and the direction of the sectional view must be taken into account before contour extraction. Indeed, the edges of surfaces of revolution depend on the chosen view. The developed extraction approach is detailed based on point clouds acquired inside and outside churches. The resulting sectional view has been evaluated in a qualitative and quantitative way by comparing it with a reference sectional view made by hand. A mean deviation of 3 cm between both sections proves that the proposed approach is promising. Regarding the processing time, despite a few manual corrections, it has saved 40% of the time required for manual drawing.
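
    The very first processing step, isolating the points intersected by the cutting plane, can be sketched as a simple slicing operation; the plane definition, tolerance band and data below are illustrative and do not reproduce the paper's segmentation or contour extraction:

```python
# Slice a point cloud with a cutting plane and project the slice to 2D section coordinates.
import numpy as np

def section(points, plane_point, plane_normal, tol=0.02):
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)
    d = (points - plane_point) @ n                      # signed distance to the cutting plane
    band = points[np.abs(d) < tol]
    u = np.cross(n, [0.0, 0.0, 1.0])                    # build an in-plane 2D basis (u, v)
    if np.linalg.norm(u) < 1e-9:                        # horizontal cutting plane
        u = np.array([1.0, 0.0, 0.0])
    u = u / np.linalg.norm(u)
    v = np.cross(n, u)
    rel = band - plane_point
    return np.c_[rel @ u, rel @ v]                      # 2D coordinates of the section slice

cloud = np.random.default_rng(8).uniform(0, 5, (50000, 3))
print(section(cloud, plane_point=[2.5, 0, 0], plane_normal=[1, 0, 0]).shape)
```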

  6. Joint detection of anatomical points on surface meshes and color images for visual registration of 3D dental models

    NASA Astrophysics Data System (ADS)

    Destrez, Raphaël.; Albouy-Kissi, Benjamin; Treuillet, Sylvie; Lucas, Yves

    2015-04-01

    Computer aided planning for orthodontic treatment requires knowing the occlusion of separately scanned dental casts. A visually guided registration is conducted, starting by extracting corresponding features in both photographs and 3D scans. To achieve this, the dental neck and occlusion surface are firstly extracted by image segmentation and 3D curvature analysis. Then, an iterative registration process is conducted during which feature positions are refined, guided by previously found anatomic edges. The occlusal edge image detection is improved by an original algorithm which follows Canny's poorly detected edges using a priori knowledge of tooth shapes. Finally, the influence of feature extraction and position optimization is evaluated in terms of the quality of the induced registration. The best combination of feature detection and optimization leads to a positioning average error of 1.10 mm and 2.03°.

  7. Integrated ray tracing simulation of annual variation of spectral bio-signatures from cloud free 3D optical Earth model

    NASA Astrophysics Data System (ADS)

    Ryu, Dongok; Kim, Sug-Whan; Kim, Dae Wook; Lee, Jae-Min; Lee, Hanshin; Park, Won Hyun; Seong, Sehyun; Ham, Sun-Jeong

    2010-09-01

    Understanding the Earth spectral bio-signatures provides an important reference datum for accurate de-convolution of collapsed spectral signals from potential earth-like planets of other star systems. This study presents a new ray tracing computation method including an improved 3D optical earth model constructed with the coastal line and vegetation distribution data from the Global Ecological Zone (GEZ) map. Using non-Lambertian bidirectional scattering distribution function (BSDF) models, the input earth surface model is characterized with three different scattering properties and their annual variations depending on monthly changes in vegetation distribution, sea ice coverage and illumination angle. The input atmosphere model consists of one layer with a Rayleigh scattering model from the sea level to 100 km in altitude, and its radiative transfer characteristics are computed for four seasons using the SMART codes. The ocean scattering model is a combination of sun-glint scattering and Lambertian scattering models. The land surface scattering is defined with the semi-empirical parametric kernel method used for the MODIS and POLDER missions. These three component models were integrated into the final Earth model that was then incorporated into the in-house built integrated ray tracing (IRT) model capable of computing both spectral imaging and radiative transfer performance of a hypothetical space instrument as it observes the Earth from its designated orbit. The IRT model simulation inputs include variation in earth orientation, illuminated phases, and seasonal sea ice and vegetation distribution. The trial simulation runs result in the annual variations in phase-dependent disk averaged spectra (DAS) and their associated bio-signatures such as NDVI. The full computational details are presented together with the resulting annual variation in DAS and its associated bio-signatures.

  8. Decontamination of oil-polluted soil by cloud point extraction.

    PubMed

    Komáromy-Hiller, G; von Wandruszka, R

    1995-01-01

    An extraction procedure based on cloud point phase separation of nonionic surfactants was used to remove oil contamination from soils. The detergent employed was Triton X-114, and its clouding behavior was monitored by means of a fluorescence probe. Changes in the I1/I3 ratio of pyrene indicated gradual dehydration of the detergent micelles upon heating. The rate of phase separation, and the volume and water content of the micellar phase were determined. In the practical clean-up, 85-98% of the oil present in the soil was found to enter the micellar phase of the separated washing liquid. A 15-min washing time with 3-5% detergent was found to be sufficient for this degree of contaminant removal from soil containing 0.009-0.017% oil, using a liquid:solid ratio of 5:2. The extraction efficiency decreased with increasing carbon content of the soil. The process holds promise for large-scale treatment of oil-polluted soils. PMID:18966205

  9. Point Cloud Mapping Methods for Documenting Cultural Landscape Features at the Wormsloe State Historic Site, Savannah, Georgia, USA

    NASA Astrophysics Data System (ADS)

    Jordana, T. R.; Goetcheus, C. L.; Madden, M.

    2016-06-01

    Documentation of the three-dimensional (3D) cultural landscape has traditionally been conducted during site visits using conventional photographs, standard ground surveys and manual measurements. In recent years, there have been rapid developments in technologies that produce highly accurate 3D point clouds, including aerial LiDAR, terrestrial laser scanning, and photogrammetric data reduction from unmanned aerial systems (UAS) images and hand held photographs using Structure from Motion (SfM) methods. These 3D point clouds can be precisely scaled and used to conduct measurements of features even after the site visit has ended. As a consequence, it is becoming increasingly possible to collect non-destructive data for a wide variety of cultural site features, including landscapes, buildings, vegetation, artefacts and gardens. As part of a project for the U.S. National Park Service, a variety of data sets have been collected for the Wormsloe State Historic Site, near Savannah, Georgia, USA. In an effort to demonstrate the utility and versatility of these methods at a range of scales, comparisons of the features mapped with different techniques will be discussed with regards to accuracy, data set completeness, cost and ease-of-use.

  10. 3D surface reconstruction based on image stitching from gastric endoscopic video sequence

    NASA Astrophysics Data System (ADS)

    Duan, Mengyao; Xu, Rong; Ohya, Jun

    2013-09-01

    This paper proposes a method for reconstructing 3D detailed structures of internal organs such as the gastric wall from endoscopic video sequences. The proposed method consists of four major steps: feature-point-based 3D reconstruction, 3D point cloud stitching, dense point cloud creation and Poisson surface reconstruction. Before the first step, we partition one video sequence into groups, where each group consists of two successive frames (image pairs), and each pair in each group contains one overlapping part, which is used as a stitching region. First, the 3D point cloud of each group is reconstructed by utilizing structure from motion (SFM). Secondly, a scheme based on SIFT features registers and stitches the obtained 3D point clouds, by estimating the transformation matrix of the overlapping part between different groups with high accuracy and efficiency. Thirdly, we select the most robust SIFT feature points as the seed points, and then obtain the dense point cloud from the sparse point cloud via a depth testing method presented by Furukawa. Finally, by utilizing Poisson surface reconstruction, polygonal patches for the internal organs are obtained. Experimental results demonstrate that the proposed method achieves a high accuracy and efficiency for 3D reconstruction of the gastric surface from an endoscopic video sequence.

  11. Cloud Point Extraction for Electroanalysis: Anodic Stripping Voltammetry of Cadmium

    PubMed Central

    Rusinek, Cory A.; Bange, Adam; Papautsky, Ian; Heineman, William R.

    2016-01-01

    Cloud point extraction (CPE) is a well-established technique for the pre-concentration of hydrophobic species from water without the use of organic solvents. Subsequent analysis is then typically performed via atomic absorption spectroscopy (AAS), UV-Vis spectroscopy, or high performance liquid chromatography (HPLC). However, the suitability of CPE for electroanalytical methods such as stripping voltammetry has not been reported. We demonstrate the use of CPE for electroanalysis using the determination of cadmium (Cd2+) by anodic stripping voltammetry (ASV) as a representative example. Rather than using the chelating agents which are commonly used in CPE to form a hydrophobic, extractable metal complex, we used iodide and sulfuric acid to neutralize the charge on Cd2+ to form an extractable ion pair. Triton X-114 was chosen as the surfactant for the extraction because its cloud point temperature is near room temperature (22–25 °C). Bare glassy carbon (GC), bismuth-coated glassy carbon (Bi-GC), and mercury-coated glassy carbon (Hg-GC) electrodes were compared for the CPE-ASV. A detection limit for Cd2+ of 1.7 nM (0.2 ppb) was obtained with the Hg-GC electrode. ASV without CPE was also investigated; without the pre-concentration step the detection limit was 20x higher (4.0 ppb). The suitability of this procedure for the analysis of tap and river water samples was also demonstrated. This simple, versatile, environmentally friendly and cost-effective extraction method is potentially applicable to a wide variety of transition metals and organic compounds that are amenable to detection by electroanalytical methods. PMID:25996561

  12. Spatially explicit spectral analysis of point clouds and geospatial data

    NASA Astrophysics Data System (ADS)

    Buscombe, Daniel

    2016-01-01

    The increasing use of spatially explicit analyses of high-resolution spatially distributed data (imagery and point clouds) for the purposes of characterising spatial heterogeneity in geophysical phenomena necessitates the development of custom analytical and computational tools. In recent years, such analyses have become the basis of, for example, automated texture characterisation and segmentation, roughness and grain size calculation, and feature detection and classification, from a variety of data types. In this work, much use has been made of statistical descriptors of localised spatial variations in amplitude variance (roughness), however the horizontal scale (wavelength) and spacing of roughness elements is rarely considered. This is despite the fact that the ratio of characteristic vertical to horizontal scales is not constant and can yield important information about physical scaling relationships. Spectral analysis is a hitherto under-utilised but powerful means to acquire statistical information about relevant amplitude and wavelength scales, simultaneously and with computational efficiency. Further, quantifying spatially distributed data in the frequency domain lends itself to the development of stochastic models for probing the underlying mechanisms which govern the spatial distribution of geological and geophysical phenomena. The software package PySESA (Python program for Spatially Explicit Spectral Analysis) has been developed for generic analyses of spatially distributed data in both the spatial and frequency domains. Developed predominantly in Python, it accesses libraries written in Cython and C++ for efficiency. It is open source and modular, therefore readily incorporated into, and combined with, other data analysis tools and frameworks with particular utility for supporting research in the fields of geomorphology, geophysics, hydrography, photogrammetry and remote sensing. The analytical and computational structure of the toolbox is described
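
    As an illustration of the frequency-domain idea (not the PySESA implementation, which operates on point-cloud windows), the sketch below detrends and windows a small gridded elevation patch and reads the dominant roughness wavelength from its 2D power spectrum; the grid spacing and synthetic surface are illustrative:

```python
# Dominant roughness wavelength from the 2D power spectrum of an elevation patch.
import numpy as np

dx = 0.25                                          # grid spacing in metres (assumed)
x = np.arange(0, 32, dx)
X, Y = np.meshgrid(x, x)
z = 0.05 * np.sin(2 * np.pi * X / 4.0) + 0.01 * np.random.default_rng(9).normal(size=X.shape)

z = z - z.mean()                                   # simple detrend
w = np.hanning(z.shape[0])[:, None] * np.hanning(z.shape[1])[None, :]
P = np.abs(np.fft.fftshift(np.fft.fft2(z * w)))**2
freqs = np.fft.fftshift(np.fft.fftfreq(z.shape[0], d=dx))
kx = freqs[np.unravel_index(P.argmax(), P.shape)[1]]
print(1.0 / abs(kx))                               # dominant wavelength, ~4 m as constructed
```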

  13. Cloud Point Extraction for Electroanalysis: Anodic Stripping Voltammetry of Cadmium.

    PubMed

    Rusinek, Cory A; Bange, Adam; Papautsky, Ian; Heineman, William R

    2015-06-16

    Cloud point extraction (CPE) is a well-established technique for the preconcentration of hydrophobic species from water without the use of organic solvents. Subsequent analysis is then typically performed via atomic absorption spectroscopy (AAS), UV-vis spectroscopy, or high performance liquid chromatography (HPLC). However, the suitability of CPE for electroanalytical methods such as stripping voltammetry has not been reported. We demonstrate the use of CPE for electroanalysis using the determination of cadmium (Cd(2+)) by anodic stripping voltammetry (ASV). Rather than using the chelating agents which are commonly used in CPE to form a hydrophobic, extractable metal complex, we used iodide and sulfuric acid to neutralize the charge on Cd(2+) to form an extractable ion pair. This offers good selectivity for Cd(2+) as no interferences were observed from other heavy metal ions. Triton X-114 was chosen as the surfactant for the extraction because its cloud point temperature is near room temperature (22-25 °C). Bare glassy carbon (GC), bismuth-coated glassy carbon (Bi-GC), and mercury-coated glassy carbon (Hg-GC) electrodes were compared for the CPE-ASV. A detection limit for Cd(2+) of 1.7 nM (0.2 ppb) was obtained with the Hg-GC electrode. ASV with CPE gave a 20x decrease (4.0 ppb) in the detection limit compared to ASV without CPE. The suitability of this procedure for the analysis of tap and river water samples was demonstrated. This simple, versatile, environmentally friendly, and cost-effective extraction method is potentially applicable to a wide variety of transition metals and organic compounds that are amenable to detection by electroanalytical methods. PMID:25996561

  14. Spatially explicit spectral analysis of point clouds and geospatial data

    USGS Publications Warehouse

    Buscombe, Daniel D.

    2015-01-01

    The increasing use of spatially explicit analyses of high-resolution spatially distributed data (imagery and point clouds) for the purposes of characterising spatial heterogeneity in geophysical phenomena necessitates the development of custom analytical and computational tools. In recent years, such analyses have become the basis of, for example, automated texture characterisation and segmentation, roughness and grain size calculation, and feature detection and classification, from a variety of data types. In this work, much use has been made of statistical descriptors of localised spatial variations in amplitude variance (roughness), however the horizontal scale (wavelength) and spacing of roughness elements is rarely considered. This is despite the fact that the ratio of characteristic vertical to horizontal scales is not constant and can yield important information about physical scaling relationships. Spectral analysis is a hitherto under-utilised but powerful means to acquire statistical information about relevant amplitude and wavelength scales, simultaneously and with computational efficiency. Further, quantifying spatially distributed data in the frequency domain lends itself to the development of stochastic models for probing the underlying mechanisms which govern the spatial distribution of geological and geophysical phenomena. The software package PySESA (Python program for Spatially Explicit Spectral Analysis) has been developed for generic analyses of spatially distributed data in both the spatial and frequency domains. Developed predominantly in Python, it accesses libraries written in Cython and C++ for efficiency. It is open source and modular, therefore readily incorporated into, and combined with, other data analysis tools and frameworks with particular utility for supporting research in the fields of geomorphology, geophysics, hydrography, photogrammetry and remote sensing. The analytical and computational structure of the toolbox is

  15. Evaluation of Vertical Lacunarity Profiles in Forested Areas Using Airborne Laser Scanning Point Clouds

    NASA Astrophysics Data System (ADS)

    Székely, B.; Kania, A.; Standovár, T.; Heilmeier, H.

    2016-06-01

    The horizontal variation and vertical layering of the vegetation are important properties of the canopy structure determining the habitat; the three-dimensional (3D) distribution of objects (shrub layers, understory vegetation, etc.) is related to the environmental factors (e.g., illumination, visibility). It has been shown that gaps in forests, mosaic-like structures are essential to biodiversity; various methods have been introduced to quantify this property. As the distribution of gaps in the vegetation is a multi-scale phenomenon, in order to capture it in its entirety, scale-independent methods are preferred; one of these is the calculation of lacunarity. We used Airborne Laser Scanning point clouds measured over a forest plantation situated in a former floodplain. The flat topographic relief ensured that the tree growth is independent of topographic effects. The tree pattern in the plantation crops provided various quasi-regular and irregular patterns, as well as various ages of the stands. The point clouds were voxelized and layers of voxels were considered as images for two-dimensional input. Images computed for a certain vicinity of each reference point were used as input for the calculation of lacunarity curves, providing a stack of lacunarity curves per reference point. These sets of curves have been compared to reveal spatial changes of this property. As the dynamic range of the lacunarity values is very large, the natural logarithms of the values were considered. Logarithms of lacunarity functions show canopy-related variations; we analysed these variations along transects. The spatial variation can be related to forest properties and ecology-specific aspects.
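
    A hedged sketch of the gliding-box lacunarity for one voxel layer treated as a binary 2D image: Lambda(r) = <M^2> / <M>^2 over all r x r boxes, where M is the number of occupied cells in a box. Box sizes and the stand-in occupancy layer are illustrative:

```python
# Gliding-box lacunarity curve for a binary occupancy image.
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def lacunarity(binary_img, box):
    windows = sliding_window_view(binary_img.astype(float), (box, box))
    masses = windows.sum(axis=(2, 3)).ravel()          # box "mass" = occupied cells per box
    if masses.mean() == 0:
        return np.nan
    return masses.var() / masses.mean() ** 2 + 1.0     # <M^2>/<M>^2

rng = np.random.default_rng(10)
layer = rng.random((200, 200)) < 0.2                   # stand-in layer from voxelisation
curve = {r: lacunarity(layer, r) for r in (2, 4, 8, 16, 32)}
print({r: round(np.log(v), 3) for r, v in curve.items()})   # log-lacunarity vs box size
```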

  16. Feature selection for quality assessment of indoor mobile mapping point clouds

    NASA Astrophysics Data System (ADS)

    Huang, Fangfang; Wen, Chenglu; Wang, Cheng; Li, Jonathan

    2016-03-01

    Owing to the complexity of the indoor environment, characterised by close ranges, multiple viewing angles, occlusion, uneven lighting conditions and a lack of absolute positioning information, quality assessment of indoor mobile mapping point clouds is a tough and challenging task. It is meaningful to evaluate the features extracted from indoor point clouds prior to further quality assessment. In this paper, we mainly focus on feature extraction from indoor RGB-D camera data for the quality assessment of point cloud data: local features are selected and screened using a random forest algorithm to find the optimal features for the next step's quality assessment. First, we collect indoor point cloud data and classify them into classes of complete or incomplete. Then, we extract high-dimensional features from the input point cloud data. Afterwards, we select discriminative features through the random forest. Experimental results on different classes demonstrate the effective and promising performance of the presented method for point cloud quality assessment.
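
    A minimal sketch of the selection step, assuming scikit-learn is available: rank candidate point-cloud features by random-forest importance and keep the top ones for the subsequent quality assessment. The data, labels and feature count here are synthetic:

```python
# Feature screening by random-forest importance (synthetic stand-in data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(11)
X = rng.normal(size=(500, 10))                    # 10 candidate features per sample
y = (X[:, 3] + 0.5 * X[:, 7] > 0).astype(int)     # "complete"/"incomplete" labels
# (only features 3 and 7 are actually informative in this toy example)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranking = np.argsort(rf.feature_importances_)[::-1]
print(ranking[:3])                                # features 3 and 7 should rank on top
```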

  17. Individual 3D region-of-interest atlas of the human brain: neural-network-based tissue classification with automatic training point extraction

    NASA Astrophysics Data System (ADS)

    Wagenknecht, Gudrun; Kaiser, Hans-Juergen; Obladen, Thorsten; Sabri, Osama; Buell, Udalrich

    2000-06-01

    The purpose of individual 3D region-of-interest atlas extraction is to automatically define anatomically meaningful regions in 3D MRI images for quantification of functional parameters (PET, SPECT: rMRGlu, rCBF). The first step of atlas extraction is to automatically classify brain tissue types into gray matter (GM), white matter (WM), cerebrospinal fluid (CSF), scalp/bone (SB) and background (BG). A feed-forward neural network with a back-propagation training algorithm is used and compared to other numerical classifiers. It can be trained by a sample from the individual patient data set in question. Classification is done by a 'winner takes all' decision. Automatic extraction of a user-specified number of training points is done in a cross-sectional slice. Background separation is done by simple region growing. The most homogeneous voxels define the region for WM training point extraction (TPE). Non-white-matter and non-background regions are analyzed for GM and CSF training points. For SB TPE, the distance from the BG region is one feature. For each class, spatially uniformly distributed training points are extracted by a random generator from these regions. Simulated and real 3D MRI images are analyzed, and error rates for TPE and classification are calculated. The resulting class images can be analyzed for extraction of anatomical ROIs.
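
    A minimal sketch of the classification step, using a small feed-forward network trained with back-propagation and an arg-max ("winner takes all") decision; the per-voxel feature vectors and labels here are placeholders, not the actual MRI features:

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(1)
      classes = ["GM", "WM", "CSF", "SB", "BG"]
      X_train = rng.normal(size=(500, 4))                  # placeholder per-voxel features
      y_train = rng.integers(0, len(classes), size=500)    # labels from training-point extraction

      net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000).fit(X_train, y_train)
      probs = net.predict_proba(rng.normal(size=(3, 4)))
      print([classes[i] for i in probs.argmax(axis=1)])    # 'winner takes all' decision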

  18. Two-step adaptive extraction method for ground points and breaklines from lidar point clouds

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Huang, Ronggang; Dong, Zhen; Zang, Yufu; Li, Jianping

    2016-09-01

    The extraction of ground points and breaklines is a crucial step during generation of high quality digital elevation models (DEMs) from airborne LiDAR point clouds. In this study, we propose a novel automated method for this task. To overcome the disadvantages of applying a single filtering method in areas with various types of terrain, the proposed method first classifies the points into a set of segments and one set of individual points, which are filtered by segment-based filtering and multi-scale morphological filtering, respectively. In the process of multi-scale morphological filtering, the proposed method removes amorphous objects from the set of individual points to decrease the effect of the maximum scale on the filtering result. The proposed method then extracts the breaklines from the ground points, which provide a good foundation for generation of a high quality DEM. Finally, the experimental results demonstrate that the proposed method extracts ground points in a robust manner while preserving the breaklines.
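
    The multi-scale morphological filtering idea can be sketched on a rasterised minimum-elevation grid as follows; this is a simplified, generic illustration (progressive opening with a scale-dependent threshold), not the authors' exact filter, and the slope and window parameters are assumptions:

      import numpy as np
      from scipy.ndimage import grey_opening

      def ground_mask(zmin_grid, cell=1.0, slope=0.3, windows=(3, 5, 9, 17)):
          """Progressive morphological opening; cells lifted above a
          scale-dependent threshold are treated as non-ground."""
          surface = zmin_grid.copy()
          ground = np.ones_like(zmin_grid, dtype=bool)
          for w in windows:
              opened = grey_opening(surface, size=(w, w))
              threshold = slope * (w // 2) * cell            # more relief allowed at larger scales
              ground &= (surface - opened) <= threshold
              surface = opened
          return ground

      zmin = np.random.rand(100, 100) + 5.0                  # rasterised minimum elevations
      zmin[40:45, 40:45] += 4.0                              # a small building
      print(ground_mask(zmin).sum(), "of", zmin.size, "cells kept as ground")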

  19. The Investigation of Accuracy of 3 Dimensional Models Generated From Point Clouds with Terrestrial Laser Scanning

    NASA Astrophysics Data System (ADS)

    Gumus, Kutalmis; Erkaya, Halil

    2013-04-01

    In terrestrial laser scanning (TLS) applications, it is necessary to take into consideration the conditions that affect the scanning process, especially the general characteristics of the laser scanner, the geometric properties of the scanned object (shape, size, etc.), and its spatial location in the environment. Three-dimensional models obtained with TLS allow the geometric features and relevant dimensions of the scanned object to be determined in an indirect way. In order to assess the spatial location and geometric accuracy of a 3D model created by terrestrial laser scanning, it is necessary to use measurement tools that give more precise results than TLS. Geometric comparisons are performed by analysing the differences in distances, angles between surfaces and cross-section measurements between the 3D model created with TLS and the values measured by other devices. The performance of the scanners and the size and shape of the scanned objects are tested using reference objects whose sizes are determined with high precision. In this study, the important points to consider when choosing reference objects were highlighted. The steps from processing the point clouds collected by scanning, through regularising these points, to modelling in three dimensions were presented visually. In order to test the geometric correctness of the models obtained by terrestrial laser scanners, sample objects with simple geometric shapes such as cubes, rectangular prisms and cylinders made of concrete were used as reference models. Three-dimensional models were generated by scanning these reference models with a Trimble Mensi GS 100. The dimensions of the 3D models created from the point clouds were compared with the precisely measured dimensions of the reference objects. For this purpose, horizontal and vertical cross-sections were taken from the reference objects and the generated 3D models and the proximity of

  20. Detection of Geometric Keypoints and its Application to Point Cloud Coarse Registration

    NASA Astrophysics Data System (ADS)

    Bueno, M.; Martínez-Sánchez, J.; González-Jorge, H.; Lorenzo, H.

    2016-06-01

    Acquisition of large-scale scenes frequently involves the storage of large amounts of data and the placement of several scan positions to obtain complete coverage of an object. This results in a different coordinate system for each scan position, so preprocessing to bring the data into a common reference frame is usually needed before analysis. Automatic point cloud registration without locating artificial markers is a challenging field of study, and the registration of millions or billions of points is a demanding task. Subsampling the original data usually alleviates this, at the cost of reducing the precision of the final registration. In this work, subsampling via the detection of keypoints and its applicability to coarse alignment are studied. The keypoints obtained are based on geometric features of each individual point and are extracted using the Difference of Gaussians approach over 3D data. The descriptors include features such as eigenentropy, change of curvature and planarity. Experiments demonstrate that the coarse alignment obtained through these keypoints improves on the root mean squared error of an operator-driven coarse registration by 3-5 cm. The applicability of these keypoints is tested and verified in five different case studies.
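
    The per-point descriptors named in the abstract (eigenentropy, planarity, change of curvature) follow directly from the eigenvalues of the local covariance matrix; a generic Python sketch, with the neighbourhood size k chosen arbitrarily for the example:

      import numpy as np
      from scipy.spatial import cKDTree

      def geometric_features(points, k=20):
          """Eigenentropy, planarity and change of curvature from local covariance."""
          tree = cKDTree(points)
          _, idx = tree.query(points, k=k)
          feats = []
          for nbrs in idx:
              lam = np.sort(np.linalg.eigvalsh(np.cov(points[nbrs].T)))[::-1]
              lam = np.clip(lam, 1e-12, None)                # l1 >= l2 >= l3 > 0
              p = lam / lam.sum()
              eigenentropy = -(p * np.log(p)).sum()
              planarity = (lam[1] - lam[2]) / lam[0]
              change_of_curvature = lam[2] / lam.sum()
              feats.append((eigenentropy, planarity, change_of_curvature))
          return np.array(feats)

      pts = np.random.rand(1000, 3)
      print(geometric_features(pts, k=15)[:3])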

  1. 3d-3d correspondence revisited

    NASA Astrophysics Data System (ADS)

    Chung, Hee-Joong; Dimofte, Tudor; Gukov, Sergei; Sułkowski, Piotr

    2016-04-01

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d {N}=2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. We also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  2. A shape-based segmentation method for mobile laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Dong, Zhen

    2013-07-01

    Segmentation of mobile laser point clouds of urban scenes into objects is an important step for post-processing (e.g., interpretation) of point clouds. Point clouds of urban scenes contain numerous objects with significant size variability, complex and incomplete structures, and holes or variable point densities, raising great challenges for the segmentation of mobile laser point clouds. This paper addresses these challenges by proposing a shape-based segmentation method. The proposed method first calculates the optimal neighborhood size of each point to derive the geometric features associated with it, and then classifies the point clouds according to these geometric features using support vector machines (SVMs). Second, a set of rules is defined to segment the classified point clouds, and a similarity criterion for segments is proposed to overcome over-segmentation. Finally, the segmentation output is merged based on topological connectivity into a meaningful geometrical abstraction. The proposed method has been tested on point clouds of two urban scenes obtained by different mobile laser scanners. The results show that the proposed method segments large-scale mobile laser point clouds with good accuracy and at reasonable computational cost, and that it segments pole-like objects particularly well.
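
    One common way to choose a per-point "optimal" neighbourhood (the paper's exact criterion may differ) is to evaluate several k values and keep the one that minimises the eigenentropy of the local covariance; the resulting features can then feed an SVM classifier. A sketch under those assumptions:

      import numpy as np
      from scipy.spatial import cKDTree

      def eigenentropy(points, nbrs):
          lam = np.clip(np.sort(np.linalg.eigvalsh(np.cov(points[nbrs].T)))[::-1], 1e-12, None)
          p = lam / lam.sum()
          return -(p * np.log(p)).sum()

      def optimal_k(points, candidates=(10, 20, 40, 80)):
          """Per point, keep the neighbourhood size with minimum eigenentropy."""
          tree = cKDTree(points)
          best = np.empty(len(points), dtype=int)
          for i in range(len(points)):
              scores = [eigenentropy(points, tree.query(points[i], k=k)[1]) for k in candidates]
              best[i] = candidates[int(np.argmin(scores))]
          return best

      pts = np.random.rand(500, 3)
      print(np.bincount(optimal_k(pts))[[10, 20, 40, 80]])   # how often each k wins
      # features computed at the chosen k would then be fed to an SVM classifier
      # (e.g. sklearn.svm.SVC with an RBF kernel) as described in the abstract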

  3. Three-dimensional lidar point-cloud visualization and analysis of coseismic deformation using LidarViewer

    NASA Astrophysics Data System (ADS)

    Oskin, M. E.; Kreylos, O.; Banesh, D.; Hamann, B.; Gold, P. O.; Elliott, A. J.; Hinojosa, A.; Kellogg, L. H.

    2012-12-01

    We summarize new point-cloud analysis techniques, and results obtained from lidar data collected from the 2010 El Mayor-Cucapah earthquake surface rupture, using LidarViewer, an open-source software platform developed at the UC Davis KeckCAVES. Imaging of earthquake deformation with multi-resolution and multi-temporal lidar presents several challenges for visualization and analysis. Instruments, data resolution, and even the geodetic reference frame may change significantly between surveys. Grid-based techniques fail to adequately represent fully 3-D features, such as scarps and vegetation, and introduce aliasing artifacts that are especially troublesome when the deformation signal sought is less than the point spacing. Once obtained, the resulting dense field of 3-D vectors derived from differential lidar is difficult to visualize together with the terrain, limiting interpretation of these results. Points are the native, resolution-independent format of lidar, but working with massive point data sets can overwhelm system memory. LidarViewer overcomes these challenges using hierarchical data storage, view-dependent rendering, and an efficient, recursive data analysis framework. Pre-earthquake airborne lidar, collected as part of a regional survey, are very sparse (0.013 pts/m²) compared to the post-earthquake survey (9 pts/m²). A simple χ² minimization approach to matching these data sets takes advantage of this dramatic resolution difference to extract 3-D ground motion. We visualize the resulting displacement field in a 3-D environment using streamline-based approaches, colored by elevation change, and superimposed on the post-earthquake topography. This fused data product encourages exploration and assessment of the deformation signal and its relationship to landscape features, such as fault scarps, vegetation, and topographic relief. Terrestrial lidar scans collected within two weeks of the earthquake reveal the surface rupture at centimeter resolution.

  4. A New Stochastic Modeling of 3-D Mud Drapes Inside Point Bar Sands in Meandering River Deposits

    SciTech Connect

    Yin, Yanshu

    2013-12-15

    The environment of major sediments of eastern China oilfields is a meandering river where mud drapes inside point bar sand occur and are recognized as important factors for underground fluid flow and distribution of the remaining oil. The present detailed architectural analysis, and the related mud drapes' modeling inside a point bar, is practical work to enhance oil recovery. This paper illustrates a new stochastic modeling of mud drapes inside point bars. The method is a hierarchical strategy and composed of three nested steps. Firstly, the model of meandering channel bodies is established using the Fluvsim method. Each channel centerline obtained from the Fluvsim is preserved for the next simulation. Secondly, the curvature ratios of each meandering river at various positions are calculated to determine the occurrence of each point bar. The abandoned channel is used to characterize the geometry of each defined point bar. Finally, mud drapes inside each point bar are predicted through random sampling of various parameters, such as number, horizontal intervals, dip angle, and extended distance of mud drapes. A dataset, collected from a reservoir in the Shengli oilfield of China, was used to illustrate the mud drapes' building procedure proposed in this paper. The results show that the inner architectural elements of the meandering river are depicted fairly well in the model. More importantly, the high prediction precision from the cross validation of five drilled wells shows the practical value and significance of the proposed method.

  5. Accuracy of 3d Reconstruction in AN Illumination Dome

    NASA Astrophysics Data System (ADS)

    MacDonald, Lindsay; Toschi, Isabella; Nocerino, Erica; Hess, Mona; Remondino, Fabio; Robson, Stuart

    2016-06-01

    The accuracy of 3D surface reconstruction was compared from image sets of a Metric Test Object taken in an illumination dome by two methods: photometric stereo and improved structure-from-motion (SfM), using point cloud data from a 3D colour laser scanner as the reference. Metrics included pointwise height differences over the digital elevation model (DEM), and 3D Euclidean differences between corresponding points. The enhancement of spatial detail was investigated by blending high frequency detail from photometric normals, after a Poisson surface reconstruction, with low frequency detail from a DEM derived from SfM.

  6. Buildings and Terrain of Urban Area Point Cloud Segmentation based on PCL

    NASA Astrophysics Data System (ADS)

    Liu, Ying; Zhong, Ruofei

    2014-03-01

    One current problem in laser radar point data classification is the segmentation of buildings and urban terrain; this paper proposes a point cloud segmentation method based on the PCL library. PCL is a large cross-platform open-source C++ library that implements many efficient data structures and generic algorithms for point clouds, covering retrieval, filtering, segmentation, registration, feature extraction, curved surface reconstruction, visualization, and more. Because laser radar point clouds are large and unevenly distributed, this paper proposes organizing the data with a kd-tree; a Voxel Grid filter is then used to resample the point cloud, reducing the amount of data while preserving the shape characteristics of the point cloud; finally, the Euclidean Cluster Extraction class from the PCL segmentation module is used to segment the three-dimensional point cloud into buildings and ground. The experimental results show that calling PCL library methods and classes avoids redundant copies of the data, saves storage space, shortens compilation time and improves the running speed of the program.
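
    The original work uses the C++ PCL classes directly; the following numpy/scipy stand-in sketches the same two steps (voxel-grid downsampling followed by Euclidean cluster extraction), with the voxel size, cluster tolerance and minimum cluster size assumed for the example:

      import numpy as np
      from scipy.spatial import cKDTree

      def voxel_downsample(points, voxel=0.5):
          keys = np.floor(points / voxel).astype(int)
          _, first = np.unique(keys, axis=0, return_index=True)
          return points[np.sort(first)]                      # one representative per voxel

      def euclidean_clusters(points, tolerance=1.0, min_size=10):
          tree = cKDTree(points)
          labels = -np.ones(len(points), dtype=int)
          current = 0
          for seed in range(len(points)):
              if labels[seed] != -1:
                  continue
              stack, members = [seed], []
              labels[seed] = current
              while stack:                                   # flood fill within `tolerance`
                  p = stack.pop()
                  members.append(p)
                  for q in tree.query_ball_point(points[p], tolerance):
                      if labels[q] == -1:
                          labels[q] = current
                          stack.append(q)
              if len(members) < min_size:
                  labels[np.array(members)] = -2             # discard tiny clusters
              else:
                  current += 1
          return labels

      pts = np.vstack([np.random.randn(800, 3), np.random.randn(800, 3) + [20, 0, 0]])
      down = voxel_downsample(pts, voxel=0.5)
      labels = euclidean_clusters(down, tolerance=1.5, min_size=20)
      print(len(down), "points kept;", labels.max() + 1, "clusters found")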

  7. Simulation and Analysis of Icesat-2 Point Clouds

    NASA Astrophysics Data System (ADS)

    Kerekes, J. P.; Brown, S. D.; Zhang, J.; Yang, J.; Csatho, B. M.; Schenk, A. F.

    2014-12-01

    The ATLAS instrument on the upcoming ICESat-2 mission contains a high-repetition rate micropulse laser and photon counting detectors for high sensitivity and dense along-track sampling. As evidenced by the airborne MABEL photon-counting system, the data collected by photon-counting detectors have substantial noise and will require considerable processing to accurately retrieve the surface elevation in many situations. To study the characteristics of these data and in support of pre-launch algorithm development, researchers at RIT have been generating simulated ATLAS point clouds using their DIRSIG tool, a first-principles radiative transfer remote sensing data simulation package. Included in the simulated data are noise returns specified using pre-launch measurements of the flight detectors. These simulated data have been used to assess the accuracy of surface-finding algorithms and to study the anticipated elevation retrieval performance on complex snow and ice surfaces. This work has found single-track biases of up to 2 cm and error standard deviations of up to 10 cm on complex snow surfaces. Additionally, the research has shown quantitative sensitivities confirming that smoother surfaces result in higher accuracy and that a lower surface diffuse albedo results in a smaller bias.

  8. Satellite remote sensing and cloud modeling of St. Anthony, Minnesota storm clouds and dew point depression

    NASA Technical Reports Server (NTRS)

    Hung, R. J.; Tsao, Y. D.

    1988-01-01

    Rawinsonde data and geosynchronous satellite imagery were used to investigate the life cycles of St. Anthony, Minnesota's severe convective storms. It is found that the fully developed storm clouds, with overshooting cloud tops penetrating above the tropopause, collapsed about three minutes before the touchdown of the tornadoes. Results indicate that the probability of producing an outbreak of tornadoes causing greater damage increases when there are higher values of potential energy storage per unit area for overshooting cloud tops penetrating the tropopause. It is also found that clouds with a lower moisture content are less likely to grow into storm clouds than clouds with a higher moisture content.

  9. Correlation of Point B and Lymph Node Dose in 3D-Planned High-Dose-Rate Cervical Cancer Brachytherapy

    SciTech Connect

    Lee, Larissa J.; Sadow, Cheryl A.; Russell, Anthony; Viswanathan, Akila N.

    2009-11-01

    Purpose: To compare high dose rate (HDR) point B to pelvic lymph node dose using three-dimensional-planned brachytherapy for cervical cancer. Methods and Materials: Patients with FIGO Stage IB-IIIB cervical cancer received 70 tandem HDR applications using CT-based treatment planning. The obturator, external, and internal iliac lymph nodes (LN) were contoured. Per fraction (PF) and combined fraction (CF) right (R), left (L), and bilateral (Bil) nodal doses were analyzed. Point B dose was compared with LN dose-volume histogram (DVH) parameters by paired t test and Pearson correlation coefficients. Results: Mean PF and CF doses to point B were R 1.40 ± 0.14 Gy (CF: 7 Gy), L 1.43 ± 0.15 (CF: 7.15 Gy), and Bil 1.41 ± 0.15 (CF: 7.05 Gy). The correlation coefficients between point B and the D100, D90, D50, D2cc, D1cc, and D0.1cc LN were all less than 0.7. Only the D2cc to the obturator and the D0.1cc to the external iliac nodes were not significantly different from the point B dose. Significant differences between R and L nodal DVHs were seen, likely related to tandem deviation from irregular tumor anatomy. Conclusions: With HDR brachytherapy for cervical cancer, per fraction nodal dose approximates a dose equivalent to teletherapy. Point B is a poor surrogate for dose to specific nodal groups. Three-dimensional defined nodal contours during brachytherapy provide a more accurate reflection of delivered dose and should be part of comprehensive planning of the total dose to the pelvic nodes, particularly when there is evidence of pathologic involvement.
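
    The statistical comparison described (Pearson correlation and paired t-test between point B dose and a nodal DVH parameter) can be reproduced with a few lines of scipy; the per-fraction numbers below are hypothetical:

      import numpy as np
      from scipy import stats

      point_b = np.array([1.38, 1.42, 1.47, 1.35, 1.40, 1.44])         # Gy, hypothetical
      obturator_d2cc = np.array([1.31, 1.50, 1.39, 1.28, 1.45, 1.37])  # Gy, hypothetical

      r, p_corr = stats.pearsonr(point_b, obturator_d2cc)
      t, p_paired = stats.ttest_rel(point_b, obturator_d2cc)
      print(f"Pearson r = {r:.2f} (p = {p_corr:.2f}); paired t-test p = {p_paired:.2f}")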

  10. CASTLE3D - A Computer Aided System for Labelling Archaeological Excavations in 3D

    NASA Astrophysics Data System (ADS)

    Houshiar, H.; Borrmann, D.; Elseberg, J.; Nüchter, A.; Näth, F.; Winkler, S.

    2015-08-01

    Documentation of archaeological excavation sites with conventional methods and tools such as hand drawings, measuring tape and archaeological notes is time consuming. This process is prone to human errors and the quality of the documentation depends on the qualification of the archaeologist on site. Use of modern technology and methods in 3D surveying and 3D robotics facilitate and improve this process. Computer-aided systems and databases improve the documentation quality and increase the speed of data acquisition. 3D laser scanning is the state of the art in modelling archaeological excavation sites, historical sites and even entire cities or landscapes. Modern laser scanners are capable of data acquisition of up to 1 million points per second. This provides a very detailed 3D point cloud of the environment. 3D point clouds and 3D models of an excavation site provide a better representation of the environment for the archaeologist and for documentation. The point cloud can be used both for further studies on the excavation and for the presentation of results. This paper introduces a Computer aided system for labelling archaeological excavations in 3D (CASTLE3D). Consisting of a set of tools for recording and georeferencing the 3D data from an excavation site, CASTLE3D is a novel documentation approach in industrial archaeology. It provides a 2D and 3D visualisation of the data and an easy-to-use interface that enables the archaeologist to select regions of interest and to interact with the data in both representations. The 2D visualisation and a 3D orthogonal view of the data provide cuts of the environment that resemble the traditional hand drawings. The 3D perspective view gives a realistic view of the environment. CASTLE3D is designed as an easy-to-use on-site semantic mapping tool for archaeologists. Each project contains a predefined set of semantic information that can be used to label findings in the data. Multiple regions of interest can be joined under

  11. A graph edit dictionary for correcting errors in roof topology graphs reconstructed from point clouds

    NASA Astrophysics Data System (ADS)

    Xiong, B.; Oude Elberink, S.; Vosselman, G.

    2014-07-01

    In the task of 3D building model reconstruction from point clouds we face the problem of recovering a roof topology graph in the presence of noise, small roof faces and low point densities. Errors in roof topology graphs will seriously affect the final modelling results. The aim of this research is to automatically correct these errors. We define the graph correction as a graph-to-graph problem, similar to the spelling correction problem (also called the string-to-string problem). The graph correction is more complex than string correction, as the graphs are 2D while strings are only 1D. We design a strategy based on a dictionary of graph edit operations to automatically identify and correct the errors in the input graph. For each type of error the graph edit dictionary stores a representative erroneous subgraph as well as the corrected version. As an erroneous roof topology graph may contain several errors, a heuristic search is applied to find the optimum sequence of graph edits to correct the errors one by one. The graph edit dictionary can be expanded to include entries needed to cope with errors that were previously not encountered. Experiments show that the dictionary with only fifteen entries already properly corrects one quarter of erroneous graphs in about 4500 buildings, and even half of the erroneous graphs in one test area, achieving as high as a 95% acceptance rate of the reconstructed models.

  12. Apparent spatial blurring and displacement of a point optical source due to cloud scattering

    SciTech Connect

    Brower, K.L.

    1997-09-01

    A Monte Carlo algorithm is used to determine the apparent spatial blurring of a terrestrial 1.07 micron optical point source due to cloud scattering as seen from space. The virtual image of a point source over a virtual source plane area of 22.4 km x 22.4 km arising from cloud scattering was determined for stratus clouds (NASA cloud number 5) and for altostratus clouds. The blurring of the optical source arises from photon scattering by cloud water droplets. Displacement of the virtual source is due to the apparent illumination of the cloud-top region directly above the actual source, which, when viewed at a nonzero look angle, gives a projected displacement of the apparent source relative to the actual source. These features are quantified by an analysis of the Monte Carlo computational results.

  13. SU-C-18A-04: 3D Markerless Registration of Lung Based On Coherent Point Drift: Application in Image Guided Radiotherapy

    SciTech Connect

    Nasehi Tehrani, J; Wang, J; Guo, X; Yang, Y

    2014-06-01

    Purpose: This study evaluated a new probabilistic non-rigid registration method called coherent point drift (CPD) for real-time 3D markerless registration of lung motion during radiotherapy. Method: The Dir-lab 4DCT image datasets (www.dir-lab.com) were used to create 3D boundary element models of the lungs. In the first step, the 3D surfaces of the lungs in respiration phases T0 and T50 were segmented and divided into a finite number of linear triangular elements. Each triangle is a two-dimensional object with three vertices (each vertex has three degrees of freedom). One of the main features of lung motion is velocity coherence, so the vertices forming the lung mesh should share the features and degrees of freedom of the lung structure; vertices close to each other tend to move coherently. In the next step, the coherent point drift method was implemented to calculate the nonlinear displacement of vertices between different expiratory phases. Results: The method was applied to images of 10 patients in the Dir-lab dataset. The normal distribution of vertices relative to the origin was calculated for each expiratory stage. The results show that the maximum registration error between different expiratory phases is less than 0.4 mm (0.38 mm SI, 0.33 mm AP, 0.29 mm RL). This is a reliable method for calculating displacement vectors and the degrees of freedom (DOFs) of the lung structure in radiotherapy. Conclusions: We evaluated a new 3D registration method for the set of vertices forming the lung mesh. In this technique, the velocity coherence of lung motion is inserted as a penalty in the regularization function. The results indicate that high registration accuracy is achievable with CPD. This method is helpful for calculating displacement vectors and analyzing possible physiological and anatomical changes during treatment.

  14. A 3D point-kernel multiple scatter model for parallel-beam SPECT based on a gamma-ray buildup factor

    NASA Astrophysics Data System (ADS)

    Marinkovic, Predrag; Ilic, Radovan; Spaic, Rajko

    2007-09-01

    A three-dimensional (3D) point-kernel multiple scatter model for point spread function (PSF) determination in parallel-beam single-photon emission computed tomography (SPECT), based on a dose gamma-ray buildup factor, is proposed. This model embraces nonuniform attenuation in a voxelized object of imaging (the patient body) and multiple scattering that is treated as in point-kernel integration gamma-ray shielding problems. First-order Compton scattering is computed by means of the Klein-Nishina formula, while the multiple scattering is accounted for by making use of a dose buildup factor. An asset of the present model is the possibility of generating a complete two-dimensional (2D) PSF that can be used for 3D SPECT reconstruction by means of iterative algorithms. The proposed model is convenient in those situations where more exact techniques are not economical. In test calculations for the proposed model (a point source in a nonuniform scattering object with parallel-beam collimator geometry), the multiple-order scatter PSFs generated by the proposed model matched well with those obtained from Monte Carlo (MC) simulations. Discrepancies are observed only at the exponential tails, mostly due to the high statistical uncertainty of the MC simulations in this region rather than any inappropriateness of the model.

  15. Advanced 3-D analysis, client-server systems, and cloud computing—Integration of cardiovascular imaging data into clinical workflows of transcatheter aortic valve replacement

    PubMed Central

    Zimmermann, Mathis; Falkner, Juergen

    2013-01-01

    Degenerative aortic stenosis is highly prevalent in the aging populations of industrialized countries and is associated with poor prognosis. Surgical valve replacement has been the only established treatment with documented improvement of long-term outcome. However, many of the older patients with aortic stenosis (AS) are high-risk or ineligible for surgery. For these patients, transcatheter aortic valve replacement (TAVR) has emerged as a treatment alternative. The TAVR procedure is characterized by a lack of visualization of the operative field. Therefore, pre- and intra-procedural imaging is critical for patient selection, pre-procedural planning, and intra-operative decision-making. Incremental to conventional angiography and 2-D echocardiography, multidetector computed tomography (CT) has assumed an important role before TAVR. The analysis of 3-D CT data requires extensive post-processing during direct interaction with the dataset, using advanced analysis software. Organization and storage of the data according to complex clinical workflows and sharing of image information have become a critical part of these novel treatment approaches. Optimally, the data are integrated into a comprehensive image data file accessible to multiple groups of practitioners across the hospital. This creates new challenges for data management requiring a complex IT infrastructure, spanning across multiple locations, but is increasingly achieved with client-server solutions and private cloud technology. This article describes the challenges and opportunities created by the increased amount of patient-specific imaging data in the context of TAVR. PMID:24282750

  16. Advanced 3-D analysis, client-server systems, and cloud computing-Integration of cardiovascular imaging data into clinical workflows of transcatheter aortic valve replacement.

    PubMed

    Schoenhagen, Paul; Zimmermann, Mathis; Falkner, Juergen

    2013-06-01

    Degenerative aortic stenosis is highly prevalent in the aging populations of industrialized countries and is associated with poor prognosis. Surgical valve replacement has been the only established treatment with documented improvement of long-term outcome. However, many of the older patients with aortic stenosis (AS) are high-risk or ineligible for surgery. For these patients, transcatheter aortic valve replacement (TAVR) has emerged as a treatment alternative. The TAVR procedure is characterized by a lack of visualization of the operative field. Therefore, pre- and intra-procedural imaging is critical for patient selection, pre-procedural planning, and intra-operative decision-making. Incremental to conventional angiography and 2-D echocardiography, multidetector computed tomography (CT) has assumed an important role before TAVR. The analysis of 3-D CT data requires extensive post-processing during direct interaction with the dataset, using advanced analysis software. Organization and storage of the data according to complex clinical workflows and sharing of image information have become a critical part of these novel treatment approaches. Optimally, the data are integrated into a comprehensive image data file accessible to multiple groups of practitioners across the hospital. This creates new challenges for data management requiring a complex IT infrastructure, spanning across multiple locations, but is increasingly achieved with client-server solutions and private cloud technology. This article describes the challenges and opportunities created by the increased amount of patient-specific imaging data in the context of TAVR. PMID:24282750

  17. Automated Detection of Geomorphic Features in LiDAR Point Clouds of Various Spatial Density

    NASA Astrophysics Data System (ADS)

    Dorninger, Peter; Székely, Balázs; Zámolyi, András.; Nothegger, Clemens

    2010-05-01

    relevant results. Consequently, it could be verified that a topographic surface can be properly represented by a set of distinct planar structures. Therefore, the subsequent interpretation of those planes with respect to geomorphic characteristics is acceptable. The additional in situ geological measurements verified some of our findings, in the sense that primary directions similar to those derived from the LiDAR data set could be found in the field (Zámolyi et al., 2010, this volume). References: P. Dorninger, N. Pfeifer: "A Comprehensive Automated 3D Approach for Building Extraction, Reconstruction, and Regularization from Airborne Laser Scanning Point Clouds"; Sensors, 8 (2008), 11; 7323 - 7343. C. Nothegger, P. Dorninger: "3D Filtering of High-Resolution Terrestrial Laser Scanner Point Clouds for Cultural Heritage Documentation"; Photogrammetrie, Fernerkundung, Geoinformation, 1 (2009), 53 - 63. A. Zámolyi, B. Székely, G. Molnár, A. Roncat, P. Dorninger, A. Pocsai, M. Wyszyski, P. Drexel: "Comparison of LiDAR derived directional topographic features with geologic field evidence: a case study of Doren landslide (Vorarlberg, Austria)"; EGU General Assembly 2010, Vienna, Austria

  18. Evaluation of the Quantitative Accuracy of 3D Reconstruction of Edentulous Jaw Models with Jaw Relation Based on Reference Point System Alignment

    PubMed Central

    Li, Weiwei; Yuan, Fusong; Lv, Peijun; Wang, Yong; Sun, Yuchun

    2015-01-01

    Objectives To apply contact measurement and reference point system (RPS) alignment techniques to establish a method for 3D reconstruction of edentulous jaw models with centric relation, and to quantitatively evaluate its accuracy. Methods Upper and lower edentulous jaw models were clinically prepared, and 10 pairs of resin cylinders of the same size were adhered to the axial surfaces of the upper and lower models. The occlusal bases and the upper and lower jaw models were installed in the centric relation position. A Faro Edge 1.8 m was used to directly obtain the center points of the base surfaces of the cylinders (contact method). An Activity 880 dental scanner was used to obtain 3D data of the cylinders, from which the center points were fitted (fitting method). Three pairs of center points were used to align the virtual model to centric relation. An observation coordinate system was interactively established. The straight-line distances in X (horizontal left/right), Y (horizontal anterior/posterior), and Z (vertical) between the remaining 7 pairs of center points derived from the contact method and the fitting method were measured and analyzed using a paired t-test. Results The differences in the straight-line distances of the remaining 7 pairs of center points between the two methods were X: 0.074 ± 0.107 mm, Y: 0.168 ± 0.176 mm, and Z: −0.003 ± 0.155 mm. The paired t-test results were p > 0.05 for X and Z, and p < 0.05 for Y. Conclusion By using contact measurement and the reference point system alignment technique, highly accurate reconstruction of the vertical distance and centric relation of a digital edentulous jaw model can be achieved, which meets the design and manufacturing requirements of complete dentures. The error of the horizontal anterior/posterior jaw relation was relatively large. PMID:25659133

  19. Dynamic topology and flux rope evolution during non-linear tearing of 3D null point current sheets

    SciTech Connect

    Wyper, P. F. Pontin, D. I.

    2014-10-15

    In this work, the dynamic magnetic field within a tearing-unstable three-dimensional current sheet about a magnetic null point is described in detail. We focus on the evolution of the magnetic null points and flux ropes that are formed during the tearing process. Generally, we find that both magnetic structures are created prolifically within the layer and are non-trivially related. We examine how nulls are created and annihilated during bifurcation processes, and describe how they evolve within the current layer. The type of null bifurcation first observed is associated with the formation of pairs of flux ropes within the current layer. We also find that new nulls form within these flux ropes, both following internal reconnection and as adjacent flux ropes interact. The flux ropes exhibit a complex evolution, driven by a combination of ideal kinking and their interaction with the outflow jets from the main layer. The finite size of the unstable layer also allows us to consider the wider effects of flux rope generation. We find that the unstable current layer acts as a source of torsional magnetohydrodynamic waves and dynamic braiding of magnetic fields. The implications of these results to several areas of heliophysics are discussed.

  20. A new perspective on the relationship between cloud shade and point cloudiness

    NASA Astrophysics Data System (ADS)

    Brabec, Marek; Badescu, Viorel; Paulescu, Marius; Dumitrescu, Alexandru

    2016-05-01

    Several simple relationships between cloud shade and point cloudiness have been proposed in the last few decades. The present approach is fundamentally different in that it captures some of the hard restrictions dictated by the bounded range (0, 1) of the cloud shade. Three different models are proposed. The main aim is to produce estimates of the whole conditional distribution of the cloud shade for a given point cloudiness value. The beta-inflated model, which takes into account natural physical constraints of the cloud shade, provides the best results.

  1. Reconstruction of 3D scenes from sequences of images

    NASA Astrophysics Data System (ADS)

    Niu, Bei; Sang, Xinzhu; Chen, Duo; Cai, Yuanfa

    2013-08-01

    Reconstruction of three-dimensional (3D) scenes is an active research topic in the field of computer vision and 3D display. It is a challenge to model 3D objects rapidly and effectively, and a 3D model can be extracted from multiple images. The system only requires a sequence of images taken with cameras whose parameters are unknown, which provides a high degree of flexibility. We focus on quickly merging point clouds of the object from depth map sequences. The whole system combines algorithms from different areas of computer vision, such as camera calibration, stereo correspondence, point cloud splicing and surface reconstruction. The 3D reconstruction procedure is decomposed into a number of successive steps. First, image sequences are captured by a camera moving freely around the object. Second, the scene depth is obtained by a non-local stereo matching algorithm. Pairwise matching is realized with the Scale Invariant Feature Transform (SIFT) algorithm: an initial matching is made for the first two images of the sequence, and for each subsequent image, processed together with the previous one, the points of interest corresponding to those in the previous images are refined or corrected; the vertical parallax between the images is eliminated. The next step is camera calibration, in which the intrinsic and external parameters of the camera are calculated, yielding the relative position and orientation of the camera. A sequence of depth maps is acquired using a non-local cost aggregation method for stereo matching, and the point cloud sequence is then derived from the scene depths and the external camera parameters. The point cloud model is then approximated by a triangular wire-frame mesh to reduce geometric complexity and to tailor the model to the requirements of computer graphics visualization systems. Finally, the texture is mapped onto the wire-frame model, which can also be used for 3
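
    One step of the pipeline, back-projecting a depth map into a 3D point cloud using the calibrated intrinsics, can be sketched as follows; the intrinsic values (fx, fy, cx, cy) and the flat test depth map are assumptions for the example:

      import numpy as np

      def depth_to_points(depth, fx, fy, cx, cy):
          """Back-project a depth map (metres) into camera-frame 3D points."""
          h, w = depth.shape
          u, v = np.meshgrid(np.arange(w), np.arange(h))
          x = (u - cx) * depth / fx
          y = (v - cy) * depth / fy
          pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
          return pts[pts[:, 2] > 0]                          # drop invalid (zero-depth) pixels

      depth = np.full((480, 640), 2.0)                       # a flat wall 2 m away
      cloud = depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
      print(cloud.shape)
      # the external (extrinsic) parameters would then place these points in world space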

  2. Automatic registration of unordered point clouds acquired by Kinect sensors using an overlap heuristic

    NASA Astrophysics Data System (ADS)

    Weber, T.; Hänsch, R.; Hellwich, O.

    2015-04-01

    This paper proposes and evaluates a pipeline to automatically register point clouds captured by depth sensors like the Microsoft Kinect. The method neither makes assumptions about the view order of the sensors, nor uses any kind of other task-dependent prior knowledge. All point clouds within the input set are aligned in a common, global coordinate system by a successive application of pairwise registration steps. The order of the individual transformations is automatically derived from a global point cloud graph, which uses the overlap of two individual point clouds to establish a weighted link between them. The experiments prove the generality of the proposed approach by applying it to data from a single but moving sensor, multiple Kinects that run simultaneously, as well as laser scanning data. The obtained accuracies in terms of the mean nearest point neighbor distance are below 0.01% of the maximum point distance of the reference data in all cases.
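
    The idea of deriving the registration order from a global overlap graph can be sketched with scipy's graph routines: keep the strongest overlaps by taking a spanning tree and traverse it from a reference cloud. The overlap matrix below is hypothetical:

      import numpy as np
      from scipy.sparse import csr_matrix
      from scipy.sparse.csgraph import minimum_spanning_tree, breadth_first_order

      overlap = np.array([[0.0, 0.6, 0.1, 0.0],              # hypothetical pairwise overlap ratios
                          [0.6, 0.0, 0.5, 0.2],
                          [0.1, 0.5, 0.0, 0.7],
                          [0.0, 0.2, 0.7, 0.0]])

      # negate the weights so the *minimum* spanning tree keeps the strongest overlaps
      tree = minimum_spanning_tree(csr_matrix(-overlap))
      order, parents = breadth_first_order(tree, i_start=0, directed=False)
      for node in order[1:]:
          print(f"register cloud {node} against cloud {parents[node]}")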

  3. 3D affine registration using teaching-learning based optimization

    NASA Astrophysics Data System (ADS)

    Jani, Ashish; Savsani, Vimal; Pandya, Abhijit

    2013-09-01

    3D image registration is an emerging research field in computer vision. In this paper, two effective global optimization methods are considered for the 3D registration of point clouds. Experiments were conducted by applying each algorithm, and their performance was evaluated with respect to rigidity, similarity and affine transformations. The algorithms were compared, and their effectiveness was assessed in terms of average performance in finding the global solution that minimizes the error, expressed as the distance between the model cloud and the data cloud. The parameters of the transformation matrix were treated as the design variables. Further comparisons of the considered methods were made for computational effort, computational time and convergence. The results reveal that TLBO performed outstandingly for image processing applications involving 3D registration.

  4. Random-Profiles-Based 3D Face Recognition System

    PubMed Central

    Joongrock, Kim; Sunjin, Yu; Sangyoun, Lee

    2014-01-01

    In this paper, a novel nonintrusive three-dimensional (3D) face modeling system for random-profile-based 3D face recognition is presented. Although recent two-dimensional (2D) face recognition systems can achieve a reliable recognition rate under certain conditions, their performance is limited by internal and external changes, such as illumination and pose variation. To address these issues, 3D face recognition, which uses 3D face data, has recently received much attention. However, the performance of 3D face recognition highly depends on the precision of the acquired 3D face data, while also requiring more computational power and storage capacity than 2D face recognition systems. In this paper, we present a nonintrusive 3D face modeling system composed of a stereo vision system and an invisible near-infrared line laser, which can be directly applied to profile-based 3D face recognition. We further propose a novel random-profile-based 3D face recognition method that is memory-efficient and pose-invariant. The experimental results demonstrate that the reconstructed 3D face data consist of more than 50 k 3D points and that the method achieves a reliable recognition rate under pose variation. PMID:24691101

  5. Bootstrapping 3D fermions

    NASA Astrophysics Data System (ADS)

    Iliesiu, Luca; Kos, Filip; Poland, David; Pufu, Silviu S.; Simmons-Duffin, David; Yacoby, Ran

    2016-03-01

    We study the conformal bootstrap for a 4-point function of fermions ⟨ψψψψ⟩ in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge C_T. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. We also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  6. 3D surface defect analysis and evaluation

    NASA Astrophysics Data System (ADS)

    Yang, B.; Jia, M.; Song, G. J.; Tao, L.; Harding, K. G.

    2008-08-01

    A method is proposed for surface defect analysis and evaluation. Good 3D point clouds can now be obtained through a variety of surface profiling methods such as stylus tracers, structured light, or interferometry. In order to inspect a surface for defects, a reference surface that represents the surface without any defects first needs to be identified. This reference surface can then be fit to the point cloud. The algorithm we present finds the least-squares solution of the overdetermined equation set to obtain the parameters of the mathematical description of the reference surface. The distance between each point within the point cloud and the reference surface is then calculated using the derived reference surface equation. For analysis of the data, the user can preset a threshold distance value. If the calculated distance is greater than the threshold value, the corresponding point is marked as a defect point. The software then generates a color-coded map of the measured surface. Defect points that are connected together are grouped into a defect-clustering domain, and each defect-clustering domain is treated as one defect area. We then use a clustering domain searching algorithm to automatically find all the defect areas in the point cloud. The critical parameters that can be calculated to evaluate the defect status of a point cloud are: P-Depth, the peak depth of all defects; Defect Number, the number of surface defects; Defects/Area, the number of defects per unit area; and Defect Coverage Ratio, the ratio of the defect area to the region of interest.
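
    The evaluation flow (least-squares reference surface, point-to-surface distances, threshold, summary parameters) can be sketched for the simplest case of a planar reference surface; the synthetic sheet, threshold and defect size below are assumptions for the example:

      import numpy as np

      pts = np.random.rand(5000, 3) * [100.0, 100.0, 0.05]   # nearly flat test sheet
      pts[:50, 2] += 1.5                                      # synthetic defect region

      # reference plane z = a*x + b*y + c from the overdetermined system A p = z
      A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
      (a, b, c), *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
      dist = np.abs(pts[:, 2] - (a * pts[:, 0] + b * pts[:, 1] + c)) / np.sqrt(a * a + b * b + 1)

      threshold = 0.5                                         # user-preset distance threshold
      defect = dist > threshold
      print("defect points:", defect.sum(),
            "| P-Depth:", round(dist.max(), 3),
            "| coverage ratio:", round(defect.mean(), 4))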

  7. High-resolution spectroscopy of Saturn at 3 microns: CH4, CH3D, C2H2, C2H6, PH3, clouds, and haze

    NASA Astrophysics Data System (ADS)

    Kim, Joo Hyeon; Kim, Sang J.; Geballe, Thomas R.; Kim, Sungsoo S.; Brown, Linda R.

    2006-12-01

    We report observation and analysis of a high-resolution 2.87-3.54 μm spectrum of the southern temperate region of Saturn obtained with NIRSPEC at Keck II. The spectrum reveals absorption and emission lines of five molecular species as well as spectral features of haze particles. The ν+ν band of CH3D is detected in absorption between 2.87 and 2.92 μm, and we derived from it a mixing ratio approximately consistent with the Infrared Space Observatory result. The ν band of C2H2 is also detected in absorption between 2.95 and 3.05 μm; analysis indicates a sudden drop in the C2H2 mixing ratio at 15 mbar (130 km above the 1 bar level), probably due to condensation in the low stratosphere. The presence of the ν+ν+ν band of C2H6 near 3.07 μm, first reported by Bjoraker et al. [Bjoraker, G.L., Larson, H.P., Fink, U., 1981. Astrophys. J. 248, 856-862], is confirmed, and a C2H6 condensation altitude of 10 mbar (140 km) in the low stratosphere is determined. We assign weak emission lines within the 3.3 μm band of CH4 to the ν band of C2H6, and derive a mixing ratio of 9±4×10 for this species. Most of the C2H6 3.3 μm line emission arises in the altitude range 460-620 km (at ~μbar pressure levels), much higher than the 160-370 km range where the 12 μm thermal molecular line emission of this species arises. At 2.87-2.90 μm the major absorber is tropospheric PH3. The cloud level determined here and at 3.22-3.54 μm is 390-460 mbar (~30 km), somewhat higher than found by Kim and Geballe [Kim, S.J., Geballe, T.R., 2005. Icarus 179, 449-458] from analysis of a low resolution spectrum. A broad absorption feature at 2.96 μm, which might be due to NH3 ice particles in saturnian clouds, is also present. The effect of a haze layer at about 125 km (~12 mbar level) on the 3.20-3.54 μm spectrum, which was not apparent in the low resolution spectrum, is clearly evident in the high resolution data, and the spectral properties of the haze particles suggest that

  8. Saturation point representation of cloud-top entrainment instability

    NASA Technical Reports Server (NTRS)

    Boers, Reinout

    1991-01-01

    Cloud-top entrainment instability was investigated using a mixing line analysis. Mixing time scales are closely related to the actual size of the parcel, so that local instabilities are largely dependent on the scales of mixing near the cloud top. Given a fixed transport velocity, variation over a small range of parcel length scales (parcel mixing velocities) turns an energy-producing mixing process into an energy-consuming mixing process. It is suggested that a single criterion for cloud-top entrainment instability will not be found due to the role of at least three factors operating more or less independently; the stability of the mixing line, the entrainment speed, and the strength of the internal boundary-layer circulation.

  9. An automatic registration algorithm for the scattered point clouds based on the curvature feature

    NASA Astrophysics Data System (ADS)

    He, Bingwei; Lin, Zeming; Li, Y. F.

    2013-03-01

    Object modeling by the registration of multiple range images has important applications in reverse engineering and computer vision. In order to register multi-view scattered point clouds, a novel curvature-based automatic registration algorithm is proposed in this paper, which can solve the registrat