A Point Cloud Classification Approach Based on Vertical Structures of Ground Objects
NASA Astrophysics Data System (ADS)
Zhao, Y.; Hu, Q.; Hu, W.
2018-04-01
This paper proposes a novel method for point cloud classification using the vertical structural characteristics of ground objects. Because urbanization is proceeding rapidly, urban ground objects change frequently, and conventional photogrammetric methods cannot update ground-object information efficiently; LiDAR (Light Detection and Ranging) technology is therefore employed for this task. LiDAR data, namely point cloud data, provide detailed three-dimensional coordinates of ground objects, but the data are discrete and unorganized. To classify ground objects from a point cloud, we first construct horizontal grids and vertical layers to organize the data, then calculate vertical characteristics, including density and measures of dispersion, and form a characteristic curve for each grid cell. Using PCA and the K-means algorithm, we analyze the similarities and differences among the characteristic curves. Curves with similar features are assigned to the same class, and the point cloud corresponding to those curves is classified accordingly. The whole process is simple but effective, and the approach requires no assistance from other data sources. In this study, the point cloud is classified into three classes: vegetation, buildings, and roads. With a horizontal grid spacing of 3 m, a vertical layer spacing of 1 m, density as the vertical characteristic, and 11 dimensions retained after PCA, the overall classification precision is about 86.31%. The result helps us quickly understand the distribution of various ground objects.
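A minimal sketch of the grid-and-layer pipeline described above, assuming a NumPy array of XYZ coordinates and the scikit-learn implementations of PCA and K-means; the cell size, layer spacing, PCA dimensionality and class count mirror the values quoted in the abstract, but the helper itself is hypothetical and is not the authors' code.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def classify_grid_cells(points, cell=3.0, layer=1.0, n_layers=30,
                        n_components=11, n_classes=3):
    """Bin points into 3 m x 3 m cells and 1 m vertical layers, build a per-cell
    density curve, compress the curves with PCA and cluster them with K-means.
    Assumes there are more occupied cells than PCA components."""
    xy = ((points[:, :2] - points[:, :2].min(axis=0)) // cell).astype(int)
    z = ((points[:, 2] - points[:, 2].min()) // layer).astype(int).clip(0, n_layers - 1)
    cell_ids, cell_index = np.unique(xy, axis=0, return_inverse=True)
    curves = np.zeros((len(cell_ids), n_layers))
    np.add.at(curves, (cell_index, z), 1.0)              # vertical point-count (density) curve per cell
    curves /= curves.sum(axis=1, keepdims=True) + 1e-9   # normalise so sparse and dense cells compare
    features = PCA(n_components=n_components).fit_transform(curves)
    labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(features)
    return cell_ids, labels                              # one class label (e.g. vegetation/building/road) per cell
```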
Heat capacity anomaly in a self-aggregating system: Triblock copolymer 17R4 in water
NASA Astrophysics Data System (ADS)
Dumancas, Lorenzo V.; Simpson, David E.; Jacobs, D. T.
2015-05-01
The reverse Pluronic, triblock copolymer 17R4 is formed from poly(propylene oxide) (PPO) and poly(ethylene oxide) (PEO): PPO14 - PEO24 - PPO14, where the number of monomers in each block is denoted by the subscripts. In water, 17R4 has a micellization line marking the transition from a unimer network to self-aggregated spherical micelles which is quite near a cloud point curve above which the system separates into copolymer-rich and copolymer-poor liquid phases. The phase separation has an Ising-like, lower consolute critical point with a well-determined critical temperature and composition. We have measured the heat capacity as a function of temperature using an adiabatic calorimeter for three compositions: (1) the critical composition where the anomaly at the critical point is analyzed, (2) a composition much less than the critical composition with a much smaller spike when the cloud point curve is crossed, and (3) a composition near where the micellization line intersects the cloud point curve that only shows micellization. For the critical composition, the heat capacity anomaly very near the critical point is observed for the first time in a Pluronic/water system and is described well as a second-order phase transition resulting from the copolymer-water interaction. For all compositions, the onset of micellization is clear, but the formation of micelles occurs over a broad range of temperatures and never becomes complete because micelles form differently in each phase above the cloud point curve. The integrated heat capacity gives an enthalpy that is smaller than the standard state enthalpy of micellization given by a van't Hoff plot, a typical result for Pluronic systems.
Section Curve Reconstruction and Mean-Camber Curve Extraction of a Point-Sampled Blade Surface
Li, Wen-long; Xie, He; Li, Qi-dong; Zhou, Li-ping; Yin, Zhou-ping
2014-01-01
The blade is one of the most critical parts of an aviation engine, and a small change in blade geometry may significantly affect the engine's dynamic performance. Rapid advances in 3D scanning techniques have enabled inspection of the blade shape using a dense and accurate point cloud. This paper proposes a new method for achieving two common tasks in blade inspection: section curve reconstruction and mean-camber curve extraction from a point-cloud representation. Mathematical morphology is extended and applied to suppress the effect of measuring defects and to generate an ordered sequence of 2D measured points in the section plane. Then, energy and distance are minimized to iteratively smooth the measured points, approximate the section curve and extract the mean-camber curve. In addition, a turbine blade is machined and scanned to observe the curvature variation, energy variation and approximation error, which demonstrates the effectiveness of the proposed method. The proposed method is simple to implement and can be applied in aviation casting-blade finish inspection, large forging-blade allowance inspection and vision-guided robot grinding localization. PMID:25551467
Datum Feature Extraction and Deformation Analysis Method Based on Normal Vector of Point Cloud
NASA Astrophysics Data System (ADS)
Sun, W.; Wang, J.; Jin, F.; Liang, Z.; Yang, Y.
2018-04-01
To address the lack of suitable analysis methods when applying three-dimensional laser scanning to deformation monitoring, an efficient method for extracting datum features and analysing deformation based on the normal vectors of a point cloud is proposed. First, a kd-tree is used to establish the topological relations in the point cloud. Datum points are detected by tracking the point cloud's normal vectors, which are determined from the normals of locally fitted planes. A cubic B-spline curve is then fitted to the datum points. Finally, the datum elevation and the inclination angle of each radial point are calculated from the fitted curve, and the deformation information is analysed. The approach was verified on a real large-scale tank data set captured with a terrestrial laser scanner in a chemical plant. The results show that the method obtains the complete information of the monitored object quickly and comprehensively, and accurately reflects the deformation of the datum features.
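A minimal sketch of the normal-vector and B-spline steps described above, assuming a NumPy array of XYZ points; the neighbourhood size and the helper names (estimate_normals, fit_datum_curve) are illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.interpolate import splprep, splev

def estimate_normals(points, k=20):
    """Estimate a unit normal for each point from the PCA of its k nearest neighbours
    (the eigenvector of the smallest eigenvalue of the local covariance is the plane normal)."""
    tree = cKDTree(points)                      # kd-tree establishes the topological relations
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nb in enumerate(idx):
        nbrs = points[nb] - points[nb].mean(axis=0)
        _, _, vt = np.linalg.svd(nbrs, full_matrices=False)
        normals[i] = vt[-1]
    return normals

def fit_datum_curve(datum_points, smooth=0.0):
    """Fit a cubic B-spline through datum points (assumed ordered along the curve)
    and resample it densely for elevation/inclination evaluation."""
    tck, _ = splprep(datum_points.T, k=3, s=smooth)
    return np.array(splev(np.linspace(0.0, 1.0, 500), tck)).T
```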
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Renyu; Demory, Brice-Olivier; Seager, Sara
2015-03-20
Kepler has detected numerous exoplanet transits by measuring stellar light in a single visible-wavelength band. In addition to detection, the precise photometry provides phase curves of exoplanets, which can be used to study the dynamic processes on these planets. However, the interpretation of these observations can be complicated by the fact that visible-wavelength phase curves can represent both thermal emission and scattering from the planets. Here we present a semi-analytical model framework that can be applied to study Kepler and future visible-wavelength phase curve observations of exoplanets. The model efficiently computes the reflection and thermal emission components for both rocky and gaseous planets, considering both homogeneous and inhomogeneous surfaces or atmospheres. We analyze the phase curves of the gaseous planet Kepler-7b and the rocky planet Kepler-10b using the model. In general, we find that a hot exoplanet's visible-wavelength phase curve with a significant phase offset can usually be explained by two classes of solutions: one class requires a thermal hot spot shifted to one side of the substellar point, and the other requires reflective clouds concentrated on the same side of the substellar point. For Kepler-7b in particular, reflective clouds located on the west side of the substellar point best explain its phase curve. The reflectivity of the clear part of the atmosphere should be less than 7%, that of the cloudy part should be greater than 80%, and the cloud boundary should be located at 11° ± 3° to the west of the substellar point. We suggest that single-band photometry surveys could yield valuable information on exoplanet atmospheres and surfaces.
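For reference, the reflected-light part of such a phase curve reduces, for a homogeneous Lambertian planet, to a closed-form phase function; the sketch below is a generic illustration of that baseline (the parameter values are invented), not the authors' semi-analytical framework, which additionally treats thermal emission and inhomogeneous clouds.

```python
import numpy as np

def lambert_phase_curve(phase, geometric_albedo, rp_over_a):
    """Planet-to-star flux ratio for a homogeneous Lambertian sphere.
    phase: orbital phase in [0, 1), with 0 at transit and 0.5 at full phase (occultation)."""
    alpha = np.abs(2.0 * np.pi * (phase - 0.5))          # star-planet-observer angle
    alpha = np.minimum(alpha, 2.0 * np.pi - alpha)
    lam = (np.sin(alpha) + (np.pi - alpha) * np.cos(alpha)) / np.pi   # Lambert phase function
    return geometric_albedo * rp_over_a**2 * lam

# Illustrative values only (not fitted to Kepler-7b or Kepler-10b):
phase = np.linspace(0.0, 1.0, 200)
ppm = 1e6 * lambert_phase_curve(phase, geometric_albedo=0.3, rp_over_a=0.01)
```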
Bayesian Multiscale Modeling of Closed Curves in Point Clouds
Gu, Kelvin; Pati, Debdeep; Dunson, David B.
2014-01-01
Modeling object boundaries based on image or point cloud data is frequently necessary in medical and scientific applications ranging from detecting tumor contours for targeted radiation therapy, to the classification of organisms based on their structural information. In low-contrast images or sparse and noisy point clouds, there is often insufficient data to recover local segments of the boundary in isolation. Thus, it becomes critical to model the entire boundary in the form of a closed curve. To achieve this, we develop a Bayesian hierarchical model that expresses highly diverse 2D objects in the form of closed curves. The model is based on a novel multiscale deformation process. By relating multiple objects through a hierarchical formulation, we can successfully recover missing boundaries by borrowing structural information from similar objects at the appropriate scale. Furthermore, the model’s latent parameters help interpret the population, indicating dimensions of significant structural variability and also specifying a ‘central curve’ that summarizes the collection. Theoretical properties of our prior are studied in specific cases and efficient Markov chain Monte Carlo methods are developed, evaluated through simulation examples and applied to panorex teeth images for modeling teeth contours and also to a brain tumor contour detection problem. PMID:25544786
NASA Astrophysics Data System (ADS)
Székely, B.; Kania, A.; Standovár, T.; Heilmeier, H.
2016-06-01
The horizontal variation and vertical layering of the vegetation are important properties of the canopy structure determining the habitat; the three-dimensional (3D) distribution of objects (shrub layers, understory vegetation, etc.) is related to environmental factors (e.g., illumination, visibility). It has been shown that gaps in forests and mosaic-like structures are essential to biodiversity, and various methods have been introduced to quantify this property. As the distribution of gaps in the vegetation is a multi-scale phenomenon, scale-independent methods are preferred in order to capture it in its entirety; one of these is the calculation of lacunarity. We used Airborne Laser Scanning point clouds measured over a forest plantation situated in a former floodplain. The flat topographic relief ensured that tree growth is independent of topographic effects. The tree pattern in the plantation crops provided various quasi-regular and irregular patterns, as well as various stand ages. The point clouds were voxelized, and layers of voxels were treated as two-dimensional images. The images computed for a certain vicinity of each reference point were used to calculate lacunarity curves, providing a stack of lacunarity curves for each reference point. These sets of curves were compared to reveal spatial changes of this property. As the dynamic range of the lacunarity values is very large, the natural logarithms of the values were considered. The logarithms of the lacunarity functions show canopy-related variations, which we analysed along transects. The spatial variation can be related to forest properties and ecology-specific aspects.
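Lacunarity of a voxel layer can be computed with the standard gliding-box algorithm; the sketch below assumes a 2D binary occupancy image produced by the voxelization step and is a generic implementation rather than the processing chain used in the study.

```python
import numpy as np

def gliding_box_lacunarity(image, box_sizes):
    """Gliding-box lacunarity of a 2D binary occupancy image.
    Lambda(r) = <M^2> / <M>^2, where M is the box mass (number of occupied cells in an r x r box).
    Returns the natural logarithm of the lacunarity, as used in the study above."""
    image = np.asarray(image, dtype=float)
    curve = []
    for r in box_sizes:
        # sum of occupied cells in every r x r box via a padded 2D cumulative sum
        c = np.cumsum(np.cumsum(image, axis=0), axis=1)
        c = np.pad(c, ((1, 0), (1, 0)))
        masses = (c[r:, r:] - c[:-r, r:] - c[r:, :-r] + c[:-r, :-r]).ravel()
        curve.append(np.mean(masses**2) / (np.mean(masses)**2 + 1e-12))
    return np.log(np.array(curve))
```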
NASA Astrophysics Data System (ADS)
Yu, Zhijing; Ma, Kai; Wang, Zhijun; Wu, Jun; Wang, Tao; Zhuge, Jingchang
2018-03-01
A blade is one of the most important components of an aircraft engine. Because of its high manufacturing cost, methods for repairing damaged blades are indispensable. To obtain a surface model of the blades, this paper proposes a modeling method using speckle patterns and a virtual stereo vision system. First, the blades are sprayed evenly to create random speckle patterns, and point clouds of the blade surfaces are calculated from these patterns with the virtual stereo vision system. Second, boundary points are extracted with step lengths that vary according to curvature and are fitted with a cubic B-spline curve to obtain the blade surface envelope. Finally, the surface model of the blade is established from the envelope curves and the point clouds. Experimental results show that the resulting surface model of aircraft engine blades is fair and accurate.
Methodologies for Development of Patient Specific Bone Models from Human Body CT Scans
NASA Astrophysics Data System (ADS)
Chougule, Vikas Narayan; Mulay, Arati Vinayak; Ahuja, Bharatkumar Bhagatraj
2016-06-01
This work deals with the development of an algorithm for physical replication of patient-specific human bones and construction of the corresponding implant/insert RP models using a Reverse Engineering approach applied to non-invasive medical images for surgical purposes. In the medical field, volumetric data, i.e., voxel- and triangular-facet-based models, are primarily used for bio-modelling and visualization, which requires a large amount of memory. On the other hand, recent advances in Computer Aided Design (CAD) technology provide additional facilities/functions for the design, prototyping and manufacturing of objects with freeform surfaces based on boundary representation techniques. This work presents a process for physical replication of 3D rapid prototyping (RP) models of human bone using various CAD modeling techniques applied to 3D point cloud data obtained from non-invasive CT/MRI scans in DICOM 3.0 format. The point cloud data are used to construct a 3D CAD model by fitting B-spline curves through the points and then fitting surfaces between these curve networks using swept blend techniques. Alternatively, a triangular mesh can be generated directly from the 3D point cloud data, without developing any surface model in commercial CAD software; here the Delaunay tetrahedralization approach is used to process the 3D point cloud data and obtain the STL file. The STL file generated from the 3D point cloud data is used as the basic input for the RP process. A CT scan of a metacarpus (human bone) is used as the case study for generating the 3D RP model. A 3D physical model of the human bone is produced on a rapid prototyping machine, and its virtual reality model is presented for visualization. The CAD models generated by the different techniques are compared for accuracy and reliability. The results of this research work are assessed for clinical reliability in the replication of human bone in the medical field.
A two step method to treat variable winds in fallout smearing codes. Master's thesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hopkins, A.T.
1982-03-01
A method was developed to treat non-constant winds in fallout smearing codes. The method consists of two steps: (1) location of the curved hotline and (2) determination of the off-hotline activity. To locate the curved hotline, the method begins with an initial cloud of 20 discretely sized pancake clouds located at altitudes determined by the weapon yield. Next, the particles are tracked through a 300-layer atmosphere, translating with different winds in each layer. The connection of the 20 particles' impact points is the fallout hotline. The hotline location was found to be independent of the assumed particle size distribution in the stabilized cloud. The off-hotline activity distribution is represented as a two-dimensional Gaussian function centered on the curved hotline. Hotline locator model results were compared to numerical calculations for a hypothetical 100 kt burst and to the actual hotline produced by the Castle Bravo 15 Mt nuclear test.
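A toy version of the hotline locator can be written by letting each discrete particle fall at constant speed through stacked wind layers and recording where it lands; the layer structure, wind vectors and fall speeds below are invented placeholders, not the thesis's 300-layer atmosphere.

```python
import numpy as np

def impact_points(release_heights, fall_speeds, layer_edges, layer_winds):
    """Ground impact point for each particle falling at constant speed through
    horizontally uniform wind layers. Connecting the impacts, ordered by fall
    speed, traces the curved hotline described above.
    layer_edges: altitudes bounding the layers, ascending, starting at 0 (m)
    layer_winds: (u, v) wind vector per layer (m/s)"""
    impacts = []
    for z0, w in zip(release_heights, fall_speeds):
        x = np.zeros(2)
        for i, (lo, hi) in enumerate(zip(layer_edges[:-1], layer_edges[1:])):
            if z0 <= lo:
                break
            thickness = min(z0, hi) - lo                          # portion of this layer the particle crosses
            x = x + np.asarray(layer_winds[i]) * thickness / w    # drift = wind * residence time
        impacts.append(x)
    return np.array(impacts)

# Illustrative call: three particle sizes released from 8 km, falling at different speeds
edges = np.array([0.0, 2000.0, 5000.0, 9000.0])
winds = [(10.0, 0.0), (15.0, 5.0), (20.0, 10.0)]
pts = impact_points([8000.0, 8000.0, 8000.0], [1.0, 3.0, 8.0], edges, winds)
```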
Cloud point phenomena for POE-type nonionic surfactants in a model room temperature ionic liquid.
Inoue, Tohru; Misono, Takeshi
2008-10-15
The cloud point phenomenon has been investigated for solutions of polyoxyethylene (POE)-type nonionic surfactants (C(12)E(5), C(12)E(6), C(12)E(7), C(10)E(6), and C(14)E(6)) in 1-butyl-3-methylimidazolium tetrafluoroborate (bmimBF(4)), a typical room temperature ionic liquid (RTIL). The cloud point, T(c), increases with the elongation of the POE chain, while it decreases with increasing hydrocarbon chain length. This demonstrates that the solvophilicity/solvophobicity of the surfactants in the RTIL comes from the POE chain/hydrocarbon chain. Compared with aqueous systems, the chain length dependence of T(c) is larger for the RTIL system with regard to both the POE and hydrocarbon chains; in particular, the hydrocarbon chain length affects T(c) much more strongly in the RTIL system than in equivalent aqueous systems. In a similar fashion to the much-studied aqueous systems, micellar growth is also observed in this RTIL solvent as the temperature approaches T(c). The cloud point curves have been analyzed using a Flory-Huggins-type model based on phase separation in polymer solutions.
Error reduction in three-dimensional metrology combining optical and touch probe data
NASA Astrophysics Data System (ADS)
Gerde, Janice R.; Christens-Barry, William A.
2010-08-01
Analysis of footwear under the Harmonized Tariff Schedule of the United States (HTSUS) is partly based on identifying the boundary ("parting line") between the "external surface area upper" (ESAU) and the sample's sole. Often, that boundary is obscured. We establish the parting line as the curved intersection between the sample's outer surface and its insole surface. The outer surface is determined by discrete point cloud coordinates obtained using a laser scanner. The insole surface is defined by point cloud data obtained using a touch-probe device, a coordinate measuring machine (CMM). Because these point cloud data sets do not overlap spatially, a polynomial surface is fitted to the insole data and extended to intersect a mesh fitted to the outer-surface point cloud. This line of intersection defines the ESAU boundary, permitting further fractional area calculations to proceed. The defined parting line location is sensitive to the polynomial used to fit the experimental data, and extrapolation to the intersection with the ESAU can heighten this sensitivity. We discuss a methodology for transforming these data into a common reference frame. Three error sources are considered: measurement error in the point cloud coordinates, error from fitting a polynomial surface to a point cloud and then extrapolating beyond the data set, and error from the reference frame transformation. These error sources can influence the calculated surface areas. We describe experiments to assess the error magnitude, the sensitivity of the calculated results to these errors, and ways of minimizing the error impact on the calculated quantities. Ultimately, we must ensure that the statistical error from these procedures is minimized and within acceptance criteria.
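A least-squares quadratic surface fit of the kind applied to the insole data can be sketched as follows; the polynomial degree is an assumption, and the sensitivity discussed above is precisely the sensitivity of the extrapolated intersection to such choices.

```python
import numpy as np

def fit_quadratic_surface(points):
    """Fit z = c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2 to touch-probe points (n x 3 array)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

def eval_surface(coeffs, x, y):
    """Evaluate the fitted surface, including extrapolation beyond the probed insole region."""
    return (coeffs[0] + coeffs[1] * x + coeffs[2] * y
            + coeffs[3] * x**2 + coeffs[4] * x * y + coeffs[5] * y**2)
```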
NASA Astrophysics Data System (ADS)
Bureick, Johannes; Alkhatib, Hamza; Neumann, Ingo
2016-03-01
In many geodetic engineering applications it is necessary to describe a measured point cloud, acquired, e.g., by laser scanner, by means of free-form curves or surfaces, e.g., with B-splines as basis functions. State-of-the-art approaches to determining B-splines yield results that are seriously affected by data gaps and outliers. Optimal and robust B-spline fitting depends, however, on an optimal selection of the knot vector. Hence, our approach combines Monte Carlo methods with the location and curvature of the measured data in order to determine the knot vector of the B-spline in such a way that no oscillating effects occur at the edges of data gaps. We introduce an optimized approach based on weights computed by means of resampling techniques. In order to minimize the effect of outliers, we apply robust M-estimators for the estimation of the control points. The approach is applied to a multi-sensor system based on kinematic terrestrial laser scanning in the field of rail track inspection.
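The robust control-point estimation can be illustrated with iteratively reweighted least squares using Huber weights, as sketched below; the Monte-Carlo knot selection itself is not reproduced, and the fixed knot vector passed in is assumed to be given.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def robust_bspline(x, y, knots, n_iter=5, delta=1.0):
    """Cubic B-spline fit whose control points are re-estimated with Huber
    (M-estimator) weights so that single outliers cannot bend the curve.
    x must be strictly increasing and knots must lie strictly inside (x[0], x[-1])."""
    w = np.ones_like(y)
    for _ in range(n_iter):
        spline = LSQUnivariateSpline(x, y, t=knots, w=w, k=3)
        r = y - spline(x)
        scale = 1.4826 * np.median(np.abs(r)) + 1e-12   # robust sigma estimate (MAD)
        u = np.abs(r) / (scale * delta)
        w = np.where(u <= 1.0, 1.0, 1.0 / u)            # Huber weights: downweight large residuals
    return spline
```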
A Case Study of Reverse Engineering Integrated in an Automated Design Process
NASA Astrophysics Data System (ADS)
Pescaru, R.; Kyratsis, P.; Oancea, G.
2016-11-01
This paper presents a design methodology which automates the generation of curves extracted from point clouds obtained by digitizing physical objects. The methodology is described for a product belonging to the consumer goods industry, namely a footwear-type product with a complex shape containing many curves. The final result is the automated generation of wrapping curves, surfaces and solids according to the characteristics of the customer's foot and the preferences for the chosen model, which leads to the development of customized products.
NASA Astrophysics Data System (ADS)
Ghasemi, Elham; Kaykhaii, Massoud
2016-07-01
A novel, green, simple and fast method was developed for the spectrophotometric determination of Malachite green, Crystal violet, and Rhodamine B in water samples based on micro-cloud point extraction (MCPE) at room temperature. This is the first report on the application of MCPE to dyes. In this method, to reach the cloud point at room temperature, the MCPE procedure was carried out in brine using Triton X-114 as a non-ionic surfactant. The factors influencing the extraction efficiency were investigated and optimized. Under the optimized conditions, the calibration curves were linear in the concentration ranges of 0.06-0.60 mg/L, 0.10-0.80 mg/L, and 0.03-0.30 mg/L, with enrichment factors of 29.26, 85.47 and 28.36, respectively, for Malachite green, Crystal violet, and Rhodamine B. Limits of detection were between 2.2 and 5.1 μg/L.
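The linearity and detection-limit figures quoted above follow from an ordinary linear calibration; a generic version with made-up absorbance values (and the common 3.3σ/slope convention, which the abstract does not specify) is:

```python
import numpy as np

# Hypothetical calibration data for one dye (concentration in mg/L, absorbance in AU)
conc = np.array([0.06, 0.15, 0.30, 0.45, 0.60])
absorbance = np.array([0.021, 0.052, 0.105, 0.158, 0.210])

slope, intercept = np.polyfit(conc, absorbance, 1)
residual_sd = np.std(absorbance - (slope * conc + intercept), ddof=2)  # n-2 degrees of freedom
lod = 3.3 * residual_sd / slope          # limit of detection
loq = 10.0 * residual_sd / slope         # limit of quantification
print(f"LOD = {lod * 1000:.1f} ug/L, LOQ = {loq * 1000:.1f} ug/L")
```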
Street curb recognition in 3D point cloud data using morphological operations
NASA Astrophysics Data System (ADS)
Rodríguez-Cuenca, Borja; Concepción Alonso-Rodríguez, María; García-Cortés, Silverio; Ordóñez, Celestino
2015-04-01
Accurate and automatic detection of cartographic entities saves a great deal of time and money when creating and updating cartographic databases. The current trend in remote sensing feature extraction is to develop methods that are as automatic as possible, i.e., algorithms that can obtain accurate results with the least possible human intervention. Non-manual curb detection is an important issue in road maintenance, 3D urban modeling, and autonomous navigation. This paper focuses on the semi-automatic recognition of curbs and street boundaries using a 3D point cloud registered by a mobile laser scanner (MLS) system. The work is divided into four steps. First, a coordinate system transformation is carried out, moving from a global coordinate system to a local one. Then, to simplify the calculations involved in the procedure, the measured point cloud is rasterized by projecting it onto the XY plane, converting the original 3D data into a 2D image. To determine the location of curbs in the image, image processing techniques such as thresholding and morphological operations are applied. Finally, the upper and lower edges of the curbs are detected by an unsupervised classification algorithm based on the curvature and roughness of the points that represent curbs. The proposed method is valid in both straight and curved road sections and is applicable to both laser scanner and stereo vision 3D data because it is independent of the scanning geometry. The method has been successfully tested on two datasets measured by different sensors. The first dataset was measured by a TOPCON sensor in the Spanish town of Cudillero; it comprises more than 6,000,000 points and covers a 400-meter street. The second dataset was measured by a RIEGL sensor in the Austrian town of Horn; it comprises 8,000,000 points and represents a 160-meter street. The proposed method provides success rates in curb recognition of over 85% in both datasets.
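A compressed sketch of the rasterization and morphological steps, assuming a NumPy point array and SciPy's ndimage morphology; the cell size, height-step thresholds and structuring elements are invented placeholders rather than the paper's parameters.

```python
import numpy as np
from scipy import ndimage

def curb_candidate_mask(points, cell=0.2, step_min=0.05, step_max=0.30):
    """Rasterize a point cloud onto the XY plane and flag cells whose internal
    height range looks like a curb step (a few centimetres to ~30 cm)."""
    ij = ((points[:, :2] - points[:, :2].min(axis=0)) // cell).astype(int)
    shape = ij.max(axis=0) + 1
    zmax = np.full(shape, -np.inf)
    zmin = np.full(shape, np.inf)
    np.maximum.at(zmax, (ij[:, 0], ij[:, 1]), points[:, 2])
    np.minimum.at(zmin, (ij[:, 0], ij[:, 1]), points[:, 2])
    drop = np.where(np.isfinite(zmax) & np.isfinite(zmin), zmax - zmin, 0.0)
    mask = (drop > step_min) & (drop < step_max)                       # threshold on the height step
    mask = ndimage.binary_closing(mask, structure=np.ones((3, 3)))     # bridge small gaps along the curb
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))     # remove isolated cells
    return mask
```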
NASA Astrophysics Data System (ADS)
Ripepi, V.; Moretti, M. I.; Clementini, G.; Marconi, M.; Cioni, M. R.; Marquette, J. B.; Tisserand, P.
2012-09-01
The VISTA Magellanic Cloud (VMC, PI M.R. Cioni) survey is collecting Ks-band time-series photometry of the system formed by the two Magellanic Clouds (MC) and the "bridge" that connects them. These data are used to build Ks-band light curves of the MC RR Lyrae stars and Classical Cepheids and to determine absolute distances and the 3D geometry of the whole system using the Ks-band period-luminosity (PLKs), the period-luminosity-color (PLC) and the Wesenheit relations applicable to these types of variables. As an example of the survey's potential we present results from the VMC observations of two fields centered, respectively, on the South Ecliptic Pole and on the 30 Doradus star-forming region of the Large Magellanic Cloud. The VMC Ks-band light curves of the RR Lyrae stars in these two regions have very good photometric quality, with typical errors for the individual data points in the range of ˜0.02 to 0.05 mag. The Cepheids have excellent light curves (typical errors of ˜0.01 mag). The average Ks magnitudes derived for both types of variables were used to derive PLKs relations that are in general in good agreement, within the errors, with the literature data, and show a smaller scatter than previous studies.
Probing exoplanet clouds with optical phase curves.
Muñoz, Antonio García; Isaak, Kate G
2015-11-03
Kepler-7b is to date the only exoplanet for which clouds have been inferred from the optical phase curve--from visible-wavelength whole-disk brightness measurements as a function of orbital phase. Added to this, the fact that the phase curve appears dominated by reflected starlight makes this close-in giant planet a unique study case. Here we investigate the information on coverage and optical properties of the planet clouds contained in the measured phase curve. We generate cloud maps of Kepler-7b and use a multiple-scattering approach to create synthetic phase curves, thus connecting postulated clouds with measurements. We show that optical phase curves can help constrain the composition and size of the cloud particles. Indeed, model fitting for Kepler-7b requires poorly absorbing particles that scatter with low-to-moderate anisotropic efficiency, conclusions consistent with condensates of silicates, perovskite, and silica of submicron radii. We also show that we are limited in our ability to pin down the extent and location of the clouds. These considerations are relevant to the interpretation of optical phase curves with general circulation models. Finally, we estimate that the spherical albedo of Kepler-7b over the Kepler passband is in the range 0.4-0.5.
Visible Wavelength Exoplanet Phase Curves from Global Albedo Maps
NASA Astrophysics Data System (ADS)
Webber, Matthew; Cahoy, Kerri Lynn
2015-01-01
To investigate the effect of three-dimensional global albedo maps, we use an albedo model that calculates albedo spectra for each point on a longitude-latitude grid across the planetary disk, uses the appropriate source-observer geometry for each location, and then weights and sums these spectra using the Tschebychev-Gauss integration method. This structure permits detailed 3D modeling of an illuminated planetary disk and computes disk-integrated phase curves. Different pressure-temperature profiles are used for each location based on geometry and dynamics. We directly couple high-density pressure maps from global dynamic radiative-transfer models to compute global cloud maps. Cloud formation is determined from the correlation of the species condensation curves with the temperature-pressure profiles. We use the detailed cloud patterns, with spatially varying composition and temperature, to determine the observable albedo spectra and phase curves for the exoplanets Kepler-7b and HD189733b. These albedo spectra are used to compute planet-star flux ratios using PHOENIX stellar models, exoplanet orbital parameters, and telescope transmission functions. Insight from the Earthshine spectrum and solid-surface albedo functions (e.g., water, ice, snow, rocks) is used with our planetary grid to determine the phase curves and flux ratios of non-uniform Earth and super-Earth-like exoplanets with various rotation rates and stellar types. Predictions can be tailored to the visible and near-infrared (NIR) spectral windows of the Kepler space telescope, the Hubble Space Telescope, and future observatories (e.g., WFIRST, JWST, Exo-C, Exo-S). Additionally, we constrain the effect of exoplanet urban light on the shape of the night-side phase curve for Earths and super-Earths.
NASA Astrophysics Data System (ADS)
Harmening, Corinna; Neuner, Hans
2016-09-01
Due to the establishment of terrestrial laser scanners, the analysis strategies in engineering geodesy are changing from pointwise approaches to areal ones. These areal analysis strategies are commonly built on the modelling of the acquired point clouds. Freeform curves and surfaces such as B-spline curves/surfaces are one possible approach to obtain space-continuous information. A variety of parameters determines a B-spline's appearance, and its complexity is mostly determined by the number of control points. Usually, this number of control points is chosen quite arbitrarily by intuitive trial-and-error procedures. In this paper, the Akaike information criterion and the Bayesian information criterion are investigated with regard to a justified and reproducible choice of the optimal number of control points of B-spline curves. Additionally, we develop a method based on the structural risk minimization of statistical learning theory. Unlike the Akaike and Bayesian information criteria, this method does not use the number of parameters as the complexity measure of the approximating functions but their Vapnik-Chervonenkis dimension; furthermore, it is also valid for non-linear models. Thus, the three methods differ in the target function to be minimized and consequently in their definition of optimality. The present paper will be continued by a second paper dealing with the choice of the optimal number of control points of B-spline surfaces.
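A minimal sketch of how AIC and BIC can be scanned over candidate numbers of control points for a cubic B-spline curve fit; the Gaussian-error form of the criteria and the uniform interior-knot layout are assumptions, and the VC-dimension-based method is not reproduced here.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def information_criteria(x, y, n_control_points):
    """AIC and BIC for a cubic B-spline fit with a given number of control points.
    x must be sorted in increasing order; for a clamped cubic B-spline the number
    of interior knots is (control points - degree - 1)."""
    k = 3
    n_knots = n_control_points - k - 1
    knots = np.linspace(x[0], x[-1], n_knots + 2)[1:-1]      # uniform interior knots
    spline = LSQUnivariateSpline(x, y, t=knots, k=k)
    rss = float(np.sum((y - spline(x)) ** 2))
    n, p = len(x), n_control_points
    aic = n * np.log(rss / n) + 2 * p
    bic = n * np.log(rss / n) + p * np.log(n)
    return aic, bic

# Scan candidate complexities and keep the minimum of the chosen criterion, e.g.:
# best = min(range(6, 30), key=lambda m: information_criteria(x, y, m)[1])
```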
NASA Astrophysics Data System (ADS)
Zhou, Yifan; Apai, Dániel; Schneider, Glenn H.; Marley, Mark S.; Showman, Adam P.
2016-02-01
Rotational modulations of brown dwarfs have recently provided powerful constraints on the properties of ultra-cool atmospheres, including longitudinal and vertical cloud structures and cloud evolution. Furthermore, periodic light curves directly probe the rotational periods of ultra-cool objects. We present here, for the first time, time-resolved high-precision photometric measurements of a planetary-mass companion, 2M1207b. We observed the binary system with Hubble Space Telescope/Wide Field Camera 3 in two bands and with two spacecraft roll angles. Using point-spread-function-based photometry, we reach a nearly photon-noise-limited accuracy for both the primary and the secondary. While the primary is consistent with a flat light curve, the secondary shows modulations that are clearly detected in the combined light curve as well as in different subsets of the data. The amplitudes are 1.36% in the F125W and 0.78% in the F160W filters, respectively. By fitting sine waves to the light curves, we find a consistent period of 10.7 (+1.2/-0.6) hr and similar phases in both bands. The J- and H-band amplitude ratio of 2M1207b is very similar to that of a field brown dwarf that has an identical spectral type but a different J-H color. Importantly, our study also measures, for the first time, the rotation period of a directly imaged extra-solar planetary-mass companion.
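The period estimate can be reproduced in outline with a simple sine fit to the normalized light curve; the starting guesses below are illustrative, and this sketch ignores the roll-angle systematics and error weighting a full analysis would include.

```python
import numpy as np
from scipy.optimize import curve_fit

def sine_model(t, amplitude, period, phase, offset):
    return amplitude * np.sin(2.0 * np.pi * t / period + phase) + offset

def fit_rotation(t_hours, flux):
    """Least-squares sine fit to a normalized light curve.
    Returns the rotation period (hr) and the sine semi-amplitude in percent."""
    guess = [0.01, 10.0, 0.0, 1.0]     # ~1% modulation and a ~10 hr period as starting values
    popt, _ = curve_fit(sine_model, t_hours, flux, p0=guess)
    return abs(popt[1]), 100.0 * abs(popt[0])
```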
An interfacial mechanism for cloud droplet formation on organic aerosols
Ruehl, C. R.; Davies, J. F.; Wilson, K. R.
2016-03-25
Accurate predictions of aerosol/cloud interactions require simple, physically accurate parameterizations of the cloud condensation nuclei (CCN) activity of aerosols. Current models assume that organic aerosol species contribute to CCN activity by lowering water activity. We measured droplet diameters at the point of CCN activation for particles composed of dicarboxylic acids or secondary organic aerosol and ammonium sulfate. Droplet activation diameters were 40 to 60% larger than predicted if the organic was assumed to be dissolved within the bulk droplet, suggesting that a new mechanism is needed to explain cloud droplet formation. A compressed film model explains how surface tension depression by interfacial organic molecules can alter the relationship between water vapor supersaturation and droplet size (i.e., the Köhler curve), leading to the larger diameters observed at activation.
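For context, the classical (bulk-solute) Köhler curve against which such measurements are compared can be computed as below, here in its κ-Köhler form; the κ value and diameters are placeholders, and this is not the compressed film model introduced in the paper.

```python
import numpy as np

def kohler_supersaturation(d_wet, d_dry, kappa=0.3, temp=293.15, surface_tension=0.072):
    """Equilibrium supersaturation (%) over a droplet of wet diameter d_wet (m)
    grown on a dry particle of diameter d_dry (m), using kappa-Koehler theory
    (Kelvin curvature term times a solute/Raoult term)."""
    Mw, rho_w, R = 0.018, 1000.0, 8.314          # kg/mol, kg/m3, J/(mol K)
    kelvin = np.exp(4.0 * surface_tension * Mw / (R * temp * rho_w * d_wet))
    raoult = (d_wet**3 - d_dry**3) / (d_wet**3 - d_dry**3 * (1.0 - kappa))
    return (kelvin * raoult - 1.0) * 100.0

# The critical (activation) diameter is the maximum of this curve:
d = np.logspace(np.log10(1.1e-7), np.log10(3e-6), 500)   # wet diameters above a 100 nm dry size
s = kohler_supersaturation(d, d_dry=1e-7)
d_crit = d[np.argmax(s)]
```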
A hierarchical methodology for urban facade parsing from TLS point clouds
NASA Astrophysics Data System (ADS)
Li, Zhuqiang; Zhang, Liqiang; Mathiopoulos, P. Takis; Liu, Fangyu; Zhang, Liang; Li, Shuaipeng; Liu, Hao
2017-01-01
The effective and automated parsing of building facades from terrestrial laser scanning (TLS) point clouds of urban environments is an important research topic in the GIS and remote sensing fields. It is also challenging because of the complexity and great variety of 3D building facade layouts, as well as the noise and missing data in the input TLS point clouds. In this paper, we introduce a novel methodology for the accurate and computationally efficient parsing of urban building facades from TLS point clouds. The main novelty of the proposed methodology is that it is a systematic and hierarchical approach that considers, in an adaptive way, the semantic and underlying structures of the urban facades for segmentation and subsequent accurate modeling. Firstly, the input point cloud is decomposed into depth planes using a data-driven method; this layer decomposition enables similarity detection in each depth-plane layer. Secondly, the facade elements are labeled using an SVM classifier in combination with our proposed BieS-ScSPM algorithm, and the labeling outcome is then augmented with weak architectural knowledge. Thirdly, least-squares-fitted normalized gray accumulative curves are applied to detect regular structures, and a binarization dilation extraction algorithm is used to partition facade elements. A dynamic line-by-line division is further applied to extract the boundaries of the elements. The 3D geometrical facade models are then reconstructed by optimizing facade elements across depth-plane layers. We have evaluated the performance of the proposed method using several TLS facade datasets. Qualitative and quantitative comparisons with several other state-of-the-art methods dealing with the same facade parsing problem demonstrate its superior performance and effectiveness in improving segmentation accuracy.
NASA Astrophysics Data System (ADS)
Roman, Michael; Rauscher, Emily
2017-11-01
Motivated by observational evidence of inhomogeneous clouds in exoplanetary atmospheres, we investigate how proposed simple cloud distributions can affect atmospheric circulations and infrared emission. We simulated temperatures and winds for the hot Jupiter Kepler-7b using a three-dimensional atmospheric circulation model that included a simplified aerosol radiative transfer model. We prescribed fixed cloud distributions and scattering properties based on results previously inferred from Kepler-7b optical phase curves, including inhomogeneous aerosols centered along the western terminator and hypothetical cases in which aerosols additionally extended across much of the planet’s nightside. In all cases, a strong jet capable of advecting aerosols from a cooler nightside to dayside was found to persist, but only at the equator. Colder temperatures at mid and polar latitudes might permit aerosol to form on the dayside without the need for advection. By altering the deposition and redistribution of heat, aerosols along the western terminator produced an asymmetric heating that effectively shifts the hottest spot further east of the substellar point than expected for a uniform distribution. The addition of opaque high clouds on the nightside can partly mitigate this enhanced shift by retaining heat that contributes to warming west of the hotspot. These expected differences in infrared phase curves could place constraints on proposed cloud distributions and their infrared opacities for brighter hot Jupiters.
A simple biota removal algorithm for 35 GHz cloud radar measurements
NASA Astrophysics Data System (ADS)
Kalapureddy, Madhu Chandra R.; Sukanya, Patra; Das, Subrata K.; Deshpande, Sachin M.; Pandithurai, Govindan; Pazamany, Andrew L.; Ambuj K., Jha; Chakravarty, Kaustav; Kalekar, Prasad; Krishna Devisetty, Hari; Annam, Sreenivas
2018-03-01
Cloud radar reflectivity profiles can be an important measurement for the investigation of cloud vertical structure (CVS). However, extracting the intended meteorological cloud content from the measurements often demands an effective technique or algorithm that can reduce error and observational uncertainties in the recorded data. In this work, a technique is proposed to identify and separate cloud and non-hydrometeor echoes using profiles of the radar Doppler spectral moments. Point- and volume-target-based theoretical radar sensitivity curves are used to remove the receiver noise floor, and the identified radar echoes are scrutinized according to their signal decorrelation period. Here, it is hypothesized that cloud echoes are temporally more coherent and homogeneous and have a longer correlation period than biota; this can be checked statistically using the ˜4 s sliding mean and standard deviation of the reflectivity profiles. This step helps screen out clouds by filtering out the biota. The final step is the retrieval of cloud height: the proposed algorithm identifies cloud height solely through the systematic characterization of Z variability, using knowledge of the local atmospheric vertical structure in addition to theoretical, statistical and echo-tracing tools. Thus, high-resolution cloud radar reflectivity profile measurements are characterized with theoretical echo sensitivity curves and observed echo statistics for true cloud height tracking (TEST). TEST showed superior performance in screening out clouds and filtering out isolated insects. TEST constrained with polarimetric measurements was found to be more promising under high-density biota, whereas TEST combined with the linear depolarization ratio and spectral width performs well in filtering out biota within the highly turbulent shallow cumulus clouds of the convective boundary layer (CBL). The TEST technique is simple to implement but powerful in performance, owing to the flexibility in constraining, identifying and filtering out the biota and screening out the true cloud content, especially the CBL clouds. The TEST algorithm is therefore well suited to screening out the low-level clouds that are strongly linked to the rain-making mechanism of the Indian summer monsoon region's CVS.
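A minimal sketch of the coherence test described above: a ~4 s sliding standard deviation of the reflectivity is used to keep temporally coherent (cloud-like) echoes and reject erratic biota echoes. The window length, threshold and array layout are assumptions, not the published TEST parameters.

```python
import numpy as np

def screen_cloud_echoes(z_profiles, dt=1.0, window_s=4.0, std_max=3.0, fill=-100.0):
    """Separate temporally coherent (cloud-like) echoes from erratic biota echoes.
    z_profiles: 2D reflectivity array of shape (time, range) in dBZ, NaN where no echo.
    A gate is kept as cloud when it has an echo now and its reflectivity varies little
    over a ~4 s window; intermittent biota echoes produce a large sliding standard
    deviation because missing samples are filled with a low sentinel value."""
    n = max(int(round(window_s / dt)), 1)
    filled = np.where(np.isfinite(z_profiles), z_profiles, fill)
    cloud_mask = np.zeros(z_profiles.shape, dtype=bool)
    for i in range(z_profiles.shape[0]):
        lo, hi = max(0, i - n // 2), min(z_profiles.shape[0], i + n // 2 + 1)
        std = np.std(filled[lo:hi], axis=0)
        cloud_mask[i] = np.isfinite(z_profiles[i]) & (std < std_max)
    return cloud_mask
```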
Simon, Amy A; Rowe, Jason F; Gaulme, Patrick; Hammel, Heidi B; Casewell, Sarah L; Fortney, Jonathan J; Gizis, John E; Lissauer, Jack J; Morales-Juberias, Raul; Orton, Glenn S; Wong, Michael H; Marley, Mark S
2016-02-01
Observations of Neptune with the Kepler Space Telescope yield a 49 day light curve with 98% coverage at a 1 minute cadence. A significant signature in the light curve comes from discrete cloud features. We compare results extracted from the light curve data with contemporaneous disk-resolved imaging of Neptune from the Keck 10-m telescope at 1.65 microns and Hubble Space Telescope visible imaging acquired nine months later. This direct comparison validates the feature latitudes assigned to the K2 light curve periods based on Neptune's zonal wind profile, and confirms observed cloud feature variability. Although Neptune's clouds vary in location and intensity on short and long timescales, a single large discrete storm seen in Keck imaging dominates the K2 and Hubble light curves; smaller or fainter clouds likely contribute to short-term brightness variability. The K2 Neptune light curve, in conjunction with our imaging data, provides context for the interpretation of current and future brown dwarf and extrasolar planet variability measurements. In particular we suggest that the balance between large, relatively stable, atmospheric features and smaller, more transient, clouds controls the character of substellar atmospheric variability. Atmospheres dominated by a few large spots may show inherently greater light curve stability than those which exhibit a greater number of smaller features.
Transitions in the Cloud Composition of Hot Jupiters
NASA Astrophysics Data System (ADS)
Parmentier, Vivien; Fortney, Jonathan J.; Showman, Adam P.; Morley, Caroline; Marley, Mark S.
2016-09-01
Over a large range of equilibrium temperatures, clouds shape the transmission spectrum of hot Jupiter atmospheres, yet their composition remains unknown. Recent observations show that the Kepler light curves of some hot Jupiters are asymmetric: for the hottest planets, the light curve peaks before secondary eclipse, whereas for planets cooler than ˜1900 K, it peaks after secondary eclipse. We use the thermal structure from 3D global circulation models to determine the expected cloud distribution and Kepler light curves of hot Jupiters. We demonstrate that the change from an optical light curve dominated by thermal emission to one dominated by scattering (reflection) naturally explains the observed trend from negative to positive offset. For the cool planets the presence of an asymmetry in the Kepler light curve is a telltale sign of the cloud composition, because each cloud species can produce an offset only over a narrow range of effective temperatures. By comparing our models and the observations, we show that the cloud composition of hot Jupiters likely varies with equilibrium temperature. We suggest that a transition occurs between silicate and manganese sulfide clouds at a temperature near 1600 K, analogous to the L/T transition on brown dwarfs. The cold trapping of cloud species below the photosphere naturally produces such a transition and predicts similar transitions for other condensates, including TiO. We predict that most hot Jupiters should have cloudy nightsides, that partial cloudiness should be common at the limb, and that the dayside hot spot should often be cloud-free.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schartmann, M.; Ballone, A.; Burkert, A.
The dusty, ionized gas cloud G2 is currently passing the massive black hole in the Galactic Center at a distance of roughly 2400 Schwarzschild radii. We explore the possibility of a starting point of the cloud within the disks of young stars. We make use of the large amount of new observations in order to put constraints on G2's origin. Interpreting the observations as a diffuse cloud of gas, we employ three-dimensional hydrodynamical adaptive mesh refinement (AMR) simulations with the PLUTO code and do a detailed comparison with observational data. The simulations presented in this work update our previously obtained results in multiple ways: (1) high resolution three-dimensional hydrodynamical AMR simulations are used, (2) the cloud follows the updated orbit based on the Brackett-γ data, (3) a detailed comparison to the observed high-quality position–velocity (PV) diagrams and the evolution of the total Brackett-γ luminosity is done. We concentrate on two unsolved problems of the diffuse cloud scenario: the unphysical formation epoch only shortly before the first detection and the too steep Brackett-γ light curve obtained in simulations, whereas the observations indicate a constant Brackett-γ luminosity between 2004 and 2013. For a given atmosphere and cloud mass, we find a consistent model that can explain both the observed Brackett-γ light curve and the PV diagrams of all epochs. Assuming initial pressure equilibrium with the atmosphere, this can be reached for a starting date earlier than roughly 1900, which is close to apo-center and well within the disks of young stars.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shporer, Avi; Hu, Renyu
2015-10-15
We identify three Kepler transiting planets, Kepler-7b, Kepler-12b, and Kepler-41b, whose orbital phase-folded light curves are dominated by planetary atmospheric processes, including thermal emission and reflected light, while the impact of non-atmospheric (i.e., gravitational) processes, including beaming (Doppler boosting) and tidal ellipsoidal distortion, is negligible. Those systems therefore allow a direct view of their atmospheres without being hampered by the approximations used when both atmospheric and non-atmospheric processes are included in modeling the phase-curve shape. We present here the analysis of the Kepler-12b and Kepler-41b atmospheres based on their Kepler phase curves, while the analysis of Kepler-7b was already presented elsewhere. The model we used efficiently computes the reflection and thermal emission contributions to the phase curve, including inhomogeneous atmospheric reflection due to longitudinally varying cloud coverage. We confirm that Kepler-12b and Kepler-41b show a westward phase shift between the brightest region on the planetary surface and the substellar point, similar to Kepler-7b, and we find that reflective clouds located on the west side of the substellar point can explain the phase shift. The existence of inhomogeneous atmospheric reflection in all three of our targets, selected because of their atmosphere-dominated Kepler phase curves, suggests this phenomenon is common. Therefore, it is also likely to be present in planetary phase curves that do not allow a direct view of the planetary atmosphere because they contain additional orbital processes. We discuss the implications of a bright-spot shift for the analysis of phase curves in which both atmospheric and gravitational processes appear, including the mass discrepancy seen in some cases between the companion's mass derived from the beaming and ellipsoidal photometric amplitudes. Finally, we discuss the potential detection of non-transiting but otherwise similar planets, whose mass is too small to show a gravitational photometric signal but whose atmosphere is reflective enough to show detectable phase modulations.
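The separation the authors exploit can be illustrated with the standard low-order harmonic decomposition of an optical phase curve; the sketch below treats the atmospheric component as a single shifted cosine, which is a simplification of their reflection-plus-thermal model, and all amplitudes are free parameters.

```python
import numpy as np

def phase_curve_model(phase, a_beam, a_ellip, a_atm, offset_deg):
    """Low-order harmonic model of an orbital phase-folded optical light curve.
    phase: orbital phase in [0, 1), 0 at mid-transit.
    a_beam, a_ellip, a_atm: semi-amplitudes of the beaming, ellipsoidal and
    atmospheric (reflection + thermal) components; offset_deg shifts the
    atmospheric peak away from the substellar point (sign convention is illustrative)."""
    two_pi = 2.0 * np.pi
    beaming = a_beam * np.sin(two_pi * phase)                 # Doppler boosting, one cycle per orbit
    ellipsoidal = -a_ellip * np.cos(2.0 * two_pi * phase)     # tidal distortion, two cycles per orbit
    atmospheric = -a_atm * np.cos(two_pi * phase - np.radians(offset_deg))
    return 1.0 + beaming + ellipsoidal + atmospheric

# For the three planets above, the abstract argues a_atm >> a_beam, a_ellip,
# so the measured curve is essentially the atmospheric term alone.
```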
Understanding water uptake in bioaerosols using laboratory measurements, field tests, and modeling
NASA Astrophysics Data System (ADS)
Chaudhry, Zahra; Ratnesar-Shumate, Shanna A.; Buckley, Thomas J.; Kalter, Jeffrey M.; Gilberry, Jerome U.; Eshbaugh, Jonathan P.; Corson, Elizabeth C.; Santarpia, Joshua L.; Carter, Christopher C.
2013-05-01
Uptake of water by biological aerosols can impact their physical and chemical characteristics. The water content of a bioaerosol can affect the backscatter cross-section measured by LIDAR systems, so a better understanding of the water content in controlled-release clouds of bioaerosols can aid the development of improved standoff detection systems. This study includes three methods to improve understanding of how bioaerosols take up water. The laboratory method measures the hygroscopic growth of biological material after it is aerosolized and dried: hygroscopicity curves are created as the humidity is increased in small increments to observe the deliquescence point, and then the humidity is decreased to observe the efflorescence point. The field component of the study measures particle size distributions of biological material disseminated into a large humidified chamber. Measurements are made with a Twin Aerodynamic Particle Sizer (APS, TSI, Inc.)-Relative Humidity apparatus, in which two APS units measure the same aerosol cloud side by side: the first operates under dry conditions by sampling downstream of desiccant dryers, and the second operates under ambient conditions. Relative humidity is measured within the sampling systems to determine the difference in aerosol water content between the two sampling trains. The water content of the bioaerosols is calculated from the twin APS units following Khlystov et al. 2005 [1]. Biological material is measured dry and wet and compared to laboratory curves of the same material. Lastly, theoretical curves are constructed from literature values for the components of the bioaerosol material.
Liu, Jian; Liang, Huawei; Wang, Zhiling; Chen, Xiangcheng
2015-01-01
The quick and accurate understanding of the ambient environment, which is composed of road curbs, vehicles, pedestrians, etc., is critical for developing intelligent vehicles. The road elements included in this work are road curbs and dynamic road obstacles that directly affect the drivable area. A framework for the online modeling of the driving environment using a multi-beam LIDAR, i.e., a Velodyne HDL-64E LIDAR, which describes the 3D environment in the form of a point cloud, is reported in this article. First, ground segmentation is performed via multi-feature extraction of the raw data grabbed by the Velodyne LIDAR to satisfy the requirement of online environment modeling. Curbs and dynamic road obstacles are detected and tracked in different manners. Curves are fitted for curb points, and points are clustered into bundles whose form and kinematics parameters are calculated. The Kalman filter is used to track dynamic obstacles, whereas the snake model is employed for curbs. Results indicate that the proposed framework is robust under various environments and satisfies the requirements for online processing. PMID:26404290
NASA Astrophysics Data System (ADS)
Ge, Huazhi; Zhang, Xi; Fletcher, Leigh; Orton, Glenn S.; Sinclair, James Andrew; Fernandes, Joshua; Momary, Thomas W.; Warren, Ari; Kasaba, Yasumasa; Sato, Takao M.; Fujiyoshi, Takuya
2017-10-01
Many brown dwarfs exhibit infrared rotational light curves with amplitudes varying from a few percent to twenty percent (Artigau et al. 2009, ApJ, 701, 1534; Radigan et al. 2012, ApJ, 750, 105). Recently, it was claimed that weather patterns, especially planetary-scale waves in the belts and cloud spots, are responsible for the light curves and their evolution on brown dwarfs (Apai et al. 2017, Science, 357, 683). Here we present a clear relationship between the direct IR emission maps and light curves of Jupiter at multiple wavelengths, which might be similar to that on cold brown dwarfs. Based on infrared disk maps from Subaru/COMICS and VLT/VISIR, we constructed full maps of Jupiter and rotational light curves at different wavelengths in the thermal infrared. We discovered a strong relationship between the light curves and weather patterns on Jupiter. The light curves also exhibit strong multi-band phase shifts and temporal variations, similar to those detected on brown dwarfs. Together with the spectra from TEXES/IRTF, our observations further provide detailed information on the spatial variations of temperature, ammonia clouds and aerosols in the troposphere of Jupiter (Fletcher et al. 2016, Icarus, 2016, 128) and their influence on the shapes of the light curves. We conclude that wave activities in Jupiter's belts (Fletcher et al. 2017, GRL, 44, 7140), cloud holes, and long-lived vortices such as the Great Red Spot and ovals control the shapes of IR light curves and multi-wavelength phase shifts on Jupiter. Our finding supports the hypothesis that the observed light curves on brown dwarfs are induced by planetary-scale waves and cloud spots.
NASA Technical Reports Server (NTRS)
Lugten, J. B.; Genzel, R.; Crawford, M. K.; Townes, C. H.
1986-01-01
Based on data obtained with the NASA Kuiper Airborne Observatory 91.4 cm telescope, the 158-micron fine structure line emission of C(+) is mapped near the galactic center. The strongest emission comes from a 10-pc FWHM diameter disk centered on Sgr A West whose dominant motion is rotation. Extended C(+) emission is also found from the +50 km/s galactic center molecular cloud, and a second cloud at v(LSR) of about -35 km/s. The rotation curve and mass distribution within 10 pc of the galactic center are derived, and the C(+) profiles show a drop-off of rotation velocity between 2 and 10 pc. A mass model is suggested with 2-4 million solar masses in a central point mass, and a M/L ratio of the central stellar cluster of 0.5 solar masses/solar luminosities, suggesting a large abundance of giants and relatively recent star formation in the center.
Spectral signatures of polar stratospheric clouds and sulfate aerosol
NASA Technical Reports Server (NTRS)
Massie, S. T.; Bailey, P. L.; Gille, J. C.; Lee, E. C.; Mergenthaler, J. L.; Roche, A. E.; Kumer, J. B.; Fishbein, E. F.; Waters, J. W.; Lahoz, W. A.
1994-01-01
Multiwavelength observations of Antarctic and midlatitude aerosol by the Cryogenic Limb Array Etalon Spectrometer (CLAES) experiment on the Upper Atmosphere Research Satellite (UARS) are used to demonstrate a technique that identifies the location of polar stratospheric clouds. The technique discussed uses the normalized area of the triangle formed by the aerosol extinctions at 925, 1257, and 1605/cm (10.8, 8.0, and 6.2 micrometers) to derive a spectral aerosol measure M of the aerosol spectrum. Mie calculations for spherical particles and T-matrix calculations for spheroidal particles are used to generate theoretical spectral extinction curves for sulfate and polar stratospheric cloud particles. The values of the spectral aerosol measure M for the sulfate and polar stratospheric cloud particles are shown to be different. Aerosol extinction data, corresponding to temperatures between 180 and 220 K at a pressure of 46 hPa (near 21-km altitude) for 18 August 1992, are used to demonstrate the technique. Thermodynamic calculations, based upon frost-point calculations and laboratory phase-equilibrium studies of nitric acid trihydrate, are used to predict the location of nitric acid trihydrate cloud particles.
An efficient solid modeling system based on a hand-held 3D laser scan device
NASA Astrophysics Data System (ADS)
Xiong, Hanwei; Xu, Jun; Xu, Chenxi; Pan, Ming
2014-12-01
Hand-held 3D laser scanners sold on the market are appealing for their portability and convenience of use, but their price is high. Developing such a system from cheap devices using the same principles as the commercial systems is impossible. In this paper, a simple hand-held 3D laser scanner is developed based on a volume reconstruction method using cheap devices. Unlike conventional laser scanners that collect a point cloud of the whole object surface, the proposed method scans only a few key profile curves on the surface. A planar section curve network can be generated from these profile curves to construct a volume model of the object. The details of the design are presented and illustrated with the example of a complex-shaped object.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Webber, Matthew W.; Lewis, Nikole K.; Cahoy, Kerri
2015-05-10
We use a planetary albedo model to investigate variations in visible wavelength phase curves of exoplanets. Thermal and cloud properties for these exoplanets are derived using one-dimensional radiative-convective and cloud simulations. The presence of clouds on these exoplanets significantly alters their planetary albedo spectra. We confirm that non-uniform cloud coverage on the dayside of tidally locked exoplanets will manifest as changes to the magnitude and shift of the phase curve. In this work, we first investigate a test case of our model using a Jupiter-like planet, at temperatures consistent with 2.0 AU insolation from a solar type star, to consider the effect of H2O clouds. We then extend our application of the model to the exoplanet Kepler-7b and consider the effect of varying cloud species, sedimentation efficiency, particle size, and cloud altitude. We show that, depending on the observational filter, the largest possible shift of the phase curve maximum will be ∼2°–10° for a Jupiter-like planet, and up to ∼30° (∼0.08 in fractional orbital phase) for hot-Jupiter exoplanets at visible wavelengths as a function of dayside cloud distribution with a uniformly averaged thermal profile. The models presented in this work can be adapted for a variety of planetary cases at visible wavelengths to include variations in planet–star separation, gravity, metallicity, and source-observer geometry. Finally, we tailor our model for comparison with, and confirmation of, the recent optical phase-curve observations of Kepler-7b with the Kepler space telescope. The average planetary albedo can vary between 0.1 and 0.6 for the 1300 cloud scenarios that were compared to the observations. Many of these cases cannot produce a high enough albedo to match the observations. We observe that smaller particle size and increasing cloud altitude have a strong effect on increasing albedo. In particular, we show that a set of models where Kepler-7b has roughly half of its dayside covered in small-particle clouds high in the atmosphere, made of bright minerals like MgSiO3 and Mg2SiO4, provide the best fits to the observed offset and magnitude of the phase curve, whereas Fe clouds are found to be too dark to fit the observations.
NASA Astrophysics Data System (ADS)
Mendonça, João M.; Malik, Matej; Demory, Brice-Olivier; Heng, Kevin
2018-04-01
Recently acquired Hubble and Spitzer phase curves of the short-period hot Jupiter WASP-43b make it an ideal target for confronting theory with data. On the observational front, we re-analyze the 3.6 and 4.5 μm Spitzer phase curves and demonstrate that our improved analysis better removes residual red noise due to intra-pixel sensitivity, which leads to greater fluxes emanating from the nightside of WASP-43b, thus reducing the tension between theory and data. On the theoretical front, we construct cloud-free and cloudy atmospheres of WASP-43b using our Global Circulation Model (GCM), THOR, which solves the non-hydrostatic Euler equations (compared to GCMs that typically solve the hydrostatic primitive equations). The cloud-free atmosphere produces a reasonable fit to the dayside emission spectrum. The multi-phase emission spectra constrain the cloud deck to be confined to the nightside and have a finite cloud-top pressure. The multi-wavelength phase curves are naturally consistent with our cloudy atmospheres, except for the 4.5 μm phase curve, which requires the presence of enhanced carbon dioxide in the atmosphere of WASP-43b. Multi-phase emission spectra at higher spectral resolution, as may be obtained using the James Webb Space Telescope, and a reflected-light phase curve at visible wavelengths would further constrain the properties of clouds in WASP-43b.
A point cloud modeling method based on geometric constraints mixing the robust least squares method
NASA Astrophysics Data System (ADS)
Yue, JIanping; Pan, Yi; Yue, Shun; Liu, Dapeng; Liu, Bin; Huang, Nan
2016-10-01
The appearance of 3D laser scanning technology has provided a new method for the acquisition of spatial 3D information. It has been widely used in surveying and mapping engineering owing to its automation and high precision. The 3D laser scanning data processing workflow mainly includes field laser data acquisition, in-office laser data splicing, and subsequent 3D modeling and data integration. Researchers at home and abroad have studied point cloud modeling extensively. Surface reconstruction techniques mainly include point-based representations, triangle meshes, triangular Bezier surface models, rectangular surface models, and so on; neural networks and alpha shapes are also used in curved surface reconstruction. These methods, however, often focus on fitting a single surface or on automatic or manual block-wise fitting, which ignores the integrity of the model. This leads to a serious problem in the stitched model: surfaces fitted separately often fail to satisfy well-known geometric constraints such as parallelism, perpendicularity, a fixed angle, or a fixed distance. However, modeling theory that incorporates dimensional and positional constraints is not yet widely applied. A traditional modeling method that adds geometric constraints combines the penalty function method with the Levenberg-Marquardt algorithm (L-M algorithm) and offers good stability, but it is strongly influenced by the initial value. In this paper, we propose an improved point cloud modeling method that takes geometric constraints into account. We first apply robust least squares to improve the accuracy of the initial value, then use the penalty function method to transform the constrained optimization problem into an unconstrained one, and finally solve it with the L-M algorithm. The experimental results show that the internal accuracy is improved and that the improved point cloud modeling method proposed in this paper outperforms traditional point cloud modeling methods.
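As a rough illustration of the constrained-fitting idea summarized above (penalty function plus Levenberg-Marquardt), the sketch below fits two planes to point cloud patches while softly enforcing a parallelism constraint. The two-plane setup, the penalty weight mu, and all parameter names are illustrative assumptions, not the authors' exact formulation.

```python
# Hedged sketch: constrained plane fitting via a penalty term solved with
# Levenberg-Marquardt (scipy). The parallelism constraint between two planes
# stands in for the generic geometric constraints discussed in the abstract.
import numpy as np
from scipy.optimize import least_squares

def residuals(params, pts1, pts2, mu):
    # params: [n1(3), d1, n2(3), d2]; planes n.x + d = 0, unit normals enforced softly
    n1, d1, n2, d2 = params[0:3], params[3], params[4:7], params[7]
    r_fit = np.concatenate([pts1 @ n1 + d1, pts2 @ n2 + d2])     # point-to-plane residuals
    r_unit = np.array([n1 @ n1 - 1.0, n2 @ n2 - 1.0])            # unit-normal constraints
    r_parallel = np.cross(n1, n2)                                 # parallelism penalty
    return np.concatenate([r_fit, np.sqrt(mu) * np.concatenate([r_unit, r_parallel])])

# synthetic data: two roughly parallel planar patches
rng = np.random.default_rng(0)
pts1 = np.c_[rng.uniform(-1, 1, (200, 2)), 0.02 * rng.normal(size=200)]
pts2 = np.c_[rng.uniform(-1, 1, (200, 2)), 0.5 + 0.02 * rng.normal(size=200)]

x0 = np.array([0.1, 0.1, 1.0, 0.0, 0.1, -0.1, 1.0, -0.5])  # crude initial value
sol = least_squares(residuals, x0, args=(pts1, pts2, 1e3), method='lm')
print(sol.x)
```

In the same spirit as the paper, a more robust initial value for x0 (e.g. from a robust least-squares pre-fit) would reduce the sensitivity of the L-M step to its starting point.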
The impact of galactic disc environment on star-forming clouds
NASA Astrophysics Data System (ADS)
Nguyen, Ngan K.; Pettitt, Alex R.; Tasker, Elizabeth J.; Okamoto, Takashi
2018-03-01
We explore the effect of different galactic disc environments on the properties of star-forming clouds through variations in the background potential in a set of isolated galaxy simulations. Rising, falling, and flat rotation curves expected in halo-dominated, disc-dominated, and Milky Way-like galaxies were considered, with and without an additional two-arm spiral potential. The evolution of each disc displayed notable variations that are attributed to different regimes of stability, determined by shear and gravitational collapse. The properties of a typical cloud were largely unaffected by the changes in rotation curve, but the production of small and large cloud associations was strongly dependent on this environment. This suggests that while differing rotation curves can influence where clouds are initially formed, the average bulk properties are effectively independent of the global environment. The addition of a spiral perturbation made the greatest difference to cloud properties, successfully sweeping the gas into larger, seemingly unbound, extended structures and creating large arm-interarm contrasts.
Continuously Deformation Monitoring of Subway Tunnel Based on Terrestrial Point Clouds
NASA Astrophysics Data System (ADS)
Kang, Z.; Tuo, L.; Zlatanova, S.
2012-07-01
The deformation monitoring of subway tunnels is of extraordinary necessity. Therefore, a method for deformation monitoring based on terrestrial point clouds is proposed in this paper. First, the traditional adjacent-stations registration is replaced by section-controlled registration, so that the common control points can be used by each station and error accumulation is thus avoided within a section. Afterwards, the central axis of the subway tunnel is determined through the RANSAC (Random Sample Consensus) algorithm and curve fitting. Although of very high resolution, laser points are still discrete, and thus the vertical section is computed via quadric fitting of the vicinity of interest, instead of fitting the whole model of the subway tunnel; the vertical section is determined by the intersection line rotated about the central axis of the tunnel within a vertical plane. The extraction of the vertical section is then optimized using RANSAC for the purpose of filtering out noise. Based on the extracted vertical sections, the volume of tunnel deformation is estimated by comparing vertical sections extracted at the same position from different epochs of point clouds. Furthermore, the continuously extracted vertical sections are deployed to evaluate the convergent tendency of the tunnel. The proposed algorithms are verified using real datasets in terms of accuracy and computation efficiency. The experimental result of the fitting accuracy analysis shows that the maximum deviation between interpolated points and real points is 1.5 mm, and the minimum is 0.1 mm; the convergent tendency of the tunnel was detected by comparing adjacent fitting radii. The maximum error is 6 mm, while the minimum is 1 mm. The computation cost of vertical section extraction is within 3 seconds per section, which proves high efficiency.
CLICK: The USGS Center for LIDAR Information Coordination & Knowledge
Menig, Jordan C.; Stoker, Jason M.
2007-01-01
While this technology has proven its use as a mapping tool - effective for generating bare earth DEMs at high resolutions (1-3 m) and with high vertical accuracies (15-18 cm) - obstacles remain for its application as a remote sensing tool:
* The high cost of collecting LIDAR
* The steep learning curve on research and application of using the entire point cloud
* The challenges of discovering whether data exist for regions of interest
NASA Astrophysics Data System (ADS)
Chen, W. A.; Woods, C. P.; Li, J. F.; Waliser, D. E.; Chern, J.; Tao, W.; Jiang, J. H.; Tompkins, A. M.
2010-12-01
CloudSat provides important estimates of vertically resolved ice water content (IWC) on a global scale based on radar reflectivity. These estimates of IWC have proven beneficial in evaluating the representations of ice clouds in global models. An issue when performing model-data comparisons of IWC, particularly germane to this investigation, is the question of which component(s) of the frozen water mass are represented by retrieval estimates and how they relate to what is represented in models. The present study developed and applied a new technique to partition CloudSat total IWC into small and large ice hydrometeors, based on the CloudSat-retrieved ice particle size distribution (PSD) parameters. The new method allows one to make relevant model-data comparisons and provides new insights into the model's representation of atmospheric IWC. The partitioned CloudSat IWC suggests that the small ice particles contribute 20-30% of the total IWC in the upper troposphere when a threshold size of 100 μm is used. Sensitivity measures with respect to the threshold size, the PSD parameters, and the retrieval algorithms are presented. The new dataset is compared to model estimates, pointing to areas for model improvement. Cloud ice analyses from the European Centre for Medium-Range Weather Forecasts model agree well with the small IWC from CloudSat. The finite-volume multi-scale modeling framework model underestimates total IWC at 147 and 215 hPa, while overestimating the fractional contribution from the small ice species. These results are discussed in terms of their applications to, and implications for, the evaluation of global atmospheric models, providing constraints on the representations of cloud feedback and precipitation in global models, which in turn can help reduce uncertainties associated with climate change projections.
Figure 1. A sample lognormal ice number distribution (red curve) and the corresponding mass distribution (black curve). The dotted line represents the cutoff size for IWC partitioning (Dc = 100 µm as an example). The partial integrals of the mass distribution for particles smaller and larger than Dc correspond to IWC<100 (green area) and IWC>100 (blue area), respectively.
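The partitioning described above amounts to splitting the mass-weighted integral of the particle size distribution at a cutoff diameter. The hedged sketch below does this for an assumed lognormal PSD with Dc = 100 μm; the PSD parameters are invented for illustration and are not CloudSat retrieval values.

```python
# Hedged sketch: partitioning ice water content (IWC) of a lognormal particle
# size distribution at a cutoff diameter Dc, as in the partial integrals
# described in the figure caption above. All numerical values are assumptions.
import numpy as np
from scipy.integrate import quad

N0, Dg, sigma_g = 1.0e5, 50e-6, 1.8      # number conc. [m^-3], geometric mean diameter [m], geo. std.
rho_ice = 917.0                           # bulk ice density [kg m^-3], spherical-particle assumption
Dc = 100e-6                               # cutoff diameter [m]

def n_D(D):                               # lognormal number distribution dN/dD
    return (N0 / (D * np.log(sigma_g) * np.sqrt(2 * np.pi))
            * np.exp(-0.5 * (np.log(D / Dg) / np.log(sigma_g)) ** 2))

def mass_D(D):                            # mass distribution dM/dD for spherical ice particles
    return (np.pi / 6.0) * rho_ice * D ** 3 * n_D(D)

iwc_small, _ = quad(mass_D, 1e-7, Dc)     # IWC from particles smaller than Dc
iwc_large, _ = quad(mass_D, Dc, 5e-3)     # IWC from particles larger than Dc
print(iwc_small / (iwc_small + iwc_large))  # fractional contribution of small ice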
Micellization and phase transitions in a triblock copolymer-D2O system
NASA Astrophysics Data System (ADS)
Odhner, Hosanna; Huff, Alison; Patton, Kelly; Jacobs, D. T.; Clover, Bryna; Greer, Sandra
2011-03-01
The triblock copolymer (``unimer'') of PPO-PEO-PPO (commercially known as 17R4) has hydrophobic ends and a hydrophilic center. When placed in D2O at lower concentrations and temperatures, only a network of unimers exists. However, at higher concentrations or temperatures, micelles of different geometries can form. We have measured the micellization line marking the transition from only unimers to some micelles, as well as a one- to two-phase transition at higher temperatures. This second transition is an Ising-like, LCST critical point, based on the shape of the coexistence curve. We find that the LCST does not correspond to the minimum of the cloud point curve, which indicates polydispersity as described by Sollich. We acknowledge the support from Research Corporation, NSF-REU grant DMR 0649112, The College of Wooster, and (for BC and SG) the donors of the Petroleum Research Fund, administered by the American Chemical Society.
Mohd, N I; Zain, N N M; Raoov, M; Mohamad, S
2018-04-01
A new cloud point methodology was successfully used for the extraction of carcinogenic pesticides in milk samples as a prior step to their determination by spectrophotometry. In this work, a non-ionic silicone surfactant, also known as 3-(3-hydroxypropyl-heptatrimethylxyloxane), was chosen as a green extraction solvent because of its structure and properties. The effect of different parameters, such as the type of surfactant, concentration and volume of surfactant, pH, salt, temperature, incubation time and water content, on the cloud point extraction of carcinogenic pesticides such as atrazine and propazine was studied in detail and a set of optimum conditions was established. A good correlation coefficient (R2) in the range of 0.991-0.997 for all calibration curves was obtained. The limit of detection was 1.06 µg l-1 (atrazine) and 1.22 µg l-1 (propazine), and the limit of quantitation was 3.54 µg l-1 (atrazine) and 4.07 µg l-1 (propazine). Satisfactory recoveries in the range of 81-108% were determined in milk samples at 5 and 1000 µg l-1, respectively, with low relative standard deviations (n = 3) of 0.301-7.45% in milk matrices. The proposed method is very convenient, rapid, cost-effective and environmentally friendly for food analysis.
Uher, Vojtěch; Gajdoš, Petr; Radecký, Michal; Snášel, Václav
2016-01-01
The Differential Evolution (DE) is a widely used bioinspired optimization algorithm developed by Storn and Price. It is popular for its simplicity and robustness. This algorithm was primarily designed for real-valued problems and continuous functions, but several modified versions optimizing both integer and discrete-valued problems have been developed. The discrete-coded DE has been mostly used for combinatorial problems in a set of enumerative variants. However, the DE has a great potential in the spatial data analysis and pattern recognition. This paper formulates the problem as a search of a combination of distinct vertices which meet the specified conditions. It proposes a novel approach called the Multidimensional Discrete Differential Evolution (MDDE) applying the principle of the discrete-coded DE in discrete point clouds (PCs). The paper examines the local searching abilities of the MDDE and its convergence to the global optimum in the PCs. The multidimensional discrete vertices cannot be simply ordered to get a convenient course of the discrete data, which is crucial for good convergence of a population. A novel mutation operator utilizing linear ordering of spatial data based on the space filling curves is introduced. The algorithm is tested on several spatial datasets and optimization problems. The experiments show that the MDDE is an efficient and fast method for discrete optimizations in the multidimensional point clouds.
Utilization of the Discrete Differential Evolution for Optimization in Multidimensional Point Clouds
Radecký, Michal; Snášel, Václav
2016-01-01
The Differential Evolution (DE) is a widely used bioinspired optimization algorithm developed by Storn and Price. It is popular for its simplicity and robustness. This algorithm was primarily designed for real-valued problems and continuous functions, but several modified versions optimizing both integer and discrete-valued problems have been developed. The discrete-coded DE has been mostly used for combinatorial problems in a set of enumerative variants. However, the DE has a great potential in the spatial data analysis and pattern recognition. This paper formulates the problem as a search of a combination of distinct vertices which meet the specified conditions. It proposes a novel approach called the Multidimensional Discrete Differential Evolution (MDDE) applying the principle of the discrete-coded DE in discrete point clouds (PCs). The paper examines the local searching abilities of the MDDE and its convergence to the global optimum in the PCs. The multidimensional discrete vertices cannot be simply ordered to get a convenient course of the discrete data, which is crucial for good convergence of a population. A novel mutation operator utilizing linear ordering of spatial data based on the space filling curves is introduced. The algorithm is tested on several spatial datasets and optimization problems. The experiments show that the MDDE is an efficient and fast method for discrete optimizations in the multidimensional point clouds. PMID:27974884
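One concrete instance of the "linear ordering of spatial data based on space filling curves" mentioned above is Morton (Z-order) indexing of quantized coordinates. The sketch below is a hedged illustration under that assumption; the MDDE paper does not necessarily use Morton order or 10-bit quantization.

```python
# Hedged sketch: ordering quantized 3D points along a Morton (Z-order)
# space-filling curve so that nearby indices tend to be spatially close.
import numpy as np

def _spread_bits(v):
    """Insert two zero bits after each of the 10 lowest bits of v (vectorized)."""
    v = v.astype(np.uint32) & 0x3FF
    v = (v | (v << 16)) & 0x030000FF
    v = (v | (v << 8)) & 0x0300F00F
    v = (v | (v << 4)) & 0x030C30C3
    v = (v | (v << 2)) & 0x09249249
    return v

def morton_order(points):
    """Return indices that sort points along a Z-order curve (10 bits per axis)."""
    p = np.asarray(points, dtype=np.float64)
    q = ((p - p.min(axis=0)) / (np.ptp(p, axis=0) + 1e-12) * 1023).astype(np.uint32)
    codes = _spread_bits(q[:, 0]) | (_spread_bits(q[:, 1]) << 1) | (_spread_bits(q[:, 2]) << 2)
    return np.argsort(codes)

# usage: reorder a random point cloud into a one-dimensional, locality-preserving sequence
pts = np.random.default_rng(5).uniform(0, 1, (1000, 3))
ordered = pts[morton_order(pts)]
```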
Heydari, Rouhollah; Elyasi, Najmeh S
2014-10-01
A novel, simple, and effective ion-pair cloud-point extraction coupled with a gradient high-performance liquid chromatography method was developed for determination of thiamine (vitamin B1), niacinamide (vitamin B3), pyridoxine (vitamin B6), and riboflavin (vitamin B2) in plasma and urine samples. The extraction and separation of vitamins were achieved based on an ion-pair formation approach between these ionizable analytes and 1-heptanesulfonic acid sodium salt as an ion-pairing agent. Influential variables on the ion-pair cloud-point extraction efficiency, such as the ion-pairing agent concentration, ionic strength, pH, volume of Triton X-100, extraction temperature, and incubation time, have been fully evaluated and optimized. Water-soluble vitamins were successfully extracted by 1-heptanesulfonic acid sodium salt (0.2% w/v) as ion-pairing agent with Triton X-100 (4% w/v) as surfactant phase at 50°C for 10 min. The calibration curves showed good linearity (r2 > 0.9916) and precision in the concentration ranges of 1-50 μg/mL for thiamine and niacinamide, 5-100 μg/mL for pyridoxine, and 0.5-20 μg/mL for riboflavin. The recoveries were in the range of 78.0-88.0% with relative standard deviations ranging from 6.2 to 8.2%. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Three-dimension reconstruction based on spatial light modulator
NASA Astrophysics Data System (ADS)
Deng, Xuejiao; Zhang, Nanyang; Zeng, Yanan; Yin, Shiliang; Wang, Weiyu
2011-02-01
Three-dimensional reconstruction, an important research direction in computer graphics, is widely used in related fields such as industrial design and manufacturing, construction, aerospace, biology and so on. Via such technology we can obtain a three-dimensional digital point cloud from two-dimensional images, and then simulate the three-dimensional structure of the physical object for further study. At present, the acquisition of three-dimensional digital point cloud data is mainly based on adaptive optics systems with Shack-Hartmann sensors and on phase-shifting digital holography. For surface fitting, there are also many available methods, such as the iterated discrete Fourier transform, convolution and image interpolation, and linear phase retrieval. The main problems we came across in three-dimensional reconstruction are the extraction of feature points and the curve-fitting algorithm. To solve such problems, we can, first of all, calculate the surface normal vector of each pixel in the light source coordinate system; these vectors are then converted to the image coordinate system through coordinate conversion, so that the expected 3D point cloud arises. Secondly, after de-noising and repairing, the feature points can be selected and fitted to obtain the fitting function of the surface topography by means of Zernike polynomials, so as to reconstruct the measured object's three-dimensional topography. In this paper, a new kind of three-dimensional reconstruction algorithm is proposed, with the assistance of which the topography can be estimated from its grayscale at different sample points. Moreover, the simulation and the experimental results prove that the new algorithm has a strong capability to fit, especially for large-scale objects.
Traveltime and dispersion in the Potomac River, Cumberland, Maryland, to Washington, D.C.
Taylor, Kenneth R.; James, Robert W.; Helinsky, Bernard M.
1985-01-01
A travel-time and dispersion study using rhodamine dye was conducted on the Potomac River between Cumberland, Maryland, and Washington, D.C., a distance of 189 miles. The flow during the study was at approximately the 90-percent flow-duration level. A similar study was conducted by Wilson and Forrest in 1964 at a flow duration of approximately 60 percent. The two sets of data were used to develop a generalized procedure for predicting travel-times and downstream concentrations resulting from spillage of water-soluble substances at any point along the river. The procedure will allow the user to calculate travel-time and concentration data for almost any spillage problem that occurs during periods of relatively steady flow between 50- and 95-percent flow duration. A new procedure for calculating unit peak concentration was derived. The new procedure depends on an analogy between a time-concentration curve and a scalene triangle. As a result of this analogy, the unit peak concentration can be expressed in terms of the length of the dye or contaminant cloud. The new procedure facilitates the calculation of unit peak concentration for long reaches of river. Previously, there was no way to link unit peak concentration curves for studies in which the river was divided into subreaches for study. Variable dispersive characteristics caused mainly by low-head dams precluded useful extrapolation of the unit peak-concentration attenuation curves, as has been done in previous studies. The procedure is applied to a hypothetical situation in which 20,000 pounds of contaminant is spilled at a railroad crossing at Magnolia, West Virginia. The times required for the leading edge, the peak concentration, and the trailing edge of the contaminant cloud to reach Point of Rocks, Maryland (110 river miles downstream), are 295, 375, and 540 hours respectively, during a period when flow is at the 80-percent flow-duration level. The peak conservative concentration would be approximately 340 micrograms per liter at Point of Rocks.
Point-Cloud Compression for Vehicle-Based Mobile Mapping Systems Using Portable Network Graphics
NASA Astrophysics Data System (ADS)
Kohira, K.; Masuda, H.
2017-09-01
A mobile mapping system is effective for capturing dense point-clouds of roads and roadside objects. Point-clouds of urban areas, residential areas, and arterial roads are useful for maintenance of infrastructure, map creation, and automatic driving. However, the data size of point-clouds measured over large areas is enormously large. A large storage capacity is required to store such point-clouds, and heavy loads will be placed on the network if point-clouds are transferred through it. Therefore, it is desirable to reduce the data sizes of point-clouds without deterioration of quality. In this research, we propose a novel point-cloud compression method for vehicle-based mobile mapping systems. In our compression method, point-clouds are mapped onto 2D pixels using GPS time and the parameters of the laser scanner. Then, the images are encoded in the Portable Network Graphics (PNG) format and compressed using the PNG algorithm. In our experiments, our method could efficiently compress point-clouds without deteriorating the quality.
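A minimal sketch of the mapping-plus-PNG idea is shown below: range values are quantized to 16 bits, arranged in a 2D image, and written as a lossless PNG. Indexing rows and columns by scan line and firing index, and the 16-bit quantization step, are assumptions standing in for the paper's GPS-time-based mapping.

```python
# Hedged sketch: packing scanner ranges into a 2D image and compressing them
# losslessly with PNG. Assumes Pillow's 16-bit grayscale PNG support.
import numpy as np
from PIL import Image

def pack_to_png(ranges_m, path, r_max=120.0):
    """ranges_m: 2D array [scan_line, firing_index] of ranges in meters."""
    q = np.clip(ranges_m / r_max, 0.0, 1.0)
    img16 = (q * 65535).astype(np.uint16)   # 16-bit quantization (assumed step)
    Image.fromarray(img16).save(path)       # PNG itself is lossless after quantization

def unpack_from_png(path, r_max=120.0):
    img16 = np.asarray(Image.open(path), dtype=np.uint16)
    return img16.astype(np.float64) / 65535 * r_max

# usage with a synthetic sweep (64 scan lines x 1800 firings)
ranges = np.random.default_rng(1).uniform(2.0, 80.0, size=(64, 1800))
pack_to_png(ranges, 'sweep.png')
err = np.abs(unpack_from_png('sweep.png') - ranges).max()
print(f'max quantization error: {err*100:.2f} cm')
```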
The registration of non-cooperative moving targets laser point cloud in different view point
NASA Astrophysics Data System (ADS)
Wang, Shuai; Sun, Huayan; Guo, Huichao
2018-01-01
Multi-view point cloud registration of non-cooperative moving targets is the key technology for 3D reconstruction in laser three-dimensional imaging. The main problem is that the point cloud density changes greatly and noise is present under different acquisition conditions. In this paper, a feature descriptor is first used to find the most similar point cloud; then, based on a registration algorithm with region segmentation, the geometric structure of each point is extracted from the geometric similarity between points. The point cloud is divided into regions based on spectral clustering, feature descriptors are created for each region, and a search is performed to find the most similar regions in the most similar viewpoint's point cloud; the pair of point clouds is then aligned by aligning their minimum bounding boxes. These steps are repeated until the registration of all point clouds is completed. Experiments show that this method is insensitive to the density of point clouds and performs well under the noise of laser three-dimensional imaging.
He, Ying; Liang, Bin; Yang, Jun; Li, Shunzhi; He, Jin
2017-08-11
The Iterative Closest Points (ICP) algorithm is the mainstream algorithm used in the process of accurate registration of 3D point cloud data. The algorithm requires a proper initial value and the approximate registration of two point clouds to prevent the algorithm from falling into local extremes, but in the actual point cloud matching process, it is difficult to ensure compliance with this requirement. In this paper, we proposed the ICP algorithm based on point cloud features (GF-ICP). This method uses the geometrical features of the point cloud to be registered, such as curvature, surface normal and point cloud density, to search for the correspondence relationships between two point clouds and introduces the geometric features into the error function to realize the accurate registration of two point clouds. The experimental results showed that the algorithm can improve the convergence speed and the interval of convergence without setting a proper initial value.
Liang, Bin; Yang, Jun; Li, Shunzhi; He, Jin
2017-01-01
The Iterative Closest Points (ICP) algorithm is the mainstream algorithm used in the process of accurate registration of 3D point cloud data. The algorithm requires a proper initial value and the approximate registration of two point clouds to prevent the algorithm from falling into local extremes, but in the actual point cloud matching process, it is difficult to ensure compliance with this requirement. In this paper, we proposed the ICP algorithm based on point cloud features (GF-ICP). This method uses the geometrical features of the point cloud to be registered, such as curvature, surface normal and point cloud density, to search for the correspondence relationships between two point clouds and introduces the geometric features into the error function to realize the accurate registration of two point clouds. The experimental results showed that the algorithm can improve the convergence speed and the interval of convergence without setting a proper initial value. PMID:28800096
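The sketch below illustrates, under stated assumptions, the general idea of feature-aided correspondence search: each point is augmented with PCA-derived normal and curvature estimates and matched in the combined space. The feature weighting and the specific features are assumptions, not the exact GF-ICP error function.

```python
# Hedged sketch: feature-aided nearest-neighbour correspondences for ICP-style
# registration, using local normals and a curvature proxy from PCA of the
# k nearest neighbours.
import numpy as np
from scipy.spatial import cKDTree

def normals_and_curvature(pts, k=20):
    _, idx = cKDTree(pts).query(pts, k=k)
    normals = np.empty_like(pts)
    curv = np.empty(len(pts))
    for i, nb in enumerate(idx):
        cov = np.cov(pts[nb].T)
        w, v = np.linalg.eigh(cov)        # eigenvalues in ascending order
        normals[i] = v[:, 0]              # smallest eigenvector ~ surface normal
        curv[i] = w[0] / w.sum()          # surface variation as curvature proxy
    return normals, curv

def feature_correspondences(src, dst, alpha=0.5):
    ns, cs = normals_and_curvature(src)
    nd, cd = normals_and_curvature(dst)
    fs = np.hstack([src, alpha * ns, alpha * cs[:, None]])
    fd = np.hstack([dst, alpha * nd, alpha * cd[:, None]])
    _, j = cKDTree(fd).query(fs)          # nearest neighbour in the combined space
    return j                               # index in dst matched to each src point
```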
Curve Set Feature-Based Robust and Fast Pose Estimation Algorithm
Hashimoto, Koichi
2017-01-01
Bin picking refers to picking the randomly-piled objects from a bin for industrial production purposes, and robotic bin picking is always used in automated assembly lines. In order to achieve a higher productivity, a fast and robust pose estimation algorithm is necessary to recognize and localize the randomly-piled parts. This paper proposes a pose estimation algorithm for bin picking tasks using point cloud data. A novel descriptor Curve Set Feature (CSF) is proposed to describe a point by the surface fluctuation around this point and is also capable of evaluating poses. The Rotation Match Feature (RMF) is proposed to match CSF efficiently. The matching process combines the idea of the matching in 2D space of origin Point Pair Feature (PPF) algorithm with nearest neighbor search. A voxel-based pose verification method is introduced to evaluate the poses and proved to be more than 30-times faster than the kd-tree-based verification method. Our algorithm is evaluated against a large number of synthetic and real scenes and proven to be robust to noise, able to detect metal parts, more accurately and more than 10-times faster than PPF and Oriented, Unique and Repeatable (OUR)-Clustered Viewpoint Feature Histogram (CVFH). PMID:28771216
DOE Office of Scientific and Technical Information (OSTI.GOV)
Charnay, B.; Meadows, V.; Misra, A.
2015-11-01
The warm sub-Neptune GJ1214b has a featureless transit spectrum that may be due to the presence of high and thick clouds or haze. Here, we simulate the atmosphere of GJ1214b with a 3D General Circulation Model for cloudy hydrogen-dominated atmospheres, including cloud radiative effects. We show that the atmospheric circulation is strong enough to transport micrometric cloud particles to the upper atmosphere and generally leads to a minimum of cloud at the equator. By scattering stellar light, clouds increase the planetary albedo to 0.4–0.6 and cool the atmosphere below 1 mbar. However, the heating by ZnS clouds leads to the formation of a stratospheric thermal inversion above 10 mbar, with temperatures potentially high enough on the dayside to evaporate KCl clouds. We show that flat transit spectra consistent with Hubble Space Telescope observations are possible if cloud particle radii are around 0.5 μm, and that such clouds should be optically thin at wavelengths >3 μm. Using simulated cloudy atmospheres that fit the observed spectra we generate transit, emission, and reflection spectra and phase curves for GJ1214b. We show that a stratospheric thermal inversion would be readily accessible in near- and mid-infrared atmospheric spectral windows. We find that the amplitude of the thermal phase curves is strongly dependent on metallicity, but only slightly impacted by clouds. Our results suggest that primary and secondary eclipses and phase curves observed by the James Webb Space Telescope in the near- to mid-infrared should provide strong constraints on the nature of GJ1214b's atmosphere and clouds.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Almeyda, Triana; Robinson, Andrew; Richmond, Michael
The obscuring circumnuclear torus of dusty molecular gas is one of the major components of active galactic nuclei (AGN). The torus can be studied by analyzing the time response of its infrared (IR) dust emission to variations in the AGN continuum luminosity, a technique known as reverberation mapping. The IR response is the convolution of the AGN ultraviolet/optical light curve with a transfer function that contains information about the size, geometry, and structure of the torus. Here, we describe a new computer model that simulates the reverberation response of a clumpy torus. Given an input optical light curve, the code computes the emission of a 3D ensemble of dust clouds as a function of time at selected IR wavelengths, taking into account light travel delays. We present simulated dust emission responses at 3.6, 4.5, and 30 μm that explore the effects of various geometrical and structural properties, dust cloud orientation, and anisotropy of the illuminating radiation field. We also briefly explore the effects of cloud shadowing (clouds are shielded from the AGN continuum source). Example synthetic light curves have also been generated, using the observed optical light curve of the Seyfert 1 galaxy NGC 6418 as input. The torus response is strongly wavelength-dependent, due to the gradient in cloud surface temperature within the torus, and because the cloud emission is strongly anisotropic at shorter wavelengths. Anisotropic illumination of the torus also significantly modifies the torus response, reducing the lag between the IR and optical variations.
Yang, Xiupei; Jia, Zhihui; Yang, Xiaocui; Li, Gu; Liao, Xiangjun
2017-03-01
A cloud point extraction (CPE) method was used as a pre-concentration strategy prior to the determination of trace levels of silver in water by flame atomic absorption spectrometry (FAAS). The pre-concentration is based on the clouding phenomenon of the non-ionic surfactant Triton X-114 with Ag(I)/diethyldithiocarbamate (DDTC) complexes, in which the latter are soluble in a micellar phase composed of the former. When the temperature increases above its cloud point, the Ag(I)/DDTC complexes are extracted into the surfactant-rich phase. The factors affecting the extraction efficiency, including the pH of the aqueous solution, concentration of DDTC, amount of surfactant, incubation temperature and time, were investigated and optimized. Under the optimal experimental conditions, no interference was observed for the determination of 100 ng·mL-1 Ag+ in the presence of various cations below their maximum concentrations allowed in this method, for instance, 50 μg·mL-1 for both Zn2+ and Cu2+, 80 μg·mL-1 for Pb2+, 1000 μg·mL-1 for Mn2+, and 100 μg·mL-1 for both Cd2+ and Ni2+. The calibration curve was linear in the range of 1-500 ng·mL-1 with a limit of detection (LOD) of 0.3 ng·mL-1. The developed method was successfully applied to the determination of trace levels of silver in water samples such as river water and tap water.
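A hedged sketch of the routine data-reduction step behind such results is given below: fitting a linear calibration curve and estimating a detection limit with the common 3σ/slope convention. The synthetic absorbance values and the exact LOD convention used by the authors are assumptions.

```python
# Hedged sketch: linear calibration curve and a 3*sigma/slope detection limit.
# All numbers are illustrative; they are not the paper's data.
import numpy as np

conc = np.array([1, 5, 10, 50, 100, 250, 500], dtype=float)   # ng/mL standards (assumed)
absorb = 0.0021 * conc + 0.004 + np.random.default_rng(2).normal(0, 0.002, conc.size)

slope, intercept = np.polyfit(conc, absorb, 1)
pred = slope * conc + intercept
r2 = 1 - np.sum((absorb - pred) ** 2) / np.sum((absorb - absorb.mean()) ** 2)

sigma_blank = 0.002            # std. dev. of repeated blank measurements (assumed)
lod = 3 * sigma_blank / slope  # limit of detection, 3*sigma/slope convention
print(f'R^2 = {r2:.4f}, LOD ~ {lod:.2f} ng/mL')
```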
Registration algorithm of point clouds based on multiscale normal features
NASA Astrophysics Data System (ADS)
Lu, Jun; Peng, Zhongtao; Su, Hang; Xia, GuiHua
2015-01-01
The point cloud registration technology for obtaining a three-dimensional digital model is widely applied in many areas. To improve the accuracy and speed of point cloud registration, a registration method based on multiscale normal vectors is proposed. The proposed registration method mainly includes three parts: the selection of key points, the calculation of feature descriptors, and the determination and optimization of correspondences. First, key points are selected from the point cloud based on the changes in magnitude of multiscale curvatures obtained by using principal component analysis. Then the feature descriptor of each key point is constructed, consisting of 21 elements based on multiscale normal vectors and curvatures. The correspondences between a pair of point clouds are determined according to the similarity of the descriptors of key points in the source point cloud and target point cloud. Correspondences are optimized by using a random sample consensus (RANSAC) algorithm and clustering technology. Finally, singular value decomposition is applied to the optimized correspondences so that the rigid transformation matrix between the two point clouds is obtained. Experimental results show that the proposed point cloud registration algorithm has a faster calculation speed, higher registration accuracy, and better anti-noise performance.
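As a hedged sketch of the multiscale idea, the code below computes a PCA-based "surface variation" at two neighbourhood radii and flags points where it changes strongly as key point candidates. The radii, the threshold, and the 21-element descriptor itself are not reproduced here and should be read as assumptions.

```python
# Hedged sketch: multiscale key point selection from the change of the PCA
# surface-variation measure (a curvature proxy) between two radii.
import numpy as np
from scipy.spatial import cKDTree

def surface_variation(pts, radius):
    tree = cKDTree(pts)
    sv = np.zeros(len(pts))
    for i, nb in enumerate(tree.query_ball_point(pts, r=radius)):
        if len(nb) < 4:
            continue                                  # too few neighbours for PCA
        w = np.linalg.eigvalsh(np.cov(pts[nb].T))     # ascending eigenvalues
        sv[i] = w[0] / w.sum()                        # smallest-eigenvalue fraction
    return sv

def key_points(pts, r_small=0.05, r_large=0.15, thresh=0.02):
    change = np.abs(surface_variation(pts, r_large) - surface_variation(pts, r_small))
    return np.flatnonzero(change > thresh)            # indices of candidate key points
```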
Accuracy assessment of building point clouds automatically generated from iphone images
NASA Astrophysics Data System (ADS)
Sirmacek, B.; Lindenbergh, R.
2014-06-01
Low-cost sensor generated 3D models can be useful for quick 3D urban model updating, yet the quality of the models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method using multi-view iPhone images or an iPhone video file as an input. We register such automatically generated point cloud on a TLS point cloud of the same object to discuss accuracy, advantages and limitations of the iPhone generated point clouds. For the chosen example showcase, we have classified 1.23% of the iPhone point cloud points as outliers, and calculated the mean of the point to point distances to the TLS point cloud as 0.11 m. Since a TLS point cloud might also include measurement errors and noise, we computed local noise values for the point clouds from both sources. Mean (μ) and standard deviation (σ) of roughness histograms are calculated as (μ1 = 0.44 m., σ1 = 0.071 m.) and (μ2 = 0.025 m., σ2 = 0.037 m.) for the iPhone and TLS point clouds respectively. Our experimental results indicate possible usage of the proposed automatic 3D model generation framework for 3D urban map updating, fusion and detail enhancing, quick and real-time change detection purposes. However, further insights should be obtained first on the circumstances that are needed to guarantee a successful point cloud generation from smartphone images.
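A minimal sketch of the point-to-point comparison against a reference TLS cloud is shown below: nearest-neighbour distances, their mean, and a simple outlier fraction. The 3σ outlier rule is an assumption; the paper does not state its exact criterion.

```python
# Hedged sketch: cloud-to-cloud comparison via nearest-neighbour distances to
# a reference TLS point cloud.
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_stats(eval_pts, ref_pts):
    d, _ = cKDTree(ref_pts).query(eval_pts)           # distance to nearest reference point
    cut = d.mean() + 3 * d.std()                      # assumed outlier threshold
    return {'mean_d': float(d.mean()),
            'outlier_frac': float((d > cut).mean()),
            'p95_d': float(np.percentile(d, 95))}

# usage with synthetic clouds: a noisy subset stands in for the iPhone cloud
rng = np.random.default_rng(3)
ref = rng.uniform(0, 10, (20000, 3))
ev = ref[:5000] + rng.normal(0, 0.05, (5000, 3))
print(cloud_to_cloud_stats(ev, ref))
```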
Díaz Alonso, Fernando; González Ferradás, Enrique; Sánchez Pérez, Juan Francisco; Miñana Aznar, Agustín; Ruiz Gimeno, José; Martínez Alonso, Jesús
2006-09-21
A number of models have been proposed to calculate overpressure and impulse from accidental industrial explosions. When the blast is produced by the ignition of a vapour cloud, the TNO Multi-Energy model is widely used. From the curves given by this model, data are fitted to obtain equations showing the relationship between overpressure, impulse and distance. These equations, referred to herein as characteristic curves, can be fitted by means of power equations, which depend on explosion energy and charge strength. Characteristic curves allow the determination of overpressure and impulse at each distance.
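A characteristic curve of this kind can be recovered by a power-law fit in log-log space, as in the hedged sketch below; the sample distance-overpressure pairs are invented for illustration and are not taken from the TNO charts.

```python
# Hedged sketch: fitting a power-law "characteristic curve" P = a * d**b to
# (distance, overpressure) readings, as a linear fit in log-log space.
import numpy as np

dist = np.array([20.0, 50.0, 100.0, 200.0, 500.0])      # m (assumed readings)
overp = np.array([55.0, 18.0, 7.5, 3.1, 1.0])           # kPa (assumed readings)

b, log_a = np.polyfit(np.log(dist), np.log(overp), 1)
a = np.exp(log_a)
print(f'P(d) ~ {a:.1f} * d**({b:.2f}) kPa')              # fitted power-law curve
```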
Simulation of optical interstellar scintillation
NASA Astrophysics Data System (ADS)
Habibi, F.; Moniez, M.; Ansari, R.; Rahvar, S.
2013-04-01
Aims: Stars twinkle because their light propagates through the atmosphere. The same phenomenon is expected on a longer time scale when the light of remote stars crosses an interstellar turbulent molecular cloud, but it has never been observed at optical wavelengths. The aim of the study described in this paper is to fully simulate the scintillation process, starting from the molecular cloud description as a fractal object, ending with the simulations of fluctuating stellar light curves. Methods: Fast Fourier transforms are first used to simulate fractal clouds. Then, the illumination pattern resulting from the crossing of background star light through these refractive clouds is calculated from a Fresnel integral that also uses fast Fourier transform techniques. Regularisation procedure and computing limitations are discussed, along with the effect of spatial and temporal coherency (source size and wavelength passband). Results: We quantify the expected modulation index of stellar light curves as a function of the turbulence strength - characterised by the diffraction radius Rdiff - and the projected source size, introduce the timing aspects, and establish connections between the light curve observables and the refractive cloud. We extend our discussion to clouds with different structure functions from Kolmogorov-type turbulence. Conclusions: Our study confirms that current telescopes of ~4 m with fast-readout, wide-field detectors have the capability of discovering the first interstellar optical scintillation effects. We also show that this effect should be unambiguously distinguished from any other type of variability through the observation of desynchronised light curves, simultaneously measured by two distant telescopes.
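A hedged sketch of the first step described above, generating a random field with a power-law (Kolmogorov-like) spectrum by filtering white noise in Fourier space, is given below; the spectral index and grid size are assumptions, and the Fresnel propagation stage is omitted.

```python
# Hedged sketch: 2D random "cloud" screen with a power-law spectrum built with
# FFTs, in the spirit of the fractal-cloud simulation described above.
import numpy as np

def power_law_screen(n=512, beta=11.0 / 3.0, seed=0):
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(n)
    k = np.sqrt(kx[:, None] ** 2 + kx[None, :] ** 2)
    k[0, 0] = 1.0                                   # avoid division by zero at DC
    amp = k ** (-beta / 2.0)                        # amplitude spectrum ~ k^(-beta/2)
    amp[0, 0] = 0.0                                 # remove the mean component
    phase = rng.uniform(0, 2 * np.pi, (n, n))
    screen = np.fft.ifft2(amp * np.exp(1j * phase)).real
    return screen / screen.std()                    # normalized column-density-like field

field = power_law_screen()
print(field.shape, field.std())
```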
NASA Astrophysics Data System (ADS)
Galewsky, J.
2017-12-01
Understanding the processes that govern the relationships between lower tropospheric stability and low-cloud cover is crucial for improved constraints on low-cloud feedbacks and for improving the parameterizations of low-cloud cover used in climate models. The stable isotopic composition of atmospheric water vapor is a sensitive recorder of the balance of moistening and drying processes that set the humidity of the lower troposphere and may thus provide a useful framework for improving our understanding of low-cloud processes. In-situ measurements of water vapor isotopic composition collected at the NOAA Mauna Loa Observatory in Hawaii, along with twice-daily soundings from Hilo and remote sensing of cloud cover, show a clear inverse relationship between the estimated inversion strength (EIS) and the mixing ratios and water vapor δ-values, and a positive relationship between EIS, deuterium excess, and ΔδD, defined as the difference between an observation and a reference Rayleigh distillation curve. These relationships are consistent with reduced moistening and an enhanced upper-tropospheric contribution above the trade inversion under high EIS conditions and stronger moistening under weaker EIS conditions. The cloud fraction, cloud liquid water path, and cloud-top pressure were all found to be higher under low EIS conditions. Inverse modeling of the isotopic data for the highest and lowest terciles of EIS conditions provides quantitative constraints on the cold-point temperatures and mixing fractions that govern the humidity above the trade inversion. The modeling shows that the moistening fraction between moist boundary layer air and dry middle tropospheric air is 24±1.5% under low EIS conditions and 6±1.5% under high EIS conditions. A cold-point (last-saturation) temperature of -30 °C can match the observations for both low and high EIS conditions. The isotopic composition of the moistening source as derived from the inversion (-114±10‰) requires moderate fractionation from a pure marine source, indicating a link between inversion strength and moistening of the lower troposphere from the outflow of shallow convection. This approach can be applied in other settings and the results can be used to test parameterizations in climate models.
Reconstructing the Curve-Skeletons of 3D Shapes Using the Visual Hull.
Livesu, Marco; Guggeri, Fabio; Scateni, Riccardo
2012-11-01
Curve-skeletons are the most important descriptors for shapes, capable of capturing in a synthetic manner the most relevant features. They are useful for many different applications: from shape matching and retrieval, to medical imaging, to animation. This has led, over the years, to the development of several different techniques for extraction, each trying to comply with specific goals. We propose a novel technique which stems from the intuition of reproducing what a human being does to deduce the shape of an object while holding it in his or her hand and rotating it. To accomplish this, we use the formal definitions of epipolar geometry and the visual hull. We show how it is possible to infer the curve-skeleton of a broad class of 3D shapes, along with an estimation of the radii of the maximal inscribed balls, by gathering information about the medial axes of their projections on the image planes of the stereographic vision. It is worth pointing out that our method works indifferently on (even unoriented) polygonal meshes, voxel models, and point clouds. Moreover, it is insensitive to noise, pose-invariant, resolution-invariant, and robust when applied to incomplete data sets.
Performance Evaluation of sUAS Equipped with Velodyne HDL-32E LiDAR Sensor
NASA Astrophysics Data System (ADS)
Jozkow, G.; Wieczorek, P.; Karpina, M.; Walicka, A.; Borkowski, A.
2017-08-01
The Velodyne HDL-32E laser scanner is used more frequently as the main mapping sensor in small commercial UASs. However, there is still little information about the actual accuracy of point clouds collected with such UASs. This work empirically evaluates the accuracy of the point cloud collected with such a UAS. The accuracy assessment was conducted in four aspects: the impact of the sensors on theoretical point cloud accuracy, the quality of trajectory reconstruction, and the internal and absolute point cloud accuracies. Theoretical point cloud accuracy was evaluated by calculating the 3D position error from the known errors of the sensors used. The quality of trajectory reconstruction was assessed by comparing position and attitude differences between the forward and reverse EKF solutions. Internal and absolute accuracies were evaluated by fitting planes to 8 point cloud samples extracted for planar surfaces. In addition, the absolute accuracy was also determined by calculating 3D point distances between the LiDAR UAS and reference TLS point clouds. The test data consisted of point clouds collected in two separate flights performed over the same area. The experiments showed that in the tested UAS, the trajectory reconstruction, especially the attitude, has a significant impact on point cloud accuracy. The estimated absolute accuracy of the point clouds collected during both test flights was better than 10 cm; thus the investigated UAS fits the mapping-grade category.
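The internal-accuracy check by plane fitting can be sketched as an SVD fit followed by the RMS of point-to-plane residuals, as below; the synthetic patch and any acceptance threshold are assumptions.

```python
# Hedged sketch: fit a plane to a point cloud sample with SVD and report the
# RMS of the point-to-plane residuals as an internal accuracy measure.
import numpy as np

def plane_fit_rms(pts):
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)        # right singular vectors
    normal = vt[-1]                          # direction of smallest variance
    resid = (pts - c) @ normal               # signed point-to-plane distances
    return normal, np.sqrt(np.mean(resid ** 2))

# usage on a synthetic planar patch with 3 cm noise
rng = np.random.default_rng(4)
patch = np.c_[rng.uniform(0, 5, (2000, 2)), rng.normal(0, 0.03, 2000)]
n, rms = plane_fit_rms(patch)
print(f'RMS point-to-plane residual: {rms*100:.1f} cm')
```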
A shape-based segmentation method for mobile laser scanning point clouds
NASA Astrophysics Data System (ADS)
Yang, Bisheng; Dong, Zhen
2013-07-01
Segmentation of mobile laser point clouds of urban scenes into objects is an important step for post-processing (e.g., interpretation) of point clouds. Point clouds of urban scenes contain numerous objects with significant size variability, complex and incomplete structures, and holes or variable point densities, raising great challenges for the segmentation of mobile laser point clouds. This paper addresses these challenges by proposing a shape-based segmentation method. The proposed method first calculates the optimal neighborhood size of each point to derive the geometric features associated with it, and then classifies the point clouds according to geometric features using support vector machines (SVMs). Second, a set of rules are defined to segment the classified point clouds, and a similarity criterion for segments is proposed to overcome over-segmentation. Finally, the segmentation output is merged based on topological connectivity into a meaningful geometrical abstraction. The proposed method has been tested on point clouds of two urban scenes obtained by different mobile laser scanners. The results show that the proposed method segments large-scale mobile laser point clouds with good accuracy and computationally efficient time cost, and that it segments pole-like objects particularly well.
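A hedged sketch of the classification stage is given below: per-point covariance eigenvalue features (linearity, planarity, scattering) fed to an SVM. A fixed neighbourhood size stands in for the paper's optimal-neighbourhood selection and is an assumption.

```python
# Hedged sketch: per-point geometric features from local covariance eigenvalues,
# classified with an SVM (scikit-learn).
import numpy as np
from scipy.spatial import cKDTree
from sklearn.svm import SVC

def eigen_features(pts, k=30):
    _, idx = cKDTree(pts).query(pts, k=k)
    feats = np.zeros((len(pts), 3))
    for i, nb in enumerate(idx):
        w = np.sort(np.linalg.eigvalsh(np.cov(pts[nb].T)))[::-1]   # l1 >= l2 >= l3
        l1, l2, l3 = np.maximum(w, 1e-12)
        feats[i] = [(l1 - l2) / l1, (l2 - l3) / l1, l3 / l1]       # linearity, planarity, scattering
    return feats

def train_point_classifier(pts, labels):
    clf = SVC(kernel='rbf', C=10.0)
    clf.fit(eigen_features(pts), labels)   # labels: per-point class indices from training data
    return clf
```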
NASA Astrophysics Data System (ADS)
Kourtidis, Konstantinos; Georgoulias, Aristeidis
2017-04-01
We studied the impact of anthropogenic aerosols, fine-mode natural aerosols, Saharan dust, atmospheric water vapor, cloud fraction, cloud optical depth and cloud top height on the magnitude of the fair weather PG at the rural station of Xanthi. Fair weather PG was measured in situ, while the other parameters were obtained from the MODIS instrument onboard the Terra and Aqua satellites. All of the above parameters were found to impact the fair weather PG magnitude. Among the aerosols, the impact was larger for Saharan dust and fine-mode natural aerosols, whereas among the cloud parameters the impact was larger for cloud fraction, though smaller than that of the aerosols. Water vapour and ice precipitable water were also found to influence the fair weather PG. Since aerosols and water are ubiquitous in the atmosphere and exhibit large spatial and temporal variability, we postulate that our understanding of the Carnegie curve might need revision.
UV extinction properties of carina nebular dust
NASA Technical Reports Server (NTRS)
Massa, Derck
1993-01-01
I have performed an analysis of the UV extinction by dust along the line of sight to the young open cluster Tr 16. The observed curves are parameterized in order to extract quantitative information about the structure of the curves. Furthermore, by constructing differential extinction curves, obtained by differencing curves for stars which lie within a few arc seconds of each other on the sky, I was able to obtain a curve which is free of the effects of foreground extinction, and represents the extinction by the dust in the Tr 16 molecular cloud. I then show that this curve is nearly identical to one due to dust in the Orion molecular cloud. This result shows that dust in the Carina arm exhibits the same behavior as that in the local arm.
Measurement of optical blurring in a turbulent cloud chamber
NASA Astrophysics Data System (ADS)
Packard, Corey D.; Ciochetto, David S.; Cantrell, Will H.; Roggemann, Michael C.; Shaw, Raymond A.
2016-10-01
Earth's atmosphere can significantly impact the propagation of electromagnetic radiation, degrading the performance of imaging systems. Deleterious effects of the atmosphere include turbulence, absorption and scattering by particulates. Turbulence leads to blurring, while absorption attenuates the energy that reaches imaging sensors. The optical properties of aerosols and clouds also impact radiation propagation via scattering, resulting in decorrelation from unscattered light. Models have been proposed for calculating a point spread function (PSF) for aerosol scattering, providing a method for simulating the contrast and spatial detail expected when imaging through atmospheres with significant aerosol optical depth. However, these synthetic images and their predicating theory would benefit from comparison with measurements in a controlled environment. Recently, Michigan Technological University (MTU) has designed a novel laboratory cloud chamber. This multiphase, turbulent "Pi Chamber" is capable of pressures down to 100 hPa and temperatures from -55 to +55°C. Additionally, humidity and aerosol concentrations are controllable. These boundary conditions can be combined to form and sustain clouds in an instrumented laboratory setting for measuring the impact of clouds on radiation propagation. This paper describes an experiment to generate mixing and expansion clouds in supersaturated conditions with salt aerosols, and an example of measured imagery viewed through the generated cloud is shown. Aerosol and cloud droplet distributions measured during the experiment are used to predict scattering PSF and MTF curves, and a methodology for validating existing theory is detailed. Measured atmospheric inputs will be used to simulate aerosol-induced image degradation for comparison with measured imagery taken through actual cloud conditions. The aerosol MTF will be experimentally calculated and compared to theoretical expressions. The key result of this study is the proposal of a closure experiment for verification of theoretical aerosol effects using actual clouds in a controlled laboratory setting.
LSAH: a fast and efficient local surface feature for point cloud registration
NASA Astrophysics Data System (ADS)
Lu, Rongrong; Zhu, Feng; Wu, Qingxiao; Kong, Yanzi
2018-04-01
Point cloud registration is a fundamental task in high-level three-dimensional applications. Noise, uneven point density and varying point cloud resolution are the three main challenges for point cloud registration. In this paper, we design a robust and compact local surface descriptor called the Local Surface Angles Histogram (LSAH) and propose an effective coarse-to-fine algorithm for point cloud registration. The LSAH descriptor is formed by concatenating five normalized sub-histograms into one histogram; each sub-histogram is created by accumulating a different type of angle computed from the local surface patch. The experimental results show that our LSAH is more robust to uneven point density and varying point cloud resolution than four state-of-the-art local descriptors in terms of feature matching. Moreover, we tested our LSAH-based coarse-to-fine algorithm for point cloud registration. The experimental results demonstrate that our algorithm is robust and efficient as well.
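The abstract specifies only that five normalized angle sub-histograms are concatenated, not which angles are used, so the following is a minimal sketch of the general idea under that assumption; the function name lsah_like and the two illustrative angle types are hypothetical, not the authors' definition.

import numpy as np

def lsah_like(points, normals, center_idx, radius=0.5, bins=10):
    """Sketch of a local descriptor built from normalized angle histograms.

    points : (N, 3) XYZ coordinates; normals : (N, 3) unit normals.
    """
    c, nc = points[center_idx], normals[center_idx]
    d = points - c
    mask = np.linalg.norm(d, axis=1) < radius
    mask[center_idx] = False
    d, n = d[mask], normals[mask]
    d_unit = d / np.linalg.norm(d, axis=1, keepdims=True)

    # Two illustrative angle types (the paper uses five):
    # angle between the centre normal and the direction to each neighbour,
    # and angle between the centre normal and each neighbour's normal.
    a1 = np.arccos(np.clip(d_unit @ nc, -1.0, 1.0))
    a2 = np.arccos(np.clip(n @ nc, -1.0, 1.0))

    hists = []
    for a in (a1, a2):
        h, _ = np.histogram(a, bins=bins, range=(0.0, np.pi))
        h = h.astype(float)
        hists.append(h / h.sum() if h.sum() > 0 else h)
    return np.concatenate(hists)  # final descriptor: concatenated sub-histograms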
NASA Technical Reports Server (NTRS)
Burns, Lee; Decker, Ryan
2004-01-01
Lightning strike location and peak current are monitored operationally in the Kennedy Space Center (KSC)/Cape Canaveral Air Force Station (CCAFS) area by the Cloud to Ground Lightning Surveillance System (CGLSS). The present study compiles ten years of CGLSS data into a climatological database of all strikes recorded within a 20-mile radius of space shuttle launch platform LP39A, which serves as a convenient central point. The period of record (POR) for the database runs from January 1, 1993 to December 31, 2002. Histograms and cumulative probability curves are produced to determine the distribution of occurrence rates for the spectrum of strike intensities (given in kA). Further analysis of the database provides a description of both seasonal and interannual variations in the lightning distribution.
NASA Technical Reports Server (NTRS)
Burns, Lee; Decker, Ryan
2005-01-01
Lightning strike location and peak current are monitored operationally in the Kennedy Space Center (KSC)/Cape Canaveral Air Force Station (CCAFS) area by the Cloud to Ground Lightning Surveillance System (CGLSS). The present study compiles ten years' worth of CGLSS data into a database of near strikes. Using shuttle launch platform LP39A as a convenient central point, all strikes recorded within a 20-mile radius for the period of record (POR) from January 1, 1993 to December 31, 2002 were included in the subset database. Histograms and cumulative probability curves are produced for both strike intensity (peak current, in kA) and the corresponding magnetic inductance fields (in A/m). Results for the full POR have application to launch operations lightning monitoring and post-strike test procedures.
Zhou, Jun; Sun, Jiang Bing; Xu, Xin Yu; Cheng, Zhao Hui; Zeng, Ping; Wang, Feng Qiao; Zhang, Qiong
2015-03-25
A simple, inexpensive and efficient method based on mixed cloud point extraction (MCPE) combined with high performance liquid chromatography was developed for the simultaneous separation and determination of six flavonoids (rutin, hyperoside, quercetin-3-O-sophoroside, isoquercitrin, astragalin and quercetin) in Apocynum venetum leaf samples. The non-ionic surfactant Genapol X-080 and cetyl-trimethyl ammonium bromide (CTAB) were chosen as the mixed extraction solvent. Parameters that affect the MCPE process, such as the content of Genapol X-080 and CTAB, pH, salt content, extraction temperature and time, were investigated and optimized. Under the optimized conditions, the calibration curves for the six flavonoids were all linear, with correlation coefficients greater than 0.9994. The intra-day and inter-day precision (RSD) were below 8.1% and the limits of detection (LOD) for the six flavonoids were 1.2-5.0 ng mL(-1) (S/N=3). The proposed method was successfully used to separate and determine the six flavonoids in A. venetum leaf samples. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Davis, D. S.; Larson, H. P.; Hofmann, R.
1986-01-01
A near-infrared (1.8 to 3.5 microns) extinction curve for the Orion molecular cloud is presented. The curve is derived from high-resolution spectra of the Orion H2 source recorded from the Kuiper Airborne Observatory. The data reveal that the Orion extinction law is indistinguishable from a 1/lambda form in the near-infrared, except for strongly enhanced extinction near a wavelength of about 3 microns. The implications of these results, in the context of current interstellar grain models, are discussed.
The Segmentation of Point Clouds with K-Means and ANN (Artificial Neural Network)
NASA Astrophysics Data System (ADS)
Kuçak, R. A.; Özdemir, E.; Erol, S.
2017-05-01
Segmentation of point clouds has recently been used in many Geomatics Engineering applications such as building extraction in urban areas, Digital Terrain Model (DTM) generation and the extraction of roads or urban furniture. Segmentation is the process of dividing point clouds into layers according to their particular characteristics. The present paper discusses K-means and the self-organizing map (SOM), a type of ANN (Artificial Neural Network), as segmentation algorithms for point clouds. Point clouds generated by the photogrammetric method and by a Terrestrial Lidar System (TLS) were segmented according to surface normal, intensity and curvature, and the results were evaluated. LIDAR (Light Detection and Ranging) and photogrammetry are commonly used to obtain point clouds in many remote sensing and geodesy applications; with either method, point clouds can be obtained from terrestrial or airborne systems. In this study, the LIDAR measurements were made with a Leica C10 laser scanner, while the photogrammetric point cloud was obtained from photographs taken from the ground with a 13 MP non-metric camera.
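As a rough illustration of the K-means part of such a pipeline, the sketch below clusters points on per-point features (surface normal, intensity, curvature) with scikit-learn; the feature scaling, cluster count and function name are illustrative assumptions, not the paper's exact settings.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def segment_point_cloud(normals, intensity, curvature, n_segments=5, seed=0):
    """K-means segmentation on per-point features (normals, intensity, curvature)."""
    features = np.column_stack([normals, intensity, curvature])
    features = StandardScaler().fit_transform(features)  # put features on a common scale
    return KMeans(n_clusters=n_segments, n_init=10, random_state=seed).fit_predict(features)

# Usage with synthetic per-point attributes
rng = np.random.default_rng(0)
n = 1000
labels = segment_point_cloud(rng.normal(size=(n, 3)), rng.random(n), rng.random(n))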
Applicability Analysis of Cloth Simulation Filtering Algorithm for Mobile LIDAR Point Cloud
NASA Astrophysics Data System (ADS)
Cai, S.; Zhang, W.; Qi, J.; Wan, P.; Shao, J.; Shen, A.
2018-04-01
Classifying the original point clouds into ground and non-ground points is a key step in LiDAR (light detection and ranging) data post-processing. The cloth simulation filtering (CSF) algorithm, which is based on a physical process, has been validated as an accurate, automatic and easy-to-use algorithm for airborne LiDAR point clouds. As a new technique of three-dimensional data collection, mobile laser scanning (MLS) has been gradually applied in various fields, such as reconstruction of digital terrain models (DTM), 3D building modeling, and forest inventory and management. Compared with airborne LiDAR point clouds, mobile LiDAR point clouds have some different features (such as point density, distribution and complexity). Some filtering algorithms developed for airborne LiDAR data have been applied directly to mobile LiDAR point clouds, but they did not give satisfactory results. In this paper, we explore the ability of the CSF algorithm to handle mobile LiDAR point clouds. Three samples with different terrain shapes are selected to test the performance of this algorithm, which respectively yields total errors of 0.44 %, 0.77 % and 1.20 %. Additionally, a large-area dataset is also tested to further validate the effectiveness of this algorithm, and the results show that it can quickly and accurately separate point clouds into ground and non-ground points. In summary, this algorithm is efficient and reliable for mobile LiDAR point clouds.
Investigating the Accuracy of Point Clouds Generated for Rock Surfaces
NASA Astrophysics Data System (ADS)
Seker, D. Z.; Incekara, A. H.
2016-12-01
Point clouds produced by means of different techniques are widely used to model rocks and to obtain properties of rock surfaces such as roughness, volume and area. These point clouds can be generated by laser scanning and close-range photogrammetry. Laser scanning is the most common method of producing point clouds; the laser scanner acquires a 3D point cloud at regular intervals. In close-range photogrammetry, a point cloud can be produced from photographs taken under appropriate conditions, supported by continually developing hardware and software technology. Many photogrammetric software packages, both open source and commercial, currently support point cloud generation. The two methods are close to each other in terms of accuracy: sufficient accuracy in the mm to cm range can be obtained with a qualified digital camera or laser scanner. With both methods, field work is completed in less time than with conventional techniques, and in close-range photogrammetry any part of a rock surface can be completely represented owing to overlapping oblique photographs. Despite the similarity of the resulting data, the two methods differ considerably in cost. In this study, we investigate whether a point cloud produced from photographs can be used instead of a point cloud produced by a laser scanner. For this purpose, rock surfaces with complex and irregular shapes located on the İstanbul Technical University Ayazaga Campus were selected as the study object. The selected object is a mixture of different rock types and consists of both partly weathered and fresh parts. The study was performed on a 30 m x 10 m section of rock surface. 2D (area-based) and 3D (volume-based) analyses were performed for several regions selected from the point clouds of the surface models. The analyses showed that the point clouds from the two methods are similar and can be used as alternatives to each other. This proves that a point cloud produced from photographs, which is both economical and faster to produce, can be used in several studies instead of a point cloud produced by a laser scanner.
NASA Astrophysics Data System (ADS)
Reinhardt, K.; Emanuel, R. E.; Johnson, D. M.
2013-12-01
Mountain cloud forest (MCF) ecosystems are characterized by a high frequency of cloud fog, with vegetation enshrouded in fog. The altitudinal boundaries of cloud-fog zones co-occur with conspicuous, sharp vegetation ecotones between MCF- and non-MCF-vegetation. This suggests linkages between cloud-fog and vegetation physiology and ecosystem functioning. However, very few studies have provided a mechanistic explanation for the sharp changes in vegetation communities, or how (if) cloud-fog and vegetation are linked. We investigated ecophysiological linkages between clouds and trees in Southern Appalachian spruce-fir MCF. These refugial forests occur in only six mountain-top, sky-island populations, and are immersed in clouds on up to 80% of all growing season days. Our fundamental research question was: How are cloud-fog and cloud-forest trees linked? We measured microclimate and physiology of canopy tree species across a range of sky conditions (cloud immersed, partly cloudy, sunny). Measurements included: 1) sunlight intensity and spectral quality; 2) carbon gain and photosynthetic capacity at leaf (gas exchange) and ecosystem (eddy covariance) scales; and 3) relative limitations to carbon gain (biochemical, stomatal, hydraulic). RESULTS: 1) Midday sunlight intensity ranged from very dark (<30 μmol m-2 s-1, under cloud-immersed conditions) to very bright (>2500 μmol m-2 s-1), and was highly variable on minute-to-minute timescales whenever clouds were present in the sky. Clouds and cloud-fog increased the proportion of blue-light wavelengths 5-15% compared to sunny conditions, and altered blue:red and red:far red ratios, both of which have been shown to strongly affect stomatal functioning. 2) Cloud-fog resulted in ~50% decreased carbon gain at leaf and ecosystem scales, due to sunlight levels below photosynthetic light-saturation-points. However, greenhouse studies and light-response-curve analyses demonstrated that MCF tree species have low light-compensation points (can photosynthesize even at low light levels), and maximum photosynthesis occurs during high-light, diffuse-light conditions such as occur during diffuse 'sunflecks' inside the cloud fog. Additionally, the capacity to respond to brief, intermittent sunflecks ('photosynthetic induction', e.g., time to maximum photosynthesis) was high in our MCF species. 3) Data quantifying limitations to photosynthesis were contradictory, underscoring complex relationships among photosynthesis, light, carbon and water relations. While stomatal response to atmospheric moisture demand was sensitive (e.g., 80% drop in stomatal conductance in a <1 kPa drop in vapor-pressure-deficit in conifer species), stem xylem hydraulic conductivity suggested strong drought tolerance capabilities. CONCLUSIONS: Clouds and cloud-fog exert strong influence on canopy-tree and ecosystem carbon relations. MCFs are dynamic light environments. In these highly variable but ultimately light-limited ecosystems, vegetation must be able both to fix carbon when it is cloudy and dark and to capitalize on saturating sunlight when possible.
LiDAR Point Cloud and Stereo Image Point Cloud Fusion
2013-09-01
[Abstract garbled in extraction; recoverable fragments: a figure caption reading "LiDAR point cloud (right) highlighting linear edge features ideal for automatic registration", and a reference to Figure 12 showing the coverage of the WV1 stereo triplet.]
LIDAR Point Cloud Data Extraction and Establishment of 3D Modeling of Buildings
NASA Astrophysics Data System (ADS)
Zhang, Yujuan; Li, Xiuhai; Wang, Qiang; Liu, Jiang; Liang, Xin; Li, Dan; Ni, Chundi; Liu, Yan
2018-01-01
This paper uses Shepard's method to process the original LIDAR point cloud data and generate a regular-grid DSM, filters ground and non-ground point clouds through a double least squares method, and obtains a regularized DSM. A region-growing method is then used to segment the regularized DSM and remove non-building point clouds, yielding the building point cloud information. The Canny operator is used to extract the building edges from the segmented image, and Hough-transform line detection is applied to regularize the extracted building edges so that they are smooth and uniform. Finally, the 3D models of the buildings are established with the E3De3 software.
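The edge-regularization step described above (Canny edge detection followed by Hough line detection) can be sketched with OpenCV as follows; the file name, thresholds and line parameters are placeholders, not values from the paper.

import cv2
import numpy as np

# Building-edge extraction sketch: Canny edges followed by a probabilistic
# Hough transform for straight line segments (input file name is hypothetical).
img = cv2.imread("dsm_segment.png", cv2.IMREAD_GRAYSCALE)
if img is None:
    raise FileNotFoundError("dsm_segment.png is a placeholder input")
edges = cv2.Canny(img, threshold1=50, threshold2=150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=30, maxLineGap=5)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        print(f"edge segment: ({x1},{y1}) -> ({x2},{y2})")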
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Wenyang; Cheung, Yam; Sawant, Amit
2016-05-15
Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against that from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. Conclusions: The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantifications.
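A minimal sketch of the sparse-regression idea is given below, assuming each training point cloud has already been brought into point-to-point correspondence (e.g. via ICP) and flattened into a vector; scikit-learn's Lasso stands in for the authors' solver, and the alpha value is an arbitrary placeholder.

import numpy as np
from sklearn.linear_model import Lasso

def sparse_reconstruct(training_clouds, target_cloud, alpha=0.01):
    """Approximate a target point cloud as a sparse linear combination of
    training clouds (each an (N, 3) array already in point-to-point correspondence)."""
    A = np.stack([c.ravel() for c in training_clouds], axis=1)  # (3N, K) dictionary
    y = target_cloud.ravel()                                    # (3N,) target
    model = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    model.fit(A, y)
    reconstruction = (A @ model.coef_).reshape(target_cloud.shape)
    return reconstruction, model.coef_  # sparse weights over the training set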
Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan
2016-05-01
To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against that from the variational method. On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantifications.
Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan
2016-01-01
Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against that from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. Conclusions: The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantifications. PMID:27147347
Automatic Registration of TLS-TLS and TLS-MLS Point Clouds Using a Genetic Algorithm
Yan, Li; Xie, Hong; Chen, Changjun
2017-01-01
Registration of point clouds is a fundamental issue in Light Detection and Ranging (LiDAR) remote sensing because point clouds scanned from multiple scan stations or by different platforms need to be transformed to a uniform coordinate reference frame. This paper proposes an efficient registration method based on genetic algorithm (GA) for automatic alignment of two terrestrial LiDAR scanning (TLS) point clouds (TLS-TLS point clouds) and alignment between TLS and mobile LiDAR scanning (MLS) point clouds (TLS-MLS point clouds). The scanning station position acquired by the TLS built-in GPS and the quasi-horizontal orientation of the LiDAR sensor in data acquisition are used as constraints to narrow the search space in GA. A new fitness function to evaluate the solutions for GA, named as Normalized Sum of Matching Scores, is proposed for accurate registration. Our method is divided into five steps: selection of matching points, initialization of population, transformation of matching points, calculation of fitness values, and genetic operation. The method is verified using a TLS-TLS data set and a TLS-MLS data set. The experimental results indicate that the RMSE of registration of TLS-TLS point clouds is 3~5 mm, and that of TLS-MLS point clouds is 2~4 cm. The registration integrating the existing well-known ICP with GA is further proposed to accelerate the optimization and its optimizing time decreases by about 50%. PMID:28850100
Automatic Registration of TLS-TLS and TLS-MLS Point Clouds Using a Genetic Algorithm.
Yan, Li; Tan, Junxiang; Liu, Hua; Xie, Hong; Chen, Changjun
2017-08-29
Registration of point clouds is a fundamental issue in Light Detection and Ranging (LiDAR) remote sensing because point clouds scanned from multiple scan stations or by different platforms need to be transformed to a uniform coordinate reference frame. This paper proposes an efficient registration method based on genetic algorithm (GA) for automatic alignment of two terrestrial LiDAR scanning (TLS) point clouds (TLS-TLS point clouds) and alignment between TLS and mobile LiDAR scanning (MLS) point clouds (TLS-MLS point clouds). The scanning station position acquired by the TLS built-in GPS and the quasi-horizontal orientation of the LiDAR sensor in data acquisition are used as constraints to narrow the search space in GA. A new fitness function to evaluate the solutions for GA, named as Normalized Sum of Matching Scores, is proposed for accurate registration. Our method is divided into five steps: selection of matching points, initialization of population, transformation of matching points, calculation of fitness values, and genetic operation. The method is verified using a TLS-TLS data set and a TLS-MLS data set. The experimental results indicate that the RMSE of registration of TLS-TLS point clouds is 3~5 mm, and that of TLS-MLS point clouds is 2~4 cm. The registration integrating the existing well-known ICP with GA is further proposed to accelerate the optimization and its optimizing time decreases by about 50%.
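The sketch below is not the authors' GA with its Normalized Sum of Matching Scores fitness; it only illustrates how an evolutionary optimizer (here SciPy's differential_evolution) can search rigid-transform parameters for registration, using a nearest-neighbour RMSE as a stand-in fitness and illustrative search bounds.

import numpy as np
from scipy.optimize import differential_evolution
from scipy.spatial import cKDTree

def rigid_transform(params, pts):
    """Apply Euler-angle rotation (rx, ry, rz) and translation (tx, ty, tz) to Nx3 points."""
    rx, ry, rz, tx, ty, tz = params
    cx, sx, cy, sy, cz, sz = np.cos(rx), np.sin(rx), np.cos(ry), np.sin(ry), np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return pts @ (Rz @ Ry @ Rx).T + np.array([tx, ty, tz])

def register_evolutionary(source, target, angle_bound=np.pi, trans_bound=10.0):
    """Evolutionary search over 6 rigid parameters, minimizing nearest-neighbour RMSE."""
    tree = cKDTree(target)
    def fitness(params):
        d, _ = tree.query(rigid_transform(params, source))
        return np.sqrt(np.mean(d ** 2))
    bounds = [(-angle_bound, angle_bound)] * 3 + [(-trans_bound, trans_bound)] * 3
    result = differential_evolution(fitness, bounds, seed=0, maxiter=200, tol=1e-6)
    return result.x, result.fun  # best parameters and their RMSE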
Analysis of a jet stream induced gravity wave associated with an observed ice cloud over Greenland
NASA Astrophysics Data System (ADS)
Buss, S.; Hertzog, A.; Hostettler, C.; Bui, T. P.; Lüthi, T.; Wernli, H.
2003-11-01
A polar stratospheric ice cloud (PSC type II) was observed by airborne lidar above Greenland on 14 January 2000. It was the only observation of an ice cloud over Greenland during the SOLVE/THESEO 2000 campaign. Mesoscale simulations with the hydrostatic HRM model are presented which, in contrast to global analyses, are capable of producing a vertically propagating gravity wave that induces the low temperatures at the level of the PSC required for ice formation. The simulated minimum temperature is ~8 K below the driving analyses and ~3 K below the frost point, exactly coinciding with the location of the observed ice cloud. Despite the high elevations of the Greenland orography, the simulated gravity wave is not a mountain wave. Analyses of the horizontal wind divergence, of the background wind profiles, of backward gravity wave ray-tracing trajectories, of HRM experiments with reduced Greenland topography and of several instability diagnostics near the tropopause level provide consistent evidence that the wave is emitted by the geostrophic adjustment of a jet instability associated with an intense, rapidly evolving, anticyclonically curved jet stream. In order to evaluate the potential frequency of such non-orographic polar stratospheric cloud events, an approximate jet instability diagnostic is performed for the winter of 1999/2000. It indicates that ice PSCs are only occasionally generated by gravity waves emanating from an unstable jet.
Automatic Classification of Trees from Laser Scanning Point Clouds
NASA Astrophysics Data System (ADS)
Sirmacek, B.; Lindenbergh, R.
2015-08-01
Development of laser scanning technologies has promoted tree monitoring studies to a new level, as laser scanning point clouds enable accurate 3D measurements in a fast and environmentally friendly manner. In this paper, we introduce a probability matrix computation based algorithm for automatically classifying laser scanning point clouds into 'tree' and 'non-tree' classes. Our method uses the 3D coordinates of the laser scanning points as input and generates a new point cloud which holds a label for each point indicating whether it belongs to the 'tree' or 'non-tree' class. To do so, a grid surface is assigned to the lowest height level of the point cloud. The grid cells are filled with probability values which are calculated by checking the point density above each cell. Since the tree trunk locations appear with very high values in the probability matrix, selecting the local maxima of the grid surface helps to detect the tree trunks. Further points are assigned to tree trunks if they appear in close proximity to trunks. Since heavy mathematical computations (such as point cloud organization, detailed 3D shape detection methods, graph network generation) are not required, the proposed algorithm works very fast compared to existing methods. The tree classification results are found to be reliable even on point clouds of cities containing many different objects. As the most significant weakness, false detection of light poles, traffic signs and other objects close to trees cannot be prevented. Nevertheless, the experimental results on mobile and airborne laser scanning point clouds indicate the possible usage of the algorithm as an important step for tree growth observation, tree counting and similar applications. While the laser scanning point cloud gives the opportunity to classify even very small trees, the accuracy of the results is reduced in low point density areas further away from the scanning location. These advantages and disadvantages of the two laser scanning point cloud sources are discussed in detail.
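A minimal sketch of the density-grid idea follows: point counts are rasterized onto a horizontal grid and local maxima above a threshold are taken as trunk candidates. The cell size, threshold and function name are illustrative assumptions rather than the paper's parameters.

import numpy as np
from scipy.ndimage import maximum_filter

def trunk_candidates(points, cell=0.5, min_count=20):
    """Grid the XY plane, count points above each cell, return local-maximum cell centres."""
    xy = points[:, :2]
    mins = xy.min(axis=0)
    idx = np.floor((xy - mins) / cell).astype(int)
    shape = idx.max(axis=0) + 1
    density = np.zeros(shape)
    np.add.at(density, (idx[:, 0], idx[:, 1]), 1)   # point count per grid cell

    # A cell is a trunk candidate if it is a 3x3 local maximum with enough points above it
    local_max = (density == maximum_filter(density, size=3)) & (density >= min_count)
    rows, cols = np.nonzero(local_max)
    return mins + (np.column_stack([rows, cols]) + 0.5) * cell  # XY positions of candidates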
Georeferencing UAS Derivatives Through Point Cloud Registration with Archived Lidar Datasets
NASA Astrophysics Data System (ADS)
Magtalas, M. S. L. Y.; Aves, J. C. L.; Blanco, A. C.
2016-10-01
Georeferencing gathered images is a common step before performing spatial analysis and other processes on datasets acquired using unmanned aerial systems (UAS). Spatial information is applied to aerial images or their derivatives either through onboard GPS (Global Positioning System) geotagging or by tying models to GCPs (Ground Control Points) acquired in the field. Currently, UAS (Unmanned Aerial System) derivatives are limited to meter-level accuracy when their generation is unaided by points of known position on the ground. The use of ground control points established using survey-grade GPS or GNSS receivers can greatly reduce model errors to centimeter levels. However, this comes with additional costs, not only for instrument acquisition and survey operations, but also in actual time spent in the field. This study uses a workflow for cloud-based post-processing of UAS data in combination with already existing LiDAR data. The georeferencing of the UAV point cloud is executed using the Iterative Closest Point (ICP) algorithm. It is applied through the open-source CloudCompare software (Girardeau-Montaut, 2006) on a 'skeleton point cloud'. This skeleton point cloud consists of manually extracted features consistent in both the LiDAR and UAV data; for this cloud, roads and buildings with minimal deviations given their differing dates of acquisition are considered consistent. Transformation parameters are computed for the skeleton cloud and then applied to the whole UAS dataset. In addition, a separate cloud consisting of non-vegetation features automatically derived using the CANUPO classification algorithm (Brodu and Lague, 2012) was used to generate a separate set of parameters. A ground survey was done to validate the transformed cloud. An RMSE value of around 16 centimeters was found when comparing validation data to the models georeferenced using the CANUPO cloud and the manual skeleton cloud. Cloud-to-cloud distance computations of the CANUPO and manual skeleton clouds both gave values of around 0.67 meters, at 1.73 standard deviation.
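The study applies ICP through CloudCompare; purely as an illustration of the same alignment step in code, the sketch below runs point-to-point ICP on two NumPy arrays with Open3D. The distance threshold and identity initialization are placeholder choices, not values from the study.

import numpy as np
import open3d as o3d

def icp_align(source_xyz, target_xyz, max_dist=1.0, init=np.eye(4)):
    """Point-to-point ICP between two Nx3 arrays (e.g. UAV 'skeleton' cloud vs LiDAR)."""
    src = o3d.geometry.PointCloud()
    src.points = o3d.utility.Vector3dVector(source_xyz)
    tgt = o3d.geometry.PointCloud()
    tgt.points = o3d.utility.Vector3dVector(target_xyz)
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_dist, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation  # 4x4 matrix, to be applied to the whole UAS dataset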
AceCloud: Molecular Dynamics Simulations in the Cloud.
Harvey, M J; De Fabritiis, G
2015-05-26
We present AceCloud, an on-demand service for molecular dynamics simulations. AceCloud is designed to facilitate the secure execution of large ensembles of simulations on an external cloud computing service (currently Amazon Web Services). The AceCloud client, integrated into the ACEMD molecular dynamics package, provides an easy-to-use interface that abstracts all aspects of interaction with the cloud services. This gives the user the experience that all simulations are running on their local machine, minimizing the learning curve typically associated with the transition to using high performance computing services.
NASA Astrophysics Data System (ADS)
Cura, Rémi; Perret, Julien; Paparoditis, Nicolas
2017-05-01
In addition to more traditional geographical data such as images (rasters) and vectors, point cloud data are becoming increasingly available. Such data are appreciated for their precision and true three-dimensional (3D) nature. However, managing point clouds can be difficult due to scaling problems and specificities of this data type. Several methods exist but are usually fairly specialised and solve only one aspect of the management problem. In this work, we propose a comprehensive and efficient point cloud management system based on a database server that works on groups of points (patches) rather than individual points. This system is specifically designed to cover the basic needs of point cloud users: fast loading, compressed storage, powerful patch and point filtering, easy data access and exporting, and integrated processing. Moreover, the proposed system fully integrates metadata (like sensor position) and can conjointly use point clouds with other geospatial data, such as images, vectors, topology and other point clouds. Point cloud (parallel) processing can be done in-base with fast prototyping capabilities. Lastly, the system is built on open source technologies; therefore it can be easily extended and customised. We test the proposed system with several billion points obtained from Lidar (aerial and terrestrial) and stereo-vision. We demonstrate loading speeds in the ~50 million pts/h per process range, transparent-to-the-user compression at ratios greater than 2:1 (up to 4:1), patch filtering in the 0.1 to 1 s range, and output in the 0.1 million pts/s per process range, along with classical processing methods, such as object detection.
NASA Astrophysics Data System (ADS)
Zhao, Yu; Shi, Chen-Xiao; Kwon, Ki-Chul; Piao, Yan-Ling; Piao, Mei-Lan; Kim, Nam
2018-03-01
We propose a fast calculation method for a computer-generated hologram (CGH) of real objects that uses a point cloud gridding method. The depth information of the scene is acquired using a depth camera and the point cloud model is reconstructed virtually. Because each point of the point cloud is distributed precisely to the exact coordinates of each layer, each point of the point cloud can be classified into grids according to its depth. A diffraction calculation is performed on the grids using a fast Fourier transform (FFT) to obtain a CGH. The computational complexity is reduced dramatically in comparison with conventional methods. The feasibility of the proposed method was confirmed by numerical and optical experiments.
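A minimal sketch of the layer-wise idea, under simplifying assumptions (square sampling grid, angular-spectrum propagation, phase-only output), is shown below; the wavelength, pixel pitch and depth rounding are placeholders, and the exact diffraction formulation used in the paper may differ.

import numpy as np

def layered_cgh(points, amplitudes, wavelength=532e-9, pitch=8e-6, size=512):
    """Sketch of point-cloud-gridding CGH: bin points into depth layers, place them
    on a grid per layer, propagate each layer with an angular-spectrum kernel via FFT,
    and sum the fields at the hologram plane."""
    fx = np.fft.fftfreq(size, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    k = 2 * np.pi / wavelength

    # Group points into discrete depth layers (depth rounded to mm here)
    z_layers, layer_idx = np.unique(np.round(points[:, 2], 3), return_inverse=True)
    field = np.zeros((size, size), dtype=complex)
    half = size // 2
    for i, z in enumerate(z_layers):
        layer = np.zeros((size, size), dtype=complex)
        pts = points[layer_idx == i]
        amp = amplitudes[layer_idx == i]
        cols = np.clip((pts[:, 0] / pitch + half).astype(int), 0, size - 1)
        rows = np.clip((pts[:, 1] / pitch + half).astype(int), 0, size - 1)
        layer[rows, cols] = amp
        arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
        H = np.exp(1j * k * z * np.sqrt(np.maximum(arg, 0)))  # angular-spectrum kernel
        field += np.fft.ifft2(np.fft.fft2(layer) * H)          # one FFT pair per layer
    return np.angle(field)  # phase-only hologram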
NASA Technical Reports Server (NTRS)
Scowcroft, Victoria; Freedman, Wendy L.; Madore, Barry F.; Monson, Andrew J.; Persson, S. E.; Seibert, Mark; Rigby, Jane R.; Sturch, Laura
2011-01-01
The Carnegie Hubble Program (CHP) is designed to improve the extragalactic distance scale using data from the post-cryogenic era of Spitzer. The ultimate goal is a determination of the Hubble constant to an accuracy of 2%. This paper is the first in a series on the Cepheid population of the Large Magellanic Cloud, and focuses on the period-luminosity relations (Leavitt laws) that will be used, in conjunction with observations of Milky Way Cepheids, to set the slope and zero-point of the Cepheid distance scale in the mid-infrared. To this end, we have obtained uniformly-sampled light curves for 85 LMC Cepheids, having periods between 6 and 140 days. Period- luminosity and period-color relations are presented in the 3.6 micron and 4.5 micron bands. We demonstrate that the 3.6 micron band is a superb distance indicator. The cyclical variation of the [3.6]-[4.5] color has been measured for the first time. We attribute the amplitude and phase of the color curves to the dissociation and recombination of CO molecules in the Cepheid s atmosphere. The CO affects only the 4.5 micron flux making it a potential metallicity indicator.
Processing Uav and LIDAR Point Clouds in Grass GIS
NASA Astrophysics Data System (ADS)
Petras, V.; Petrasova, A.; Jeziorska, J.; Mitasova, H.
2016-06-01
Today's methods of acquiring Earth surface data, namely lidar and unmanned aerial vehicle (UAV) imagery, non-selectively collect or generate large amounts of points. Point clouds from different sources vary in their properties such as number of returns, density, or quality. We present a set of tools with applications for different types of point clouds obtained by a lidar scanner, the structure from motion (SfM) technique, and a low-cost 3D scanner. To take advantage of the vertical structure of multiple return lidar point clouds, we demonstrate tools to process them using 3D raster techniques which allow, for example, the development of custom vegetation classification methods. Dense point clouds obtained from UAV imagery, often containing redundant points, can be decimated using various techniques before further processing. We implemented and compared several decimation techniques in regard to their performance and the final digital surface model (DSM). Finally, we describe the processing of a point cloud from a low-cost 3D scanner, namely Microsoft Kinect, and its application for interaction with physical models. All the presented tools are open source and integrated in GRASS GIS, a multi-purpose open source GIS with remote sensing capabilities. The tools integrate with other open source projects, specifically the Point Data Abstraction Library (PDAL), the Point Cloud Library (PCL), and the OpenKinect libfreenect2 library, to benefit from the open source point cloud ecosystem. The implementation in GRASS GIS ensures long term maintenance and reproducibility by the scientific community but also by the original authors themselves.
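As a simple illustration of the decimation idea mentioned above (outside of GRASS GIS itself), the sketch below shows two common strategies on a NumPy array of points: keeping every n-th point and keeping one point per voxel; the parameter values are arbitrary.

import numpy as np

def decimate_every_nth(points, n=10):
    """Count-based decimation: keep every n-th point."""
    return points[::n]

def decimate_voxel(points, voxel=0.25):
    """Grid-based decimation: keep the first point falling into each voxel."""
    keys = np.floor(points[:, :3] / voxel).astype(np.int64)
    _, first_idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(first_idx)]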
a Global Registration Algorithm of the Single-Closed Ring Multi-Stations Point Cloud
NASA Astrophysics Data System (ADS)
Yang, R.; Pan, L.; Xiang, Z.; Zeng, H.
2018-04-01
Aimed at the global registration problem of a single closed-ring, multi-station point cloud, a formula for calculating the error of the rotation matrix was constructed according to the definition of the error. A global registration algorithm for the multi-station point cloud was derived to minimize the error of the rotation matrix, and fast-computing formulas for the transformation matrix were given, together with implementation steps and a simulation experiment scheme. Comparing three different processing schemes for the multi-station point cloud, the experimental results verified the effectiveness of the new global registration method, which can effectively complete the global registration of the point cloud.
NASA Astrophysics Data System (ADS)
Xu, Y.; Sun, Z.; Boerner, R.; Koch, T.; Hoegner, L.; Stilla, U.
2018-04-01
In this work, we report a novel way of generating a ground truth dataset for analyzing point clouds from different sensors and for the validation of algorithms. Instead of directly labeling a large amount of 3D points, which requires time-consuming manual work, a multi-resolution 3D voxel grid of the testing site is generated. Then, with the help of a set of basic labeled points from the reference dataset, we can generate a 3D labeled space of the entire testing site at different resolutions. Specifically, an octree-based voxel structure is applied to voxelize the annotated reference point cloud, by which all the points are organized in 3D grids of multiple resolutions. When automatically annotating new testing point clouds, a voting-based approach is applied to the labeled points within the multi-resolution voxels, in order to assign a semantic label to the 3D space represented by each voxel. Lastly, robust line- and plane-based fast registration methods are developed for aligning point clouds obtained via various sensors. Benefiting from the labeled 3D spatial information, we can easily create new annotated 3D point clouds from different sensors of the same scene directly by considering the labels of the 3D spaces in which the points are located, which is convenient for the validation and evaluation of algorithms related to point cloud interpretation and semantic segmentation.
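A minimal sketch of the voxel-voting step, at a single resolution and with a plain dictionary instead of an octree, might look as follows; the voxel size and the default label for empty voxels are assumptions, not the paper's values.

import numpy as np
from collections import defaultdict

def voxel_vote_labels(ref_points, ref_labels, new_points, voxel=0.5, default=-1):
    """Transfer labels from an annotated reference cloud to a new cloud:
    each voxel takes the majority label of the reference points it contains,
    and new points inherit the label of the voxel they fall into."""
    def keys(pts):
        return [tuple(v) for v in np.floor(pts / voxel).astype(np.int64)]

    votes = defaultdict(lambda: defaultdict(int))
    for k, lab in zip(keys(ref_points), ref_labels):
        votes[k][lab] += 1                      # count label votes per voxel
    voxel_label = {k: max(v, key=v.get) for k, v in votes.items()}
    return np.array([voxel_label.get(k, default) for k in keys(new_points)])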
An Elliptic Curve Based Schnorr Cloud Security Model in Distributed Environment
Muthurajan, Vinothkumar; Narayanasamy, Balaji
2016-01-01
Cloud computing requires the security upgrade in data transmission approaches. In general, key-based encryption/decryption (symmetric and asymmetric) mechanisms ensure the secure data transfer between the devices. The symmetric key mechanisms (pseudorandom function) provide minimum protection level compared to asymmetric key (RSA, AES, and ECC) schemes. The presence of expired content and the irrelevant resources cause unauthorized data access adversely. This paper investigates how the integrity and secure data transfer are improved based on the Elliptic Curve based Schnorr scheme. This paper proposes a virtual machine based cloud model with Hybrid Cloud Security Algorithm (HCSA) to remove the expired content. The HCSA-based auditing improves the malicious activity prediction during the data transfer. The duplication in the cloud server degrades the performance of EC-Schnorr based encryption schemes. This paper utilizes the blooming filter concept to avoid the cloud server duplication. The combination of EC-Schnorr and blooming filter efficiently improves the security performance. The comparative analysis between proposed HCSA and the existing Distributed Hash Table (DHT) regarding execution time, computational overhead, and auditing time with auditing requests and servers confirms the effectiveness of HCSA in the cloud security model creation. PMID:26981584
An Elliptic Curve Based Schnorr Cloud Security Model in Distributed Environment.
Muthurajan, Vinothkumar; Narayanasamy, Balaji
2016-01-01
Cloud computing requires the security upgrade in data transmission approaches. In general, key-based encryption/decryption (symmetric and asymmetric) mechanisms ensure the secure data transfer between the devices. The symmetric key mechanisms (pseudorandom function) provide minimum protection level compared to asymmetric key (RSA, AES, and ECC) schemes. The presence of expired content and the irrelevant resources cause unauthorized data access adversely. This paper investigates how the integrity and secure data transfer are improved based on the Elliptic Curve based Schnorr scheme. This paper proposes a virtual machine based cloud model with Hybrid Cloud Security Algorithm (HCSA) to remove the expired content. The HCSA-based auditing improves the malicious activity prediction during the data transfer. The duplication in the cloud server degrades the performance of EC-Schnorr based encryption schemes. This paper utilizes the blooming filter concept to avoid the cloud server duplication. The combination of EC-Schnorr and blooming filter efficiently improves the security performance. The comparative analysis between proposed HCSA and the existing Distributed Hash Table (DHT) regarding execution time, computational overhead, and auditing time with auditing requests and servers confirms the effectiveness of HCSA in the cloud security model creation.
Dusty Donuts: Modeling the Reverberation Response of the Circumnuclear Dusty Torus Emission in AGN
NASA Astrophysics Data System (ADS)
Almeyda, Triana R.
The obscuring circumnuclear torus of dusty molecular gas is one of the major components of AGN (active galactic nuclei), yet its size, composition, and structure are not well understood. These properties can be studied by analyzing the temporal variations of the infrared (IR) dust emission from the torus in response to variations in the AGN continuum luminosity; a technique known as reverberation mapping. In a recent international campaign, 12 AGN were monitored using the Spitzer Space Telescope and several ground-based telescopes, providing a unique set of well-sampled mid-IR and optical light curves which are required in order to determine the approximate sizes of the tori in these AGN. To help extract structural information contained in the data a computer model, TORMAC, has been developed that simulates the reverberation response of the clumpy torus emission. Given an input optical light curve, the code computes the emission of a 3D ensemble of dust clouds as a function of time at selected IR wavelengths, taking into account light travel delays. A large library of torus reverberation response simulations has been constructed, to investigate the effects of various geometrical and structural properties such as inclination, cloud distribution, disk half-opening angle, and radial depth. The effects of dust cloud orientation, cloud optical depth, anisotropy of the illuminating AGN radiation field, dust cloud shadowing, and cloud occultation are also explored in detail. TORMAC was also used to generate synthetic IR light curves for the Seyfert 1 galaxy, NGC 6418, using the observed optical light curve as the input, to investigate how the torus and dust cloud properties incorporated in the code affect the results obtained from reverberation mapping. This dissertation presents the most comprehensive investigation to date showing that radiative transfer effects within the torus and anisotropic illumination of the torus can strongly influence the torus IR response at different wavelengths, and should be accounted for when interpreting reverberation mapping data. TORMAC provides a powerful modeling tool that can generate simulated IR light curves for direct comparison to observations. As many types of astronomical sources are both variable and embedded in, or surrounded by, dust, TORMAC also has applications for dust reverberation studies well beyond the AGN observed in the Spitzer monitoring campaign.
The One to Multiple Automatic High Accuracy Registration of Terrestrial LIDAR and Optical Images
NASA Astrophysics Data System (ADS)
Wang, Y.; Hu, C.; Xia, G.; Xue, H.
2018-04-01
The registration of terrestrial laser point clouds and close-range images is a key element of high-precision 3D reconstruction of cultural relics. In view of the current requirement for high texture resolution in the cultural relic field, registering point cloud and image data for object reconstruction leads to the problem of matching one point cloud to multiple images. In current commercial software, the pairwise registration of the two kinds of data is realized by manually partitioning the point cloud data, manually matching point cloud and image data, and manually selecting corresponding two-dimensional points in the image and the point cloud. This process not only greatly reduces working efficiency, but also affects the precision of the registration and causes texture seams in the colored point cloud. In order to solve the above problems, this paper takes the whole object image as intermediate data and uses matching technology to realize the automatic one-to-many correspondence between the point cloud and multiple images. Matching between the reflection intensity image from the central projection of the point cloud and the optical image is applied to realize automatic matching of corresponding feature points, and the Rodrigues-matrix spatial similarity transformation model with iterative weight selection is used to realize the automatic high-accuracy registration of the two kinds of data. This method is expected to serve high-precision and high-efficiency automatic 3D reconstruction of cultural relic objects, and has scientific research value and practical significance.
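As an illustration of the final transformation step, the sketch below estimates a 3D spatial similarity transformation (scale, rotation, translation) from matched point pairs using the SVD-based Umeyama/Horn solution; it stands in for, and is not identical to, the Rodrigues-matrix formulation with iterative weight selection used in the paper.

import numpy as np

def similarity_transform(src, dst):
    """Estimate s, R, t minimizing ||dst - (s * R @ src + t)|| over matched 3D pairs."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                            # guard against reflections
    R = U @ S @ Vt
    scale = np.trace(np.diag(D) @ S) / np.mean(np.sum(xs ** 2, axis=1))
    t = mu_d - scale * R @ mu_s
    return scale, R, t

# Usage: given matched feature points, map the point cloud into the image frame
# transformed = scale * points @ R.T + t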
Mysterious eclipses in the light curve of KIC8462852: a possible explanation
NASA Astrophysics Data System (ADS)
Neslušan, L.; Budaj, J.
2017-04-01
Context. Apart from thousands of "regular" exoplanet candidates, the Kepler satellite has discovered a small number of stars exhibiting peculiar eclipse-like events. They are most probably caused by disintegrating bodies transiting in front of the star. However, the nature of the bodies and obscuration events, such as those observed in KIC 8462852, remains mysterious. A swarm of comets or artificial alien mega-structures have been proposed as an explanation for the latter object. Aims: We explore the possibility that such eclipses are caused by the dust clouds associated with massive parent bodies orbiting the host star. Methods: We assumed a massive object and a simple model of the dust cloud surrounding the object. Then, we used numerical integration to simulate the evolution of the cloud, its parent body, and the resulting light-curves as they orbit and transit the star. Results: We found that it is possible to reproduce the basic features in the light-curve of KIC 8462852 with only four objects enshrouded in dust clouds. The fact that they are all on similar orbits and that such models require only a handful of free parameters provides additional support for this hypothesis. Conclusions: This model provides an alternative to the comet scenario. With such physical models at hand, at present, there is no need to invoke alien mega-structures for an explanation of these light-curves.
Brute Force Matching Between Camera Shots and Synthetic Images from Point Clouds
NASA Astrophysics Data System (ADS)
Boerner, R.; Kröhnert, M.
2016-06-01
3D point clouds, acquired by state-of-the-art terrestrial laser scanning (TLS) techniques, provide spatial information with accuracies of up to several millimetres. Unfortunately, common TLS data carry no spectral information about the covered scene, yet the matching of TLS data with images is important for monoplotting purposes and point cloud colouration. Well-established methods solve this issue by matching close-range images with point cloud data, either by fitting optical camera systems on top of laser scanners or by using ground control points. The approach addressed in this paper aims at matching 2D images with 3D point cloud data from a freely moving camera within an environment covered by a large 3D point cloud, e.g. a 3D city model. The key advantage of the free movement is its benefit to augmented reality applications and real-time measurements. To this end, a so-called real image, captured by a smartphone camera, has to be matched with a so-called synthetic image, which consists of 3D point cloud data reverse-projected to a synthetic projection centre whose exterior orientation parameters match those of the image, assuming an ideal distortion-free camera.
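The synthetic-image generation described above can be sketched as a simple distortion-free pinhole projection with a z-buffer, as below; the camera matrix, image size and colour handling are illustrative assumptions rather than the paper's implementation.

import numpy as np

def render_synthetic_image(points, colors, K, R, t, width=640, height=480):
    """Project a 3D point cloud into a synthetic image with a distortion-free
    pinhole camera (intrinsics K, exterior orientation R, t), keeping the
    nearest point per pixel via a simple z-buffer."""
    cam = (R @ points.T + t.reshape(3, 1)).T          # world -> camera frame
    in_front = cam[:, 2] > 0
    cam, col = cam[in_front], colors[in_front]
    uvw = (K @ cam.T).T
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    ok = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    u, v, z, col = u[ok], v[ok], cam[ok, 2], col[ok]

    image = np.zeros((height, width, col.shape[1]), dtype=col.dtype)
    zbuf = np.full((height, width), np.inf)
    for ui, vi, zi, ci in zip(u, v, z, col):
        if zi < zbuf[vi, ui]:                          # keep the closest point per pixel
            zbuf[vi, ui] = zi
            image[vi, ui] = ci
    return image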
An Approach of Web-based Point Cloud Visualization without Plug-in
NASA Astrophysics Data System (ADS)
Ye, Mengxuan; Wei, Shuangfeng; Zhang, Dongmei
2016-11-01
With the advances in three-dimensional laser scanning technology, the demand for visualization of massive point clouds is increasingly urgent. Until the introduction of WebGL a few years ago, point cloud visualization was limited to desktop-based solutions; since then, several web renderers have become available. This paper addresses the current issues in web-based point cloud visualization and proposes a method of web-based point cloud visualization without plug-ins. The method combines ASP.NET and WebGL technologies, using the spatial database PostgreSQL to store data and the open web technologies HTML5 and CSS3 to implement the user interface; an online visualization system for 3D point clouds is developed in Javascript with web interactions. Finally, the method is applied to a real case. The experiment proves that the new model is of great practical value and avoids the shortcomings of existing WebGIS solutions.
Model for Semantically Rich Point Cloud Data
NASA Astrophysics Data System (ADS)
Poux, F.; Neuville, R.; Hallot, P.; Billen, R.
2017-10-01
This paper proposes an interoperable model for managing high dimensional point clouds while integrating semantics. Point clouds from sensors are a direct source of information physically describing a 3D state of the recorded environment. As such, they are an exhaustive representation of the real world at every scale: 3D reality-based spatial data. Their generation is increasingly fast, but processing routines and data models lack the knowledge to reason from information extraction rather than interpretation. The enhanced Smart Point Cloud model developed here brings intelligence to point clouds via three connected meta-models while linking available knowledge and classification procedures that permit semantic injection. Interoperability drives the model's adaptation to potentially many applications through specialized domain ontologies. A first prototype is implemented in Python and a PostgreSQL database and allows semantic and spatial concepts to be combined for basic hybrid queries on different point clouds.
Gürkan, Ramazan; Kır, Ufuk; Altunay, Nail
2015-08-01
The determination of inorganic arsenic species in water, beverages and foods has become crucial in recent years, because arsenic species are considered carcinogenic and are found at high concentrations in such samples. This communication describes a new cloud-point extraction (CPE) method for the determination of low quantities of arsenic species, by UV-Visible spectrophotometry (UV-Vis), in samples purchased from the local market. The method is based on a selective ternary complex of As(V) with acridine orange (AOH(+)), a versatile cationic fluorescent dye, in the presence of tartaric acid and polyethylene glycol tert-octylphenyl ether (Triton X-114) at pH 5.0. Under the optimized conditions, a preconcentration factor of 65 and a detection limit (3S blank/m) of 1.14 μg L(-1) were obtained from the calibration curve constructed in the range of 4-450 μg L(-1), with a correlation coefficient of 0.9932 for As(V). The method is validated by the analysis of certified reference materials (CRMs). Copyright © 2015 Elsevier Ltd. All rights reserved.
Self-Similar Spin Images for Point Cloud Matching
NASA Astrophysics Data System (ADS)
Pulido, Daniel
The rapid growth of Light Detection And Ranging (Lidar) technologies that collect, process, and disseminate 3D point clouds has allowed for increasingly accurate spatial modeling and analysis of the real world. Lidar sensors can generate massive 3D point clouds of a collection area that provide highly detailed spatial and radiometric information. However, a Lidar collection can be expensive and time consuming. Simultaneously, the growth of crowdsourced Web 2.0 data (e.g., Flickr, OpenStreetMap) has provided researchers with a wealth of freely available data sources that cover a variety of geographic areas. Crowdsourced data can be of varying quality and density. In addition, since it is typically not collected as part of a dedicated experiment but rather volunteered, when and where the data is collected is arbitrary. The integration of these two sources of geoinformation can provide researchers with the ability to generate products and derive intelligence that mitigate their respective disadvantages and combine their advantages. Therefore, this research will address the problem of fusing two point clouds from potentially different sources. Specifically, we will consider two problems: scale matching and feature matching. Scale matching consists of computing feature metrics of each point cloud and analyzing their distributions to determine scale differences. Feature matching consists of defining local descriptors that are invariant to common dataset distortions (e.g., rotation and translation). Additionally, after matching the point clouds they can be registered and processed further (e.g., change detection). The objective of this research is to develop novel methods to fuse and enhance two point clouds from potentially disparate sources (e.g., Lidar and crowdsourced Web 2.0 datasets). The scope of this research is to investigate both scale and feature matching between two point clouds. The specific focus of this research will be on developing a novel local descriptor based on the concept of self-similarity to aid in the scale and feature matching steps. An open problem in fusion is how best to extract features from two point clouds and then perform feature-based matching. The proposed approach for this matching step is the use of local self-similarity as an invariant measure to match features. In particular, the proposed approach is to combine the concept of local self-similarity with a well-known feature descriptor, Spin Images, and thereby define "Self-Similar Spin Images". This approach is then extended to the case of matching two point clouds in very different coordinate systems (e.g., a geo-referenced Lidar point cloud and a stereo-image-derived point cloud without geo-referencing). The use of Self-Similar Spin Images is again applied to address this problem by introducing a "Self-Similar Keyscale" that matches the spatial scales of two point clouds. Another open problem is how best to detect changes in content between two point clouds. A method is proposed to find changes between two point clouds by analyzing the order statistics of the nearest neighbors between the two clouds, and thereby define the "Nearest Neighbor Order Statistic" method. Note that the well-known Hausdorff distance is a special case, being simply the maximum order statistic. Studying the entire histogram of these nearest-neighbor distances is therefore expected to yield a more robust method for detecting points that are present in one cloud but not the other. This approach is applied at multiple resolutions, so that changes detected at the coarsest level reveal large missing targets and finer levels reveal smaller ones.
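As a hedged illustration of the order-statistic idea (a sketch, not the author's implementation), the snippet below computes, for each point of one cloud, the distance to its nearest neighbour in the other cloud and inspects the resulting order statistics: the maximum recovers the one-sided Hausdorff distance, while an upper quantile gives a more outlier-tolerant change indicator. The array shapes and the 0.95 quantile are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def nn_distances(cloud_a, cloud_b):
    """Distance from each point of cloud_a to its nearest neighbour in cloud_b."""
    d, _ = cKDTree(cloud_b).query(cloud_a, k=1)
    return d

# Illustrative use with random stand-in data ((N, 3) and (M, 3) arrays).
rng = np.random.default_rng(0)
cloud_a = rng.random((1000, 3))
cloud_b = rng.random((1200, 3))

d = nn_distances(cloud_a, cloud_b)
order_stats = np.sort(d)                   # the nearest-neighbour order statistics
hausdorff_ab = order_stats[-1]             # one-sided Hausdorff distance = maximum order statistic
threshold = np.quantile(d, 0.95)           # a cut-off less sensitive to isolated outliers
possibly_missing = cloud_a[d > threshold]  # points of A with no close counterpart in B
```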
Observational evidence of dust evolution in galactic extinction curves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cecchi-Pestellini, Cesare; Casu, Silvia; Mulas, Giacomo
Although structural and optical properties of hydrogenated amorphous carbons are known to respond to varying physical conditions, most conventional extinction models are basically curve fits with modest predictive power. We compare an evolutionary model of the physical properties of carbonaceous grain mantles with their determination by homogeneously fitting observationally derived Galactic extinction curves with the same physically well-defined dust model. We find that a large sample of observed Galactic extinction curves is compatible with the evolutionary scenario underlying such a model, requiring physical conditions fully consistent with standard density, temperature, radiation field intensity, and average age of diffuse interstellar clouds. Hence, through the study of interstellar extinction we may, in principle, understand the evolutionary history of the diffuse interstellar clouds.
NASA Astrophysics Data System (ADS)
Zlinszky, András; Schroiff, Anke; Otepka, Johannes; Mandlburger, Gottfried; Pfeifer, Norbert
2014-05-01
LIDAR point clouds hold valuable information for land cover and vegetation analysis, not only in the spatial distribution of the points but also in their various attributes. However, LIDAR point clouds are rarely used for visual interpretation, since for most users, the point cloud is difficult to interpret compared to passive optical imagery. Meanwhile, point cloud viewing software is available allowing interactive 3D interpretation, but typically displaying only one attribute at a time. This results in a large number of points with the same colour, crowding the scene and often obscuring detail. We developed a scheme for mapping information from multiple LIDAR point attributes to the Red, Green, and Blue channels of a widely used LIDAR data format, which are otherwise mostly used to add information from imagery to create "photorealistic" point clouds. The possible combinations of parameters are therefore represented in a wide range of colours, but relative differences in individual parameter values of points can be well understood. The visualization was implemented in OPALS software, using a simple and robust batch script, and is viewer independent since the information is stored in the point cloud data file itself. In our case, the following colour channel assignment delivered the best results: echo amplitude in the Red, echo width in the Green and normalized height above a Digital Terrain Model in the Blue channel. With correct parameter scaling (but completely without point classification), points belonging to asphalt and bare soil are dark red, low grassland and crop vegetation are bright red to yellow, shrubs and low trees are green and high trees are blue. Depending on roof material and DTM quality, buildings are shown from red through purple to dark blue. Erroneously high or low points, or points with incorrect amplitude or echo width, usually have colours contrasting with terrain or vegetation. This allows efficient visual interpretation of the point cloud in planar, profile and 3D views since it reduces crowding of the scene and delivers intuitive contextual information. The resulting visualization has proved useful for vegetation analysis for habitat mapping, and can also be applied as a first step for point cloud level classification. An interactive demonstration of the visualization script is shown during poster attendance, including the opportunity to view your own point cloud sample files.
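To make the channel assignment concrete, here is a minimal numpy sketch (not the OPALS batch script itself) that scales echo amplitude, echo width and normalized height into 8-bit Red, Green and Blue values; the scaling ranges are assumptions and would be tuned per dataset, and writing the colours back into the LAS/LAZ file is left out.

```python
import numpy as np

def scale_to_8bit(values, lo, hi):
    """Linearly map values in [lo, hi] to 0..255, clipping outliers."""
    v = np.clip((values - lo) / (hi - lo), 0.0, 1.0)
    return (v * 255).astype(np.uint8)

def attributes_to_rgb(amplitude, echo_width, height_above_dtm):
    """Pack three LIDAR attributes into the RGB channels of a point record."""
    r = scale_to_8bit(amplitude, 0.0, 200.0)        # echo amplitude -> Red    (range assumed)
    g = scale_to_8bit(echo_width, 4.0, 10.0)        # echo width [ns] -> Green (range assumed)
    b = scale_to_8bit(height_above_dtm, 0.0, 30.0)  # height above DTM [m] -> Blue (range assumed)
    return np.stack([r, g, b], axis=1)
```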
Rosnell, Tomi; Honkavaara, Eija
2012-01-01
The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one of the methods is based on BAE Systems' SOCET SET classical commercial photogrammetric software and the other is built using Microsoft's Photosynth™ service available on the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, although some artifacts were also detected. The point clouds from the Photosynth processing were sparser and noisier, which is to a large extent due to the fact that the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and that with rigorous processing it is possible to reach results that are consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations for the properties of the imaging sensor, data collection and processing of UAV image data to ensure accurate point cloud generation. PMID:22368479
Gürkan, Ramazan; Korkmaz, Sema; Altunay, Nail
2016-08-01
A new ultrasonic-thermostatic-assisted cloud point extraction procedure (UTA-CPE) was developed for the preconcentration of trace levels of vanadium (V) and molybdenum (Mo) in milk, vegetables and foodstuffs prior to determination via flame atomic absorption spectrometry (FAAS). The method is based on the ion-association of stable anionic oxalate complexes of V(V) and Mo(VI) with [9-(diethylamino)benzo[a]phenoxazin-5-ylidene]azanium; sulfate (Nile blue A) at pH 4.5, and subsequent extraction of the formed ion-association complexes into the micellar phase of polyoxyethylene(7.5)nonylphenyl ether (PONPE 7.5). The UTA-CPE is greatly simplified and accelerated compared to traditional cloud point extraction (CPE). The analytical parameters optimized are solution pH, the concentrations of the complexing reagents (oxalate and Nile blue A), the PONPE 7.5 concentration, electrolyte concentration, sample volume, temperature and ultrasonic power. Under the optimum conditions, the calibration curves for Mo(VI) and V(V) are obtained in the concentration ranges of 3-340 µg/L and 5-250 µg/L with high sensitivity enhancement factors (EFs) of 145 and 115, respectively. The limits of detection (LODs) for Mo(VI) and V(V) are 0.86 and 1.55 µg/L, respectively. The proposed method demonstrated good performance, with relative standard deviations (RSD %) of ≤3.5% and spiked recoveries of 95.7-102.3%. The accuracy of the method was assessed by analysis of two standard reference materials (SRMs) and recoveries of spiked solutions. The method was successfully applied to the determination of trace amounts of Mo(VI) and V(V) in milk, vegetables and foodstuffs with satisfactory results.
A Robotic Platform for Corn Seedling Morphological Traits Characterization
Lu, Hang; Tang, Lie; Whitham, Steven A.; Mei, Yu
2017-01-01
Crop breeding plays an important role in modern agriculture, improving plant performance, and increasing yield. Identifying the genes that are responsible for beneficial traits greatly facilitates plant breeding efforts for increasing crop production. However, associating genes and their functions with agronomic traits requires researchers to observe, measure, record, and analyze phenotypes of large numbers of plants, a repetitive and error-prone job if performed manually. An automated seedling phenotyping system aimed at replacing manual measurement, reducing sampling time, and increasing the allowable work time is thus highly valuable. Toward this goal, we developed an automated corn seedling phenotyping platform based on a time-of-flight (ToF) camera and an industrial robot arm. A ToF camera is mounted on the end effector of the robot arm. The arm positions the ToF camera at different viewpoints for acquiring 3D point cloud data. A camera-to-arm transformation matrix was calculated using a hand-eye calibration procedure and applied to transfer different viewpoints into an arm-based coordinate frame. Point cloud data filters were developed to remove the noise in the background and in the merged seedling point clouds. A 3D-to-2D projection and an x-axis pixel density distribution method were used to segment the stem and leaves. Finally, separated leaves were fitted with 3D curves for morphological traits characterization. This platform was tested on a sample of 60 corn plants at their early growth stages with two to five leaves. The error ratios of the stem height and leaf length measurements are 13.7% and 13.1%, respectively, demonstrating the feasibility of this robotic system for automated corn seedling phenotyping. PMID:28895892
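As a hedged sketch of the coordinate-frame step (not the authors' code), the snippet below applies a 4x4 camera-to-arm transformation, such as the one obtained from hand-eye calibration, to a ToF point cloud expressed in camera coordinates; the matrix values and shapes are placeholders.

```python
import numpy as np

def transform_point_cloud(points_cam, T_arm_cam):
    """Apply a 4x4 homogeneous transform to an (N, 3) point cloud."""
    homo = np.hstack([points_cam, np.ones((points_cam.shape[0], 1))])
    return (homo @ T_arm_cam.T)[:, :3]

# Placeholder transform; the real matrix comes from the hand-eye calibration.
T_arm_cam = np.eye(4)
T_arm_cam[:3, 3] = [0.10, 0.00, 0.45]  # assumed camera offset on the end effector, in metres

points_cam = np.random.rand(500, 3)    # stand-in for one ToF viewpoint
points_arm = transform_point_cloud(points_cam, T_arm_cam)
```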
a Fast Method for Measuring the Similarity Between 3d Model and 3d Point Cloud
NASA Astrophysics Data System (ADS)
Zhang, Zongliang; Li, Jonathan; Li, Xin; Lin, Yangbin; Zhang, Shanxin; Wang, Cheng
2016-06-01
This paper proposes a fast method for measuring the partial Similarity between 3D Model and 3D point Cloud (SimMC). It is crucial to measure SimMC for many point cloud-related applications such as 3D object retrieval and inverse procedural modelling. In our proposed method, the surface area of the model and the Distance from Model to point Cloud (DistMC) are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted average of the distances between points sampled from the model and the point cloud. Similarly, the Distance from point Cloud to Model (DistCM) is defined as the average of the distances between points in the point cloud and the model. In order to reduce the huge computational burden brought by the calculation of DistCM in some traditional methods, we define SimMC as the ratio of the weighted surface area of the model to DistMC. Compared to those traditional SimMC measuring methods that are only able to measure global similarity, our method is capable of measuring partial similarity by employing a distance-weighted strategy. Moreover, our method is faster than other partial similarity assessment methods. We demonstrate the superiority of our method both on synthetic data and laser scanning data.
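A hedged numpy/scipy sketch of the general idea, not the authors' exact weighting scheme: points sampled on the model carry area weights, DistMC is taken here as their weighted mean nearest-neighbour distance to the cloud, and SimMC as the ratio of the weighted area to DistMC.

```python
import numpy as np
from scipy.spatial import cKDTree

def sim_mc(model_samples, sample_areas, cloud, eps=1e-9):
    """Rough model-to-cloud similarity: weighted surface area divided by the
    weighted mean distance from model samples to their nearest cloud points."""
    d, _ = cKDTree(cloud).query(model_samples, k=1)
    dist_mc = np.average(d, weights=sample_areas)   # weighted distance model -> cloud
    return sample_areas.sum() / (dist_mc + eps)     # eps guards against a perfect match
```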
Multi-band Emission Light Curves of Jupiter: Insights on Brown Dwarfs and Directly Imaged Exoplanets
NASA Astrophysics Data System (ADS)
Zhang, Xi; Ge, Huazhi; Orton, Glenn S.; Fletcher, Leigh N.; Sinclair, James; Fernandes, Joshua; Momary, Thomas W.; Kasaba, Yasumasa; Sato, Takao M.; Fujiyoshi, Takuya
2016-10-01
Many brown dwarfs exhibit significant infrared flux variability (e.g., Artigau et al. 2009, ApJ, 701, 1534; Radigan et al. 2012, ApJ, 750, 105), ranging from several to twenty percent of the brightness. Current hypotheses include temperature variations, cloud holes and patchiness, and cloud height and thickness variations (e.g., Apai et al. 2013, ApJ, 768, 121; Robinson and Marley 2014, ApJ, 785, 158; Zhang and Showman 2014, ApJ, 788, L6). Some brown dwarfs show phase shifts in the light curves among different wavelengths (e.g., Buenzli et al. 2012, ApJ, 760, L31; Yang et al. 2016, arXiv:1605.02708), indicating vertical variations of the cloud distribution. Current observational techniques can barely detect the brightness changes on the surfaces of nearby brown dwarfs (Crossfield et al. 2014, Nature, 505, 654), let alone resolve detailed weather patterns that cause the flux variability. The infrared emission maps of Jupiter might shed light on this problem. Using COMICS at the Subaru Telescope, VISIR at the Very Large Telescope (VLT) and NASA's Infrared Telescope Facility (IRTF), we obtained infrared images of Jupiter over several nights at multiple wavelengths that are sensitive to several pressure levels from the stratosphere to the deep troposphere below the ammonia clouds. The rotational maps and emission light curves are constructed. The individual pixel brightness varies up to the hundred percent level and the variation of the full-disk brightness is around several percent. Both the shape and amplitude of the light curves are significantly distinct at different wavelengths. Variation of the light curves at different epochs and phase shifts among different wavelengths are observed. We will present principal component analysis to identify dominant emission features such as stable vortices, cloud holes and eddies in the belts and zones and strong emissions in the auroral region. A radiative transfer model is used to simulate those features to get a more quantitative understanding. This work provides rich insights into the relationship between observed light curves and weather on brown dwarfs and perhaps on directly imaged exoplanets in the future.
Motion Estimation System Utilizing Point Cloud Registration
NASA Technical Reports Server (NTRS)
Chen, Qi (Inventor)
2016-01-01
A system and method for estimating the motion of a machine are disclosed. The method may include determining a first point cloud and a second point cloud corresponding to an environment in a vicinity of the machine. The method may further include generating a first extended Gaussian image (EGI) for the first point cloud and a second EGI for the second point cloud. The method may further include determining a first EGI segment based on the first EGI and a second EGI segment based on the second EGI. The method may further include determining a first two-dimensional distribution for points in the first EGI segment and a second two-dimensional distribution for points in the second EGI segment. The method may further include estimating the motion of the machine based on the first and second two-dimensional distributions.
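For orientation, an extended Gaussian image is essentially a histogram of surface-normal directions; the sketch below (an illustration, not the patented method) bins unit normals by azimuth and elevation, with the bin counts chosen arbitrarily and the normals assumed to be given rather than estimated.

```python
import numpy as np

def extended_gaussian_image(normals, n_az=36, n_el=18):
    """Histogram of unit normal directions over azimuth/elevation bins."""
    az = np.arctan2(normals[:, 1], normals[:, 0])        # azimuth in [-pi, pi]
    el = np.arcsin(np.clip(normals[:, 2], -1.0, 1.0))    # elevation in [-pi/2, pi/2]
    egi, _, _ = np.histogram2d(
        az, el, bins=[n_az, n_el],
        range=[[-np.pi, np.pi], [-np.pi / 2, np.pi / 2]])
    return egi
```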
Pointo - a Low Cost Solution to Point Cloud Processing
NASA Astrophysics Data System (ADS)
Houshiar, H.; Winkler, S.
2017-11-01
With advances in technology, access to data, especially 3D point cloud data, is becoming more and more an everyday task. 3D point clouds are usually captured with very expensive tools such as 3D laser scanners or very time consuming methods such as photogrammetry. Most of the available software for 3D point cloud processing is designed for experts and specialists in this field and usually consists of very large packages containing a variety of methods and tools. This results in software that is usually very expensive to acquire and also very difficult to use. The difficulty of use is caused by the complicated user interfaces required to accommodate a large list of features. The aim of these complex packages is to provide a powerful tool for a specific group of specialists. However, they are not necessarily required by the majority of upcoming average users of point clouds. In addition to their complexity and high cost, these packages generally rely on expensive, modern hardware and are often compatible with only one specific operating system. Many point cloud customers are not point cloud processing experts and are not willing to bear the high acquisition costs of such software and hardware. In this paper we introduce a solution for low-cost point cloud processing. Our approach is designed to accommodate the needs of the average point cloud user. To reduce the cost and complexity of the software, our approach focuses on one functionality at a time, in contrast with most available tools that aim to solve as many problems as possible at the same time. Our simple and user-oriented design improves the user experience and allows us to optimize our methods for the creation of efficient software. We introduce the Pointo family as a series of connected programs that provide easy-to-use tools with a simple design for different point cloud processing requirements. PointoVIEWER and PointoCAD are introduced as the first components of the Pointo family, providing fast and efficient visualization with the ability to add annotation and documentation to the point clouds.
Study of Huizhou architecture component point cloud in surface reconstruction
NASA Astrophysics Data System (ADS)
Zhang, Runmei; Wang, Guangyin; Ma, Jixiang; Wu, Yulu; Zhang, Guangbin
2017-06-01
Surface reconstruction software packages have many problems, such as complicated operation on point cloud data, too many interaction definitions, and overly stringent requirements on input data. Thus, they have not been widely adopted so far. This paper selects the distinctive chuandou wooden beam framework of Huizhou architecture as the research object and presents a complete implementation covering point cloud data acquisition, preprocessing, and surface reconstruction. Firstly, the acquired point cloud data are preprocessed, including segmentation and filtering. Secondly, the surface normals are estimated directly from the point cloud dataset. Finally, surface reconstruction is studied using the Greedy Projection Triangulation algorithm. Compared with general three-dimensional surface reconstruction software, the results show that the proposed scheme is smoother, more time-efficient and more portable.
NASA Astrophysics Data System (ADS)
Buss, S.; Hertzog, A.; Hostettler, C.; Bui, T. B.; Lüthi, D.; Wernli, H.
2004-08-01
A polar stratospheric ice cloud (PSC type II) was observed by airborne lidar above Greenland on 14 January 2000. It was the only observation of an ice cloud over Greenland during the SOLVE/THESEO 2000 campaign. Mesoscale simulations with the hydrostatic HRM model are presented which, in contrast to global analyses, are capable of producing a vertically propagating gravity wave that induces the low temperatures at the level of the PSC required for the ice formation. The simulated minimum temperature is ~8 K below the driving analyses and ~4.5 K below the frost point, exactly coinciding with the location of the observed ice cloud. Despite the high elevations of the Greenland orography, the simulated gravity wave is not a mountain wave. Analyses of the horizontal wind divergence, of the background wind profiles, of backward gravity wave ray-tracing trajectories, of HRM experiments with reduced Greenland topography and of several diagnostics near the tropopause level provide evidence that the wave is emitted from an intense, rapidly evolving, anticyclonically curved jet stream. The precise physical process responsible for the wave emission could not be identified definitively, but geostrophic adjustment and shear instability are likely candidates.
In order to evaluate the potential frequency of such non-orographic polar stratospheric cloud events, the non-linear balance equation diagnostic is performed for the winter 1999/2000. It indicates that ice-PSCs are only occasionally generated by gravity waves emanating from spontaneous adjustment.
NASA Astrophysics Data System (ADS)
Lague, D.
2014-12-01
High Resolution Topographic (HRT) datasets are predominantly stored and analyzed as 2D raster grids of elevations (i.e., Digital Elevation Models). Raster grid processing is common in GIS software and benefits from a large library of fast algorithms dedicated to geometrical analysis, drainage network computation and topographic change measurement. Yet, all instruments or methods currently generating HRT datasets (e.g., ALS, TLS, SFM, stereo satellite imagery) natively output 3D unstructured point clouds that are (i) non-regularly sampled, (ii) incomplete (e.g., submerged parts of river channels are rarely measured), and (iii) include 3D elements (e.g., vegetation, vertical features such as river banks or cliffs) that cannot be accurately described in a DEM. Interpolating the raw point cloud onto a 2D grid generally results in a loss of position accuracy and spatial resolution, and in more or less controlled interpolation. Here I demonstrate how studying earth surface topography and processes directly on native 3D point cloud datasets offers several advantages over raster based methods: point cloud methods preserve the accuracy of the original data, can better handle the evaluation of uncertainty associated with topographic change measurements and are more suitable to study vegetation characteristics and steep features of the landscape. In this presentation, I will illustrate and compare Point Cloud based and Raster based workflows with various examples involving ALS, TLS and SFM for the analysis of bank erosion processes in bedrock and alluvial rivers, rockfall statistics (including rockfall volume estimates directly from point clouds) and the interaction of vegetation/hydraulics and sedimentation in salt marshes. These workflows use two recently published algorithms for point cloud classification (CANUPO) and point cloud comparison (M3C2), now implemented in the open source software CloudCompare.
Inoue, Tohru; Higuchi, Yuka; Misono, Takeshi
2009-10-01
The melting behavior of polyethyleneglycol dodecyl ethers (C(12)E(6), C(12)E(7), and C(12)E(8)) in a room temperature ionic liquid, 1-butyl-3-methylimidazolium tetrafluoroborate (bmimBF(4)), was investigated by means of differential scanning calorimetry (DSC). The melting temperature as a function of the surfactant concentration, combined with the cmc curve and cloud point curve, provided phase diagrams for the surfactant/bmimBF(4) mixtures in the solvent-rich region. The characteristic feature of the mixtures is the existence of a Krafft temperature, which is usually not observed in aqueous solutions of nonionic surfactants. The heat of fusion as a function of the surfactant concentration provided the interaction energy between the surfactant and bmimBF(4). The interaction energy shows a linear dependence on the length of the polyoxyethylene (POE) chain of the surfactants, which suggests that the solvation takes place around the POE chain.
Compression of 3D Point Clouds Using a Region-Adaptive Hierarchical Transform.
De Queiroz, Ricardo; Chou, Philip A
2016-06-01
In free-viewpoint video, there is a recent trend to represent scene objects as solids rather than using multiple depth maps. Point clouds have been used in computer graphics for a long time, and with the recent possibility of real-time capturing and rendering, point clouds have been favored over meshes in order to save computation. Each point in the cloud is associated with its 3D position and its color. We devise a method to compress the colors in point clouds which is based on a hierarchical transform and arithmetic coding. The transform is a hierarchical sub-band transform that resembles an adaptive variation of a Haar wavelet. The arithmetic encoding of the coefficients assumes Laplace distributions, one per sub-band. The Laplace parameter for each distribution is transmitted to the decoder using a custom method. The geometry of the point cloud is encoded using the well-established octree scanning. Results show that the proposed solution performs comparably to the current state-of-the-art, on many occasions outperforming it, while being much more computationally efficient. We believe this work represents the state-of-the-art in intra-frame compression of point clouds for real-time 3D video.
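The heart of such a region-adaptive hierarchical transform is a weight-adaptive Haar merge of two occupied child nodes; the sketch below shows one such step on a colour vector, under the assumption that the full coder applies it recursively along the octree. This is a simplified reading of the transform for illustration, not the authors' reference implementation.

```python
import numpy as np

def adaptive_haar_merge(c1, w1, c2, w2):
    """One weight-adaptive Haar step on two colour vectors with occupancy weights w1, w2.
    Returns the low-pass (DC) coefficient, the high-pass (AC) coefficient and the merged weight."""
    a = np.sqrt(w1 / (w1 + w2))
    b = np.sqrt(w2 / (w1 + w2))
    dc = a * np.asarray(c1) + b * np.asarray(c2)   # carried to the next level with weight w1 + w2
    ac = -b * np.asarray(c1) + a * np.asarray(c2)  # entropy-coded (Laplace model in the paper)
    return dc, ac, w1 + w2
```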
PROGRA2 experiment: new results for dust clouds and regoliths
NASA Astrophysics Data System (ADS)
Renard, J.-B.; Hadamcik, E.; Worms, J.-C.; Levasseur-Regourd, A.-C.; Daugeron, D.
With the CNES-sponsored PROGRA2 facility, measurements of the linear polarization of light scattered by various types of dust clouds are performed in microgravity during parabolic flights onboard the CNES- and ESA-sponsored A300 Zéro-G aircraft. Clouds of fluffy aggregates are also studied on the ground when lifted by an air-draught. The effect of the physical properties of the particles, such as the grain size and size distribution, the real part of the refractive index, and the structure, is currently being studied. The size distribution of the agglomerates is measured in the field of view from the polarized component images. The large number of phase curves already obtained under the various measurement conditions in order to build a database (about 160 curves) allows us to better connect the physical properties with the observed polarization of the dust in the clouds. The aim is to compare these curves with those obtained in the solar system by remote-sensing and in-situ techniques for interplanetary dust, cometary coma, and solid particles in planetary atmospheres (Renard et al., 2003). Measurements on layers of particles (i.e. on the ground) are then compared with remote measurements on asteroidal regoliths and planetary surfaces. New phase curves will be presented and discussed, i.e. for quartz samples, crystals, fluffy mixtures of alumina and silica, and a high-porosity "regolith" analogue made of micron-sized silica spheres. This work will contribute to the choice of the samples to be studied with the IMPACT/ICAPS instrument onboard the ISS. J.-B. Renard, E. Hadamcik, T. Lemaire, J.-C. Worms and A.-C. Levasseur-Regourd (2003). Polarization imaging of dust cloud particles: improvement and applications of the PROGRA2 instrument, ASR 31, 12, 2511-2518.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, W; Sawant, A; Ruan, D
2016-06-15
Purpose: Surface photogrammetry (e.g. VisionRT, C-Rad) provides a noninvasive way to obtain high-frequency measurements for patient motion monitoring in radiotherapy. This work aims to develop a real-time surface reconstruction method on the acquired point clouds, whose acquisitions are subject to noise and missing measurements. In contrast to existing surface reconstruction methods that are usually computationally expensive, the proposed method reconstructs continuous surfaces with comparable accuracy in real-time. Methods: The key idea in our method is to solve and propagate a sparse linear relationship from the point cloud (measurement) manifold to the surface (reconstruction) manifold, taking advantage of the similarity in local geometric topology in both manifolds. With consistent point cloud acquisition, we propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, building the point correspondences by the iterative closest point (ICP) method. To accommodate changing noise levels and/or presence of inconsistent occlusions, we further propose a modified sparse regression (MSR) model to account for the large and sparse error built by ICP, with a Laplacian prior. We evaluated our method on both clinically acquired point clouds under consistent conditions and simulated point clouds with inconsistent occlusions. The reconstruction accuracy was evaluated w.r.t. root-mean-squared error, by comparing the reconstructed surfaces against those from the variational reconstruction method. Results: On clinical point clouds, both the SR and MSR models achieved sub-millimeter accuracy, with mean reconstruction time reduced from 82.23 seconds to 0.52 seconds and 0.94 seconds, respectively. On simulated point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent performance despite the introduced occlusions. Conclusion: We have developed a real-time and robust surface reconstruction method on point clouds acquired by photogrammetry systems. It serves as an important enabling step for real-time motion tracking in radiotherapy. This work is supported in part by NIH grant R01 CA169102-02.
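A hedged sklearn sketch of the sparse-regression idea under strong assumptions: each column of the dictionary is a training point cloud flattened after ICP has placed its points in correspondence with the target, and a Lasso penalty enforces the sparse linear combination. The penalty value, shapes and solver are illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_reconstruction(target_cloud, training_clouds, alpha=1e-3):
    """Approximate a target (N, 3) point cloud as a sparse linear combination of
    training clouds whose points are already in correspondence (e.g. via ICP)."""
    y = target_cloud.reshape(-1)                                     # (3N,)
    D = np.stack([c.reshape(-1) for c in training_clouds], axis=1)   # (3N, K) dictionary
    model = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    model.fit(D, y)
    weights = model.coef_                                            # sparse combination weights
    return (D @ weights).reshape(target_cloud.shape), weights
```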
SU-E-T-186: Cloud-Based Quality Assurance Application for Linear Accelerator Commissioning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rogers, J
2015-06-15
Purpose: To identify anomalies and safety issues during data collection and modeling for treatment planning systems. Methods: A cloud-based quality assurance system (AQUIRE - Automated QUalIty REassurance) has been developed to allow the uploading and analysis of beam data acquired during the treatment planning system commissioning process. In addition to comparing and aggregating measured data, tools have also been developed to extract dose from the treatment planning system for end-to-end testing. A gamma index analysis is performed on the data to give a dose difference and distance-to-agreement for validation that a beam model is generating plans consistent with the beam data collection. Results: Over 20 linear accelerators have been commissioned using this platform, and a variety of errors and potential safety issues have been caught through the validation process. For example, a gamma index of 2% dose, 2 mm DTA is quite sufficient to see curves not corrected for effective point of measurement. Also, data imported into the database is analyzed against an aggregate of similar linear accelerators to show data points that are outliers. The resulting curves in the database exhibit a very small standard deviation and imply that a preconfigured beam model based on aggregated linear accelerators will be sufficient in most cases. Conclusion: With the use of this new platform for beam data commissioning, errors in beam data collection and treatment planning system modeling are greatly reduced. With the reduction in errors during acquisition, the resulting beam models are quite similar, suggesting that a common beam model may be possible in the future. Development is ongoing to create routine quality assurance tools to compare back to the beam data acquired during commissioning. I am a medical physicist for Alzyen Medical Physics, and perform commissioning services.
FPFH-based graph matching for 3D point cloud registration
NASA Astrophysics Data System (ADS)
Zhao, Jiapeng; Li, Chen; Tian, Lihua; Zhu, Jihua
2018-04-01
Correspondence detection is a vital step in point cloud registration and it helps in obtaining a reliable initial alignment. In this paper, we put forward an advanced point feature-based graph matching algorithm to solve the initial alignment problem of rigid 3D point cloud registration with partial overlap. Specifically, Fast Point Feature Histograms are first used to determine the initial possible correspondences. Next, a new objective function is provided to make the graph matching more suitable for partially overlapping point clouds. The objective function is optimized by the simulated annealing algorithm for a final group of correct correspondences. Finally, we present a novel set partitioning method which can transform the NP-hard optimization problem into an O(n³)-solvable one. Experiments on the Stanford and UWA public data sets indicate that our method obtains better results in terms of both accuracy and time cost compared with other point cloud registration methods.
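The FPFH step can be reproduced with an off-the-shelf library; the sketch below uses Open3D (assuming a recent version with the o3d.pipelines.registration namespace) to compute FPFH descriptors for two clouds and to build initial candidate correspondences by nearest-neighbour search in feature space. Radii and neighbour counts are assumptions; the paper's graph matching and simulated annealing stages are not shown.

```python
import numpy as np
import open3d as o3d

def fpfh_correspondences(pcd_src, pcd_tgt, radius_normal=0.05, radius_feature=0.125):
    """Candidate correspondences (index pairs) from nearest neighbours in FPFH space."""
    for pcd in (pcd_src, pcd_tgt):
        pcd.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=radius_normal, max_nn=30))
    f_src = o3d.pipelines.registration.compute_fpfh_feature(
        pcd_src, o3d.geometry.KDTreeSearchParamHybrid(radius=radius_feature, max_nn=100))
    f_tgt = o3d.pipelines.registration.compute_fpfh_feature(
        pcd_tgt, o3d.geometry.KDTreeSearchParamHybrid(radius=radius_feature, max_nn=100))
    tree = o3d.geometry.KDTreeFlann(f_tgt)
    src_data = np.asarray(f_src.data)          # 33 x N matrix of descriptors
    matches = []
    for i in range(src_data.shape[1]):
        _, idx, _ = tree.search_knn_vector_xd(src_data[:, i], 1)
        matches.append((i, idx[0]))
    return matches
```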
Protecting location privacy for outsourced spatial data in cloud storage.
Tian, Feng; Gui, Xiaolin; An, Jian; Yang, Pan; Zhao, Jianqiang; Zhang, Xuejun
2014-01-01
As cloud computing services and location-aware devices are fully developed, a large amount of spatial data needs to be outsourced to the cloud storage provider, so research on privacy protection for outsourced spatial data is getting increasing attention from academia and industry. As a kind of spatial transformation method, the Hilbert curve is widely used to protect the location privacy of spatial data. However, sufficient security analysis of the standard Hilbert curve (SHC) has seldom been performed. In this paper, we propose an index modification method for SHC (SHC*) and a density-based space filling curve (DSC) to improve the security of SHC; they can partially violate the distance-preserving property of SHC, so as to achieve better security. We formally define the indistinguishability and attack model for measuring the privacy disclosure risk of spatial transformation methods. The evaluation results indicate that SHC* and DSC are more secure than SHC, and DSC achieves the best index generation performance.
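For context on the underlying transformation (not on SHC* or DSC specifically), the snippet below is the standard iterative conversion of a 2D grid cell to its index along the Hilbert curve; n is the grid size and is assumed to be a power of two.

```python
def hilbert_index(n, x, y):
    """Distance of cell (x, y) along the Hilbert curve of an n-by-n grid (n a power of two)."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # rotate/flip the quadrant so that the curve orientation stays consistent
        if ry == 0:
            if rx == 1:
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s //= 2
    return d

# Example: the index of cell (5, 2) in an 8 x 8 grid.
print(hilbert_index(8, 5, 2))
```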
Smart Point Cloud: Definition and Remaining Challenges
NASA Astrophysics Data System (ADS)
Poux, F.; Hallot, P.; Neuville, R.; Billen, R.
2016-10-01
Dealing with coloured point clouds acquired from terrestrial laser scanners, this paper identifies remaining challenges for a new data structure: the smart point cloud. This concept arises from the statement that massive and discretized spatial information from active remote sensing technology is often underused due to data mining limitations. The generalisation of point cloud data, associated with the heterogeneity and temporality of such datasets, is the main issue regarding structure, segmentation, classification, and interaction for an immediate understanding. We propose to use both point cloud properties and human knowledge through machine learning to rapidly extract pertinent information, using user-centered information (smart data) rather than raw data. A review of feature detection, machine learning frameworks and database systems indexed both for mining queries and data visualisation is presented. Based on existing approaches, we propose a new flexible three-block framework around device expertise, analytic expertise and domain-based reflection. This contribution serves as the first step towards the realisation of a comprehensive smart point cloud data structure.
Motion-Compensated Compression of Dynamic Voxelized Point Clouds.
De Queiroz, Ricardo L; Chou, Philip A
2017-05-24
Dynamic point clouds are a potential new frontier in visual communication systems. A few articles have addressed the compression of point clouds, but very few references exist on exploring temporal redundancies. This paper presents a novel motion-compensated approach to encoding dynamic voxelized point clouds at low bit rates. A simple coder breaks the voxelized point cloud at each frame into blocks of voxels. Each block is either encoded in intra-frame mode or is replaced by a motion-compensated version of a block in the previous frame. The decision is optimized in a rate-distortion sense. In this way, both the geometry and the color are encoded with distortion, allowing for reduced bit-rates. In-loop filtering is employed to minimize compression artifacts caused by distortion in the geometry information. Simulations reveal that this simple motion compensated coder can efficiently extend the compression range of dynamic voxelized point clouds to rates below what intra-frame coding alone can accommodate, trading rate for geometry accuracy.
Solubilization of phenanthrene above cloud point of Brij 30: a new application in biodegradation.
Pantsyrnaya, T; Delaunay, S; Goergen, J L; Guseva, E; Boudrant, J
2013-06-01
In the present study, a new application of the solubilization of phenanthrene above the cloud point of Brij 30 to biodegradation was developed. It was shown that a temporary solubilization of phenanthrene above the cloud point of Brij 30 (5 wt%) made it possible to obtain a stable increase in the solubility of phenanthrene even when the temperature was decreased to the culture temperature of the microorganism used, Pseudomonas putida (28 °C). A higher initial concentration of soluble phenanthrene was obtained after the cloud point treatment: 200 versus 120 μM without treatment. All soluble phenanthrene was metabolized, and a higher final concentration of its major metabolite, 1-hydroxy-2-naphthoic acid (160 versus 85 μM), was measured in the culture medium in the case of a preliminary cloud point treatment. Therefore, temporary solubilization at the cloud point might have a promising application in the enhancement of the biodegradation of polycyclic aromatic hydrocarbons.
A portable low-cost 3D point cloud acquiring method based on structure light
NASA Astrophysics Data System (ADS)
Gui, Li; Zheng, Shunyi; Huang, Xia; Zhao, Like; Ma, Hao; Ge, Chao; Tang, Qiuxia
2018-03-01
A fast and low-cost method of acquiring 3D point cloud data is proposed in this paper, which can address the lack of texture information and the low efficiency of acquiring point cloud data with only one pair of cheap cameras and a projector. Firstly, we put forward a scene-adaptive design method for a random encoding pattern; that is, a coding pattern is projected onto the target surface in order to form texture information, which is favorable for image matching. Subsequently, we design an efficient dense matching algorithm that fits the projected texture. After optimization of the global algorithm and multi-kernel parallel development with the fusion of hardware and software, a fast acquisition system for point cloud data is accomplished. Through the evaluation of point cloud accuracy, the results show that the point cloud acquired by the method proposed in this paper has higher precision. What's more, the scanning speed meets the demands of dynamic scenes and has better practical application value.
Joint classification and contour extraction of large 3D point clouds
NASA Astrophysics Data System (ADS)
Hackel, Timo; Wegner, Jan D.; Schindler, Konrad
2017-08-01
We present an effective and efficient method for point-wise semantic classification and extraction of object contours of large-scale 3D point clouds. What makes point cloud interpretation challenging is the sheer size of several millions of points per scan and the non-grid, sparse, and uneven distribution of points. Standard image processing tools like texture filters, for example, cannot handle such data efficiently, which calls for dedicated point cloud labeling methods. It turns out that one of the major drivers for efficient computation and handling of strong variations in point density is a careful formulation of per-point neighborhoods at multiple scales. This allows us both to define an expressive feature set and to extract topologically meaningful object contours. Semantic classification and contour extraction are interlaced problems. Point-wise semantic classification enables extracting a meaningful candidate set of contour points while contours help generating a rich feature representation that benefits point-wise classification. These methods are tailored to have fast run time and small memory footprint for processing large-scale, unstructured, and inhomogeneous point clouds, while still achieving high classification accuracy. We evaluate our methods on the semantic3d.net benchmark for terrestrial laser scans with more than 10^9 points.
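As a hedged sketch of the kind of per-point neighbourhood features such classifiers rely on, the snippet below derives linearity, planarity and scattering from the local covariance eigenvalues at a single scale; the radius and feature definitions follow common usage and are assumptions rather than the authors' exact multi-scale feature set (which would repeat this at several radii).

```python
import numpy as np
from scipy.spatial import cKDTree

def eigen_features(points, radius=0.5):
    """Per-point linearity, planarity and scattering from local covariance eigenvalues."""
    tree = cKDTree(points)
    feats = np.zeros((len(points), 3))
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, r=radius)
        if len(idx) < 3:
            continue                                   # not enough neighbours at this scale
        ev = np.sort(np.linalg.eigvalsh(np.cov(points[idx].T)))[::-1]
        l1, l2, l3 = np.maximum(ev, 1e-12)             # l1 >= l2 >= l3 > 0
        feats[i] = [(l1 - l2) / l1,                    # linearity
                    (l2 - l3) / l1,                    # planarity
                    l3 / l1]                           # scattering
    return feats
```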
Point clouds segmentation as base for as-built BIM creation
NASA Astrophysics Data System (ADS)
Macher, H.; Landes, T.; Grussenmeyer, P.
2015-08-01
In this paper, a three-step segmentation approach is proposed in order to create 3D models from point clouds acquired by TLS inside buildings. The three scales of segmentation are floors, rooms and the planes composing the rooms. First, floor segmentation is performed based on an analysis of the point distribution along the Z axis. Then, for each floor, room segmentation is achieved considering a slice of the point cloud at ceiling level. Finally, planes are segmented for each room, and planes corresponding to ceilings and floors are identified. Results of each step are analysed and potential improvements are proposed. Based on the segmented point clouds, the creation of as-built BIM is considered in a future work section. Not only is the classification of planes into several categories proposed, but the potential use of point clouds acquired outside buildings is also considered.
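The floor-segmentation step can be pictured as peak detection in the distribution of point heights; the sketch below bins the Z coordinates and reports local maxima above a count threshold as candidate floor/ceiling levels. Bin size and threshold are assumptions, and the paper's actual criteria may differ.

```python
import numpy as np

def candidate_floor_levels(points, bin_size=0.1, min_fraction=0.02):
    """Peaks of the Z histogram as candidate horizontal structures (floors, ceilings)."""
    z = points[:, 2]
    counts, edges = np.histogram(z, bins=np.arange(z.min(), z.max() + bin_size, bin_size))
    threshold = min_fraction * len(z)
    centres = 0.5 * (edges[:-1] + edges[1:])
    peaks = []
    for i, c in enumerate(counts):
        left = counts[i - 1] if i > 0 else 0
        right = counts[i + 1] if i < len(counts) - 1 else 0
        if c > threshold and c >= left and c >= right:
            peaks.append(centres[i])
    return peaks
```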
High-Precision Registration of Point Clouds Based on Sphere Feature Constraints
Huang, Junhui; Wang, Zhao; Gao, Jianmin; Huang, Youping; Towers, David Peter
2016-01-01
Point cloud registration is a key process in multi-view 3D measurements. Its precision affects the measurement precision directly. However, in the case of point clouds with non-overlapping areas or curvature-invariant surfaces, it is difficult to achieve a high precision. A high-precision registration method based on sphere feature constraints is presented in this paper to overcome this difficulty. Some known sphere features with constraints are used to construct virtual overlapping areas. The virtual overlapping areas provide more accurate corresponding point pairs and reduce the influence of noise. Then the transformation parameters between the registered point clouds are solved by an optimization method with a weight function. In this way, the impact of large noise in the point clouds can be reduced and a high-precision registration is achieved. Simulation and experiments validate the proposed method. PMID:28042846
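Once corresponding sphere centres have been fitted in both scans, the rigid transformation can be recovered from them; the sketch below uses the familiar weighted SVD (Kabsch) solution as a stand-in for the paper's weighted optimization, with per-correspondence weights as an optional argument.

```python
import numpy as np

def rigid_transform_from_correspondences(src, dst, w=None):
    """Weighted least-squares rotation R and translation t such that dst[i] ≈ R @ src[i] + t."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    w = np.ones(len(src)) if w is None else np.asarray(w, float)
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(axis=0)
    mu_d = (w[:, None] * dst).sum(axis=0)
    H = (src - mu_s).T @ (w[:, None] * (dst - mu_d))
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```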
Pan, Tao; Liu, Chunyan; Zeng, Xinying; Xin, Qiao; Xu, Meiying; Deng, Yangwu; Dong, Wei
2017-06-01
A recent work has shown that hydrophobic organic compounds solubilized in the micelle phase of some nonionic surfactants present substrate toxicity to microorganisms with increasing bioavailability. However, in cloud point systems, biotoxicity is prevented, because the compounds are solubilized into a coacervate phase, thereby leaving a fraction of compounds with cells in a dilute phase. This study extends the understanding of the relationship between substrate toxicity and bioavailability of hydrophobic organic compounds solubilized in nonionic surfactant micelle phase and cloud point system. Biotoxicity experiments were conducted with naphthalene and phenanthrene in the presence of mixed nonionic surfactants Brij30 and TMN-3, which formed a micelle phase or cloud point system at different concentrations. Saccharomyces cerevisiae, unable to degrade these compounds, was used for the biotoxicity experiments. Glucose in the cloud point system was consumed faster than in the nonionic surfactant micelle phase, indicating that the solubilized compounds had increased toxicity to cells in the nonionic surfactant micelle phase. The results were verified by subsequent biodegradation experiments. The compounds were degraded faster by PAH-degrading bacterium in the cloud point system than in the micelle phase. All these results showed that biotoxicity of the hydrophobic organic compounds increases with bioavailability in the surfactant micelle phase but remains at a low level in the cloud point system. These results provide a guideline for the application of cloud point systems as novel media for microbial transformation or biodegradation.
NASA Astrophysics Data System (ADS)
Zahir, N.; Ali, A.
2015-12-01
Lake Urmiah has undergone a drastic shrinkage in size over the past few decades. The initial intention of this paper is to present an approach for determining the so-called "salient times" during which the trend of the shrinkage process is accelerated or decelerated. To find these salient times, a quasi-continuous curve was optimally fitted to the Topex altimetry data within the period 1998 to 2006. To find the salient points within this period of time, the inflection points of the fitted curve are computed using a second-derivative approach. The water volume was also computed using 16 cloud-free Landsat images of the Lake within the period 1998 to 2006. In the first stage of the water volume calculation, the pixels of the Lake were segmented using the Automated Water Extraction Index (AWEI) and the shorelines of the Lake were extracted by a boundary-detecting operator using the generated binary image of the Lake surface. The water volume fluctuation rate was then computed under the assumption that two successive Lake surfaces and their corresponding water level difference approximately form a truncated pyramid. The analysis of the water level fluctuation rates was further extended by a sinusoidal curve fitted to the Topex altimetry data. This curve was intended to model the seasonal fluctuations of the water level. In the final stage of this article, the correlation between the fluctuation rates and the precipitation and temperature variations was also numerically determined. This paper reports in some detail the stages mentioned above.
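The volume increment between two successive lake states described above corresponds to the frustum (truncated pyramid) formula; a minimal sketch, where a1 and a2 are the two water-surface areas and h the water-level difference (consistent units are assumed).

```python
def truncated_pyramid_volume(a1, a2, h):
    """Volume between two lake surfaces of areas a1, a2 separated by a water-level difference h."""
    return h / 3.0 * (a1 + a2 + (a1 * a2) ** 0.5)

# Example with arbitrary numbers: 5000 and 4800 km^2 surfaces, 0.0004 km (0.4 m) level drop.
print(truncated_pyramid_volume(5000.0, 4800.0, 0.0004))  # volume in km^3
```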
Filtering Photogrammetric Point Clouds Using Standard LIDAR Filters Towards DTM Generation
NASA Astrophysics Data System (ADS)
Zhang, Z.; Gerke, M.; Vosselman, G.; Yang, M. Y.
2018-05-01
Digital Terrain Models (DTMs) can be generated from point clouds acquired by laser scanning or photogrammetric dense matching. During the last two decades, much effort has been devoted to developing robust filtering algorithms for airborne laser scanning (ALS) data. With the point cloud quality from dense image matching (DIM) getting better and better, the research question that arises is whether those standard Lidar filters can be used to filter photogrammetric point clouds as well. Experiments are implemented to filter two dense matching point clouds with different noise levels. Results show that the standard Lidar filter is robust to random noise. However, artefacts and blunders in the DIM points often appear due to low contrast or poor texture in the images. Filtering will be erroneous in these locations. Filtering the DIM points pre-processed by a ranking filter will bring a higher Type II error (i.e. non-ground points actually labelled as ground points) but a much lower Type I error (i.e. bare ground points labelled as non-ground points). Finally, the potential DTM accuracy that can be achieved by DIM points is evaluated. Two DIM point clouds derived by Pix4Dmapper and SURE are compared. On grassland, dense matching generates points higher than the true terrain surface, which will result in incorrectly elevated DTMs. The application of the ranking filter leads to a reduced bias in the DTM height, but a slightly increased noise level.
Vicente, Filipa A; Cardoso, Inês S; Sintra, Tânia E; Lemus, Jesus; Marques, Eduardo F; Ventura, Sónia P M; Coutinho, João A P
2017-09-21
Aqueous micellar two-phase systems (AMTPS) hold a large potential for the cloud point extraction of biomolecules but are as yet poorly studied and characterized, with few phase diagrams reported for these systems, hence limiting their use in extraction processes. This work reports a systematic investigation of the effect of different surface-active ionic liquids (SAILs), covering a wide range of molecular properties, upon the clouding behavior of three nonionic Tergitol surfactants. Two different effects of the SAILs on the cloud points and mixed micelle size have been observed: ILs with a more hydrophilic character and a lower critical packing parameter (CPP < 1/2) lead to the formation of smaller micelles and concomitantly increase the cloud points; in contrast, ILs with a more hydrophobic character and a higher CPP (CPP ≥ 1) induce significant micellar growth and a decrease in the cloud points. The latter effect is particularly interesting and unusual, since it was previously accepted that cloud point reduction is induced only by inorganic salts. The effects of nonionic surfactant concentration, SAIL concentration, pH, and micelle ζ potential are also studied and rationalized.
Point Cloud Management Through the Realization of the Intelligent Cloud Viewer Software
NASA Astrophysics Data System (ADS)
Costantino, D.; Angelini, M. G.; Settembrini, F.
2017-05-01
The paper presents a software package dedicated to the elaboration of point clouds, called Intelligent Cloud Viewer (ICV), made in-house by AESEI software (a spin-off of Politecnico di Bari), allowing the viewing of point clouds of several tens of millions of points, even on systems without very high performance. The elaborations are carried out on the whole point cloud, while only part of it is displayed, in order to speed up rendering. It is designed for 64-bit Windows, is fully written in C++, and integrates different specialized modules for computer graphics (Open Inventor by SGI, Silicon Graphics Inc), maths (BLAS, EIGEN), computational geometry (CGAL, Computational Geometry Algorithms Library), registration and advanced algorithms for point clouds (PCL, Point Cloud Library), advanced data structures (BOOST, Basic Object Oriented Supporting Tools), etc. ICV incorporates a number of features such as, for example, cropping, transformation and georeferencing, matching, registration, decimation, sections, distance calculation between clouds, etc. It has been tested on photographic and TLS (Terrestrial Laser Scanner) data, obtaining satisfactory results. The potentialities of the software have been tested by carrying out the photogrammetric survey of Castel del Monte, for which a previous ground-based laser scanner survey by the same authors was already available. For the aerophotogrammetric survey, a flight height of approximately 1000 ft AGL (Above Ground Level) was adopted and, overall, over 800 photos were acquired in just over 15 minutes, with a coverage of not less than 80% and a planned speed of about 90 knots.
NASA Astrophysics Data System (ADS)
Nayak, M.; Beck, J.; Udrea, B.
This paper focuses on the aerospace application of a single beam laser rangefinder (LRF) for 3D imaging, shape detection, and reconstruction in the context of a space-based space situational awareness (SSA) mission scenario. The primary limitation to 3D imaging from LRF point clouds is the one-dimensional nature of the single beam measurements. A method that combines relative orbital motion and scanning attitude motion to generate point clouds has been developed, and the design and characterization of multiple relative motion and attitude maneuver profiles are presented. The target resident space object (RSO) has the shape of a generic telecommunications satellite. The shape and attitude of the RSO are unknown to the chaser satellite; however, it is assumed that the RSO is un-cooperative and has fixed inertial pointing. All sensors in the metrology chain are assumed ideal. A previous study by the authors used pure Keplerian motion to perform a similar 3D imaging mission at an asteroid. A new baseline for proximity operations maneuvers for LRF scanning, based on a waypoint adaptation of the Hill-Clohessy-Wiltshire (HCW) equations, is examined. Propellant expenditure for each waypoint profile is discussed and combinations of relative motion and attitude maneuvers that minimize the propellant used to achieve a minimum required point cloud density are studied. Both LRF strike-point coverage and point cloud density are maximized; the capability for 3D shape registration and reconstruction from point clouds generated with a single beam LRF without catalog comparison is proven. Next, a method of using edge detection algorithms to process a point cloud into a 3D modeled image containing reconstructed shapes is presented. Weighted accuracy of edge reconstruction with respect to the true model is used to calculate a qualitative "metric" that evaluates the effectiveness of coverage. Both the edge recognition algorithms and the metric are independent of point cloud density; therefore, they are utilized to compare the quality of point clouds generated by various attitude and waypoint command profiles. The RSO model incorporates diverse irregular protruding shapes, such as open sensor covers, instrument pods and solar arrays, to test the limits of the algorithms. This analysis is used to mathematically prove that point clouds generated by a single-beam LRF can achieve sufficient edge recognition accuracy for SSA applications, with meaningful shape information extractable even from sparse point clouds. For all command profiles, reconstructions of RSO shapes from the point clouds generated with the proposed method are compared to the truth model and conclusions are drawn regarding their fidelity.
NASA Astrophysics Data System (ADS)
Gézero, L.; Antunes, C.
2017-05-01
Digital terrain models (DTM) play an essential role in all types of road maintenance, water supply and sanitation projects. The demand for such information is greater in developing countries, where the lack of infrastructure is more severe. In recent years, the use of Mobile LiDAR Systems (MLS) has proved to be a very efficient technique for the acquisition of precise and dense point clouds. These point clouds can be a solution for obtaining the data needed to produce DTM in remote areas, due mainly to the safety, precision and speed of acquisition and the detail of the information gathered. However, filtering the point clouds with algorithms that separate "terrain points" from "non-terrain points" quickly and consistently remains a challenge that has caught the interest of researchers. This work presents a method to create a DTM from point clouds collected by MLS. The method is based on two steps. The first step reduces the point cloud to a set of points that represent the terrain's shape, with the distance between points inversely proportional to the terrain variation. The second step is based on the Delaunay triangulation of the points resulting from the first step. The achieved results encourage a wider use of this technology as a solution for large-scale DTM production in remote areas.
Evaluation of terrestrial photogrammetric point clouds derived from thermal imagery
NASA Astrophysics Data System (ADS)
Metcalf, Jeremy P.; Olsen, Richard C.
2016-05-01
Computer vision and photogrammetric techniques have been widely applied to digital imagery producing high density 3D point clouds. Using thermal imagery as input, the same techniques can be applied to infrared data to produce point clouds in 3D space, providing surface temperature information. The work presented here is an evaluation of the accuracy of 3D reconstruction of point clouds produced using thermal imagery. An urban scene was imaged over an area at the Naval Postgraduate School, Monterey, CA, viewing from above as with an airborne system. Terrestrial thermal and RGB imagery were collected from a rooftop overlooking the site using a FLIR SC8200 MWIR camera and a Canon T1i DSLR. In order to spatially align each dataset, ground control points were placed throughout the study area using Trimble R10 GNSS receivers operating in RTK mode. Each image dataset is processed to produce a dense point cloud for 3D evaluation.
Spline approximation, Part 1: Basic methodology
NASA Astrophysics Data System (ADS)
Ezhov, Nikolaj; Neitzel, Frank; Petrovic, Svetozar
2018-04-01
In engineering geodesy point clouds derived from terrestrial laser scanning or from photogrammetric approaches are almost never used as final results. For further processing and analysis a curve or surface approximation with a continuous mathematical function is required. In this paper the approximation of 2D curves by means of splines is treated. Splines offer quite flexible and elegant solutions for interpolation or approximation of "irregularly" distributed data. Depending on the problem they can be expressed as a function or as a set of equations that depend on some parameter. Many different types of splines can be used for spline approximation and all of them have certain advantages and disadvantages depending on the approximation problem. In a series of three articles spline approximation is presented from a geodetic point of view. In this paper (Part 1) the basic methodology of spline approximation is demonstrated using splines constructed from ordinary polynomials and splines constructed from truncated polynomials. In the forthcoming Part 2 the notion of B-spline will be explained in a unique way, namely by using the concept of convex combinations. The numerical stability of all spline approximation approaches as well as the utilization of splines for deformation detection will be investigated on numerical examples in Part 3.
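As a concrete illustration of the truncated-polynomial splines treated in Part 1, the sketch below fits a 2D curve by least squares in a truncated power basis. The degree, knot positions and synthetic data are assumptions chosen for the example, not values from the article.

```python
import numpy as np

def truncated_power_design(x, knots, degree=3):
    """Design matrix for a spline in the truncated power basis:
    1, x, ..., x^p, (x - k1)_+^p, ..., (x - km)_+^p."""
    cols = [x ** d for d in range(degree + 1)]
    cols += [np.clip(x - k, 0.0, None) ** degree for k in knots]
    return np.column_stack(cols)

# Illustrative noisy data along a 2D curve and equally spaced interior knots.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)
y = np.sin(x) + 0.05 * rng.standard_normal(x.size)
knots = np.linspace(1.0, 9.0, 8)

A = truncated_power_design(x, knots)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares spline coefficients
y_hat = A @ coef
print("RMS residual:", np.sqrt(np.mean((y - y_hat) ** 2)))
```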
Traveltime and dispersion in the Potomac River, Cumberland, Maryland, to Washington, D.C.
Taylor, K.R.; James, R.W.; Helinsky, B.M.
1984-01-01
Data from two traveltime and dispersion studies, using rhodamine dye, are used to develop a generalized procedure for predicting traveltime and downstream concentrations resulting from spillage of water-soluble substances at any point along the Potomac River from Cumberland, Maryland, to Washington, D.C. The procedure will allow the approximate solution to almost any spillage problem concerning traveltime and concentration during periods of relatively steady flow between 50- and 95-percent flow duration. A new procedure for calculating unit peak concentration is derived. The new procedure, based on the similarity in shape of a time-concentration curve and a scalene triangle, allows unit peak concentration to be expressed in terms of the length of the dye cloud. This approach facilitates the linking of peak-concentration attenuation curves for long reaches of rivers which are divided into subreaches for study. An example problem is solved for a hypothetical spill of 20,000 pounds of contaminant at Magnolia, West Virginia. The predicted traveltimes of the leading edge, peak concentration, and trailing edge to Point of Rocks, Maryland (110 miles downstream), are 295, 375, and 540 hours, respectively, for a flow duration of 80 percent. The predicted maximum concentration is 340 micrograms/L. (USGS)
Visual Data Analysis for Satellites
NASA Technical Reports Server (NTRS)
Lau, Yee; Bhate, Sachin; Fitzpatrick, Patrick
2008-01-01
The Visual Data Analysis Package is a collection of programs and scripts that facilitate visual analysis of data available from NASA and NOAA satellites, as well as dropsonde, buoy, and conventional in-situ observations. The package features utilities for data extraction, data quality control, statistical analysis, and data visualization. The Hierarchical Data Format (HDF) satellite data extraction routines from NASA's Jet Propulsion Laboratory were customized for specific spatial coverage and file input/output. Statistical analysis includes the calculation of the relative error, the absolute error, and the root mean square error. Other capabilities include curve fitting through the data points to fill in missing data points between satellite passes or where clouds obscure satellite data. For data visualization, the software provides customizable Generic Mapping Tool (GMT) scripts to generate difference maps, scatter plots, line plots, vector plots, histograms, timeseries, and color fill images.
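A minimal sketch of the error statistics mentioned above (mean absolute error, mean relative error and root mean square error) for matched satellite and in-situ series; the function and array names are placeholders, and the actual package computes these within its own extraction and GMT plotting workflow.

```python
import numpy as np

def error_stats(satellite, in_situ):
    """Mean absolute error, mean relative error and RMSE between two matched series."""
    satellite, in_situ = np.asarray(satellite, float), np.asarray(in_situ, float)
    abs_err = np.abs(satellite - in_situ)
    rel_err = abs_err / np.abs(in_situ)          # assumes the in-situ values are non-zero
    rmse = np.sqrt(np.mean((satellite - in_situ) ** 2))
    return abs_err.mean(), rel_err.mean(), rmse

print(error_stats([2.1, 2.9, 4.2], [2.0, 3.0, 4.0]))
```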
NASA Astrophysics Data System (ADS)
Bolkas, Dimitrios; Martinez, Aaron
2018-01-01
Point-cloud coordinate information derived from terrestrial Light Detection And Ranging (LiDAR) is important for several applications in surveying and civil engineering. Plane fitting and segmentation of target-surfaces is an important step in several applications such as in the monitoring of structures. Reliable parametric modeling and segmentation relies on the underlying quality of the point-cloud. Therefore, understanding how point-cloud errors affect fitting of planes and segmentation is important. Point-cloud intensity, which accompanies the point-cloud data, often goes hand-in-hand with point-cloud noise. This study uses industrial particle boards painted with eight different colors (black, white, grey, red, green, blue, brown, and yellow) and two different sheens (flat and semi-gloss) to explore how noise and plane residuals vary with scanning geometry (i.e., distance and incidence angle) and target-color. Results show that darker colors, such as black and brown, can produce point clouds that are several times noisier than bright targets, such as white. In addition, a semi-gloss sheen reduces noise on dark targets by a factor of about 2-3. The study of plane residuals with scanning geometry reveals that, in many of the cases tested, residuals decrease with increasing incidence angles, which can assist in understanding the distribution of plane residuals in a dataset. Finally, a scheme is developed to derive survey guidelines based on the data collected in this experiment. Three examples demonstrate that users should consider instrument specification, required precision of plane residuals, required point-spacing, target-color, and target-sheen, when selecting scanning locations. Outcomes of this study can help users select appropriate instrumentation and improve the planning of terrestrial LiDAR data-acquisition.
GRAND DITCH VIEW, FROM FARVIEW CURVE OVERLOOK, VIEWING WEST. DITCH ...
GRAND DITCH VIEW, FROM FARVIEW CURVE OVERLOOK, VIEWING WEST. DITCH IS INDICATED BY HORIZONTAL LINE NEAR TOP OF CLOUD COVERED PEAKS - Grand Ditch, Baker Creek to LaPoudre Pass Creek, Grand Lake, Grand County, CO
NASA Astrophysics Data System (ADS)
Poux, F.; Neuville, R.; Billen, R.
2017-08-01
Reasoning from information extracted by point cloud data mining allows contextual adaptation and fast decision making. However, to achieve this perceptive level, a point cloud must be semantically rich, retaining relevant information for the end user. This paper presents an automatic knowledge-based method for pre-processing multi-sensory data and classifying a hybrid point cloud from both terrestrial laser scanning and dense image matching. Using 18 features, including sensor bias data, each tessera in the high-density point cloud of the captured 3D complex mosaics of Germigny-des-prés (France) is segmented via a colour-based multi-scale abstraction that extracts connectivity. A 2D surface and outline polygon of each tessera is generated by RANSAC plane extraction and convex hull fitting. Knowledge is then used to classify every tessera based on its size, surface, shape, material properties and its neighbours' classes. The detection and semantic enrichment method shows promising results of 94% correct semantization, a first step toward the creation of an archaeological smart point cloud.
Temporally consistent segmentation of point clouds
NASA Astrophysics Data System (ADS)
Owens, Jason L.; Osteen, Philip R.; Daniilidis, Kostas
2014-06-01
We consider the problem of generating temporally consistent point cloud segmentations from streaming RGB-D data, where every incoming frame extends existing labels to new points or contributes new labels while maintaining the labels for pre-existing segments. Our approach generates an over-segmentation based on voxel cloud connectivity, where a modified k-means algorithm selects supervoxel seeds and associates similar neighboring voxels to form segments. Given the data stream from a potentially mobile sensor, we solve for the camera transformation between consecutive frames using a joint optimization over point correspondences and image appearance. The aligned point cloud may then be integrated into a consistent model coordinate frame. Previously labeled points are used to mask incoming points from the new frame, while new and previous boundary points extend the existing segmentation. We evaluate the algorithm on newly-generated RGB-D datasets.
Traffic sign detection in MLS acquired point clouds for geometric and image-based semantic inventory
NASA Astrophysics Data System (ADS)
Soilán, Mario; Riveiro, Belén; Martínez-Sánchez, Joaquín; Arias, Pedro
2016-04-01
Nowadays, mobile laser scanning has become a valid technology for infrastructure inspection. This technology permits collecting accurate 3D point clouds of urban and road environments, and the geometric and semantic analysis of these data has become an active research topic in recent years. This paper focuses on the detection of vertical traffic signs in 3D point clouds acquired by a LYNX Mobile Mapper system, comprised of laser scanning and RGB cameras. Each traffic sign is automatically detected in the LiDAR point cloud, and its main geometric parameters can be automatically extracted, therefore aiding the inventory process. Furthermore, the 3D positions of the traffic signs are reprojected onto the 2D images, which are spatially and temporally synced with the point cloud. Image analysis allows for recognizing the traffic sign semantics using machine learning approaches. The presented method was tested in road and urban scenarios in Galicia (Spain). The recall results for traffic sign detection are close to 98%, and existing false positives can be easily filtered after point cloud projection. Finally, the lack of a large, publicly available Spanish traffic sign database is pointed out.
a Gross Error Elimination Method for Point Cloud Data Based on Kd-Tree
NASA Astrophysics Data System (ADS)
Kang, Q.; Huang, G.; Yang, S.
2018-04-01
Point cloud data have become one of the most widely used data sources in the field of remote sensing. Key steps in point cloud pre-processing focus on gross error elimination and quality control. Owing to the volume of point cloud data, existing gross error elimination methods consume large amounts of memory and time. This paper employs a new method that builds a Kd-tree for the data, searches with a k-nearest neighbor algorithm, and applies an appropriate threshold to judge whether a target point is an outlier. Experimental results show that the proposed algorithm helps to delete gross errors in point cloud data while decreasing memory consumption and improving efficiency.
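A minimal sketch of this kind of k-nearest-neighbour gross-error test using a k-d tree; the threshold rule (mean neighbour distance beyond the cloud-wide mean plus three standard deviations) is an assumption, not the authors' exact criterion.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_outlier_mask(points, k=8, n_sigma=3.0):
    """Flag points whose mean distance to their k nearest neighbours is
    unusually large (mean + n_sigma * std over the whole cloud)."""
    tree = cKDTree(points)                       # k-d tree for fast neighbour search
    dists, _ = tree.query(points, k=k + 1)       # first column is the point itself
    mean_knn = dists[:, 1:].mean(axis=1)
    threshold = mean_knn.mean() + n_sigma * mean_knn.std()
    return mean_knn > threshold                  # True = suspected gross error

pts = np.random.default_rng(1).random((10000, 3))
pts[0] = [10.0, 10.0, 10.0]                      # inject an obvious outlier
print(np.where(knn_outlier_mask(pts))[0])
```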
Clouds on Neptune: Motions, Evolution, and Structure
NASA Technical Reports Server (NTRS)
Sromovsky, Larry A.; Morgan, Thomas (Technical Monitor)
2001-01-01
The aims of our original proposal were these: (1) improving measurements of Neptune's circulation, (2) understanding the spatial distribution of cloud features, (3) discovery of new cloud features and understanding their evolutionary process, (4) understanding the vertical structure of zonal cloud patterns, (5) defining the structure of discrete cloud features, and (6) defining the near IR albedo and light curve of Triton. Towards these aims we proposed analysis of existing 1996 groundbased NSFCAM/IRTF observations and nearly simultaneous WFPC2 observations from the Hubble Space Telescope. We also proposed to acquire new observations from both HST and the IRTF.
Object Detection using the Kinect
2012-03-01
Kinect camera and point cloud data from the Kinect’s structured light stereo system (figure 1). We obtain reasonable results using a single prototype ... same manner we present in this report. For example, at Willow Garage, Steder uses a 3-D feature he developed to classify objects directly from point ... detecting backpacks using the data available from the Kinect sensor. ... Point Cloud Filtering: Dense point clouds derived from stereo are notoriously
Tunnel Point Cloud Filtering Method Based on Elliptic Cylindrical Model
NASA Astrophysics Data System (ADS)
Zhu, Ningning; Jia, Yonghong; Luo, Lun
2016-06-01
The large number of bolts and screws attached to the subway shield ring plates, along with the many metal supports and electrical equipment accessories mounted on the tunnel walls, causes the laser point cloud data to include many non-tunnel-section points (hereinafter referred to as non-points), thereby affecting the accuracy of modeling and deformation monitoring. This paper proposes a filtering method for the point cloud based on an elliptic cylindrical model. The original laser point cloud data are first projected onto a horizontal plane, and a searching algorithm is given to extract the edge points of both sides, which are then used to fit the tunnel central axis. Along the axis the point cloud is segmented regionally and then fitted as a smooth elliptic cylindrical surface by means of iteration. This processing enables the automatic filtering of non-points on the inner wall. Experiments on two groups of data showed consistent results: the elliptic cylindrical model-based method can effectively filter out the non-points and meet the accuracy requirements for subway deformation monitoring. The method provides a new approach for the periodic monitoring of all-around tunnel section deformation in routine subway operation and maintenance.
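A minimal sketch of one way to flag non-points within a single projected cross-section: fit a general conic (ellipse) to the 2D points by a least-squares SVD solve and threshold the algebraic residual. This is an illustrative simplification on assumed synthetic data, not the paper's iterative elliptic-cylindrical surface fit.

```python
import numpy as np

def fit_conic(xy):
    """Least-squares conic a x^2 + b xy + c y^2 + d x + e y + f = 0 via SVD."""
    x, y = xy[:, 0], xy[:, 1]
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D, full_matrices=False)
    return Vt[-1]                                 # right singular vector of the smallest singular value

def conic_residuals(xy, coef):
    x, y = xy[:, 0], xy[:, 1]
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    return np.abs(D @ coef)                       # algebraic distance to the fitted conic

# Synthetic elliptical tunnel section plus a few "non-points" (bolts, equipment).
t = np.linspace(0, 2 * np.pi, 400)
section = np.column_stack([3.0 * np.cos(t), 2.5 * np.sin(t)])
section += 0.01 * np.random.default_rng(2).standard_normal(section.shape)
clutter = np.array([[0.5, 0.5], [1.0, -0.8], [2.0, 1.5]])
pts = np.vstack([section, clutter])

coef = fit_conic(pts)
res = conic_residuals(pts, coef)
keep = res < 5 * np.median(res)                   # assumed threshold for this example
print("kept", int(keep.sum()), "of", len(pts), "points")
```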
NASA Astrophysics Data System (ADS)
Kassem, Mohammed A.; Amin, Alaa S.
2015-02-01
A new method to estimate rhodium in different samples at trace levels has been developed. Rhodium was complexed with 5-(4′-nitro-2′,6′-dichlorophenylazo)-6-hydroxypyrimidine-2,4-dione (NDPHPD) as a complexing agent in an aqueous medium and concentrated by using Triton X-114 as a surfactant. The rhodium complex was preconcentrated by a cloud point extraction process using the nonionic surfactant Triton X-114 to extract it from aqueous solutions at pH 4.75. After the phase separation at 50 °C, the surfactant-rich phase was heated again at 100 °C to remove water after decantation, and the remaining phase was dissolved using 0.5 mL of acetonitrile. Under optimum conditions, the calibration curve was linear for the concentration range of 0.5-75 ng mL-1 and the detection limit was 0.15 ng mL-1 of the original solution. An enhancement factor of 500 was achieved for 250 mL samples containing the analyte, and relative standard deviations were ⩽1.50%. The method was found to be highly selective, fairly sensitive, simple, rapid and economical, and was safely applied to rhodium determination in complex materials such as synthetic alloy mixtures and environmental water samples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Aardt, Jan; Romanczyk, Paul; van Leeuwen, Martin
2016-04-04
Terrestrial laser scanning (TLS) has emerged as an effective tool for rapid comprehensive measurement of object structure. Registration of TLS data is an important prerequisite to overcome the limitations of occlusion. However, due to the high dissimilarity of point cloud data collected from disparate viewpoints in the forest environment, adequate marker-free registration approaches have not been developed. The majority of studies instead rely on the utilization of artificial tie points (e.g., reflective tooling balls) placed within a scene to aid in coordinate transformation. We present a technique for generating view-invariant feature descriptors that are intrinsic to the point cloud data and, thus, enable blind marker-free registration in forest environments. To overcome the limitation of initial pose estimation, we employ a voting method to blindly determine the optimal pairwise transformation parameters, without an a priori estimate of the initial sensor pose. To provide embedded error metrics, we developed a set theory framework in which a circular transformation is traversed between disjoint tie point subsets. This provides an upper estimate of the Root Mean Square Error (RMSE) confidence associated with each pairwise transformation. Output RMSE errors are commensurate with the RMSE of input tie point locations. Thus, while the mean output RMSE = 16.3 cm, improved results could be achieved with a more precise laser scanning system. This study 1) quantifies the RMSE of the proposed marker-free registration approach, 2) assesses the validity of embedded confidence metrics using receiver operator characteristic (ROC) curves, and 3) informs optimal sample spacing considerations for TLS data collection in New England forests. Furthermore, while the implications for rapid, accurate, and precise forest inventory are obvious, the conceptual framework outlined here could potentially be extended to built environments.
A Modular Approach to Video Designation of Manipulation Targets for Manipulators
2014-05-12
side view of a ray going through a point cloud of a water bottle sitting on the ground. The bottom left image shows the same point cloud after it has ... System (ROS), Point Cloud Library (PCL), and OpenRAVE were used to a great extent to help promote reusability of the code developed during this
Automatic Matching of Large Scale Images and Terrestrial LIDAR Based on App Synergy of Mobile Phone
NASA Astrophysics Data System (ADS)
Xia, G.; Hu, C.
2018-04-01
The digitalization of Cultural Heritage based on ground laser scanning technology has been widely applied. High-precision scanning and high-resolution photography of cultural relics are the main methods of data acquisition. Reconstruction with a complete point cloud and high-resolution images requires matching images to the point cloud, acquiring homologous feature points, registering the data, etc. However, establishing the one-to-one correspondence between an image and its corresponding point cloud currently depends on inefficient manual search. The effective classification and management of large numbers of images, and the matching of large images with their corresponding point clouds, are therefore the focus of this research. In this paper, we propose automatic matching of large-scale images and terrestrial LiDAR based on the app synergy of a mobile phone. Firstly, we develop an Android app to take pictures and record related classification information. Secondly, all the images are automatically grouped using the recorded information. Thirdly, a matching algorithm is used to match the global and local images. According to the one-to-one correspondence between the global image and the point cloud reflectance-intensity image, the automatic matching of each image and its corresponding laser point cloud is realized. Finally, the mapping relationships among the global image, the local images and the intensity image are established according to homologous feature points, so that a data structure linking the global image, the local images within it, and the point cloud corresponding to each local image can be built, enabling visual management and querying of the images.
Incremental triangulation by way of edge swapping and local optimization
NASA Technical Reports Server (NTRS)
Wiltberger, N. Lyn
1994-01-01
This document is intended to serve as an installation, usage, and basic theory guide for the two-dimensional triangulation software 'HARLEY' written for the Silicon Graphics IRIS workstation. This code consists of an incremental triangulation algorithm based on point insertion and local edge swapping. Using this basic strategy, several types of triangulations can be produced depending on user-selected options. For example, local edge swapping criteria can be chosen which minimize the maximum interior angle (a MinMax triangulation) or which maximize the minimum interior angle (a MaxMin or Delaunay triangulation). It should be noted that the MinMax triangulation is generally only locally optimal (not globally optimal) in this measure. The MaxMin triangulation, however, is both locally and globally optimal. In addition, Steiner triangulations can be constructed by inserting new sites at triangle circumcenters followed by edge swapping based on the MaxMin criterion. Incremental insertion of sites also provides flexibility in choosing cell refinement criteria. A dynamic heap structure has been implemented in the code so that once a refinement measure is specified (i.e., maximum aspect ratio or some measure of a solution gradient for solution-adaptive grid generation) the cell with the largest value of this measure is continually removed from the top of the heap and refined. The heap refinement strategy allows the user either to specify the number of cells desired or to refine the mesh until all cell refinement measures satisfy a user-specified tolerance level. Since the dynamic heap structure is constantly updated, the algorithm always refines the particular cell in the mesh with the largest refinement criterion value. The code allows the user to: triangulate a cloud of prespecified points (sites), triangulate a set of prespecified interior points constrained by prespecified boundary curve(s), Steiner triangulate the interior/exterior of prespecified boundary curve(s), refine existing triangulations based on solution error measures, and partition meshes based on the Cuthill-McKee, spectral, and coordinate bisection strategies.
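A minimal sketch of the local test behind the Delaunay (MaxMin) edge swap described above: an interior edge shared by triangles (a, b, c) and (a, b, d) is swapped when d falls inside the circumcircle of (a, b, c). The coordinates are illustrative; this is the standard determinant form of the in-circle predicate, not code from HARLEY.

```python
import numpy as np

def in_circumcircle(a, b, c, d):
    """True if d lies strictly inside the circumcircle of triangle (a, b, c).

    (a, b, c) must be in counter-clockwise order; this is the standard
    determinant form of the Delaunay in-circle test."""
    m = np.array([
        [a[0] - d[0], a[1] - d[1], (a[0] - d[0])**2 + (a[1] - d[1])**2],
        [b[0] - d[0], b[1] - d[1], (b[0] - d[0])**2 + (b[1] - d[1])**2],
        [c[0] - d[0], c[1] - d[1], (c[0] - d[0])**2 + (c[1] - d[1])**2],
    ])
    return np.linalg.det(m) > 0.0

def should_swap(a, b, c, d):
    """MaxMin (Delaunay) criterion for the edge a-b with opposite vertices c and d."""
    return in_circumcircle(a, b, c, d)

a, b, c, d = (0.0, 0.0), (2.0, 0.0), (1.0, 1.0), (1.0, -0.2)
print(should_swap(a, b, c, d))   # True: d lies inside the circumcircle, so edge a-b is swapped
```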
NASA Astrophysics Data System (ADS)
Vázquez Tarrío, Daniel; Borgniet, Laurent; Recking, Alain; Liebault, Frédéric; Vivier, Marie
2016-04-01
The present research focuses on the Vénéon river at Plan du Lac (Massif des Ecrins, France), an alpine braided gravel-bed stream with a glacio-nival hydrological regime. It drains a catchment area of 316 km2. The study site is a 2.5 km braided reach located immediately upstream of a small hydropower dam. An airborne LiDAR survey was carried out in October 2014 by EDF (the company managing the small hydropower dam), and data from this LiDAR survey were available for the present research. The point density of the LiDAR-derived 3D point cloud was between 20-50 points/m2, with a vertical precision of 2-3 cm over flat surfaces. Moreover, between April and June 2015, we carried out a photogrammetric campaign based on aerial images taken with a UAV. The UAV-derived point cloud has a point density of 200-300 points/m2 and a vertical precision over flat control surfaces comparable to that of the LiDAR point cloud (2-3 cm). Simultaneously with the UAV campaign, we took several Wolman samples with the aim of characterizing the grain size distribution of the bed sediment. Wolman samples were taken following a geomorphological criterion (unit bars, head/tail of compound bars). Furthermore, some of the Wolman samples were repeated with the aim of defining the uncertainty of our sampling protocol. The LiDAR and UAV-derived point clouds were processed in order to check whether both point clouds were correctly co-aligned. After that, we estimated bed roughness using the detrended standard deviation of heights in a 40-cm window. For all this data treatment we used CloudCompare. Then, we measured the distribution of roughness in the same geomorphological units where we took the Wolman samples and compared it with the grain size distributions measured in the field: differences between the UAV point cloud roughness distributions and the measured grain size distributions (~1-2 cm) are of the same order of magnitude as the differences found between the repeated Wolman samples (~0.5-1.5 cm). Differences with the LiDAR-derived roughness distributions are only slightly higher, which could be due to the lower point density of the LiDAR point clouds.
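A minimal sketch of the roughness measure used above: within each square window a plane is fitted to the points and the standard deviation of the detrended heights is returned. The 40 cm cell size matches the text; the gridding scheme, minimum point count and synthetic data are assumptions (the authors used CloudCompare rather than custom code).

```python
import numpy as np

def detrended_std(points, cell=0.40):
    """Standard deviation of plane-detrended heights per square cell (metres).

    points: (N, 3) array of x, y, z; returns a dict {(i, j): roughness}."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    roughness = {}
    for key in set(map(tuple, ij)):
        p = points[(ij == key).all(axis=1)]
        if len(p) < 4:                      # not enough points to fit a plane
            continue
        A = np.column_stack([p[:, 0], p[:, 1], np.ones(len(p))])
        coef, *_ = np.linalg.lstsq(A, p[:, 2], rcond=None)   # fitted plane z = ax + by + c
        roughness[key] = np.std(p[:, 2] - A @ coef)
    return roughness

pts = np.random.default_rng(3).random((5000, 3)) * [2.0, 2.0, 0.05]
print(len(detrended_std(pts)), "cells with a roughness value")
```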
Vertical Optical Scanning with Panoramic Vision for Tree Trunk Reconstruction
Berveglieri, Adilson; Tommaselli, Antonio M G; Liang, Xinlian; Honkavaara, Eija
2017-01-01
This paper presents a practical application of a technique that uses a vertical optical flow with a fisheye camera to generate dense point clouds from a single planimetric station. Accurate data can be extracted to enable the measurement of tree trunks or branches. The images that are collected with this technique can be oriented in photogrammetric software (using fisheye models) and used to generate dense point clouds, provided that some constraints on the camera positions are adopted. A set of images was captured in a forest plot in the experiments. Weighted geometric constraints were imposed in the photogrammetric software to calculate the image orientation, perform dense image matching, and accurately generate a 3D point cloud. The tree trunks in the scenes were reconstructed and mapped in a local reference system. The accuracy assessment was based on differences between measured and estimated trunk diameters at different heights. Trunk sections from an image-based point cloud were also compared to the corresponding sections that were extracted from a dense terrestrial laser scanning (TLS) point cloud. Cylindrical fitting of the trunk sections allowed the assessment of the accuracies of the trunk geometric shapes in both clouds. The average difference between the cylinders that were fitted to the photogrammetric cloud and those to the TLS cloud was less than 1 cm, which indicates the potential of the proposed technique. The point densities that were obtained with vertical optical scanning were 1/3 less than those that were obtained with TLS. However, the point density can be improved by using higher resolution cameras. PMID:29207468
Automatic Recognition of Indoor Navigation Elements from Kinect Point Clouds
NASA Astrophysics Data System (ADS)
Zeng, L.; Kang, Z.
2017-09-01
This paper automatically recognizes the navigation elements defined by the IndoorGML data standard: door, stairway and wall. The data used are indoor 3D point clouds collected with a Kinect v2 sensor by means of ORB-SLAM. Compared with LiDAR, this sensor is cheaper and more convenient, but the resulting point clouds also suffer from noise, registration errors and large data volumes. Hence, we adopt a shape descriptor proposed by Osada, a histogram of distances between randomly chosen point pairs, merge it with other descriptors, and use it in conjunction with a random forest classifier to recognize the navigation elements (door, stairway and wall) from Kinect point clouds. This research acquires the navigation elements and their 3D location information from each single data frame through segmentation of the point clouds, boundary extraction, feature calculation and classification. Finally, this paper utilizes the acquired navigation elements and their information to generate the state data of the indoor navigation module automatically. The experimental results demonstrate a high recognition accuracy of the proposed method.
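A minimal sketch of the Osada-style D2 shape distribution mentioned above, a histogram of distances between randomly sampled point pairs, fed here to a scikit-learn random forest. The bin count, pair count, synthetic segments and class labels are assumptions; the paper's full feature set and training data are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def d2_descriptor(points, n_pairs=2000, bins=32, rng=None):
    """Histogram of distances between randomly chosen point pairs (Osada's D2),
    normalised by the largest sampled distance so the descriptor is scale-free."""
    rng = np.random.default_rng(rng)
    i = rng.integers(0, len(points), n_pairs)
    j = rng.integers(0, len(points), n_pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    hist, _ = np.histogram(d / d.max(), bins=bins, range=(0.0, 1.0), density=True)
    return hist

# Hypothetical training setup: one descriptor per segmented element; labels 0/1/2
# stand in for door / stairway / wall and are tied to the synthetic geometry only.
rng = np.random.default_rng(4)
segments = [rng.random((500, 3)) * [1.0 + 2 * (k % 3), 1.0, 1.0] for k in range(30)]
X = np.array([d2_descriptor(s, rng=5) for s in segments])
y = np.array([k % 3 for k in range(30)])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:3]))
```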
Towards 3D Matching of Point Clouds Derived from Oblique and Nadir Airborne Imagery
NASA Astrophysics Data System (ADS)
Zhang, Ming
Because of the low-cost, highly efficient image collection process and the rich 3D and texture information present in the images, the combined use of 2D airborne nadir and oblique images to reconstruct 3D geometric scenes has a promising market for future commercial uses such as urban planning or first response. The methodology introduced in this thesis provides a feasible way towards fully automated 3D city modeling from oblique and nadir airborne imagery. In this thesis, the difficulty of matching 2D images with large disparity is avoided by grouping the images first and applying the 3D registration afterward. The procedure starts with the extraction of point clouds using a modified version of the RIT 3D Extraction Workflow. Then the point clouds are refined by noise removal and surface smoothing processes. Since the point clouds extracted from different image groups use independent coordinate systems, translation, rotation and scale differences exist between them. To determine these differences, 3D keypoints and their features are extracted. For each pair of point clouds, an initial alignment and a more accurate registration are applied in succession. The final transform matrix contains the parameters describing the required translation, rotation and scale. The methodology presented in the thesis has been shown to behave well for test data. The robustness of this method is discussed by adding artificial noise to the test data. For Pictometry oblique aerial imagery, the initial alignment provides a rough alignment result, which contains a larger offset compared to that of the test data because of the lower quality of the point clouds themselves, but it can be further refined through the final optimization. The accuracy of the final registration result is evaluated by comparing it to the result obtained from manual selection of matched points. Using the method introduced, point clouds extracted from different image groups can be combined with each other to build a more complete point cloud, or be used as a complement to existing point clouds extracted from other sources. This research will both improve the state of the art of 3D city modeling and inspire new ideas in related fields.
Fast Semantic Segmentation of 3d Point Clouds with Strongly Varying Density
NASA Astrophysics Data System (ADS)
Hackel, Timo; Wegner, Jan D.; Schindler, Konrad
2016-06-01
We describe an effective and efficient method for point-wise semantic classification of 3D point clouds. The method can handle unstructured and inhomogeneous point clouds such as those derived from static terrestrial LiDAR or photogrammetric reconstruction; and it is computationally efficient, making it possible to process point clouds with many millions of points in a matter of minutes. The key issue, both to cope with strong variations in point density and to bring down computation time, turns out to be careful handling of neighborhood relations. By choosing appropriate definitions of a point's (multi-scale) neighborhood, we obtain a feature set that is both expressive and fast to compute. We evaluate our classification method both on benchmark data from a mobile mapping platform and on a variety of large, terrestrial laser scans with greatly varying point density. The proposed feature set outperforms the state of the art with respect to per-point classification accuracy, while at the same time being much faster to compute.
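A minimal sketch of the kind of multi-scale neighborhood features such classifiers typically use: eigenvalues of the local covariance at several radii turned into linearity, planarity and scattering measures. The radii, exact feature definitions and synthetic data are assumptions and do not reproduce the paper's optimized feature set.

```python
import numpy as np
from scipy.spatial import cKDTree

def eigen_features(points, query, radii=(0.25, 0.5, 1.0)):
    """Per-point covariance eigenvalue features at several neighbourhood radii.

    Returns an array of shape (len(query), 3 * len(radii)) holding
    (linearity, planarity, scattering) for each radius."""
    tree = cKDTree(points)
    feats = np.zeros((len(query), 3 * len(radii)))
    for r_idx, r in enumerate(radii):
        for q_idx, idx in enumerate(tree.query_ball_point(query, r)):
            nb = points[idx]
            if len(nb) < 3:
                continue                       # too few neighbours at this scale
            lam = np.linalg.eigvalsh(np.cov(nb.T))[::-1]   # l1 >= l2 >= l3
            l1, l2, l3 = np.maximum(lam, 0.0)
            if l1 <= 0:
                continue
            feats[q_idx, 3 * r_idx:3 * r_idx + 3] = [(l1 - l2) / l1,   # linearity
                                                     (l2 - l3) / l1,   # planarity
                                                     l3 / l1]          # scattering
    return feats

pts = np.random.default_rng(6).random((20000, 3)) * [10.0, 10.0, 0.1]   # roughly planar cloud
print(eigen_features(pts, pts[:5]).round(2))
```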
A Voxel-Based Approach for Imaging Voids in Three-Dimensional Point Clouds
NASA Astrophysics Data System (ADS)
Salvaggio, Katie N.
Geographically accurate scene models have enormous potential beyond that of just simple visualizations in regard to automated scene generation. In recent years, thanks to ever increasing computational efficiencies, there has been significant growth in both the computer vision and photogrammetry communities pertaining to automatic scene reconstruction from multiple-view imagery. The result of these algorithms is a three-dimensional (3D) point cloud which can be used to derive a final model using surface reconstruction techniques. However, the fidelity of these point clouds has not been well studied, and voids often exist within the point cloud. Voids exist in texturally difficult areas, as well as areas where multiple views were not obtained during collection, constant occlusion existed due to collection angles or overlapping scene geometry, or in regions that failed to triangulate accurately. It may be possible to fill in small voids in the scene using surface reconstruction or hole-filling techniques, but this is not the case with larger more complex voids, and attempting to reconstruct them using only the knowledge of the incomplete point cloud is neither accurate nor aesthetically pleasing. A method is presented for identifying voids in point clouds by using a voxel-based approach to partition the 3D space. By using collection geometry and information derived from the point cloud, it is possible to detect unsampled voxels such that voids can be identified. This analysis takes into account the location of the camera and the 3D points themselves to capitalize on the idea of free space, such that voxels that lie on the ray between the camera and point are devoid of obstruction, as a clear line of sight is a necessary requirement for reconstruction. Using this approach, voxels are classified into three categories: occupied (contains points from the point cloud), free (rays from the camera to the point passed through the voxel), and unsampled (does not contain points and no rays passed through the area). Voids in the voxel space are manifested as unsampled voxels. A similar line-of-sight analysis can then be used to pinpoint locations at aircraft altitude at which the voids in the point clouds could theoretically be imaged. This work is based on the assumption that inclusion of more images of the void areas in the 3D reconstruction process will reduce the number of voids in the point cloud that were a result of lack of coverage. Voids resulting from texturally difficult areas will not benefit from more imagery in the reconstruction process, and thus are identified and removed prior to the determination of future potential imaging locations.
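A minimal sketch of the occupied/free/unsampled voxel labeling described above: voxels containing reconstructed points are occupied, voxels traversed by camera-to-point rays are free, and the rest remain unsampled (candidate voids). Ray traversal is approximated here by uniform sampling along each ray rather than an exact voxel walk, and the grid, camera positions and point data are assumptions.

```python
import numpy as np

OCCUPIED, FREE, UNSAMPLED = 2, 1, 0

def label_voxels(points, cameras, bounds_min, voxel, shape, samples=64):
    """Label a voxel grid from a point cloud and the camera position for each point."""
    grid = np.zeros(shape, dtype=np.uint8)                 # all voxels start UNSAMPLED

    def to_index(xyz):
        idx = np.floor((xyz - bounds_min) / voxel).astype(int)
        return np.clip(idx, 0, np.array(shape) - 1)        # clamp samples outside the grid

    # Free space: sample along each camera-to-point ray (excluding both endpoints).
    t = np.linspace(0.0, 1.0, samples, endpoint=False)[1:, None]
    for cam, pt in zip(cameras, points):
        ray = cam + t * (pt - cam)
        i, j, k = to_index(ray).T
        grid[i, j, k] = np.maximum(grid[i, j, k], FREE)

    # Occupied voxels override FREE where the reconstructed points themselves fall.
    i, j, k = to_index(points).T
    grid[i, j, k] = OCCUPIED
    return grid

rng = np.random.default_rng(7)
pts = rng.random((2000, 3)) * [10.0, 10.0, 2.0]
cams = np.tile([5.0, 5.0, 30.0], (len(pts), 1))            # one assumed nadir "camera" per point
g = label_voxels(pts, cams, bounds_min=np.zeros(3), voxel=1.0, shape=(10, 10, 3))
print("unsampled voxels:", int((g == UNSAMPLED).sum()))
```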
Classification by Using Multispectral Point Cloud Data
NASA Astrophysics Data System (ADS)
Liao, C. T.; Huang, H. H.
2012-07-01
Remote sensing images are generally recorded in a two-dimensional format containing multispectral information. The semantic information is clearly visualized, so ground features can be readily recognized and classified via supervised or unsupervised classification methods. Nevertheless, multispectral images have shortcomings: they depend strongly on light conditions, and classification results lack three-dimensional semantic information. On the other hand, LiDAR has become a main technology for acquiring high-accuracy point cloud data. The advantages of LiDAR are its high data acquisition rate, its independence from light conditions, and its ability to directly produce three-dimensional coordinates. However, compared with multispectral images, its disadvantage is the lack of multispectral information, which remains a challenge for ground feature classification from massive point cloud data. Consequently, by combining the advantages of both LiDAR and multispectral images, point cloud data with three-dimensional coordinates and multispectral information can provide an integrated solution for point cloud classification. Therefore, this research acquires visible-light and near-infrared images via close-range photogrammetry and matches the images automatically through a free online service to generate a multispectral point cloud. Then, a three-dimensional affine coordinate transformation is used to compare the data increment. Finally, thresholds on height and color information are set for classification.
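A minimal sketch of the final threshold-based classification step: points carrying both height and multispectral information are split into a few classes with simple NDVI and height cut-offs. The thresholds, class names and synthetic data are assumptions, not the values used in the paper.

```python
import numpy as np

def classify(points, red, nir, height_thresh=2.0, ndvi_thresh=0.4):
    """Very simple rule-based labelling of a multispectral point cloud.

    points: (N, 3) x, y, z; red, nir: per-point reflectance in [0, 1].
    Returns labels: 0 = ground, 1 = low vegetation, 2 = tree/canopy, 3 = built."""
    z = points[:, 2] - points[:, 2].min()                 # height above the lowest point
    ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)
    labels = np.zeros(len(points), dtype=int)
    labels[(ndvi >= ndvi_thresh) & (z < height_thresh)] = 1
    labels[(ndvi >= ndvi_thresh) & (z >= height_thresh)] = 2
    labels[(ndvi < ndvi_thresh) & (z >= height_thresh)] = 3
    return labels

rng = np.random.default_rng(8)
pts = rng.random((1000, 3)) * [50.0, 50.0, 10.0]
red, nir = rng.random(1000), rng.random(1000)
print(np.bincount(classify(pts, red, nir)))
```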
Line segment extraction for large scale unorganized point clouds
NASA Astrophysics Data System (ADS)
Lin, Yangbin; Wang, Cheng; Cheng, Jun; Chen, Bili; Jia, Fukai; Chen, Zhonggui; Li, Jonathan
2015-04-01
Line segment detection in images is already a well-investigated topic, although it has received considerably less attention in 3D point clouds. Benefiting from current LiDAR devices, large-scale point clouds are becoming increasingly common. Most human-made objects have flat surfaces. Line segments that occur where pairs of planes intersect give important information regarding the geometric content of point clouds, which is especially useful for automatic building reconstruction and segmentation. This paper proposes a novel method that is capable of accurately extracting plane intersection line segments from large-scale raw scan points. The 3D line-support region, namely, a point set near a straight linear structure, is extracted simultaneously. The 3D line-support region is fitted by our Line-Segment-Half-Planes (LSHP) structure, which provides a geometric constraint for a line segment, making the line segment more reliable and accurate. We demonstrate our method on the point clouds of large-scale, complex, real-world scenes acquired by LiDAR devices. We also demonstrate the application of 3D line-support regions and their LSHP structures on urban scene abstraction.
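A minimal sketch of the underlying geometry: once two planes have been fitted, their intersection line has direction equal to the cross product of the normals, and a point on the line can be taken from a least-squares solve of the two plane equations. The plane parameters are illustrative; the paper's LSHP structure and line-support-region extraction are not reproduced.

```python
import numpy as np

def plane_intersection(n1, d1, n2, d2):
    """Intersection line of planes n1·x = d1 and n2·x = d2.

    Returns (point_on_line, unit_direction), or None for (near-)parallel planes."""
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    direction = np.cross(n1, n2)
    if np.linalg.norm(direction) < 1e-9:
        return None
    # Minimum-norm point satisfying both plane equations (least-squares solve).
    A = np.vstack([n1, n2])
    point, *_ = np.linalg.lstsq(A, np.array([d1, d2]), rcond=None)
    return point, direction / np.linalg.norm(direction)

# Two building-like planes: a vertical wall (x = 2) and a horizontal roof (z = 5).
p, d = plane_intersection([1.0, 0.0, 0.0], 2.0, [0.0, 0.0, 1.0], 5.0)
print("point on line:", p, "direction:", d)
```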
Characterizing Sorghum Panicles using 3D Point Clouds
NASA Astrophysics Data System (ADS)
Lonesome, M.; Popescu, S. C.; Horne, D. W.; Pugh, N. A.; Rooney, W.
2017-12-01
To address the demands of population growth and the impacts of global climate change, plant breeders must increase crop yield through genetic improvement. However, plant phenotyping, the characterization of a plant's physical attributes, remains a primary bottleneck in modern crop improvement programs. 3D point clouds generated from terrestrial laser scanning (TLS) and unmanned aerial system (UAS) based structure from motion (SfM) are a promising data source to increase the efficiency of screening plant material in breeding programs. This study develops and evaluates methods for characterizing sorghum (Sorghum bicolor) panicles (heads) in field plots from both TLS and UAS-based SfM point clouds. The TLS point cloud over an experimental sorghum field at the Texas A&M farm in Burleson County, TX, was collected using a FARO Focus X330 3D laser scanner. The SfM point cloud was generated from UAS imagery captured using a Phantom 3 Professional UAS at 10 m altitude and 85% image overlap. The panicle detection method applies point cloud reflectance, height and point density attributes characteristic of sorghum panicles to detect them and estimate their dimensions (panicle length and width) through image classification and clustering procedures. We compare the derived panicle counts and panicle sizes with field-based and manually digitized measurements in selected plots and study the strengths and limitations of each data source for sorghum panicle characterization.
Efficient terrestrial laser scan segmentation exploiting data structure
NASA Astrophysics Data System (ADS)
Mahmoudabadi, Hamid; Olsen, Michael J.; Todorovic, Sinisa
2016-09-01
New technologies such as lidar enable the rapid collection of massive datasets to model a 3D scene as a point cloud. However, while hardware technology continues to advance, processing 3D point clouds into informative models remains complex and time consuming. A common approach to increase processing efficiency is to segment the point cloud into smaller sections. This paper proposes a novel approach for point cloud segmentation using computer vision algorithms to analyze panoramic representations of individual laser scans. These panoramas can be quickly created using an inherent neighborhood structure that is established during the scanning process, which scans at fixed angular increments in a cylindrical or spherical coordinate system. In the proposed approach, a selected image segmentation algorithm is applied on several input layers exploiting this angular structure, including laser intensity, range, normal vectors, and color information. These segments are then mapped back to the 3D point cloud so that modeling can be completed more efficiently. This approach does not depend on pre-defined mathematical models and, consequently, on setting parameters for them. Unlike common geometrical point cloud segmentation methods, the proposed method employs the colorimetric and intensity data as another source of information. The proposed algorithm is demonstrated on several datasets encompassing a variety of scenes and objects. Results show a very high perceptual (visual) level of segmentation and thereby the feasibility of the proposed algorithm. The proposed method is also more efficient compared to Random Sample Consensus (RANSAC), which is a common approach for point cloud segmentation.
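A minimal sketch of the panoramic mapping that such an approach builds on: each scan point is binned by azimuth and elevation into a 2D grid so that image-segmentation algorithms can run on range (or intensity) layers. The grid resolution and the synthetic scan are assumptions; a real scanner's fixed angular increments would define the grid directly.

```python
import numpy as np

def scan_to_panorama(points, width=1024, height=256):
    """Project a terrestrial scan (scanner at the origin) to a panoramic range image."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rng_ = np.linalg.norm(points, axis=1)
    az = np.arctan2(y, x)                                  # azimuth in [-pi, pi)
    el = np.arcsin(np.clip(z / np.maximum(rng_, 1e-9), -1.0, 1.0))
    col = ((az + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
    row = ((el + np.pi / 2) / np.pi * (height - 1)).astype(int)
    pano = np.full((height, width), np.nan)
    pano[row, col] = rng_                                  # last write wins for duplicate cells
    return pano

pts = np.random.default_rng(9).standard_normal((50000, 3)) * [10.0, 10.0, 2.0]
pano = scan_to_panorama(pts)
print("filled panorama cells:", int(np.isfinite(pano).sum()))
```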
Three-dimensional reconstruction of indoor whole elements based on mobile LiDAR point cloud data
NASA Astrophysics Data System (ADS)
Gong, Yuejian; Mao, Wenbo; Bi, Jiantao; Ji, Wei; He, Zhanjun
2014-11-01
Ground-based LiDAR is currently one of the most effective city modeling tools and has been widely used for three-dimensional reconstruction of outdoor objects. For indoor objects, however, there are technical bottlenecks due to the lack of GPS signal. In this paper, based on high-precision indoor point cloud data obtained with an advanced indoor mobile measuring system, high-precision models were built for all indoor ancillary facilities. The point cloud data we employed also contain color information, extracted by fusion with CCD images; thus, they provide both spatial geometric features and spectral information, which can be used to construct object surfaces and restore the color and texture of the geometric model. Based on the Autodesk CAD platform and with the help of the PointSense plug-in, three-dimensional reconstruction of all indoor elements was realized. Specifically, Pointools Edit Pro was adopted to edit the point cloud, and different types of indoor point cloud data were then processed, including data format conversion, outline extraction and texture mapping of the point cloud model. Finally, three-dimensional visualization of the real-world indoor scene was completed. Experimental results showed that high-precision 3D point cloud data obtained by indoor mobile measuring equipment can be used for 3D reconstruction of all indoor elements, and that the methods proposed in this paper realize this reconstruction efficiently. Moreover, the modeling precision could be controlled within 5 cm, which was proved to be a satisfactory result.
NASA Astrophysics Data System (ADS)
Gupta, Shaurya; Guha, Daipayan; Jakubovic, Raphael; Yang, Victor X. D.
2017-02-01
Computer-assisted navigation is used by surgeons in spine procedures to guide pedicle screws, to improve placement accuracy and, in some cases, to better visualize the patient's underlying anatomy. Intraoperative registration is performed to establish a correlation between the patient's anatomy and the pre/intra-operative image. Current algorithms rely on seeding points obtained directly from the exposed spinal surface to achieve clinically acceptable registration accuracy. Registration of these three-dimensional surface point-clouds is prone to various systematic errors. The goal of this study was to evaluate the robustness of surgical navigation systems by examining the relationship between the density of an optically acquired 3D point-cloud and the corresponding surgical navigation error. A retrospective review of a total of 48 registrations performed using an experimental structured light navigation system developed within our lab was conducted. For each registration, the number of points in the acquired point cloud was evaluated relative to whether the registration was acceptable, the corresponding system-reported error and the target registration error. It was demonstrated that the number of points in the point cloud correlates neither with the acceptance/rejection of a registration nor with the system-reported error. However, a negative correlation was observed between the number of points in the point-cloud and the corresponding sagittal angular error. Thus, the system-reported number of registration points and accuracy are insufficient to gauge the accuracy of a navigation system, and the operating surgeon must verify and validate the registration against anatomical landmarks prior to commencing surgery.
Study on Huizhou architecture of point cloud registration based on optimized ICP algorithm
NASA Astrophysics Data System (ADS)
Zhang, Runmei; Wu, Yulu; Zhang, Guangbin; Zhou, Wei; Tao, Yuqian
2018-03-01
Current point cloud registration software has high hardware requirements, involves a heavy workload and much interactive definition, and the source code of software with better processing results is not open. In view of this, a two-step registration method based on normal vector distribution features and a coarse-feature-based iterative closest point (ICP) algorithm is proposed in this paper. The method combines the fast point feature histogram (FPFH) algorithm with a model of the local normal vector distribution: it defines the adjacency region of the point cloud, sets up a local coordinate system for each key point, and obtains the transformation matrix to complete the rough registration; the rough registration results of the two stations are then accurately registered using the ICP algorithm. Experimental results show that, compared with the traditional ICP algorithm, the method used in this paper has obvious time and precision advantages for large point clouds.
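A minimal sketch of the fine-registration stage that follows such a coarse alignment: plain point-to-point ICP with closest-point correspondences from a k-d tree and a closed-form rigid update each iteration. The FPFH-based coarse step and the paper's normal-vector features are not reproduced, and the synthetic clouds are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid(src, dst):
    """Closed-form rigid transform (R, t) minimising ||R @ src_i + t - dst_i||."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iterations=30):
    """Point-to-point ICP: iteratively align src onto dst."""
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iterations):
        _, idx = tree.query(cur)                  # closest-point correspondences
        R, t = best_rigid(cur, dst[idx])
        cur = cur @ R.T + t
    return cur

# Synthetic test: dst rotated by 5 degrees and shifted gives src; ICP undoes it.
dst = np.random.default_rng(10).random((2000, 3))
theta = np.deg2rad(5.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0], [np.sin(theta), np.cos(theta), 0], [0, 0, 1]])
src = dst @ Rz.T + [0.05, -0.03, 0.02]
aligned = icp(src, dst)
print("mean residual:", np.linalg.norm(aligned - dst, axis=1).mean())
```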
NASA Astrophysics Data System (ADS)
Ge, Xuming
2017-08-01
The coarse registration of point clouds from urban building scenes has become a key topic in applications of terrestrial laser scanning technology. Sampling-based algorithms in the random sample consensus (RANSAC) model have emerged as mainstream solutions to address coarse registration problems. In this paper, we propose a novel combined solution to automatically align two markerless point clouds from building scenes. Firstly, the method segments non-ground points from ground points. Secondly, the proposed method detects feature points from each cross section and then obtains semantic keypoints by connecting feature points with specific rules. Finally, the detected semantic keypoints from two point clouds act as inputs to a modified 4PCS algorithm. Examples are presented and the results compared with those of K-4PCS to demonstrate the main contributions of the proposed method, which are the extension of the original 4PCS to handle heavy datasets and the use of semantic keypoints to improve K-4PCS in relation to registration accuracy and computational efficiency.
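A minimal sketch of the affine-invariant ratios at the core of 4PCS-style matching used above: for a roughly coplanar 4-point base, the two segments meet at a point e, and the fractions r1 = |a-e|/|a-b| and r2 = |c-e|/|c-d| are preserved by rigid transforms, so congruent bases can be searched for in the second cloud. The point names and coordinates are illustrative; the paper's semantic keypoint detection and modified 4PCS are not reproduced.

```python
import numpy as np

def fourpcs_ratios(a, b, c, d):
    """Invariant ratios (r1, r2) of a coplanar 4-point base used by 4PCS.

    Finds the closest point between segments a-b and c-d (their intersection
    when the base is exactly coplanar) and expresses it as a fraction along
    each segment."""
    a, b, c, d = (np.asarray(p, float) for p in (a, b, c, d))
    u, v, w = b - a, d - c, a - c
    # Solve [u.u  -u.v; u.v  -v.v] [s, t]^T = [-u.w, -v.w]^T for the line parameters s, t.
    A = np.array([[u @ u, -(u @ v)], [u @ v, -(v @ v)]])
    rhs = np.array([-(u @ w), -(v @ w)])
    s, t = np.linalg.solve(A, rhs)
    return s, t        # r1 along a-b, r2 along c-d

r1, r2 = fourpcs_ratios([0, 0, 0], [4, 0, 0], [1, -1, 0], [1, 3, 0])
print(round(r1, 3), round(r2, 3))   # segments intersect at (1, 0, 0): r1 = 0.25, r2 = 0.25
```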
Object-Based Coregistration of Terrestrial Photogrammetric and ALS Point Clouds in Forested Areas
NASA Astrophysics Data System (ADS)
Polewski, P.; Erickson, A.; Yao, W.; Coops, N.; Krzystek, P.; Stilla, U.
2016-06-01
Airborne Laser Scanning (ALS) and terrestrial photogrammetry are methods applicable for mapping forested environments. While ground-based techniques provide valuable information about the forest understory, the measured point clouds are normally expressed in a local coordinate system, whose transformation into a georeferenced system requires additional effort. In contrast, ALS point clouds are usually georeferenced, yet the point density near the ground may be poor under dense overstory conditions. In this work, we propose to combine the strengths of the two data sources by co-registering the respective point clouds, thus enriching the georeferenced ALS point cloud with detailed understory information in a fully automatic manner. Due to markedly different sensor characteristics, coregistration methods which expect a high geometric similarity between keypoints are not suitable in this setting. Instead, our method focuses on the object (tree stem) level. We first calculate approximate stem positions in the terrestrial and ALS point clouds and construct, for each stem, a descriptor which quantifies the 2D and vertical distances to other stem centers (at ground height). Then, the similarities between all descriptor pairs from the two point clouds are calculated, and standard graph maximum matching techniques are employed to compute corresponding stem pairs (tiepoints). Finally, the tiepoint subset yielding the optimal rigid transformation between the terrestrial and ALS coordinate systems is determined. We test our method on simulated tree positions and a plot situated in the northern interior of the Coast Range in western Oregon, USA, using ALS data (76 x 121 m2) and a photogrammetric point cloud (33 x 35 m2) derived from terrestrial photographs taken with a handheld camera. Results on both simulated and real data show that the proposed stem descriptors are discriminative enough to derive good correspondences. Specifically, for the real plot data, 24 corresponding stems were coregistered with an average 2D position deviation of 66 cm.
NASA Astrophysics Data System (ADS)
Qualls, R. J.; Woodruff, C.
2017-12-01
The behavior of inter-annual trends in mountain snow cover would represent extremely useful information for drought and climate change assessment; however, individual data sources exhibit specific limitations for characterizing this behavior. For example, SNOTEL data provide time series point values of Snow Water Equivalent (SWE), but lack spatial content apart from that contained in a sparse network of point values. Satellite observations in the visible spectrum can provide snow covered area, but not SWE at present, and are limited by cloud cover which often obscures visibility of the ground, especially during the winter and spring in mountainous areas. Cloud cover, therefore, often limits both temporal and spatial coverage of satellite remote sensing of snow. Among the platforms providing the best combination of temporal and spatial coverage to overcome the cloud obscuration problem by providing frequent overflights, the Aqua and Terra satellites carrying the MODIS instrument package provide 500 m, daily resolution observations of snow cover. These were only launched in 1999 and the early 2000's, thus limiting the historical period over which these data are available. A hybrid method incorporating SNOTEL and MODIS data has been developed which accomplishes cloud removal, and enables determination of the time series of watershed spatial snow cover when either SNOTEL or MODIS data are available. This allows one to generate spatial snow cover information for watersheds with SNOTEL stations for periods both before and after the launch of the Aqua and Terra satellites, extending the spatial information about snow cover over the period of record of the SNOTEL stations present in a watershed. This method is used to quantify the spatial time series of snow over the 9000 km2 Upper Snake River watershed and to evaluate inter-annual trends in the timing, rate, and duration of melt over the nearly 40 year period from the early 1980's to the present, and shows promise for generating snow cover depletion maps for drought and climate change scenarios.
Harmonic regression based multi-temporal cloud filtering algorithm for Landsat 8
NASA Astrophysics Data System (ADS)
Joshi, P.
2015-12-01
The Landsat data archive, though rich, has missing dates and periods owing to weather irregularities and inconsistent coverage. The satellite images are further subject to cloud cover effects, resulting in erroneous analysis and observations of ground features. In earlier studies, a change detection algorithm using statistical control charts on harmonic residuals of multi-temporal Landsat 5 data was shown to detect a few prominent remnant clouds [Brooks, Evan B., et al., 2014]. So, in this work we build on this harmonic regression approach to detect and filter clouds using a multi-temporal series of Landsat 8 images. Firstly, we compute the harmonic coefficients using fitting models on annual training data. The time series of residuals is then subjected to Shewhart X-bar control charts, which signal the deviations of cloud points from the fitted multi-temporal Fourier curve. For a process with standard deviation σ, we found second- and third-order harmonic regression with an X-bar chart control limit Lσ in the range 0.5σ < Lσ < σ to be most efficient in detecting clouds. By implementing second-order harmonic regression with successive X-bar chart control limits of L and 0.5L on the NDVI, NDSI and haze optimized transformation (HOT), and utilizing the seasonal physical properties of these parameters, we have designed a novel multi-temporal algorithm for filtering clouds from Landsat 8 images. The method is applied to Virginia and Alabama in Landsat 8 UTM zones 17 and 16, respectively. Our algorithm efficiently filters all types of cloud cover with an overall accuracy greater than 90%. As a result of the multi-temporal operation and the ability to recreate the multi-temporal database of images using only the coefficients of the Fourier regression, our algorithm is largely storage- and time-efficient. The results show good potential for this multi-temporal approach to cloud detection as a timely and targeted solution for the Landsat 8 research community, catering to the need for innovative processing solutions in the early stage of the satellite's mission.
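A minimal sketch of the per-pixel screening described above: fit a second-order harmonic (Fourier) regression to an index time series and flag observations whose residuals exceed Shewhart-style limits of L·σ. The control limit L = 0.75 sits in the 0.5σ to σ range reported above, but the synthetic NDVI series and the single-pass residual test are assumptions and do not reproduce the full multi-index algorithm.

```python
import numpy as np

def harmonic_design(doy, order=2, period=365.25):
    """Design matrix [1, cos(k w t), sin(k w t)] for k = 1..order."""
    w = 2 * np.pi * doy / period
    cols = [np.ones_like(w)]
    for k in range(1, order + 1):
        cols += [np.cos(k * w), np.sin(k * w)]
    return np.column_stack(cols)

def flag_clouds(doy, ndvi, order=2, L=0.75):
    """Boolean mask of observations whose harmonic residual exceeds L * sigma."""
    A = harmonic_design(doy, order)
    coef, *_ = np.linalg.lstsq(A, ndvi, rcond=None)
    resid = ndvi - A @ coef
    return np.abs(resid) > L * resid.std()

# Synthetic NDVI series with two cloud-contaminated (depressed) observations.
doy = np.arange(1, 366, 16, dtype=float)
ndvi = 0.5 + 0.3 * np.sin(2 * np.pi * (doy - 120) / 365.25)
ndvi[[5, 14]] -= 0.4
print(np.where(flag_clouds(doy, ndvi))[0])    # indices flagged as likely cloud
```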
Large-scale urban point cloud labeling and reconstruction
NASA Astrophysics Data System (ADS)
Zhang, Liqiang; Li, Zhuqiang; Li, Anjian; Liu, Fangyu
2018-04-01
The large number of object categories and many overlapping or closely neighboring objects in large-scale urban scenes pose great challenges in point cloud classification. In this paper, a novel framework is proposed for classification and reconstruction of airborne laser scanning point cloud data. To label point clouds, we present a rectified linear units neural network named ReLu-NN where the rectified linear units (ReLu) instead of the traditional sigmoid are taken as the activation function in order to speed up the convergence. Since the features of the point cloud are sparse, we reduce the number of neurons by the dropout to avoid over-fitting of the training process. The set of feature descriptors for each 3D point is encoded through self-taught learning, and forms a discriminative feature representation which is taken as the input of the ReLu-NN. The segmented building points are consolidated through an edge-aware point set resampling algorithm, and then they are reconstructed into 3D lightweight models using the 2.5D contouring method (Zhou and Neumann, 2010). Compared with deep learning approaches, the ReLu-NN introduced can easily classify unorganized point clouds without rasterizing the data, and it does not need a large number of training samples. Most of the parameters in the network are learned, and thus the intensive parameter tuning cost is significantly reduced. Experimental results on various datasets demonstrate that the proposed framework achieves better performance than other related algorithms in terms of classification accuracy and reconstruction quality.
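To make the network design concrete, here is a minimal NumPy sketch of a one-hidden-layer classifier with ReLU activations and inverted dropout trained on per-point feature vectors. The layer sizes, learning rate, dropout rate, and random features are placeholders; this is not the paper's ReLu-NN or its self-taught feature encoding.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

class ReluMLP:
    """One hidden ReLU layer with (inverted) dropout; sizes are placeholders."""
    def __init__(self, n_in, n_hidden, n_classes, drop_p=0.5, lr=0.05):
        self.W1 = rng.normal(0, np.sqrt(2.0 / n_in), (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, np.sqrt(2.0 / n_hidden), (n_hidden, n_classes))
        self.b2 = np.zeros(n_classes)
        self.drop_p, self.lr = drop_p, lr

    def train_step(self, X, y):
        h = relu(X @ self.W1 + self.b1)
        mask = (rng.random(h.shape) > self.drop_p) / (1.0 - self.drop_p)
        h_drop = h * mask                        # dropout against over-fitting
        probs = softmax(h_drop @ self.W2 + self.b2)
        d_logits = probs.copy()                  # cross-entropy gradient
        d_logits[np.arange(len(y)), y] -= 1.0
        d_logits /= len(y)
        dW2 = h_drop.T @ d_logits
        dh = (d_logits @ self.W2.T) * mask * (h > 0)
        dW1 = X.T @ dh
        self.W2 -= self.lr * dW2
        self.b2 -= self.lr * d_logits.sum(0)
        self.W1 -= self.lr * dW1
        self.b1 -= self.lr * dh.sum(0)
        return -np.log(probs[np.arange(len(y)), y] + 1e-12).mean()

    def predict(self, X):
        return np.argmax(relu(X @ self.W1 + self.b1) @ self.W2 + self.b2, axis=1)

# toy usage: 20-dimensional per-point feature descriptors, 3 classes
X = rng.normal(size=(256, 20))
y = rng.integers(0, 3, 256)
net = ReluMLP(20, 64, 3)
for _ in range(50):
    loss = net.train_step(X, y)
print("final training loss:", round(loss, 3))
```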
NASA Technical Reports Server (NTRS)
Farrugia, C. J.; Richardson, I. G.; Burlaga, L. F.; Lepping, R. P.; Osherovich, V. A.
1993-01-01
Simultaneous ISEE 3 and IMP 8 spacecraft observations of magnetic fields and flow anisotropies of solar energetic protons and electrons during the passage of an interplanetary magnetic cloud show various particle signature differences at the two spacecraft. These differences are interpretable in terms of the magnetic line topology of the cloud, the connectivity of the cloud field lines to the solar surface, and the interconnection between the magnetic fields of the magnetic clouds and of the earth. These observations are consistent with a magnetic cloud model in which these mesoscale configurations are curved magnetic flux ropes attached at both ends to the sun's surface, extending out to 1 AU.
Superposition and alignment of labeled point clouds.
Fober, Thomas; Glinca, Serghei; Klebe, Gerhard; Hüllermeier, Eyke
2011-01-01
Geometric objects are often represented approximately in terms of a finite set of points in three-dimensional Euclidean space. In this paper, we extend this representation to what we call labeled point clouds. A labeled point cloud is a finite set of points, where each point is not only associated with a position in three-dimensional space, but also with a discrete class label that represents a specific property. This type of model is especially suitable for modeling biomolecules such as proteins and protein binding sites, where a label may represent an atom type or a physico-chemical property. Proceeding from this representation, we address the question of how to compare two labeled point clouds in terms of their similarity. Using fuzzy modeling techniques, we develop a suitable similarity measure as well as an efficient evolutionary algorithm to compute it. Moreover, we consider the problem of establishing an alignment of the structures in the sense of a one-to-one correspondence between their basic constituents. From a biological point of view, alignments of this kind are of great interest, since mutually corresponding molecular constituents offer important information about evolution and heredity, and can also serve as a means to explain a degree of similarity. In this paper, we therefore develop a method for computing pairwise or multiple alignments of labeled point clouds. To this end, we proceed from an optimal superposition of the corresponding point clouds and construct an alignment which is as much as possible in agreement with the neighborhood structure established by this superposition. We apply our methods to the structural analysis of protein binding sites.
Quantifying spatial variability of AgI cloud seeding benefits and Ag enrichments in snow
NASA Astrophysics Data System (ADS)
Fisher, J.; Benner, S. G.; Lytle, M. L.; Kunkel, M. L.; Blestrud, D.; Holbrook, V. P.; Parkinson, S.; Edwards, R.
2016-12-01
Glaciogenic cloud seeding is an important scientific technology for enhancing water resources across the Western United States. Cloud seeding enriches supercooled liquid water layers with plumes of silver iodide (AgI), an artificial ice nucleus. Recent studies using target-control regression analysis and modeling estimate that glaciogenic cloud seeding increases snow precipitation by 3-15% annually. However, the efficacy of cloud seeding programs is difficult to assess using weather models and statistics alone. This study will supplement precipitation enhancement statistics and Weather Research and Forecasting (WRF) model outputs with ultra-trace chemistry. Combining precipitation enhancement estimates with trace chemistry data (to estimate AgI plume targeting accuracy) may provide a more robust analysis. Precipitation enhancement from the 2016 water year will be estimated in two ways. First, a double-mass curve will be used: annual SNOTEL data of the cumulative SWE in unseeded areas and the cumulative SWE in seeded areas will be compared before and after the cloud seeding program's initiation in 2003. Any change in the double-mass curve's slope after 2003 may be attributed to cloud seeding. Second, WRF model estimates of precipitation will be compared to the observed precipitation at SNOTEL sites. The difference between observed and modeled precipitation in AgI seeded regions may also be attributed to cloud seeding (assuming modeled and observed data are comparable at unseeded SNOTEL stations). Ultra-trace snow chemistry data from the 2016 winter season will be used to validate whether estimated precipitation increases are positively correlated with the mass of silver in the snowpack.
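The double-mass-curve test mentioned above can be sketched in a few lines: accumulate SWE for the control (unseeded) and target (seeded) areas, then compare the regression slope of the cumulative pairs before and after the program start year. The SWE series below are synthetic; only the 2003 change year is taken from the abstract.

```python
import numpy as np

def double_mass_slopes(control_swe, target_swe, years, change_year=2003):
    """Double-mass curve: cumulative target SWE vs cumulative control SWE.

    Returns the regression slope before and after `change_year`; a slope
    increase after the change year is the signal attributed to seeding.
    """
    cum_c = np.cumsum(control_swe)
    cum_t = np.cumsum(target_swe)
    before = years < change_year
    after = ~before
    slope_before = np.polyfit(cum_c[before], cum_t[before], 1)[0]
    slope_after = np.polyfit(cum_c[after], cum_t[after], 1)[0]
    return slope_before, slope_after

# hypothetical annual SWE totals (cm) for unseeded (control) and seeded areas
years = np.arange(1990, 2017)
rng = np.random.default_rng(1)
control = 50 + 10 * rng.standard_normal(len(years))
target = 0.9 * control + 5 * rng.standard_normal(len(years))
target[years >= 2003] *= 1.08            # simulate an 8 % seeding enhancement
b, a = double_mass_slopes(control, target, years)
print(f"slope before 2003: {b:.3f}, after: {a:.3f}, change: {100*(a/b-1):.1f} %")
```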
NASA Astrophysics Data System (ADS)
Kang, Zhizhong
2013-10-01
This paper presents a new approach to automatic registration of terrestrial laser scanning (TLS) point clouds utilizing a novel robust estimation method, an efficient BaySAC (BAYes SAmpling Consensus). The proposed method directly generates reflectance images from 3D point clouds, and then extracts keypoints with the SIFT algorithm to identify corresponding image points. The 3D corresponding points, from which transformation parameters between point clouds are computed, are acquired by mapping the 2D ones onto the point cloud. To remove falsely accepted correspondences, we implement a conditional sampling method that selects the n data points with the highest inlier probabilities as a hypothesis set and updates the inlier probabilities of each data point using a simplified Bayes' rule in order to improve computational efficiency. The prior probability is estimated by verifying the distance invariance between correspondences. The proposed approach is tested on four data sets acquired by three different scanners. The results show that, compared with RANSAC, BaySAC requires fewer iterations and lower computational cost when the hypothesis set is contaminated with more outliers. The registration results also indicate that the proposed algorithm can achieve high registration accuracy on all experimental datasets.
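A minimal sketch of the BaySAC idea, shown here on robust plane fitting rather than on the full TLS transformation estimation: the hypothesis set is always built from the most probable points, and their inlier probabilities are updated with a simplified Bayes rule after each hypothesis is verified. The 0.5 prior, the distance threshold, and the synthetic data are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def plane_from_points(p):
    """Plane (unit normal n, offset d) through three points, n·x + d = 0."""
    n = np.cross(p[1] - p[0], p[2] - p[0])
    n = n / np.linalg.norm(n)
    return n, -n @ p[0]

def baysac(points, n_iter=100, tol=0.05):
    """BaySAC-style conditional sampling, illustrated on plane fitting.

    Instead of random sampling, the hypothesis set is always the 3 points with
    the highest current inlier probabilities; probabilities of the hypothesis
    points are re-weighted by how well the hypothesis is supported
    (a simplified Bayes update).
    """
    prob = np.full(len(points), 0.5)          # uniform prior inlier probability
    best_inliers, best_model = None, None
    for _ in range(n_iter):
        idx = np.argsort(prob)[-3:]           # most probable hypothesis set
        n, d = plane_from_points(points[idx])
        dist = np.abs(points @ n + d)
        inliers = dist < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
        support = inliers.mean()              # fraction of data explained
        prob[idx] = prob[idx] * support / (
            prob[idx] * support + (1 - prob[idx]) * (1 - support))
        # small jitter breaks ties; clip keeps probabilities in (0, 1)
        prob = np.clip(prob + rng.uniform(0, 1e-3, len(prob)), 1e-3, 0.999)
    return best_model, best_inliers

# synthetic cloud: 200 points near the plane z = 0 plus 80 gross outliers
plane_pts = np.column_stack([rng.uniform(-1, 1, (200, 2)), rng.normal(0, 0.01, 200)])
outliers = rng.uniform(-1, 1, (80, 3))
(normal, d), inliers = baysac(np.vstack([plane_pts, outliers]))
print("estimated normal:", np.round(normal, 3), "inliers:", inliers.sum())
```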
The Research on the Spectral Characteristics of Sea Fog Based on CALIOP and MODIS Data
NASA Astrophysics Data System (ADS)
Wan, J.; Su, J.; Liu, S.; Sheng, H.
2018-04-01
In view of the difficulty of distinguishing between sea fog and low cloud by optical remote sensing alone, this research focuses on the spectral characteristics of sea fog. Spaceborne lidar (CALIOP) data and hyperspectral MODIS data were obtained from May to December 2017. The scattering coefficient and vertical height information were extracted from the lidar atmospheric attenuation profiles to select sea fog sample points, and MODIS-based spectral response curves were formed to analyse the spectral response characteristics of sea fog, thus providing a theoretical basis for the monitoring of sea fog with optical remote sensing imagery.
Continuum Limit of Total Variation on Point Clouds
NASA Astrophysics Data System (ADS)
García Trillos, Nicolás; Slepčev, Dejan
2016-04-01
We consider point clouds obtained as random samples of a measure on a Euclidean domain. A graph representing the point cloud is obtained by assigning weights to edges based on the distance between the points they connect. Our goal is to develop mathematical tools needed to study the consistency, as the number of available data points increases, of graph-based machine learning algorithms for tasks such as clustering. In particular, we study when the cut capacity, and more generally total variation, on these graphs is a good approximation of the perimeter (total variation) in the continuum setting. We address this question in the setting of Γ-convergence. We obtain almost optimal conditions on the scaling, as the number of points increases, of the size of the neighborhood over which the points are connected by an edge for the Γ-convergence to hold. Taking the limit is enabled by a transportation-based metric which allows us to suitably compare functionals defined on different point clouds.
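The discrete functional studied here can be written down directly: build an ε-neighborhood graph on the sample and sum weighted differences of a function over its edges. The sketch below uses an indicator kernel and a common normalization; as the sample grows, the value for the indicator of a half-plane should stabilize near the continuum perimeter times a kernel-dependent constant. The kernel choice, the scaling of ε, and the constants are illustrative, not the sharp conditions from the paper.

```python
import numpy as np

def graph_total_variation(points, u, eps):
    """Graph total variation of u on an eps-neighborhood graph.

    Uses the indicator kernel w_ij = 1{0 < |x_i - x_j| < eps} and the usual
    1 / (n^2 * eps^(d+1)) scaling; the limit differs from the continuum
    perimeter by a kernel-dependent surface-tension constant.
    """
    n, d = points.shape
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    w = (dist < eps) & (dist > 0)
    tv = np.abs(u[:, None] - u[None, :])[w].sum()
    return tv / (n ** 2 * eps ** (d + 1))

# sample the unit square; u is the indicator of the half-plane x < 0.5, whose
# continuum total variation (perimeter inside the square) equals 1, so the
# graph values should stabilize near a constant multiple of 1
rng = np.random.default_rng(3)
for n in (200, 800, 2000):
    pts = rng.uniform(0, 1, (n, 2))
    u = (pts[:, 0] < 0.5).astype(float)
    eps = 2.0 * (np.log(n) / n) ** 0.5     # neighborhood radius shrinking with n
    print(n, round(graph_total_variation(pts, u, eps), 3))
```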
Point cloud registration from local feature correspondences-Evaluation on challenging datasets.
Petricek, Tomas; Svoboda, Tomas
2017-01-01
Registration of laser scans, or point clouds in general, is a crucial step of localization and mapping with mobile robots or in object modeling pipelines. A coarse alignment of the point clouds is generally needed before applying local methods such as the Iterative Closest Point (ICP) algorithm. We propose a feature-based approach to point cloud registration and evaluate the proposed method and its individual components on challenging real-world datasets. For a moderate overlap between the laser scans, the method provides a superior registration accuracy compared to state-of-the-art methods including Generalized ICP, 3D Normal-Distribution Transform, Fast Point-Feature Histograms, and 4-Points Congruent Sets. Compared to the surface normals, the points as the underlying features yield higher performance in both keypoint detection and establishing local reference frames. Moreover, sign disambiguation of the basis vectors proves to be an important aspect in creating repeatable local reference frames. A novel method for sign disambiguation is proposed which yields highly repeatable reference frames.
On the performance of metrics to predict quality in point cloud representations
NASA Astrophysics Data System (ADS)
Alexiou, Evangelos; Ebrahimi, Touradj
2017-09-01
Point clouds are a promising alternative for immersive representation of visual contents. Recently, an increased interest has been observed in the acquisition, processing and rendering of this modality. Although subjective and objective evaluations are critical in order to assess the visual quality of media content, they still remain open problems for point cloud representation. In this paper we focus our efforts on subjective quality assessment of point cloud geometry, subject to typical types of impairments such as noise corruption and compression-like distortions. In particular, we propose a subjective methodology that is closer to real-life scenarios of point cloud visualization. The performance of the state-of-the-art objective metrics is assessed by considering the subjective scores as the ground truth. Moreover, we investigate the impact of adopting different test methodologies by comparing them. Advantages and drawbacks of every approach are reported, based on statistical analysis. The results and conclusions of this work provide useful insights that could be considered in future experimentation.
Semantic Segmentation of Building Elements Using Point Cloud Hashing
NASA Astrophysics Data System (ADS)
Chizhova, M.; Gurianov, A.; Hess, M.; Luhmann, T.; Brunn, A.; Stilla, U.
2018-05-01
For the interpretation of point clouds, the semantic definition of segments extracted from point clouds or images is a common problem. Usually, the semantics of geometrically pre-segmented point cloud elements are determined using probabilistic networks and scene databases. The proposed semantic segmentation method is based on the psychological human interpretation of geometric objects, especially on fundamental rules of primary comprehension. Starting from these rules, buildings can be classified quite well and simply by a human operator (e.g. an architect) into different building types and structural elements (dome, nave, transept etc.), including particular building parts which are visually detected. The key part of the procedure is a novel method based on hashing, where point cloud projections are transformed into binary pixel representations. The segmentation approach, demonstrated on the example of classical Orthodox churches, is also suitable for other buildings and objects characterized by a particular constructional typology (e.g. industrial objects in standardized environments with strict component design allowing clear semantic modelling).
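A minimal sketch of the hashing step as described: project the point cloud along an axis onto a binary occupancy image, then hash or compare the packed bit patterns. The grid size, projection axis, and the SHA-1 hash are assumptions for illustration, not the authors' encoding.

```python
import hashlib
import numpy as np

def binary_projection(points, axis=2, grid=64):
    """Project a point cloud along one axis onto a binary occupancy image."""
    keep = [i for i in range(3) if i != axis]
    xy = points[:, keep]
    mins, maxs = xy.min(axis=0), xy.max(axis=0)
    ij = np.floor((xy - mins) / (maxs - mins + 1e-9) * grid).astype(int)
    ij = np.clip(ij, 0, grid - 1)
    img = np.zeros((grid, grid), dtype=np.uint8)
    img[ij[:, 0], ij[:, 1]] = 1
    return img

def projection_hash(points, axis=2, grid=64):
    """Hash the packed binary projection; equal hashes mean identical occupancy."""
    img = binary_projection(points, axis, grid)
    return hashlib.sha1(np.packbits(img).tobytes()).hexdigest()

def pixel_similarity(a, b):
    """Fraction of identical pixels between two binary projections."""
    return (a == b).mean()

# toy usage: a synthetic 'dome' versus a 'box', compared by their plan projections
rng = np.random.default_rng(4)
theta, phi = rng.uniform(0, 2 * np.pi, 5000), rng.uniform(0, np.pi / 2, 5000)
dome = np.column_stack([np.cos(theta) * np.cos(phi),
                        np.sin(theta) * np.cos(phi),
                        np.sin(phi)])
box = rng.uniform(-1, 1, (5000, 3))
print("dome hash:", projection_hash(dome)[:12])
print("plan-view similarity dome vs box:",
      round(pixel_similarity(binary_projection(dome), binary_projection(box)), 3))
```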
NASA Astrophysics Data System (ADS)
Alidoost, F.; Arefi, H.
2017-11-01
Nowadays, Unmanned Aerial System (UAS)-based photogrammetry offers an affordable, fast and effective approach to real-time acquisition of high resolution geospatial information and automatic 3D modelling of objects for numerous applications such as topography mapping, 3D city modelling, orthophoto generation, and cultural heritage preservation. In this paper, the capability of four different state-of-the-art software packages, namely 3DSurvey, Agisoft Photoscan, Pix4Dmapper Pro and SURE, is examined to generate high-density point clouds as well as a Digital Surface Model (DSM) over a historical site. The main steps of this study include image acquisition, point cloud generation, and accuracy assessment. The overlapping images are first captured using a quadcopter and then processed by the different software packages to generate point clouds and DSMs. In order to evaluate the accuracy and quality of the point clouds and DSMs, both visual and geometric assessments are carried out and the comparison results are reported.
Multiview point clouds denoising based on interference elimination
NASA Astrophysics Data System (ADS)
Hu, Yang; Wu, Qian; Wang, Le; Jiang, Huanyu
2018-03-01
Newly emerging low-cost depth sensors offer huge potential for three-dimensional (3-D) modeling, but their high noise prevents these sensors from obtaining accurate results. Thus, we propose a method for denoising registered multiview point clouds with high noise to solve that problem. The proposed method aims to fully use redundant information to eliminate the interference among point clouds of different views based on an iterative procedure. In each iteration, noisy points are either deleted or moved to their weighted average targets in accordance with two cases. Simulated data and practical data captured by a Kinect v2 sensor were tested in experiments qualitatively and quantitatively. Results showed that the proposed method can effectively reduce noise and recover local features from highly noisy multiview point clouds with good robustness, compared to the truncated signed distance function and moving least squares (MLS) methods. Moreover, the resulting low-noise point clouds can be further smoothed by MLS to achieve improved results. This study demonstrates the feasibility of obtaining fine 3-D models with high-noise devices, especially depth sensors such as the Kinect.
Feature-based three-dimensional registration for repetitive geometry in machine vision
Gong, Yuanzheng; Seibel, Eric J.
2016-01-01
As an important step in three-dimensional (3D) machine vision, 3D registration is a process of aligning two or multiple 3D point clouds that are collected from different perspectives into a complete one. The most popular approach to registering point clouds is to minimize the difference between the point clouds iteratively with the Iterative Closest Point (ICP) algorithm. However, ICP does not work well for repetitive geometries. To solve this problem, a feature-based 3D registration algorithm is proposed to align point clouds that are generated by vision-based 3D reconstruction. By utilizing texture information of the object and the robustness of image features, 3D correspondences can be retrieved so that the 3D registration of two point clouds reduces to solving for a rigid transformation. The comparison of our method and different ICP algorithms demonstrates that our proposed algorithm is more accurate, efficient and robust for repetitive geometry registration. Moreover, this method can also be used to address the high depth uncertainty caused by a small camera baseline in vision-based 3D reconstruction. PMID:28286703
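Once 3D correspondences have been retrieved from image features, the rigid transformation can be solved in closed form. The sketch below uses the standard Kabsch/Procrustes SVD solution on synthetic correspondences; the test rotation, translation, and noise level are arbitrary, and this is not a reproduction of the paper's pipeline.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ≈ R @ src + t (Kabsch)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t

# synthetic correspondences: rotate/translate a source cloud and add a little noise
rng = np.random.default_rng(5)
src = rng.uniform(-1, 1, (100, 3))
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
dst = src @ R_true.T + t_true + rng.normal(0, 0.005, src.shape)

R, t = rigid_transform(src, dst)
rmse = np.sqrt(((src @ R.T + t - dst) ** 2).sum(axis=1).mean())
print("translation error:", np.round(t - t_true, 4), "RMSE:", round(rmse, 4))
```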
An efficient global energy optimization approach for robust 3D plane segmentation of point clouds
NASA Astrophysics Data System (ADS)
Dong, Zhen; Yang, Bisheng; Hu, Pingbo; Scherer, Sebastian
2018-03-01
Automatic 3D plane segmentation is necessary for many applications including point cloud registration, building information model (BIM) reconstruction, simultaneous localization and mapping (SLAM), and point cloud compression. However, most of the existing 3D plane segmentation methods still suffer from low precision and recall, and inaccurate and incomplete boundaries, especially for low-quality point clouds collected by RGB-D sensors. To overcome these challenges, this paper formulates the plane segmentation problem as a global energy optimization because it is robust to high levels of noise and clutter. First, the proposed method divides the raw point cloud into multiscale supervoxels, and considers planar supervoxels and individual points corresponding to nonplanar supervoxels as basic units. Then, an efficient hybrid region growing algorithm is utilized to generate initial plane set by incrementally merging adjacent basic units with similar features. Next, the initial plane set is further enriched and refined in a mutually reinforcing manner under the framework of global energy optimization. Finally, the performances of the proposed method are evaluated with respect to six metrics (i.e., plane precision, plane recall, under-segmentation rate, over-segmentation rate, boundary precision, and boundary recall) on two benchmark datasets. Comprehensive experiments demonstrate that the proposed method obtained good performances both in high-quality TLS point clouds (i.e., http://SEMANTIC3D.NET)
Indoor Modelling from Slam-Based Laser Scanner: Door Detection to Envelope Reconstruction
NASA Astrophysics Data System (ADS)
Díaz-Vilariño, L.; Verbree, E.; Zlatanova, S.; Diakité, A.
2017-09-01
Updated and detailed indoor models are increasingly demanded for various applications such as emergency management or navigational assistance. The consolidation of new portable and mobile acquisition systems has led to a higher availability of 3D point cloud data from indoors. In this work, we explore the combined use of point clouds and trajectories from a SLAM-based laser scanner to automate the reconstruction of building interiors. The methodology starts with door detection, since doors represent transitions from one indoor space to another, and their detection provides an initial approximation of the global configuration of the point cloud into building rooms. For this purpose, the trajectory is used to create a vertical point cloud profile in which doors are detected as local minima of the vertical distances. As the point cloud and the trajectory are related by time stamp, this feature is used to subdivide the point cloud into subspaces according to the location of the doors. The correspondence between subspaces and building rooms is not unambiguous. One subspace always corresponds to one room, but one room is not necessarily depicted by just one subspace, for example in the case of a room containing several doors in which the acquisition is performed in a discontinuous way. The labelling problem is formulated as a combinatorial approach and solved as a minimum energy optimization. Once the point cloud is subdivided into building rooms, the envelope (composed of walls, ceilings and floors) is reconstructed for each space. The connectivity between spaces is included by adding the previously detected doors to the reconstructed model. The methodology is tested in a real case study.
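A minimal sketch of the door-detection cue: given the vertical clearance measured above each trajectory sample, report local minima below a door-height threshold as door crossings. The 2.2 m threshold, the minimum sample gap, and the synthetic profile are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def detect_doors(clearance, max_door_height=2.2, min_gap=10):
    """Flag door crossings as local minima of the vertical clearance profile.

    clearance[i] is the vertical distance (m) between the trajectory and the
    point cloud above it at time step i; minima lower than max_door_height and
    separated by at least min_gap samples are reported as door candidates.
    """
    is_min = np.r_[False,
                   (clearance[1:-1] < clearance[:-2]) &
                   (clearance[1:-1] <= clearance[2:]),
                   False]
    candidates = np.where(is_min & (clearance < max_door_height))[0]
    doors, last = [], -min_gap
    for i in candidates:                       # suppress clustered detections
        if i - last >= min_gap:
            doors.append(i)
            last = i
    return doors

# synthetic profile: 2.8 m ceilings with two 2.0 m door lintels around steps 40 and 120
profile = 2.8 + 0.02 * np.random.default_rng(6).standard_normal(200)
for center in (40, 120):
    profile[center - 2:center + 3] = 2.0       # flat dip under each lintel
print("door crossings near steps:", detect_doors(profile))
```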
Tran, Thi Huong Giang; Ressl, Camillo; Pfeifer, Norbert
2018-02-03
This paper suggests a new approach for change detection (CD) in 3D point clouds. It combines classification and CD in one step using machine learning. The point cloud data of both epochs are merged for computing features of four types: features describing the point distribution, a feature relating to relative terrain elevation, features specific for the multi-target capability of laser scanning, and features combining the point clouds of both epochs to identify the change. All these features are merged in the points and then training samples are acquired to create the model for supervised classification, which is then applied to the whole study area. The final results reach an overall accuracy of over 90% for both epochs of eight classes: lost tree, new tree, lost building, new building, changed ground, unchanged building, unchanged tree, and unchanged ground.
A curvature-based weighted fuzzy c-means algorithm for point clouds de-noising
NASA Astrophysics Data System (ADS)
Cui, Xin; Li, Shipeng; Yan, Xiutian; He, Xinhua
2018-04-01
In order to remove noise from three-dimensional scattered point clouds and smooth the data without damaging sharp geometric features, a novel algorithm is proposed in this paper. A feature-preserving weight is added to the fuzzy c-means algorithm, yielding a curvature-weighted fuzzy c-means clustering algorithm. Firstly, large-scale outliers are removed based on statistics of the points within a radius-r neighborhood. Then, the algorithm estimates the curvature of the point cloud data using a conicoid (paraboloid) fitting method and calculates a curvature feature value. Finally, the proposed clustering algorithm is applied to calculate the weighted cluster centers, which are regarded as the new points. The experimental results show that this approach handles noise of different scales and intensities in point clouds with high precision while preserving sharp features. It is also robust to different noise models.
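A minimal sketch of the clustering step, assuming the curvature-based weights have already been computed: a weighted fuzzy c-means where each point's weight scales its pull on the cluster centers, and the centers are returned as the denoised points. The cluster count, the fuzzifier m = 2, the uniform placeholder weights, and the synthetic circle data are assumptions; the outlier removal and paraboloid curvature estimation steps are omitted.

```python
import numpy as np

def weighted_fcm(points, weights, n_clusters=50, m=2.0, n_iter=30, seed=0):
    """Weighted fuzzy c-means; the returned cluster centers act as denoised points.

    `weights` stand in for the curvature-based feature-preserving weights
    (a higher weight pulls centers toward that point).
    """
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), n_clusters, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=-1) + 1e-9
        u = 1.0 / (d ** (2.0 / (m - 1.0)))
        u /= u.sum(axis=1, keepdims=True)       # fuzzy memberships, rows sum to 1
        um = (u ** m) * weights[:, None]
        centers = (um.T @ points) / um.sum(axis=0)[:, None]
    return centers

# noisy samples of a unit circle in the plane (z = 0), denoised to cluster centers
rng = np.random.default_rng(7)
theta = rng.uniform(0, 2 * np.pi, 2000)
pts = np.column_stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)])
pts += rng.normal(0, 0.05, pts.shape)
w = np.ones(len(pts))                           # placeholder curvature weights
denoised = weighted_fcm(pts, w, n_clusters=40)
print("mean radius of denoised points:",
      round(np.linalg.norm(denoised[:, :2], axis=1).mean(), 3))
```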
NASA Astrophysics Data System (ADS)
Hoegner, L.; Tuttas, S.; Xu, Y.; Eder, K.; Stilla, U.
2016-06-01
This paper discusses the automatic coregistration and fusion of 3d point clouds generated from aerial image sequences and corresponding thermal infrared (TIR) images. Both RGB and TIR images have been taken from a RPAS platform with a predefined flight path where every RGB image has a corresponding TIR image taken from the same position and with the same orientation with respect to the accuracy of the RPAS system and the inertial measurement unit. To remove remaining differences in the exterior orientation, different strategies for coregistering RGB and TIR images are discussed: (i) coregistration based on 2D line segments for every single TIR image and the corresponding RGB image. This method implies a mainly planar scene to avoid mismatches; (ii) coregistration of both the dense 3D point clouds from RGB images and from TIR images by coregistering 2D image projections of both point clouds; (iii) coregistration based on 2D line segments in every single TIR image and 3D line segments extracted from intersections of planes fitted in the segmented dense 3D point cloud; (iv) coregistration of both the dense 3D point clouds from RGB images and from TIR images using both ICP and an adapted version based on corresponding segmented planes; (v) coregistration of both image sets based on point features. The quality is measured by comparing the differences of the back projection of homologous points in both corrected RGB and TIR images.
NASA Astrophysics Data System (ADS)
Bunds, M. P.
2017-12-01
Point clouds are a powerful data source in the geosciences, and the emergence of structure-from-motion (SfM) photogrammetric techniques has allowed them to be generated quickly and inexpensively. Consequently, applications of them as well as methods to generate, manipulate, and analyze them warrant inclusion in undergraduate curriculum. In a new course called Geospatial Field Methods at Utah Valley University, students in small groups use SfM to generate a point cloud from imagery collected with a small unmanned aerial system (sUAS) and use it as a primary data source for a research project. Before creating their point clouds, students develop needed technical skills in laboratory and class activities. The students then apply the skills to construct the point clouds, and the research projects and point cloud construction serve as a central theme for the class. Intended student outcomes for the class include: technical skills related to acquiring, processing, and analyzing geospatial data; improved ability to carry out a research project; and increased knowledge related to their specific project. To construct the point clouds, students first plan their field work by outlining the field site, identifying locations for ground control points (GCPs), and loading them onto a handheld GPS for use in the field. They also estimate sUAS flight elevation, speed, and the flight path grid spacing required to produce a point cloud with the resolution required for their project goals. In the field, the students place the GCPs using handheld GPS, and survey the GCP locations using post-processed-kinematic (PPK) or real-time-kinematic (RTK) methods. The students pilot the sUAS and operate its camera according to the parameters that they estimated in planning their field work. Data processing includes obtaining accurate locations for the PPK/RTK base station and GCPs, and SfM processing with Agisoft Photoscan. The resulting point clouds are rasterized into digital surface models, assessed for accuracy, and analyzed in Geographic Information System software. Student projects have included mapping and analyzing landslide morphology, fault scarps, and earthquake ground surface rupture. Students have praised the geospatial skills they learn, whereas helping them stay on schedule to finish their projects is a challenge.
Building a LiDAR point cloud simulator: Testing algorithms for high resolution topographic change
NASA Astrophysics Data System (ADS)
Carrea, Dario; Abellán, Antonio; Derron, Marc-Henri; Jaboyedoff, Michel
2014-05-01
The terrestrial laser scanning (TLS) technique is becoming a common tool in Geosciences, with clear applications ranging from the generation of high resolution 3D models to the monitoring of unstable slopes and the quantification of morphological changes. Nevertheless, like every measurement technique, TLS still has some limitations that are not clearly understood and that affect the accuracy of the dataset (point cloud). A challenge in LiDAR research is to understand the influence of instrumental parameters on measurement errors during LiDAR acquisition. Indeed, different critical parameters affect scan quality at different ranges: the existence of shadow areas, the spatial resolution (point density), the diameter of the laser beam, the incidence angle and the single point accuracy. The objective of this study is to test the main limitations of different algorithms usually applied in point cloud data treatment, from alignment to monitoring. To this end, we built, in the MATLAB environment, a LiDAR point cloud simulator able to recreate the multiple sources of error related to instrumental settings that we normally observe in real datasets. In a first step we characterized the error of a single laser pulse by modelling the influence of range and incidence angle on single point data accuracy. In a second step, we simulated the scanning part of the system in order to analyze the shifting and angular error effects. Other parameters have been added to the point cloud simulator, such as point spacing, acquisition window, etc., in order to create point clouds of simple and/or complex geometries. We tested the influence of point density and of the scanning point of view on the Iterative Closest Point (ICP) alignment, and also on a deformation tracking algorithm with the same point cloud geometry, in order to determine alignment and deformation detection thresholds. We also generated a series of high resolution point clouds in order to model small changes in different environments (erosion, landslide monitoring, etc.) and we then tested the use of filtering techniques based on 3D moving windows in space and time, which considerably reduces data scattering thanks to data redundancy. In conclusion, the simulator allowed us to improve our different algorithms, to understand how instrumental error affects the final results, and to improve the scan acquisition methodology in order to find the best compromise between point density, positioning and acquisition time while achieving the best accuracy possible for characterizing topographic change.
Mapping Urban Tree Canopy Cover Using Fused Airborne LIDAR and Satellite Imagery Data
NASA Astrophysics Data System (ADS)
Parmehr, Ebadat G.; Amati, Marco; Fraser, Clive S.
2016-06-01
Urban green spaces, particularly urban trees, play a key role in enhancing the liveability of cities. The availability of accurate and up-to-date maps of tree canopy cover is important for sustainable development of urban green spaces. LiDAR point clouds are widely used for the mapping of buildings and trees, and several LiDAR point cloud classification techniques have been proposed for automatic mapping. However, the effectiveness of point cloud classification techniques for automated tree extraction from LiDAR data can be impacted to the point of failure by the complexity of tree canopy shapes in urban areas. Multispectral imagery, which provides complementary information to LiDAR data, can improve point cloud classification quality. This paper proposes a reliable method for the extraction of tree canopy cover from fused LiDAR point cloud and multispectral satellite imagery data. The proposed method initially associates each LiDAR point with spectral information from the co-registered satellite imagery data. It calculates the normalised difference vegetation index (NDVI) value for each LiDAR point and corrects tree points which have been misclassified as buildings. Then, region growing of tree points, taking the NDVI value into account, is applied. Finally, the LiDAR points classified as tree points are utilised to generate a canopy cover map. The performance of the proposed tree canopy cover mapping method is experimentally evaluated on a data set of airborne LiDAR and WorldView 2 imagery covering a suburb in Melbourne, Australia.
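A minimal sketch of the NDVI-based correction step described here: sample co-registered red and near-infrared rasters at each LiDAR point's planimetric position, compute NDVI, and relabel "building" points with vegetation-like NDVI as trees. The nearest-cell sampling, the 0.4 threshold, the label names, and the toy rasters are assumptions; the paper's region-growing stage is not shown.

```python
import numpy as np

def ndvi_relabel(points, labels, red, nir, origin, cell, ndvi_thresh=0.4):
    """Attach NDVI from co-registered rasters to each LiDAR point and relabel.

    points : (N, 3) xyz; labels : (N,) strings; red/nir : 2-D image bands;
    origin : (x, y) of the raster's upper-left corner; cell : cell size.
    Points labeled 'building' but showing vegetation-like NDVI become 'tree'.
    """
    col = ((points[:, 0] - origin[0]) / cell).astype(int)
    row = ((origin[1] - points[:, 1]) / cell).astype(int)
    row = np.clip(row, 0, red.shape[0] - 1)
    col = np.clip(col, 0, red.shape[1] - 1)
    r, n = red[row, col].astype(float), nir[row, col].astype(float)
    ndvi = (n - r) / (n + r + 1e-9)
    out = labels.copy()
    out[(labels == "building") & (ndvi > ndvi_thresh)] = "tree"
    return out, ndvi

# toy example: a 2 x 2 m raster where the eastern half is vegetated
red = np.array([[0.20, 0.05], [0.20, 0.05]])
nir = np.array([[0.25, 0.60], [0.25, 0.60]])
pts = np.array([[0.5, 1.5, 10.0], [1.5, 1.5, 12.0]])
labels = np.array(["building", "building"])
new_labels, ndvi = ndvi_relabel(pts, labels, red, nir, origin=(0.0, 2.0), cell=1.0)
print(list(new_labels), np.round(ndvi, 2))
```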
Registration of Vehicle-Borne Point Clouds and Panoramic Images Based on Sensor Constellations.
Yao, Lianbi; Wu, Hangbin; Li, Yayun; Meng, Bin; Qian, Jinfei; Liu, Chun; Fan, Hongchao
2017-04-11
A mobile mapping system (MMS) is usually utilized to collect environmental data on and around urban roads. Laser scanners and panoramic cameras are the main sensors of an MMS. This paper presents a new method for the registration of the point clouds and panoramic images based on sensor constellation. After the sensor constellation was analyzed, a feature point, the intersection of the connecting line between the global positioning system (GPS) antenna and the panoramic camera with a horizontal plane, was utilized to separate the point clouds into blocks. The blocks for the central and sideward laser scanners were extracted with the segmentation feature points. Then, the point clouds located in the blocks were separated from the original point clouds. Each point in the blocks was used to find the accurate corresponding pixel in the relative panoramic images via a collinear function, and the position and orientation relationship amongst different sensors. A search strategy is proposed for the correspondence of laser scanners and lenses of panoramic cameras to reduce calculation complexity and improve efficiency. Four cases of different urban road types were selected to verify the efficiency and accuracy of the proposed method. Results indicate that most of the point clouds (with an average of 99.7%) were successfully registered with the panoramic images with great efficiency. Geometric evaluation results indicate that horizontal accuracy was approximately 0.10-0.20 m, and vertical accuracy was approximately 0.01-0.02 m for all cases. Finally, the main factors that affect registration accuracy, including time synchronization amongst different sensors, system positioning and vehicle speed, are discussed.
Dust in the Small Magellanic Cloud
NASA Technical Reports Server (NTRS)
Rodrigues, C. V.; Coyne, G. V.; Magalhaes, A. M.
1995-01-01
We discuss simultaneous dust model fits to our extinction and polarization data for the Small Magellanic Cloud (SMC) using existing dust models. Dust model fits to the wavelength dependent polarization are possible for stars with small lambda(sub max). They generally imply size distributions which are narrower and have smaller average sizes compared to those in the Galaxy. The best fits for the extinction curves are obtained with a power law size distribution. The typical, monotonic SMC extinction curve can be well fit with graphite and silicate grains if a small fraction of the SMC carbon is locked up in the grains. Amorphous carbon and silicate grains also fit the data well.
2017-04-01
Advanced Visualization and Interactive Display Rapid Innovation and Discovery Evaluation Research (VISRIDER) Program Task 6: Point Cloud... (report covering October 2013 - September 2014). The task surveys various point cloud visualization techniques for viewing large scale LiDAR datasets and evaluates their potential use for thick client desktop platforms.
Inventory of File WAFS_blended_2014102006f06.grib2
Truncated GRIB2 inventory excerpt: records 004-007 list CTP 6-hour forecast In-Cloud Turbulence [%] fields at 700 mb and 600 mb, given as spatial averages and spatial maxima (code table 4.15 = 3, #points = 1); the listing is cut off mid-record.
Cloud cover archiving on a global scale - A discussion of principles
NASA Technical Reports Server (NTRS)
Henderson-Sellers, A.; Hughes, N. A.; Wilson, M.
1981-01-01
Monitoring of climatic variability and climate modeling both require a reliable global cloud data set. Examination is made of the temporal and spatial variability of cloudiness in light of recommendations made by GARP in 1975 (and updated by JOC in 1978 and 1980) for cloud data archiving. An examination of the methods of comparing cloud cover frequency curves suggests that the use of the beta distribution not only facilitates objective comparison, but also reduces overall storage requirements. A specific study of the only current global cloud climatology (the U.S. Air Force's 3-dimensional nephanalysis) over the United Kingdom indicates that discussion of methods of validating satellite-based data sets is urgently required.
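The storage argument for the beta distribution can be illustrated directly: each grid cell's cloud-cover frequency histogram is replaced by just two beta parameters, here estimated by the method of moments from synthetic daily cloud fractions. The fitting method and the sample data are assumptions, not the archiving scheme discussed in the abstract.

```python
import numpy as np

def beta_moments_fit(cloud_fraction):
    """Method-of-moments fit of a beta(alpha, beta) to cloud-cover fractions in (0, 1)."""
    x = np.clip(cloud_fraction, 1e-3, 1 - 1e-3)
    mu, var = x.mean(), x.var()
    k = mu * (1 - mu) / var - 1.0
    return mu * k, (1 - mu) * k        # alpha, beta

# synthetic daily cloud fractions for one grid cell; the pair (alpha, beta) then
# replaces the full frequency histogram in the archive
rng = np.random.default_rng(8)
obs = rng.beta(0.8, 1.6, 365)          # skewed distribution, typical of cloud cover
alpha, beta = beta_moments_fit(obs)
print(f"alpha = {alpha:.2f}, beta = {beta:.2f} (true values 0.8, 1.6)")
```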
NASA Astrophysics Data System (ADS)
Zheng, X.; Albrecht, B.; Jonsson, H. H.; Khelif, D.; Feingold, G.; Minnis, P.; Ayers, K.; Chuang, P.; Donaher, S.; Rossiter, D.; Ghate, V.; Ruiz-Plancarte, J.; Sun-Mack, S.
2011-09-01
Aircraft observations made off the coast of northern Chile in the Southeastern Pacific (20° S, 72° W; named Point Alpha) from 16 October to 13 November 2008 during the VAMOS Ocean-Cloud-Atmosphere-Land Study-Regional Experiment (VOCALS-REx), combined with meteorological reanalysis, satellite measurements, and radiosonde data, are used to investigate the boundary layer (BL) and aerosol-cloud-drizzle variations in this region. On days without predominant synoptic and meso-scale influences, the BL at Point Alpha was typical of a non-drizzling stratocumulus-topped BL. Entrainment rates calculated from the near cloud-top fluxes and turbulence in the BL at Point Alpha appeared to be weaker than those in the BL over the open ocean west of Point Alpha and the BL near the coast of the northeast Pacific. The cloud liquid water path (LWP) varied between 15 g m-2 and 160 g m-2. The BL had a depth of 1140 ± 120 m, was generally well-mixed and capped by a sharp inversion on days without predominant synoptic and meso-scale influences. The wind direction generally switched from southerly within the BL to northerly above the inversion. On days when a synoptic system and related mesoscale coastal circulations affected conditions at Point Alpha (29 October-4 November), a moist layer above the inversion moved over Point Alpha, and the total-water mixing ratio above the inversion was larger than that within the BL. The accumulation mode aerosol varied from 250 to 700 cm-3 within the BL, and CCN at 0.2 % supersaturation within the BL ranged between 150 and 550 cm-3. The main aerosol source at Point Alpha was horizontal advection within the BL from the south. The average cloud droplet number concentration ranged between 80 and 400 cm-3. While the mean LWP retrieved from GOES was in good agreement with the in situ measurements, the GOES-derived cloud droplet effective radius tended to be larger than that from the aircraft in situ observations near cloud top. The aerosol and cloud LWP relationship reveals that during the typical well-mixed BL days the cloud LWP increased with the CCN concentrations. On the other hand, meteorological factors and decoupling processes have large influences on the cloud LWP variation as well.
Impact of survey workflow on precision and accuracy of terrestrial LiDAR datasets
NASA Astrophysics Data System (ADS)
Gold, P. O.; Cowgill, E.; Kreylos, O.
2009-12-01
Ground-based LiDAR (Light Detection and Ranging) survey techniques are enabling remote visualization and quantitative analysis of geologic features at unprecedented levels of detail. For example, digital terrain models computed from LiDAR data have been used to measure displaced landforms along active faults and to quantify fault-surface roughness. But how accurately do terrestrial LiDAR data represent the true ground surface, and in particular, how internally consistent and precise are the mosaiced LiDAR datasets from which surface models are constructed? Addressing this question is essential for designing survey workflows that capture the necessary level of accuracy for a given project while minimizing survey time and equipment, which is essential for effective surveying of remote sites. To address this problem, we seek to define a metric that quantifies how scan registration error changes as a function of survey workflow. Specifically, we are using a Trimble GX3D laser scanner to conduct a series of experimental surveys to quantify how common variables in field workflows impact the precision of scan registration. Primary variables we are testing include 1) use of an independently measured network of control points to locate scanner and target positions, 2) the number of known-point locations used to place the scanner and point clouds in 3-D space, 3) the type of target used to measure distances between the scanner and the known points, and 4) setting up the scanner over a known point as opposed to resectioning of known points. Precision of the registered point cloud is quantified using Trimble Realworks software by automatic calculation of registration errors (errors between locations of the same known points in different scans). Accuracy of the registered cloud (i.e., its ground-truth) will be measured in subsequent experiments. To obtain an independent measure of scan-registration errors and to better visualize the effects of these errors on a registered point cloud, we scan from multiple locations an object of known geometry (a cylinder mounted above a square box). Preliminary results show that even in a controlled experimental scan of an object of known dimensions, there is significant variability in the precision of the registered point cloud. For example, when 3 scans of the central object are registered using 4 known points (maximum time, maximum equipment), the point clouds align to within ~1 cm (normal to the object surface). However, when the same point clouds are registered with only 1 known point (minimum time, minimum equipment), misalignment of the point clouds can range from 2.5 to 5 cm, depending on target type. The greater misalignment of the 3 point clouds when registered with fewer known points stems from the field method employed in acquiring the dataset and demonstrates the impact of field workflow on LiDAR dataset precision. By quantifying the degree of scan mismatch in results such as this, we can provide users with the information needed to maximize efficiency in remote field surveys.
Convergence on the Prediction of Ice Particle Mass and Projected Area in Ice Clouds
NASA Astrophysics Data System (ADS)
Mitchell, D. L.
2013-12-01
Ice particle mass- and area-dimensional power law (henceforth m-D and A-D) relationships are building-blocks for formulating microphysical processes and optical properties in cloud and climate models, and they are critical for ice cloud remote sensing algorithms, affecting the retrieval accuracy. They can be estimated by (1) directly measuring the sizes, masses and areas of individual ice particles at ground-level and (2) using aircraft probes to simultaneously measure the ice water content (IWC) and ice particle size distribution. A third indirect method is to use observations from method 1 to develop an m-A relationship representing mean conditions in ice clouds. Owing to a tighter correlation (relative to m-D data), this m-A relationship can be used to estimate m from aircraft probe measurements of A. This has the advantage of estimating m at small sizes, down to 10 μm using the 2D-S (stereo) probe. In this way, 2D-S measurements of maximum dimension D can be related to corresponding estimates of m to develop ice cloud type and temperature dependent m-D expressions. However, these expressions are no longer linear in log-log space, but are slowly varying curves covering most of the size range of natural ice particles. This work compares all three of the above methods and demonstrates close agreement between them. Regarding (1), 4869 ice particles and corresponding melted hemispheres were measured during a field campaign to obtain D and m. Selecting only those unrimed habits that formed between -20°C and -40°C, the mean mass values for selected size intervals are within 35% of the corresponding masses predicted by the Method 3 curve based on a similar temperature range. Moreover, the most recent m-D expression based on Method 2 differs by no more than 50% from the m-D curve from Method 3. Method 3 appears to be the most accurate over the observed ice particle size range (10-4000 μm). An m-D/A-D scheme was developed by which self-consistent m-D and A-D power laws are extracted from Method 3 for a given ice particle number concentration N and IWC, appropriate for the relevant size range inferred from N and IWC. The resulting m-D/A-D power laws are based on the same data set, comprising 24 flights in ice clouds during a 6-month field campaign. Standard deviations for these power law constants are determined, which are much needed for cloud property remote sensing algorithms. [Figure: comparison of Method 3 (curve fit) with Method 1 (standard deviations from measurements of ice particles found in cirrus clouds) and Method 2 (Cotton et al. and Heymsfield et al.).]
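For concreteness, a short sketch of how m-D and A-D power laws are used: m = αD^β and A = γD^δ, with IWC obtained by summing the mass law over a binned size distribution. The coefficients and the exponential size distribution below are placeholders, not the campaign-derived values or the self-consistent scheme described in the abstract.

```python
import numpy as np

# Illustrative placeholder coefficients (SI units: D in m, m in kg, A in m^2);
# not the values derived in the abstract.
ALPHA, BETA = 0.0185, 1.9      # m = ALPHA * D**BETA
GAMMA, DELTA = 0.2285, 1.88    # A = GAMMA * D**DELTA

def ice_mass(D):
    return ALPHA * D ** BETA

def projected_area(D):
    return GAMMA * D ** DELTA

def iwc_from_psd(D_bins, N_conc):
    """Ice water content from a binned size distribution.

    D_bins : bin-center maximum dimensions (m); N_conc : number concentration
    per bin (m^-3). IWC = sum_i N_i * m(D_i), returned in g m^-3.
    """
    return 1e3 * np.sum(N_conc * ice_mass(D_bins))

# toy exponential size distribution between 10 and 4000 micrometers
D = np.linspace(10e-6, 4000e-6, 100)
dD = D[1] - D[0]
N0, lam = 1e7, 3000.0                       # intercept (m^-4) and slope (m^-1)
N = N0 * np.exp(-lam * D) * dD              # number per bin (m^-3)
print(f"IWC ~ {iwc_from_psd(D, N):.4f} g m^-3, total N ~ {N.sum():.0f} m^-3")
print(f"projected area of a 1 mm particle ~ {projected_area(1e-3):.2e} m^2")
```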
Modeling the effects of an offset of the center of symmetry in the zodiacal cloud
NASA Astrophysics Data System (ADS)
Holmes, E. K.; Dermott, S. F.; Xu, Y. L.; Wyatt, M.; Jayaraman, S.
1998-04-01
There is a possible connection between structure in circumstellar dust clouds and the presence of planets, our own zodiacal cloud being the prime example. Asymmetries in such clouds could be diagnostic of planets which would be otherwise undetectable. One such feature is an offset of the center of symmetry of the disk with respect to the central star. The offset is caused by the forced eccentricities (e_f) of particles in the cloud. The orbit of a particle can be described by a set of five orbital elements: the semi-major axis (a), eccentricity (e), inclination (I), longitude of ascending node (Ω) and the argument of pericenter (ω). In low order secular perturbation theory, osculating elements of small bodies are decomposed into proper and forced elements. The proper elements are dependent on initial conditions while the forced elements are imposed on the particle's orbit by the gravitational perturbations of the planets. This decomposition is still applicable in the presence of drag forces. We compare COBE observations of the variation in average polar brightness of the background cloud, (N + S)/2, with ecliptic longitude of Earth with those of a model cloud made of asteroidal particles which populate the inner solar system according to a 1/r^γ distribution with γ = 1 (Poynting-Robertson light drag). The variation of mean polar brightness with ecliptic longitude of Earth is examined for the 25 micron waveband. Sine curves are fit to both the COBE observations and the model. The variation in (N + S)/2 with ecliptic longitude of Earth can be represented as a superposition of two sine curves: one for the variation in (N + S)/2 due to the Earth's eccentric orbit and the other for the variation in (N + S)/2 due to the forced eccentricities of particles in the cloud. If the cloud were symmetric about the Sun (i.e., if there were no offset), the maximum and minimum brightnesses of the cloud would occur at perihelion and aphelion, respectively. Looking at the model, one can see that the minimum does occur at Earth's aphelion (282.9 deg). However, the minimum of the COBE curve is clearly displaced from aphelion, showing that the center of symmetry of the cloud is displaced from the Sun. If we could turn off the effect of the Earth's eccentricity, we could isolate the sine curve due to e_f. When we do this for the model cloud, however, we do not see a variation in (N + S)/2, for two reasons: 1) Although the particle orbits are circularized due to Poynting-Robertson drag (PR drag), the wedge shape of the cloud cancels out any number density variation as a function of radial distance; and 2) Even though we would expect the orbits of the particles to be more densely spaced at perihelion than at aphelion (provided all the particles had the same e_f and ω_f), due to Kepler's Second Law the particles spend less time at perihelion than at aphelion, thus canceling out any noticeable effect on the number density. However, when we build a new model cloud governed by a constant distribution of particles (1/r^γ with γ = 0) instead of a 1/r distribution, we do see a sinusoidal variation in (N + S)/2 with ecliptic longitude of Earth. These results imply that the particles contributing to the observed offset do not have a PR drag distribution (i.e., they are not simply asteroidal particles). Future work will determine whether cometary particles (having a theoretical γ = 1.5), collisionally evolved asteroidal particles, or a combination of both types of particles are responsible for the offset of the center of symmetry of the zodiacal cloud.
Cloud Atlas: Rotational Modulations in the L/T Transition Brown Dwarf Companion HN Peg B
NASA Technical Reports Server (NTRS)
Zhou, Yifan; Apai, Daniel; Metchev, Stanimir; Lew, Ben W. P.; Schneider, Glenn; Marley, Mark S.; Karalidi, Theodora; Manjavacas, Elena; Bedin, Luigi R.; Cowan, Nicolas B.;
2018-01-01
Time-resolved observations of brown dwarfs' rotational modulations provide powerful insights into the properties of condensate clouds in ultra-cool atmospheres. Multi-wavelength light curves reveal cloud vertical structures, condensate particle sizes, and cloud morphology, which directly constrain condensate cloud and atmospheric circulation models. We report results from Hubble Space Telescope/Wide Field Camera 3 (WFC3) near-infrared G141 observations taken in six consecutive orbits of HN Peg B, an L/T transition brown dwarf companion to a G0V type star. The best-fit sine wave to the 1.1 to 1.7 micron broadband light curve has an amplitude of 1.206% ± 0.025% and a period of 15.4 ± 0.5 hours. The modulation amplitude has no detectable wavelength dependence except in the 1.4 micron water absorption band, indicating that the characteristic condensate particle sizes are large (greater than 1 micron). We detect significantly (4.4 sigma) lower modulation amplitude in the 1.4 micron water absorption band, and find that HN Peg B's spectral modulation resembles those of early T type brown dwarfs. We also describe a new empirical interpolation method to remove spectral contamination from the bright host star. This method may be applied in other high-contrast time-resolved observations with WFC3.
Cloud Atlas: Rotational Modulations in the L/T Transition Brown Dwarf Companion HN Peg B
NASA Astrophysics Data System (ADS)
Zhou, Yifan; Apai, Dániel; Metchev, Stanimir; Lew, Ben W. P.; Schneider, Glenn; Marley, Mark S.; Karalidi, Theodora; Manjavacas, Elena; Bedin, Luigi R.; Cowan, Nicolas B.; Miles-Páez, Paulo A.; Lowrance, Patrick J.; Radigan, Jacqueline; Burgasser, Adam J.
2018-03-01
Time-resolved observations of brown dwarfs’ rotational modulations provide powerful insights into the properties of condensate clouds in ultra-cool atmospheres. Multi-wavelength light curves reveal cloud vertical structures, condensate particle sizes, and cloud morphology, which directly constrain condensate cloud and atmospheric circulation models. We report results from Hubble Space Telescope/Wide Field Camera 3 near-infrared G141 observations taken in six consecutive orbits of HN Peg B, an L/T transition brown dwarf companion to a G0V type star. The best-fit sine wave to the 1.1–1.7 μm broadband light curve has an amplitude of 1.206% ± 0.025% and period of 15.4 ± 0.5 hr. The modulation amplitude has no detectable wavelength dependence except in the 1.4 μm water absorption band, indicating that the characteristic condensate particle sizes are large (>1 μm). We detect significantly (4.4σ) lower modulation amplitude in the 1.4 μm water absorption band and find that HN Peg B’s spectral modulation resembles those of early T type brown dwarfs. We also describe a new empirical interpolation method to remove spectral contamination from the bright host star. This method may be applied in other high-contrast time-resolved observations with WFC3.
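A minimal sketch of the light-curve fit: a sinusoid fitted with scipy.optimize.curve_fit to a synthetic broadband light curve whose amplitude, period, and noise level are chosen to be similar to the reported values. With only about nine hours of coverage of a 15.4-hour period, the period uncertainty from such a fit is large; this is a toy reconstruction, not the authors' analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def sine_model(t, amp, period, phase, offset):
    """Relative flux modeled as a sinusoidal rotational modulation."""
    return offset + amp * np.sin(2.0 * np.pi * t / period + phase)

# synthetic broadband light curve: ~9 h span (six HST orbits), 1.2 % amplitude,
# 15.4 h period, 0.05 % photometric noise -- numbers chosen to mimic the abstract
rng = np.random.default_rng(9)
t = np.linspace(0.0, 9.0, 60)                      # hours
flux = sine_model(t, 0.012, 15.4, 0.3, 1.0) + rng.normal(0, 5e-4, t.size)

p0 = [0.01, 15.0, 0.0, 1.0]                        # initial guesses
popt, pcov = curve_fit(sine_model, t, flux, p0=p0)
perr = np.sqrt(np.diag(pcov))
print(f"amplitude = {popt[0]*100:.3f} +/- {perr[0]*100:.3f} %")
print(f"period    = {popt[1]:.1f} +/- {perr[1]:.1f} h")
```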
Comparison of 3D point clouds produced by LIDAR and UAV photoscan in the Rochefort cave (Belgium)
NASA Astrophysics Data System (ADS)
Watlet, Arnaud; Triantafyllou, Antoine; Kaufmann, Olivier; Le Mouelic, Stéphane
2016-04-01
Amongst today's techniques that are able to produce 3D point clouds, LIDAR and UAV (Unmanned Aerial Vehicle) photogrammetry are probably the most commonly used. Both methods have their own advantages and limitations. LIDAR scans create high resolution and high precision 3D point clouds, but such methods are generally costly, especially for sporadic surveys. Compared to LIDAR, UAVs (e.g. drones) are cheap and flexible to use in different kinds of environments. Moreover, the photogrammetric processing workflow for digital images taken with a UAV becomes easier with the rise of many affordable software packages (e.g. Agisoft, PhotoModeler3D, VisualSFM). We present here a challenging study made at the Rochefort Cave Laboratory (South Belgium) comprising surface and underground surveys. The site is located in the Belgian Variscan fold-and-thrust belt, a region that shows many karstic networks within Devonian limestone units. A LIDAR scan has been acquired in the main chamber of the cave (~ 15000 m³) to produce a 3D point cloud of its inner walls and infer geological beds and structures. Even if the use of the LIDAR instrument was not really comfortable in such a caving environment, the collected data showed a remarkable precision according to the geometry of a few control points. We also decided to perform another challenging survey of the same cave chamber by modelling a 3D point cloud using photogrammetry of a set of DSLR camera pictures taken from the ground and UAV pictures. The aim was to compare both techniques in terms of (i) implementation of data acquisition and processing, (ii) quality of the resulting 3D point clouds (point density, field vs cloud recovery and point precision), (iii) their application for geological purposes. Through the Rochefort case study, the main conclusions are that the LIDAR technique provides higher density point clouds with slightly higher precision than the photogrammetry method. However, 3D data modeled by photogrammetry provide visible light spectral information for each modeled voxel and interpolated vertices that can be useful attributes for clustering during data treatment. We thus illustrate such applications to the Rochefort cave by using both sources of 3D information to quantify the orientation of inaccessible geological structures (e.g. faults, tectonic and gravitational joints, and sediment bedding), cluster these structures using color information gathered from the UAV's 3D point cloud and compare these data to structural data surveyed in the field. An additional drone photoscan was also conducted in the surface sinkhole giving access to the surveyed underground cavity to seek geological bodies' connections.
NASA Astrophysics Data System (ADS)
Klapa, Przemyslaw; Mitka, Bartosz; Zygmunt, Mariusz
2017-12-01
The capability of obtaining a multimillion point cloud in a very short time has made Terrestrial Laser Scanning (TLS) a widely used tool in many fields of science and technology. The TLS accuracy matches traditional devices used in land surveying (tacheometry, GNSS - RTK), but like any measurement it is burdened with error which affects the precise identification of objects based on their image in the form of a point cloud. The coordinates of a point are determined indirectly by measuring angles and calculating the travel time of the electromagnetic wave. Each such component has a measurement error which is translated into the final result. The XYZ coordinates of a measuring point are determined with some uncertainty, and the accuracy of determining these coordinates is reduced as the distance to the instrument increases. The paper presents the results of an examination of the geometrical stability of a point cloud obtained by means of a terrestrial laser scanner, and an accuracy evaluation of solids determined using the cloud. A Leica P40 scanner and two different measurement set-ups were used in the tests. The first concept involved placing a few balls in the field and then scanning them from various sides at similar distances. The second part of the measurement involved placing balls and scanning them a few times from one side but at varying distances from the instrument to the object. Each measurement encompassed a scan of the object with automatic determination of its position and geometry. The desk studies involved a semiautomatic fitting of solids and measurement of their geometrical elements, and comparison of parameters that determine their geometry and location in space. The differences in the measured geometrical elements of the balls and in the translation vectors of the solids' centres indicate geometrical changes of the point cloud depending on the scanning distance and parameters. The results indicate changes in the geometry of scanned objects depending on the point cloud quality and distance from the measuring instrument. Varying geometrical dimensions of the same element also suggest that the point cloud does not preserve a stable geometry of the measured objects.
Kassem, Mohammed A; Amin, Alaa S
2015-02-05
A new method to estimate rhodium in different samples at trace levels has been developed. Rhodium was complexed with 5-(4'-nitro-2',6'-dichlorophenylazo)-6-hydroxypyrimidine-2,4-dione (NDPHPD) as a complexing agent in an aqueous medium and concentrated by using Triton X-114 as a surfactant. The investigated rhodium complex was preconcentrated by a cloud point extraction process using the nonionic surfactant Triton X-114 to extract the rhodium complex from aqueous solutions at pH 4.75. After the phase separation at 50°C, the surfactant-rich phase was heated again at 100°C to remove water after decantation, and the remaining phase was dissolved using 0.5 mL of acetonitrile. Under optimum conditions, the calibration curve was linear for the concentration range of 0.5-75 ng mL(-1) and the detection limit was 0.15 ng mL(-1) of the original solution. An enhancement factor of 500 was achieved for 250 mL samples containing the analyte, and relative standard deviations were ⩽1.50%. The method was found to be highly selective, fairly sensitive, simple, rapid and economical, and was safely applied for rhodium determination in different complex materials such as synthetic mixtures of alloys and environmental water samples. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Hanel, A.; Stilla, U.
2017-05-01
Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field of view between environment and interior cameras, and often no marked reference points are available in environments that are large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. Images of an urban calibration environment, taken a priori with an external camera, are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, also taken with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other with images from both image sets showing the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras in a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, and a point cloud of the interior of a Volkswagen test car is created as well. Videos from two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras. Relative distances between the vehicle cameras deviate between one and ten centimeters from tachymeter reference measurements.
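As a hedged illustration of the pose-from-ground-control-points step (not the authors' implementation), the sketch below recovers a single camera pose from 3D-2D correspondences with OpenCV's solvePnP; the intrinsic matrix and the control points are synthetic assumptions.

```python
import numpy as np
import cv2

# Assumed intrinsics of one environment camera (hypothetical values).
K = np.array([[1200.0, 0.0, 960.0],
              [0.0, 1200.0, 540.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume distortion already removed

# Synthetic ground control points in the vehicle coordinate system.
rng = np.random.default_rng(0)
gcp_xyz = rng.uniform([-5, -2, 4], [5, 2, 12], size=(12, 3))

# "True" camera pose, used here only to simulate image observations.
rvec_true = np.array([0.05, -0.1, 0.02])
tvec_true = np.array([0.3, -0.1, 1.5])
img_pts, _ = cv2.projectPoints(gcp_xyz, rvec_true, tvec_true, K, dist)

# Recover the camera pose from the 3D-2D correspondences.
ok, rvec, tvec = cv2.solvePnP(gcp_xyz, img_pts, K, dist)
print(ok, rvec.ravel(), tvec.ravel())
```

In the described workflow, such per-frame poses would then be refined jointly in a bundle adjustment.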
NASA Astrophysics Data System (ADS)
Duarte, João; Gonçalves, Gil; Duarte, Diogo; Figueiredo, Fernando; Mira, Maria
2015-04-01
Photogrammetric Unmanned Aerial Vehicles (UAVs) and Terrestrial Laser Scanners (TLS) are two emerging technologies that allow the production of dense 3D point clouds of the sensed topographic surfaces. Although image-based stereo-photogrammetric point clouds cannot, in general, compete with TLS point clouds on geometric quality, fully automated mapping solutions based on ultra-light UAVs (or drones) have recently become commercially available at very reasonable accuracy and cost for engineering and geological applications. The purpose of this paper is to compare the point clouds generated by these two technologies, in order to automate the manual tasks commonly used to detect and represent the attitude of discontinuities (stereographic projection: Schmidt net, equal area). To avoid access difficulties and to guarantee safe survey conditions, this fundamental step of all geological/geotechnical studies applied to the extractive industry and engineering works has to be replaced by a more expeditious and reliable methodology. Such a methodology will provide a clearer way to meet the needs of rock mass evaluation, by mapping the structures present, which will considerably reduce the associated risks (investment, structure dimensioning, safety, etc.). A case study of a dolerite outcrop located in the center of Portugal (the outcrop is situated in the volcanic complex of Serra de Todo-o-Mundo, Casais Gaiola, intruded in Jurassic sandstones) is used to assess this methodology. The results obtained show that the 3D point cloud produced by the photogrammetric UAV platform has the appropriate geometric quality for extracting the parameters that define the discontinuities of the dolerite outcrop. Although they are comparable to the manually extracted parameters, their quality is inferior to that of the parameters extracted from the TLS point cloud.
Cloud-point detection using a portable thickness shear mode crystal resonator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mansure, A.J.; Spates, J.J.; Germer, J.W.
1997-08-01
The Thickness Shear Mode (TSM) crystal resonator monitors the crude oil by propagating a shear wave into the oil. The coupling of the shear wave and the crystal vibrations is a function of the viscosity of the oil. By driving the crystal with circuitry that incorporates feedback, it is possible to determine the change from Newtonian to non-Newtonian viscosity at the cloud point. A portable prototype TSM Cloud Point Detector (CPD) has performed flawlessly during field and lab tests, proving the technique is less subjective and operator dependent than the ASTM standard. The TSM CPD, in contrast to standard viscosity techniques, makes the measurement in a closed container capable of maintaining up to 100 psi. The closed container minimizes losses of low molecular weight volatiles, allowing samples (25 ml) to be retested with the addition of chemicals. By cycling and thermally soaking the sample, the effects of thermal history can be investigated and eliminated as a source of confusion. The CPD is portable and suitable for shipping to field offices for use by personnel without special training or experience in cloud point measurements. As such, it can make cloud point data available without the delays and inconvenience of sending samples to special labs. The crystal resonator technology can be adapted to in-line monitoring of cloud point and deposition detection.
NASA Astrophysics Data System (ADS)
Micheletti, Natan; Tonini, Marj; Lane, Stuart N.
2017-02-01
Acquisition of high density point clouds using terrestrial laser scanners (TLSs) has become commonplace in geomorphic science. The derived point clouds are often interpolated onto regular grids and the grids compared to detect change (i.e. erosion and deposition/advancement movements). This procedure is necessary for some applications (e.g. digital terrain analysis), but it inevitably leads to a certain loss of potentially valuable information contained within the point clouds. In the present study, an alternative methodology for geomorphological analysis and feature detection from point clouds is proposed. It rests on the use of Density-Based Spatial Clustering of Applications with Noise (DBSCAN), applied to TLS data for a rock glacier front slope in the Swiss Alps. The proposed method allowed the detection and isolation of movements directly from the point clouds, yielding accuracies in the subsequent computation of volumes that depend only on the actual registered distance between points. We demonstrate that these values are more conservative than volumes computed with the traditional DEM comparison. The results are illustrated for the summer of 2015, a season of enhanced geomorphic activity associated with exceptionally high temperatures.
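A minimal sketch of applying DBSCAN directly to raw 3D coordinates is given below (scikit-learn, synthetic points); the eps and min_samples values are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# points: (N, 3) array of TLS coordinates; here a synthetic stand-in.
rng = np.random.default_rng(1)
points = rng.normal(size=(5000, 3))

# eps is the neighbourhood radius (in metres) and min_samples the density
# threshold; both would need tuning to the scan resolution.
labels = DBSCAN(eps=0.15, min_samples=10).fit_predict(points)

# Label -1 marks noise; every other label is one detected cluster
# (e.g. a coherent moving block on the rock glacier front).
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print("clusters:", n_clusters)
```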
Metric Scale Calculation for Visual Mapping Algorithms
NASA Astrophysics Data System (ADS)
Hanel, A.; Mitschke, A.; Boerner, R.; Van Opdenbosch, D.; Hoegner, L.; Brodie, D.; Stilla, U.
2018-05-01
Visual SLAM algorithms localize the camera by mapping its environment as a point cloud based on visual cues. To obtain the camera locations in a metric coordinate system, the metric scale of the point cloud has to be known. This contribution describes a method to calculate the metric scale for a point cloud of an indoor environment, like a parking garage, by fusing multiple individual scale values. The individual scale values are calculated from structures and objects with a-priori known metric extension, which can be identified in the unscaled point cloud. Extensions of building structures, like the driving lane or the room height, are derived from density peaks in the point distribution. The extensions of objects, like traffic signs with a known metric size, are derived using projections of their detections in images onto the point cloud. The method is tested with synthetic image sequences of a drive with a front-looking mono camera through a virtual 3D model of a parking garage. It has been shown that each individual scale value either improves the robustness of the fused scale value or reduces its error. The error of the fused scale is comparable to other recent works.
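One common way to fuse several individual scale values is an inverse-variance weighted average; this is shown below purely as an assumption for illustration, not necessarily the fusion scheme used by the authors.

```python
import numpy as np

def fuse_scales(scales, sigmas):
    """Inverse-variance weighted fusion of individual metric scale values.

    scales: per-cue scale estimates (e.g. from lane width, room height, signs).
    sigmas: their assumed standard deviations.
    """
    scales = np.asarray(scales, dtype=float)
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    fused = np.sum(w * scales) / np.sum(w)
    fused_sigma = np.sqrt(1.0 / np.sum(w))
    return fused, fused_sigma

# Hypothetical example: three cues agreeing on a scale near 0.42 m per map unit.
print(fuse_scales([0.41, 0.43, 0.42], [0.02, 0.03, 0.01]))
```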
GPU-Based Point Cloud Superpositioning for Structural Comparisons of Protein Binding Sites.
Leinweber, Matthias; Fober, Thomas; Freisleben, Bernd
2018-01-01
In this paper, we present a novel approach to solve the labeled point cloud superpositioning problem for performing structural comparisons of protein binding sites. The solution is based on a parallel evolution strategy that operates on large populations and runs on GPU hardware. The proposed evolution strategy reduces the likelihood of getting stuck in a local optimum of the multimodal real-valued optimization problem represented by labeled point cloud superpositioning. The performance of the GPU-based parallel evolution strategy is compared to a previously proposed CPU-based sequential approach for labeled point cloud superpositioning, indicating that the GPU-based parallel evolution strategy leads to qualitatively better results and significantly shorter runtimes, with speed improvements of up to a factor of 1,500 for large populations. Binary classification tests based on the ATP, NADH, and FAD protein subsets of CavBase, a database containing putative binding sites, show average classification rate improvements from about 92 percent (CPU) to 96 percent (GPU). Further experiments indicate that the proposed GPU-based labeled point cloud superpositioning approach can be superior to traditional protein comparison approaches based on sequence alignments.
Real object-based 360-degree integral-floating display using multiple depth camera
NASA Astrophysics Data System (ADS)
Erdenebat, Munkh-Uchral; Dashdavaa, Erkhembaatar; Kwon, Ki-Chul; Wu, Hui-Ying; Yoo, Kwan-Hee; Kim, Young-Seok; Kim, Nam
2015-03-01
A novel 360-degree integral-floating display based on a real object is proposed. The general procedure of the display system is similar to that of conventional 360-degree integral-floating displays. Unlike previously presented 360-degree displays, the proposed system displays a 3D image generated from a real object in the 360-degree viewing zone. In order to display a real object in the 360-degree viewing zone, multiple depth cameras have been utilized to acquire depth information around the object. Then, 3D point cloud representations of the real object are reconstructed according to the acquired depth information. By using a special point cloud registration method, the multiple virtual 3D point cloud representations captured by each depth camera are combined into a single synthetic 3D point cloud model, and the elemental image arrays are generated for the newly synthesized 3D point cloud model according to the given anamorphic optic system's angular step. The theory has been verified experimentally, and the results show that the proposed 360-degree integral-floating display can be an excellent way to display real objects in the 360-degree viewing zone.
The Feasibility of 3d Point Cloud Generation from Smartphones
NASA Astrophysics Data System (ADS)
Alsubaie, N.; El-Sheimy, N.
2016-06-01
This paper proposes a new technique for increasing the accuracy of the directly geo-referenced, image-based 3D point cloud generated from low-cost sensors in smartphones. The smartphone's motion sensors are used to directly acquire the Exterior Orientation Parameters (EOPs) of the captured images. These EOPs, along with the Interior Orientation Parameters (IOPs) of the camera/phone, are used to reconstruct the image-based 3D point cloud. However, because smartphone motion sensors suffer from poor GPS accuracy, accumulated drift and high signal noise, inaccurate 3D mapping solutions often result. Therefore, horizontal and vertical linear features, visible in each image, are extracted and used as constraints in the bundle adjustment procedure. These constraints correct the relative position and orientation of the 3D mapping solution. Once the enhanced EOPs are estimated, the semi-global matching algorithm (SGM) is used to generate the image-based dense 3D point cloud. Statistical analysis and assessment are implemented herein in order to demonstrate the feasibility of 3D point cloud generation from the consumer-grade sensors in smartphones.
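The dense-matching step can be illustrated with OpenCV's semi-global block matcher; this is a hedged sketch with illustrative parameters and synthetic images, not the exact SGM configuration used in the paper.

```python
import numpy as np
import cv2

# left_img, right_img: rectified grayscale stereo pair from consecutive
# smartphone exposures; random arrays are used here only so the code runs.
left_img = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
right_img = np.random.randint(0, 255, (480, 640), dtype=np.uint8)

# OpenCV's semi-global block matcher; parameters are illustrative defaults.
sgbm = cv2.StereoSGBM_create(minDisparity=0,
                             numDisparities=64,   # must be divisible by 16
                             blockSize=5)
disparity = sgbm.compute(left_img, right_img).astype(np.float32) / 16.0

# With a calibrated baseline b and focal length f (from the refined EOPs/IOPs),
# depth follows as Z = f * b / d for every valid disparity d > 0.
```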
Plasma irregularities caused by cycloid bunching of the CRRES G-2 barium release
NASA Technical Reports Server (NTRS)
Bernhardt, P. A.; Huba, J. D.; Pongratz, M. B.; Simons, D. J.; Wolcott, J. H.
1993-01-01
The Combined Release and Radiation Effects Satellite (CRRES) spacecraft carried a number of barium thermite canisters for release into the upper atmosphere. The barium release labeled G-2 showed evidence of curved irregularities not aligned with the ambient magnetic field B. The newly discovered curved structures can be explained by a process called cycloid bunching. Cycloid bunching occurs when plasma is created by photoionization of a neutral cloud injected at high velocity perpendicular to B. If the injection velocity is much larger than the expansion speed of the cloud, the ion trail will form a cycloid that has irregularities spaced by the product of the perpendicular injection speed and the ion gyroperiod. Images of the solar-illuminated barium ions are compared with the results of a three-dimensional kinetic simulation. Cycloid bunching is shown to be responsible for the rapid generation of both curved and field-aligned irregularities in the CRRES G-2 experiment.
Registration of Vehicle-Borne Point Clouds and Panoramic Images Based on Sensor Constellations
Yao, Lianbi; Wu, Hangbin; Li, Yayun; Meng, Bin; Qian, Jinfei; Liu, Chun; Fan, Hongchao
2017-01-01
A mobile mapping system (MMS) is usually utilized to collect environmental data on and around urban roads. Laser scanners and panoramic cameras are the main sensors of an MMS. This paper presents a new method for the registration of the point clouds and panoramic images based on sensor constellation. After the sensor constellation was analyzed, a feature point, the intersection of the connecting line between the global positioning system (GPS) antenna and the panoramic camera with a horizontal plane, was utilized to separate the point clouds into blocks. The blocks for the central and sideward laser scanners were extracted with the segmentation feature points. Then, the point clouds located in the blocks were separated from the original point clouds. Each point in the blocks was used to find the accurate corresponding pixel in the relative panoramic images via a collinear function, and the position and orientation relationship amongst different sensors. A search strategy is proposed for the correspondence of laser scanners and lenses of panoramic cameras to reduce calculation complexity and improve efficiency. Four cases of different urban road types were selected to verify the efficiency and accuracy of the proposed method. Results indicate that most of the point clouds (with an average of 99.7%) were successfully registered with the panoramic images with great efficiency. Geometric evaluation results indicate that horizontal accuracy was approximately 0.10–0.20 m, and vertical accuracy was approximately 0.01–0.02 m for all cases. Finally, the main factors that affect registration accuracy, including time synchronization amongst different sensors, system positioning and vehicle speed, are discussed. PMID:28398256
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, Shawn
This code consists of Matlab routines which enable the user to perform non-manifold surface reconstruction via triangulation from high dimensional point cloud data. The code was based on an algorithm originally developed in [Freedman (2007), An Incremental Algorithm for Reconstruction of Surfaces of Arbitrary Codimension, Computational Geometry: Theory and Applications, 36(2):106-116]. This algorithm has been modified to accommodate non-manifold surfaces according to the work described in [S. Martin and J.-P. Watson (2009), Non-Manifold Surface Reconstruction from High Dimensional Point Cloud Data, SAND #5272610]. The motivation for developing the code was a point cloud describing the molecular conformation space of cyclooctane (C8H16). Cyclooctane conformation space was represented using points in 72 dimensions (3 coordinates for each atom). The code was used to triangulate the point cloud and thereby study the geometry and topology of cyclooctane. Future applications are envisioned for peptides and proteins.
Classification of Mobile Laser Scanning Point Clouds from Height Features
NASA Astrophysics Data System (ADS)
Zheng, M.; Lemmens, M.; van Oosterom, P.
2017-09-01
The demand for 3D maps of cities and road networks is steadily growing, and mobile laser scanning (MLS) systems are often the preferred geo-data acquisition method for capturing such scenes. Because MLS systems are mounted on cars or vans, they can acquire billions of points of road scenes within a few hours of survey. Manual processing of point clouds is labour intensive and thus time consuming and expensive. Hence, the need for rapid and automated methods for 3D mapping of dense point clouds is growing exponentially. Over the last five years, research on automated 3D mapping of MLS data has intensified tremendously. In this paper, we present our work on automated classification of MLS point clouds. In the present stage of the research we exploited three features - two height components and one reflectance value - and achieved an overall accuracy of 73 %, which is encouraging for further refinement of our approach.
Outdoor Illegal Construction Identification Algorithm Based on 3D Point Cloud Segmentation
NASA Astrophysics Data System (ADS)
An, Lu; Guo, Baolong
2018-03-01
Recently, illegal constructions have been appearing frequently in our surroundings, which seriously restricts the orderly development of urban modernization. 3D point cloud data technology can be used to identify illegal buildings and thus address this problem effectively. This paper proposes an outdoor illegal construction identification algorithm based on 3D point cloud segmentation. Initially, in order to save memory space and reduce processing time, a lossless point cloud compression method based on a minimum spanning tree is proposed. Then, a ground point removal method based on multi-scale filtering is introduced to increase accuracy. Finally, building clusters on the ground are obtained using a region growing method and, as a result, illegal constructions can be marked. The effectiveness of the proposed algorithm is verified using a public dataset from the International Society for Photogrammetry and Remote Sensing (ISPRS).
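A simple Euclidean region-growing pass over the ground-filtered points might look like the sketch below (SciPy k-d tree); the radius and minimum cluster size are assumed values, not the paper's settings.

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_clusters(points, radius=1.0, min_size=50):
    """Grow clusters by connecting points closer than `radius`; a simple
    stand-in for the region-growing step applied after ground removal."""
    tree = cKDTree(points)
    labels = np.full(len(points), -1, dtype=int)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        queue = [seed]
        labels[seed] = current
        members = 1
        while queue:
            idx = queue.pop()
            for nb in tree.query_ball_point(points[idx], r=radius):
                if labels[nb] == -1:
                    labels[nb] = current
                    queue.append(nb)
                    members += 1
        if members < min_size:
            labels[labels == current] = -2  # discard small clusters as clutter
        else:
            current += 1
    return labels

# Example: two well-separated blobs should yield two clusters.
pts = np.vstack([np.random.rand(200, 3), np.random.rand(200, 3) + 10.0])
print(set(euclidean_clusters(pts, radius=0.5, min_size=20)))
```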
Effect of electromagnetic field on Kordylewski clouds formation
NASA Astrophysics Data System (ADS)
Salnikova, Tatiana; Stepanov, Sergey
2018-05-01
In previous papers the authors suggested an explanation of the appearance-disappearance phenomenon of the Kordylewski clouds - accumulations of cosmic dust in the vicinity of the triangular libration points of the Earth-Moon system. Under the gravitational and light perturbations of the Sun, the triangular libration points are not points of relative equilibrium. However, there exist stable periodic motions of particles surrounding each of the triangular libration points. Owing to this fact, a probabilistic model of dust cloud formation can be considered: the clouds move along the periodic orbits, in a small vicinity of a point of the periodic orbit. To continue this research, we suggest a mathematical model that also investigates the electromagnetic influences arising when charged dust particles in the vicinity of the triangular libration points of the Earth-Moon system are considered. In this model we take into consideration the self-induced force field within the set of charged particles; the probability distribution density evolves according to the Vlasov equation.
Point Cloud Based Change Detection - an Automated Approach for Cloud-based Services
NASA Astrophysics Data System (ADS)
Collins, Patrick; Bahr, Thomas
2016-04-01
The fusion of stereo photogrammetric point clouds with LiDAR data or terrain information derived from SAR interferometry has significant potential for 3D topographic change detection. The present case study uses the latest point cloud generation and analysis capabilities to examine a landslide that occurred in the village of Malin in Maharashtra, India, on 30 July 2014, and affected an area of ca. 44,000 m². It focuses on Pléiades high resolution satellite imagery and the Airbus DS WorldDEM™ as a product of the TanDEM-X mission. The case study was performed using the COTS software package ENVI 5.3. Integration of custom processes and automation is supported by IDL (Interactive Data Language); thus, ENVI analytics runs via the object-oriented and IDL-based ENVITask API. The pre-event topography is represented by the WorldDEM™ product, delivered with a raster of 12 m x 12 m and based on the EGM2008 geoid (called pre-DEM). For the post-event situation a Pléiades 1B stereo image pair of the affected AOI was obtained. The ENVITask "GeneratePointCloudsByDenseImageMatching" was implemented to extract passive point clouds in LAS format from the panchromatic stereo datasets: • A dense image-matching algorithm is used to identify corresponding points in the two images. • A block adjustment is applied to refine the 3D coordinates that describe the scene geometry. • Additionally, the WorldDEM™ was input to constrain the range of heights in the matching area, and subsequently the length of the epipolar line. The "PointCloudFeatureExtraction" task was executed to generate the post-event digital surface model from the photogrammetric point clouds (called post-DEM). Post-processing consisted of the following steps: • Adding the geoid component (EGM2008) to the post-DEM. • Reprojecting the pre-DEM to the UTM Zone 43N (WGS-84) coordinate system and resizing it. • Subtracting the pre-DEM from the post-DEM. • Filtering and threshold-based classification of the DEM difference to analyze the surface changes in 3D. The automated point cloud generation and analysis introduced here can be embedded in virtually any existing geospatial workflow for operational applications. Three integration options were implemented in this case study: • Integration within any ArcGIS environment, whether deployed on the desktop, in the cloud, or online. Execution uses a customized ArcGIS script tool. A Python script file retrieves the parameters from the user interface and runs the precompiled IDL code. That IDL code is used to interface between the Python script and the relevant ENVITasks. • Publishing the point cloud processing tasks as services via the ENVI Services Engine (ESE). ESE is a cloud-based image analysis solution to publish and deploy advanced ENVI image and data analytics to existing enterprise infrastructures. For this purpose the entire IDL code can be encapsulated in a single ENVITask. • Integration in an existing geospatial workflow using the Python-to-IDL Bridge. This mechanism allows calling IDL code within Python on a user-defined platform. The results of this case study allow a 3D estimation of the topographic changes within the tectonically active and anthropogenically modified Malin area after the landslide event. Accordingly, the point cloud analysis was correlated successfully with modelled displacement contours of the slope.
Based on optical satellite imagery, such point clouds of high precision and density distribution can be obtained in a few minutes to support the operational monitoring of landslide processes.
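The core DEM differencing and threshold classification can be sketched in a few lines of NumPy; the ±1 m thresholds and the grid values below are assumptions used only for illustration, not the thresholds applied in the case study.

```python
import numpy as np

# pre_dem and post_dem: co-registered elevation grids (same shape, same cell
# size, both referenced to the EGM2008 geoid); synthetic data keeps this runnable.
pre_dem = np.random.normal(620.0, 5.0, (500, 500))
post_dem = pre_dem + np.random.normal(0.0, 0.3, (500, 500))

diff = post_dem - pre_dem

# Simple threshold classification of the elevation change (assumed values):
# below -1 m -> erosion / failure surface, above +1 m -> deposition.
change = np.zeros_like(diff, dtype=np.int8)
change[diff < -1.0] = -1
change[diff > 1.0] = 1

print("eroded cells:", int(np.sum(change == -1)),
      "deposited cells:", int(np.sum(change == 1)))
```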
NASA Astrophysics Data System (ADS)
Zheng, X.; Albrecht, B.; Jonsson, H. H.; Khelif, D.; Feingold, G.; Minnis, P.; Ayers, K.; Chuang, P.; Donaher, S.; Rossiter, D.; Ghate, V.; Ruiz-Plancarte, J.; Sun-Mack, S.
2011-05-01
Aircraft observations made off the coast of northern Chile in the Southeastern Pacific (20° S, 72° W; named Point Alpha) from 16 October to 13 November 2008 during the VAMOS Ocean-Cloud-Atmosphere-Land Study-Regional Experiment (VOCALS-REx), combined with meteorological reanalysis, satellite measurements, and radiosonde data, are used to investigate the boundary layer (BL) and aerosol-cloud-drizzle variations in this region. The BL at Point Alpha was typical of a non-drizzling stratocumulus-topped BL on days without predominant synoptic and mesoscale influences. The BL had a depth of 1140 ± 120 m, was well mixed and capped by a sharp inversion. The wind direction generally switched from southerly within the BL to northerly above the inversion. The cloud liquid water path (LWP) varied between 15 g m-2 and 160 g m-2. From 29 October to 4 November, when a synoptic system affected conditions at Point Alpha, the cloud LWP was higher than on the other days by around 40 g m-2. On 1 and 2 November, a moist layer above the inversion moved over Point Alpha. The total-water specific humidity above the inversion was larger than that within the BL during these days. Entrainment rates (average of 1.5 ± 0.6 mm s-1) calculated from the near cloud-top fluxes and turbulence (vertical velocity variance) in the BL at Point Alpha appeared to be weaker than those in the BL over the open ocean west of Point Alpha and the BL near the coast of the northeast Pacific. The accumulation mode aerosol varied from 250 to 700 cm-3 within the BL, and CCN at 0.2 % supersaturation within the BL ranged between 150 and 550 cm-3. The main aerosol source at Point Alpha was horizontal advection within the BL from the south. The average cloud droplet number concentration ranged between 80 and 400 cm-3, which was consistent with the satellite-derived values. The relationship between cloud droplet number concentration and CCN at 0.2 % supersaturation from 18 flights is N_d = 4.6 × CCN^0.71. While the mean LWP retrieved from GOES was in good agreement with the in situ measurements, the GOES-derived cloud droplet effective radius tended to be larger than that from the aircraft in situ observations near cloud top. The aerosol and cloud LWP relationship reveals that during the typical well-mixed BL days the cloud LWP increased with the CCN concentrations. On the other hand, meteorological factors and the decoupling processes have large influences on the cloud LWP variation as well.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warner-Schmid, D.; Hoshi, Suwaru; Armstrong, D.W.
Aqueous solutions of nonionic surfactants are known to undergo phase separations at elevated temperatures. This phenomenon is known as 'clouding', and the temperature at which it occurs is referred to as the cloud point. Permethylhydroxypropyl-β-cyclodextrin (PMHP-β-CD) was synthesized and aqueous solutions containing it were found to undergo similar cloud-point behavior. Factors that affect the phase separation of PMHP-β-CD were investigated. Subsequently, the cloud-point extractions of several aromatic compounds (i.e., acetanilide, aniline, 2,2′-dihydroxybiphenyl, N-methylaniline, 2-naphthol, o-nitroaniline, m-nitroaniline, p-nitroaniline, nitrobenzene, o-nitrophenol, m-nitrophenol, p-nitrophenol, 4-phenazophenol, 3-phenylphenol, and 2-phenylbenzimidazole) from dilute aqueous solution were evaluated. Although the extraction efficiency of the compounds varied, most can be quantitatively extracted if sufficient PMHP-β-CD is used. For those few compounds that are not extracted (e.g., o-nitroacetanilide), the cloud-point procedure may be an effective one-step isolation or purification method.
Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method
Shen, Yueqian; Lindenbergh, Roderik; Wang, Jinhu
2016-01-01
A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system, with the disadvantage that this registration step introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. As feature points, either targets or virtual points corresponding to some reconstructable feature in the scene are used. The new method is implemented on two scans sampling a masonry laboratory building before and after seismic testing that resulted in damage on the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud-to-cloud change analysis, demonstrating the potential of the new method for structural analysis. PMID:28029121
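A minimal sketch of the baseline comparison, assuming the feature points of the two epochs are already matched by index, is given below; the coordinates are hypothetical stand-ins for extracted brick centres.

```python
import numpy as np
from itertools import combinations

def baseline_changes(points_epoch1, points_epoch2):
    """Compare all baselines (pairwise distances between corresponding
    feature points) of two epochs without any registration step."""
    changes = {}
    for i, j in combinations(range(len(points_epoch1)), 2):
        d1 = np.linalg.norm(points_epoch1[i] - points_epoch1[j])
        d2 = np.linalg.norm(points_epoch2[i] - points_epoch2[j])
        changes[(i, j)] = d2 - d1
    return changes

# Hypothetical brick centres extracted before and after the seismic test,
# already matched by their index.
before = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [0.0, 1.5, 2.0]])
after = np.array([[0.0, 0.0, 0.0], [2.0, 0.02, 0.0], [0.01, 1.5, 1.97]])
print(baseline_changes(before, after))
```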
NASA Astrophysics Data System (ADS)
Bornemann, Pierrick; Jean-Philippe, Malet; André, Stumpf; Anne, Puissant; Julien, Travelletti
2016-04-01
Dense multi-temporal point clouds acquired with terrestrial laser scanning (TLS) have proved useful for the study of the structure and kinematics of slope movements. Most of the existing deformation analysis methods rely on the use of interpolated data. Approaches that use multiscale image correlation provide a precise and robust estimation of the observed movements; however, for non-rigid motion patterns, these methods tend to underestimate all the components of the movement. Further, for rugged surface topography, interpolated data introduce a bias and a loss of information in local places where the point cloud is not sufficiently dense. These limits can be overcome by deformation analyses that exploit the original 3D point clouds directly, under some hypotheses on the deformation (e.g. the classic ICP algorithm requires an initial guess by the user of the expected displacement patterns). The objective of this work is therefore to propose a deformation analysis method applied to a series of 20 3D point clouds covering the period October 2007 - October 2015 at the Super-Sauze landslide (South East French Alps). The dense point clouds have been acquired with a terrestrial long-range Optech ILRIS-3D laser scanning device from the same base station. The time series are analyzed using two approaches: 1) a method of correlation of gradient images, and 2) a method of feature tracking in the raw 3D point clouds. The estimated surface displacements are then compared with GNSS surveys on reference targets. Preliminary results show that the image correlation method provides a good first-order estimation of the displacement fields, but has limitations such as the inability to track some deformation patterns, and the use of a perspective projection that does not preserve the original angles and distances in the correlated images. Results obtained with 3D point cloud comparison algorithms (C2C, ICP, M3C2) bring additional information on the displacement fields. Displacement fields derived from both approaches are then combined and provide a better understanding of the landslide kinematics.
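Of the point cloud comparison algorithms mentioned, the cloud-to-cloud (C2C) distance is the simplest; the sketch below computes it with a SciPy k-d tree on synthetic epochs and is only an illustration of the idea, not the authors' processing chain.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud(reference, compared):
    """Nearest-neighbour (C2C-style) distances from every point of the
    compared epoch to the reference epoch."""
    tree = cKDTree(reference)
    distances, _ = tree.query(compared, k=1)
    return distances

# epoch_2007, epoch_2015 would be (N, 3) arrays from the TLS time series;
# here a toy bulk displacement stands in for the real data.
epoch_2007 = np.random.uniform(0, 50, (20000, 3))
epoch_2015 = epoch_2007 + np.array([0.5, 0.2, -0.1])
d = cloud_to_cloud(epoch_2007, epoch_2015)
print("median C2C distance [m]:", np.median(d))
```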
Image-Based Airborne LiDAR Point Cloud Encoding for 3d Building Model Retrieval
NASA Astrophysics Data System (ADS)
Chen, Yi-Chen; Lin, Chao-Hung
2016-06-01
With the development of Web 2.0 and cyber city modeling, an increasing number of 3D models have become available on web-based model-sharing platforms, with many applications such as navigation, urban planning, and virtual reality. Based on the concept of data reuse, a 3D model retrieval system is proposed to retrieve building models similar to a user-specified query. The basic idea behind this system is to reuse existing 3D building models instead of reconstructing them from point clouds. To retrieve models efficiently, the models in the database are generally encoded compactly by a shape descriptor. However, most of the geometric descriptors in related works are applied to polygonal models. In this study, the input query of the model retrieval system is a point cloud acquired by Light Detection and Ranging (LiDAR) systems because of their efficient scene scanning and spatial information collection. Using point clouds with sparse, noisy, and incomplete sampling as input queries is more difficult than using 3D models. Because the building roof is more informative than other parts of an airborne LiDAR point cloud, an image-based approach is proposed to encode both point clouds from input queries and 3D models in the database. The main goal of the data encoding is that the models in the database and the input point clouds are encoded consistently. Firstly, top-view depth images of buildings are generated to represent the geometric surface of a building roof. Secondly, geometric features are extracted from the depth images based on the height, edges and planes of the building. Finally, descriptors are extracted using spatial histograms and used in the 3D model retrieval system. For data retrieval, the models are retrieved by matching the encoding coefficients of point clouds and building models. In the experiments, a database including about 900,000 3D models collected from the Internet is used for the evaluation of data retrieval. The results of the proposed method show a clear superiority over related methods.
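The top-view depth image encoding can be sketched as a simple maximum-height rasterisation; the cell size and the max-height rule below are assumptions for illustration, not the paper's exact encoding.

```python
import numpy as np

def top_view_depth_image(points, cell_size=0.5):
    """Rasterise an (N, 3) building point cloud into a top-view depth image
    by keeping the maximum height per grid cell (roof surface)."""
    xy = points[:, :2]
    z = points[:, 2]
    mins = xy.min(axis=0)
    cols = ((xy - mins) / cell_size).astype(int)
    shape = cols.max(axis=0) + 1
    depth = np.full(shape, np.nan)
    for (cx, cy), h in zip(cols, z):
        if np.isnan(depth[cx, cy]) or h > depth[cx, cy]:
            depth[cx, cy] = h
    return depth

# Toy usage: a flat 10 m x 10 m roof sampled at about 25 m height.
pts = np.random.uniform([0, 0, 25.0], [10, 10, 25.05], (5000, 3))
img = top_view_depth_image(pts, cell_size=0.5)
print(img.shape)
```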
The ultraviolet interstellar extinction curve in the Pleiades
NASA Technical Reports Server (NTRS)
Witt, A. N.; Bohlin, R. C.; Stecher, T. P.
1981-01-01
The wavelength dependence of ultraviolet extinction in the Pleiades dust clouds has been determined from IUE observations of HD 23512, the brightest heavily reddened member of the Pleiades cluster. There is evidence for an anomalously weak absorption bump at 2200 A, followed by an extinction rise in the far ultraviolet with an essentially normal slope. A relatively weak absorption band at 2200 A and a weak diffuse absorption band at 4430 A seem to be common characteristics of dust present in dense clouds. Evidence is presented which suggests that the extinction characteristics found for HD 23512 are typical for a class of extinction curves observed in several cases in the Galaxy and in the LMC.
NASA Astrophysics Data System (ADS)
Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.
2018-04-01
Filtering is a key step for most applications of airborne LiDAR point clouds. Although many filtering algorithms have been put forward in recent years, most of them require parameter setting or threshold adjusting, which is time-consuming and reduces the degree of automation of the algorithm. To overcome this problem, this paper proposes a threshold-free filtering algorithm based on expectation-maximization. The proposed algorithm is developed based on the assumption that a point cloud can be seen as a mixture of Gaussian models. The separation of ground points and non-ground points from the point cloud can then be recast as the separation of a mixed Gaussian model. Expectation-maximization (EM) is applied to realize the separation. EM is used to calculate maximum likelihood estimates of the mixture parameters. Using the estimated parameters, the likelihood of each point belonging to ground or object can be computed. After several iterations, points can be labelled as the component with the larger likelihood. Furthermore, intensity information is also utilized to optimize the filtering results acquired using the EM method. The proposed algorithm was tested using two different datasets used in practice. Experimental results showed that the proposed method can filter non-ground points effectively. To quantitatively evaluate the proposed method, this paper adopted the dataset provided by the ISPRS for the test. The proposed algorithm obtains a 4.48 % total error, which is much lower than that of most of the eight classical filtering algorithms reported by the ISPRS.
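A minimal sketch of the EM idea, using scikit-learn's GaussianMixture on per-point height values, is shown below; the synthetic data, the choice of feature and the 0.5 responsibility threshold are assumptions, and the paper's intensity-based optimisation is not reproduced.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# heights: per-point residual heights (e.g. height above a coarse local
# minimum surface); synthetic two-component data stands in for a real tile.
rng = np.random.default_rng(2)
heights = np.concatenate([rng.normal(0.0, 0.15, 8000),   # ground-like points
                          rng.normal(6.0, 3.0, 2000)])    # object points
X = heights.reshape(-1, 1)

# Fit a two-component Gaussian mixture with EM and label each point with
# the component of larger responsibility (likelihood).
gmm = GaussianMixture(n_components=2, max_iter=200, random_state=0).fit(X)
resp = gmm.predict_proba(X)
ground_component = int(np.argmin(gmm.means_.ravel()))  # lower mean = ground
is_ground = resp[:, ground_component] > 0.5
print("ground points:", int(is_ground.sum()))
```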
3D reconstruction from non-uniform point clouds via local hierarchical clustering
NASA Astrophysics Data System (ADS)
Yang, Jiaqi; Li, Ruibo; Xiao, Yang; Cao, Zhiguo
2017-07-01
Raw scanned 3D point clouds are usually irregularly distributed due to the inherent shortcomings of laser sensors, which poses a great challenge for high-quality 3D surface reconstruction. This paper tackles the problem by proposing a local hierarchical clustering (LHC) method to improve the consistency of the point distribution. Specifically, LHC consists of two steps: 1) adaptive octree-based decomposition of 3D space, and 2) hierarchical clustering. The former aims at reducing the computational complexity and the latter transforms the non-uniform point set into a uniform one. Experimental results on real-world scanned point clouds validate the effectiveness of our method from both qualitative and quantitative aspects.
Applications of low altitude photogrammetry for morphometry, displacements, and landform modeling
NASA Astrophysics Data System (ADS)
Gomez, F. G.; Polun, S. G.; Hickcox, K.; Miles, C.; Delisle, C.; Beem, J. R.
2016-12-01
Low-altitude aerial surveying is emerging as a tool that greatly improves the ease and efficiency of measuring landforms for quantitative geomorphic analyses. High-resolution, close-range photogrammetry produces dense, 3-dimensional point clouds that facilitate the construction of digital surface models, as well as a potential means of classifying ground targets using spatial structure. This study presents results from recent applications of UAS-based photogrammetry, including high resolution surface morphometry of a lava flow, repeat-pass applications to mass movements, and fault scarp degradation modeling. Depending upon the desired photographic resolution and the platform/payload flown, aerial photos are typically acquired at altitudes of 40 - 100 meters above the ground surface. In all cases, high-precision ground control points are key for accurate (and repeatable) orientation - relying on low-precision GPS coordinates (whether on the ground or geotags in the aerial photos) typically results in substantial rotations (tilt) of the reference frame. Using common ground control points between repeat surveys results in matching point clouds with RMS residuals better than 10 cm. In arid regions, the point cloud is used to assess lava flow surface roughness using multi-scale measurements of point cloud dimensionality. For the landslide study, the point cloud provides a basis for assessing possible displacements. In addition, the high resolution orthophotos facilitate mapping of fractures and their growth. For neotectonic applications, we compare fault scarp modeling results from UAV-derived point clouds with field-based surveys (kinematic GPS and electronic distance measurements). In summary, a wide-ranging toolbox of low-altitude aerial platforms is becoming available for field geoscientists. In many instances, these tools offer convenience and reduced cost compared with the effort and expense of contracting aerial imagery acquisitions.
SEMANTIC3D.NET: a New Large-Scale Point Cloud Classification Benchmark
NASA Astrophysics Data System (ADS)
Hackel, T.; Savinov, N.; Ladicky, L.; Wegner, J. D.; Schindler, K.; Pollefeys, M.
2017-05-01
This paper presents a new 3D point cloud classification benchmark data set with over four billion manually labelled points, meant as input for data-hungry (deep) learning methods. We also discuss first submissions to the benchmark that use deep convolutional neural networks (CNNs) as a work horse, which already show remarkable performance improvements over the state of the art. CNNs have become the de-facto standard for many tasks in computer vision and machine learning, like semantic segmentation or object detection in images, but have not yet led to a true breakthrough for 3D point cloud labelling tasks due to the lack of training data. With the massive data set presented in this paper, we aim at closing this data gap to help unleash the full potential of deep learning methods for 3D labelling tasks. Our semantic3D.net data set consists of dense point clouds acquired with static terrestrial laser scanners. It contains 8 semantic classes and covers a wide range of urban outdoor scenes: churches, streets, railroad tracks, squares, villages, soccer fields and castles. We describe our labelling interface and show that our data set provides more dense and complete point clouds with a much higher overall number of labelled points compared to those already available to the research community. We further provide baseline method descriptions and a comparison between methods submitted to our online system. We hope semantic3D.net will pave the way for deep learning methods in 3D point cloud labelling to learn richer, more general 3D representations, and first submissions after only a few months indicate that this might indeed be the case.
NASA Technical Reports Server (NTRS)
Rosen, James M.; Hofmann, D. J.; Carpenter, J. R.; Harder, J. W.; Oltmans, S. J.
1988-01-01
The first balloon-borne frost point measurements over Antarctica were made during September and October 1987 as part of the NOZE 2 effort at McMurdo. The results indicate water vapor mixing ratios on the order of 2 ppmv in the 15 to 20 km region, which is significantly smaller than the typical values currently being used in polar stratospheric cloud (PSC) theories. The observed water vapor mixing ratio would correspond to saturated conditions for what is thought to be the lowest stratospheric temperatures encountered over the Antarctic. Through the use of available lidar observations there appears to be significant evidence that some PSCs form at temperatures higher than the local frost point (with respect to water) in the 10 to 20 km region, thus supporting the nitric acid theory of PSC composition. Clouds near 15 km and below appear to form in regions saturated with respect to water and thus are probably mostly ice water clouds, although they could contain relatively small amounts of other constituents. Photographic evidence suggests that the clouds forming above the frost point probably have an appearance quite different from the lower altitude iridescent, colored nacreous clouds.
Romm, H; Ainsbury, E; Bajinskis, A; Barnard, S; Barquinero, J F; Barrios, L; Beinke, C; Puig-Casanovas, R; Deperas-Kaminska, M; Gregoire, E; Oestreicher, U; Lindholm, C; Moquet, J; Rothkamm, K; Sommer, S; Thierens, H; Vral, A; Vandersickel, V; Wojcik, A
2014-05-01
In the case of a large-scale radiation accident, high-throughput methods of biological dosimetry for population triage are needed to identify individuals requiring clinical treatment. The dicentric assay performed in web-based scoring mode may be a very suitable technique. Within the MULTIBIODOSE EU FP7 project, a network of 8 laboratories with expertise in dose estimations based on the dicentric assay is being established. Here, the manual dicentric assay was tested in a web-based scoring mode. More than 23,000 high resolution images of metaphase spreads (only first mitosis) were captured by four laboratories and established as image galleries on the internet (cloud). The galleries included images of a complete dose effect curve (0-5.0 Gy) and three types of irradiation scenarios simulating acute whole body, partial body and protracted exposure. The blood samples had been irradiated in vitro with gamma rays at the University of Ghent, Belgium. Two laboratories provided image galleries from Fluorescence plus Giemsa stained slides (3 h colcemid) and the image galleries from the other two laboratories contained images from Giemsa stained preparations (24 h colcemid). Each of the 8 participating laboratories analysed 3 dose points of the dose effect curve (scoring 100 cells for each point) and 3 unknown dose points (50 cells) for each of the 3 simulated irradiation scenarios. At first, all analyses were performed in a QuickScan mode without scoring individual chromosomes, followed by conventional scoring (only complete cells, 46 centromeres). The calibration curves obtained using these two scoring methods were very similar, with no significant difference in the linear-quadratic curve coefficients. Analysis of variance showed a significant effect of dose on the yield of dicentrics, but no significant effect of the laboratories, the different methods of slide preparation or the different incubation times used for colcemid. The results obtained to date within the MULTIBIODOSE project by a network of 8 collaborating laboratories throughout Europe are very promising. The dicentric assay in the web-based scoring mode as a high-throughput scoring strategy is a useful application for biodosimetry in the case of a large-scale radiation accident.
An approach of point cloud denoising based on improved bilateral filtering
NASA Astrophysics Data System (ADS)
Zheng, Zeling; Jia, Songmin; Zhang, Guoliang; Li, Xiuzhi; Zhang, Xiangyin
2018-04-01
An omnidirectional mobile platform is designed for building point clouds based on an improved filtering algorithm that is employed to handle the depth images. First, the mobile platform can move flexibly and its control interface is convenient to use. Then, because the traditional bilateral filtering algorithm is time-consuming and inefficient, a novel method called local bilateral filtering (LBF) is proposed. LBF is applied to process the depth images obtained by the Kinect sensor. The results show that the noise removal is improved compared with standard bilateral filtering. In the offline condition, the color images and the processed depth images are used to build point clouds. Finally, experimental results demonstrate that our method reduces the depth image processing time and improves the quality of the point clouds that are built.
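For reference, the classical bilateral filter that LBF builds on can be applied to a depth image with OpenCV as sketched below; the parameters and the synthetic depth values are illustrative assumptions, and this is not the paper's LBF implementation.

```python
import numpy as np
import cv2

# depth: raw Kinect-style depth image in millimetres; random data keeps this runnable.
depth = np.random.uniform(500, 4000, (480, 640)).astype(np.float32)

# Standard bilateral filter as the baseline the paper improves upon:
# d is the pixel neighbourhood diameter, sigmaColor the range (depth) sigma,
# sigmaSpace the spatial sigma. Values here are illustrative only.
smoothed = cv2.bilateralFilter(depth, d=9, sigmaColor=50.0, sigmaSpace=5.0)
print(smoothed.shape)
```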
Point cloud modeling using the homogeneous transformation for non-cooperative pose estimation
NASA Astrophysics Data System (ADS)
Lim, Tae W.
2015-06-01
A modeling process to simulate point cloud range data that a lidar (light detection and ranging) sensor produces is presented in this paper in order to support the development of non-cooperative pose (relative attitude and position) estimation approaches which will help improve proximity operation capabilities between two adjacent vehicles. The algorithms in the modeling process were based on the homogeneous transformation, which has been employed extensively in robotics and computer graphics, as well as in recently developed pose estimation algorithms. Using a flash lidar in a laboratory testing environment, point cloud data of a test article was simulated and compared against the measured point cloud data. The simulated and measured data sets match closely, validating the modeling process. The modeling capability enables close examination of the characteristics of point cloud images of an object as it undergoes various translational and rotational motions. Relevant characteristics that will be crucial in non-cooperative pose estimation were identified such as shift, shadowing, perspective projection, jagged edges, and differential point cloud density. These characteristics will have to be considered in developing effective non-cooperative pose estimation algorithms. The modeling capability will allow extensive non-cooperative pose estimation performance simulations prior to field testing, saving development cost and providing performance metrics of the pose estimation concepts and algorithms under evaluation. The modeling process also provides "truth" pose of the test objects with respect to the sensor frame so that the pose estimation error can be quantified.
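A minimal sketch of the homogeneous-transformation step used to place a model point cloud in the sensor frame is given below (NumPy, with a toy rotation and translation); it illustrates the mathematical building block rather than the full lidar simulation described in the paper.

```python
import numpy as np

def homogeneous_transform(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def transform_points(T, points):
    """Apply a homogeneous transform to an (N, 3) point cloud."""
    homog = np.hstack([points, np.ones((len(points), 1))])
    return (T @ homog.T).T[:, :3]

# Example: rotate a toy target 30 degrees about Z and shift it 10 m down-range.
theta = np.deg2rad(30.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
model_points = np.random.uniform(-1, 1, (1000, 3))
sensor_frame_points = transform_points(homogeneous_transform(Rz, [0, 0, 10]), model_points)
print(sensor_frame_points.shape)
```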
Quality Assessment and Comparison of Smartphone and Leica C10 Laser Scanner Based Point Clouds
NASA Astrophysics Data System (ADS)
Sirmacek, Beril; Lindenbergh, Roderik; Wang, Jinhu
2016-06-01
3D urban models are valuable for urban map generation, environment monitoring, safety planning and educational purposes. For 3D measurement of urban structures, generally airborne laser scanning sensors or multi-view satellite images are used as a data source. However, close-range sensors (such as terrestrial laser scanners) and low-cost cameras (which can generate point clouds based on photogrammetry) can provide denser sampling of 3D surface geometry. Unfortunately, terrestrial laser scanning sensors are expensive and trained persons are needed to use them for point cloud acquisition. A potentially effective 3D model can instead be generated from a low-cost smartphone sensor. Herein, we show examples of using smartphone camera images to generate 3D models of urban structures. We compare a smartphone-based 3D model of an example structure with a terrestrial laser scanning point cloud of the structure. This comparison gives us the opportunity to discuss the differences in terms of geometrical correctness, as well as the advantages, disadvantages and limitations in data acquisition and processing. We also discuss how smartphone-based point clouds can help to solve further problems with 3D urban model generation in a practical way. We show that terrestrial laser scanning point clouds which do not have color information can be colored using smartphones. The experiments, discussions and scientific findings might be insightful for future studies in the field of fast, easy and low-cost 3D urban model generation.
Knowledge-Based Object Detection in Laser Scanning Point Clouds
NASA Astrophysics Data System (ADS)
Boochs, F.; Karmacharya, A.; Marbs, A.
2012-07-01
Object identification and object processing in 3D point clouds have always posed challenges in terms of effectiveness and efficiency. In practice, this process is highly dependent on human interpretation of the scene represented by the point cloud data, as well as the set of modeling tools available for use. Such modeling algorithms are data-driven and concentrate on specific features of the objects, being accessible to numerical models. We present an approach that brings the human expert knowledge about the scene, the objects inside, and their representation by the data and the behavior of algorithms to the machine. This "understanding" enables the machine to assist human interpretation of the scene inside the point cloud. Furthermore, it allows the machine to understand possibilities and limitations of algorithms and to take this into account within the processing chain. This not only assists the researchers in defining optimal processing steps, but also provides suggestions when certain changes or new details emerge from the point cloud. Our approach benefits from the advancement in knowledge technologies within the Semantic Web framework. This advancement has provided a strong base for applications based on knowledge management. In the article we will present and describe the knowledge technologies used for our approach such as Web Ontology Language (OWL), used for formulating the knowledge base and the Semantic Web Rule Language (SWRL) with 3D processing and topologic built-ins, aiming to combine geometrical analysis of 3D point clouds, and specialists' knowledge of the scene and algorithmic processing.
NASA Astrophysics Data System (ADS)
Rutzinger, Martin; Bremer, Magnus; Ragg, Hansjörg
2013-04-01
Recently, terrestrial laser scanning (TLS) and matching of images acquired by unmanned aerial vehicles (UAVs) have become operationally used for 3D geodata acquisition in geoscience applications. However, the two systems cover different application domains in terms of acquisition conditions and data properties, i.e. accuracy and line of sight. In this study we investigate the major differences between the two platforms for terrain roughness estimation. Terrain roughness is an important input for various applications such as morphometry studies, geomorphologic mapping, and natural process modeling (e.g. rockfall, avalanche, and hydraulic modeling). Data have been collected simultaneously by TLS using an Optech ILRIS3D and by a rotary UAV (an octocopter from twins.nrn) for a 900 m² test site located in a riverbed in Tyrol, Austria (Judenbach, Mieming). The TLS point cloud has been acquired from three scan positions. These have been registered using the iterative closest point algorithm and a target-based referencing approach. For registration, geometric targets (spheres) with a diameter of 20 cm were used. These targets were measured with dGPS for absolute georeferencing. The TLS point cloud has an average point density of 19,000 pts/m², which represents a point spacing of about 5 mm. 15 images were acquired by the UAV at a height of 20 m using a calibrated camera with a focal length of 18.3 mm. A 3D point cloud containing RGB attributes was derived using the APERO/MICMAC software, by a direct georeferencing approach based on the aircraft IMU data. The point cloud was finally co-registered with the TLS data to ensure optimal preparation for the analysis. The UAV point cloud has an average point density of 17,500 pts/m², which represents a point spacing of 7.5 mm. After registration and georeferencing, the level of detail of the roughness representation in both point clouds has been compared considering elevation differences, roughness and the representation of different grain sizes. UAV closes the gap between aerial and terrestrial surveys in terms of resolution and acquisition flexibility. This is also true for the data accuracy. Considering these data collection and data quality properties, each system has its own merit in terms of scale, data quality, data collection speed and application.
Point Cloud Based Relative Pose Estimation of a Satellite in Close Range
Liu, Lujiang; Zhao, Gaopeng; Bo, Yuming
2016-01-01
Determination of the relative pose of satellites is essential in space rendezvous operations and on-orbit servicing missions. The key problems are the adoption of a suitable sensor on board the chaser and efficient techniques for pose estimation. This paper aims to estimate the pose of a target satellite in close range on the basis of its known model by using point cloud data generated by a flash LIDAR sensor. A novel model-based pose estimation method is proposed; it includes a fast and reliable initial pose acquisition method based on global optimal searching that processes the dense point cloud data directly, and a pose tracking method based on the Iterative Closest Point algorithm. A simulation system is also presented in order to evaluate the performance of the sensor and to generate simulated sensor point cloud data. It also provides the true pose of the test target so that the pose estimation error can be quantified. To investigate the effectiveness of the proposed approach and the achievable pose accuracy, numerical simulation experiments are performed; the results demonstrate the algorithm's capability to operate on point clouds directly and to handle large pose variations. A field testing experiment is also conducted and the results show that the proposed method is effective. PMID:27271633
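The rigid-alignment update inside each ICP iteration has a closed-form SVD (Kabsch) solution; the sketch below shows that step for already-matched points and is not the authors' full initial-acquisition or tracking pipeline.

```python
import numpy as np

def best_fit_transform(source, target):
    """Closed-form rigid transform (SVD/Kabsch) aligning matched source points
    to target points; this is the update step inside one ICP iteration."""
    cs, ct = source.mean(axis=0), target.mean(axis=0)
    H = (source - cs).T @ (target - ct)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = ct - R @ cs
    return R, t

# Toy check: recover a known relative pose from matched model/scan points.
rng = np.random.default_rng(3)
model = rng.uniform(-1, 1, (500, 3))
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
scan = model @ R_true.T + np.array([0.2, -0.1, 5.0])
R_est, t_est = best_fit_transform(model, scan)
print(np.allclose(R_est, R_true, atol=1e-6), t_est)
```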
NASA Astrophysics Data System (ADS)
Budge, Scott E.; Badamikar, Neeraj S.; Xie, Xuan
2015-03-01
Several photogrammetry-based methods have been proposed that derive three-dimensional (3-D) information from digital images taken from different perspectives, and lidar-based methods have been proposed that merge lidar point clouds and texture the merged point clouds with digital imagery. Image registration alone has difficulty with smooth regions of low contrast, whereas point cloud merging alone has difficulty with outliers and a lack of proper convergence in the merging process. This paper presents a method to create 3-D images that uses the unique properties of texel images (pixel-fused lidar and digital imagery) to improve the quality and robustness of fused 3-D images. The proposed method uses both image processing and point-cloud merging to combine texel images in an iterative technique. Since the digital image pixels and the lidar 3-D points are fused at the sensor level, more accurate 3-D images are generated because registration of image data automatically improves the merging of the point clouds, and vice versa. Examples illustrate the value of this method over other methods. The proposed method also includes modifications for the situation where an estimate of the position and attitude of the sensor is known, such as that obtained from low-cost global positioning system (GPS) and inertial measurement unit (IMU) sensors.
NASA Astrophysics Data System (ADS)
Ma, Hongchao; Cai, Zhan; Zhang, Liang
2018-01-01
This paper discusses airborne light detection and ranging (LiDAR) point cloud filtering (a binary classification problem) from the machine learning point of view. We compared three supervised classifiers for point cloud filtering, namely, Adaptive Boosting, support vector machine, and random forest (RF). Nineteen features were generated from the raw LiDAR point cloud based on height and other geometric information within a given neighborhood. The test datasets issued by the International Society for Photogrammetry and Remote Sensing (ISPRS) were used to evaluate the performance of the three filtering algorithms; RF showed the best results, with an average total error of 5.50%. The paper also makes a tentative exploration of the application of transfer learning theory to point cloud filtering, which, to the authors' knowledge, has not been introduced into the LiDAR field before. We performed filtering of three datasets from real projects carried out in China with RF models constructed by learning from the 15 ISPRS datasets and then transferred with little to no change of the parameters. Reliable results were achieved, especially in rural areas (overall accuracy reached 95.64%), indicating the feasibility of model transfer in the context of point cloud filtering for both easy automation and acceptable accuracy.
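The supervised-filtering idea maps directly onto off-the-shelf tools. A minimal sketch with scikit-learn is shown below; the precomputed feature/label files and the binary ground/non-ground labels are hypothetical placeholders, not the paper's 19 features or datasets.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical precomputed inputs: per-point geometric features and
# labels (1 = ground, 0 = non-ground).
features = np.load("point_features.npy")
labels = np.load("point_labels.npy")

X_tr, X_te, y_tr, y_te = train_test_split(features, labels,
                                          test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
rf.fit(X_tr, y_tr)
total_error = 1.0 - accuracy_score(y_te, rf.predict(X_te))
print(f"total error: {100 * total_error:.2f}%")
```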
Large Scale Ice Water Path and 3-D Ice Water Content
Liu, Guosheng
2008-01-15
Cloud ice water concentration is one of the most important, yet poorly observed, cloud properties. Developing physical parameterizations used in general circulation models through single-column modeling is one of the key foci of the ARM program. In addition to the vertical profiles of temperature, water vapor and condensed water at the model grids, large-scale horizontal advective tendencies of these variables are also required as forcing terms in the single-column models. Observed horizontal advection of condensed water has not been available because the radar/lidar/radiometer observations at the ARM site are single-point measurements and therefore do not provide the horizontal distribution of condensed water. The intention of this product is to provide the large-scale distribution of cloud ice water by merging available surface and satellite measurements. The satellite cloud ice water algorithm uses ARM ground-based measurements as a baseline and produces datasets of 3-D cloud ice water distributions in a 10 deg x 10 deg area near the ARM site. The approach of the study is to expand a (surface) point measurement to a (satellite) areal measurement. That is, this study takes advantage of the high-quality cloud measurements at the ARM site. We use the cloud characteristics derived from the point measurement to guide and constrain the satellite retrieval, then use the satellite algorithm to derive the cloud ice water distributions within an area, i.e., 10 deg x 10 deg centered at the ARM site.
Coarse Point Cloud Registration by Egi Matching of Voxel Clusters
NASA Astrophysics Data System (ADS)
Wang, Jinhu; Lindenbergh, Roderik; Shen, Yueqian; Menenti, Massimo
2016-06-01
Laser scanning samples the surface geometry of objects efficiently and records versatile information as point clouds. However, often several scans are required to fully cover a scene. Therefore, a registration step is required that transforms the different scans into a common coordinate system. The registration of point clouds is usually conducted in two steps, i.e. coarse registration followed by fine registration. In this study an automatic marker-free coarse registration method for pair-wise scans is presented. First the two input point clouds are re-sampled as voxels and dimensionality features of the voxels are determined by principal component analysis (PCA). Then voxel cells with the same dimensionality are clustered. Next, the Extended Gaussian Image (EGI) descriptors of those voxel clusters are constructed using the significant eigenvectors of each voxel in the cluster. Correspondences between clusters in source and target data are obtained according to the similarity between their EGI descriptors. The random sample consensus (RANSAC) algorithm is employed to remove outlying correspondences until a coarse alignment is obtained. If necessary, a fine registration is performed in a final step. The new method is illustrated on scan data sampling two indoor scenarios. The results of the tests are evaluated by computing the point-to-point distance between the two input point clouds. The two presented tests resulted in mean distances of 7.6 mm and 9.5 mm, respectively, which are adequate for fine registration.
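The dimensionality label of a voxel can be derived from the eigenvalues of the covariance matrix of its points, as in the PCA step described above. The sketch below is a generic version of that idea in Python/numpy; the eigenvalue-ratio features and the winner-takes-all decision are common conventions, not necessarily the exact rule used in the paper.

```python
import numpy as np

def voxel_dimensionality(voxel_points):
    """Label a voxel as linear (1D), planar (2D) or volumetric (3D) from
    the PCA eigenvalues of its points; voxel_points is (N, 3), N >= 3."""
    evals = np.sort(np.linalg.eigvalsh(np.cov(voxel_points.T)))[::-1]
    l1, l2, l3 = evals / evals.sum()            # normalised, l1 >= l2 >= l3
    linearity = (l1 - l2) / l1
    planarity = (l2 - l3) / l1
    scatter = l3 / l1
    labels = ("linear", "planar", "volumetric")
    return labels[int(np.argmax((linearity, planarity, scatter)))]
```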
NASA Astrophysics Data System (ADS)
Pepe, M.; Ackermann, S.; Fregonese, L.; Achille, C.
2017-02-01
The paper describes a method for point cloud color management and integration of data obtained from Terrestrial Laser Scanner (TLS) and Image Based (IB) survey techniques. Especially in the Cultural Heritage (CH) environment, methods and techniques to improve the color quality of point clouds play a key role, because a homogeneous texture leads to a more accurate reconstruction of the investigated object and to a more pleasant perception of the object's color as well. A color management method for point clouds can be useful in the case of a single dataset acquired by the TLS or IB technique, as well as in the case of chromatic heterogeneity resulting from merging different datasets. The latter condition can occur when the scans are acquired at different times of the same day, or when scans of the same object are performed over a period of weeks or months and consequently under different environmental and lighting conditions. In this paper, a procedure to balance the point cloud color in order to make the different datasets uniform, to improve the chromatic quality and to highlight further details is presented and discussed.
Classification of Aerial Photogrammetric 3d Point Clouds
NASA Astrophysics Data System (ADS)
Becker, C.; Häni, N.; Rosinskaya, E.; d'Angelo, E.; Strecha, C.
2017-05-01
We present a powerful method to extract per-point semantic class labels from aerial photogrammetry data. Labelling this kind of data is important for tasks such as environmental modelling, object classification and scene understanding. Unlike previous point cloud classification methods that rely exclusively on geometric features, we show that incorporating color information yields a significant increase in accuracy in detecting semantic classes. We test our classification method on three real-world photogrammetry datasets that were generated with Pix4Dmapper Pro, and with varying point densities. We show that off-the-shelf machine learning techniques coupled with our new features allow us to train highly accurate classifiers that generalize well to unseen data, processing point clouds containing 10 million points in less than 3 minutes on a desktop computer.
Microphysical Processes Affecting the Pinatubo Volcanic Plume
NASA Technical Reports Server (NTRS)
Hamill, Patrick; Houben, Howard; Young, Richard; Turco, Richard; Zhao, Jingxia
1996-01-01
In this paper we consider microphysical processes which affect the formation of sulfate particles and their size distribution in a dispersing cloud. A model for the dispersion of the Mt. Pinatubo volcanic cloud is described. We then consider a single point in the dispersing cloud and study the effects of nucleation, condensation and coagulation on the time evolution of the particle size distribution at that point.
The infrared counterpart of the eclipsing X-ray binary HO253 + 193
NASA Technical Reports Server (NTRS)
Zuckerman, B.; Becklin, E. E.; Mclean, I. S.; Patterson, Joseph
1992-01-01
We report the identification of the infrared counterpart of the pulsating X-ray source HO253 + 193. It is a highly reddened star varying in K light with a period near 3 hr, but an apparent even-odd effect in the light curve implies that the true period is 6.06 hr. Together with the recent report of X-ray eclipses at the latter period, this establishes the close binary nature of the source. Infrared minimum occurs at X-ray minimum, certifying that the infrared variability arises from the tidal distortion of the lobe-filling secondary. The absence of a point source at radio wavelengths, plus the distance derived from the infrared data, suggests that the binary system is accidentally located behind the dense core of the molecular cloud Lynds 1457. The eclipses and pulsations in the X-ray light curve, coupled with the hard X-ray spectrum and low luminosity, demonstrate that HO253 + 193 contains an accreting magnetic white dwarf, and hence belongs to the 'DQ Herculis' class of cataclysmic variables.
3D local feature BKD to extract road information from mobile laser scanning point clouds
NASA Astrophysics Data System (ADS)
Yang, Bisheng; Liu, Yuan; Dong, Zhen; Liang, Fuxun; Li, Bijun; Peng, Xiangyang
2017-08-01
Extracting road information from point clouds obtained through mobile laser scanning (MLS) is essential for autonomous vehicle navigation, and has hence garnered a growing amount of research interest in recent years. However, the performance of such systems is seriously affected by varying point density and noise. This paper proposes a novel three-dimensional (3D) local feature called the binary kernel descriptor (BKD) to extract road information from MLS point clouds. The BKD consists of Gaussian kernel density estimation and binarization components to encode the shape and intensity information of the 3D point clouds, which are fed to a random forest classifier to extract curbs and markings on the road. These are then used to derive road information, such as the number of lanes, the lane width, and intersections. In experiments, the precision and recall of the proposed feature for the detection of curbs and road markings on an urban dataset and a highway dataset were as high as 90%, showing that the BKD is accurate and robust against varying point density and noise.
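The abstract does not spell out the exact construction of the BKD. One plausible reading — a Gaussian kernel density estimate of the neighbourhood's relative heights and intensities, sampled on a fixed grid and binarized against its mean — is sketched below purely as an illustration; the bin count and the choice of channels are assumptions, not the paper's definition.

```python
import numpy as np
from scipy.stats import gaussian_kde

def binary_kernel_descriptor(neigh_z, neigh_intensity, bins=16):
    """Illustrative BKD-style descriptor for one point's neighbourhood:
    a KDE of relative height and of intensity, each sampled at `bins`
    positions and binarized against its mean. Assumes the neighbourhood
    values are not all identical (otherwise the KDE is degenerate)."""
    parts = []
    for values in (neigh_z - neigh_z.mean(), neigh_intensity):
        kde = gaussian_kde(values)
        grid = np.linspace(values.min(), values.max(), bins)
        density = kde(grid)
        parts.append((density > density.mean()).astype(np.uint8))
    return np.concatenate(parts)                # 2 * bins binary vector
```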
Hierarchical Regularization of Polygons for Photogrammetric Point Clouds of Oblique Images
NASA Astrophysics Data System (ADS)
Xie, L.; Hu, H.; Zhu, Q.; Wu, B.; Zhang, Y.
2017-05-01
Despite the success of multi-view stereo (MVS) reconstruction from massive oblique images at city scale, only point clouds and triangulated meshes are available from existing MVS pipelines, which are topologically defect-laden, free of semantic information and hard to edit and manipulate interactively in further applications. On the other hand, 2D polygons and polygonal models are still the industrial standard. However, extraction of 2D polygons from MVS point clouds is still a non-trivial task, given the fact that the boundaries of the detected planes are zigzagged and regularities, such as parallelism and orthogonality, cannot be preserved. Aiming to solve these issues, this paper proposes a hierarchical polygon regularization method for the photogrammetric point clouds from existing MVS pipelines, which comprises local and global levels. After boundary point extraction, e.g. using alpha shapes, the local level is used to consolidate the original points, by refining the orientation and position of the points using linear priors. The points are then grouped into local segments by forward searching. At the global level, regularities are enforced through a labeling process, which encourages segments to share the same label, where a shared label indicates that the segments are parallel or orthogonal. This is formulated as a Markov Random Field and solved efficiently. Preliminary results are obtained with point clouds from aerial oblique images and compared with two classical regularization methods, revealing that the proposed method is more powerful in abstracting a single building and is promising for further 3D polygonal model reconstruction and GIS applications.
Multiview 3D sensing and analysis for high quality point cloud reconstruction
NASA Astrophysics Data System (ADS)
Satnik, Andrej; Izquierdo, Ebroul; Orjesek, Richard
2018-04-01
Multiview 3D reconstruction techniques enable digital reconstruction of 3D objects from the real world by fusing different viewpoints of the same object into a single 3D representation. This process is by no means trivial and the acquisition of high quality point cloud representations of dynamic 3D objects is still an open problem. In this paper, an approach for high fidelity 3D point cloud generation using low cost 3D sensing hardware is presented. The proposed approach runs in an efficient low-cost hardware setting based on several Kinect v2 scanners connected to a single PC. It performs autocalibration and runs in real-time exploiting an efficient composition of several filtering methods including Radius Outlier Removal (ROR), Weighted Median filter (WM) and Weighted Inter-Frame Average filtering (WIFA). The performance of the proposed method has been demonstrated through efficient acquisition of dense 3D point clouds of moving objects.
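Of the filters listed, Radius Outlier Removal is the simplest to make concrete: discard points with too few neighbours within a given radius. A minimal k-d-tree version in Python/scipy is sketched below; the radius and neighbour threshold are illustrative, not the values used by the authors.

```python
import numpy as np
from scipy.spatial import cKDTree

def radius_outlier_removal(points, radius=0.05, min_neighbors=8):
    """Keep only points with at least `min_neighbors` other points within
    `radius` (same units as the coordinates); points is an (N, 3) array."""
    tree = cKDTree(points)
    counts = np.array([len(tree.query_ball_point(p, radius)) - 1  # minus self
                       for p in points])
    return points[counts >= min_neighbors]
```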
Performance testing of 3D point cloud software
NASA Astrophysics Data System (ADS)
Varela-González, M.; González-Jorge, H.; Riveiro, B.; Arias, P.
2013-10-01
LiDAR systems have been used widely in recent years for many applications in the engineering field: civil engineering, cultural heritage, mining, industry and environmental engineering. One of the most important limitations of this technology is the large computational requirement involved in data processing, especially for large mobile LiDAR datasets. Several software solutions for data management are available in the market, including open-source suites; however, users often lack methodologies to verify their performance properly. In this work a methodology for LiDAR software performance testing is presented and four different suites are studied: QT Modeler, VR Mesh, AutoCAD 3D Civil and the Point Cloud Library running in software developed at the University of Vigo (SITEGI). The software based on the Point Cloud Library shows better results in the loading time of the point clouds and CPU usage. However, it is not as strong as the commercial suites in the working set and commit size tests.
NASA Technical Reports Server (NTRS)
Long, S. A. T.
1973-01-01
The triangulation method developed specifically for the Barium Ion Cloud Project is discussed. Expressions for the four displacement errors, the three slope errors, and the curvature error in the triangulation solution due to a probable error in the lines of sight from the observation stations to points on the cloud are derived. The triangulation method is then used to determine the effect of the following on these different errors in the solution: the number and location of the stations, the observation duration, east-west cloud drift, the number of input data points, and the addition of extra cameras to one of the stations. The pointing displacement errors and the pointing slope errors are compared. The displacement errors in the solution due to a probable error in the position of a moving station, plus the weighting factors for the data from the moving station, are also determined.
3D reconstruction of wooden member of ancient architecture from point clouds
NASA Astrophysics Data System (ADS)
Zhang, Ruiju; Wang, Yanmin; Li, Deren; Zhao, Jun; Song, Daixue
2006-10-01
This paper presents a 3D reconstruction method to model wooden members of ancient architecture from point clouds based on an improved deformable model. Three steps are taken to recover the shape of a wooden member. First, the Hessian matrix is adopted to compute the axis of the wooden member. Second, an initial model of the wooden member is built from contours orthogonal to its axis. Third, an accurate model is obtained through the coupling effect between the initial model and the point clouds of the wooden member according to the theory of the improved deformable model. Every step and algorithm is studied and described in the paper. Using point clouds captured from the Forbidden City of China, a shaft member and a beam member are taken as examples to test the proposed method. Results show the efficiency and robustness of the method for modeling the wooden members of ancient architecture.
Dark Matter and Extragalactic Gas Clouds in the NGC 4532/DDO 137 System
NASA Technical Reports Server (NTRS)
Hoffman, G. L.; Lu, N. Y.; Salpeter, E. E.; Connell, B. M.
1998-01-01
H I synthesis mapping of NGC 4532 and DDO 137, a pair of Sm galaxies on the edge of the Virgo cluster, is used to determine rotation curves for each of the galaxies and to resolve the structure and kinematics of three extragalactic H I clouds embedded in an extended envelope of diffuse HI discovered in earlier Arecibo studies of the system.
Investigation of mesoscale cloud features viewed by LANDSAT
NASA Technical Reports Server (NTRS)
Sherr, P. E. (Principal Investigator); Feteris, P. J.; Lisa, A. S.; Bowley, C. J.; Fowler, M. G.; Barnes, J. C.
1976-01-01
The author has identified the following significant results. Some 50 LANDSAT images displaying mesoscale cloud features were analyzed. This analysis was based on the Rayleigh-Kuettner model describing the formation of that type of mesoscale cloud feature. This model lends itself to computation of the average wind speed in northerly flow from the dimensions of the cloud band configurations measured from a LANDSAT image. In nearly every case, necessary conditions of a curved wind profile and orientation of the cloud streets within 20 degrees of the direction of the mean wind in the convective layer were met. Verification of the results by direct observation was hampered, however, by the incompatibility of the resolution of conventional rawinsonde observations with the scale of the banded cloud patterns measured from LANDSAT data. Comparison seems to be somewhat better in northerly flows than in southerly flows, with the largest discrepancies in wind speed being within 8 m/sec, or a factor of two.
NASA Astrophysics Data System (ADS)
Tuttas, S.; Braun, A.; Borrmann, A.; Stilla, U.
2014-08-01
For construction progress monitoring, a planned state of the construction at a certain time (as-planned) has to be compared to the actual state (as-built). The as-planned state is derived from a building information model (BIM), which contains the geometry of the building and the construction schedule. In this paper we introduce an approach for the generation of an as-built point cloud by photogrammetry. Since images on a construction site cannot be taken from every position that might seem necessary, we use a combination of a structure-from-motion process together with control points to create a scaled point cloud in a consistent coordinate system. Subsequently, this point cloud is used for an as-built vs. as-planned comparison. For this purpose, the voxels of an octree are marked as occupied, free or unknown by ray casting based on the triangulated points and the camera positions. This makes it possible to identify building parts that do not yet exist. To verify the existence of building parts, a second test based on the points in front of and behind the as-planned model planes is performed. The proposed procedure is tested on an inner-city construction site under real conditions.
Localization of Pathology on Complex Architecture Building Surfaces
NASA Astrophysics Data System (ADS)
Sidiropoulos, A. A.; Lakakis, K. N.; Mouza, V. K.
2017-02-01
The technology of 3D laser scanning is considered one of the most common methods for heritage documentation. The point clouds that are produced provide information of high detail, both geometric and thematic. Various studies have examined techniques for the best exploitation of this information. In this study, an algorithm for pathology localization, such as cracks and fissures, on complex building surfaces is tested. The algorithm makes use of the points' positions in the point cloud and tries to separate them into two groups (patterns): pathology and non-pathology. The geometric information used to recognize the pattern of the points is extracted via Principal Component Analysis (PCA) in user-specified neighborhoods of the whole point cloud. The implementation of PCA leads to the definition of the normal vector at each point of the cloud. Two tests that operate separately examine both local and global geometric criteria among the points and conclude which of them should be categorized as pathology. The proposed algorithm was tested on parts of the Gazi Evrenos Baths masonry, located in the city of Giannitsa in Northern Greece.
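The normal vector used by the two tests comes from PCA over a user-specified neighbourhood: the eigenvector belonging to the smallest eigenvalue of the local covariance matrix. A minimal k-nearest-neighbour sketch in Python/numpy/scipy follows; the neighbourhood size is a user choice, as in the study, and the function name is illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=20):
    """Unit normal per point, taken as the eigenvector belonging to the
    smallest eigenvalue of the covariance of its k nearest neighbours."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points, dtype=float)
    for i, neigh in enumerate(idx):
        evals, evecs = np.linalg.eigh(np.cov(points[neigh].T))
        normals[i] = evecs[:, 0]          # eigenvalues ascending in eigh
    return normals
```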
Cosmic ray processing of N2-containing interstellar ice analogues at dark cloud conditions
NASA Astrophysics Data System (ADS)
Fedoseev, G.; Scirè, C.; Baratta, G. A.; Palumbo, M. E.
2018-04-01
N2 is believed to lock up a considerable part of the elemental nitrogen budget and, therefore, to be one of the most abundant ice constituents in cold dark clouds. This laboratory-based research utilizes energetic processing of N2-containing interstellar ice analogues with 200 keV H+ and He+ ions to mimic cosmic ray processing of interstellar icy grains. It aims to investigate the formation of (iso)cyanates and cyanides in the ice mantles at conditions typical of cold dark clouds and prestellar cores. The focus on cosmic ray processing as a chemical trigger mechanism is explained by the high stability of N2 molecules, which are chemically inert in most atom- and radical-addition reactions and cannot be efficiently dissociated by the cosmic-ray-induced UV field. Two sets of experiments are performed to more closely address the solid-state chemistry occurring in two distinct layers of the ice formed at different stages of dark cloud evolution, i.e. the `H2O-rich' and `CO-rich' ice layers. The formation of HNCO and OCN- is discussed for all of the performed experiments. Corresponding kinetic curves for HNCO and OCN- are obtained. Furthermore, a feature around 2092 cm-1 assigned to the contributions of 13CO, CN-, and HCN is analysed. Kinetic curves for the combined HCN/CN- abundance are derived. In turn, normalized formation yields are evaluated by interpolation of the obtained results to the low irradiation doses relevant to the dark cloud stage. The obtained values can be used to interpret future observations towards cold dark clouds using the James Webb Space Telescope.
NASA Astrophysics Data System (ADS)
Nex, F.; Gerke, M.
2014-08-01
Image matching techniques can nowadays provide very dense point clouds and they are often considered a valid alternative to LiDAR point clouds. However, photogrammetric point clouds are often characterized by a higher level of random noise compared to LiDAR data and by the presence of large outliers. These problems constitute a limitation in the practical use of photogrammetric data for many applications, but an effective way to enhance the generated point cloud has yet to be found. In this paper we concentrate on the restoration of Digital Surface Models (DSM) computed from dense image matching point clouds. A photogrammetric DSM, i.e. a 2.5D representation of the surface, is still one of the major products derived from point clouds. Four different algorithms devoted to DSM denoising are presented: a standard median filter approach, a bilateral filter, a variational approach (TGV: Total Generalized Variation), as well as a newly developed algorithm, which is embedded into a Markov Random Field (MRF) framework and optimized through graph cuts. The ability of each algorithm to recover the original DSM has been quantitatively evaluated. To do this, a synthetic DSM has been generated and different types of noise have been added to mimic the typical errors of photogrammetric DSMs. The evaluation reveals that standard filters like the median filter and edge-preserving smoothing through a bilateral filter cannot sufficiently remove the typical errors occurring in a photogrammetric DSM. The TGV-based approach removes random noise much better, but large areas with outliers still remain. Our own method, which explicitly models the degradation properties of such DSMs, outperforms the others in all aspects.
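As a concrete baseline for the filters compared here, the standard median filter can be applied to a DSM raster in a few lines with scipy; the window size and the input file name below are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import median_filter

dsm = np.load("dsm.npy")                  # hypothetical 2.5D height raster
dsm_smoothed = median_filter(dsm, size=5) # 5 x 5 window, a simple baseline
residuals = dsm - dsm_smoothed            # what the filter treated as noise
```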
NASA Astrophysics Data System (ADS)
Bonduel, M.; Bassier, M.; Vergauwen, M.; Pauwels, P.; Klein, R.
2017-11-01
The use of Building Information Modeling (BIM) for existing buildings based on point clouds is increasing. Standardized geometric quality assessment of the BIMs is needed to make them more reliable and thus reusable for future users. First, available literature on the subject is studied. Next, an initial proposal for a standardized geometric quality assessment is presented. Finally, this method is tested and evaluated with a case study. The number of specifications on BIM relating to existing buildings is limited. The Levels of Accuracy (LOA) specification of the USIBD provides definitions and suggestions regarding geometric model accuracy, but lacks a standardized assessment method. A deviation analysis is found to be dependent on (1) the used mathematical model, (2) the density of the point clouds and (3) the order of comparison. Results of the analysis can be graphical and numerical. An analysis on macro (building) and micro (BIM object) scale is necessary. On macro scale, the complete model is compared to the original point cloud and vice versa to get an overview of the general model quality. The graphical results show occluded zones and non-modeled objects respectively. Colored point clouds are derived from this analysis and integrated in the BIM. On micro scale, the relevant surface parts are extracted per BIM object and compared to the complete point cloud. Occluded zones are extracted based on a maximum deviation. What remains is classified according to the LOA specification. The numerical results are integrated in the BIM with the use of object parameters.
NASA Astrophysics Data System (ADS)
Stöcker, Claudia; Eltner, Anette
2016-04-01
Advances in computer vision and digital photogrammetry (i.e. structure from motion) allow for fast and flexible high-resolution data supply. Within geoscience applications, and especially in the field of small-scale surface topography, high-resolution digital terrain models and dense 3D point clouds are valuable data sources for capturing actual states as well as for multi-temporal studies. However, there are still some limitations regarding robust registration and accuracy demands (e.g. systematic positional errors) which impede the comparison and/or combination of multi-sensor data products. Therefore, post-processing of 3D point clouds can greatly enhance data quality. In this matter the Iterative Closest Point (ICP) algorithm represents an alignment tool which iteratively minimizes the distances of corresponding points within two datasets. Even though the tool is widely used, it is often applied as a black box within 3D data post-processing for surface reconstruction. Aiming for a precise and accurate combination of multi-sensor data sets, this study looks closely at different variants of the ICP algorithm, including the sub-steps of point selection, point matching, weighting, rejection, error metric and minimization. For this purpose, an agriculturally utilized field was investigated simultaneously by terrestrial laser scanning (TLS) and unmanned aerial vehicle (UAV) sensors at two dates (once covered with sparse vegetation and once with bare soil). Due to the different perspectives, the two data sets differ in terms of shadowed areas and thus gaps, so that merging the data would provide a more consistent surface reconstruction. Although the photogrammetric processing already included sub-cm accurate ground control surveys, the UAV point cloud exhibits an offset with respect to the TLS point cloud. In order to obtain the transformation matrix for fine registration of the UAV point clouds, different ICP variants were tested. Statistical analyses of the results show that the final success of the registration, and therefore the data quality, depends particularly on the parameterization and the choice of error metric, especially for erroneous data sets as in the case of sparse vegetation cover. Here, the point-to-point metric is more sensitive to data noise than the point-to-plane metric, which results in considerably higher cloud-to-cloud distances. In conclusion, given the accuracy demands of high-resolution surface reconstruction and the fact that ground control surveys can reach their limits in both time expenditure and terrain accessibility, the ICP algorithm represents a great tool to refine a rough initial alignment. Here, the different variants of the registration modules allow for individual application according to the quality of the input data.
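The two error metrics discussed here differ only in how a correspondence's residual is measured: point-to-point uses the Euclidean distance between matched points, while point-to-plane projects that difference onto the target normal. A small sketch, assuming correspondences and target normals have already been computed (function name is illustrative):

```python
import numpy as np

def icp_residuals(src, tgt, tgt_normals):
    """Per-correspondence residuals for the two common ICP error metrics.
    src, tgt: matched (N, 3) point arrays; tgt_normals: (N, 3) unit normals."""
    diff = src - tgt
    point_to_point = np.linalg.norm(diff, axis=1)
    point_to_plane = np.abs(np.einsum("ij,ij->i", diff, tgt_normals))
    return point_to_point, point_to_plane
```

Minimizing the point-to-plane residuals tends to tolerate sliding along locally planar surfaces, which is one reason it can react less strongly to surface noise such as sparse vegetation.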
Mapping Directly Imaged Giant Exoplanets
NASA Astrophysics Data System (ADS)
Kostov, Veselin; Apai, Dániel
2013-01-01
With the increasing number of directly imaged giant exoplanets, the current atmosphere models are often not capable of fully explaining the spectra and luminosity of the sources. A particularly challenging component of the atmosphere models is the formation and properties of condensate cloud layers, which fundamentally impact the energetics, opacity, and evolution of the planets. Here we present a suite of techniques that can be used to estimate the level of rotational modulations these planets may show. We propose that time-resolved observations of such periodic photometric and spectroscopic variations of extrasolar planets due to their rotation can be used as a powerful tool to probe the heterogeneity of their optical surfaces. In this paper, we develop simulations to explore the capabilities of current and next-generation ground- and space-based instruments for this technique. We address and discuss the following questions: (1) what planet properties can be deduced from the light curve and/or spectra, and in particular can we determine rotation periods, spot coverage, spot colors, and spot spectra?; (2) what is the optimal configuration of instrument/wavelength/temporal sampling required for these measurements?; and (3) can principal component analysis be used to invert the light curve and deduce the surface map of the planet? Our simulations describe the expected spectral differences between homogeneous (clear or cloudy) and patchy atmospheres, outline the significance of the dominant absorption features of H2O, CH4, and CO, and provide a method to distinguish these two types of atmospheres. Assuming surfaces with and without clouds, for most currently imaged planets the current models predict the largest variations in the J band. Simulated photometry from current and future instruments is used to estimate the level of detectable photometric variations. We conclude that future instruments will be able to recover not only the rotation periods, cloud cover, cloud colors, and spectra but even cloud evolution. We also show that a longitudinal map of the planet's atmosphere can be deduced from its disk-integrated light curves.
Chance Encounter with a Stratospheric Kerosene Rocket Plume From Russia Over California
NASA Technical Reports Server (NTRS)
Newman, P. A.; Wilson, J. C.; Ross, M. N.; Brock, C. A.; Sheridan, P. J.; Schoeberl, M. R.; Lait, L. R.; Bui, T. P.; Loewenstein, M.; Podolske, J. R.;
2000-01-01
A high-altitude aircraft flight on April 18, 1997 detected an enormous aerosol cloud at 20 km altitude near California (37 N). Not visually observed, the cloud had high concentrations of soot and sulfate aerosol, and was over 180 km in horizontal extent. The cloud was probably produced by a large hydrocarbon-fueled vehicle, most likely by rocket motors burning liquid oxygen and kerosene. One of two Russian Soyuz rockets could have produced the cloud: a launch from the Baikonur Cosmodrome, Kazakhstan on April 6, or from Plesetsk, Russia on April 9. Parcel trajectories and long-lived trace gas concentrations suggest the Baikonur launch as the cloud source. Cloud trajectories do not trace the Soyuz plume from Asia to North America, illustrating the uncertainties of point-to-point trajectories. This cloud encounter is the only stratospheric measurement of a plume from a hydrocarbon-fueled rocket.
a Method for the Registration of Hemispherical Photographs and Tls Intensity Images
NASA Astrophysics Data System (ADS)
Schmidt, A.; Schilling, A.; Maas, H.-G.
2012-07-01
Terrestrial laser scanners generate dense and accurate 3D point clouds with minimal effort, which represent the geometry of real objects, while image data contains texture information of object surfaces. Based on the complementary characteristics of both data sets, a combination is very appealing for many applications, including forest-related tasks. In the scope of our research project, independent data sets of a plain birch stand have been taken by a full-spherical laser scanner and a hemispherical digital camera. Previously, both kinds of data sets have been considered separately: Individual trees were successfully extracted from large 3D point clouds, and so-called forest inventory parameters could be determined. Additionally, a simplified tree topology representation was retrieved. From hemispherical images, leaf area index (LAI) values, as a very relevant parameter for describing a stand, have been computed. The objective of our approach is to merge a 3D point cloud with image data in a way that RGB values are assigned to each 3D point. So far, segmentation and classification of TLS point clouds in forestry applications was mainly based on geometrical aspects of the data set. However, a 3D point cloud with colour information provides valuable cues exceeding simple statistical evaluation of geometrical object features and thus may facilitate the analysis of the scan data significantly.
NASA Astrophysics Data System (ADS)
Bassier, M.; Bonduel, M.; Van Genechten, B.; Vergauwen, M.
2017-11-01
Point cloud segmentation is a crucial step in scene understanding and interpretation. The goal is to decompose the initial data into sets of workable clusters with similar properties. Additionally, it is a key aspect in the automated procedure from point cloud data to BIM. Current approaches typically segment only a single type of primitive such as planes or cylinders. Also, current algorithms suffer from oversegmenting the data and are often sensor or scene dependent. In this work, a method is presented to automatically segment large unstructured point clouds of buildings. More specifically, the segmentation is formulated as a graph optimisation problem. First, the data is oversegmented with a greedy octree-based region growing method. The growing is conditioned on the segmentation of planes as well as smooth surfaces. Next, the candidate clusters are represented by a Conditional Random Field, after which the most likely configuration of candidate clusters is computed given a set of local and contextual features. The experiments prove that the proposed method is a fast and reliable framework for unstructured point cloud segmentation. Processing speeds of up to 40,000 points per second are recorded for the region growing. Additionally, the recall and precision of the graph clustering are approximately 80%. Overall, nearly 22% of the oversegmentation is reduced by clustering the data. These clusters will be classified and used as a basis for the reconstruction of BIM models.
NASA Astrophysics Data System (ADS)
Wang, Jinhu; Lindenbergh, Roderik; Menenti, Massimo
2017-06-01
Urban road environments contain a variety of objects, including different types of lamp poles and traffic signs. Monitoring of these objects is traditionally conducted by visual inspection, which is time consuming and expensive. Mobile laser scanning (MLS) systems sample the road environment efficiently by acquiring large and accurate point clouds. This work proposes a methodology for urban road object recognition from MLS point clouds. The proposed method uses, for the first time, shape descriptors of complete objects to match repetitive objects in large point clouds. To do so, a novel 3D multi-scale shape descriptor is introduced, embedded in a workflow that efficiently and automatically identifies different types of lamp poles and traffic signs. The workflow starts by tiling the raw point clouds along the scanning trajectory and by identifying non-ground points. After voxelization of the non-ground points, connected voxels are clustered to form candidate objects. For automatic recognition of lamp poles and street signs, a 3D significant eigenvector based shape descriptor using voxels (SigVox) is introduced. The 3D SigVox descriptor is constructed by first subdividing the points with an octree into several levels. Next, significant eigenvectors of the points in each voxel are determined by principal component analysis (PCA) and mapped onto the appropriate triangle of a sphere-approximating icosahedron. This step is repeated for different scales. By determining the similarity of 3D SigVox descriptors between candidate point clusters and training objects, street furniture is automatically identified. The feasibility and quality of the proposed method are verified on two point clouds obtained in opposite directions along a stretch of road of 4 km. Six types of lamp poles and four types of road signs were selected as objects of interest. Ground truth validation showed that the overall accuracy of the ∼170 automatically recognized objects is approximately 95%. The results demonstrate that the proposed method is able to recognize street furniture in a practical scenario. Remaining difficult cases are touching objects, such as a lamp pole close to a tree.
Altunay, Nail; Gürkan, Ramazan; Kır, Ufuk
2016-01-01
A new, low-cost, micellar-sensitive and selective spectrophotometric method was developed for the determination of inorganic arsenic (As) species in beverage samples. Vortex-assisted cloud-point extraction (VA-CPE) was used for the efficient pre-concentration of As(V) in the selected samples. The method is based on selective and sensitive ion-pairing of As(V) with acridine red (ARH(+)) in the presence of pyrogallol and sequential extraction into the micellar phase of Triton X-45 at pH 6.0. Under the optimised conditions, the calibration curve was highly linear in the range of 0.8-280 µg l(-1) for As(V). The limits of detection and quantification of the method were 0.25 and 0.83 µg l(-1), respectively. The method was successfully applied to the determination of trace As in the pre-treated and digested samples under microwave and ultrasonic power. As(V) and total As levels in the samples were spectrophotometrically determined after pre-concentration with VA-CPE at 494 nm before and after oxidation with acidic KMnO4. The As(III) levels were calculated from the difference between As(V) and total As levels. The accuracy of the method was demonstrated by analysis of two certified reference materials (CRMs) where the measured values for As were statistically within the 95% confidence limit for the certified values.
Genomic cloud computing: legal and ethical points to consider
Dove, Edward S; Joly, Yann; Tassé, Anne-Marie; Burton, Paul; Chisholm, Rex; Fortier, Isabel; Goodwin, Pat; Harris, Jennifer; Hveem, Kristian; Kaye, Jane; Kent, Alistair; Knoppers, Bartha Maria; Lindpaintner, Klaus; Little, Julian; Riegman, Peter; Ripatti, Samuli; Stolk, Ronald; Bobrow, Martin; Cambon-Thomsen, Anne; Dressler, Lynn; Joly, Yann; Kato, Kazuto; Knoppers, Bartha Maria; Rodriguez, Laura Lyman; McPherson, Treasa; Nicolás, Pilar; Ouellette, Francis; Romeo-Casabona, Carlos; Sarin, Rajiv; Wallace, Susan; Wiesner, Georgia; Wilson, Julia; Zeps, Nikolajs; Simkevitz, Howard; De Rienzo, Assunta; Knoppers, Bartha M
2015-01-01
The biggest challenge in twenty-first century data-intensive genomic science, is developing vast computer infrastructure and advanced software tools to perform comprehensive analyses of genomic data sets for biomedical research and clinical practice. Researchers are increasingly turning to cloud computing both as a solution to integrate data from genomics, systems biology and biomedical data mining and as an approach to analyze data to solve biomedical problems. Although cloud computing provides several benefits such as lower costs and greater efficiency, it also raises legal and ethical issues. In this article, we discuss three key ‘points to consider' (data control; data security, confidentiality and transfer; and accountability) based on a preliminary review of several publicly available cloud service providers' Terms of Service. These ‘points to consider' should be borne in mind by genomic research organizations when negotiating legal arrangements to store genomic data on a large commercial cloud service provider's servers. Diligent genomic cloud computing means leveraging security standards and evaluation processes as a means to protect data and entails many of the same good practices that researchers should always consider in securing their local infrastructure. PMID:25248396
Comparison of roadway roughness derived from LIDAR and SFM 3D point clouds.
DOT National Transportation Integrated Search
2015-10-01
This report describes a short-term study undertaken to investigate the potential for using dense three-dimensional (3D) point clouds generated from light detection and ranging (LIDAR) and photogrammetry to assess roadway roughness. Spatially cont...
3d Modeling of Components of a Garden by Using Point Cloud Data
NASA Astrophysics Data System (ADS)
Kumazakia, R.; Kunii, Y.
2016-06-01
Laser measurement is currently applied to several tasks such as plumbing management, road investigation through mobile mapping systems, and elevation model utilization through airborne LiDAR. Effective laser measurement methods have been well-documented in civil engineering, but few attempts have been made to establish equally effective methods in landscape engineering. By using point cloud data acquired through laser measurement, the aesthetic landscaping of Japanese gardens can be enhanced. This study focuses on simple landscape simulations for pruning and rearranging trees as well as rearranging rocks, lanterns, and other garden features by using point cloud data. However, such simulations lack concreteness. Therefore, this study considers the construction of a library of garden features extracted from point cloud data. The library would serve as a resource for creating new gardens and simulating gardens prior to conducting repairs. Extracted garden features are imported as 3ds Max objects, and realistic 3D models are generated by using a material editor system. As further work toward the publication of a 3D model library, file formats for tree crowns and trunks should be adjusted. Moreover, reducing the size of created models is necessary. Models created using point cloud data are informative because simply shaped garden features such as trees are often seen in the 3D industry.
Pan, Tao; Deng, Tao; Zeng, Xinying; Dong, Wei; Yu, Shuijing
2016-01-01
The biological treatment of polycyclic aromatic hydrocarbons is an important issue. Most microbes have limited practical applications because of the poor bioavailability of polycyclic aromatic hydrocarbons. In this study, the extractive biodegradation of phenanthrene by Sphingomonas polyaromaticivorans was conducted by introducing a cloud point system. The cloud point system is composed of a mixture (40 g/L) of the nonionic surfactants Brij 30 and Tergitol TMN-3 in equal proportions. After phenanthrene degradation, a higher wet cell weight and a lower phenanthrene residue were obtained in the cloud point system than in the control system. According to the results of high-performance liquid chromatography, the residual phenanthrene preferentially partitioned from the dilute phase into the coacervate phase. The concentration of residual phenanthrene in the dilute phase (below 0.001 mg/L) is lower than its solubility in water (1.18 mg/L) after extractive biodegradation. Therefore, dilute phase detoxification was achieved, indicating that the dilute phase could be discharged without causing phenanthrene pollution. Bioavailability was assessed by introducing the apparent logP of the cloud point system. The apparent logP decreased significantly, indicating that the bioavailability of phenanthrene increased remarkably in the system. This study provides a potential application of biological treatment for water and soil contaminated by phenanthrene.
NASA Astrophysics Data System (ADS)
Zhang, Yuyan; Guo, Quanli; Wang, Zhenchun; Yang, Degong
2018-03-01
This paper proposes a non-contact, non-destructive evaluation method for the surface damage of high-speed sliding electrical contact rails. The proposed method establishes a model of damage identification and calculation. A laser scanning system is built to obtain the 3D point cloud data of the rail surface. In order to extract the damage region of the rail surface, the 3D point cloud data are processed using iterative difference, nearest-neighbour search and a data registration algorithm. The curvature of the point cloud data in the damage region is mapped to RGB color information, which directly reflects the change trend of the curvature in the damage region. The extracted damage region is divided into triangular prism elements by a method of triangulation. The volume and mass of a single element are calculated by the method of geometric segmentation. Finally, the total volume and mass of the damage region are obtained by the principle of superposition. The proposed method is applied to several typical damage cases and the results are discussed. The experimental results show that the algorithm can identify damage shapes and calculate damage mass with milligram precision, which is useful for evaluating the damage in a further research stage.
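The volume and mass computation by geometric segmentation can be pictured as summing triangular prism elements: each surface triangle of the damage region multiplied by the mean damage depth at its corners. The sketch below is a generic version of that superposition, not the authors' implementation; the steel density value is an assumption.

```python
import numpy as np

def damage_volume_and_mass(tri_vertices, depths, density=7850.0):
    """Total volume (m^3) and mass (kg) of a damage region triangulated
    into prism elements. tri_vertices: (T, 3, 3) corner coordinates of each
    triangle; depths: (T, 3) damage depth at each corner; density: assumed
    rail steel density in kg/m^3."""
    a = tri_vertices[:, 1] - tri_vertices[:, 0]
    b = tri_vertices[:, 2] - tri_vertices[:, 0]
    areas = 0.5 * np.linalg.norm(np.cross(a, b), axis=1)   # triangle areas
    volumes = areas * depths.mean(axis=1)                  # prism volumes
    total_volume = volumes.sum()                           # superposition
    return total_volume, total_volume * density
```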
NASA Astrophysics Data System (ADS)
Rothmund, Sabrina; Niethammer, Uwe; Walter, Marco; Joswig, Manfred
2013-04-01
In recent years, the high-resolution and multi-temporal 3D mapping of the Earth's surface using terrestrial laser scanning (TLS), ground-based optical images and especially low-cost UAV-based (Unmanned Aerial Vehicle) aerial images has grown in importance. This development resulted from the progressive technical improvement of the imaging systems and the freely available multi-view stereo (MVS) software packages. These different methods of data acquisition for the generation of accurate, high-resolution digital surface models (DSMs) were applied as part of an eight-week field campaign at the Super-Sauze landslide (South French Alps). An area of approximately 10,000 m² with long-term average displacement rates greater than 0.01 m/day has been investigated. The TLS-based point clouds were acquired at different viewpoints, with an average point spacing between 10 and 40 mm, and at different dates. On these days, more than 50 optical images were taken with a low-cost digital compact camera at points along a predefined line on the side part of the landslide. Additionally, aerial images were taken by a radio-controlled mini quad-rotor UAV equipped with another low-cost digital compact camera. The flight altitude ranged between 20 m and 250 m, producing a corresponding ground resolution between 0.6 cm and 7 cm. DGPS measurements were carried out as well in order to geo-reference and validate the point cloud data. To generate unscaled photogrammetric 3D point clouds from a disordered and tilted image set, we use the widespread open-source software packages Bundler and PMVS2 (University of Washington). These multi-temporal DSMs are required on the one hand to determine the three-dimensional surface deformations and on the other hand for the differential correction in orthophoto production. Drawing on the example of the data acquired at the Super-Sauze landslide, we demonstrate the potential but also the limitations of the photogrammetric point clouds. To determine the quality of the photogrammetric point clouds, they are compared with the TLS-based DSMs. The comparison shows that the photogrammetric point accuracies are in the range of cm to dm and therefore do not reach the quality of the high-resolution TLS-based DSMs. Further, the validation of the photogrammetric point clouds reveals that some of them have internal curvature effects. The advantage of photogrammetric 3D data acquisition is the use of low-cost equipment and less time-consuming data collection in the field. While the accuracy of the photogrammetric point clouds is not as high as that of the TLS-based DSMs, the advantages of the former method are seen when it is applied in areas where dm-range accuracy is sufficient.
NASA Astrophysics Data System (ADS)
Orsini, Antonio; Tomasi, Claudio; Calzolari, Francescopiero; Nardino, Marianna; Cacciari, Alessandra; Georgiadis, Teodoro
2002-04-01
Simultaneous measurements of downwelling short-wave solar irradiance and incoming total radiation flux were performed at the Reeves Nevè glacier station (1200 m MSL) in Antarctica on 41 days from late November 1994 to early January 1995, employing the upward sensors of an albedometer and a pyrradiometer. The downwelling short-wave radiation measurements were analysed following the Duchon and O'Malley [J. Appl. Meteorol. 38 (1999) 132] procedure for classifying clouds, using the 50-min running mean values of standard deviation and the ratio of scaled observed to scaled clear-sky irradiance. Comparing these measurements with the Duchon and O'Malley rectangular boundaries and the local human observations of clouds collected on 17 days of the campaign, we found that the Duchon and O'Malley classification method obtained a success rate of 93% for cirrus and only 25% for cumulus. New decision criteria were established for some polar cloud classes providing success rates of 94% for cirrus, 67% for cirrostratus and altostratus, and 33% for cumulus and altocumulus. The ratios of the downwelling short-wave irradiance measured for cloudy-sky conditions to that calculated for clear-sky conditions were analysed in terms of the Kasten and Czeplak [Sol. Energy 24 (1980) 177] formula together with simultaneous human observations of cloudiness, to determine the empirical relationship curves providing reliable estimates of cloudiness for each of the three above-mentioned cloud classes. Using these cloudiness estimates, the downwelling long-wave radiation measurements (obtained as differences between the downward fluxes of total and short-wave radiation) were examined to evaluate the downwelling long-wave radiation flux normalised to totally overcast sky conditions. Calculations of the long-wave radiation flux were performed with the MODTRAN 3.7 code [Kneizys, F.X., Abreu, L.W., Anderson, G.P., Chetwynd, J.H., Shettle, E.P., Berk, A., Bernstein, L.S., Robertson, D.C., Acharya, P., Rothman, L.S., Selby, J.E.A., Gallery, W.O., Clough, S.A., 1996. In: Abreu, L.W., Anderson, G.P. (Eds.), The MODTRAN 2/3 Report and LOWTRAN 7 MODEL. Contract F19628-91-C.0132, Phillips Laboratory, Geophysics Directorate, PL/GPOS, Hanscom AFB, MA, 261 pp.] for both clear-sky and cloudy-sky conditions, considering various cloud types characterised by different cloud base altitudes and vertical thicknesses. From these evaluations, best-fit curves of the downwelling long-wave radiation flux were defined as a function of the cloud base height for the three polar cloud classes. Using these relationship curves, average estimates of the cloud base height were obtained from the three corresponding sub-sets of long-wave radiation measurements. The relative frequency histograms of the cloud base height defined by examining these three sub-sets were found to present median values of 4.7, 1.7 and 3.6 km for cirrus, cirrostratus/altostratus and cumulus/altocumulus, respectively, while median values of 6.5, 1.8 and 2.9 km were correspondingly determined by analysing only the measurements taken together with simultaneous cloud observations.
The Influential Effect of Blending, Bump, Changing Period, and Eclipsing Cepheids on the Leavitt Law
NASA Astrophysics Data System (ADS)
García-Varela, A.; Muñoz, J. R.; Sabogal, B. E.; Vargas Domínguez, S.; Martínez, J.
2016-06-01
The investigation of the nonlinearity of the Leavitt law (LL) is a topic that began more than seven decades ago, when some of the studies in this field found that the LL has a break at about 10 days. The goal of this work is to investigate a possible statistical cause of this nonlinearity. By applying linear regressions to OGLE-II and OGLE-IV data, we find that to obtain the LL by using linear regression, robust techniques to deal with influential points and/or outliers are needed instead of the ordinary least-squares regression traditionally used. In particular, by using M- and MM-regressions we establish firmly and without doubt the linearity of the LL in the Large Magellanic Cloud, without rejecting or excluding Cepheid data from the analysis. This implies that light curves of Cepheids suggesting blending, bumps, eclipses, or period changes do not affect the LL for this galaxy. For the Small Magellanic Cloud, when including Cepheids of this kind, it is not possible to find an adequate model, probably because of the geometry of the galaxy. In that case, a possible influence of these stars could exist.
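The robust-regression idea can be illustrated with statsmodels: fit mean magnitude against log10(period) with a Tukey biweight M-estimator and compare the slope with ordinary least squares. The input arrays below are hypothetical placeholders, not OGLE data files, and the sketch is a generic example rather than the authors' analysis code.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical inputs: log10 of the period in days and mean magnitudes.
log_p = np.load("logP.npy")
mag = np.load("mean_mag.npy")

X = sm.add_constant(log_p)                                     # intercept + slope
ols = sm.OLS(mag, X).fit()                                     # classical LL fit
rlm = sm.RLM(mag, X, M=sm.robust.norms.TukeyBiweight()).fit()  # M-regression
print(f"OLS slope {ols.params[1]:.3f}  vs  robust slope {rlm.params[1]:.3f}")
```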
Bimodal SLD Ice Accretion on a NACA 0012 Airfoil Model
NASA Technical Reports Server (NTRS)
Potapczuk, Mark; Tsao, Jen-Ching; King-Steen, Laura
2016-01-01
This presentation describes the results of ice accretion measurements on a NACA 0012 airfoil model, from the NASA Icing Research Tunnel, using an icing cloud composed of a bimodal distribution of Supercooled Large Droplets. The data consist of photographs, laser scans of the ice surface, and measurements of the mass of ice for each icing condition. The ice shapes accumulated from exposure to an icing cloud with a bimodal droplet distribution were compared to the ice shapes resulting from an equivalent cloud composed of a droplet distribution with a standard bell-curve shape.
Automated Coarse Registration of Point Clouds in 3d Urban Scenes Using Voxel Based Plane Constraint
NASA Astrophysics Data System (ADS)
Xu, Y.; Boerner, R.; Yao, W.; Hoegner, L.; Stilla, U.
2017-09-01
To obtain full coverage of 3D scans in a large-scale urban area, registration between point clouds acquired via terrestrial laser scanning (TLS) is normally mandatory. However, due to the complex urban environment, the automatic registration of different scans is still a challenging problem. In this work, we propose an automatic, marker-free method for fast and coarse registration between point clouds using the geometric constraints of planar patches under a voxel structure. Our proposed method consists of four major steps: the voxelization of the point cloud, the approximation of planar patches, the matching of corresponding patches, and the estimation of transformation parameters. In the voxelization step, the point cloud of each scan is organized with a 3D voxel structure, by which the entire point cloud is partitioned into small individual patches. In the following step, we represent the points of each voxel with the approximated plane function and select those patches resembling planar surfaces. Afterwards, a RANSAC-based strategy is applied to match the corresponding patches. Among all the planar patches of a scan, we randomly select a set of three planar surfaces in order to build a coordinate frame via their normal vectors and their intersection points. The transformation parameters between scans are calculated from these two coordinate frames. The set whose transformation parameters yield the largest number of coplanar patches is identified as the optimal candidate for estimating the correct transformation parameters. Experimental results using TLS datasets of different scenes reveal that our proposed method is both effective and efficient for the coarse registration task. In particular, for fast orientation between scans, our proposed method achieves a registration error of less than about 2 degrees on the test datasets and is much more efficient than the classical baseline methods.
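As an illustration of the frame-construction step described above, the following minimal sketch (in Python with NumPy) shows how three planar patches, each summarized by a unit normal n and offset d of the plane n·x + d = 0, can define a local coordinate frame, and how two corresponding frames yield a rigid transformation. The function names are illustrative; the RANSAC candidate sampling and the coplanarity scoring used by the authors are omitted.

```python
import numpy as np

def plane_intersection(planes):
    """Intersection point of three planes given as (normal, d) with n.x + d = 0."""
    N = np.array([n for n, d in planes])          # 3x3 matrix of normals
    b = -np.array([d for n, d in planes])         # right-hand side
    return np.linalg.solve(N, b)                  # assumes the normals are independent

def build_frame(planes):
    """Orthonormal frame (R, origin) from three roughly orthogonal plane normals."""
    origin = plane_intersection(planes)
    n1, n2, _ = [n / np.linalg.norm(n) for n, d in planes]
    x = n1
    z = np.cross(n1, n2); z /= np.linalg.norm(z)
    y = np.cross(z, x)
    return np.column_stack([x, y, z]), origin

def frame_to_frame_transform(planes_src, planes_dst):
    """Rigid transform mapping the source frame onto the destination frame."""
    R_s, t_s = build_frame(planes_src)
    R_d, t_d = build_frame(planes_dst)
    R = R_d @ R_s.T
    t = t_d - R @ t_s
    return R, t
```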
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Moncrieff, Mitchell; Einaud, Franco (Technical Monitor)
2001-01-01
Numerical cloud models have been developed and applied extensively to study cloud-scale and mesoscale processes during the past four decades. The distinctive aspect of these cloud models is their ability to treat explicitly (or resolve) cloud-scale dynamics. This requires the cloud models to be formulated from the non-hydrostatic equations of motion that explicitly include the vertical acceleration terms, since the vertical and horizontal scales of convection are similar. Such models are also necessary in order to allow gravity waves, such as those triggered by clouds, to be resolved explicitly. In contrast, the hydrostatic approximation, usually applied in global or regional models, does not allow the presence of such gravity waves. In addition, the availability of exponentially increasing computer capabilities has resulted in time integrations increasing from hours to days, domain grid boxes (points) increasing from fewer than 2,000 to more than 2,500,000 grid points with 500 to 1000 m resolution, and 3-D models becoming increasingly prevalent. The cloud-resolving model is now at a stage where it can provide reasonably accurate statistical information on the sub-grid, cloud-resolving processes that are poorly parameterized in climate models and numerical prediction models.
X-ray pulsars in nearby irregular galaxies
NASA Astrophysics Data System (ADS)
Yang, Jun
2018-01-01
The Small Magellanic Cloud (SMC), Large Magellanic Cloud (LMC) and Irregular Galaxy IC 10 are valuable laboratories to study the physical, temporal and statistical properties of the X-ray pulsar population with multi-satellite observations, in order to probe fundamental physics. The known distances of these galaxies help us easily categorize the luminosities of the pulsars, and their age differences are helpful for studying the origin and evolution of compact objects. Therefore, a complete archive of 116 XMM-Newton PN, 151 Chandra (Advanced CCD Imaging Spectrometer) ACIS, and 952 RXTE PCA observations for the pulsars in the Small Magellanic Cloud (SMC) was collected and analyzed, along with 42 XMM-Newton and 30 Chandra observations for the Large Magellanic Cloud, spanning 1997-2014. From a sample of 67 SMC pulsars we generate a suite of products for each pulsar detection: spin period, flux, event list, high time-resolution light curve, pulse profile, periodogram, and X-ray spectrum. Combining all three satellites, I generated complete histories of the spin periods, pulse amplitudes, pulsed fractions and X-ray luminosities. Many of the pulsars show variations in pulse period due to the combination of orbital motion and accretion torques. Long-term spin-up/down trends are seen in 28/25 pulsars respectively, pointing to sustained transfer of mass and angular momentum to the neutron star on decadal timescales. The distributions of pulse detection and flux as functions of spin period provide interesting findings: mapping boundaries of accretion-driven X-ray luminosity, and showing that fast pulsars (P<10 s) are rarely detected yet are more prone to giant outbursts. In parallel we compare the observed pulse profiles to our general relativity (GR) model of X-ray emission in order to constrain the physical parameters of the pulsars. In addition, we conduct a search for optical counterparts to X-ray sources in the local dwarf galaxy IC 10 to form a comparison sample for Magellanic Cloud X-ray pulsars.
NASA Astrophysics Data System (ADS)
Ding, J.; Wang, G.; Xiong, L.; Zhou, X.; England, E.
2017-12-01
Coastal regions are naturally vulnerable to impacts from long-term coastal erosion and episodic coastal hazards caused by extreme weather events. Major geomorphic changes can occur within a few hours during storms. Prediction of storm impact, coastal planning and resilience observation after natural events all require accurate and up-to-date topographic maps of coastal morphology. Thus, the ability to conduct rapid, high-resolution and high-accuracy topographic mapping is of critical importance for long-term coastal management and rapid response after natural hazard events. Terrestrial laser scanning (TLS) techniques have been frequently applied to beach and dune erosion studies and post-hazard responses. However, TLS surveying is relatively slow and costly for rapid surveying. Furthermore, TLS surveying unavoidably retains gray areas that cannot be reached by laser pulses, particularly in wetland areas, which lack direct access in most cases. Aerial mapping using photogrammetry from images taken by unmanned aerial vehicles (UAV) has become a new technique for rapid topographic mapping. UAV photogrammetry mapping techniques provide the ability to map coastal features quickly, safely, inexpensively, on short notice and with minimal impact. The primary products from photogrammetry are point clouds similar to LiDAR point clouds. However, a large number of ground control points (ground truth) are essential for obtaining high-accuracy UAV maps. The ground control points are often obtained by GPS survey simultaneously with the TLS survey in the field. The GPS survey can be a slow and arduous process in the field. This study aims to develop methods for acquiring a large number of ground control points from the TLS survey and validating the point clouds obtained from photogrammetry against the TLS point clouds. A Riegl VZ-2000 TLS scanner was used for developing laser point clouds and a DJI Phantom 4 Pro UAV was used for acquiring images. The aerial images were processed with the photogrammetric mapping software Agisoft PhotoScan. A workflow for conducting rapid TLS and UAV surveys in the field and integrating point clouds obtained from TLS and UAV surveying will be introduced. Key words: UAV photogrammetry, ground control points, TLS, coastal morphology, topographic mapping
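One hedged way to validate the UAV photogrammetric points against the TLS reference, not necessarily the authors' workflow, is to grid both clouds into elevation models and difference them. The sketch below assumes NumPy arrays of x, y, z coordinates; the 0.5 m cell size and the suggested summary statistics are illustrative choices.

```python
import numpy as np

def grid_mean_elevation(xyz, origin, cell, shape):
    """Average elevation per grid cell for one point cloud."""
    ij = np.floor((xyz[:, :2] - origin) / cell).astype(int)
    ok = (ij[:, 0] >= 0) & (ij[:, 0] < shape[1]) & (ij[:, 1] >= 0) & (ij[:, 1] < shape[0])
    z_sum = np.zeros(shape); n = np.zeros(shape)
    np.add.at(z_sum, (ij[ok, 1], ij[ok, 0]), xyz[ok, 2])
    np.add.at(n, (ij[ok, 1], ij[ok, 0]), 1)
    return np.where(n > 0, z_sum / np.maximum(n, 1), np.nan)

def dem_of_difference(tls_xyz, uav_xyz, cell=0.5):
    """Cell-wise elevation difference (UAV minus TLS) over the common extent."""
    origin = np.minimum(tls_xyz[:, :2].min(axis=0), uav_xyz[:, :2].min(axis=0))
    extent = np.maximum(tls_xyz[:, :2].max(axis=0), uav_xyz[:, :2].max(axis=0)) - origin
    shape = tuple(np.ceil(extent[::-1] / cell).astype(int) + 1)
    diff = grid_mean_elevation(uav_xyz, origin, cell, shape) \
         - grid_mean_elevation(tls_xyz, origin, cell, shape)
    return diff   # report e.g. np.nanmean(diff) and np.nanstd(diff)
```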
Sparse Unorganized Point Cloud Based Relative Pose Estimation for Uncooperative Space Target.
Yin, Fang; Chou, Wusheng; Wu, Yun; Yang, Guang; Xu, Song
2018-03-28
This paper proposes an autonomous algorithm to determine the relative pose between the chaser spacecraft and an uncooperative space target, which is essential in advanced space applications, e.g., on-orbit servicing missions. The proposed method, named the Congruent Tetrahedron Align (CTA) algorithm, uses the very sparse unorganized 3D point cloud acquired by a LIDAR sensor and does not require any prior pose information. The core of the method is to determine the relative pose by looking for congruent tetrahedra in the scanning point cloud and the model point cloud on the basis of the target's known model. A two-level index hash table is built to speed up the search. In addition, the Iterative Closest Point (ICP) algorithm is used for pose tracking after CTA. In order to evaluate the method for arbitrary initial attitudes, a simulation system is presented. Specifically, the performance of the proposed method in providing the initial pose needed for the tracking algorithm is demonstrated, as well as its robustness against noise. Finally, a field experiment is conducted and the results demonstrate the effectiveness of the proposed method.
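The ICP pose-tracking stage mentioned above can be sketched with a standard point-to-point formulation using SVD (the Kabsch solution); this is a generic illustration, not the CTA algorithm itself, and it assumes the scan and model are given as NumPy arrays together with an initial pose (R0, t0) such as the one CTA would provide.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(P, Q):
    """Least-squares rotation/translation mapping points P onto Q (Kabsch/SVD)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

def icp(scan, model, R0, t0, iterations=30):
    """Refine an initial pose (e.g. from CTA) by iterative closest point."""
    tree = cKDTree(model)
    R, t = R0, t0
    for _ in range(iterations):
        moved = scan @ R.T + t
        _, idx = tree.query(moved, k=1)           # closest model point per scan point
        R, t = best_rigid_transform(scan, model[idx])
    return R, t
```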
NASA Astrophysics Data System (ADS)
Hess, M. R.; Petrovic, V.; Kuester, F.
2017-08-01
Digital documentation of cultural heritage structures is increasingly common through the application of different imaging techniques. Many works have focused on the application of laser scanning and photogrammetry techniques for the acquisition of three-dimensional (3D) geometry detailing cultural heritage sites and structures. With an abundance of these 3D data assets, there must be a digital environment where these data can be visualized and analyzed. Presented here is a feedback-driven visualization framework that seamlessly enables interactive exploration and manipulation of massive point cloud data. The focus of this work is on the classification of different building materials with the goal of building more accurate as-built information models of historical structures. User-defined functions have been tested within the interactive point cloud visualization framework to evaluate automated and semi-automated classification of 3D point data. These functions include decisions based on observed color, laser intensity, normal vector or local surface geometry. Multiple case studies are presented here to demonstrate the flexibility and utility of the presented point cloud visualization framework to achieve classification objectives.
Automated Point Cloud Correspondence Detection for Underwater Mapping Using AUVs
NASA Technical Reports Server (NTRS)
Hammond, Marcus; Clark, Ashley; Mahajan, Aditya; Sharma, Sumant; Rock, Stephen
2015-01-01
An algorithm for automating correspondence detection between point clouds composed of multibeam sonar data is presented. This allows accurate initialization for point cloud alignment techniques even in cases where accurate inertial navigation is not available, such as iceberg profiling or vehicles with low-grade inertial navigation systems. Techniques from computer vision literature are used to extract, label, and match keypoints between "pseudo-images" generated from these point clouds. Image matches are refined using RANSAC and information about the vehicle trajectory. The resulting correspondences can be used to initialize an iterative closest point (ICP) registration algorithm to estimate accumulated navigation error and aid in the creation of accurate, self-consistent maps. The results presented use multibeam sonar data obtained from multiple overlapping passes of an underwater canyon in Monterey Bay, California. Using strict matching criteria, the method detects 23 between-swath correspondence events in a set of 155 pseudo-images with zero false positives. Using less conservative matching criteria doubles the number of matches but introduces several false positive matches as well. Heuristics based on known vehicle trajectory information are used to eliminate these.
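A hedged sketch of the pseudo-image matching step described above: the abstract does not name the specific detector, so ORB features and a RANSAC-estimated homography (via OpenCV) stand in for whichever computer-vision pipeline was actually used, and the trajectory-based filtering is omitted.

```python
import cv2
import numpy as np

def match_pseudo_images(img_a, img_b, min_matches=10):
    """Keypoint matching between two sonar pseudo-images, refined with RANSAC."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    if len(matches) < min_matches:
        return None
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
    # RANSAC keeps only matches consistent with a single 2D transformation
    H, mask = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 5.0)
    if H is None:
        return None
    inliers = mask.ravel().astype(bool)
    return pts_a[inliers], pts_b[inliers]
```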
NASA Astrophysics Data System (ADS)
Angelats, E.; Parés, M. E.; Kumar, P.
2018-05-01
Accessible cities with accessible services are a long-standing demand of people with reduced mobility. But this demand is still far from becoming a reality, as a lot of work remains to be done. The first step towards accessible cities is to know the real situation of a city and its pavement infrastructure. Detailed maps or databases on street slopes, access to sidewalks, mobility in public parks and gardens, etc. are required. In this paper, we propose to use smartphone-based photogrammetric point clouds as a starting point to create accessibility maps or databases. This paper analyses the performance of these point clouds and the complexity of the image acquisition procedure required to obtain them. The paper proves, through two test cases, that smartphone technology is an economical and feasible solution to obtain the required information, which is quite often sought by city planners to generate accessibility maps. The proposed approach paves the way to generate, in the near term, accessibility maps through the use of point clouds derived from crowdsourced smartphone imagery.
NASA Astrophysics Data System (ADS)
Steer, Philippe; Lague, Dimitri; Gourdon, Aurélie; Croissant, Thomas; Crave, Alain
2016-04-01
The grain-scale morphology of river sediments and their size distribution are important factors controlling the efficiency of fluvial erosion and transport. In turn, constraining the spatial evolution of these two metrics offers deep insights into the dynamics of river erosion and sediment transport from hillslopes to the sea. However, the size distribution of river sediments is generally assessed using statistically-biased field measurements, and determining the grain-scale shape of river sediments remains a real challenge in geomorphology. Here we determine, with new methodological approaches based on the segmentation and geomorphological fitting of 3D point cloud datasets, the size distribution and grain-scale shape of sediments located in river environments. Point cloud segmentation is performed using either machine-learning algorithms or geometrical criteria, such as local plane fitting or curvature analysis. Once the grains are individualized into several sub-clouds, each grain-scale morphology is determined using a 3D geometrical fitting algorithm applied to the sub-cloud. Although different geometrical models could be conceived and tested, only ellipsoidal models were used in this study. A results-checking phase is then performed to remove grains whose best-fitting model has a low level of confidence. The main benefits of this automatic method are that it provides 1) an unbiased estimate of the grain-size distribution over a large range of scales, from centimeters to tens of meters; 2) access to a very large number of data, limited only by the number of grains in the point-cloud dataset; 3) access to the 3D morphology of grains, in turn allowing new metrics characterizing the size and shape of grains to be developed. The main limit of this method is that it can only detect grains with a characteristic size greater than the resolution of the point cloud. This new 3D granulometric method is then applied to river terraces both in the Poerua catchment in New Zealand and along the Laonong river in Taiwan, whose point clouds were obtained using both terrestrial lidar scanning and structure from motion photogrammetry.
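As a simplified stand-in for the ellipsoidal fitting described above, the sketch below estimates ellipsoid half-axes for each segmented grain from the principal components of its sub-cloud; the 2-sigma scaling of the eigenvalues is an assumption, and this is not the authors' fitting algorithm or confidence check.

```python
import numpy as np

def grain_axes_from_pca(points):
    """Approximate ellipsoid half-axes (a >= b >= c) of one grain sub-cloud."""
    centred = points - points.mean(axis=0)
    cov = np.cov(centred.T)
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    half_axes = 2.0 * np.sqrt(eigvals[::-1])      # ~2 sigma along each principal axis (assumed scale)
    return half_axes, eigvecs[:, ::-1]            # axis lengths and orientations

def grain_size_distribution(grains):
    """Intermediate-axis (b) diameters, the usual grain-size metric."""
    return np.array([2.0 * grain_axes_from_pca(g)[0][1] for g in grains])
```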
Surface Fitting Filtering of LIDAR Point Cloud with Waveform Information
NASA Astrophysics Data System (ADS)
Xing, S.; Li, P.; Xu, Q.; Wang, D.; Li, P.
2017-09-01
Full-waveform LiDAR is an active technology of photogrammetry and remote sensing. It provides more detailed information about objects along the path of a laser pulse than discrete-return topographic LiDAR. High-quality point cloud and waveform information can be obtained by waveform decomposition, which can contribute to accurate filtering. A surface fitting filtering method using waveform information is proposed to exploit this advantage. Firstly, the discrete point cloud and waveform parameters are resolved by globally convergent Levenberg-Marquardt decomposition. Secondly, ground seed points are selected, and abnormal ones are detected by waveform parameters and robust estimation. Thirdly, the terrain surface is fitted and the height difference threshold is determined in consideration of window size and mean square error. Finally, the points are classified gradually as the window size rises, and the filtering process terminates when the window size exceeds the threshold. Waveform data from urban, farmland and mountain areas of "WATER (Watershed Allied Telemetry Experimental Research)" are selected for the experiments. Results show that, compared with the traditional method, the accuracy of point cloud filtering is further improved and the proposed method has high practical value.
Kim, Joongheon; Kim, Jong-Kook
2016-01-01
This paper addresses the computation procedures for estimating the impact of interference in 60 GHz IEEE 802.11ad uplink access in order to construct a visual big-data database from randomly deployed surveillance camera sensing devices. The large-scale visual information acquired from surveillance camera devices will be used to organize the big-data database, i.e., this estimation is essential for constructing a centralized cloud-enabled surveillance database. This performance estimation study captures interference impacts on the target cloud access points from multiple interference components generated by the 60 GHz wireless transmissions from nearby surveillance camera devices to their associated cloud access points. For this uplink interference scenario, the interference impacts on the main wireless transmission from a target surveillance camera device to its associated target cloud access point are measured and estimated for a number of settings, taking into account 60 GHz radiation characteristics and antenna radiation pattern models.
Rao, Wenwei; Wang, Yun; Han, Juan; Wang, Lei; Chen, Tong; Liu, Yan; Ni, Liang
2015-06-25
The cloud point of the thermosensitive triblock polymer L61, poly(ethylene oxide)-poly(propylene oxide)-poly(ethylene oxide) (PEO-PPO-PEO), was determined in the presence of various electrolytes (K2HPO4, (NH4)3C6H5O7, and K3C6H5O7). The cloud point of L61 was lowered by the addition of electrolytes, and it decreased linearly with increasing electrolyte concentration. The efficacy of electrolytes in reducing the cloud point followed the order: K3C6H5O7 > (NH4)3C6H5O7 > K2HPO4. With increasing salt concentration, the aqueous two-phase systems exhibited a phase inversion. In addition, increasing the temperature reduced the salt concentration needed to promote the phase inversion. The phase diagrams and liquid-liquid equilibrium data of the L61-K2HPO4/(NH4)3C6H5O7/K3C6H5O7 aqueous two-phase systems (both before and after the phase inversion) were determined at T = (25, 30, and 35) °C. Phase diagrams of the aqueous two-phase systems were fitted to a four-parameter empirical nonlinear expression. Moreover, the slopes of the tie-lines and the area of the two-phase region in the diagram tend to rise with increasing temperature. The capacity of different salts to induce aqueous two-phase system formation followed the same order as the ability of the salts to reduce the cloud point.
REFLECTED LIGHT CURVES, SPHERICAL AND BOND ALBEDOS OF JUPITER- AND SATURN-LIKE EXOPLANETS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dyudina, Ulyana; Kopparla, Pushkar; Ingersoll, Andrew P.
Reflected light curves observed for exoplanets indicate that a few of them host bright clouds. We estimate how the light curve and total stellar heating of a planet depend on forward and backward scattering in the clouds based on Pioneer and Cassini spacecraft images of Jupiter and Saturn. We fit analytical functions to the local reflected brightnesses of Jupiter and Saturn depending on the planet's phase. These observations cover broadbands at 0.59–0.72 and 0.39–0.5 μm, and narrowbands at 0.938 (atmospheric window), 0.889 (CH4 absorption band), and 0.24–0.28 μm. We simulate the images of the planets with a ray-tracing model, and disk-integrate them to produce the full-orbit light curves. For Jupiter, we also fit the modeled light curves to the observed full-disk brightness. We derive spherical albedos for Jupiter and Saturn, and for planets with Lambertian and Rayleigh-scattering atmospheres. Jupiter-like atmospheres can produce light curves that are a factor of two fainter at half-phase than the Lambertian planet, given the same geometric albedo at transit. The spherical albedo is typically lower than for a Lambertian planet by up to a factor of ∼1.5. The Lambertian assumption will underestimate the absorption of the stellar light and the equilibrium temperature of the planetary atmosphere. We also compare our light curves with the light curves of solid bodies: the moons Enceladus and Callisto. Their strong backscattering peak within a few degrees of opposition (secondary eclipse) can lead to an even stronger underestimate of the stellar heating.
Intensity-corrected Herschel Observations of Nearby Isolated Low-mass Clouds
NASA Astrophysics Data System (ADS)
Sadavoy, Sarah I.; Keto, Eric; Bourke, Tyler L.; Dunham, Michael M.; Myers, Philip C.; Stephens, Ian W.; Di Francesco, James; Webb, Kristi; Stutz, Amelia M.; Launhardt, Ralf; Tobin, John J.
2018-01-01
We present intensity-corrected Herschel maps at 100, 160, 250, 350, and 500 μm for 56 isolated low-mass clouds. We determine the zero-point corrections for Herschel Photodetector Array Camera and Spectrometer (PACS) and Spectral Photometric Imaging Receiver (SPIRE) maps from the Herschel Science Archive (HSA) using Planck data. Since these HSA maps are small, we cannot correct them using typical methods. Here we introduce a technique to measure the zero-point corrections for small Herschel maps. We use radial profiles to identify offsets between the observed HSA intensities and the expected intensities from Planck. Most clouds have reliable offset measurements with this technique. In addition, we find that roughly half of the clouds have underestimated HSA-SPIRE intensities in their outer envelopes relative to Planck, even though the HSA-SPIRE maps were previously zero-point corrected. Using our technique, we produce corrected Herschel intensity maps for all 56 clouds and determine their line-of-sight average dust temperatures and optical depths from modified blackbody fits. The clouds have typical temperatures of ∼14–20 K and optical depths of ∼10⁻⁵–10⁻³. Across the whole sample, we find an anticorrelation between temperature and optical depth. We also find lower temperatures than what was measured in previous Herschel studies, which subtracted out a background level from their intensity maps to circumvent the zero-point correction. Accurate Herschel observations of clouds are key to obtaining accurate density and temperature profiles. To make such future analyses possible, intensity-corrected maps for all 56 clouds are publicly available in the electronic version. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
NASA Astrophysics Data System (ADS)
Schwind, Michael
Structure from Motion (SfM) is a photogrammetric technique whereby three-dimensional (3D) structures are estimated from overlapping two-dimensional (2D) image sequences. It is studied in the field of computer vision and utilized in fields such as archeology, engineering, and the geosciences. Currently, many SfM software packages exist that allow for the generation of 3D point clouds. Little work has been done to show how topographic data generated from these software packages differ over varying terrain types and why they might produce different results. This work aims to compare and characterize the differences between point clouds generated by three different SfM software packages: two well-known proprietary solutions (Pix4D, Agisoft PhotoScan) and one open source solution (OpenDroneMap). Five terrain types were imaged utilizing a DJI Phantom 3 Professional small unmanned aircraft system (sUAS). These terrain types include a marsh environment, a gently sloped sandy beach and jetties, a forested peninsula, a house, and a flat parking lot. Each set of imagery was processed with each software package, and the results were directly compared to each other. Before processing the sets of imagery, the software settings were analyzed and chosen in a manner that allowed for the most similar settings to be set across the three software types. This was done in an attempt to minimize point cloud differences caused by dissimilar settings. The characteristics of the resultant point clouds were then compared with each other. Furthermore, a terrestrial light detection and ranging (LiDAR) survey was conducted over the flat parking lot using a Riegl VZ-400 scanner. These data served as ground truth in order to conduct an accuracy assessment of the sUAS-SfM point clouds. Differences were found between the results, apparent not only in the characteristics of the clouds but also in their accuracy. This study allows users of SfM photogrammetry to better understand how different processing software packages compare and the inherent sensitivity of SfM automation in 3D reconstruction. Because this study used mostly default settings within the software, it would be beneficial for further research to investigate the effects that changing parameters have on the fidelity of point cloud datasets generated from different SfM software packages.
Sloped terrain segmentation for autonomous drive using sparse 3D point cloud.
Cho, Seoungjae; Kim, Jonghyun; Ikram, Warda; Cho, Kyungeun; Jeong, Young-Sik; Um, Kyhyun; Sim, Sungdae
2014-01-01
A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous drive. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D) point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR) sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels) and overlapping data is eliminated. We reduce nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We execute ground segmentation in real time by proposing an approach to minimize the comparison between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed at about 19.31 ms per frame.
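A minimal sketch of the voxel quantization and lowermost height map described above, assuming the point cloud is an N x 3 NumPy array; the voxel size and the tolerance used to label ground points are illustrative, and the neighbor-comparison optimizations of the paper are omitted.

```python
import numpy as np

def lowermost_heightmap(points, voxel=0.2):
    """Quantize points into voxels, keep one per voxel, then take the lowest
    height in each (x, y) column as a candidate ground surface."""
    keys = np.floor(points / voxel).astype(np.int64)         # voxel index per point
    _, first = np.unique(keys, axis=0, return_index=True)    # drop overlapping points
    vox = keys[first]
    heightmap = {}
    for ix, iy, iz in vox:
        cell = (ix, iy)
        heightmap[cell] = min(heightmap.get(cell, iz), iz)   # lowermost voxel per column
    return heightmap

def ground_mask(points, voxel=0.2, tolerance=1):
    """Label points whose voxel lies within `tolerance` voxels of the column minimum."""
    hm = lowermost_heightmap(points, voxel)
    keys = np.floor(points / voxel).astype(np.int64)
    return np.array([iz <= hm[(ix, iy)] + tolerance for ix, iy, iz in keys])
```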
Person detection and tracking with a 360° lidar system
NASA Astrophysics Data System (ADS)
Hammer, Marcus; Hebel, Marcus; Arens, Michael
2017-10-01
Today it is easily possible to generate dense point clouds of the sensor environment using 360° LiDAR (Light Detection and Ranging) sensors, which have been available for a number of years. The interpretation of these data is much more challenging. For automated data evaluation, the detection and classification of objects is a fundamental task. Especially in urban scenarios, moving objects such as persons or vehicles are of particular interest, for instance for automatic collision avoidance, mobile sensor platforms or surveillance tasks. In the literature there are several approaches for automated person detection in point clouds. While most techniques show acceptable results in object detection, the computation time is often crucial. The runtime can be problematic, especially due to the amount of data in the panoramic 360° point clouds. On the other hand, for most applications object detection and classification in real time is needed. The paper presents a proposal for a fast, real-time capable algorithm for person detection, classification and tracking in panoramic point clouds.
Linking Advanced Visualization and MATLAB for the Analysis of 3D Gene Expression Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruebel, Oliver; Keranen, Soile V.E.; Biggin, Mark
Three-dimensional gene expression PointCloud data generated by the Berkeley Drosophila Transcription Network Project (BDTNP) provides quantitative information about the spatial and temporal expression of genes in early Drosophila embryos at cellular resolution. The BDTNP team visualizes and analyzes Point-Cloud data using the software application PointCloudXplore (PCX). To maximize the impact of novel, complex data sets, such as PointClouds, the data needs to be accessible to biologists and comprehensible to developers of analysis functions. We address this challenge by linking PCX and Matlab via a dedicated interface, thereby providing biologists seamless access to advanced data analysis functions and giving bioinformatics researchers the opportunity to integrate their analysis directly into the visualization application. To demonstrate the usefulness of this approach, we computationally model parts of the expression pattern of the gene even skipped using a genetic algorithm implemented in Matlab and integrated into PCX via our Matlab interface.
Reinelt, Sebastian; Steinke, Daniel
2014-01-01
In this work we report the synthesis of thermo-, oxidation- and cyclodextrin- (CD) responsive end-group-functionalized polymers, based on N,N-diethylacrylamide (DEAAm). In a classical free-radical chain transfer polymerization, using thiol-functionalized 4-alkylphenols, namely 3-(4-(1,1-dimethylethan-1-yl)phenoxy)propane-1-thiol and 3-(4-(2,4,4-trimethylpentan-2-yl)phenoxy)propane-1-thiol, poly(N,N-diethylacrylamide) (PDEAAm) with well-defined hydrophobic end-groups is obtained. These end-group-functionalized polymers show different cloud point values, depending on the degree of polymerization and the presence of randomly methylated β-cyclodextrin (RAMEB-CD). Additionally, the influence of the oxidation of the incorporated thioether linkages on the cloud point is investigated. The resulting hydrophilic sulfoxides show higher cloud point values for the lower critical solution temperature (LCST). A high degree of functionalization is supported by 1H NMR-, SEC-, FTIR- and MALDI–TOF measurements. PMID:24778720
A cloud physics investigation utilizing Skylab data
NASA Technical Reports Server (NTRS)
Alishouse, J.; Jacobowitz, H.; Wark, D. (Principal Investigator)
1975-01-01
The author has identified the following significant results. The Lowtran 2 program, the S191 spectral response, and the solar spectrum were used to compute the expected absorption by the 2.0 micron band for a variety of cloud pressure levels and solar zenith angles. Analysis of the three long-wavelength data channels continued, in which it was found necessary to impose a minimum radiance criterion. It was also found necessary to modify the computer program to permit the computation of mean values and standard deviations for selected subsets of data on a given tape. A technique for computing the integrated absorption in the A band was devised. The technique normalizes the relative maximum at approximately .78 micron to the solar irradiance curve and then adjusts the relative maximum at approximately .74 micron to fit the solar curve.
NASA Astrophysics Data System (ADS)
Jeffreson, Sarah M. R.; Kruijssen, J. M. Diederik
2018-05-01
We propose a simple analytic theory for environmentally dependent molecular cloud lifetimes, based on the large-scale (galactic) dynamics of the interstellar medium. Within this theory, the cloud lifetime is set by the time-scales for gravitational collapse, galactic shear, spiral arm interactions, epicyclic perturbations, and cloud-cloud collisions. It is dependent on five observable quantities, accessible through measurements of the galactic rotation curve, the gas and stellar surface densities, and the gas and stellar velocity dispersions of the host galaxy. We determine how the relative importance of each dynamical mechanism varies throughout the space of observable galactic properties, and conclude that gravitational collapse and galactic shear play the greatest role in setting the cloud lifetime for the considered range of galaxy properties, while cloud-cloud collisions exert a much lesser influence. All five environmental mechanisms are nevertheless required to obtain a complete picture of cloud evolution. We apply our theory to the galaxies M31, M51, M83, and the Milky Way, and find a strong dependence of the cloud lifetime upon galactocentric radius in each case, with a typical cloud lifetime between 10 and 50 Myr. Our theory is ideally suited for systematic observational tests with the Atacama Large Millimetre/submillimetre array.
Comparative Analysis of Data Structures for Storing Massive TINs in a DBMS
NASA Astrophysics Data System (ADS)
Kumar, K.; Ledoux, H.; Stoter, J.
2016-06-01
Point cloud data are an important source for 3D geoinformation. Modern day 3D data acquisition and processing techniques such as airborne laser scanning and multi-beam echosounding generate billions of 3D points for an area of just a few square kilometers. With the size of the point clouds exceeding the billion mark for even a small area, there is a need for their efficient storage and management. These point clouds are sometimes associated with attributes and constraints as well. Storing billions of 3D points is currently possible, as confirmed by the initial implementations in Oracle Spatial SDO_PC and the PostgreSQL Point Cloud extension. But to be able to analyse and extract useful information from point clouds, we need more than just points, i.e. we require the surface defined by these points in space. There are different ways to represent surfaces in GIS, including grids, TINs, boundary representations, etc. In this study, we investigate database solutions for the storage and management of massive TINs. The classical (face and edge based) and compact (star based) data structures are discussed at length with reference to their structure, advantages and limitations in handling massive triangulations, and are compared with the current solution of PostGIS Simple Feature. The main test dataset is the TIN generated from the third national elevation model of the Netherlands (AHN3) with a point density of over 10 points/m2. The PostgreSQL/PostGIS DBMS is used for storing the generated TIN. The data structures are tested with the generated TIN models to account for their geometry, topology, storage, indexing, and loading time in a database. Our study is useful in identifying the limitations of the existing data structures for storing massive TINs and what is required to optimise these structures for managing massive triangulations in a database.
Fully Convolutional Networks for Ground Classification from LIDAR Point Clouds
NASA Astrophysics Data System (ADS)
Rizaldy, A.; Persello, C.; Gevaert, C. M.; Oude Elberink, S. J.
2018-05-01
Deep Learning has been massively used for image classification in recent years. The use of deep learning for ground classification from LIDAR point clouds has also been recently studied. However, point clouds need to be converted into an image in order to use Convolutional Neural Networks (CNNs). In state-of-the-art techniques, this conversion is slow because each point is converted into a separate image. This approach leads to highly redundant computation during conversion and classification. The goal of this study is to design a more efficient data conversion and ground classification. This goal is achieved by first converting the whole point cloud into a single image. The classification is then performed by a Fully Convolutional Network (FCN), a modified version of CNN designed for pixel-wise image classification. The proposed method is significantly faster than state-of-the-art techniques. On the ISPRS Filter Test dataset, it is 78 times faster for conversion and 16 times faster for classification. Our experimental analysis on the same dataset shows that the proposed method results in 5.22 % of total error, 4.10 % of type I error, and 15.07 % of type II error. Compared to the previous CNN-based technique and LAStools software, the proposed method reduces the total error and type I error (while type II error is slightly higher). The method was also tested on a very high point density LIDAR point clouds resulting in 4.02 % of total error, 2.15 % of type I error and 6.14 % of type II error.
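The single-image conversion described above can be sketched as a simple rasterization of the whole tile into a multi-band image that a pixel-wise classifier such as an FCN could consume; the choice of bands (minimum height, mean intensity, point count) and the cell size are assumptions, not the paper's exact feature set.

```python
import numpy as np

def rasterize_point_cloud(xyz, intensity, cell=1.0):
    """Convert a LIDAR tile into a single 3-band image: minimum height, mean
    intensity and point count per cell."""
    ij = np.floor((xyz[:, :2] - xyz[:, :2].min(axis=0)) / cell).astype(int)
    h, w = ij[:, 1].max() + 1, ij[:, 0].max() + 1
    min_z = np.full((h, w), np.nan)
    sum_i = np.zeros((h, w)); count = np.zeros((h, w))
    for (col, row), z, inten in zip(ij, xyz[:, 2], intensity):
        min_z[row, col] = z if np.isnan(min_z[row, col]) else min(min_z[row, col], z)
        sum_i[row, col] += inten
        count[row, col] += 1
    mean_i = np.divide(sum_i, count, out=np.zeros_like(sum_i), where=count > 0)
    return np.dstack([min_z, mean_i, count])    # feed this image to the pixel-wise classifier
```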
Towards semi-automatic rock mass discontinuity orientation and set analysis from 3D point clouds
NASA Astrophysics Data System (ADS)
Guo, Jiateng; Liu, Shanjun; Zhang, Peina; Wu, Lixin; Zhou, Wenhui; Yu, Yinan
2017-06-01
Obtaining accurate information on rock mass discontinuities for deformation analysis and the evaluation of rock mass stability is important. Obtaining measurements for high and steep zones with the traditional compass method is difficult. Photogrammetry, three-dimensional (3D) laser scanning and other remote sensing methods have gradually become mainstream methods. In this study, a method that is based on a 3D point cloud is proposed to semi-automatically extract rock mass structural plane information. The original data are pre-treated prior to segmentation by removing outlier points. The next step is to segment the point cloud into different point subsets. Various parameters, such as the normal, dip/direction and dip, can be calculated for each point subset after obtaining the equation of the best fit plane for the relevant point subset. A cluster analysis (a point subset that satisfies some conditions and thus forms a cluster) is performed based on the normal vectors by introducing the firefly algorithm (FA) and the fuzzy c-means (FCM) algorithm. Finally, clusters that belong to the same discontinuity sets are merged and coloured for visualization purposes. A prototype system is developed based on this method to extract the points of the rock discontinuity from a 3D point cloud. A comparison with existing software shows that this method is feasible. This method can provide a reference for rock mechanics, 3D geological modelling and other related fields.
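The set-identification step above can be illustrated with a plain k-means clustering of patch normals (scikit-learn); this is a hedged simplification of the firefly-initialized fuzzy c-means used by the authors, and the dip/dip-direction conversion assumes an x-east, y-north, z-up coordinate frame.

```python
import numpy as np
from sklearn.cluster import KMeans

def discontinuity_sets(normals, n_sets=3, seed=0):
    """Cluster unit normal vectors of fitted patches into discontinuity sets."""
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    n[n[:, 2] < 0] *= -1                      # fold antiparallel normals together
    return KMeans(n_clusters=n_sets, n_init=10, random_state=seed).fit_predict(n)

def dip_and_direction(normal):
    """Dip angle and dip direction (degrees) from a unit normal (x east, y north, z up)."""
    nx, ny, nz = normal / np.linalg.norm(normal)
    dip = np.degrees(np.arccos(abs(nz)))
    dip_dir = np.degrees(np.arctan2(nx, ny)) % 360.0   # azimuth of the horizontal projection
    return dip, dip_dir
```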
Automatic Monitoring of Tunnel Deformation Based on High Density Point Clouds Data
NASA Astrophysics Data System (ADS)
Du, L.; Zhong, R.; Sun, H.; Wu, Q.
2017-09-01
An automated method for tunnel deformation monitoring using high density point cloud data is presented. Firstly, the 3D point cloud data are converted to a two-dimensional surface by projection onto the XOY plane; the projection point set of the central axis on the XOY plane, named Uxoy, is calculated by combining the Alpha Shape algorithm with the RANSAC (Random Sample Consensus) algorithm, and then the projection point set of the central axis on the YOZ plane, named Uyoz, is obtained from the highest and lowest points, which are extracted by intersecting the tunnel point cloud with straight lines that pass through each point of Uxoy and are perpendicular to the two-dimensional surface; Uxoy and Uyoz together finally form the 3D central axis. Secondly, the buffer of each cross section is calculated by the K-nearest neighbor algorithm, and the initial cross-sectional point set is quickly constructed by a projection method. Finally, the cross sections are denoised and the section lines are fitted using iterative ellipse fitting. In order to improve the accuracy of the cross sections, a fine adjustment method is proposed to rotate the initial sectional plane around the intercept point in the horizontal and vertical directions within the buffer. The proposed method is applied in a Shanghai subway tunnel, and the deformation of each section in the direction of 0 to 360 degrees is calculated. The results show that the cross sections change from regular circles to flattened circles due to the great pressure at the top of the tunnel.
3D Land Cover Classification Based on Multispectral LIDAR Point Clouds
NASA Astrophysics Data System (ADS)
Zou, Xiaoliang; Zhao, Guihua; Li, Jonathan; Yang, Yuanxi; Fang, Yong
2016-06-01
A multispectral Lidar system can emit simultaneous laser pulses at different wavelengths. The reflected multispectral energy is captured through a receiver of the sensor, and the return signal together with the position and orientation information of the sensor is recorded. These recorded data are combined with GNSS/IMU data in further post-processing, forming high density multispectral 3D point clouds. As the first commercial multispectral airborne Lidar sensor, the Optech Titan system is capable of collecting point cloud data from all three channels: at 532 nm visible (green), at 1064 nm near infrared (NIR) and at 1550 nm intermediate infrared (IR). It has become a new source of data for 3D land cover classification. The paper presents an Object-Based Image Analysis (OBIA) approach that uses only multispectral Lidar point cloud datasets for 3D land cover classification. The approach consists of three steps. Firstly, multispectral intensity images are segmented into image objects on the basis of multi-resolution segmentation integrating different scale parameters. Secondly, intensity objects are classified into nine categories by using customized classification-index features and a combination of the multispectral reflectance with the vertical distribution of object features. Finally, accuracy assessment is conducted by comparing random reference sample points from Google imagery tiles with the classification results. The classification results show higher overall accuracy for most of the land cover types. Over 90% overall accuracy is achieved using multispectral Lidar point clouds for 3D land cover classification.
NASA Astrophysics Data System (ADS)
Charbonnier, P.; Chavant, P.; Foucher, P.; Muzet, V.; Prybyla, D.; Perrin, T.; Grussenmeyer, P.; Guillemin, S.
2013-07-01
With recent developments in the field of technology and computer science, conventional methods are being supplanted by laser scanning and digital photogrammetry. These two different surveying techniques generate 3-D models of real world objects or structures. In this paper, we consider the application of terrestrial laser scanning (TLS) and photogrammetry to the surveying of canal tunnels. The inspection of such structures requires time, safe access, specific processing and professional operators. Therefore, a French partnership proposes to develop dedicated equipment based on image processing for the visual inspection of canal tunnels. A 3D model of the vault and side walls of the tunnel is constructed from images recorded onboard a boat moving inside the tunnel. To assess the accuracy of this photogrammetric model (PM), a reference model is built using static TLS. Here we address the problem of comparing the resulting point clouds. Difficulties arise because of the highly differentiated acquisition processes, which result in very different point densities. We propose a new tool, designed to compare differences between pairs of point clouds or surfaces (triangulated meshes). Moreover, dealing with huge datasets requires the implementation of appropriate structures and algorithms. Several techniques are presented: point-to-point, cloud-to-cloud and cloud-to-mesh. In addition, farthest point resampling, an octree structure and the Hausdorff distance are adopted and described. Experimental results are shown for a 475 m long canal tunnel located in France.
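The cloud-to-cloud and Hausdorff comparisons mentioned above can be sketched with a k-d tree; this minimal version ignores the octree resampling and mesh handling of the authors' tool.

```python
import numpy as np
from scipy.spatial import cKDTree

def directed_distances(A, B):
    """Distance from every point of cloud A to its nearest neighbour in cloud B."""
    return cKDTree(B).query(A, k=1)[0]

def cloud_to_cloud(A, B):
    """Mean and RMS of the A -> B nearest-neighbour distances."""
    d = directed_distances(A, B)
    return d.mean(), np.sqrt((d ** 2).mean())

def hausdorff(A, B):
    """Symmetric Hausdorff distance: worst-case mismatch between the two clouds."""
    return max(directed_distances(A, B).max(), directed_distances(B, A).max())
```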
Terrestrial laser scanning in monitoring of anthropogenic objects
NASA Astrophysics Data System (ADS)
Zaczek-Peplinska, Janina; Kowalska, Maria
2017-12-01
The registered xyz coordinates in the form of a point cloud captured by a terrestrial laser scanner and the intensity values (I) assigned to them make it possible to perform geometric and spectral analyses. Comparison of point clouds registered in different time periods requires conversion of the data to a common coordinate system, and proper data selection is necessary. Factors like point distribution dependent on the distance between the scanner and the surveyed surface, angle of incidence, tasked scan density and intensity value have to be taken into consideration. A prerequisite for running a correct analysis of the point clouds registered during periodic measurements using a laser scanner is the ability to determine the quality and accuracy of the analysed data. The article presents a concept of spectral data adjustment based on geometric analysis of a surface, as well as examples of geometric analyses integrating geometric and physical data in one cloud of points: point coordinates, recorded intensity values, and thermal images of an object. The experiments described here show multiple possibilities for the use of terrestrial laser scanning data and demonstrate the necessity of multi-aspect and multi-source analyses in anthropogenic object monitoring. The article presents examples of multi-source data analyses with regard to intensity value correction due to the beam's incidence angle. The measurements were performed using a Leica Nova MS50 scanning total station, a Z+F Imager 5010 scanner and the integrated Z+F T-Cam thermal camera.
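A common first-order model for the incidence-angle correction discussed above divides the recorded intensity by the cosine of the incidence angle, optionally with an inverse-square range normalization; the sketch below uses that model as an assumption, since the abstract does not specify the authors' exact correction.

```python
import numpy as np

def incidence_angles(points, normals, scanner_origin):
    """Angle between the laser beam and the local surface normal at each point."""
    beams = points - scanner_origin
    beams /= np.linalg.norm(beams, axis=1, keepdims=True)
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    cosang = np.clip(np.abs(np.sum(beams * n, axis=1)), 1e-6, 1.0)
    return np.arccos(cosang), cosang

def correct_intensity(intensity, points, normals, scanner_origin, ref_range=10.0):
    """Lambertian cosine correction plus inverse-square range normalization (assumed model)."""
    _, cosang = incidence_angles(points, normals, scanner_origin)
    r = np.linalg.norm(points - scanner_origin, axis=1)
    return intensity / cosang * (r / ref_range) ** 2
```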
3D Point Cloud Model Colorization by Dense Registration of Digital Images
NASA Astrophysics Data System (ADS)
Crombez, N.; Caron, G.; Mouaddib, E.
2015-02-01
Architectural heritage is a historic and artistic property which has to be protected, preserved, restored and shown to the public. Modern tools like 3D laser scanners are increasingly used in heritage documentation. Most of the time, the 3D laser scanner is complemented by a digital camera, which is used to enrich the accurate geometric information with the scanned objects' colors. However, the photometric quality of the acquired point clouds is generally rather low because of several problems presented below. We propose an accurate method for registering digital images acquired from any viewpoint onto point clouds, which is a crucial step for good colorization by color projection. We express this image-to-geometry registration as a pose estimation problem. The camera pose is computed using the entire image intensities under a photometric visual and virtual servoing (VVS) framework. The camera extrinsic and intrinsic parameters are automatically estimated. Because we estimate the intrinsic parameters, we do not need any information about the camera that took the digital image. Finally, when the point cloud model and the digital image are correctly registered, we project the 3D model into the digital image frame and assign new colors to the visible points. The performance of the approach is proven in simulation and real experiments on indoor and outdoor datasets of the cathedral of Amiens, which highlight the success of our method, leading to point clouds with better photometric quality and resolution.
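Once the pose is known, the colorization step reduces to projecting each 3D point through a camera model and sampling the image. The sketch below assumes an ideal distortion-free pinhole camera and ignores occlusions, both of which the photometric VVS framework above goes beyond.

```python
import numpy as np

def colorize_point_cloud(points, image, K, R, t):
    """Assign an RGB colour to each visible 3D point using a pinhole camera.
    K: 3x3 intrinsics, (R, t): world-to-camera pose, image: HxWx3 array."""
    cam = points @ R.T + t                        # world -> camera coordinates
    in_front = cam[:, 2] > 0
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]                   # perspective division
    u, v = np.round(uv[:, 0]).astype(int), np.round(uv[:, 1]).astype(int)
    h, w = image.shape[:2]
    visible = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = np.zeros((len(points), 3), dtype=image.dtype)
    colors[visible] = image[v[visible], u[visible]]
    return colors, visible
```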
Analysis of Uncertainty in a Middle-Cost Device for 3D Measurements in BIM Perspective
Sánchez, Alonso; Naranjo, José-Manuel; Jiménez, Antonio; González, Alfonso
2016-01-01
Medium-cost devices equipped with sensors are being developed to get 3D measurements. Some allow for generating geometric models and point clouds. Nevertheless, the accuracy of these measurements should be evaluated, taking into account the requirements of the Building Information Model (BIM). This paper analyzes the uncertainty in outdoor/indoor three-dimensional coordinate measures and point clouds (using Spherical Accuracy Standard (SAS) methods) for Eyes Map, a medium-cost tablet manufactured by e-Capture Research & Development Company, Mérida, Spain. To achieve it, in outdoor tests, by means of this device, the coordinates of targets were measured from 1 to 6 m and cloud points were obtained. Subsequently, these were compared to the coordinates of the same targets measured by a Total Station. The Euclidean average distance error was 0.005–0.027 m for measurements by Photogrammetry and 0.013–0.021 m for the point clouds. All of them satisfy the tolerance for point cloud acquisition (0.051 m) according to the BIM Guide for 3D Imaging (General Services Administration); similar results are obtained in the indoor tests, with values of 0.022 m. In this paper, we establish the optimal distances for the observations in both, Photogrammetry and 3D Photomodeling modes (outdoor) and point out some working conditions to avoid in indoor environments. Finally, the authors discuss some recommendations for improving the performance and working methods of the device. PMID:27669245
Point-cloud-to-point-cloud technique on tool calibration for dental implant surgical path tracking
NASA Astrophysics Data System (ADS)
Lorsakul, Auranuch; Suthakorn, Jackrit; Sinthanayothin, Chanjira
2008-03-01
Dental implantation is one of the most popular methods of tooth root replacement used in prosthetic dentistry. A computerized navigation system based on a pre-surgical plan is offered to minimize the potential risk of damage to critical anatomic structures of patients. Dental tool tip calibration is an important intraoperative procedure to determine the relation between the hand-piece tool tip and the hand-piece's markers. When transferring coordinates from preoperative CT data to reality, this relation is one component of the typical registration problem. It is part of a navigation system which will be developed for further integration. High accuracy is required, and this relation is obtained by point-cloud-to-point-cloud rigid transformations and singular value decomposition (SVD) for minimizing rigid registration errors. In earlier studies, commercial surgical navigation systems, such as those from BrainLAB and Materialise, have flexibility problems with tool tip calibration: their systems either require a special tool tip calibration device or are unable to accommodate a different tool. The proposed procedure is to use the pointing device or hand-piece to touch the pivot point and to compute the transformation matrix. This matrix is calculated every time the hand-piece moves to a new position while the tool tip stays at the same point. The experiment relies on information from the tracking device, image acquisition and image processing algorithms. The key result is that the point-cloud-to-point-cloud approach requires only 3 pose images of the tool to converge to a minimum error of 0.77%, and the obtained result correctly uses the tool holder to track the simulated path line displayed in the graphic animation.
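The pivot-based tool-tip calibration can be written as a small linear least-squares problem: each tracked pose (R_i, t_i) of the hand-piece markers must map the unknown tip offset to the same unknown pivot point, R_i p_tip + t_i = p_pivot. The sketch below is the standard formulation of that idea, not necessarily the authors' implementation.

```python
import numpy as np

def pivot_calibration(rotations, translations):
    """Solve R_i @ p_tip + t_i = p_pivot for all tracked poses (least squares).
    rotations: list of 3x3 marker-to-tracker rotations,
    translations: list of 3-vectors. Returns the tip offset and pivot position."""
    A_rows, b_rows = [], []
    for R, t in zip(rotations, translations):
        A_rows.append(np.hstack([R, -np.eye(3)]))   # unknowns stacked as [p_tip, p_pivot]
        b_rows.append(-np.asarray(t))
    A = np.vstack(A_rows)
    b = np.concatenate(b_rows)
    x, _, _, _ = np.linalg.lstsq(A, b, rcond=None)
    p_tip, p_pivot = x[:3], x[3:]
    return p_tip, p_pivot
```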
Linking the Climate and Thermal Phase Curve of 55 Cancri e
NASA Astrophysics Data System (ADS)
Hammond, Mark; Pierrehumbert, Raymond T.
2017-11-01
The thermal phase curve of 55 Cancri e is the first measurement of the temperature distribution of a tidally locked super-Earth, but raises a number of puzzling questions about the planet’s climate. The phase curve has a high amplitude and peak offset, suggesting that it has a significant eastward hot-spot shift as well as a large day-night temperature contrast. We use a general circulation model to model potential climates, and investigate the relation between bulk atmospheric composition and the magnitude of these seemingly contradictory features. We confirm theoretical models of tidally locked circulation are consistent with our numerical model of 55 Cnc e, and rule out certain atmospheric compositions based on their thermodynamic properties. Our best-fitting atmosphere has a significant hot-spot shift and day-night contrast, although these are not as large as the observed phase curve. We discuss possible physical processes that could explain the observations, and show that night-side cloud formation from species such as SiO from a day-side magma ocean could potentially increase the phase curve amplitude and explain the observations. We conclude that the observations could be explained by an optically thick atmosphere with a low mean molecular weight, a surface pressure of several bars, and a strong eastward circulation, with night-side cloud formation a possible explanation for the difference between our model and the observations.
Song, Liang; Zhang, Yong-Jiang; Chen, Xi; Li, Su; Lu, Hua-Zheng; Wu, Chuan-Sheng; Tan, Zheng-Hong; Liu, Wen-Yao; Shi, Xian-Meng
2015-07-01
Fan life forms are bryophytes with shoots rising from a vertical substratum that branch repeatedly in the horizontal plane to form flattened photosynthetic surfaces, which are well suited for intercepting water from moving air. However, detailed water relations, gas exchange characteristics of fan bryophytes and their adaptations to particular microhabitats remain poorly understood. In this study, we measured and analyzed microclimatic data, as well as water release curves, pressure-volume relationships and photosynthetic water and light response curves for three common fan bryophytes in an Asian subtropical montane cloud forest (SMCF). Results demonstrate high relative humidity but low light levels and temperatures in the understory, and a strong effect of fog on water availability for bryophytes in the SMCF. The fact that fan bryophytes in dry air lose most of their free water within 1 h, together with the strong dependence of net photosynthesis rates on water content, implies that the transition from a hydrated, photosynthetically active state to a dry, inactive state is rapid. In addition, fan bryophytes developed relatively high cell wall elasticity and the osmoregulatory capacity to tolerate desiccation. These fan bryophytes had low light saturation and compensation points of photosynthesis, indicating shade tolerance. It is likely that fan bryophytes can flourish on tree trunks in the SMCF because of the substantial annual precipitation, high average relative humidity, and frequent and persistent fog, which can provide continual water sources for them to intercept. Nevertheless, the low water retention capacity and strong dependence of net photosynthesis on water content of fan bryophytes indicate a high risk of an unbalanced carbon budget if the frequency and severity of drought increase in the future as predicted.
Layer stacking: A novel algorithm for individual forest tree segmentation from LiDAR point clouds
Elias Ayrey; Shawn Fraver; John A. Kershaw; Laura S. Kenefic; Daniel Hayes; Aaron R. Weiskittel; Brian E. Roth
2017-01-01
As light detection and ranging (LiDAR) technology advances, it has become common for datasets to be acquired at a point density high enough to capture structural information from individual trees. To process these data, an automatic method of isolating individual trees from a LiDAR point cloud is required. Traditional methods for segmenting trees attempt to isolate...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riihimaki, Laura D.; Comstock, Jennifer M.; Luke, Edward
To understand the microphysical processes that impact diabatic heating and cloud lifetimes in convection, we need to characterize the spatial distribution of supercooled liquid water. To address this observational challenge, vertically pointing active sensors at the Darwin Atmospheric Radiation Measurement (ARM) site are used to classify cloud phase within a deep convective cloud in a shallow to deep convection transitional case. The cloud cannot be fully observed by a lidar due to signal attenuation. Thus we develop an objective method for identifying hydrometeor classes, including mixed-phase conditions, using k-means clustering on parameters that describe the shape of the Doppler spectra from vertically pointing Ka-band cloud radar. This approach shows that multiple, overlapping mixed-phase layers exist within the cloud, rather than a single region of supercooled liquid, indicating complexity to how ice growth and diabatic heating occurs in the vertical structure of the cloud.
NASA Astrophysics Data System (ADS)
Riihimaki, L. D.; Comstock, J. M.; Luke, E.; Thorsen, T. J.; Fu, Q.
2017-07-01
To understand the microphysical processes that impact diabatic heating and cloud lifetimes in convection, we need to characterize the spatial distribution of supercooled liquid water. To address this observational challenge, ground-based vertically pointing active sensors at the Darwin Atmospheric Radiation Measurement site are used to classify cloud phase within a deep convective cloud. The cloud cannot be fully observed by a lidar due to signal attenuation. Therefore, we developed an objective method for identifying hydrometeor classes, including mixed-phase conditions, using k-means clustering on parameters that describe the shape of the Doppler spectra from vertically pointing Ka-band cloud radar. This approach shows that multiple, overlapping mixed-phase layers exist within the cloud, rather than a single region of supercooled liquid. Diffusional growth calculations show that the conditions for the Wegener-Bergeron-Findeisen process exist within one of these mixed-phase microstructures.
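The classification step in both studies rests on unsupervised clustering of Doppler-spectrum shape parameters. The sketch below illustrates that idea with scikit-learn's KMeans; the feature names and the number of clusters are assumptions for illustration, not the authors' exact configuration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def cluster_hydrometeor_classes(spectrum_features, n_classes=4, seed=0):
    """Group radar range gates by the shape of their Doppler spectra.

    spectrum_features: (n_gates, n_features) array of shape descriptors
    (e.g. spectrum width, skewness, kurtosis); the exact parameter set
    is an assumption here, not taken from the papers.
    """
    scaled = StandardScaler().fit_transform(spectrum_features)
    km = KMeans(n_clusters=n_classes, n_init=10, random_state=seed)
    labels = km.fit_predict(scaled)
    return labels, km.cluster_centers_

# Example with random placeholder features for 1000 range gates:
# labels, centers = cluster_hydrometeor_classes(np.random.rand(1000, 3))
```

Each cluster would still need to be mapped to a physical class (ice, liquid, mixed phase) by inspecting its mean spectral properties, which is the step the authors constrain with lidar observations where available.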
NASA Astrophysics Data System (ADS)
Sirmacek, B.; Lindenbergh, R. C.; Menenti, M.
2013-10-01
Fusion of 3D airborne laser (LIDAR) data and terrestrial optical imagery can be applied in 3D urban modeling and model updating. The most challenging aspect of the fusion procedure is registering the terrestrial optical images on the LIDAR point clouds. In this article, we propose an approach for registering these two data types from different sensor sources. We use iPhone camera images, taken in front of the urban structure of interest by the application user, and high-resolution LIDAR point clouds acquired by an airborne laser sensor. After retrieving the photo capturing position and orientation from the iPhone photograph metafile, we automatically select the area of interest in the point cloud and transform it into a range image whose grayscale intensity levels encode the distance from the image acquisition position. We use local features to register the iPhone image to the generated range image. In this article, we apply a registration process based on local feature extraction and graph matching. Finally, the registration result is used for facade texture mapping on the 3D building surface mesh generated from the LIDAR point cloud. Our experimental results indicate the potential of the proposed algorithm framework for 3D urban map updating and enhancement purposes.
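The intermediate product of this pipeline is a range image rendered from the point cloud at the estimated iPhone position. A minimal sketch of that rendering step is given below; the simple pinhole model, the focal length in pixels, and the camera rotation matrix are hypothetical stand-ins for the pose recovered from the photograph metafile in the paper.

```python
import numpy as np

def range_image(points, cam_pos, R, f_pix, width, height):
    """Project LIDAR points into a virtual camera and build a grayscale range image.

    cam_pos: camera position in world coordinates; R: world-to-camera rotation;
    f_pix: focal length in pixels. All are assumed known (e.g. from image metadata).
    """
    pc = (np.asarray(points, dtype=float) - cam_pos) @ R.T   # camera coordinates
    pc = pc[pc[:, 2] > 0]                                    # keep points in front
    u = (f_pix * pc[:, 0] / pc[:, 2] + width / 2).astype(int)
    v = (f_pix * pc[:, 1] / pc[:, 2] + height / 2).astype(int)
    dist = np.linalg.norm(pc, axis=1)
    img = np.full((height, width), np.inf)
    valid = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    np.minimum.at(img, (v[valid], u[valid]), dist[valid])    # keep nearest point per pixel
    img[np.isinf(img)] = dist.max()
    span = img.max() - img.min() + 1e-9
    return (255 * (img - img.min()) / span).astype(np.uint8)
```

Local features (e.g. SIFT) can then be extracted from both the iPhone photograph and this synthetic image to establish the correspondences used in the graph matching step.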
NASA Technical Reports Server (NTRS)
Slobin, S. D.
1982-01-01
The microwave attenuation and noise temperature effects of clouds can result in serious degradation of telecommunications link performance, especially for low-noise systems presently used in deep-space communications. Although cloud effects are generally less than rain effects, the frequent presence of clouds will cause some amount of link degradation a large portion of the time. This paper presents a general review of cloud types and their water particle densities, attenuation and noise temperature calculations, and basic link signal-to-noise ratio calculations. Tabular results of calculations for 12 different cloud models are presented for frequencies in the range 10-50 GHz. Curves of average-year attenuation and noise temperature statistics at frequencies ranging from 10 to 90 GHz, calculated from actual surface and radiosonde observations, are given for 15 climatologically distinct regions in the contiguous United States, Alaska, and Hawaii. Nonuniform sky cover is considered in these calculations.
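The link between cloud attenuation and added receiver noise in such calculations follows the standard relation for an absorbing medium at an effective physical temperature. A small helper is sketched below; the assumed medium temperature of 280 K is a typical value, not one taken from the paper.

```python
def cloud_noise_temperature(attenuation_db, medium_temp_k=280.0):
    """Noise temperature contributed by an absorbing cloud layer.

    Uses T = Tm * (1 - 10**(-A/10)), where A is the one-way attenuation in dB
    and Tm is an assumed effective physical temperature of the medium.
    """
    return medium_temp_k * (1.0 - 10.0 ** (-attenuation_db / 10.0))

# e.g. a 2 dB cloud attenuation adds roughly 103 K of noise temperature
# print(cloud_noise_temperature(2.0))
```

For a low-noise deep-space receiver with a system temperature of only a few tens of kelvin, an increase of this size degrades the signal-to-noise ratio substantially, which is the effect the paper tabulates for the different cloud models.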
NASA Astrophysics Data System (ADS)
Mateos, David; di Sarra, Alcide; Meloni, Daniela; di Biagio, Claudia; Sferlazzo, Damiano M.
2011-08-01
Measurements of UV spectra, total ozone, cloud cover, and cloud optical thickness, obtained at Lampedusa (central Mediterranean), are used to investigate the influence of clouds on the spectral UV irradiance, through the cloud modification factor (CMF), and on five biological processes. The CMF decreases with cloud optical thickness (COT), from about 0.5 for COT ~ 15 to 0.25 for COT ~ 45, and decreases with increasing wavelength above 315-320 nm. Observations display an increase in the CMF from 295 to 320 nm, which is related to enhanced absorption by tropospheric ozone due to the long photon path lengths under cloudy conditions. The use of a wavelength-independent CMF instead of the experimentally determined spectral curves produces an overestimation of the biological effects of UV irradiance. The overestimation may be as large as 30% for DNA damage, 20% for vitamin D synthesis, 12% for plant damage, and 8-10% for phytoplankton inhibition and erythema.
Assessment of different models for computing the probability of a clear line of sight
NASA Astrophysics Data System (ADS)
Bojin, Sorin; Paulescu, Marius; Badescu, Viorel
2017-12-01
This paper is focused on modeling the morphological properties of cloud fields in terms of the probability of a clear line of sight (PCLOS). PCLOS is defined as the probability that a line of sight between the observer and a given point of the celestial vault passes freely without intersecting a cloud. A variety of PCLOS models assuming hemispherical, semi-ellipsoidal, and ellipsoidal cloud shapes are tested. The effective parameters (cloud aspect ratio and absolute cloud fraction) are extracted from high-resolution series of sunshine number measurements. The performance of the PCLOS models is evaluated from the perspective of their ability to retrieve the point cloudiness. The advantages and disadvantages of the tested models are discussed, with the aim of a simplified parameterization of PCLOS models.
UBV Photometry of Selected Eclipsing Binaries in the Magellanic Clouds.
NASA Astrophysics Data System (ADS)
Davidge, Timothy John
1987-12-01
UBV photoelectric observations of five eclipsing binaries in the Magellanic Clouds are presented and discussed in detail. The systems studied are HV1620 and HV1669 in the Small Magellanic Cloud and HV2241, HV2765, and HV5943 in the Large Magellanic Cloud. Classification spectra indicate that the components of these systems are of spectral type late O or early B. The systems are located in moderately crowded areas. Therefore, CCD observations were used to construct models of the star fields around the variables. These were used to correct the photoelectric measurements for contamination. Light curve solutions were found with the Wilson-Devinney program. A two-dimensional search of parameter space involving the mass ratio and the surface potential of the secondary component was employed. This procedure was tested by numerical simulation and was found to predict the light curve elements, including the mass ratios, within their estimated uncertainties. It appears likely that none of the systems are in contact, a surprising result considering the high frequency of early-type contact binaries in the solar neighborhood. The light curve solutions were then used to compute the absolute dimensions of the components. Only one system, HV2241, has a radial velocity curve, allowing its absolute dimensions to be well established. Less accurate absolute dimensions were calculated for the remaining systems using photometric information. The components were then placed on H-R diagrams and compared with theoretical models of stellar evolution. The positions of the components on these diagrams appear to support the existence of convective core overshooting. The evolutionary status of the systems was also discussed. The system with the most accurately determined absolute dimensions, HV2241, appears to have undergone, or is nearing the end of, Case A mass transfer. Two other systems, HV1620 and HV1669, may also be involved in mass transfer. Finally, the use of eclipsing binaries as distance indicators was investigated. The distance modulus of the LMC was computed in two ways. One approach used the absolute dimensions found with the radial velocity data while the other employed the method of photometric parallaxes. The latter technique was also used to calculate the distance modulus of the SMC.
NASA Astrophysics Data System (ADS)
Gupta, S.; Lohani, B.
2014-05-01
Mobile augmented reality is a next-generation technology for intelligently visualising the 3D real world. The technology is expanding at a fast pace, upgrading the status of a smart phone to an intelligent device. The research problem identified and presented in the current work is to view the actual dimensions of various objects captured by a smart phone in real time. The proposed methodology first establishes correspondence between the LiDAR point cloud, which is stored on a server, and the image that is captured by the mobile. This correspondence is established using the exterior and interior orientation parameters of the mobile camera and the coordinates of the LiDAR data points which lie in the viewshed of the mobile camera. A pseudo-intensity image is generated using the LiDAR points and their intensity. The mobile image and the pseudo-intensity image are then registered using the SIFT image registration method, thereby generating a pipeline to locate a point in the point cloud corresponding to a point (pixel) on the mobile image. The second part of the method uses the point cloud data to compute dimensional information corresponding to pairs of points selected on the mobile image and displays the dimensions on top of the image. This paper describes all steps of the proposed method. The paper uses an experimental setup to mimic the mobile phone and server system and presents some initial but encouraging results.
Formation of massive, dense cores by cloud-cloud collisions
NASA Astrophysics Data System (ADS)
Takahira, Ken; Shima, Kazuhiro; Habe, Asao; Tasker, Elizabeth J.
2018-03-01
We performed sub-parsec (~0.014 pc) scale simulations of cloud-cloud collisions of two idealized turbulent molecular clouds (MCs) with different masses in the range of (0.76-2.67) × 10^4 M_{⊙} and with collision speeds of 5-30 km s^{-1}. Those parameters are larger than in Takahira, Tasker, and Habe (2014, ApJ, 792, 63), in which the colliding system showed a partial gaseous arc morphology that supports, through numerical simulations, the NANTEN observations of objects indicated to be colliding MCs. Gas clumps with density greater than 10^{-20} g cm^{-3} were identified as pre-stellar cores and tracked through the simulation to investigate the effects of the mass of the colliding clouds and the collision speed on the resulting core population. Our results demonstrate that the properties of the smaller cloud are more important for the outcome of cloud-cloud collisions. The mass function of the formed cores can be approximated by a power-law relation with an index γ = -1.6 in slower cloud-cloud collisions (v ~ 5 km s^{-1}), in good agreement with observations of MCs. A faster relative speed increases the number of cores formed in the early stage of collisions and shortens the gas accretion phase of cores in the shocked region, leading to the suppression of core growth. A bending point appears in the high-mass part of the core mass function, and the bending point mass decreases with increasing collision speed for the same combination of colliding clouds. The part of the core mass function above the bending point mass can be approximated by a power law with γ between -2 and -3, similar to the power-law index of the massive part of the observed stellar initial mass function. We discuss the implications of our results for massive-star formation in our Galaxy.
Clouds off the Aleutian Islands
2017-12-08
March 23, 2010 - Clouds off the Aleutian Islands Interesting cloud patterns were visible over the Aleutian Islands in this image, captured by the MODIS on the Aqua satellite on March 14, 2010. Turbulence, caused by the wind passing over the highest points of the islands, is producing the pronounced eddies that swirl the clouds into a pattern called a vortex "street". In this image, the clouds have also aligned in parallel rows or streets. Cloud streets form when low-level winds move between and over obstacles causing the clouds to line up into rows (much like streets) that match the direction of the winds. At the point where the clouds first form streets, they're very narrow and well-defined. But as they age, they lose their definition, and begin to spread out and rejoin each other into a larger cloud mass. The Aleutians are a chain of islands that extend from Alaska toward the Kamchatka Peninsula in Russia. For more information related to this image go to: modis.gsfc.nasa.gov/gallery/individual.php?db_date=2010-0... For more information about Goddard Space Flight Center go here: www.nasa.gov/centers/goddard/home/index.html
Automated extraction and analysis of rock discontinuity characteristics from 3D point clouds
NASA Astrophysics Data System (ADS)
Bianchetti, Matteo; Villa, Alberto; Agliardi, Federico; Crosta, Giovanni B.
2016-04-01
A reliable characterization of fractured rock masses requires an exhaustive geometrical description of discontinuities, including orientation, spacing, and size. These are required to describe discontinuum rock mass structure, perform Discrete Fracture Network and DEM modelling, or provide input for rock mass classification or equivalent continuum estimates of rock mass properties. Although several advanced methodologies have been developed in the last decades, a complete characterization of discontinuity geometry in practice is still challenging, due to scale-dependent variability of fracture patterns and difficult accessibility to large outcrops. Recent advances in remote survey techniques, such as terrestrial laser scanning and digital photogrammetry, allow a fast and accurate acquisition of dense 3D point clouds, which promoted the development of several semi-automatic approaches to extract discontinuity features. Nevertheless, these often need user supervision on algorithm parameters which can be difficult to assess. To overcome this problem, we developed an original Matlab tool, allowing fast, fully automatic extraction and analysis of discontinuity features with no requirements on point cloud accuracy, density and homogeneity. The tool consists of a set of algorithms which: (i) process raw 3D point clouds, (ii) automatically characterize discontinuity sets, (iii) identify individual discontinuity surfaces, and (iv) analyse their spacing and persistence. The tool operates in either a supervised or unsupervised mode, starting from an automatic preliminary exploratory data analysis. The identification and geometrical characterization of discontinuity features is divided into steps. First, coplanar surfaces are identified in the whole point cloud using K-Nearest Neighbor and Principal Component Analysis algorithms optimized on point cloud accuracy and specified typical facet size. Then, discontinuity set orientation is calculated using Kernel Density Estimation and principal vector similarity criteria. Poles to points are assigned to individual discontinuity objects using simple custom vector clustering and Jaccard distance approaches, and each object is segmented into planar clusters using an improved version of the DBSCAN algorithm. Modal set orientations are then recomputed by cluster-based orientation statistics to avoid the effects of biases related to cluster size and density heterogeneity of the point cloud. Finally, spacing values are measured between individual discontinuity clusters along scanlines parallel to modal pole vectors, whereas individual feature size (persistence) is measured using 3D convex hull bounding boxes. Spacing and size are provided both as raw population data and as summary statistics. The tool is optimized for parallel computing on 64-bit systems, and a Graphical User Interface (GUI) has been developed to manage data processing and provide several outputs, including reclassified point clouds, tables, plots, derived fracture intensity parameters, and export to modelling software tools. We present test applications performed both on synthetic 3D data (simple 3D solids) and real case studies, validating the results with existing geomechanical datasets.
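The first algorithm step, identifying coplanar surfaces with K-Nearest Neighbor searches and Principal Component Analysis, can be sketched as a per-point plane fit. The code below is a generic Python illustration of that idea, not the authors' Matlab implementation, and the neighbourhood size k is an assumed parameter.

```python
import numpy as np
from scipy.spatial import cKDTree

def fit_local_planes(points, k=20):
    """Estimate a unit normal and a planarity score for every point via k-NN + PCA."""
    points = np.asarray(points, dtype=float)
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    planarity = np.empty(len(points))
    for i, nb in enumerate(idx):
        nbrs = points[nb] - points[nb].mean(axis=0)
        # eigen-decomposition of the local covariance (ascending eigenvalues)
        w, v = np.linalg.eigh(nbrs.T @ nbrs / k)
        normals[i] = v[:, 0]                           # smallest-eigenvalue eigenvector
        planarity[i] = (w[1] - w[0]) / (w[2] + 1e-12)  # high for coplanar neighbourhoods
    return normals, planarity
```

The smallest-eigenvalue eigenvector approximates the local surface normal, and thresholding the planarity score keeps only points lying on coplanar facets before they are clustered into discontinuity sets.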
Applications of 3D-EDGE Detection for ALS Point Cloud
NASA Astrophysics Data System (ADS)
Ni, H.; Lin, X. G.; Zhang, J. X.
2017-09-01
Edge detection has been one of the major issues in the field of remote sensing and photogrammetry. With the fast development of laser scanning sensor technology, dense point clouds have become increasingly common. Precise 3D edges can be detected from these point clouds, and a great number of edge or feature line extraction methods have been proposed. Among these methods, an easy-to-use 3D-edge detection method, AGPN (Analyzing Geometric Properties of Neighborhoods), has been proposed. The AGPN method detects edges based on the analysis of geometric properties of a query point's neighbourhood. It detects two kinds of 3D edges, boundary elements and fold edges, and it has many applications. This paper presents three applications of AGPN, i.e., 3D line segment extraction, ground point filtering, and ground breakline extraction. Experiments show that the AGPN method gives a straightforward solution to these applications.
Building Facade Modeling Under Line Feature Constraint Based on Close-Range Images
NASA Astrophysics Data System (ADS)
Liang, Y.; Sheng, Y. H.
2018-04-01
To solve existing problems in modeling building facades merely with point features based on close-range images, a new method for modeling building facades under a line feature constraint is proposed in this paper. Firstly, camera parameters and sparse spatial point cloud data were recovered using SfM, and 3D dense point clouds were generated with MVS. Secondly, line features were detected based on the gradient direction; the detected line features were fitted considering directions and lengths, and were then matched under multiple types of constraints and extracted from the multi-image sequence. Finally, the facade mesh of a building was triangulated with the point cloud and line features. The experiment shows that this method can effectively reconstruct the geometric facade of buildings by combining the advantages of point and line features of the close-range image sequence, especially in restoring the contour information of building facades.
NASA Astrophysics Data System (ADS)
Machado, Luiz A. T.; Lima, Wagner F. A.; Pinto, Osmar; Morales, Carlos A.
This work presents a relationship between atmospheric cloud-to-ground discharges and penetrative convective clouds. It combines the infrared and water vapor channels from the GOES-12 geostationary satellite with cloud-to-ground discharge data from the Brazilian Integrated Lightning Detection Network (RINDAT) during the period from January to February 2005. The difference between water vapor and infrared brightness temperature is a tracer of penetrating clouds. Due to the water vapor channel's strong absorption, this difference is positive only during overshooting cases, when convective clouds penetrate the stratosphere. From this difference and the cloud-to-ground discharges measured on the ground by RINDAT, it was possible to adjust exponential curves that relate the brightness temperature difference between these two channels to the probability of occurrence of cloud-to-ground discharges, with a very large coefficient of determination. If the WV-IR brightness temperature difference is greater than -15 K, there is a large potential for cloud-to-ground discharge activity. As this difference increases, the probability of cloud-to-ground discharges increases; for example, if this difference is equal to zero, the probability of having at least one cloud-to-ground discharge is 10.9%, 7.0% for two, 4.4% for four, 2.7% for eight, and 1.5% for sixteen cloud-to-ground discharges. Through this process, a scheme was developed that estimates the probability of occurrence of cloud-to-ground discharges over the entire continental region of South America.
Transmission spectroscopy of the inflated exo-Saturn HAT-P-19b
NASA Astrophysics Data System (ADS)
Mallonn, M.; von Essen, C.; Weingrill, J.; Strassmeier, K. G.; Ribas, I.; Carroll, T. A.; Herrero, E.; Granzer, T.; Claret, A.; Schwope, A.
2015-08-01
Context. Transiting highly inflated giant planets offer the possibility of characterizing their atmospheres. A fraction of the starlight passes through the high-altitude layers of the planetary atmosphere during transit. The resulting absorption is expected to be wavelength dependent for cloud-free atmospheres, with an amplitude of up to 10^{-3} of the stellar flux, while a high-altitude cloud deck would cause a gray opacity. Aims: We observed the Saturn-mass and Jupiter-sized exoplanet HAT-P-19b to refine its transit parameters and ephemeris as well as to shed first light on its transmission spectrum. We monitored the host star over one year to quantify its flux variability and to correct the transmission spectrum for a slope caused by starspots. Methods: A transit of HAT-P-19b was observed spectroscopically with OSIRIS at the Gran Telescopio Canarias in January 2012. The spectra of the target and the comparison star covered the wavelength range from 5600 to 7600 Å. One high-precision differential light curve was created by integrating the entire spectral flux. This white-light curve was used to derive absolute transit parameters. Furthermore, a set of light curves over wavelength was formed by a flux integration in 41 wavelength channels of 50 Å width. We analyzed these spectral light curves for chromatic variations of transit depth. Results: The transit fit of the combined white-light curve yields a refined value of the planet-to-star radius ratio of 0.1390 ± 0.0012 and an inclination of 88.89 ± 0.32 deg. After a re-analysis of published data, we refine the orbital period to 4.0087844 ± 0.0000015 days. We obtain a flat transmission spectrum without significant additional absorption at any wavelength or any slope. However, our accuracy is not sufficient to significantly rule out the presence of a pressure-broadened sodium feature. Our photometric monitoring campaign allowed for an estimate of the stellar rotation period of 35.5 ± 2.5 days and an improved age estimate of 5.5^{+1.8}_{-1.3} Gyr by gyrochronology. The calculated correction of the transit depth for unocculted spots on the visible hemisphere was found to be well within the derived 1σ uncertainty of the white-light curve and the spectral data points of the transmission spectrum. Based on observations made with the Gran Telescopio Canarias (GTC), installed in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias, in the island of La Palma, as well as on data obtained with the STELLA robotic telescope in Tenerife, an AIP facility jointly operated by AIP and IAC. Tables 1 and 3 are available in electronic form at http://www.aanda.org
The existence of inflection points for generalized log-aesthetic curves satisfying G1 data
NASA Astrophysics Data System (ADS)
Karpagavalli, R.; Gobithaasan, R. U.; Miura, K. T.; Shanmugavel, Madhavan
2015-12-01
Log-Aesthetic (LA) curves have been implemented in CAD/CAM systems for various design feats. LA curves possess a linear Logarithmic Curvature Graph (LCG) with gradient (shape parameter) denoted as α. In 2009, a generalized form of LA curves called Generalized Log-Aesthetic Curves (GLAC) was proposed, which has an extra shape parameter ν compared to LA curves. Recently, a G1-continuous GLAC algorithm was proposed that utilizes the extra shape parameter using four control points. This paper discusses the existence of inflection points in a GLAC segment satisfying G1 Hermite data and the effect of an inflection point on the convex hull property. It is found that the existence of an inflection point can be avoided by manipulating the value of α. Numerical experiments show that increasing α may remove the inflection point (if any) in a GLAC segment.
Real-time terrain storage generation from multiple sensors towards mobile robot operation interface.
Song, Wei; Cho, Seoungjae; Xi, Yulong; Cho, Kyungeun; Um, Kyhyun
2014-01-01
A mobile robot mounted with multiple sensors is used to rapidly collect 3D point clouds and video images so as to allow accurate terrain modeling. In this study, we develop a real-time terrain storage generation and representation system including a nonground point database (PDB), ground mesh database (MDB), and texture database (TDB). A voxel-based flag map is proposed for incrementally registering large-scale point clouds in a terrain model in real time. We quantize the 3D point clouds into 3D grids of the flag map as a comparative table in order to remove the redundant points. We integrate the large-scale 3D point clouds into a nonground PDB and a node-based terrain mesh using the CPU. Subsequently, we program a graphics processing unit (GPU) to generate the TDB by mapping the triangles in the terrain mesh onto the captured video images. Finally, we produce a nonground voxel map and a ground textured mesh as a terrain reconstruction result. Our proposed methods were tested in an outdoor environment. Our results show that the proposed system was able to rapidly generate terrain storage and provide high resolution terrain representation for mobile mapping services and a graphical user interface between remote operators and mobile robots.
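The voxel-based flag map is essentially a 3D occupancy table used to discard redundant points before they enter the nonground PDB. A simplified sketch of that idea is shown below; the real system is incremental and split across CPU and GPU as described above, and the voxel size here is an assumed value.

```python
import numpy as np

class VoxelFlagMap:
    """Coarse stand-in for the paper's voxel-based flag map (assumed voxel size)."""

    def __init__(self, voxel_size=0.1):
        self.voxel_size = voxel_size
        self.occupied = set()          # flagged voxel keys

    def register(self, points):
        """Return only the points that fall into not-yet-flagged voxels."""
        points = np.asarray(points, dtype=float)
        keys = np.floor(points / self.voxel_size).astype(np.int64)
        keep = []
        for i, key in enumerate(map(tuple, keys)):
            if key not in self.occupied:
                self.occupied.add(key)
                keep.append(i)
        return points[keep]

# flag_map = VoxelFlagMap(voxel_size=0.1)
# unique_pts = flag_map.register(scan_points)   # scan_points: (N, 3) array per scan
```

Feeding each incoming scan through the same flag map reproduces the "comparative table" behaviour in a coarse way: any point that falls into an already-flagged voxel is treated as redundant and dropped.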
Sideloading - Ingestion of Large Point Clouds Into the Apache Spark Big Data Engine
NASA Astrophysics Data System (ADS)
Boehm, J.; Liu, K.; Alis, C.
2016-06-01
In the geospatial domain we have now reached the point where data volumes we handle have clearly grown beyond the capacity of most desktop computers. This is particularly true in the area of point cloud processing. It is therefore naturally lucrative to explore established big data frameworks for big geospatial data. The very first hurdle is the import of geospatial data into big data frameworks, commonly referred to as data ingestion. Geospatial data is typically encoded in specialised binary file formats, which are not naturally supported by the existing big data frameworks. Instead such file formats are supported by software libraries that are restricted to single CPU execution. We present an approach that allows the use of existing point cloud file format libraries on the Apache Spark big data framework. We demonstrate the ingestion of large volumes of point cloud data into a compute cluster. The approach uses a map function to distribute the data ingestion across the nodes of a cluster. We test the capabilities of the proposed method to load billions of points into a commodity hardware compute cluster and we discuss the implications on scalability and performance. The performance is benchmarked against an existing native Apache Spark data import implementation.
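A minimal PySpark sketch of the described ingestion pattern is given below: a list of tile file names is distributed across the cluster, and each worker uses an ordinary single-CPU point cloud library to decode its tiles. The use of laspy and the file names are assumptions for illustration; the paper's actual implementation and format bindings may differ.

```python
from pyspark.sql import SparkSession
import laspy  # conventional single-CPU LAS/LAZ reader, assumed installed on all workers

def read_tile(path):
    """Decode one LAS tile on the worker that receives it and yield xyz tuples."""
    las = laspy.read(path)          # laspy 2.x API
    for x, y, z in zip(las.x, las.y, las.z):
        yield (float(x), float(y), float(z))

spark = SparkSession.builder.appName("pointcloud-ingest").getOrCreate()
tiles = ["tiles/part_000.las", "tiles/part_001.las"]   # hypothetical tile list
points = spark.sparkContext.parallelize(tiles, numSlices=len(tiles)).flatMap(read_tile)
print(points.count())               # forces the distributed ingestion to run
```

Because the map function only ships file names, each node opens and parses its own tiles locally, which is the property that lets a single-threaded file format library scale across a commodity cluster.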
Hamraz, Hamid; Contreras, Marco A; Zhang, Jun
2017-07-28
Airborne laser scanning (LiDAR) point clouds over large forested areas can be processed to segment individual trees and subsequently extract tree-level information. Existing segmentation procedures typically detect more than 90% of overstory trees, yet they barely detect 60% of understory trees because of the occlusion effect of higher canopy layers. Although understory trees provide limited financial value, they are an essential component of ecosystem functioning by offering habitat for numerous wildlife species and influencing stand development. Here we model the occlusion effect in terms of point density. We estimate the fractions of points representing different canopy layers (one overstory and multiple understory) and also pinpoint the required density for reasonable tree segmentation (where accuracy plateaus). We show that at a density of ~170 pt/m² understory trees can likely be segmented as accurately as overstory trees. Given the advancements of LiDAR sensor technology, point clouds will affordably reach this required density. Using modern computational approaches for big data, the denser point clouds can efficiently be processed to ultimately allow accurate remote quantification of forest resources. The methodology can also be adopted for other similar remote sensing or advanced imaging applications such as geological subsurface modelling or biomedical tissue analysis.
NASA Astrophysics Data System (ADS)
Böhm, J.; Bredif, M.; Gierlinger, T.; Krämer, M.; Lindenberg, R.; Liu, K.; Michel, F.; Sirmacek, B.
2016-06-01
Current 3D data capture, as implemented for example on airborne or mobile laser scanning systems, can efficiently sample the surface of a city with billions of unselective points in one working day. What is still difficult is to extract and visualize meaningful information hidden in these point clouds with the same efficiency. This is where the FP7 IQmulus project enters the scene. IQmulus is an interactive facility for processing and visualizing big spatial data. In this study the potential of IQmulus is demonstrated on a laser mobile mapping point cloud of 1 billion points sampling ~10 km of street environment in Toulouse, France. After the data is uploaded to the IQmulus Hadoop Distributed File System, a workflow is defined by the user consisting of retiling the data followed by a PCA-driven local dimensionality analysis, which runs efficiently on the IQmulus cloud facility using a Spark implementation. Points scattering in three directions are clustered into the tree class and are then separated into individual trees. Five hours of processing on the 12-node computing cluster results in the automatic identification of 4000+ urban trees. Visualization of the results in the IQmulus fat client helps users to appreciate the results, and developers to identify remaining flaws in the processing workflow.
A snapshot of the inner dusty regions of a R CrB-type variable
NASA Astrophysics Data System (ADS)
Leão, I. C.; de Laverny, P.; Chesneau, O.; Mékarnia, D.; de Medeiros, J. R.
2007-05-01
Context: R Coronae Borealis (R CrB) variable stars are suspected to sporadically eject optically thick dust clouds which cause, when one of them lies on the line of sight, a huge brightness decline in visible light. Direct detections with 8-m class adaptive optics of such clouds located at about 0.2-0.3 arcsec from the center (~1000 stellar radii) were recently reported for RY Sgr, the brightest R CrB star of the southern hemisphere. Aims: Mid-infrared interferometric observations of RY Sgr allowed us to explore the circumstellar regions much closer to the central star (~20-40 mas) to look for the signature of any heterogeneities and to characterize them. Methods: Using the VLTI/MIDI instrument, five dispersed visibility curves in the N band were recorded in May and June 2005 with different projected baselines oriented in two roughly perpendicular directions. The visibility curves at large spatial frequencies exhibit a sinusoidal shape, whereas at shorter spatial frequencies the visibility curves follow a Gaussian decrease. These observations are well interpreted with a geometrical model consisting of a central star surrounded by an extended circumstellar envelope in which one bright cloud is embedded. Results: Within this simple geometrical scheme, the inner 110 AU dusty environment of RY Sgr is dominated at the time of observations by a single dusty cloud, which at 10 μm represents ~10% of the total flux of the whole system, slightly less than the stellar flux. The cloud is located at about 100 stellar radii (or ~30 AU) from the center toward the East-North-East direction (or the symmetric direction with respect to the center) within a circumstellar envelope whose FWHM is about 120 stellar radii. This first detection of a cloud so close to the central star supports the classical scenario of the R CrB brightness variations in the optical spectral domain and demonstrates the feasibility of a temporal monitoring of the dusty environment of this star on a monthly scale. Based on observations collected with the VLTI/MIDI instrument at Paranal Observatory, ESO (Chile) - Programme 75.D-0660. FITS files for the visibilities are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/466/L1
Automatic Modelling of Rubble Mound Breakwaters from LIDAR Data
NASA Astrophysics Data System (ADS)
Bueno, M.; Díaz-Vilariño, L.; González-Jorge, H.; Martínez-Sánchez, J.; Arias, P.
2015-08-01
Rubble mound breakwater maintenance is critical to the protection of beaches and ports. LiDAR systems provide accurate point clouds of the emerged part of the structure that can be modelled to make the data more useful and easy to handle. This work introduces a methodology for the automatic modelling of breakwaters with armour units of cube shape. The algorithm is divided into three main steps: normal vector computation, plane segmentation, and cube reconstruction. Plane segmentation uses the normal orientation of the points and the edge length of the cube. Cube reconstruction uses the intersection of three perpendicular planes and the edge length. Three point clouds cropped from the main point cloud of the structure are used for the tests. The number of cubes detected, relative to the total number of physical cubes, is around 56 % for two of the point clouds and 32 % for the third. Accuracy assessment is done by comparison with manually drawn cubes, calculating the differences between the vertices. It ranges between 6.4 cm and 15 cm. Computing time ranges between 578.5 s and 8018.2 s. The computing time increases with the number of cubes and the requirements of collision detection.
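The cube reconstruction step relies on intersecting three mutually perpendicular segmented planes to recover a vertex. A small sketch of that geometric core is given below, assuming each plane is supplied as a unit normal plus one point lying on it.

```python
import numpy as np

def cube_vertex(normals, points_on_planes):
    """Intersect three (nearly) perpendicular planes to recover a cube vertex.

    normals: (3, 3) array, one plane normal per row.
    points_on_planes: (3, 3) array, one point lying on each plane.
    Raises numpy.linalg.LinAlgError if the planes are not independent.
    """
    n = np.asarray(normals, dtype=float)
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    d = np.einsum("ij,ij->i", n, np.asarray(points_on_planes, dtype=float))
    return np.linalg.solve(n, d)    # solves n @ x = d

# vertex = cube_vertex(plane_normals, plane_points)
```

Starting from one recovered vertex, the remaining vertices of the armour unit can be generated by stepping the known edge length along the three plane normals.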
Application of Template Matching for Improving Classification of Urban Railroad Point Clouds
Arastounia, Mostafa; Oude Elberink, Sander
2016-01-01
This study develops an integrated data-driven and model-driven approach (template matching) that clusters the urban railroad point clouds into three classes of rail track, contact cable, and catenary cable. The employed dataset covers 630 m of the Dutch urban railroad corridors in which there are four rail tracks, two contact cables, and two catenary cables. The dataset includes only geometrical information (three dimensional (3D) coordinates of the points) with no intensity data and no RGB data. The obtained results indicate that all objects of interest are successfully classified at the object level with no false positives and no false negatives. The results also show that an average 97.3% precision and an average 97.7% accuracy at the point cloud level are achieved. The high precision and high accuracy of the rail track classification (both greater than 96%) at the point cloud level stems from the great impact of the employed template matching method on excluding the false positives. The cables also achieve quite high average precision (96.8%) and accuracy (98.4%) due to their high sampling and isolated position in the railroad corridor. PMID:27973452
The Use of Uas for Rapid 3d Mapping in Geomatics Education
NASA Astrophysics Data System (ADS)
Teo, Tee-Ann; Tian-Yuan Shih, Peter; Yu, Sz-Cheng; Tsai, Fuan
2016-06-01
With the development of technology, UAS has become an advanced tool to support rapid mapping for disaster response. The aim of this study is to develop educational modules for UAS data processing in rapid 3D mapping. The modules designed for this study are focused on UAV data processing with available freeware or trial software for education purposes. The key modules include orientation modelling, 3D point cloud generation, image georeferencing, and visualization. The orientation modelling module adopts VisualSFM to determine the projection matrix for each image station. Besides, approximate ground control points are measured from OpenStreetMap for absolute orientation. The second module uses SURE and the orientation files from the previous module for 3D point cloud generation. Then, ground point selection and digital terrain model generation can be achieved with LAStools. The third module stitches individual rectified images into a mosaic image using Microsoft ICE (Image Composite Editor). The last module visualizes and measures the generated dense point clouds in CloudCompare. These comprehensive UAS processing modules allow the students to gain the skills to process and deliver UAS photogrammetric products in rapid 3D mapping. Moreover, they can also apply the photogrammetric products for analysis in practice.
NASA Astrophysics Data System (ADS)
Hamraz, Hamid; Contreras, Marco A.; Zhang, Jun
2017-08-01
An airborne LiDAR point cloud representing a forest contains 3D data from which the vertical stand structure, even of understory layers, can be derived. This paper presents a tree segmentation approach for multi-story stands that stratifies the point cloud into canopy layers and segments individual tree crowns within each layer using a digital surface model based tree segmentation method. The novelty of the approach is the stratification procedure that separates the point cloud into an overstory and multiple understory tree canopy layers by analyzing vertical distributions of LiDAR points within overlapping locales. The procedure does not make a priori assumptions about the shape and size of the tree crowns and can, independent of the tree segmentation method, be utilized to vertically stratify tree crowns of forest canopies. We applied the proposed approach to the University of Kentucky Robinson Forest - a natural deciduous forest with complex and highly variable terrain and vegetation structure. The segmentation results showed that using the stratification procedure strongly improved detecting understory trees (from 46% to 68%) at the cost of introducing a fair number of over-segmented understory trees (increased from 1% to 16%), while barely affecting the overall segmentation quality of overstory trees. Results of vertical stratification of the canopy showed that the point density of understory canopy layers was suboptimal for performing a reasonable tree segmentation, suggesting that acquiring denser LiDAR point clouds would allow more improvements in segmenting understory trees. As shown by inspecting correlations of the results with forest structure, the segmentation approach is applicable to a variety of forest types.
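A toy version of the locale-wise stratification idea is sketched below: within one locale, the vertical distribution of point heights is searched top-down for an empty gap that separates the overstory crown from lower canopy layers. The bin width and gap size are assumed parameters, and the authors' actual procedure, which analyzes overlapping locales, is considerably more robust.

```python
import numpy as np

def split_height(heights, bin_m=0.5, min_gap_bins=2):
    """Return a height separating the overstory from lower layers in one locale,
    or None if no vertical gap is found (single canopy layer).

    heights: above-ground heights of the LiDAR points inside the locale (metres).
    """
    heights = np.asarray(heights, dtype=float)
    edges = np.arange(0.0, heights.max() + bin_m, bin_m)
    hist, edges = np.histogram(heights, bins=edges)
    run = 0
    for i in range(len(hist) - 1, -1, -1):          # scan bins from the top down
        run = run + 1 if hist[i] == 0 else 0
        if run >= min_gap_bins and hist[:i].sum() > 0:
            return 0.5 * (edges[i] + edges[i + run])  # middle of the empty gap
    return None

# cut = split_height(locale_heights)
# understory_mask = locale_heights < cut   # when a split exists
```

Repeating this per locale and smoothing the resulting cut surface gives a layer boundary along which the point cloud can be stratified before running the per-layer crown segmentation.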
Short apsidal period of three eccentric eclipsing binaries discovered in the Large Magellanic Cloud
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Kyeongsoo; Lee, Chung-Uk; Kim, Seung-Lee
2014-06-01
We present new elements of apsidal motion in three eccentric eclipsing binaries located in the Large Magellanic Cloud. The apsidal motions of the systems were analyzed using both light curves and eclipse timings. The OGLE-III data obtained during the long period of 8 yr (2002-2009) allowed us to determine the apsidal motion period from their analyses. The existence of third light in all selected systems was investigated by light curve analysis. The O – C diagrams of EROS 1018, EROS 1041, and EROS 1054 were analyzed using the 30, 44, and 26 new times of minimum light, respectively, determined from full light curves constructed from EROS, MACHO, OGLE-II, OGLE-III, and our own observations. This enabled a detailed study of the apsidal motion in these systems for the first time. All of the systems have significant apsidal motion with periods below 100 yr. In particular, EROS 1018 shows a very fast apsidal period of 19.9 ± 2.2 yr in a detached system.
NASA Astrophysics Data System (ADS)
Wang, Jinxia; Dou, Aixia; Wang, Xiaoqing; Huang, Shusong; Yuan, Xiaoxiang
2016-11-01
Compared to remote sensing imagery, post-earthquake airborne Light Detection And Ranging (LiDAR) point cloud data contain high-precision three-dimensional information on earthquake damage, which can improve the accuracy of identifying destroyed buildings. However, after an earthquake, damaged buildings show so many different characteristics that the most commonly used pre-processing methods currently cannot distinguish between tree points and damaged-building points. In this study, we analyse the number of returns per pulse for tree and damaged-building point clouds and explore methods to distinguish between them. We propose a new method that searches a spatial neighbourhood around each point and calculates the ratio (R) of neighbourhood points whose number of returns per pulse is greater than 1, in order to separate trees from buildings. We select point clouds of typical undamaged buildings, collapsed buildings, and trees as samples from airborne LiDAR point cloud data acquired after the 2010 MW 7.0 Haiti earthquake, chosen through human-computer interaction. From these samples we determine the R-value threshold that distinguishes trees from buildings and apply it to test areas. The experimental results show that the proposed method can distinguish between building points (undamaged and damaged) and tree points effectively, but it is limited in areas where buildings are varied, damage is complex, and trees are dense, so the method requires further improvement.
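The core quantity of the method, the neighbourhood ratio R of multi-return points, can be computed directly from the xyz coordinates and the per-point "number of returns" attribute. The sketch below uses a fixed-radius neighbourhood as an assumption; the study's actual neighbourhood definition and the separating threshold must be calibrated on labelled samples as described above.

```python
import numpy as np
from scipy.spatial import cKDTree

def multi_return_ratio(xyz, num_returns, radius=2.0):
    """For every point, the fraction of neighbours whose pulse produced >1 return.

    Vegetation tends to yield high ratios because foliage splits the laser pulse,
    whereas building roofs and rubble mostly return a single echo.
    """
    xyz = np.asarray(xyz, dtype=float)
    num_returns = np.asarray(num_returns)
    tree = cKDTree(xyz)
    neighbours = tree.query_ball_point(xyz, r=radius)
    return np.array([np.mean(num_returns[nb] > 1) for nb in neighbours])

# tree_mask = multi_return_ratio(xyz, num_returns) > r_threshold  # calibrated threshold
```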
Diffuse cloud chemistry. [in interstellar matter
NASA Technical Reports Server (NTRS)
Van Dishoeck, Ewine F.; Black, John H.
1988-01-01
The current status of models of diffuse interstellar clouds is reviewed. A detailed comparison of recent gas-phase steady-state models shows that both the physical conditions and the molecular abundances in diffuse clouds are still not fully understood. Alternative mechanisms are discussed and observational tests which may discriminate between the various models are suggested. Recent developments regarding the velocity structure of diffuse clouds are mentioned. Similarities and differences between the chemistries in diffuse clouds and those in translucent and high latitude clouds are pointed out.
The pointing errors of geosynchronous satellites
NASA Technical Reports Server (NTRS)
Sikdar, D. N.; Das, A.
1971-01-01
A study of the correlation between cloud motion and wind field was initiated. Cloud heights and displacements were being obtained from a ceilometer and movie pictures, while winds were measured from pilot balloon observations on a near-simultaneous basis. Cloud motion vectors were obtained from time-lapse cloud pictures, using the WINDCO program, for 27, 28 July, 1969, in the Atlantic. The relationship between observed features of cloud clusters and the ambient wind field derived from cloud trajectories on a wide range of space and time scales is discussed.
NASA Astrophysics Data System (ADS)
Nakatsuji, Noriaki; Matsushima, Kyoji
2017-03-01
Full-parallax high-definition CGHs composed of more than a billion pixels have so far been created only by the polygon-based method because of its high performance. However, GPUs recently allow us to generate CGHs much faster with the point-cloud method. In this paper, we measure the computation time of object fields for full-parallax high-definition CGHs, which are composed of 4 billion pixels and reconstruct the same scene, using the point-cloud method with a GPU and the polygon-based method with a CPU. In addition, we compare the optical and simulated reconstructions of the CGHs created by these techniques to verify the image quality.
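In the point-cloud approach, the object field on the hologram plane is the superposition of spherical waves emitted by every object point; this is the computation that maps well to GPUs. A compact NumPy sketch of that superposition is given below; the wavelength, pixel pitch, and plane size are placeholder values, and no GPU acceleration or occlusion handling is included.

```python
import numpy as np

def object_field(points, amplitudes, wavelength=633e-9, pitch=1e-6, nx=1024, ny=1024):
    """Sum spherical waves from object points (x, y, z) on an nx-by-ny hologram plane."""
    k = 2.0 * np.pi / wavelength
    x = (np.arange(nx) - nx / 2) * pitch
    y = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(x, y)
    field = np.zeros((ny, nx), dtype=np.complex128)
    for (px, py, pz), a in zip(points, amplitudes):
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += a * np.exp(1j * k * r) / r       # spherical wave of one point source
    return field

# the full CGH additionally requires interference with a reference wave
# u = object_field(cloud_xyz, np.ones(len(cloud_xyz)))
```

Because every point contributes to every pixel, the cost scales with the number of points times the number of pixels, which is why billion-pixel point-cloud holograms only became tractable with GPU implementations.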
Development of Three-Dimensional Dental Scanning Apparatus Using Structured Illumination
Park, Anjin; Lee, Byeong Ha; Eom, Joo Beom
2017-01-01
We demonstrated a three-dimensional (3D) dental scanning apparatus based on structured illumination. A liquid lens was used for tuning focus and a piezomotor stage was used for the shift of structured light. A simple algorithm, which detects intensity modulation, was used to perform optical sectioning with structured illumination. We reconstructed a 3D point cloud, which represents the 3D coordinates of the digitized surface of a dental gypsum cast by piling up sectioned images. We performed 3D registration of an individual 3D point cloud, which includes alignment and merging the 3D point clouds to exhibit a 3D model of the dental cast. PMID:28714897
Automatic Building Abstraction from Aerial Photogrammetry
NASA Astrophysics Data System (ADS)
Ley, A.; Hänsch, R.; Hellwich, O.
2017-09-01
Multi-view stereo has been shown to be a viable tool for the creation of realistic 3D city models. Nevertheless, it still poses significant challenges, since it results in dense but noisy and incomplete point clouds when applied to aerial images. 3D city modelling usually requires a different representation of the 3D scene than these point clouds. This paper applies a fully automatic pipeline to generate a simplified mesh from a given dense point cloud. The mesh provides a certain level of abstraction, as it only consists of relatively large planar and textured surfaces. Thus, it is possible to remove noise, outliers, and clutter while maintaining a high level of accuracy.
Ordóñez, Celestino; Cabo, Carlos; Sanz-Ablanedo, Enoc
2017-01-01
Mobile laser scanning (MLS) is a modern and powerful technology capable of obtaining massive point clouds of objects in a short period of time. Although this technology is nowadays being widely applied in urban cartography and 3D city modelling, it has some drawbacks that need to be avoided in order to strengthen it. One of the most important shortcomings of MLS data is concerned with the fact that it provides an unstructured dataset whose processing is very time-consuming. Consequently, there is a growing interest in developing algorithms for the automatic extraction of useful information from MLS point clouds. This work is focused on establishing a methodology and developing an algorithm to detect pole-like objects and classify them into several categories using MLS datasets. The developed procedure starts with the discretization of the point cloud by means of a voxelization, in order to simplify and reduce the processing time in the segmentation process. In turn, a heuristic segmentation algorithm was developed to detect pole-like objects in the MLS point cloud. Finally, two supervised classification algorithms, linear discriminant analysis and support vector machines, were used to distinguish between the different types of poles in the point cloud. The predictors are the principal component eigenvalues obtained from the Cartesian coordinates of the laser points, the range of the Z coordinate, and some shape-related indexes. The performance of the method was tested in an urban area with 123 poles of different categories. Very encouraging results were obtained, since the accuracy rate was over 90%. PMID:28640189
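The classification stage described here uses per-object predictors (principal component eigenvalues of the laser points, the Z range, and shape-related indexes) fed to a supervised classifier. The sketch below assembles such a feature vector and trains an SVM with scikit-learn; the particular shape index and SVM settings are assumptions, not the authors' exact choices.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def pole_features(segment_xyz):
    """Per-object predictors: covariance eigenvalues, Z range, and a simple shape index."""
    xyz = np.asarray(segment_xyz, dtype=float)
    cov = np.cov((xyz - xyz.mean(axis=0)).T)
    w = np.sort(np.linalg.eigvalsh(cov))[::-1]       # lambda1 >= lambda2 >= lambda3
    z_range = np.ptp(xyz[:, 2])
    elongation = w[0] / (w[1] + 1e-12)               # illustrative shape-related index
    return np.r_[w, z_range, elongation]

def train_pole_classifier(X, y):
    """X: stacked pole_features of labelled segments; y: their pole categories."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    return clf.fit(X, y)
```

Linear discriminant analysis, the paper's second classifier, can be swapped in for the SVC using sklearn.discriminant_analysis.LinearDiscriminantAnalysis without changing the feature computation.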
Estimating Aircraft Heading Based on Laserscanner Derived Point Clouds
NASA Astrophysics Data System (ADS)
Koppanyi, Z.; Toth, C., K.
2015-03-01
Using LiDAR sensors for tracking and monitoring an operating aircraft is a new application. In this paper, we present data processing methods to estimate the heading of a taxiing aircraft using laser point clouds. During the data acquisition, a Velodyne HDL-32E laser scanner tracked a moving Cessna 172 airplane. The point clouds captured at different times were used for heading estimation. After addressing the problem and specifying the equation of motion to reconstruct the aircraft point cloud from the consecutive scans, three methods are investigated here. The first requires a reference model to estimate the relative angle from the captured data by fitting different cross-sections (horizontal profiles). In the second approach, the iterative closest point (ICP) method is used between consecutive point clouds to determine the horizontal translation of the captured aircraft body. Regarding the ICP, three different versions were compared, namely the ordinary 3D, 3-DoF 3D, and 2-DoF 3D ICP. It was found that the 2-DoF 3D ICP provides the best performance. Finally, the last algorithm searches for the unknown heading and velocity parameters by minimizing the volume of the reconstructed plane. The three methods were compared using three types of test data, which are distinguished by object-sensor distance, heading, and velocity. We found that the ICP algorithm fails at long distances and when the aircraft motion direction is perpendicular to the scan plane, but the first and the third methods give robust and accurate results at a 40 m object distance and at ~12 knots for a small Cessna airplane.
NASA Astrophysics Data System (ADS)
Manjavacas, Elena; Apai, Dániel; Zhou, Yifan; Karalidi, Theodora; Lew, Ben W. P.; Schneider, Glenn; Cowan, Nicolas; Metchev, Stan; Miles-Páez, Paulo A.; Burgasser, Adam J.; Radigan, Jacqueline; Bedin, Luigi R.; Lowrance, Patrick J.; Marley, Mark S.
2018-01-01
Observations of rotational modulations of brown dwarfs and giant exoplanets allow the characterization of condensate cloud properties. As of now, rotational spectral modulations have only been seen in three L-type brown dwarfs. We report here the discovery of rotational spectral modulations in LP261-75B, an L6-type intermediate surface gravity companion to an M4.5 star. As a part of the Cloud Atlas Treasury program, we acquired time-resolved Wide Field Camera 3 grism spectroscopy (1.1–1.69 μm) of LP261-75B. We find gray spectral variations, with the relative amplitude displaying only a weak wavelength dependence and no evidence for lower-amplitude modulations in the 1.4 μm water band than in the adjacent continuum. The likely rotational modulation period is 4.78 ± 0.95 hr, although the rotational phase is not well sampled. The minimum relative amplitude in the white light curve measured over the whole wavelength range is 2.41% ± 0.14%. We report an unusual light curve, which seems to have three peaks approximately evenly distributed in rotational phase. The spectral modulations suggest that the upper atmosphere cloud properties in LP261-75B are similar to those of two other mid-L dwarfs of typical infrared colors, but differ from those of the extremely red L dwarf WISE0047.
PROGRA2 experiment: New results for dust clouds and regoliths analogs
NASA Astrophysics Data System (ADS)
Hadamcik, E.; Renard, J.-B.; Levasseur-Regourd, A. C.; Worms, J.-C.
2006-01-01
With the PROGRA2 experiment, the linear polarization of scattered light is measured on various types of dust clouds lifted by microgravity or by an air draught. The aim is to compare the phase curves for dust analogs with those obtained in the Solar System (cometary comae, and solid particles in planetary atmospheres) by remote-sensing and in situ techniques. Measurements are also performed on layers of particles (on the ground) and compared with remote measurements of asteroidal regoliths and planetary surfaces. New phase curves have been obtained, e.g., for quartz samples, crystals, fluffy mixtures of silica and carbon blacks, and a high-porosity regolith analog made of micron-sized silica spheres. This work will contribute to the choice of the samples to be studied with the ICAPS experiment onboard the ISS and in the precursor experiment.
Temporal Analysis and Automatic Calibration of the Velodyne HDL-32E LiDAR System
NASA Astrophysics Data System (ADS)
Chan, T. O.; Lichti, D. D.; Belton, D.
2013-10-01
At the end of the first quarter of 2012, more than 600 Velodyne LiDAR systems had been sold worldwide for various robotic and high-accuracy survey applications. The ultra-compact Velodyne HDL-32E LiDAR has become a predominant sensor for many applications that require lower sensor size/weight and cost. For high-accuracy applications, cost-effective calibration methods with minimal manual intervention are always desired by users. However, the calibrations are complicated by the Velodyne LiDAR's narrow vertical field of view and the highly time-variant nature of its measurements. In this paper, the temporal stability of the HDL-32E is first analysed as the motivation for developing a new, automated calibration method. This is followed by a detailed description of the calibration method, which is driven by a novel segmentation method for extracting vertical cylindrical features from the Velodyne point clouds. The proposed segmentation method utilizes the Velodyne point cloud's slice-like nature and first decomposes the point cloud into 2D layers. The layers are then treated as 2D images and processed with the Generalized Hough Transform, which extracts the points distributed in circular patterns from the point cloud layers. Subsequently, the vertical cylindrical features can be readily extracted from the whole point cloud based on the previously extracted points. The points are then passed to the calibration, which estimates the cylinder parameters and the LiDAR's additional parameters simultaneously by constraining the segmented points to fit the cylindrical geometric model in such a way that the weighted sum of the adjustment residuals is minimized. The proposed calibration is highly automatic, and this allows end users to obtain the time-variant additional parameters instantly and frequently whenever vertical cylindrical features are present in the scene. The methods were verified with two different real datasets, and the results suggest that up to 78.43% accuracy improvement for the HDL-32E can be achieved using the proposed calibration method.
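The segmentation idea, slicing the slice-like Velodyne point cloud into 2D layers, rasterising each layer, and extracting points that lie on circular patterns, can be approximated with a standard circular Hough transform, as sketched below. The paper uses the Generalized Hough Transform; OpenCV's HoughCircles is substituted here purely for illustration, and the cell size and detector parameters are assumptions.

```python
import numpy as np
import cv2  # OpenCV; substituted here for the paper's Generalized Hough Transform

def circles_in_layer(layer_xy, cell=0.02, min_r=0.05, max_r=0.5):
    """Rasterise one horizontal point cloud layer and detect circular patterns.

    Returns an (N, 3) array of (x, y, radius) in metres, or an empty array.
    """
    xy = np.asarray(layer_xy, dtype=float)
    origin = xy.min(axis=0)
    cols, rows = ((xy - origin) / cell).astype(int).T       # x -> column, y -> row
    img = np.zeros((rows.max() + 1, cols.max() + 1), dtype=np.uint8)
    img[rows, cols] = 255
    img = cv2.GaussianBlur(img, (5, 5), 0)
    found = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1,
                             minDist=int(max_r / cell),
                             param1=50, param2=10,
                             minRadius=int(min_r / cell),
                             maxRadius=int(max_r / cell))
    if found is None:
        return np.empty((0, 3))
    circles = found[0].astype(float)
    circles[:, 0] = circles[:, 0] * cell + origin[0]        # column -> x (m)
    circles[:, 1] = circles[:, 1] * cell + origin[1]        # row -> y (m)
    circles[:, 2] *= cell                                    # radius (m)
    return circles
```

Points lying on or near the circles detected in each layer can then be collected across layers to assemble the vertical cylindrical features that feed the calibration adjustment.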
1998-10-31
CAPE CANAVERAL, Fla. -- Taken during the STS-95 mission from a point over Cuba, this photo shows an oblique, foreshortened view of the Florida Peninsula, with the light blue, shallow seafloor of both the Florida Keys (curving across the bottom of the view) and the Bahama banks (right). "Popcorn" cumulus cloud covers Miami and the Southern Everglades, although the built-up area from Ft. Lauderdale to West Palm Beach can be discerned. Lake Okeechobee is the prominent waterbody in Florida. Cape Canaveral is shown well, half way up the peninsula. Orlando appears as the lighter patch West (left) of Cape Canaveral, near the middle of the peninsula. Cape Hatteras appears top right, with the North part of Chesapeake Bay also visible. This is a visibility of 16 degrees of latitude (23 degrees N over Cuba to 39 degrees at Baltimore), showing unusual atmospheric clarity.
Robust point cloud classification based on multi-level semantic relationships for urban scenes
NASA Astrophysics Data System (ADS)
Zhu, Qing; Li, Yuan; Hu, Han; Wu, Bo
2017-07-01
The semantic classification of point clouds is a fundamental part of three-dimensional urban reconstruction. For datasets with high spatial resolution but significant noise, a general trend is to exploit more contextual information to compensate for the reduced discriminative power of individual features. However, previous uses of contextual information have been either too restrictive or limited to small regions. In this paper, we propose a point cloud classification method based on multi-level semantic relationships, including point-homogeneity, supervoxel-adjacency and class-knowledge constraints, which is more versatile, incrementally propagates classification cues from individual points to the object level, and formulates them as a graphical model. The point-homogeneity constraint clusters points with similar geometric and radiometric properties into regular-shaped supervoxels that correspond to the vertices in the graphical model. The supervoxel-adjacency constraint contributes to the pairwise interactions by providing explicit adjacency relationships between supervoxels. The class-knowledge constraint operates at the object level based on semantic rules, guaranteeing the classification correctness of supervoxel clusters at that level. International Society for Photogrammetry and Remote Sensing (ISPRS) benchmark tests have shown that the proposed method achieves state-of-the-art performance, with an average per-area completeness and correctness of 93.88% and 95.78%, respectively. The evaluation of the classification of photogrammetric point clouds and DSMs generated from aerial imagery confirms the method's reliability in several challenging urban scenes.
Terrain Extraction by Integrating Terrestrial Laser Scanner Data and Spectral Information
NASA Astrophysics Data System (ADS)
Lau, C. L.; Halim, S.; Zulkepli, M.; Azwan, A. M.; Tang, W. L.; Chong, A. K.
2015-10-01
The extraction of true terrain points from unstructured laser point cloud data is an important process for producing an accurate digital terrain model (DTM). However, most spatial filtering methods use only the geometric data to discriminate terrain points from non-terrain points. Point cloud filtering can also be improved by using the spectral information available from some scanners. Therefore, the objective of this study is to investigate the effectiveness of using the three channels (red, green and blue) of the colour image captured by the built-in digital camera available in some terrestrial laser scanners (TLS) for terrain extraction. In this study, data acquisition was conducted at a mini replica landscape at Universiti Teknologi Malaysia (UTM), Skudai campus, using a Leica ScanStation C10. The spectral information of the coloured point clouds for selected sample classes is extracted for spectral analysis. Coloured points that fall within the corresponding preset spectral thresholds are identified as belonging to that specific feature class. This terrain extraction process is implemented in purpose-written Matlab code. The results demonstrate that a passive image with higher spectral resolution is required to improve the output, because the low quality of the colour images captured by the sensor leads to low separability in spectral reflectance. In conclusion, this study shows that spectral information can be used as a parameter for terrain extraction.
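The per-class spectral thresholding described above can be sketched in a few lines. The Python/NumPy snippet below is a minimal stand-in for the study's Matlab implementation; the channel ranges shown are purely illustrative and not the thresholds used in the paper.

```python
import numpy as np

def classify_by_rgb(points_rgb, thresholds):
    """Label coloured TLS points by per-channel RGB ranges.

    points_rgb : (N, 6) array of x, y, z, r, g, b (0-255)
    thresholds : {label: ((rmin, rmax), (gmin, gmax), (bmin, bmax))}
    Returns an array of string labels ('unclassified' where nothing matches).
    """
    rgb = points_rgb[:, 3:6]
    labels = np.full(len(points_rgb), "unclassified", dtype=object)
    for name, ((rlo, rhi), (glo, ghi), (blo, bhi)) in thresholds.items():
        mask = (
            (rgb[:, 0] >= rlo) & (rgb[:, 0] <= rhi)
            & (rgb[:, 1] >= glo) & (rgb[:, 1] <= ghi)
            & (rgb[:, 2] >= blo) & (rgb[:, 2] <= bhi)
            & (labels == "unclassified")
        )
        labels[mask] = name
    return labels

# Illustrative ranges only; real thresholds come from the spectral analysis.
example_thresholds = {
    "vegetation": ((0, 110), (80, 200), (0, 110)),       # greenish returns
    "bare_terrain": ((100, 200), (70, 160), (40, 130)),  # brownish returns
}
```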
Liu, Jing-fu; Liu, Rui; Yin, Yong-guang; Jiang, Gui-bin
2009-03-28
Capable of preserving the sizes and shapes of nanomaterials during phase transfer, Triton X-114 based cloud point extraction provides a general, simple, and cost-effective route for the reversible concentration/separation or dispersion of various nanomaterials in the aqueous phase.
NASA Astrophysics Data System (ADS)
Li, Jiekang; Li, Guirong; Han, Qian
2016-12-01
In this paper, two kinds of salophens (Sal) with different solubilities, Sal1 and Sal2, were synthesized; both can combine with uranyl to form stable complexes, [UO2^2+-Sal1] and [UO2^2+-Sal2]. Of these, [UO2^2+-Sal1] was used as the ligand to extract uranium from complex samples by dual cloud point extraction (dCPE), and [UO2^2+-Sal2] was used as the catalyst for the determination of uranium by a photocatalytic resonance fluorescence (RF) method. The photocatalytic effect of [UO2^2+-Sal2] on the oxidation of pyronine Y (PRY) by potassium bromate, which decreases the RF intensity of PRY, was studied. The decrease in RF intensity of the reaction system (ΔF) is proportional to the concentration of uranium (c), and a novel photocatalytic RF method was developed for the determination of trace uranium(VI) after dCPE. The combination of the photocatalytic RF technique with the dCPE procedure endows the presented method with enhanced sensitivity and selectivity. Under optimal conditions, the calibration curve is linear from 0.067 to 6.57 ng mL^-1; the linear regression equation was ΔF = 438.0 c (ng mL^-1) + 175.6 with a correlation coefficient r = 0.9981. The limit of detection was 0.066 ng mL^-1. The proposed method was successfully applied to the separation and determination of uranium in real samples, with recoveries of 95.0-103.5%. The mechanisms of the indicator reaction and of the dCPE are discussed.
The First Release of the AST3-1 Point Source Catalogue from Dome A, Antarctica
NASA Astrophysics Data System (ADS)
Ma, Bin; Shang, Zhaohui; Hu, Yi; Hu, Keliang; Liu, Qiang; Ashley, Michael C. B.; Cui, Xiangqun; Du, Fujia; Fan, Dongwei; Feng, Longlong; Huang, Fang; Gu, Bozhong; He, Boliang; Ji, Tuo; Li, Xiaoyan; Li, Zhengyang; Liu, Huigen; Tian, Qiguo; Tao, Charling; Wang, Daxing; Wang, Lifan; Wang, Songhu; Wang, Xiaofeng; Wei, Peng; Wu, Jianghua; Xu, Lingzhe; Yang, Shihai; Yang, Ming; Yang, Yi; Yu, Ce; Yuan, Xiangyan; Zhou, Hongyan; Zhang, Hui; Zhang, Xueguang; Zhang, Yi; Zhao, Cheng; Zhou, Jilin; Zhu, Zong-Hong
2018-05-01
The three Antarctic Survey Telescopes (AST3) aim to carry out a time-domain imaging survey at Dome A, Antarctica. The first of the three telescopes (AST3-1) was successfully deployed in January 2012. AST3-1 is a 500 mm aperture modified Schmidt telescope with a 680 mm diameter primary mirror. AST3-1 is equipped with an SDSS i filter and a 10k × 10k frame-transfer CCD camera, reduced to 5k × 10k by electronic shuttering, resulting in a 4.3 deg2 field of view. To verify the capability of AST3-1 for a variety of science goals, extensive commissioning was carried out between March and May 2012. The commissioning included a survey covering 2000 deg2 as well as the entire Large and Small Magellanic Clouds. Frequent repeated images were made of the center of the Large Magellanic Cloud, a selected exoplanet transit field, and fields including some Wolf-Rayet stars. Here we present the data reduction and photometric measurements of the point sources observed by AST3-1. We have achieved a survey depth of 19.3 mag in 60 s exposures with 5 mmag precision in the light curves of bright stars. The facility achieves sub-mmag photometric precision under stable survey conditions, approaching its photon noise limit. These results demonstrate that AST3-1 at Dome A is extraordinarily competitive in time-domain astronomy, including both quick searches for faint transients and the detection of tiny transit signals.
Effects of Phase Separation Behavior on Morphology and Performance of Polycarbonate Membranes.
Idris, Alamin; Man, Zakaria; Maulud, Abdulhalim S; Khan, Muhammad Saad
2017-04-05
The phase separation behavior of bisphenol-A polycarbonate (PC), dissolved in N-methyl-2-pyrrolidone (NMP) and dichloromethane (DCM) solvents with water as the coagulant, was studied by the cloud point method. The respective cloud point data were determined by titration against water at room temperature, and the characteristic binodal curves for the ternary systems were plotted. Further, physical properties of the solutions such as viscosity, refractive index, and density were measured. The critical polymer concentrations were determined from the viscosity measurements. PC/NMP and PC/DCM membranes were fabricated by the dry-wet phase inversion technique and characterized for their morphology, structure, and thermal stability using field emission scanning electron microscopy, Fourier transform infrared spectroscopy, and thermogravimetric analysis, respectively. The membranes' performances were tested for their permeance to CO₂, CH₄, and N₂ gases at 24 ± 0.5 °C with feed pressures varying from 2 to 10 bar. The PC/DCM membranes appeared to be asymmetric dense membranes with appreciable thermal stability, whereas the PC/NMP membranes were observed to be asymmetric with porous structures, exhibiting 4.18% and 9.17% decreases in the initial and maximum degradation temperatures, respectively. The ideal CO₂/N₂ and CO₂/CH₄ selectivities of the PC/NMP membrane decreased with increasing feed pressure, while for the PC/DCM membrane the average ideal CO₂/N₂ and CO₂/CH₄ selectivities were found to be 25.1 ± 0.8 and 21.1 ± 0.6, respectively. Therefore, the PC/DCM membranes, with their dense morphologies, are appropriate for gas separation applications.
Automatic Extraction of Road Markings from Mobile Laser-Point Cloud Using Intensity Data
NASA Astrophysics Data System (ADS)
Yao, L.; Chen, Q.; Qin, C.; Wu, H.; Zhang, S.
2018-04-01
With the development of intelligent transportation, high-precision road information has been widely applied in many fields. This paper proposes a concise and practical way to extract road marking information from point cloud data collected by a mobile mapping system (MMS). The method contains three steps. Firstly, the road surface is segmented through edge detection on scan lines. Then an intensity image is generated by inverse distance weighted (IDW) interpolation, and the road markings are extracted using adaptive threshold segmentation based on an integral image, without intensity calibration. Noise is further reduced by removing small patches of pixels from the binary image. Finally, the point cloud mapped from the binary image is clustered into marking objects according to Euclidean distance, and a series of algorithms, including template matching and feature-attribute filtering, is used to classify linear markings, arrow markings and guidelines. Processing of the point cloud data collected by a RIEGL VUX-1 in the case area shows that the F-score of marking extraction is 0.83 and the average classification rate is 0.9.
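The integral-image adaptive thresholding step can be illustrated compactly. The Python/NumPy sketch below keeps pixels whose intensity exceeds a multiple of the local mean, which is computed in constant time per pixel from an integral image; the window size and brightness factor are assumed defaults, not the paper's values.

```python
import numpy as np

def adaptive_threshold(intensity, window=31, factor=1.15):
    """Keep bright road-marking pixels using an integral-image local mean.

    A pixel is labelled 'marking' when its intensity exceeds `factor` times
    the mean of a window x window neighbourhood; the local mean is obtained
    from an integral image, so the cost does not depend on the window size.
    """
    img = intensity.astype(np.float64)
    h, w = img.shape
    # Integral image with a zero row/column prepended for easy box sums.
    ii = np.pad(img, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

    r = window // 2
    ys, xs = np.mgrid[0:h, 0:w]
    y0, y1 = np.clip(ys - r, 0, h), np.clip(ys + r + 1, 0, h)
    x0, x1 = np.clip(xs - r, 0, w), np.clip(xs + r + 1, 0, w)

    box = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
    local_mean = box / ((y1 - y0) * (x1 - x0))
    return img > factor * local_mean        # boolean mask of marking pixels
```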
Point Cloud Based Approach to Stem Width Extraction of Sorghum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Jihui; Zakhor, Avideh
A revolution in the field of genomics has produced vast amounts of data and furthered our understanding of the genotype-phenotype map, but is currently constrained by manually intensive or limited phenotype data collection. We propose an algorithm to estimate stem width, a key characteristic used for biomass potential evaluation, from 3D point cloud data collected by a robot equipped with a depth sensor in a single pass through a standard field. The algorithm applies a two-step alignment to register point clouds in different frames, a Frangi filter to identify stem-like objects in the point cloud, and an orientation-based filter to segment out and refine individual stems for width estimation. Detected stems that are split due to occlusions are merged and then registered with stems found in previous camera frames in order to track them temporally. We then refine the estimates to produce an accurate histogram of width estimates per plot. Since the plants in each plot are genetically identical, distributions of the stem width per plot can be useful in identifying genetically superior sorghum for biofuels.
Airborne LIDAR point cloud tower inclination judgment
NASA Astrophysics Data System (ADS)
liang, Chen; zhengjun, Liu; jianguo, Qian
2016-11-01
Inclined transmission towers pose a serious threat to the safe operation of power lines, so judging tower inclination effectively, quickly and accurately plays a key role in the safety and security of the power supply. In recent years, with the development of unmanned aerial vehicles, UAVs equipped with a laser scanner, GPS and inertial navigation have been used more and more in the electricity sector as high-precision 3D remote sensing systems. Airborne LiDAR point clouds can visually present the complete three-dimensional spatial information of power line corridors, including line facilities and equipment, terrain and trees. However, research on LiDAR point clouds in this field has not yet produced an established algorithm for determining tower inclination. In this paper, tower bases are first extracted from the existing power line corridor and the shape characteristics of the towers are analysed; a method combining vertical stratification with a convex hull algorithm is then applied, and for the cases of dense and sparse tower point clouds two different methods are used to judge the inclination. The results show high reliability.
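One simple way to realize the vertical-stratification-plus-convex-hull idea is to slice the tower cloud into horizontal layers, take the convex hull centroid of each layer, and measure how far the axis through those centroids leans from vertical. The Python/NumPy/SciPy sketch below follows that idea under assumed parameter values; it is not the authors' exact procedure.

```python
import numpy as np
from scipy.spatial import ConvexHull

def tower_inclination(points, layer_height=2.0):
    """Estimate the tilt (degrees from vertical) of a tower point cloud.

    The cloud is sliced into horizontal layers; each layer is summarised by
    the centroid of the convex hull of its (x, y) footprint, a line is fitted
    through the layer centroids, and the angle between that axis and the
    vertical is returned.
    """
    z = points[:, 2]
    centroids = []
    for zlo in np.arange(z.min(), z.max(), layer_height):
        layer = points[(z >= zlo) & (z < zlo + layer_height)]
        if len(layer) < 3:
            continue
        hull = ConvexHull(layer[:, :2])
        cxy = layer[hull.vertices, :2].mean(axis=0)    # hull-vertex centroid
        centroids.append([cxy[0], cxy[1], layer[:, 2].mean()])
    centroids = np.asarray(centroids)

    # Principal axis of the centroid chain via SVD.
    c0 = centroids - centroids.mean(axis=0)
    _, _, vt = np.linalg.svd(c0, full_matrices=False)
    axis = vt[0] / np.linalg.norm(vt[0])
    return np.degrees(np.arccos(abs(axis[2])))         # angle from vertical
```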
Fast grasping of unknown objects using cylinder searching on a single point cloud
NASA Astrophysics Data System (ADS)
Lei, Qujiang; Wisse, Martijn
2017-03-01
Grasping of unknown objects, with neither appearance data nor object models given in advance, is very important for robots that work in an unfamiliar environment. The goal of this paper is to quickly synthesize an executable grasp for one unknown object by using cylinder searching on a single point cloud. Specifically, a 3D camera is first used to obtain a partial point cloud of the target unknown object. An original method is then employed to post-process the partial point cloud to minimize the uncertainty which may lead to grasp failure. In order to accelerate the grasp search, the surface normals of the target object are then used to constrain the synthesis of cylinder grasp candidates. Operability analysis is used to select all executable grasp candidates, followed by force-balance optimization to choose the most reliable grasp as the final grasp to execute. To verify the effectiveness of our algorithm, simulations on a Universal Robots UR5 arm and an under-actuated Lacquey Fetch gripper are used to examine its performance, and successful results are obtained.
Sloped Terrain Segmentation for Autonomous Drive Using Sparse 3D Point Cloud
Cho, Seoungjae; Kim, Jonghyun; Ikram, Warda; Cho, Kyungeun; Sim, Sungdae
2014-01-01
A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous driving. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D) point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR) sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels) and overlapping data are eliminated. We reduce the non-overlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We execute ground segmentation in real time by proposing an approach that minimizes the comparisons between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed in about 19.31 ms per frame. PMID:25093204
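The lowermost-heightmap idea reduces each vertical column to its minimum elevation and treats points close to that minimum as ground. The Python/NumPy sketch below shows a minimal version of that reduction; the cell size and height tolerance are assumed values, and the paper's voxel-group counting step is not reproduced.

```python
import numpy as np

def ground_mask(points, cell=0.5, height_tol=0.3):
    """Coarse ground segmentation via a 'lowermost heightmap'.

    Points are binned into vertical columns of size cell x cell; the minimum
    z of each occupied column forms the lowermost heightmap, and points
    within `height_tol` of their column minimum are labelled ground.
    """
    xy_idx = np.floor(points[:, :2] / cell).astype(np.int64)
    xy_idx -= xy_idx.min(axis=0)                 # make column indices non-negative
    key = xy_idx[:, 0] * (xy_idx[:, 1].max() + 1) + xy_idx[:, 1]

    uniq, inverse = np.unique(key, return_inverse=True)
    col_min = np.full(len(uniq), np.inf)
    np.minimum.at(col_min, inverse, points[:, 2])   # lowermost heightmap

    return points[:, 2] <= col_min[inverse] + height_tol

# Usage sketch: ground = cloud[ground_mask(cloud)]; obstacles = cloud[~ground_mask(cloud)]
```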
Drawing and Landscape Simulation for Japanese Garden by Using Terrestrial Laser Scanner
NASA Astrophysics Data System (ADS)
Kumazaki, R.; Kunii, Y.
2015-05-01
Recently, laser scanners have been applied in various measurement fields. This paper investigates the usefulness of terrestrial laser scanning in the field of landscape architecture and examines its use in a Japanese garden. To date, 3D point cloud data of Japanese gardens have mainly been used visually, for example in animations. Therefore, several further applications of the 3D point cloud data were investigated, as follows. Firstly, an ortho image of the Japanese garden could be generated from the 3D point cloud data. Secondly, contour lines of the Japanese garden could be extracted, making drawing possible; consequently, drawings of the Japanese garden were produced more efficiently and with less labour. Moreover, the measurement and drawing operations can be performed without special technical skills, by any operator. Furthermore, the 3D point cloud data could be edited, enabling landscape simulations such as the extraction and placement of trees and other objects. As a result, it can be said that the terrestrial laser scanner will be applied more widely in the landscape architecture field.
plas.io: Open Source, Browser-based WebGL Point Cloud Visualization
NASA Astrophysics Data System (ADS)
Butler, H.; Finnegan, D. C.; Gadomski, P. J.; Verma, U. K.
2014-12-01
Point cloud data, in the form of Light Detection and Ranging (LiDAR), RADAR, or semi-global matching (SGM) image processing, are rapidly becoming a foundational data type for quantifying and characterizing geospatial processes. Visualization of these data, due to their overall volume and irregular arrangement, is often difficult. Technological advancements in web browsers, in the form of WebGL and HTML5, have made interactivity and visualization capabilities that once existed only in desktop software ubiquitously available. plas.io is an open source JavaScript application that provides point cloud visualization, exploitation, and compression features in a web-browser platform, reducing the reliance on client-based desktop applications. The wide reach of WebGL and browser-based technologies means plas.io's capabilities can be delivered to a diverse list of devices -- from phones and tablets to high-end workstations -- with very little custom software development. These properties make plas.io an ideal open platform for researchers and software developers to communicate visualizations of complex and rich point cloud data to devices to which everyone has easy access.
- and Scene-Guided Integration of Tls and Photogrammetric Point Clouds for Landslide Monitoring
NASA Astrophysics Data System (ADS)
Zieher, T.; Toschi, I.; Remondino, F.; Rutzinger, M.; Kofler, Ch.; Mejia-Aguilar, A.; Schlögel, R.
2018-05-01
Terrestrial and airborne 3D imaging sensors are well-suited data acquisition systems for the area-wide monitoring of landslide activity. State-of-the-art surveying techniques, such as terrestrial laser scanning (TLS) and photogrammetry based on unmanned aerial vehicle (UAV) imagery or terrestrial acquisitions have advantages and limitations associated with their individual measurement principles. In this study we present an integration approach for 3D point clouds derived from these techniques, aiming at improving the topographic representation of landslide features while enabling a more accurate assessment of landslide-induced changes. Four expert-based rules involving local morphometric features computed from eigenvectors, elevation and the agreement of the individual point clouds, are used to choose within voxels of selectable size which sensor's data to keep. Based on the integrated point clouds, digital surface models and shaded reliefs are computed. Using an image correlation technique, displacement vectors are finally derived from the multi-temporal shaded reliefs. All results show comparable patterns of landslide movement rates and directions. However, depending on the applied integration rule, differences in spatial coverage and correlation strength emerge.
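Eigenvector- and eigenvalue-based morphometric features of the kind used in these voxel-wise rules can be computed from the covariance of each point's neighbourhood. The Python/NumPy/SciPy sketch below derives the common linearity/planarity/scattering triplet; the search radius and the exact feature definitions are assumptions for illustration, not the rule set of the study.

```python
import numpy as np
from scipy.spatial import cKDTree

def eigen_features(points, radius=1.0):
    """Per-point covariance eigenvalue features (linearity, planarity, scatter).

    For each point, the eigenvalues l1 >= l2 >= l3 of the covariance of its
    neighbourhood describe how line-like, plane-like or volumetric the local
    geometry is; such features are commonly used to decide which sensor's
    points to keep inside a voxel.
    """
    tree = cKDTree(points)
    feats = np.zeros((len(points), 3))
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, radius)
        if len(idx) < 4:
            continue
        nbrs = points[idx] - points[idx].mean(axis=0)
        l1, l2, l3 = np.sort(np.linalg.eigvalsh(nbrs.T @ nbrs / len(idx)))[::-1]
        feats[i] = [(l1 - l2) / l1, (l2 - l3) / l1, l3 / l1]
    return feats   # columns: linearity, planarity, scattering
```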
Road traffic sign detection and classification from mobile LiDAR point clouds
NASA Astrophysics Data System (ADS)
Weng, Shengxia; Li, Jonathan; Chen, Yiping; Wang, Cheng
2016-03-01
Traffic signs are important roadway assets that provide valuable information about the road, helping drivers to adopt safer and easier driving behaviors. With the development of mobile mapping systems that can efficiently acquire dense point clouds along the road, automated detection and recognition of road assets has become an important research issue. This paper deals with the detection and classification of traffic signs in outdoor environments using mobile light detection and ranging (LiDAR) and inertial navigation technologies. The proposed method contains two main steps. It starts with an initial detection of traffic signs based on the intensity attributes of the point clouds, as traffic signs are always painted with highly reflective materials. Then, the classification of traffic signs is achieved based on geometric shape and the pairwise 3D shape context. Results and performance analyses are provided to show the effectiveness and limits of the proposed method. The experimental results demonstrate the feasibility and effectiveness of the proposed method in detecting and classifying traffic signs from mobile LiDAR point clouds.
NASA Astrophysics Data System (ADS)
Poux, F.; Neuville, R.; Hallot, P.; Van Wersch, L.; Luczfalvy Jancsó, A.; Billen, R.
2017-05-01
While virtual copies of the real world tend to be created faster than ever through point clouds and their derivatives, their effective use by professionals of all backgrounds demands adapted tools to facilitate knowledge dissemination. Digital investigations are changing the way cultural heritage researchers, archaeologists, and curators work and collaborate to progressively aggregate expertise through one common platform. In this paper, we present a web application in a WebGL framework accessible in any HTML5-compatible browser. It allows real-time point cloud exploration of the mosaics in the Oratory of Germigny-des-Prés, and emphasises ease of use as well as performance. Our reasoning engine is constructed over a semantically rich point cloud data structure, where metadata has been injected a priori. We developed a tool that directly allows semantic extraction and visualisation of pertinent information for the end users. It leads to efficient communication between actors by proposing optimal 3D viewpoints as a basis on which interactions can grow.
Large-Scale Point-Cloud Visualization through Localized Textured Surface Reconstruction.
Arikan, Murat; Preiner, Reinhold; Scheiblauer, Claus; Jeschke, Stefan; Wimmer, Michael
2014-09-01
In this paper, we introduce a novel scene representation for the visualization of large-scale point clouds accompanied by a set of high-resolution photographs. Many real-world applications deal with very densely sampled point-cloud data, which are augmented with photographs that often reveal lighting variations and inaccuracies in registration. Consequently, the high-quality representation of the captured data, i.e., both point clouds and photographs together, is a challenging and time-consuming task. We propose a two-phase approach, in which the first (preprocessing) phase generates multiple overlapping surface patches and handles the problem of seamless texture generation locally for each patch. The second phase stitches these patches at render-time to produce a high-quality visualization of the data. As a result of the proposed localization of the global texturing problem, our algorithm is more than an order of magnitude faster than equivalent mesh-based texturing techniques. Furthermore, since our preprocessing phase requires only a minor fraction of the whole data set at once, we provide maximum flexibility when dealing with growing data sets.
NASA Astrophysics Data System (ADS)
Petschko, Helene; Goetz, Jason; Schmidt, Sven
2017-04-01
Sinkholes are a serious threat to life, personal property and infrastructure in large parts of Thuringia. Over 9000 sinkholes have been documented by the Geological Survey of Thuringia; they are caused by the collapse of hollows formed by solution processes within the local bedrock. However, little is known about the surface processes and their dynamics at the flanks of a sinkhole once it has formed. These processes are of high interest as they might lead to dangerous situations at or within the vicinity of the sinkhole. Our objective was the analysis of these deformations over time in 3D by applying terrestrial photogrammetry with a simple DSLR camera. Within this study, we analyzed deformations within a sinkhole close to Bad Frankenhausen (Thuringia) using terrestrial photogrammetry and multi-view stereo 3D reconstruction to obtain a 3D point cloud describing the morphology of the sinkhole. This was done for multiple data collection campaigns over a 6-month period. The photos of the sinkhole were taken with a Nikon D3000 SLR camera. For the comparison of the point clouds, the Multiscale Model to Model Comparison (M3C2) plugin of the software CloudCompare was used. It provides an advanced point cloud difference calculation that considers the co-registration error between two point clouds when assessing the significance of the calculated difference (given in meters). Three Styrofoam cuboids of known dimensions (16 cm wide, 29 cm high, 11.5 cm deep) were placed within the sinkhole to test the accuracy of the point cloud difference calculation. The multi-view stereo 3D reconstruction was performed with Agisoft PhotoScan. Preliminary analysis indicates that about 26% of the sinkhole showed changes exceeding the co-registration error of the point clouds. The areas of change are mainly detected on the flanks of the sinkhole and on an earth pillar that formed in its center. These changes describe toppling (positive change of a few centimeters at the earth pillar) and some erosion processes along the flanks (negative change of a few centimeters) compared with the first date of data acquisition. Additionally, the Styrofoam cuboids were successfully detected, with an observed depth change of 10 cm. However, the limitations of this approach related to the co-registration of the point clouds and to data acquisition (windy conditions) have to be analyzed in more detail.
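A much simpler proxy for this kind of multi-temporal comparison is a nearest-neighbour cloud-to-cloud distance with a significance cut at the assumed co-registration error. The Python/SciPy sketch below is deliberately cruder than M3C2 (it is unsigned and ignores local normals) and the error value is an assumption; it only illustrates the idea of flagging changes that exceed registration uncertainty.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_change(reference, compared, reg_error=0.02):
    """Nearest-neighbour distance from each reference point to a later epoch.

    Returns the closest-point distance per reference point and a boolean mask
    of changes larger than the co-registration error `reg_error` (metres).
    """
    tree = cKDTree(compared)
    dist, _ = tree.query(reference, k=1)
    return dist, dist > reg_error

# Usage sketch:
# d, significant = cloud_change(epoch1_xyz, epoch2_xyz, reg_error=0.03)
# print("fraction of points with significant change:", significant.mean())
```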
Ifcwall Reconstruction from Unstructured Point Clouds
NASA Astrophysics Data System (ADS)
Bassier, M.; Klein, R.; Van Genechten, B.; Vergauwen, M.
2018-05-01
The automated reconstruction of Building Information Modeling (BIM) objects from point cloud data is still ongoing research. A key aspect is the creation of accurate wall geometry, as it forms the basis for the further reconstruction of objects in a BIM. After segmenting and classifying the initial point cloud, the labelled segments are processed and the wall topology is reconstructed. However, the procedure is challenging due to noise, occlusions and the complexity of the input data. In this work, a method is presented to automatically reconstruct consistent wall geometry from point clouds. More specifically, the use of room information is proposed to aid the wall topology creation. First, a set of partial walls is constructed based on classified planar primitives. Next, the rooms are identified using the retrieved wall information along with the floors and ceilings. The wall topology is computed by intersecting the partial walls, conditioned on the room information. The final wall geometry is defined by creating IfcWallStandardCase objects conforming to the IFC4 standard. The result is a set of walls according to the as-built conditions of a building. The experiments prove that the proposed method is a reliable framework for wall reconstruction from unstructured point cloud data. Also, the implementation of room information reduces the rate of false positives in the wall topology. Given the walls, ceilings and floors, 94% of the rooms are correctly identified. A key advantage of the proposed method is that it deals with complex rooms and is not restricted to single storeys.
Automatic Generation of Indoor Navigable Space Using a Point Cloud and its Scanner Trajectory
NASA Astrophysics Data System (ADS)
Staats, B. R.; Diakité, A. A.; Voûte, R. L.; Zlatanova, S.
2017-09-01
Automatic generation of indoor navigable models is mostly based on 2D floor plans. However, in many cases the floor plans are out of date. Buildings are not always built according to their blueprints, interiors might change after a few years because of modified walls and doors, and furniture may be repositioned to the users' preferences. Therefore, new approaches for the quick recording of indoor environments should be investigated. This paper concentrates on laser scanning with a Mobile Laser Scanner (MLS) device. The MLS device stores a point cloud and its trajectory. If the MLS device is operated by a human, the trajectory contains information which can be used to distinguish different surfaces. In this paper a method is presented for the identification of walkable surfaces based on the analysis of the point cloud and the trajectory of the MLS scanner. This method consists of several steps. First, the point cloud is voxelized. Second, the trajectory is analysed and projected to acquire seed voxels. Third, these seed voxels are grown into floor regions by means of a region-growing process. By identifying dynamic objects, doors and furniture, these floor regions can be modified so that each region represents a specific navigable space inside a building as a free navigable voxel space. By combining the point cloud and its corresponding trajectory, the walkable space can be identified for any type of building, even if the interior is scanned during business hours.
Rapid, semi-automatic fracture and contact mapping for point clouds, images and geophysical data
NASA Astrophysics Data System (ADS)
Thiele, Samuel T.; Grose, Lachlan; Samsu, Anindita; Micklethwaite, Steven; Vollgger, Stefan A.; Cruden, Alexander R.
2017-12-01
The advent of large digital datasets from unmanned aerial vehicle (UAV) and satellite platforms now challenges our ability to extract information across multiple scales in a timely manner, often meaning that the full value of the data is not realised. Here we adapt a least-cost-path solver and specially tailored cost functions to rapidly interpolate structural features between manually defined control points in point cloud and raster datasets. We implement the method in the geographic information system QGIS and the point cloud and mesh processing software CloudCompare. Using these implementations, the method can be applied to a variety of three-dimensional (3-D) and two-dimensional (2-D) datasets, including high-resolution aerial imagery, digital outcrop models, digital elevation models (DEMs) and geophysical grids. We demonstrate the algorithm with four diverse applications in which we extract (1) joint and contact patterns in high-resolution orthophotographs, (2) fracture patterns in a dense 3-D point cloud, (3) earthquake surface ruptures of the Greendale Fault associated with the Mw7.1 Darfield earthquake (New Zealand) from high-resolution light detection and ranging (lidar) data, and (4) oceanic fracture zones from bathymetric data of the North Atlantic. The approach improves the consistency of the interpretation process while retaining expert guidance and achieves significant improvements (35-65 %) in digitisation time compared to traditional methods. Furthermore, it opens up new possibilities for data synthesis and can quantify the agreement between datasets and an interpretation.
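The core of such a tool is a least-cost-path search over a raster in which the structural feature is cheap to traverse. The Python sketch below is a generic 8-connected Dijkstra solver; the cost raster is a placeholder (for example, low values along the dark trace of a fracture), and the paper's tailored cost functions for imagery, point clouds and geophysical grids are not reproduced.

```python
import heapq
import numpy as np

def least_cost_path(cost, start, end):
    """Dijkstra shortest path between two pixels of a 2D cost raster.

    `cost` holds per-pixel traversal costs, so the optimal path between two
    user-picked control points tends to follow the low-cost feature trace.
    """
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = 0.0
    heap = [(0.0, start)]
    moves = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == end:
            break
        if d > dist[r, c]:
            continue                                    # stale heap entry
        for dr, dc in moves:
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w:
                nd = d + cost[rr, cc] * np.hypot(dr, dc)  # diagonal steps cost more
                if nd < dist[rr, cc]:
                    dist[rr, cc] = nd
                    prev[(rr, cc)] = (r, c)
                    heapq.heappush(heap, (nd, (rr, cc)))

    path, node = [], end
    while node != start:                                # walk back from the end point
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```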
NASA Astrophysics Data System (ADS)
Michele, Mangiameli; Giuseppe, Mussumeci; Salvatore, Zito
2017-07-01
The Structure From Motion (SFM) is a technique applied to a series of photographs of an object that returns a 3D reconstruction made up by points in the space (point clouds). This research aims at comparing the results of the SFM approach with the results of a 3D laser scanning in terms of density and accuracy of the model. The experience was conducted by detecting several architectural elements (walls and portals of historical buildings) both with a 3D laser scanner of the latest generation and an amateur photographic camera. The point clouds acquired by laser scanner and those acquired by the photo camera have been systematically compared. In particular we present the experience carried out on the "Don Diego Pappalardo Palace" site in Pedara (Catania, Sicily).
Facets : a Cloudcompare Plugin to Extract Geological Planes from Unstructured 3d Point Clouds
NASA Astrophysics Data System (ADS)
Dewez, T. J. B.; Girardeau-Montaut, D.; Allanic, C.; Rohmer, J.
2016-06-01
Geological planar facets (stratification, fault, joint…) are key features for unravelling the tectonic history of a rock outcrop or appreciating the stability of a hazardous rock cliff. Measuring their spatial attitude (dip and strike) is generally performed by hand with a compass/clinometer, which is time consuming, requires some degree of censoring (i.e. refusing to measure some features judged unimportant at the time), is not always possible for fractures higher up on the outcrop, and is somewhat hazardous. 3D virtual geological outcrops hold the potential to alleviate these issues. However, a way of efficiently segmenting massive 3D point clouds into individual planar facets, inside a convenient software environment, was lacking. FACETS is a dedicated plugin within CloudCompare v2.6.2 (http://cloudcompare.org/) implemented to perform planar facet extraction, calculate their dip and dip direction (i.e. azimuth of steepest descent), and report the extracted data in interactive stereograms. Two algorithms perform the segmentation: Kd-Tree and Fast Marching. Both divide the point cloud into sub-cells, then compute elementary planar objects and aggregate them progressively according to a planeity threshold into polygons. The boundaries of the polygons are adjusted around the segmented points with a tension parameter, and the facet polygons can be exported as 3D polygon shapefiles for third-party GIS software or simply as ASCII comma-separated files. One of the great features of FACETS is the capability to explore not only planar objects but also 3D points with normals using the stereogram tool. Poles can be readily displayed, queried and manually segmented interactively. The plugin blends seamlessly into CloudCompare to leverage all its other 3D point cloud manipulation features. A demonstration of the tool is presented to illustrate these different features. While designed for geological applications, FACETS could be more widely applied to any planar objects.
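The conversion from a fitted facet plane to dip and dip direction is a small piece of geometry that can be checked independently. The Python/NumPy sketch below fits a plane by PCA and converts its upward normal to dip and dip direction (with +y taken as north and +x as east); it illustrates only this conversion, not the plugin's Kd-Tree or Fast Marching segmentation.

```python
import numpy as np

def facet_dip(points):
    """Dip and dip direction (degrees) of a planar facet from its points.

    The facet plane is fitted by PCA (smallest-variance direction = normal),
    then the upward normal is converted to dip (angle from horizontal) and
    dip direction (azimuth of steepest descent, clockwise from north).
    """
    centred = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    n = vt[-1]                      # plane normal = least-variance direction
    if n[2] < 0:                    # use the upward-pointing normal
        n = -n
    dip = np.degrees(np.arccos(np.clip(n[2], -1.0, 1.0)))
    dip_dir = np.degrees(np.arctan2(n[0], n[1])) % 360.0
    return dip, dip_dir

# A plane dipping 30 degrees towards the east should give roughly (30, 90).
xs, ys = np.meshgrid(np.linspace(0, 10, 20), np.linspace(0, 10, 20))
zs = -np.tan(np.radians(30)) * xs
print(facet_dip(np.column_stack([xs.ravel(), ys.ravel(), zs.ravel()])))
```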
Making data matter: Voxel printing for the digital fabrication of data across scales and domains.
Bader, Christoph; Kolb, Dominik; Weaver, James C; Sharma, Sunanda; Hosny, Ahmed; Costa, João; Oxman, Neri
2018-05-01
We present a multimaterial voxel-printing method that enables the physical visualization of data sets commonly associated with scientific imaging. Leveraging voxel-based control of multimaterial three-dimensional (3D) printing, our method enables additive manufacturing of discontinuous data types such as point cloud data, curve and graph data, image-based data, and volumetric data. By converting data sets into dithered material deposition descriptions, through modifications to rasterization processes, we demonstrate that data sets frequently visualized on screen can be converted into physical, materially heterogeneous objects. Our approach alleviates the need to postprocess data sets to boundary representations, preventing alteration of data and loss of information in the produced physicalizations. Therefore, it bridges the gap between digital information representation and physical material composition. We evaluate the visual characteristics and features of our method, assess its relevance and applicability in the production of physical visualizations, and detail the conversion of data sets for multimaterial 3D printing. We conclude with exemplary 3D-printed data sets produced by our method pointing toward potential applications across scales, disciplines, and problem domains.
Robust fiber clustering of cerebral fiber bundles in white matter
NASA Astrophysics Data System (ADS)
Yao, Xufeng; Wang, Yongxiong; Zhuang, Songlin
2014-11-01
Diffusion tensor imaging fiber tracking (DTI-FT) has been widely accepted in the diagnosis and treatment of brain diseases. During the rendering pipeline of specific fiber tracts, the image noise and low resolution of DTI would lead to false propagations. In this paper, we propose a robust fiber clustering (FC) approach to diminish false fibers from one fiber tract. Our algorithm consists of three steps. Firstly, the optimized fiber assignment continuous tracking (FACT) is implemented to reconstruct one fiber tract; and then each curved fiber in the fiber tract is mapped to a point by kernel principal component analysis (KPCA); finally, the point clouds of fiber tract are clustered by hierarchical clustering which could distinguish false fibers from true fibers in one tract. In our experiment, the corticospinal tract (CST) in one case of human data in vivo was used to validate our method. Our method showed reliable capability in decreasing the false fibers in one tract. In conclusion, our method could effectively optimize the visualization of fiber bundles and would help a lot in the field of fiber evaluation.
Extracting valley-ridge lines from point-cloud-based 3D fingerprint models.
Pang, Xufang; Song, Zhan; Xie, Wuyuan
2013-01-01
3D fingerprinting is an emerging technology with the distinct advantage of touchless operation. More important, 3D fingerprint models contain more biometric information than traditional 2D fingerprint images. However, current approaches to fingerprint feature detection usually must transform the 3D models to a 2D space through unwrapping or other methods, which might introduce distortions. A new approach directly extracts valley-ridge features from point-cloud-based 3D fingerprint models. It first applies the moving least-squares method to fit a local paraboloid surface and represent the local point cloud area. It then computes the local surface's curvatures and curvature tensors to facilitate detection of the potential valley and ridge points. The approach projects those points to the most likely valley-ridge lines, using statistical means such as covariance analysis and cross correlation. To finally extract the valley-ridge lines, it grows the polylines that approximate the projected feature points and removes the perturbations between the sampled points. Experiments with different 3D fingerprint models demonstrate this approach's feasibility and performance.
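The local paraboloid (Monge patch) fit and curvature evaluation used for valley-ridge detection can be sketched as follows. This Python/NumPy example uses a plain unweighted least-squares quadric fit in a PCA frame rather than the weighted moving least-squares fit described above, so it is only an approximation of that step.

```python
import numpy as np

def local_curvatures(neighborhood):
    """Principal curvatures from a least-squares quadratic (Monge) patch.

    The neighbourhood is rotated into a PCA frame whose third axis is the
    local normal, a quadric z = ax^2 + bxy + cy^2 + dx + ey + f is fitted,
    and the principal curvatures at the patch origin are computed from the
    fitted derivatives.
    """
    centred = neighborhood - neighborhood.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    local = centred @ vt.T            # coordinates in (tangent1, tangent2, normal)
    x, y, z = local[:, 0], local[:, 1], local[:, 2]

    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    a, b, c, d, e, _ = np.linalg.lstsq(A, z, rcond=None)[0]

    fx, fy = d, e                     # first derivatives at the origin
    fxx, fxy, fyy = 2 * a, b, 2 * c   # second derivatives
    g = 1.0 + fx * fx + fy * fy
    H = ((1 + fy * fy) * fxx - 2 * fx * fy * fxy + (1 + fx * fx) * fyy) / (2 * g ** 1.5)
    K = (fxx * fyy - fxy * fxy) / (g * g)
    disc = np.sqrt(max(H * H - K, 0.0))
    return H + disc, H - disc         # principal curvatures k1 >= k2
```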
NASA Astrophysics Data System (ADS)
Anisuzzaman, S. M.; Abang, S.; Bono, A.; Krishnaiah, D.; Karali, R.; Safuan, M. K.
2017-06-01
Wax precipitation and deposition are among the most significant flow assurance challenges in crude oil production systems. Wax inhibitors are developed as a preventive strategy to avoid wax deposition. Wax inhibitors are polymers, also known as pour point depressants, that impede wax crystal formation, growth, and deposition. In this study, three wax inhibitor formulations were prepared: ethylene vinyl acetate (EVA), ethylene vinyl acetate co-methyl methacrylate (EVA co-MMA) and ethylene vinyl acetate co-diethanolamine (EVA co-DEA); their efficiencies were compared in terms of cloud point, pour point, performance inhibition efficiency (%PIE) and viscosity. The cloud point and pour point for both EVA and EVA co-MMA were similar, 15 °C and 10-5 °C, respectively, whereas the cloud point and pour point for EVA co-DEA were better, 10 °C and 10-5 °C, respectively. In conclusion, EVA co-DEA showed the best %PIE (28.42%), which indicates the highest percentage reduction of wax deposit compared with the other two inhibitors.
Marine Boundary Layer Cloud Properties From AMF Point Reyes Satellite Observations
NASA Technical Reports Server (NTRS)
Jensen, Michael; Vogelmann, Andrew M.; Luke, Edward; Minnis, Patrick; Miller, Mark A.; Khaiyer, Mandana; Nguyen, Louis; Palikonda, Rabindra
2007-01-01
Cloud Diameter, C(sub D), offers a simple measure of Marine Boundary Layer (MBL) cloud organization. The diurnal cycle of cloud-physical properties and C(sub D) at Pt Reyes are consistent with previous work. The time series of C(sub D) can be used to identify distinct mesoscale organization regimes within the Pt. Reyes observation period.
a New Approach for Subway Tunnel Deformation Monitoring: High-Resolution Terrestrial Laser Scanning
NASA Astrophysics Data System (ADS)
Li, J.; Wan, Y.; Gao, X.
2012-07-01
With the improvement of the accuracy and efficiency of laser scanning technology, high-resolution terrestrial laser scanning (TLS) can obtain highly precise, densely distributed point clouds and can be applied to high-precision deformation monitoring of subway tunnels, high-speed railway bridges and other structures. In this paper, a new approach using a point cloud segmentation method based on the vectors of neighboring points and a surface fitting method based on moving least squares was proposed and applied to subway tunnel deformation monitoring in Tianjin, combined with a new high-resolution terrestrial laser scanner (Riegl VZ-400). There were three main procedures. Firstly, a point cloud consisting of several scans was registered by a linearized iterative least-squares approach to improve the accuracy of registration, and several control points were acquired by total stations (TS) and then adjusted. Secondly, the registered point cloud was resampled and segmented based on the vectors of neighboring points to select suitable points. Thirdly, the selected points were used to fit the subway tunnel surface with a moving least-squares algorithm. A series of parallel sections obtained from the temporal series of fitted tunnel surfaces were then compared to analyze the deformation. Finally, the results of the approach in the z direction were compared with a fiber-optic displacement sensor and the results in the x and y directions with TS; the comparison showed that the accuracies in the x, y and z directions were about 1.5 mm, 2 mm and 1 mm, respectively. Therefore the new approach using high-resolution TLS can meet the demands of subway tunnel deformation monitoring.
New from the Old - Measuring Coastal Cliff Change with Historical Oblique Aerial Photos
NASA Astrophysics Data System (ADS)
Warrick, J. A.; Ritchie, A.
2016-12-01
Oblique aerial photographs are commonly collected to document coastal landscapes. Here we show that these historical photographs can be used to develop topographic models with Structure-from-Motion (SfM) photogrammetric techniques if adequate photo-to-photo overlaps exist. Focusing on the 60-m high cliffs of Fort Funston, California, photographs from the California Coastal Records Project were combined with ground control points to develop topographic point clouds of the study area for five years between 2002 and 2010. Uncertainties in the results were assessed by comparing SfM-derived point clouds with airborne lidar data, and the differences between these data were related to the number and spatial distribution of ground control points used in the SfM analyses. With six or more ground control points the root mean squared error between the SfM and lidar data was less than 0.3 m (minimum = 0.18 m) and the mean systematic error was consistently less than 0.10 m. Because of the oblique orientation of the imagery, the SfM-derived point clouds provided coverage on vertical to overhanging portions of the cliff, and point densities from the SfM techniques averaged between 17 and 161 points/m2 on the cliff face. The time-series of topographic point clouds revealed many topographic changes, including landslides, rockfalls and the erosion of landslide talus along the Fort Funston beach. Thus, we concluded that historical oblique photographs, such as those generated by the California Coastal Records Project, can provide useful tools for mapping coastal topography and measuring coastal change.
Min-Cut Based Segmentation of Airborne LIDAR Point Clouds
NASA Astrophysics Data System (ADS)
Ural, S.; Shan, J.
2012-07-01
Introducing an organization to the unstructured point cloud before extracting information from airborne lidar data is common in many applications. Aggregating the points with similar features into 3-D segments which comply with the nature of actual objects is affected by the neighborhood, scale, features and noise, among other aspects. In this study, we present a min-cut based method for segmenting the point cloud. We first assess the neighborhood of each point in 3-D by investigating the local geometric and statistical properties of the candidates. Neighborhood selection is essential since point features are calculated within their local neighborhood. Following neighborhood determination, we calculate point features and determine the clusters in the feature space. We adapt a graph representation from image processing, especially used in pixel labeling problems, and establish it for unstructured 3-D point clouds. The edges of the graph connecting the points with each other and with the nodes representing feature clusters hold the smoothness costs in the spatial domain and the data costs in the feature domain. Smoothness costs ensure spatial coherence, while data costs control the consistency with the representative feature clusters. This graph representation formalizes the segmentation task as an energy minimization problem. It allows the implementation of an approximate solution by min-cuts for a global minimum of this NP-hard minimization problem in low-order polynomial time. We test our method with an airborne lidar point cloud acquired with a maximum planned post spacing of 1.4 m and a vertical accuracy of 10.5 cm RMSE. We present the effects of neighborhood and feature determination on the segmentation results and assess the accuracy and efficiency of the implemented min-cut algorithm as well as its sensitivity to the parameters of the smoothness and data cost functions. We find that a smoothness cost that considers only a simple distance parameter does not strongly conform to the natural structure of the points. Including shape information within the energy function by assigning costs based on local properties may help to achieve a better representation for segmentation.
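The graph construction can be made concrete with a small binary (two-label) example. The sketch below (Python with networkx and SciPy; constant smoothness weight and k-nearest-neighbour links are assumptions for illustration) builds terminal edges from per-point data costs and solves the labeling with an s-t minimum cut. A multi-label version with feature-cluster nodes, as in the study, would typically use repeated binary cuts (e.g. alpha-expansion) rather than a single cut.

```python
import networkx as nx
import numpy as np
from scipy.spatial import cKDTree

def mincut_binary_labels(points, data_cost_fg, data_cost_bg, k=6, smooth=1.0):
    """Binary point labelling by an s-t minimum cut (graph-cut energy).

    Each point gets terminal edges weighted by its data costs and links to its
    k nearest neighbours weighted by a constant smoothness cost; the minimum
    cut balances per-point evidence against spatial coherence.
    """
    tree = cKDTree(points)
    _, nbrs = tree.query(points, k=k + 1)      # first neighbour is the point itself

    g = nx.DiGraph()
    for i in range(len(points)):
        # Cutting (S, i) puts i on the sink side, i.e. pays the background cost;
        # cutting (i, T) keeps i on the source side, i.e. pays the foreground cost.
        g.add_edge("S", i, capacity=float(data_cost_bg[i]))
        g.add_edge(i, "T", capacity=float(data_cost_fg[i]))
        for j in nbrs[i, 1:]:
            g.add_edge(i, int(j), capacity=smooth)
            g.add_edge(int(j), i, capacity=smooth)

    _, (source_side, _) = nx.minimum_cut(g, "S", "T")
    labels = np.zeros(len(points), dtype=bool)
    labels[[i for i in source_side if i != "S"]] = True   # True = foreground label
    return labels
```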
Clustering, randomness, and regularity in cloud fields: 2. Cumulus cloud fields
NASA Astrophysics Data System (ADS)
Zhu, T.; Lee, J.; Weger, R. C.; Welch, R. M.
1992-12-01
During the last decade a major controversy has been brewing concerning the proper characterization of cumulus convection. The prevailing view has been that cumulus clouds form in clusters, in which cloud spacing is closer than that found for the overall cloud field and which maintains its identity over many cloud lifetimes. This "mutual protection hypothesis" of Randall and Huffman (1980) has been challenged by the "inhibition hypothesis" of Ramirez et al. (1990) which strongly suggests that the spatial distribution of cumuli must tend toward a regular distribution. A dilemma has resulted because observations have been reported to support both hypotheses. The present work reports a detailed analysis of cumulus cloud field spatial distributions based upon Landsat, Advanced Very High Resolution Radiometer, and Skylab data. Both nearest-neighbor and point-to-cloud cumulative distribution function statistics are investigated. The results show unequivocally that when both large and small clouds are included in the cloud field distribution, the cloud field always has a strong clustering signal. The strength of clustering is largest at cloud diameters of about 200-300 m, diminishing with increasing cloud diameter. In many cases, clusters of small clouds are found which are not closely associated with large clouds. As the small clouds are eliminated from consideration, the cloud field typically tends towards regularity. Thus it would appear that the "inhibition hypothesis" of Ramirez and Bras (1990) has been verified for the large clouds. However, these results are based upon the analysis of point processes. A more exact analysis also is made which takes into account the cloud size distributions. Since distinct clouds are by definition nonoverlapping, cloud size effects place a restriction upon the possible locations of clouds in the cloud field. The net effect of this analysis is that the large clouds appear to be randomly distributed, with only weak tendencies towards regularity. For clouds less than 1 km in diameter, the average nearest-neighbor distance is equal to 3-7 cloud diameters. For larger clouds, the ratio of cloud nearest-neighbor distance to cloud diameter increases sharply with increasing cloud diameter. This demonstrates that large clouds inhibit the growth of other large clouds in their vicinity. Nevertheless, this leads to random distributions of large clouds, not regularity.
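A standard first-pass diagnostic for the nearest-neighbor statistics discussed here is the Clark-Evans ratio, which compares the observed mean nearest-neighbor spacing of cloud centers with the value expected for a random pattern of the same density. The Python/SciPy sketch below computes it; edge corrections and the point-to-cloud cumulative distribution statistics used in the study are omitted.

```python
import numpy as np
from scipy.spatial import cKDTree

def clark_evans_ratio(xy, area):
    """Clark-Evans nearest-neighbour ratio for a 2D point pattern.

    R = (observed mean nearest-neighbour distance) / (expected distance for a
    random pattern of the same density, 0.5 / sqrt(n / area)). R < 1 indicates
    clustering, R ~ 1 randomness and R > 1 a tendency towards regularity.
    """
    tree = cKDTree(xy)
    d, _ = tree.query(xy, k=2)            # k=2: nearest neighbour besides self
    observed = d[:, 1].mean()
    expected = 0.5 / np.sqrt(len(xy) / area)
    return observed / expected

# Example: cloud centres scattered at random over a 100 km x 100 km scene.
rng = np.random.default_rng(1)
print(clark_evans_ratio(rng.uniform(0, 100e3, size=(500, 2)), area=100e3 ** 2))
```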
Riihimaki, Laura D.; Comstock, J. M.; Luke, E.; ...
2017-07-12
To understand the microphysical processes that impact diabatic heating and cloud lifetimes in convection, we need to characterize the spatial distribution of supercooled liquid water. To address this observational challenge, ground-based vertically pointing active sensors at the Darwin Atmospheric Radiation Measurement site are used to classify cloud phase within a deep convective cloud. The cloud cannot be fully observed by a lidar due to signal attenuation. Therefore, we developed an objective method for identifying hydrometeor classes, including mixed-phase conditions, using k-means clustering on parameters that describe the shape of the Doppler spectra from vertically pointing Ka-band cloud radar. Furthermore, this approach shows that multiple, overlapping mixed-phase layers exist within the cloud, rather than a single region of supercooled liquid. Diffusional growth calculations show that the conditions for the Wegener-Bergeron-Findeisen process exist within one of these mixed-phase microstructures.
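A minimal version of this k-means phase classification can be put together with scikit-learn, assuming a table of per-sample spectrum-shape parameters (for example spectral width, skewness and peak count) has already been extracted from the radar Doppler spectra. The feature names and cluster count below are illustrative, not the study's configuration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def cluster_spectra(shape_features, n_classes=4, seed=0):
    """Group radar Doppler-spectrum shape parameters into hydrometeor classes.

    shape_features : (n_samples, n_features) array, e.g. columns of spectral
    width, skewness and peak count per range gate. Features are standardized
    before k-means so that no single parameter dominates the distance metric;
    the returned integer labels still need physical interpretation (e.g. which
    cluster corresponds to mixed-phase conditions).
    """
    scaled = StandardScaler().fit_transform(shape_features)
    km = KMeans(n_clusters=n_classes, n_init=10, random_state=seed)
    return km.fit_predict(scaled), km

# Usage sketch: labels, model = cluster_spectra(spectrum_params, n_classes=4)
```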
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Guosheng
2013-03-15
Single-column modeling (SCM) is one of the key elements of Atmospheric Radiation Measurement (ARM) research initiatives for the development and testing of various physical parameterizations to be used in general circulation models (GCMs). The data required for use with an SCM include observed vertical profiles of temperature, water vapor, and condensed water, as well as the large-scale vertical motion and tendencies of temperature, water vapor, and condensed water due to horizontal advection. Surface-based measurements operated at ARM sites and upper-air sounding networks supply most of the required variables for model inputs, but do not provide the horizontal advection term of condensed water. Since surface cloud radar and microwave radiometer observations at ARM sites are single-point measurements, they can provide the amount of condensed water at the location of observation sites, but not a horizontal distribution of condensed water contents. Consequently, observational data for the large-scale advection tendencies of condensed water have not been available to the ARM cloud modeling community based on surface observations alone. This lack of advection data of water condensate could cause large uncertainties in SCM simulations. Additionally, to evaluate GCM cloud physical parameterizations, we need to compare GCM results with observed cloud water amounts over a scale that is large enough to be comparable to what a GCM grid represents. To this end, the point measurements at ARM surface sites are again not adequate. Therefore, cloud water observations over a large area are needed. The main goal of this project is to retrieve ice water contents over an area of 10 x 10 deg. surrounding the ARM sites by combining surface and satellite observations. Built on the progress made during previous ARM research, we have conducted the retrievals of 3-dimensional ice water content by combining surface radar/radiometer and satellite measurements, and have produced 3-D cloud ice water contents in support of cloud modeling activities. The approach of the study is to expand a (surface) point measurement to a (satellite) area measurement. That is, the study takes advantage of the high quality cloud measurements (particularly cloud radar and microwave radiometer measurements) at the point of the ARM sites. We use the cloud ice water characteristics derived from the point measurement to guide/constrain a satellite retrieval algorithm, then use the satellite algorithm to derive the 3-D cloud ice water distributions within a 10° (latitude) x 10° (longitude) area. During the research period, we have developed, validated and improved our cloud ice water retrievals, and have produced and archived at the ARM website, as a PI product, the 3-D cloud ice water contents using combined satellite high-frequency microwave and surface radar observations for the SGP March 2000 IOP and TWP-ICE 2006 IOP over a 10 deg. x 10 deg. area centered on the ARM SGP central facility and Darwin sites. We have also worked on validation of the 3-D ice water product with CloudSat data, on synergy with visible/infrared cloud ice water retrievals for better results at low ice water conditions, and on creating a long-term (several years) ice water climatology in the 10 x 10 deg. areas of the ARM SGP and TWP sites, which was then compared with GCMs.
Tropical Oceanic Precipitation Processes over Warm Pool: 2D and 3D Cloud Resolving Model Simulations
NASA Technical Reports Server (NTRS)
Tao, W.- K.; Johnson, D.
1998-01-01
Rainfall is a key link in the hydrologic cycle as well as the primary heat source for the atmosphere. The vertical distribution of convective latent-heat release modulates the large-scale circulations of the tropics. Furthermore, changes in the moisture distribution at middle and upper levels of the troposphere can affect cloud distributions and cloud liquid water and ice contents. How the incoming solar and outgoing longwave radiation respond to these changes in clouds is a major factor in assessing climate change. Present large-scale weather and climate models simulate cloud processes only crudely, reducing confidence in their predictions on both global and regional scales. One of the most promising methods to test physical parameterizations used in General Circulation Models (GCMs) and climate models is to use field observations together with Cloud Resolving Models (CRMs). The CRMs use more sophisticated and physically realistic parameterizations of cloud microphysical processes, and allow for their complex interactions with solar and infrared radiative transfer processes. The CRMs can reasonably well resolve the evolution, structure, and life cycles of individual clouds and cloud systems. The major objective of this paper is to investigate the latent heating, moisture and momentum budgets associated with several convective systems developed during the TOGA COARE IFA westerly wind burst event (late December 1992). The tool for this study is the Goddard Cumulus Ensemble (GCE) model, which includes a 3-class ice-phase microphysical scheme. The model domain contains 256 x 256 grid points (using 2 km resolution) in the horizontal and 38 grid points (to a depth of 22 km) in the vertical. The 2D domain has 1024 grid points. The simulations are performed over a 7 day time period. We will examine (1) the precipitation processes (i.e., condensation/evaporation) and their interaction with the warm pool; (2) the heating and moisture budgets in the convective and stratiform regions; (3) the cloud (upward-downward) mass fluxes in convective and stratiform regions; (4) characteristics of clouds (such as cloud size, updraft intensity and cloud lifetime) and the comparison of clouds with radar observations; and (5) differences and similarities in the organization of convection between simulated 2D and 3D cloud systems. Preliminary results indicate that there are major differences between 2D and 3D simulated stratiform rainfall amounts and convective updraft and downdraft mass fluxes.
Analysis on the security of cloud computing
NASA Astrophysics Data System (ADS)
He, Zhonglin; He, Yuhua
2011-02-01
Cloud computing is a new technology arising from the fusion of computer technology and Internet development, and it is leading a revolution in the IT and information fields. However, in cloud computing, data and application software are stored at large data centers, and the management of data and services is not completely trustworthy. The resulting security problems are the main obstacle to improving the quality of cloud services. This paper briefly introduces the concept of cloud computing. Considering the characteristics of cloud computing, it constructs a security architecture for cloud computing. At the same time, with an eye toward the security threats cloud computing faces, several corresponding strategies are provided from the perspectives of both cloud computing users and service providers.
NASA Astrophysics Data System (ADS)
Hinojosa-Corona, A.; Nissen, E.; Arrowsmith, R.; Krishnan, A. K.; Saripalli, S.; Oskin, M. E.; Arregui, S. M.; Limon, J. F.
2012-12-01
The Mw 7.2 El Mayor-Cucapah earthquake (EMCE) of 4 April 2010 generated a ~110 km long, NW-SE trending rupture, with normal and right-lateral slip on the order of 2-3 m in the Sierra Cucapah, the northern half, where the surface rupture has the most outstanding expression. Vertical and horizontal surface displacements produced by the EMCE have been addressed separately by other authors with a variety of aerial and satellite remote sensing techniques. Slip variation along the fault and post-seismic scarp erosion and diffusion have been estimated in other studies using terrestrial LiDAR (TLS) on segments of the rupture. To complement these other studies, we computed the 3D deformation field by comparing pre- to post-event point clouds from aerial LiDAR surveys. The pre-event LiDAR, with lower point density (0.013-0.033 pts m-2), required filtering and post-processing before comparison with the denser (9-18 pts m-2), more accurate post-event dataset. The 3-dimensional surface displacement field was determined using an adaptation of the Iterative Closest Point (ICP) algorithm, implemented in the open source Point Cloud Library (PCL). The LiDAR datasets are first split into a grid of windows, and for each one, ICP iteratively converges on the rigid body transformation (comprising a translation and a rotation) that best aligns the pre- to post-event points. Testing on synthetic datasets perturbed with displacements of known magnitude showed that windows with dimensions of 100-200 m gave the best results for datasets with these densities. Here we present the deformation field with detailed displacements in segments of the surface rupture where its expression was recognized by ICP from the point cloud matching, mainly in the sparsely vegetated Sierra Cucapah, with the Borrego and Paso Superior fault segments the most outstanding, where we are able to compare our results with values measured in the field and with TLS results reported in other works. (Figure: EMCE simulated displacement field for a 2 m right-lateral, normal (east block down) slip applied to the pre-event point cloud along the Borrego fault in the Sierra Cucapah; shaded DEM from the post-event point cloud as backdrop.)
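The windowed ICP scheme described above can be illustrated with a minimal sketch (my own simplification, not the PCL implementation used by the authors): within each grid window, pre-event points are iteratively matched to their nearest post-event points and a rigid rotation plus translation is re-estimated with the Kabsch/SVD method; the translation component approximates the local 3D surface displacement. Function names and the window size mentioned in the closing comment are illustrative assumptions.

```python
# Minimal sketch of windowed rigid ICP for pre-/post-event LiDAR alignment.
import numpy as np
from scipy.spatial import cKDTree

def icp_rigid(pre_pts, post_pts, n_iter=30):
    """Return rotation R and translation t aligning pre_pts onto post_pts."""
    tree = cKDTree(post_pts)
    R, t = np.eye(3), np.zeros(3)
    src = pre_pts.copy()
    for _ in range(n_iter):
        _, idx = tree.query(src)              # closest post-event point for each pre-event point
        tgt = post_pts[idx]
        mu_s, mu_t = src.mean(0), tgt.mean(0)
        H = (src - mu_s).T @ (tgt - mu_t)     # cross-covariance of centered correspondences
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T               # best-fit rotation (Kabsch)
        t_step = mu_t - R_step @ mu_s
        src = src @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step   # accumulate the cumulative transform
    return R, t

# Per-window displacement: run icp_rigid on the pre/post points falling inside a
# ~150 m window and report the translation t as the local 3D surface displacement.
```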
Scan Line Based Road Marking Extraction from Mobile LiDAR Point Clouds.
Yan, Li; Liu, Hua; Tan, Junxiang; Li, Zan; Xie, Hong; Chen, Changjun
2016-06-17
Mobile Mapping Technology (MMT) is one of the most important 3D spatial data acquisition technologies. The state-of-the-art mobile mapping systems, equipped with laser scanners and named Mobile LiDAR Scanning (MLS) systems, have been widely used in a variety of areas, especially in road mapping and road inventory. With the commercialization of Advanced Driving Assistance Systems (ADASs) and self-driving technology, there will be a great demand for lane-level detailed 3D maps, and MLS is the most promising technology to generate such maps. Road markings and road edges are necessary information for creating such lane-level detailed 3D maps. This paper proposes a scan line based method to extract road markings from mobile LiDAR point clouds in three steps: (1) preprocessing; (2) road points extraction; (3) road markings extraction and refinement. In the preprocessing step, isolated LiDAR points in the air are removed from the LiDAR point clouds and the point clouds are organized into scan lines. In the road points extraction step, seed road points are first extracted by the Height Difference (HD) between trajectory data and the road surface, then full road points are extracted from the point clouds by moving least squares line fitting. In the road markings extraction and refinement step, the intensity values of road points in a scan line are first smoothed by a dynamic window median filter to suppress intensity noise, then road markings are extracted by the Edge Detection and Edge Constraint (EDEC) method, and Fake Road Marking Points (FRMPs) are eliminated from the detected road markings by segment- and dimensionality-feature-based refinement. The performance of the proposed method is evaluated on three data samples and the experimental results indicate that road points are well extracted from MLS data and road markings are well extracted from road points by the applied method. A quantitative study shows that the proposed method achieves an average completeness, correctness, and F-measure of 0.96, 0.93, and 0.94, respectively. The time complexity analysis shows that the scan line based road markings extraction method proposed in this paper provides a promising alternative for offline road markings extraction from MLS data.
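The intensity smoothing and edge-based marking extraction can be sketched as follows. This is a simplified illustration, not the paper's EDEC implementation: it uses a fixed median-filter window and a plain intensity-jump test on one scan line whose road points are assumed to be already ordered; the kernel size and jump threshold are illustrative assumptions.

```python
# Minimal sketch of per-scan-line road-marking extraction from intensity values.
import numpy as np
from scipy.signal import medfilt

def extract_markings(intensity, kernel=9, jump=15.0):
    """Return a boolean mask of candidate road-marking points on one scan line."""
    smooth = medfilt(intensity.astype(float), kernel_size=kernel)   # suppress intensity noise
    d = np.diff(smooth)
    rising = np.where(d > jump)[0]        # entering a bright (painted) stretch
    falling = np.where(d < -jump)[0]      # leaving it
    mask = np.zeros(intensity.size, bool)
    for r in rising:
        f = falling[falling > r]
        if f.size:                        # pair each rising edge with the next falling edge
            mask[r + 1:f[0] + 1] = True
    return mask
```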
Noctilucent cloud polarimetry: Twilight measurements in a wide range of scattering angles
NASA Astrophysics Data System (ADS)
Ugolnikov, Oleg S.; Maslov, Igor A.; Kozelov, Boris V.; Dlugach, Janna M.
2016-06-01
Wide-field polarization measurements of the twilight sky background during several nights with bright and extended noctilucent clouds in central and northern Russia in 2014 and 2015 are used to build the phase dependence of the degree of polarization of sunlight scattered by cloud particles in a wide range of scattering angles (from 40° to 130°). This range covers the linear polarization maximum near 90° and large-angle slope of the curve. The polarization in this angle range is most sensitive to the particle size. The method of separation of scattering on cloud particles from the twilight background is presented. Results are compared with T-matrix simulations for different sizes and shapes of ice particles; the best-fit model radius of particles (0.06 μm) and maximum radius (about 0.1 μm) are estimated.
Speciation and Determination of Low Concentration of Iron in Beer Samples by Cloud Point Extraction
ERIC Educational Resources Information Center
Khalafi, Lida; Doolittle, Pamela; Wright, John
2018-01-01
A laboratory experiment is described in which students determine the concentration and speciation of iron in beer samples using cloud point extraction and absorbance spectroscopy. The basis of determination is the complexation between iron and 2-(5-bromo-2- pyridylazo)-5-diethylaminophenol (5-Br-PADAP) as a colorimetric reagent in an aqueous…
ERIC Educational Resources Information Center
Bodzewski, Kentaro Y.; Caylor, Ryan L.; Comstock, Ashley M.; Hadley, Austin T.; Imholt, Felisha M.; Kirwan, Kory D.; Oyama, Kira S.; Wise, Matthew E.
2016-01-01
A differential scanning calorimeter was used to study homogeneous nucleation of ice from micron-sized aqueous ammonium sulfate aerosol particles. It is important to understand the conditions at which these particles nucleate ice because of their connection to cirrus cloud formation. Additionally, the concept of freezing point depression, a topic…
NASA Technical Reports Server (NTRS)
Levasseur-Regourd, A. C.; Lasue, J.
2011-01-01
The physical properties of interplanetary dust particles may be approached through observations of the solar light they scatter, especially its polarization, and of their thermal emission. Results, at least near the ecliptic plane, on polarization phase curves and on the heliocentric dependence of the local spatial density, albedo, polarization and temperature are summarized. As far as interpretations through simulations are concerned, a very good fit of the polarization phase curve near 1.5 AU is obtained for a mixture of silicates and more absorbing organic material, with a significant amount of fluffy aggregates. In the 1.5-0.5 AU solar distance range, the temperature variation suggests the presence of a large amount of absorbing organic compounds, while the decrease of the polarization with decreasing solar distance is indeed compatible with a decrease of the organics towards the Sun. Such results favor the predominance of dust of cometary origin in the interplanetary dust cloud, at least below 1.5 AU. The implication of these results for the delivery of complex organic molecules to Earth during the LHB epoch, when the spatial density of the interplanetary dust cloud was orders of magnitude greater than today, is discussed.
NASA Astrophysics Data System (ADS)
Earlie, C. S.; Masselink, G.; Russell, P.; Shail, R.; Kingston, K.
2013-12-01
Our understanding of the evolution of hard rock coastlines is limited due to the episodic nature and 'slow' rate at which changes occur. High-resolution surveying techniques, such as Terrestrial Laser Scanning (TLS), have just begun to be adopted as a method of obtaining detailed point cloud data to monitor topographical changes over short periods of time (weeks to months). However, the difficulties involved in comparing consecutive point cloud data sets in a complex three-dimensional plane, such as occlusion due to surface roughness and the positioning of the data capture point in a consistently changing environment (a beach profile), mean that comparing data sets can lead to errors in the region of 10 - 20 cm. Meshing techniques are often used for point cloud data analysis of simple surfaces, but for surfaces such as rocky cliff faces this technique has been found to be ineffective. Recession rates of hard rock coastlines in the UK are typically determined using aerial photography or airborne LiDAR data, yet the detail of the important changes occurring to the cliff face and toe is missed by such techniques. In this study we apply an algorithm (M3C2 - Multiscale Model to Model Cloud Comparison), initially developed for analysing fluvial morphological change, that directly compares point cloud to point cloud data using surface normals that are consistent with surface roughness and measures the change that occurs along the normal direction (Lague et al., 2013). The surface changes are analysed using a set of user-defined scales based on surface roughness and registration error. Once the correct parameters are defined, the volumetric cliff face changes are calculated by integrating the mean distance between the point clouds. The analysis has been undertaken at two hard rock sites identified for their active erosion, located on the UK's south-west peninsula at Porthleven in south-west Cornwall and Godrevy in north Cornwall. Alongside TLS point cloud data, in-situ measurements of the nearshore wave climate using a pressure transducer, the offshore wave climate from a directional wave buoy, and rainfall records from nearby weather stations were collected. Combining beach elevation information from the georeferenced point clouds with a continuous time series of wave climate provides an indication of the variation in wave energy delivered to the cliff face. The rates of retreat were found to agree with the existing rates that are currently used in shoreline management. The additional geotechnical detail afforded by applying the M3C2 method to a hard rock environment provides not only a means of obtaining volumetric changes with confidence, but also a clear illustration of the locations of failure on the cliff face. Monthly cliff scans help to narrow down the timings of failure under energetic wave conditions or periods of heavy rainfall. The volumetric changes and the regions sensitive to failure established using this method allow us to capture episodic changes to the cliff face at a high resolution (1 - 2 cm) that are otherwise missed using the lower resolution techniques typically used for shoreline management, and to understand in greater detail the geotechnical behaviour of hard rock cliffs and determine rates of erosion with greater accuracy.
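The core of the M3C2-style comparison can be sketched in a few lines. This is my simplified illustration, not the Lague et al. implementation: normals come from a local PCA of the reference cloud, and the change at each point is the mean offset of nearby comparison points projected onto that normal within a spherical neighbourhood (M3C2 proper uses a projection cylinder and multi-scale normals); the neighbourhood sizes are illustrative assumptions.

```python
# Minimal sketch of signed cloud-to-cloud change measured along local surface normals.
import numpy as np
from scipy.spatial import cKDTree

def m3c2_like_distance(ref, cmp_, k_normal=20, radius=0.5):
    tree_ref, tree_cmp = cKDTree(ref), cKDTree(cmp_)
    dists = np.full(len(ref), np.nan)
    for i, p in enumerate(ref):
        # normal from PCA of the k nearest reference points
        _, idx = tree_ref.query(p, k=k_normal)
        nbrs = ref[idx] - ref[idx].mean(0)
        n = np.linalg.svd(nbrs, full_matrices=False)[2][-1]    # smallest-variance direction
        # comparison points within a spherical neighbourhood of p
        j = tree_cmp.query_ball_point(p, radius)
        if j:
            dists[i] = np.mean((cmp_[j] - p) @ n)              # signed change along the normal
    return dists
```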
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benincasa, Samantha M.; Pudritz, Ralph E.; Wadsley, James
We present the results of a study of simulated giant molecular clouds (GMCs) formed in a Milky Way-type galactic disk with a flat rotation curve. This simulation, which does not include star formation or feedback, produces clouds with masses ranging between 10^4 M_⊙ and 10^7 M_⊙. We compare our simulated cloud population to two observational surveys: the Boston University-Five College Radio Astronomy Observatory Galactic Ring Survey and the BIMA All-Disk Survey of M33. An analysis of the global cloud properties as well as a comparison of Larson's scaling relations is carried out. We find that the simulated cloud properties agree well with the observed cloud properties, with the closest agreement occurring between the clouds at comparable resolution in M33. Our clouds are highly filamentary—a property that derives both from their formation due to gravitational instability in the sheared galactic environment, and from cloud-cloud gravitational encounters. We also find that the rate at which potentially star-forming gas accumulates within dense regions—wherein n_thresh ≥ 10^4 cm^-3—is 3% per 10 Myr, in clouds of roughly 10^6 M_⊙. This suggests that star formation rates in observed clouds are related to the rates at which gas can be accumulated into dense subregions within GMCs via filamentary flows. The most internally well-resolved clouds are chosen for listing in a catalog of simulated GMCs—the first of its kind. The cataloged clouds are available as an extracted data set from the global simulation.
3D Building Reconstruction by Multiview Images and the Integrated Application with Augmented Reality
NASA Astrophysics Data System (ADS)
Hwang, Jin-Tsong; Chu, Ting-Chen
2016-10-01
This study presents an approach wherein photographs with a high degree of overlap are captured using a digital camera and used to generate three-dimensional (3D) point clouds via feature point extraction and matching. To reconstruct a building model, an unmanned aerial vehicle (UAV) is used to capture photographs from vertical shooting angles above the building. Multiview images are taken from the ground to eliminate the shielding effect on UAV images caused by trees. Point clouds from the UAV and multiview images are generated via Pix4Dmapper. By merging the two sets of point clouds via tie points, the complete building model is reconstructed. The 3D models are reconstructed using AutoCAD 2016 to generate vectors from the point clouds; SketchUp Make 2016 is used to rebuild a complete building model with textures. To apply 3D building models in urban planning and design, a modern approach is to rebuild the digital models; however, replacing the landscape design and building distribution in real time is difficult as the frequency of building replacement increases. One potential solution to these problems is augmented reality (AR). Using Unity3D and Vuforia to design and implement a smartphone application service, a markerless AR of the building model can be built. This study is aimed at providing technical and design skills related to urban planning, urban design, and building information retrieval using AR.
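The tie-point merging step can be sketched as follows. This is an illustration under my own assumptions rather than the Pix4Dmapper workflow: given matched tie points in the ground-based and UAV clouds, a similarity transform (scale, rotation, translation) is estimated with the Umeyama/Kabsch method and applied to bring one cloud into the other's frame; the variable names in the usage lines are hypothetical.

```python
# Minimal sketch of merging two point clouds via matched tie points.
import numpy as np

def umeyama(src, dst):
    """src, dst: (N, 3) matched tie points. Returns scale s, rotation R, translation t."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d
    H = B.T @ A / len(src)                                    # cross-covariance
    U, S, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(U @ Vt))])       # guard against reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / A.var(0).sum()             # optimal isotropic scale
    t = mu_d - s * R @ mu_s
    return s, R, t

# Hypothetical arrays: tie_ground/tie_uav are (N, 3) matched tie points,
# ground_cloud is the full ground-based cloud to be mapped into the UAV frame.
s, R, t = umeyama(tie_ground, tie_uav)
ground_in_uav_frame = s * ground_cloud @ R.T + t
```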
Combining 3d Volume and Mesh Models for Representing Complicated Heritage Buildings
NASA Astrophysics Data System (ADS)
Tsai, F.; Chang, H.; Lin, Y.-W.
2017-08-01
This study developed a simple but effective strategy to combine 3D volume and mesh models for representing complicated heritage buildings and structures. The idea is to seamlessly integrate 3D parametric or polyhedral models and mesh-based digital surfaces to generate a hybrid 3D model that can take advantage of both modeling methods. The proposed hybrid model generation framework is separated into three phases. Firstly, after acquiring or generating 3D point clouds of the target, these 3D points are partitioned into different groups. Secondly, a parametric or polyhedral model of each group is generated based on plane and surface fitting algorithms to represent the basic structure of that region. A "bare-bones" model of the target can subsequently be constructed by connecting all 3D volume element models. In the third phase, the constructed bare-bones model is used as a mask to remove points enclosed by the bare-bones model from the original point clouds. The remaining points are then connected to form 3D surface mesh patches. The boundary points of each surface patch are identified and projected onto the surfaces of the bare-bones model. Finally, new meshes are created to connect the projected points and the original mesh boundaries, integrating the mesh surfaces with the 3D volume model. The proposed method was applied to an open-source point cloud data set and to point clouds of a local historical structure. Preliminary results indicated that the reconstructed hybrid models using the proposed method can retain both fundamental 3D volume characteristics and accurate geometric appearance with fine details. The reconstructed hybrid models can also be used to represent targets at different levels of detail according to user and system requirements in different applications.
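A minimal sketch of the plane-fitting ingredient assumed in the second phase (my illustration; the paper's grouping and surface-fitting details are not reproduced): a plane is fitted to each point group by PCA/SVD, with the normal taken as the direction of least variance, and the RMS point-to-plane residual reported as a fit-quality check.

```python
# Minimal sketch of least-squares plane fitting for one point group.
import numpy as np

def fit_plane(points):
    """points: (N, 3). Returns (centroid, unit normal, RMS residual)."""
    c = points.mean(0)
    _, _, Vt = np.linalg.svd(points - c, full_matrices=False)
    n = Vt[-1]                                  # least-variance direction = plane normal
    residuals = (points - c) @ n                # signed point-to-plane distances
    return c, n, float(np.sqrt(np.mean(residuals**2)))
```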
Supervised Outlier Detection in Large-Scale Mvs Point Clouds for 3d City Modeling Applications
NASA Astrophysics Data System (ADS)
Stucker, C.; Richard, A.; Wegner, J. D.; Schindler, K.
2018-05-01
We propose to use a discriminative classifier for outlier detection in large-scale point clouds of cities generated via multi-view stereo (MVS) from densely acquired images. What makes outlier removal hard are the varying distributions of inliers and outliers across a scene. Heuristic outlier removal using a specific feature that encodes point distribution often delivers unsatisfying results: although most outliers can be identified correctly (high recall), many inliers are erroneously removed (low precision), too. This aggravates object 3D reconstruction due to missing data. We thus propose to discriminatively learn class-specific distributions directly from the data to achieve high precision. We apply a standard Random Forest classifier that infers a binary label (inlier or outlier) for each 3D point in the raw, unfiltered point cloud and test two approaches for training. In the first, non-semantic approach, features are extracted without considering the semantic interpretation of the 3D points. The trained model approximates the average distribution of inliers and outliers across all semantic classes. Second, semantic interpretation is incorporated into the learning process, i.e. we train separate inlier-outlier classifiers per semantic class (building facades, roof, ground, vegetation, fields, and water). The performance of learned filtering is evaluated on several large SfM point clouds of cities. The results confirm our underlying assumption that discriminatively learning inlier-outlier distributions does improve precision over global heuristics by up to ≈ 12 percentage points. Moreover, semantically informed filtering that models class-specific distributions further improves precision by up to ≈ 10 percentage points, being able to remove very isolated building, roof, and water points while preserving inliers on building facades and vegetation.
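The per-class training scheme can be sketched with scikit-learn. This is a hedged illustration, not the authors' exact pipeline: the feature matrix, labels, and hyperparameters are assumptions, and one binary inlier/outlier Random Forest is trained for each semantic class.

```python
# Minimal sketch of per-semantic-class inlier/outlier Random Forest filters.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_per_class_filters(features, labels_inlier, labels_semantic):
    """features: (N, d) per-point geometric features; labels_inlier: (N,) 0/1;
    labels_semantic: (N,) class ids. Returns {class_id: fitted classifier}."""
    filters = {}
    for cls in np.unique(labels_semantic):
        m = labels_semantic == cls
        clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
        clf.fit(features[m], labels_inlier[m])     # learn the class-specific inlier distribution
        filters[cls] = clf
    return filters

# At test time (hypothetical arrays): keep = filters[cls].predict(features_test) == 1
```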
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knuth, Eldon L.; Miller, David R.; Even, Uzi
2014-12-09
Data extracted from time-of-flight (TOF) measurements made on steady-state He free jets at Göttingen already in 1986 and for pulsed Ne free jets investigated recently at Tel Aviv have been added to an earlier plot of the terminal condensed-phase mass fraction x_2∞ as a function of the dimensionless scaling parameter Γ. Γ characterizes the source (fluid species, temperature, pressure and throat diameter); values of x_2∞ are extracted from TOF measurements using conservation of energy in the free-jet expansion. For nozzles consisting of an orifice in a thin plate, the extracted data yield 22 data points which are correlated satisfactorily by a single curve. The Ne free jets were expanded from a conical nozzle with a 20° half angle; the three extracted data points stand together but apart from the aforementioned curve, indicating that the presence of the conical wall influences significantly the expansion and hence the condensation. The 22 data points for the expansions via an orifice consist of 15 measurements with expansions from the gas-phase side of the binodal curve which crossed the binodal curve downstream from the sonic point and 7 measurements with expansions of the gas-phase product of the flashing which occurred after an expansion from the liquid-phase side of the binodal curve crossed the binodal curve upstream from the sonic point. The association of these 22 points with a single curve supports the alternating-phase model, proposed earlier, for flows with flashing upstream from the sonic point. In order to assess the role of the spinodal curve in such expansions, the spinodal curves for He and Ne were computed using general multi-parameter Helmholtz-free-energy equation-of-state formulations. Then, for the several sets of source-chamber conditions used in the free-jet measurements, the thermodynamic states at key locations in the free-jet expansions (binodal curve, sonic point and spinodal curve) were evaluated, with the expansion presumed to be metastable from the binodal curve to the spinodal curve. TOF distributions with more than two peaks (interpreted earlier as superimposed alternating-state TOF distributions) indicated flashing of the metastable flow downstream from the binodal curve but upstream from the sonic point. This relatively early flashing is apparently due to destabilizing interactions with the walls of the source. If the expansion crosses the binodal curve downstream from the nozzle, the metastable fluid does not interact with surfaces and flashing might be delayed until the expansion reaches the spinodal curve. It is concluded that, if the expansion crosses the binodal curve before reaching the sonic point, the resulting metastable fluid downstream from the binodal curve interacts with the adjacent surfaces and flashes into liquid and vapor phases which expand alternately through the nozzle; the two associated alternating TOF distributions are superposed by the chopping process so that the result has the appearance of a single distribution with three peaks.
Lost in Virtual Reality: Pathfinding Algorithms Detect Rock Fractures and Contacts in Point Clouds
NASA Astrophysics Data System (ADS)
Thiele, S.; Grose, L.; Micklethwaite, S.
2016-12-01
UAV-based photogrammetric and LiDAR techniques provide high resolution 3D point clouds and ortho-rectified photomontages that can capture surface geology in outstanding detail over wide areas. Automated and semi-automated methods are vital to extract full value from these data in practical time periods, though the nuances of geological structures and materials (natural variability in colour and geometry, soft and hard linkage, shadows and multiscale properties) make this a challenging task. We present a novel method for computer assisted trace detection in dense point clouds, using a lowest cost path solver to "follow" fracture traces and lithological contacts between user defined end points. This is achieved by defining a local neighbourhood network where each point in the cloud is linked to its neighbours, and then using a least-cost path algorithm to search this network and estimate the trace of the fracture or contact. A variety of different algorithms can then be applied to calculate the best fit plane, produce a fracture network, or map properties such as roughness, curvature and fracture intensity. Our prototype of this method (Fig. 1) suggests the technique is feasible and remarkably good at following traces under non-optimal conditions such as variable shadow, partial occlusion and complex fracturing. Furthermore, if a fracture is initially mapped incorrectly, the user can easily provide further guidance by defining intermediate waypoints. Future development will include optimization of the algorithm to perform well on large point clouds and modifications that permit the detection of features such as step-overs. We also plan on implementing this approach in an interactive graphical user environment.
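A minimal sketch of the least-cost path idea (my illustration; the paper does not specify its cost function, so the cost here is an assumed Euclidean edge length divided by a per-point "trace likeness" score): build a k-nearest-neighbour graph over the cloud and run Dijkstra's algorithm between two user-picked end points.

```python
# Minimal sketch of following a trace with a least-cost path over a k-NN graph.
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

def trace_path(points, trace_score, start, end, k=10):
    tree = cKDTree(points)
    d, j = tree.query(points, k=k + 1)                    # each point and its k neighbours
    i = np.repeat(np.arange(len(points)), k)
    j, d = j[:, 1:].ravel(), d[:, 1:].ravel()             # drop self-matches
    cost = d / (1e-6 + 0.5 * (trace_score[i] + trace_score[j]))
    graph = coo_matrix((cost, (i, j)), shape=(len(points),) * 2).tocsr()
    _, pred = dijkstra(graph, directed=False, indices=start, return_predecessors=True)
    path, node = [], end
    while node != start and node >= 0:                    # walk predecessors back to the start
        path.append(node)
        node = pred[node]
    return [start] + path[::-1]
```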
NASA Astrophysics Data System (ADS)
Igel, M.
2015-12-01
The tropical atmosphere exhibits an abrupt statistical switch between non-raining and heavily raining states as column moisture increases, across a wide range of length scales. Deep convection occurs at values of column humidity above the transition point and induces drying of moist columns. Using a 1 km resolution, large-domain cloud-resolving model run in radiative-convective equilibrium (RCE), we make clear for the first time how the entire tropical convective cloud population is affected by, and feeds back on, the pickup in heavy precipitation. Shallow convection can act to dry the low levels through weak precipitation or vertical redistribution of moisture, or to moisten toward a transition to deep convection. It is shown that not only can deep convection dehydrate the entire column, it can also dry just the lower layer through intense rain. In the latter case, deep stratiform cloud then forms to dry the upper layer through rain with anomalously high rates for its value of column humidity, until both the total column moisture falls below the critical transition point and the upper levels are cloud free. Thus, all major tropical cloud types are shown to respond strongly to the same critical phase-transition point. This mutual response represents a potentially strong organizational mechanism for convection, and the frequency of, and logical rules determining, physical evolutions between these convective regimes will be discussed. The precise value of the point in total column moisture at which the transition to heavy precipitation occurs is shown to result from two independent thresholds in lower-layer and upper-layer integrated humidity.
Wang, Yunsheng; Weinacker, Holger; Koch, Barbara
2008-01-01
A procedure for both vertical canopy structure analysis and 3D single tree modelling based on a Lidar point cloud is presented in this paper. The whole research area is segmented into small study cells by a raster net. For each cell, a normalized point cloud whose point heights represent the absolute heights of the ground objects is generated from the original Lidar raw point cloud. The main tree canopy layers and the height ranges of the layers are detected according to a statistical analysis of the height distribution probability of the normalized raw points. For the 3D modelling of individual trees, individual trees are detected and delineated not only from the top canopy layer but also from the sub-canopy layer. The normalized points are resampled into a local voxel space. A series of horizontal 2D projection images at the different height levels are then generated with respect to the voxel space. Tree crown regions are detected from the projection images. Individual trees are then extracted by means of a pre-order forest traversal process through all the tree crown regions at the different height levels. Finally, 3D tree crown models of the extracted individual trees are reconstructed. With further analyses of the 3D models of individual tree crowns, important parameters such as crown height range, crown volume and crown contours at the different height levels can be derived. PMID:27879916
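The statistical layer detection can be sketched as follows. This is a hedged stand-in for the paper's analysis, not its actual implementation: the height distribution of the normalized points in one cell is histogrammed and its modes are taken as the main canopy layers; the bin width and prominence threshold are illustrative assumptions.

```python
# Minimal sketch of canopy-layer detection from a normalized height distribution.
import numpy as np
from scipy.signal import find_peaks

def canopy_layers(heights, bin_width=0.5, prominence=0.02):
    """heights: normalized point heights (m) in one cell. Returns layer heights (m)."""
    bins = np.arange(0, heights.max() + bin_width, bin_width)
    density, edges = np.histogram(heights, bins=bins, density=True)
    peaks, _ = find_peaks(density, prominence=prominence)     # modes of the height distribution
    return 0.5 * (edges[peaks] + edges[peaks + 1])            # bin centres of detected layers
```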
Structure Line Detection from LIDAR Point Clouds Using Topological Elevation Analysis
NASA Astrophysics Data System (ADS)
Lo, C. Y.; Chen, L. C.
2012-07-01
Airborne LIDAR point clouds, which have considerable numbers of points on object surfaces, are essential to building modeling. In the last two decades, studies have developed different approaches to identify structure lines using two main approaches, data-driven and model-driven. These studies have shown that automatic modeling processes depend on certain considerations, such as the thresholds used, initial values, designed formulas, and predefined cues. Following the development of laser scanning systems, scanning rates have increased and can provide point clouds with higher point density. Therefore, this study proposes using topological elevation analysis (TEA) to detect structure lines instead of threshold-dependent concepts and predefined constraints. This analysis contains two parts: data pre-processing and structure line detection. To preserve the original elevation information, a pseudo-grid for generating digital surface models is produced during the first part. The highest point in each grid cell is set as the elevation value, and its original three-dimensional position is preserved. In the second part, using TEA, the structure lines are identified based on the topology of local elevation changes in two directions. Because structure lines have certain geometric properties, their locations have small relief in the radial direction and steep elevation changes in the circular direction. Following the proposed approach, TEA can be used to determine 3D line information without selecting thresholds. For validation, the TEA results are compared with those of the region growing approach. The results indicate that the proposed method can produce structure lines using dense point clouds.
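The pseudo-grid step can be sketched in a few lines (a minimal illustration; the cell size is an assumption): the cloud is rasterized, the highest point in each cell supplies the elevation value, and the index of that point is kept so its original 3D position is preserved.

```python
# Minimal sketch of the pseudo-grid DSM that keeps the highest point per cell.
import numpy as np

def pseudo_grid(points, cell=0.5):
    """points: (N, 3). Returns dict mapping (col, row) -> index of the highest point."""
    xy_min = points[:, :2].min(0)
    cols_rows = np.floor((points[:, :2] - xy_min) / cell).astype(int)
    best = {}
    for idx, key in enumerate(map(tuple, cols_rows)):
        if key not in best or points[idx, 2] > points[best[key], 2]:
            best[key] = idx                      # keep the index, so the full 3D point is preserved
    return best
```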
NASA Astrophysics Data System (ADS)
Yang, Fan
Ice particles in atmospheric clouds play an important role in determining cloud lifetime, precipitation and radiation. It is therefore important to understand the whole life cycle of ice particles in the atmosphere, e.g., where they come from (nucleation), how they evolve (growth), and where they go (precipitation). Ice nucleation is the crucial step for ice formation, and in this study we focus mainly on ice nucleation in the lab and its effect on mixed-phase stratiform clouds. In the first half of this study, we investigate the relevance of moving contact lines (i.e., the region where three or more phases meet) to the phenomenon of contact nucleation. High speed video is used to investigate heterogeneous ice nucleation in supercooled droplets resting on cold substrates under two different dynamic conditions: droplet electrowetting and droplet vibration. The results show that contact-line motion is not a sufficient condition to trigger ice nucleation, while locally curved contact lines that can result from contact-line motion are strongly related to ice nucleation. We propose that pressure perturbations due to locally curved contact lines can strongly enhance the ice nucleation rate, which gives another interpretation of the mechanism for contact nucleation. Corresponding theoretical results provide a quantitative connection between pressure perturbations and temperature, offering a useful tool for ice nucleation calculations in atmospheric models. In the second half of the study, we build a minimalist model for long-lifetime mixed-phase stratiform clouds based on stochastic ice nucleation. Our results show that there is a non-linear relationship between ice water content and ice number concentration in the mixed-phase cloud, as long as the volume ice nucleation rate is constant. This statistical property may help identify the source of ice nuclei in mixed-phase clouds. In addition, results from Lagrangian ice particle tracking in time-dependent fields show that long-lifetime ice particles exist in mixed-phase stratiform clouds. We find that small ice particles can be trapped in eddy-like structures. Whether ice particles grow or sublimate depends on the thermodynamic field in the trapping region. This dynamic-thermodynamic coupling effect on the lifetime of ice particles might explain the fast phase-partition changes observed in mixed-phase clouds.
Contextual Classification of Point Cloud Data by Exploiting Individual 3d Neighbourhoods
NASA Astrophysics Data System (ADS)
Weinmann, M.; Schmidt, A.; Mallet, C.; Hinz, S.; Rottensteiner, F.; Jutzi, B.
2015-03-01
The fully automated analysis of 3D point clouds is of great importance in photogrammetry, remote sensing and computer vision. For reliably extracting objects such as buildings, road inventory or vegetation, many approaches rely on the results of a point cloud classification, where each 3D point is assigned a respective semantic class label. Such an assignment, in turn, typically involves statistical methods for feature extraction and machine learning. Whereas the different components in the processing workflow have been investigated extensively, but separately, in recent years, their connection by sharing the results of crucial tasks across all components has not yet been addressed. This connection not only encapsulates the interrelated issues of neighborhood selection and feature extraction, but also the issue of how to involve spatial context in the classification step. In this paper, we present a novel and generic approach for 3D scene analysis which relies on (i) individually optimized 3D neighborhoods for (ii) the extraction of distinctive geometric features and (iii) the contextual classification of point cloud data. For a labeled benchmark dataset, we demonstrate the beneficial impact of involving contextual information in the classification process and show that using individual 3D neighborhoods of optimal size significantly increases the quality of the results for both pointwise and contextual classification.
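The individually optimized neighborhoods and geometric features can be illustrated with a minimal sketch (a hedged example of the general eigenvalue-based approach used in this line of work, not the paper's exact feature set): for each point, several candidate neighborhood sizes k are tried, the covariance eigenvalues are computed, the k minimizing the eigenentropy is kept, and linearity/planarity/scattering features are derived from the selected neighborhood; the candidate k values are illustrative.

```python
# Minimal sketch of eigenentropy-based neighborhood selection and eigenvalue features.
import numpy as np
from scipy.spatial import cKDTree

def optimal_features(points, i, tree, ks=(10, 25, 50, 75, 100)):
    """Return (optimal k, (linearity, planarity, scattering)) for point i."""
    best = None
    for k in ks:
        _, idx = tree.query(points[i], k=k)
        nbrs = points[idx] - points[idx].mean(0)
        ev = np.sort(np.linalg.eigvalsh(nbrs.T @ nbrs / k))[::-1]     # l1 >= l2 >= l3
        ev = np.clip(ev / ev.sum(), 1e-12, None)
        entropy = -np.sum(ev * np.log(ev))                            # eigenentropy
        if best is None or entropy < best[0]:
            feats = ((ev[0] - ev[1]) / ev[0],      # linearity
                     (ev[1] - ev[2]) / ev[0],      # planarity
                     ev[2] / ev[0])                # scattering
            best = (entropy, k, feats)
    return best[1], best[2]

# Usage on a hypothetical (N, 3) array 'points': tree = cKDTree(points)
```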
Feature relevance assessment for the semantic interpretation of 3D point cloud data
NASA Astrophysics Data System (ADS)
Weinmann, M.; Jutzi, B.; Mallet, C.
2013-10-01
The automatic analysis of large 3D point clouds represents a crucial task in photogrammetry, remote sensing and computer vision. In this paper, we propose a new methodology for the semantic interpretation of such point clouds which involves feature relevance assessment in order to reduce both processing time and memory consumption. Given a standard benchmark dataset with 1.3 million 3D points, we first extract a set of 21 geometric 3D and 2D features. Subsequently, we apply a classifier-independent ranking procedure which involves a general relevance metric in order to derive compact and robust subsets of versatile features which are generally applicable for a large variety of subsequent tasks. This metric is based on 7 different feature selection strategies and thus addresses different intrinsic properties of the given data. For the example of semantically interpreting 3D point cloud data, we demonstrate the great potential of smaller subsets consisting of only the most relevant features with 4 different state-of-the-art classifiers. The results reveal that, instead of including as many features as possible in order to compensate for a lack of knowledge, a crucial task such as scene interpretation can be carried out with only a few versatile features and even improved accuracy.
Visualization of the Construction of Ancient Roman Buildings in Ostia Using Point Cloud Data
NASA Astrophysics Data System (ADS)
Hori, Y.; Ogawa, T.
2017-02-01
The implementation of laser scanning in the field of archaeology provides us with an entirely new dimension in research and surveying. It allows us to digitally recreate individual objects, or entire cities, using millions of three-dimensional points grouped together in what is referred to as "point clouds". In addition, visualizations of the point cloud data, which can be used in the final reports of archaeologists and architects, are usually produced as JPG or TIFF files. Not only the visualization of point cloud data, but also the re-examination of older data and new surveys of the construction of Roman buildings, applying remote-sensing technology for precise and detailed measurements, afford new information that may lead to revised drawings of ancient buildings which had previously been adduced as evidence without any consideration of their accuracy, and can ultimately support new research on ancient buildings. We used laser scanners in the field because of their speed, comprehensive coverage, accuracy and flexibility of data manipulation. Therefore, we "skipped" much of the post-processing and focused on the images created from the meta-data, simply aligned using a tool that extends an automatic feature-matching algorithm and a popular renderer that can provide graphic results.
Satellite Articulation Characterization from an Image Trajectory Matrix Using Optimization
NASA Astrophysics Data System (ADS)
Curtis, D. H.; Cobb, R. G.
Autonomous on-orbit satellite servicing and inspection benefits from an inspector satellite that can autonomously gain as much information as possible about the primary satellite. This includes the performance of articulated objects such as solar arrays, antennas, and sensors. This paper presents a method of characterizing the articulation of a satellite using resolved monocular imagery. A simulated point cloud representing a nominal satellite with articulating solar panels and a complex articulating appendage is developed and projected to the image coordinates that would be seen from an inspector following a given inspection route. A method is developed to analyze the resulting image trajectory matrix. The developed method takes advantage of the fact that the route of the inspector satellite is known, to assist in the segmentation of the points into different rigid bodies, the creation of the 3D point cloud, and the identification of the articulation parameters. Once the point cloud and the articulation parameters are calculated, they can be compared to the known truth. The error in the calculated point cloud is determined, as well as the difference between the true workspace of the satellite and the calculated workspace. These metrics can be used to compare the quality of various inspection routes for characterizing the satellite and its articulation.
Voxel- and Graph-Based Point Cloud Segmentation of 3d Scenes Using Perceptual Grouping Laws
NASA Astrophysics Data System (ADS)
Xu, Y.; Hoegner, L.; Tuttas, S.; Stilla, U.
2017-05-01
Segmentation is the fundamental step for recognizing and extracting objects from the point clouds of 3D scenes. In this paper, we present a strategy for point cloud segmentation using a voxel structure and graph-based clustering with perceptual grouping laws, which allows a learning-free and completely automatic but parametric solution for segmenting 3D point clouds. More precisely, two segmentation methods utilizing voxel and supervoxel structures are reported and tested. The voxel-based data structure can increase the efficiency and robustness of the segmentation process, suppressing the negative effects of noise, outliers, and uneven point densities. The clustering of voxels and supervoxels is carried out using graph theory on the basis of local contextual information, whereas conventional clustering algorithms commonly use merely pairwise information. Through the use of perceptual laws, our method conducts the segmentation in a purely geometric way, avoiding the use of RGB color and intensity information, so that it can be applied in more general settings. Experiments using different datasets have demonstrated that our proposed methods can achieve good results, especially for complex scenes and nonplanar object surfaces. Quantitative comparisons between our methods and other representative segmentation methods also confirm the effectiveness and efficiency of our proposals.
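The voxel-and-graph idea can be sketched with a deliberately simplified example (my illustration: a plain 26-connectivity adjacency criterion stands in for the paper's perceptual-grouping rules, and the voxel size is an assumption): points are mapped to voxels, adjacent occupied voxels are linked in a graph, and connected components give the per-point segment labels.

```python
# Minimal sketch of voxel-based clustering via graph connected components.
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def voxel_segments(points, voxel=0.2):
    keys = np.floor((points - points.min(0)) / voxel).astype(int)
    vox, inverse = np.unique(keys, axis=0, return_inverse=True)      # occupied voxels
    lookup = {tuple(v): i for i, v in enumerate(vox)}
    rows, cols = [], []
    offsets = [o for o in np.ndindex(3, 3, 3) if o != (1, 1, 1)]     # 26 neighbours
    for i, v in enumerate(vox):
        for o in offsets:
            j = lookup.get(tuple(v + np.array(o) - 1))
            if j is not None:                                        # link adjacent occupied voxels
                rows.append(i)
                cols.append(j)
    adj = coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(len(vox),) * 2)
    _, voxel_label = connected_components(adj, directed=False)
    return voxel_label[inverse]                                      # per-point segment id
```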
Hierarchical extraction of urban objects from mobile laser scanning data
NASA Astrophysics Data System (ADS)
Yang, Bisheng; Dong, Zhen; Zhao, Gang; Dai, Wenxia
2015-01-01
Point clouds collected in urban scenes contain a huge number of points (e.g., billions), numerous objects with significant size variability, complex and incomplete structures, and variable point densities, raising great challenges for the automated extraction of urban objects in the field of photogrammetry, computer vision, and robotics. This paper addresses these challenges by proposing an automated method to extract urban objects robustly and efficiently. The proposed method generates multi-scale supervoxels from 3D point clouds using the point attributes (e.g., colors, intensities) and spatial distances between points, and then segments the supervoxels rather than individual points by combining graph based segmentation with multiple cues (e.g., principal direction, colors) of the supervoxels. The proposed method defines a set of rules for merging segments into meaningful units according to types of urban objects and forms the semantic knowledge of urban objects for the classification of objects. Finally, the proposed method extracts and classifies urban objects in a hierarchical order ranked by the saliency of the segments. Experiments show that the proposed method is efficient and robust for extracting buildings, streetlamps, trees, telegraph poles, traffic signs, cars, and enclosures from mobile laser scanning (MLS) point clouds, with an overall accuracy of 92.3%.
Gabara, Grzegorz; Sawicki, Piotr
2018-03-06
The paper presents the results of testing a proposed image-based point clouds measuring method for the determination of the geometric parameters of a railway track. The study was performed based on a configuration of digital images and a reference control network. A DSLR (Digital Single-Lens Reflex) Nikon D5100 camera was used to acquire six digital images of the tested section of railway tracks. The dense point clouds and the 3D mesh model were generated with the use of two software systems, RealityCapture and PhotoScan, which have implemented different matching and 3D object reconstruction techniques: Multi-View Stereo and Semi-Global Matching, respectively. The study found that both applications could generate appropriate 3D models. The final meshes of the 3D models were filtered with the MeshLab software. The CloudCompare application was used to determine the track gauge and cant for defined cross-sections, and the results obtained from point clouds by dense image matching techniques were compared with the results of direct geodetic measurements. The obtained RMS difference in the horizontal (gauge) and vertical (cant) plane was RMS∆ < 0.45 mm. The achieved accuracy meets the accuracy condition for measurements and inspection of rail tracks (error m < 1 mm) specified in the Polish branch railway instruction Id-14 (D-75) and the European technical norm EN 13848-4:2011.
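The cross-section evaluation can be sketched as a small geometric computation. This is my simplification, not the CloudCompare workflow: rail-head points for the left and right rail are assumed to be already extracted for one cross-section, the gauge is taken as the horizontal separation of the two rail tops, and the cant as their height difference (real gauge is measured between the inner rail faces a fixed distance below the rail top, so this is only a proxy).

```python
# Minimal sketch of deriving gauge and cant from one point cloud cross-section.
import numpy as np

def gauge_and_cant(left_rail_pts, right_rail_pts):
    """Each input: (N, 3) points of one rail head in a cross-section plane
    (x across track, z up). Returns (gauge, cant) in the input units."""
    left_top = left_rail_pts[np.argmax(left_rail_pts[:, 2])]     # highest point of each rail head
    right_top = right_rail_pts[np.argmax(right_rail_pts[:, 2])]
    gauge = abs(right_top[0] - left_top[0])                      # horizontal separation (proxy)
    cant = right_top[2] - left_top[2]                            # height difference (superelevation)
    return gauge, cant
```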
2. HORSESHOE CURVE IN GLACIER POINT ROAD NEAR GLACIER POINT. ...
2. HORSESHOE CURVE IN GLACIER POINT ROAD NEAR GLACIER POINT. HALF DOME AT CENTER REAR. LOOKING NNE. GIS N-37 43 44.3 / W-119 34 14.1 - Glacier Point Road, Between Chinquapin Flat & Glacier Point, Yosemite Village, Mariposa County, CA
Object recognition and localization from 3D point clouds by maximum-likelihood estimation
NASA Astrophysics Data System (ADS)
Dantanarayana, Harshana G.; Huntley, Jonathan M.
2017-08-01
We present an algorithm based on maximum-likelihood analysis for the automated recognition of objects, and estimation of their pose, from 3D point clouds. Surfaces segmented from depth images are used as the features, unlike 'interest point'-based algorithms which normally discard such data. Compared to the 6D Hough transform, it has negligible memory requirements, and is computationally efficient compared to iterative closest point algorithms. The same method is applicable to both the initial recognition/pose estimation problem as well as subsequent pose refinement through appropriate choice of the dispersion of the probability density functions. This single unified approach therefore avoids the usual requirement for different algorithms for these two tasks. In addition to the theoretical description, a simple 2 degrees of freedom (d.f.) example is given, followed by a full 6 d.f. analysis of 3D point cloud data from a cluttered scene acquired by a projected fringe-based scanner, which demonstrated an RMS alignment error as low as 0.3 mm.
OFF-AXIS THERMAL AND SYNCHROTRON EMISSION FOR SHORT GAMMA RAY BURST
NASA Astrophysics Data System (ADS)
Xie, Xiaoyi
2018-01-01
We present light curves of photospheric and synchrotron emission from a relativistic jet propagating through the ejecta cloud of a neutron star merger. We use a moving-mesh relativistic hydrodynamics code with adaptive mesh refinement to compute the continuous evolution of the jet over 13 orders of magnitude in radius, from the scale of the central merger engine all the way through the late afterglow phase. As the jet propagates through the cloud it forms a hot cocoon surrounding the jet core. We find that the photospheric emission released by the hot cocoon is bright for on-axis observers and is detectable for off-axis observers at a wide range of observing angles for sufficiently close sources. As the jet and cocoon drive an external shock into the surrounding medium, we compute synchrotron light curves and find bright emission for off-axis observers which differs from that of top-hat Blandford-McKee jets, especially for lower explosion energies.
Automatic pole-like object modeling via 3D part-based analysis of point cloud
NASA Astrophysics Data System (ADS)
He, Liu; Yang, Haoxiang; Huang, Yuchun
2016-10-01
Pole-like objects, including trees, lampposts and traffic signs, are an indispensable part of urban infrastructure. With the advance of vehicle-based laser scanning (VLS), massive point clouds of roadside urban areas are increasingly applied in 3D digital city modeling. Based on the property that different pole-like objects have various canopy parts but similar trunk parts, this paper proposes a 3D part-based shape analysis to robustly extract, identify and model pole-like objects. The proposed method includes: 3D clustering and recognition of trunks, voxel growing, and part-based 3D modeling. After preprocessing, a trunk center is identified as a point that has a local density peak and the largest minimum inter-cluster distance. Starting from the trunk centers, the remaining points are iteratively clustered to the same centers as their nearest point with higher density. To eliminate noisy points, cluster borders are refined by trimming boundary outliers. Then, candidate trunks are extracted based on the clustering results in three orthogonal planes by shape analysis. Voxel growing obtains the complete pole-like objects regardless of overlap. Finally, the entire trunk, branch and crown parts are analyzed to obtain seven feature parameters. These parameters are used to model the three parts respectively and obtain a single part-assembled 3D model. The proposed method is tested using the VLS-based point cloud of Wuhan University, China. The point cloud includes many kinds of trees, lampposts and other pole-like posts under different occlusions and overlaps. Experimental results show that the proposed method can extract the exact attributes and model the roadside pole-like objects efficiently.
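The trunk-center identification follows the density-peak idea, which can be sketched as below (a hedged illustration: the search radius, the number of returned centers, and the use of 2D horizontal coordinates are my assumptions): each point gets a local density and a distance to the nearest point of higher density, and points scoring high on both are taken as candidate trunk centers.

```python
# Minimal sketch of density-peak style trunk-center detection (O(N^2), for illustration).
import numpy as np
from scipy.spatial import cKDTree

def trunk_centers(points_2d, radius=0.5, n_centers=10):
    """points_2d: (N, 2) horizontal coordinates. Returns indices of candidate centers."""
    tree = cKDTree(points_2d)
    density = np.array([len(tree.query_ball_point(p, radius)) for p in points_2d])
    delta = np.empty(len(points_2d))
    for i, p in enumerate(points_2d):
        higher = np.where(density > density[i])[0]
        delta[i] = (np.linalg.norm(points_2d[higher] - p, axis=1).min()
                    if higher.size else np.inf)                 # global density peak itself
    delta[np.isinf(delta)] = delta[np.isfinite(delta)].max()
    return np.argsort(density * delta)[-n_centers:]             # high density AND high delta
```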
Surface representations of two- and three-dimensional fluid flow topology
NASA Technical Reports Server (NTRS)
Helman, James L.; Hesselink, Lambertus
1990-01-01
We discuss our work using critical point analysis to generate representations of the vector field topology of numerical flow data sets. Critical points are located and characterized in a two-dimensional domain, which may be either a two-dimensional flow field or the tangential velocity field near a three-dimensional body. Tangent curves are then integrated out along the principal directions of certain classes of critical points. The points and curves are linked to form a skeleton representing the two-dimensional vector field topology. When generated from the tangential velocity field near a body in a three-dimensional flow, the skeleton includes the critical points and curves which provide a basis for analyzing the three-dimensional structure of the flow separation. The points along the separation curves in the skeleton are used to start tangent curve integrations to generate surfaces representing the topology of the associated flow separations.
Elliptic Curve Integral Points on y^2 = x^3 + 3x - 14
NASA Astrophysics Data System (ADS)
Zhao, Jianhong
2018-03-01
The positive integer points and integral points of elliptic curves are very important in number theory and arithmetic algebra, and they have a wide range of applications in cryptography and other fields. There are some results on the positive integer points of the elliptic curve y^2 = x^3 + ax + b, a, b ∈ Z. In 1987, D. Zagier posed the question of the integer points on y^2 = x^3 - 27x + 62, which counts a great deal for the study of the arithmetic properties of elliptic curves. In 2009, Zhu H L and Chen J H solved the problem of the integer points on y^2 = x^3 - 27x + 62 by using algebraic number theory and the P-adic analysis method. In 2010, by using an elementary method, Wu H M obtained all the integral points of the elliptic curve y^2 = x^3 - 27x - 62. In 2015, Li Y Z and Cui B J solved the problem of the integer points on y^2 = x^3 - 21x - 90 by using an elementary method. In 2016, Guo J solved the problem of the integer points on y^2 = x^3 + 27x + 62 by using an elementary method. In 2017, Guo J proved that y^2 = x^3 - 21x + 90 has no integer points by using an elementary method. Up to now, there are no relevant conclusions on the integral points of the elliptic curve y^2 = x^3 + 3x - 14, which is the subject of this paper. By using congruence and the Legendre symbol, it can be proved that the elliptic curve y^2 = x^3 + 3x - 14 has only one integer point: (x, y) = (2, 0).
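As a quick numerical illustration only (a bounded search is not a proof; the paper's argument relies on congruences and the Legendre symbol), one can scan a range of x for values where x^3 + 3x - 14 is a perfect square:

```python
# Brute-force check of integer points on y^2 = x^3 + 3x - 14 within a bound.
from math import isqrt

def integer_points(bound):
    pts = []
    for x in range(-bound, bound + 1):
        rhs = x**3 + 3*x - 14
        if rhs < 0:
            continue
        y = isqrt(rhs)
        if y * y == rhs:                          # rhs is a perfect square
            pts.extend([(x, y), (x, -y)] if y else [(x, 0)])
    return pts

print(integer_points(10**4))   # -> [(2, 0)] within this bound, consistent with the theorem
```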
H-alpha images of the Cygnus Loop - A new look at shock-wave dynamics in an old supernova remnant
NASA Technical Reports Server (NTRS)
Fesen, Robert A.; Kwitter, Karen B.; Downes, Ronald A.
1992-01-01
Attention is given to deep H-alpha images of portions of the east, west, and southwest limbs of the Cygnus Loop which illustrate several aspects of shock dynamics in a multiphase interstellar medium. An H-alpha image of the isolated eastern shocked cloud reveals cloud deformation and gas stripping along the cloud's edges, shock front diffraction and reflection around the rear of the cloud, and interior remnant emission due to upstream shock reflection. A faint Balmer-dominated filament is identified 30 arcmin further west of the remnant's bright line of western radiative filaments. This detection indicates a far more westerly intercloud shock front position than previously realized, and resolves the nature of the weak X-ray, optical, and nonthermal radio emission observed west of NGC 6960. Strongly curved Balmer-dominated filaments along the remnant's west and southwest edge may indicate shock diffraction caused by shock wave passage in between clouds.
Cloud-Scale Vertical Velocity and Turbulent Dissipation Rate Retrievals
Shupe, Matthew
2013-05-22
Time-height fields of retrieved in-cloud vertical wind velocity and turbulent dissipation rate, both retrieved primarily from vertically-pointing, Ka-band cloud radar measurements. Files are available for manually-selected, stratiform, mixed-phase cloud cases observed at the North Slope of Alaska (NSA) site during periods covering the Mixed-Phase Arctic Cloud Experiment (MPACE, late September through early November 2004) and the Indirect and Semi-Direct Aerosol Campaign (ISDAC, April-early May 2008). These time periods will be expanded in a future submission.
Liu, Wenyang; Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J; Sawant, Amit; Ruan, Dan
2015-11-01
To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). The authors have developed a level-set based surface reconstruction method for point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. In contrast to explicit/discrete meshing-type schemes, their continuous representation is particularly advantageous for subsequent surface registration and motion tracking by eliminating the need to maintain explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiments, the authors generated a series of surfaces, each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched areas had different degrees of noise and missing data, since VisionRT has difficulties in detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated the reconstructed surfaces by comparing them against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiments, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground truth, the authors qualitatively validated the reconstructed surfaces by comparing the local geometry, specifically the mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. On phantom point clouds, their method achieved submillimeter reconstruction RMSE under different configurations, demonstrating quantitatively the fidelity of the proposed method in preserving local structural properties of the underlying surface in the presence of noise and missing measurements, and its robustness toward variations of such characteristics. On point clouds from the human subject, the proposed method successfully reconstructed all patient surfaces, filling regions where raw point coordinate readings were missing. Within two comparable regions of interest in the chest area, similar mean curvature distributions were obtained from both their reconstructed surface and the CT surface, with mean and standard deviation of (μ_recon = -2.7×10^-3 mm^-1, σ_recon = 7.0×10^-3 mm^-1) and (μ_CT = -2.5×10^-3 mm^-1, σ_CT = 5.3×10^-3 mm^-1), respectively. The agreement of local geometry properties between the reconstructed surfaces and the CT surface demonstrates the ability of the proposed method to faithfully represent the underlying patient surface. The authors have developed and integrated an accurate level-set based continuous surface reconstruction method for point clouds acquired by a 3D surface photogrammetry system. The proposed method has generated a continuous representation of the underlying phantom and patient surfaces with good robustness against noise and missing measurements. It serves as an important first step for further development of motion tracking methods during radiotherapy.
Analysis of Cumulus Solar Irradiance Reflectance (CSIR) Events
NASA Technical Reports Server (NTRS)
Laird, John L.; Harshvardham
1996-01-01
Clouds are extremely important with regard to the transfer of solar radiation at the earth's surface. This study investigates Cumulus Solar Irradiance Reflection (CSIR) using ground-based pyranometers. CSIR events are short-term increases in solar radiation observed at the surface as a result of reflection off the sides of convective clouds. When sun-cloud observer geometry is favorable, these occurrences produce characteristic spikes in the pyranometer traces and solar irradiance values may exceed expected clear-sky values. Ultraviolet CSIR events were investigated during the summer of 1995 using Yankee Environmental Systems UVA-1 and UVB-1 pyranometers. Observed data were compared to clear-sky curves which were generated using a third degree polynomial best-fit line technique. Periods during which the observed data exceeded this clear-sky curve were identified as CSIR events. The magnitude of a CSIR event was determined by two different quantitative calculations. The MAC (magnitude above clear-sky) is an absolute measure of the difference between the observed and clear-sky irradiances. Maximum MAC values of 3.4 W m⁻² and 0.069 W m⁻² were observed at the UV-A and UV-B wavelengths, respectively. The second calculation determined the percentage above clear-sky (PAC) which indicated the relative magnitude of a CSIR event. Maximum UV-A and UV-B PAC magnitudes of 10.1% and 7.8%, respectively, were observed during the study. Also of interest was the duration of the CSIR events which is a function of sun-cloud-sensor geometry and the speed of cloud propagation over the measuring site. In both the UV-A and UV-B wavelengths, significant CSIR durations of up to 30 minutes were observed.
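For concreteness, the MAC and PAC measures described above can be computed as in the following sketch, which fits a third-degree polynomial clear-sky curve and flags exceedances; the array names, the synthetic trace and the clear-sky mask are invented stand-ins, not the study's data or code.

```python
import numpy as np

# Hypothetical inputs: time in hours, observed irradiance, and a boolean mask
# marking visually identified clear-sky periods used for the polynomial fit.
time = np.linspace(10.0, 16.0, 721)
observed = 1.5 + 0.5 * np.sin((time - 10.0) / 6.0 * np.pi)   # placeholder trace
observed[300:330] += 0.2                                      # a synthetic CSIR spike
clear_mask = np.ones_like(time, dtype=bool)
clear_mask[300:330] = False

# Third-degree polynomial best fit to the clear-sky samples.
coeffs = np.polyfit(time[clear_mask], observed[clear_mask], deg=3)
clear_sky = np.polyval(coeffs, time)

# CSIR events: observed irradiance exceeds the fitted clear-sky curve.
event = observed > clear_sky
mac = observed - clear_sky                 # magnitude above clear sky (W m^-2)
pac = 100.0 * mac / clear_sky              # percentage above clear sky (%)

print("max MAC = %.3f W m^-2" % mac[event].max())
print("max PAC = %.1f %%" % pac[event].max())
```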
Underwater 3d Modeling: Image Enhancement and Point Cloud Filtering
NASA Astrophysics Data System (ADS)
Sarakinou, I.; Papadimitriou, K.; Georgoula, O.; Patias, P.
2016-06-01
This paper examines the results of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically, it evaluates the combined effects of the manual editing of the images' radiometry (captured at shallow depths) and the selection of parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck and a small boat's wreck) captured at three different depths (3.5 m, 10 m and 14 m respectively). Four models have been created from the first dataset (seafloor) in order to evaluate the results of applying image enhancement techniques and point cloud filtering. The main process for this preliminary study included a) the definition of parameters for the point cloud filtering and the creation of a reference model, b) the radiometric editing of the images, followed by the creation of three improved models, and c) the assessment of results by comparing the visual and geometric quality of the improved models against the reference one. Finally, the selected technique is tested on two other datasets in order to examine its appropriateness for different depths (10 m and 14 m) and different objects (part of a wreck and a small boat's wreck) in the context of ongoing research in the Laboratory of Photogrammetry and Remote Sensing.
NASA Astrophysics Data System (ADS)
Owers, Christopher J.; Rogers, Kerrylee; Woodroffe, Colin D.
2018-05-01
Above-ground biomass represents a small yet significant contributor to carbon storage in coastal wetlands. Despite this, above-ground biomass is often poorly quantified, particularly in areas where vegetation structure is complex. Traditional methods for providing accurate estimates involve harvesting vegetation to develop mangrove allometric equations and to quantify saltmarsh biomass in quadrats. However, broad-scale application of these methods may not capture structural variability in vegetation, resulting in a loss of detail and estimates with considerable uncertainty. Terrestrial laser scanning (TLS) collects high-resolution three-dimensional point clouds capable of providing detailed structural morphology of vegetation. This study demonstrates that TLS is a suitable non-destructive method for estimating biomass of structurally complex coastal wetland vegetation. We compare volumetric modelling techniques (3-D surface reconstruction and rasterised volume) and point cloud elevation histogram modelling to estimate biomass. Our results show that current volumetric modelling approaches for estimating TLS-derived biomass are comparable to traditional mangrove allometrics and saltmarsh harvesting. However, volumetric modelling approaches oversimplify vegetation structure by under-utilising the large amount of structural information provided by the point cloud. The point cloud elevation histogram model presented in this study, as an alternative to volumetric modelling, utilises all of the information within the point cloud, as opposed to sub-sampling based on specific criteria. This method is simple but highly effective for both mangrove (r² = 0.95) and saltmarsh (r² > 0.92) vegetation. Our results provide evidence that application of TLS in coastal wetlands is an effective non-destructive method to accurately quantify biomass for structurally complex vegetation.
NASA Technical Reports Server (NTRS)
Ko, William L.; Fleischer, Van Tran
2011-01-01
The Ko displacement theory originally developed for shape predictions of straight beams is extended to shape predictions of curved beams. The surface strains needed for shape predictions were analytically generated from finite-element nodal stress outputs. With the aid of finite-element displacement outputs, mathematical functional forms for curvature-effect correction terms are established and incorporated into straight-beam deflection equations for shape predictions of both cantilever and two-point supported curved beams. The newly established deflection equations for cantilever curved beams could provide quite accurate shape predictions for different cantilever curved beams, including the quarter-circle cantilever beam. Furthermore, the newly formulated deflection equations for two-point supported curved beams could provide accurate shape predictions for a range of two-point supported curved beams, including the full circular ring. Accuracy of the newly developed curved-beam deflection equations is validated through shape prediction analysis of curved beams embedded in the windward shallow spherical shell of a generic crew exploration vehicle. A single-point collocation method for optimization of shape predictions is discussed in detail.
NASA Technical Reports Server (NTRS)
Hung, R. J.; Tsao, Y. D.
1988-01-01
Rawinsonde data and geosynchronous satellite imagery were used to investigate the life cycles of severe convective storms at St. Anthony, Minnesota. It is found that the fully developed storm clouds, with overshooting cloud tops penetrating above the tropopause, collapsed about three minutes before the touchdown of the tornadoes. Results indicate that the probability of an outbreak of tornadoes causing greater damage increases when the potential energy storage per unit area for overshooting cloud tops penetrating the tropopause is higher. It is also found that clouds with a lower moisture content are less likely to grow into a storm cloud than clouds with a higher moisture content.
FUNCTION GENERATOR FOR ANALOGUE COMPUTERS
Skramstad, H.K.; Wright, J.H.; Taback, L.
1961-12-12
An improved analogue computer is designed which can be used to determine the final ground position of radioactive fallout particles in an atomic cloud. The computer determines the fallout pattern on the basis of known wind velocity and direction at various altitudes, and intensity of radioactivity in the mushroom cloud as a function of particle size and initial height in the cloud. The output is then displayed on a cathode-ray tube so that the average or total luminance of the tube screen at any point represents the intensity of radioactive fallout at the geographical location represented by that point. (AEC)
Machine learning search for variable stars
NASA Astrophysics Data System (ADS)
Pashchenko, Ilya N.; Sokolovsky, Kirill V.; Gavras, Panagiotis
2018-04-01
Photometric variability detection is often considered as a hypothesis testing problem: an object is variable if the null hypothesis that its brightness is constant can be ruled out given the measurements and their uncertainties. The practical applicability of this approach is limited by uncorrected systematic errors. We propose a new variability detection technique sensitive to a wide range of variability types while being robust to outliers and underestimated measurement uncertainties. We consider variability detection as a classification problem that can be approached with machine learning. Logistic Regression (LR), Support Vector Machines (SVM), k Nearest Neighbours (kNN), Neural Nets (NN), Random Forests (RF), and Stochastic Gradient Boosting classifier (SGB) are applied to 18 features (variability indices) quantifying scatter and/or correlation between points in a light curve. We use a subset of Optical Gravitational Lensing Experiment phase two (OGLE-II) Large Magellanic Cloud (LMC) photometry (30 265 light curves) that was searched for variability using traditional methods (168 known variable objects) as the training set and then apply the NN to a new test set of 31 798 OGLE-II LMC light curves. Among 205 candidates selected in the test set, 178 are real variables, while 13 low-amplitude variables are new discoveries. The machine learning classifiers considered are found to be more efficient (select more variables and fewer false candidates) compared to traditional techniques using individual variability indices or their linear combination. The NN, SGB, SVM, and RF show a higher efficiency compared to LR and kNN.
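For orientation, a minimal sketch of this kind of setup is shown below: several standard scikit-learn classifiers trained on a table of per-light-curve features with a rare positive class. The feature matrix and labels are synthetic stand-ins, not OGLE-II variability indices, and the classifier settings are illustrative defaults rather than the tuned models of the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import precision_score, recall_score

# Synthetic stand-in for 18 variability indices with a rare "variable" class.
X, y = make_classification(n_samples=5000, n_features=18, n_informative=10,
                           weights=[0.99, 0.01], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

classifiers = {
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "kNN": KNeighborsClassifier(),
    "NN": MLPClassifier(max_iter=1000),
    "RF": RandomForestClassifier(n_estimators=300),
    "SGB": GradientBoostingClassifier(),
}
for name, clf in classifiers.items():
    model = make_pipeline(StandardScaler(), clf)   # scale features, then classify
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(name,
          "precision=%.2f" % precision_score(y_te, pred, zero_division=0),
          "recall=%.2f" % recall_score(y_te, pred))
```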
Reconstruction of Consistent 3d CAD Models from Point Cloud Data Using a Priori CAD Models
NASA Astrophysics Data System (ADS)
Bey, A.; Chaine, R.; Marc, R.; Thibault, G.; Akkouche, S.
2011-09-01
We address the reconstruction of 3D CAD models from point cloud data acquired in industrial environments, using a pre-existing 3D model as an initial estimate of the scene to be processed. Indeed, this prior knowledge can be used to drive the reconstruction so as to generate an accurate 3D model matching the point cloud. We more particularly focus our work on the cylindrical parts of the 3D models. We propose to state the problem in a probabilistic framework: we have to search for the 3D model which maximizes some probability taking several constraints into account, such as the relevancy with respect to the point cloud and the a priori 3D model, and the consistency of the reconstructed model. The resulting optimization problem can then be handled using a stochastic exploration of the solution space, based on the random insertion of elements in the configuration under construction, coupled with a greedy management of the conflicts which efficiently improves the configuration at each step. We show that this approach provides reliable reconstructed 3D models by presenting some results on industrial data sets.
Designing and Testing a UAV Mapping System for Agricultural Field Surveying.
Christiansen, Martin Peter; Laursen, Morten Stigaard; Jørgensen, Rasmus Nyholm; Skovsen, Søren; Gislum, René
2017-11-23
A Light Detection and Ranging (LiDAR) sensor mounted on an Unmanned Aerial Vehicle (UAV) can map the overflown environment in point clouds. Mapped canopy heights allow for the estimation of crop biomass in agriculture. The work presented in this paper contributes to sensory UAV setup design for mapping and textual analysis of agricultural fields. LiDAR data are combined with data from Global Navigation Satellite System (GNSS) and Inertial Measurement Unit (IMU) sensors to conduct environment mapping for point clouds. The proposed method facilitates LiDAR recordings in an experimental winter wheat field. Crop height estimates ranging from 0.35-0.58 m are correlated to the applied nitrogen treatments of 0-300 kg N ha⁻¹. The LiDAR point clouds are recorded, mapped, and analysed using the functionalities of the Robot Operating System (ROS) and the Point Cloud Library (PCL). Crop volume estimation is based on a voxel grid with a spatial resolution of 0.04 × 0.04 × 0.001 m. Two different flight patterns are evaluated at an altitude of 6 m to determine the impacts of the mapped LiDAR measurements on crop volume estimations.
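A minimal sketch of the voxel-grid volume idea follows, assuming an (N, 3) NumPy array of mapped LiDAR returns in metres and the 0.04 × 0.04 × 0.001 m resolution quoted above; it is not the authors' ROS/PCL pipeline, and the toy cloud is only there to exercise the mechanics.

```python
import numpy as np

def crop_volume(points, ground_z=0.0, voxel=(0.04, 0.04, 0.001)):
    """Estimate canopy volume as (number of occupied voxels) x (voxel volume).

    points   : (N, 3) array of x, y, z in metres, already mapped to a field frame.
    ground_z : assumed soil level; only returns above it are counted.
    """
    pts = points[points[:, 2] > ground_z]             # keep returns above the soil
    idx = np.floor(pts / np.asarray(voxel)).astype(np.int64)
    occupied = np.unique(idx, axis=0).shape[0]        # occupied voxel count
    return occupied * voxel[0] * voxel[1] * voxel[2]  # volume in m^3

# Toy cloud: random returns over a 1 m x 1 m patch up to 0.5 m height.
# (Sparse random points only exercise the mechanics; they are not a real canopy.)
rng = np.random.default_rng(0)
cloud = rng.uniform([0, 0, 0], [1, 1, 0.5], size=(20000, 3))
print("estimated volume: %.3f m^3" % crop_volume(cloud))
```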
Pairwise registration of TLS point clouds using covariance descriptors and a non-cooperative game
NASA Astrophysics Data System (ADS)
Zai, Dawei; Li, Jonathan; Guo, Yulan; Cheng, Ming; Huang, Pengdi; Cao, Xiaofei; Wang, Cheng
2017-12-01
It is challenging to automatically register TLS point clouds with noise, outliers and varying overlap. In this paper, we propose a new method for pairwise registration of TLS point clouds. We first generate covariance matrix descriptors with an adaptive neighborhood size from the point clouds to find candidate correspondences; we then construct a non-cooperative game to isolate mutually compatible correspondences, which are considered as true positives. The method was tested on three models acquired by two different TLS systems. Experimental results demonstrate that our proposed adaptive covariance (ACOV) descriptor is invariant to rigid transformation and robust to noise and varying resolutions. The average registration errors achieved on the three models are 0.46 cm, 0.32 cm and 1.73 cm, respectively. The computation times on these models are about 288 s, 184 s and 903 s, respectively. Moreover, our registration framework using ACOV descriptors and a game theoretic method is superior to the state-of-the-art methods in terms of both registration error and computational time. The experiment on a large outdoor scene further demonstrates the feasibility and effectiveness of our proposed pairwise registration framework.
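To make the descriptor idea concrete, here is a simplified covariance-descriptor sketch with a fixed neighbourhood size (rather than the adaptive size used for ACOV) and a log-Euclidean distance between descriptors; it is an illustration under those assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def covariance_descriptor(cloud, index, k=30):
    """Covariance matrix of the k-nearest-neighbour patch around cloud[index]."""
    _, nbr = cKDTree(cloud).query(cloud[index], k=k)
    patch = cloud[nbr] - cloud[nbr].mean(axis=0)
    return patch.T @ patch / (k - 1) + 1e-9 * np.eye(3)   # regularised SPD matrix

def spd_log(c):
    """Matrix logarithm of a symmetric positive definite matrix via eigendecomposition."""
    w, v = np.linalg.eigh(c)
    return v @ np.diag(np.log(w)) @ v.T

def log_euclidean_distance(c1, c2):
    """Log-Euclidean distance between two covariance descriptors."""
    return np.linalg.norm(spd_log(c1) - spd_log(c2))

# Two near-identical synthetic scans: descriptors of matching points stay close.
rng = np.random.default_rng(1)
scan_a = rng.normal(size=(2000, 3))
scan_b = scan_a + 0.01 * rng.normal(size=(2000, 3))
d = log_euclidean_distance(covariance_descriptor(scan_a, 0),
                           covariance_descriptor(scan_b, 0))
print("descriptor distance: %.4f" % d)
```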
The computation of all plane/surface intersections for CAD/CAM applications
NASA Technical Reports Server (NTRS)
Hoitsma, D. H., Jr.; Roche, M.
1984-01-01
The problem of the computation and display of all intersections of a given plane with a rational bicubic surface patch for use on an interactive CAD/CAM system is examined. The general problem of calculating all intersections of a plane and a surface consisting of rational bicubic patches is reduced to the case of a single generic patch by applying a rejection algorithm which excludes all patches that do not intersect the plane. For each pertinent patch, the algorithm presented computes the intersection curves by locating an initial point on each curve and computing successive points on the curve using a tolerance step equation. A single cubic equation solver is used to compute the initial curve points lying on the boundary of a surface patch, and the method of resultants as applied to curve theory is used to determine critical points which, in turn, are used to locate initial points that lie on intersection curves in the interior of the patch. Examples are given to illustrate the ability of this algorithm to produce all intersection curves.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oakley, P. H. H.; Cash, W.
2009-08-01
The orbital light curve of a terrestrial exoplanet will likely contain valuable information about the surface and atmospheric features of the planet, both in its overall shape and hourly variations. We have constructed an empirically based code capable of simulating observations of the Earth from any orientation, at any time of year, with continuously updated cloud and snow coverage with a New Worlds Observatory. By simulating these observations over a full orbital revolution at a distance of 10 pc we determine that the detection of an obliquity or seasonal terrain change is possible at low inclinations. In agreement with other studies, a 4 m New Worlds Observer can accurately determine the rotation rate of the planet at a success rate from ~30% to 80% with only 5 days of observations, depending on the signal to noise of the observations. We also attempt simple inversions of these diurnal light curves to sketch a map of the reflecting planet's surface features. This mapping technique is only successful with highly favorable systems and in particular requires that the cloud coverage be lower than the Earth's average. Our test case of a 2 M⊕ planet at 7 pc distance with low exo-zodiacal light and 25% cloud coverage produced crude, but successful results. Additionally, with these highly favorable systems NWO may be able to discern the presence of liquid surface water (or other smooth surfaces), though it requires a complex detection available only at crescent phases in high inclination systems.
NASA Astrophysics Data System (ADS)
Faherty, Jacqueline; Cruz, Kelle; Rice, Emily; Gagne, Jonathan; Marley, Mark; Gizis, John
2018-05-01
Emerging as an important insight into cool-temperature atmospheric physics is evidence for a correlation between enhanced clouds and youth. With this Spitzer Cycle 14 large GO program, we propose to obtain qualifying evidence for this hypothesis using an age calibrated sample of brown dwarf-exoplanet analogs recently discovered and characterized by team members. Using Spitzer's unparalleled ability to conduct uninterrupted, high-cadence observations over numerous hours, we will examine the periodic brightness variations at 3.5 microns, where clouds are thought to be most disruptive to emergent flux. Compared to older sources, theory predicts that younger or lower-surface gravity objects will have cooler brightness temperatures at 3.5 microns and larger peak to peak amplitude variations due to higher altitude, more turbulent clouds. Therefore we propose to obtain light curves for 26 sources that span L3-L8 spectral types (Teff 2500-1700 K), 20-130 Myr ages, and predicted 8-30 MJup masses. Compared to the variability trends and statistics of field (3-5 Gyr) equivalents currently being monitored by Spitzer, we will have unequivocal evidence for (or against) the turbulent atmospheric nature of younger sources. Coupling this Spitzer dataset with the multitude of spectral information we have on each source, the light curves obtained through this proposal will form the definitive library of data for investigating atmosphere dynamics (rotation rates, winds, storms, changing cloud structures) in young giant exoplanets and brown dwarfs.
Normalized vertical ice mass flux profiles from vertically pointing 8-mm-wavelength Doppler radar
NASA Technical Reports Server (NTRS)
Orr, Brad W.; Kropfli, Robert A.
1993-01-01
During the FIRE 2 (First International Satellite Cloud Climatology Project Regional Experiment) project, NOAA's Wave Propagation Laboratory (WPL) operated its 8-mm wavelength Doppler radar extensively in the vertically pointing mode. This allowed for the calculation of a number of important cirrus cloud parameters, including cloud boundary statistics, cloud particle characteristic sizes and concentrations, and ice mass content (imc). The flux of imc, or, alternatively, ice mass flux (imf), is also an important parameter of a cirrus cloud system. Ice mass flux is important in the vertical redistribution of water substance and thus, in part, determines the cloud evolution. It is important for the development of cloud parameterizations to be able to define the essential physical characteristics of large populations of clouds in the simplest possible way. One method would be to normalize profiles of observed cloud properties, such as those mentioned above, in ways similar to those used in the convective boundary layer. The height then scales from 0.0 at cloud base to 1.0 at cloud top, and the measured cloud parameter scales by its maximum value so that all normalized profiles have 1.0 as their maximum value. The goal is that there will be a 'universal' shape to profiles of the normalized data. This idea was applied to estimates of imf calculated from data obtained by the WPL cloud radar during FIRE II. Other quantities such as median particle diameter, concentration, and ice mass content can also be estimated with this radar, and we expect to also examine normalized profiles of these quantities in time for the 1993 FIRE II meeting.
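A minimal sketch of the normalisation described above (height rescaled from 0 at cloud base to 1 at cloud top, and the measured quantity rescaled by its maximum); the profile and cloud boundaries are invented.

```python
import numpy as np

def normalize_profile(height, value, cloud_base, cloud_top):
    """Return (normalized height, normalized value) for one in-cloud profile.

    height, value : 1-D arrays of range-gate altitude and, e.g., ice mass flux.
    cloud_base, cloud_top : cloud boundaries, in the same units as height.
    """
    inside = (height >= cloud_base) & (height <= cloud_top)
    h = (height[inside] - cloud_base) / (cloud_top - cloud_base)   # 0 at base, 1 at top
    v = value[inside] / np.nanmax(value[inside])                   # 1 at the maximum
    return h, v

# Toy profile: a flux maximum in the middle of a 7-9 km cloud layer.
z = np.arange(6.0, 10.0, 0.05)
imf = np.exp(-((z - 8.0) / 0.6) ** 2)
h_norm, f_norm = normalize_profile(z, imf, cloud_base=7.0, cloud_top=9.0)
print(h_norm[f_norm.argmax()])   # normalized height of the flux maximum
```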
Ultraviolet light curves of beta Lyrae: Comparison of OAO A-2, IUE, and Voyager Observations
NASA Technical Reports Server (NTRS)
Kondo, Yoji; Mccluskey, George E.; Silvis, Jeffery M. S.; Polidan, Ronald S.; Mccluskey, Carolina P. S.; Eaton, Joel A.
1994-01-01
The six-band ultraviolet light curves of beta Lyrae obtained with the Orbiting Astronomical Observatory (OAO) A-2 in 1970 exhibited a very unusual behavior. The secondary minimum deepened at shorter wavelength, indicating that one was not observing light variations caused primarily by the eclipses of two stars having a roughly Planckian energy distribution. It was then suggested that the light variations were caused by a viewing angle effect of an optically thick, ellipsoidal circumbinary gas cloud. Since 1978 beta Lyrae has been observed with the International Ultraviolet Explorer (IUE) satellite. We have constructed ultraviolet light curves from the IUE archival data for comparison with the OAO A-2 results. We find that they are in substantial agreement with each other. The Voyager ultraviolet spectrometer was also used to observe this binary during a period covered by IUE observations. The Voyager results agree with those of the two other satellite observatories at wavelengths longer than about 1350 A. However, in the wavelength region shorter than the Lyman-alpha line at 1216 A, the light curves at 1085 and 965 A show virtually no light variation except an apparent flaring near phase 0.7, which is also in evidence at longer wavelengths. We suggest that the optically thick circumbinary gas cloud, which envelops the two stars completely, assumes a roughly spherical shape when observed at these shorter wavelengths.
Cloud top structure of Venus revealed by Subaru/COMICS mid-infrared images
NASA Astrophysics Data System (ADS)
Sato, T. M.; Sagawa, H.; Kouyama, T.; Mitsuyama, K.; Satoh, T.; Ohtsuki, S.; Ueno, M.; Kasaba, Y.; Nakamura, M.; Imamura, T.
2014-11-01
We have investigated the cloud top structure of Venus by analyzing ground-based images taken at the mid-infrared wavelengths of 8.66 μm and 11.34 μm. Venus at a solar phase angle of ∼90°, with the morning terminator in view, was observed by the Cooled Mid-Infrared Camera and Spectrometer (COMICS), mounted on the 8.2-m Subaru Telescope, during the period October 25-29, 2007. The disk-averaged brightness temperatures for the observation period are ∼230 K and ∼238 K at 8.66 μm and 11.34 μm, respectively. The obtained images with good signal-to-noise ratio and with high spatial resolution (∼200 km at the sub-observer point) provide several important findings. First, we present observational evidence, for the first time, of the possibility that the westward rotation of the polar features (the hot polar spots and the surrounding cold collars) is synchronized between the northern and southern hemispheres. Second, after high-pass filtering, the images reveal that streaks and mottled and patchy patterns are distributed over the entire disk, with typical amplitudes of ∼0.5 K, and vary from day to day. The detected features, some of which are similar to those seen in past UV images, result from inhomogeneities of both the temperature and the cloud top altitude. Third, the equatorial center-to-limb variations of brightness temperatures show a systematic day-night asymmetry, except on October 25, in that the dayside brightness temperatures are higher than the nightside brightness temperatures by 0-4 K under the same viewing geometry. Such asymmetry would be caused by the propagation of the migrating semidiurnal tide. Finally, by applying the lapse rates deduced from previous studies, we demonstrate that the equatorial center-to-limb curves in the two spectral channels give access to two parameters: the cloud scale height H and the cloud top altitude zc. The acceptable models for data on October 25 are obtained at H = 2.4-4.3 km and zc = 66-69 km; this supports previous results determined from spacecraft observations.
Chemical Abundances of Metal-poor RR Lyrae Stars in the Magellanic Clouds
NASA Astrophysics Data System (ADS)
Haschke, Raoul; Grebel, Eva K.; Frebel, Anna; Duffau, Sonia; Hansen, Camilla J.; Koch, Andreas
2012-09-01
We present for the first time a detailed spectroscopic study of chemical element abundances of metal-poor RR Lyrae stars in the Large and Small Magellanic Cloud (LMC and SMC). Using the MagE echelle spectrograph at the 6.5 m Magellan telescopes, we obtain medium resolution (R ~ 2000-6000) spectra of six RR Lyrae stars in the LMC and three RR Lyrae stars in the SMC. These stars were chosen because their previously determined photometric metallicities were among the lowest metallicities found for stars belonging to the old populations in the Magellanic Clouds. We find the spectroscopic metallicities of these stars to be as low as [Fe/H]spec = -2.7 dex, the lowest metallicity yet measured for any star in the Magellanic Clouds. We confirm that for metal-poor stars, the photometric metallicities from the Fourier decomposition of the light curves are systematically too high compared to their spectroscopic counterparts. However, for even more metal-poor stars below [Fe/H]phot < -2.8 dex this trend is reversed and the spectroscopic metallicities are systematically higher than the photometric estimates. We are able to determine abundance ratios for 10 chemical elements (Fe, Na, Mg, Al, Ca, Sc, Ti, Cr, Sr, and Ba), which extend the abundance measurements of chemical elements for RR Lyrae stars in the Clouds beyond [Fe/H] for the first time. For the overall [α/Fe] ratio, we obtain an overabundance of 0.36 dex, which is in very good agreement with results from metal-poor stars in the Milky Way halo as well as from the metal-poor tail in dwarf spheroidal galaxies. Comparing the abundances with those of the stars in the Milky Way halo, we find that the abundance ratios of stars of both populations are consistent with one another. Therefore, we conclude that from a chemical point of view early contributions from Magellanic-type galaxies to the formation of the Galactic halo as claimed in cosmological models are plausible. This paper includes data gathered with the 6.5 meter Magellan Telescopes located at Las Campanas Observatory, Chile.
Volume Phase Masks in Photo-Thermo-Refractive Glass
2014-10-06
Only fragmentary full-text excerpts are available for this record; they refer to refractive index change curves for common glass melts exposed to a 325 nm beam, the comparison of an experimental integral curve with the ideal phase-mask curve, and larger error bars on the third point of the spherical curve and the third and fourth points of the coma y curve.
2.5D multi-view gait recognition based on point cloud registration.
Tang, Jin; Luo, Jian; Tjahjadi, Tardi; Gao, Yan
2014-03-28
This paper presents a method for modeling a 2.5-dimensional (2.5D) human body and extracting the gait features for identifying the human subject. To achieve view-invariant gait recognition, a multi-view synthesizing method based on point cloud registration (MVSM) to generate multi-view training galleries is proposed. The concept of a density and curvature-based Color Gait Curvature Image is introduced to map 2.5D data onto a 2D space to enable data dimension reduction by discrete cosine transform and 2D principal component analysis. Gait recognition is achieved via a 2.5D view-invariant gait recognition method based on point cloud registration. Experimental results on the in-house database captured by a Microsoft Kinect camera show a significant performance gain when using MVSM.
The potential of cloud point system as a novel two-phase partitioning system for biotransformation.
Wang, Zhilong
2007-05-01
Although extractive biotransformation in two-phase partitioning systems (such as the water-organic solvent two-phase system, the aqueous two-phase system, the reverse micelle system, and room temperature ionic liquids) has been studied extensively, it has not yet resulted in widespread industrial application. Based on a discussion of the main obstacles, the exploitation of a cloud point system, which has already been applied in the separation field as cloud point extraction, as a novel two-phase partitioning system for biotransformation is reviewed through the analysis of some topical examples. At the end of the review, process control and downstream processing in the application of this novel two-phase partitioning system for biotransformation are also briefly discussed.
Motion data classification on the basis of dynamic time warping with a cloud point distance measure
NASA Astrophysics Data System (ADS)
Switonski, Adam; Josinski, Henryk; Zghidi, Hafedh; Wojciechowski, Konrad
2016-06-01
The paper deals with the problem of classification of model-free motion data. A nearest neighbour classifier based on comparisons performed by the Dynamic Time Warping transform with a cloud point distance measure is proposed. The classification utilizes both specific gait features, reflected by the movements of successive skeleton joints, and anthropometric data. To validate the proposed approach, the human gait identification challenge problem is taken into consideration. The motion capture database containing data of 30 different humans, collected in the Human Motion Laboratory of the Polish-Japanese Academy of Information Technology, is used. The achieved results are satisfactory; the obtained accuracy of human recognition exceeds 90%. What is more, the applied cloud point distance measure does not depend on the calibration process of the motion capture system, which results in reliable validation.
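For illustration, the sketch below implements a generic DTW nearest-neighbour classifier in which the frame-to-frame distance is taken as the summed Euclidean distance between corresponding joints; that distance choice, and the toy gait sequences, are assumptions rather than the authors' cloud point distance measure.

```python
import numpy as np

def frame_distance(a, b):
    """Distance between two poses given as (n_joints, 3) arrays."""
    return np.linalg.norm(a - b, axis=1).sum()

def dtw_distance(seq_a, seq_b):
    """Dynamic Time Warping cost between two pose sequences."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = frame_distance(seq_a[i - 1], seq_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(query, gallery):
    """1-NN classification: gallery is a list of (label, sequence) pairs."""
    return min(gallery, key=lambda item: dtw_distance(query, item[1]))[0]

# Toy gait cycles: two subjects with slightly different joint trajectories.
rng = np.random.default_rng(0)
subject_a = [rng.normal(0.0, 1.0, (20, 3)) for _ in range(30)]
subject_b = [rng.normal(0.5, 1.0, (20, 3)) for _ in range(30)]
gallery = [("A", subject_a), ("B", subject_b)]
probe = [f + rng.normal(0, 0.05, f.shape) for f in subject_a]
print(classify(probe, gallery))   # expected: "A"
```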
Lidars for smoke and dust cloud diagnostics
NASA Astrophysics Data System (ADS)
Fujimura, S. F.; Warren, R. E.; Lutomirski, R. F.
1980-11-01
An algorithm that integrates a time-resolved lidar signature for use in estimating transmittance, extinction coefficient, mass concentration, and CL values generated under battlefield conditions is applied to lidar signatures measured during the DIRT-I tests. Estimates are given for the dependence of the inferred transmittance and extinction coefficient on uncertainties in parameters such as the obscurant backscatter-to-extinction ratio. The enhanced reliability in estimating transmittance through use of a target behind the obscurant cloud is discussed. It is found that the inversion algorithm can produce reliable estimates of smoke or dust transmittance and extinction from all points within the cloud for which a resolvable signal can be detected, and that a single point calibration measurement can convert the extinction values to mass concentration for each resolvable signal point.
NASA Astrophysics Data System (ADS)
Antova, Gergana; Kunchev, Ivan; Mickrenska-Cherneva, Christina
2016-10-01
The representation of physical buildings in Building Information Models (BIM) has been a subject of research for four decades in the fields of Construction Informatics and GeoInformatics. The early digital representations of buildings mainly appeared as 3D drawings constructed with CAD software, and the 3D representation of the buildings was only geometric, while semantics and topology were out of the modelling focus. On the other hand, less detailed building representations, often focused on 'outside' representations, were also found in the form of 2D/2.5D GeoInformation models. Point clouds from 3D laser scanning data give a full and exact representation of the building geometry. The article presents different aspects and the benefits of using point clouds in BIM in the different stages of the lifecycle of a building.
NASA Astrophysics Data System (ADS)
Tahani, M.; Plume, R.; Brown, J. C.; Kainulainen, J.
2018-06-01
Context. Magnetic fields pervade the interstellar medium (ISM) and are believed to be important in the process of star formation, yet probing magnetic fields in star formation regions is challenging. Aims: We propose a new method that uses Faraday rotation measurements in small-scale star forming regions to find the direction and magnitude of the component of the magnetic field along the line of sight. We test the proposed method in four relatively nearby regions of Orion A, Orion B, Perseus, and California. Methods: We use rotation measure data from the literature. We adopt a simple approach based on relative measurements to estimate the rotation measure due to the molecular clouds over the Galactic contribution. We then use a chemical evolution code along with extinction maps of each cloud to find the electron column density of the molecular cloud at the position of each rotation measure data point. Combining the rotation measures produced by the molecular clouds and the electron column density, we calculate the line-of-sight magnetic field strength and direction. Results: In California and Orion A, we find clear evidence that the magnetic fields at one side of these filamentary structures are pointing towards us and are pointing away from us at the other side. Even though the magnetic fields in Perseus might seem to suggest the same behavior, not enough data points are available to draw such a conclusion. In Orion B as well, there are not enough data points available to detect such behavior. This magnetic field reversal is consistent with a helical magnetic field morphology. In the vicinity of available Zeeman measurements in OMC-1, OMC-B, and the dark cloud Barnard 1, we find magnetic field values of - 23 ± 38 μG, - 129 ± 28 μG, and 32 ± 101 μG, respectively, which are in agreement with the Zeeman measurements. Tables 1 to 7 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/614/A100
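As a back-of-the-envelope illustration of the final step (not the authors' pipeline), the standard relation RM = 0.812 ∫ n_e B_∥ dl, with RM in rad m⁻², the electron column in cm⁻³ pc and B in μG, gives a line-of-sight field estimate once the cloud's rotation-measure excess and electron column density are known; the numbers below are invented.

```python
def line_of_sight_field(rm_cloud, rm_err, ne_column, ne_column_err):
    """B_parallel in microgauss from a cloud rotation-measure excess.

    rm_cloud  : cloud contribution to the rotation measure [rad m^-2]
    ne_column : electron column density of the cloud [cm^-3 pc]
    Assumes a uniform field along the line of sight, so that
    RM = 0.812 * ne_column * B_parallel, i.e. B = RM / (0.812 * N_e).
    """
    b = rm_cloud / (0.812 * ne_column)
    # simple propagation of the two fractional uncertainties
    b_err = abs(b) * ((rm_err / rm_cloud) ** 2 +
                      (ne_column_err / ne_column) ** 2) ** 0.5
    return b, b_err

# Invented example values for one rotation-measure data point.
b, db = line_of_sight_field(rm_cloud=-50.0, rm_err=10.0,
                            ne_column=2.5, ne_column_err=0.5)
print("B_parallel = %.0f +/- %.0f uG" % (b, db))
```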
NASA Technical Reports Server (NTRS)
Ni, Wenjian; Ranson, Kenneth Jon; Zhang, Zhiyu; Sun, Guoqing
2014-01-01
LiDAR waveform data from airborne LiDAR scanners (ALS), e.g. the Land Vegetation and Ice Sensor (LVIS), have been successfully used for estimation of forest height and biomass at local scales and have become the preferred remote sensing dataset. However, regional and global applications are limited by the cost of the airborne LiDAR data acquisition and there are no available spaceborne LiDAR systems. Some researchers have demonstrated the potential for mapping forest height using aerial or spaceborne stereo imagery with very high spatial resolutions. For stereo images with global coverage but coarse resolution, new analysis methods need to be used. Unlike most research based on digital surface models, this study concentrated on analyzing the features of point cloud data generated from stereo imagery. The synthesizing of point cloud data from multi-view stereo imagery increased the point density of the data. The point cloud data over forested areas were analyzed and compared to small-footprint LiDAR data and large-footprint LiDAR waveform data. The results showed that the synthesized point cloud data from ALOS PRISM triplets produce vertical distributions similar to LiDAR data and detected the vertical structure of sparse and non-closed forests at 30 m resolution. For dense forest canopies, the canopy could be captured but the ground surface could not be seen, so surface elevations from other sources would be needed to calculate the height of the canopy. A canopy height map with 30 m pixels was produced by subtracting the national elevation dataset (NED) from the averaged elevation of the synthesized point clouds, which exhibited spatial features of roads, forest edges and patches. The linear regression showed that the canopy height map had a good correlation with RH50 of LVIS data, with a slope of 1.04 and R² of 0.74, indicating that the canopy height derived from PRISM triplets can be used to estimate forest biomass at 30 m resolution.
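A simplified sketch of the final gridding step (mean point-cloud elevation per 30 m cell minus a ground DEM) is given below; the arrays, grid extent and toy numbers are assumptions, not the study's data.

```python
import numpy as np

def canopy_height_map(points, dem, origin, cell=30.0):
    """Mean point elevation per grid cell minus the ground elevation (DEM).

    points : (N, 3) array of x, y, z from the stereo-derived point cloud.
    dem    : 2-D ground elevation grid aligned with the same origin and cell size.
    origin : (x0, y0) of the lower-left corner of the grid.
    """
    rows, cols = dem.shape
    ix = ((points[:, 0] - origin[0]) // cell).astype(int)
    iy = ((points[:, 1] - origin[1]) // cell).astype(int)
    keep = (ix >= 0) & (ix < cols) & (iy >= 0) & (iy < rows)
    ix, iy, z = ix[keep], iy[keep], points[keep, 2]

    flat = iy * cols + ix                              # flattened cell index
    sums = np.bincount(flat, weights=z, minlength=rows * cols)
    counts = np.bincount(flat, minlength=rows * cols)
    mean_z = np.divide(sums, counts, out=np.full(rows * cols, np.nan),
                       where=counts > 0).reshape(rows, cols)
    return mean_z - dem                                # canopy height per cell

# Toy example: 5 x 5 grid, flat ground at 100 m, canopy points around 115 m.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(0, 150, 5000), rng.uniform(0, 150, 5000),
                       rng.normal(115, 2, 5000)])
print(np.nanmean(canopy_height_map(pts, np.full((5, 5), 100.0), (0.0, 0.0))))
```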
Multibeam 3D Underwater SLAM with Probabilistic Registration.
Palomer, Albert; Ridao, Pere; Ribas, David
2016-04-20
This paper describes a pose-based underwater 3D Simultaneous Localization and Mapping (SLAM) using a multibeam echosounder to produce high consistency underwater maps. The proposed algorithm compounds swath profiles of the seafloor with dead reckoning localization to build surface patches (i.e., point clouds). An Iterative Closest Point (ICP) with a probabilistic implementation is then used to register the point clouds, taking into account their uncertainties. The registration process is divided in two steps: (1) point-to-point association for coarse registration and (2) point-to-plane association for fine registration. The point clouds of the surfaces to be registered are sub-sampled in order to decrease both the computation time and also the potential of falling into local minima during the registration. In addition, a heuristic is used to decrease the complexity of the association step of the ICP from O(n²) to O(n). The performance of the SLAM framework is tested using two real world datasets: First, a 2.5D bathymetric dataset obtained with the usual down-looking multibeam sonar configuration, and second, a full 3D underwater dataset acquired with a multibeam sonar mounted on a pan and tilt unit.
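To illustrate the fine-registration stage, below is a generic, single-iteration point-to-plane ICP step (small-angle linearisation solved by least squares); it omits the probabilistic weighting, the coarse point-to-point stage and the O(n) association heuristic of the paper, and the toy surface is invented.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(cloud, k=12):
    """Normal per point from the smallest singular vector of the local patch."""
    _, nbrs = cKDTree(cloud).query(cloud, k=k)
    normals = np.empty_like(cloud)
    for i, idx in enumerate(nbrs):
        patch = cloud[idx] - cloud[idx].mean(axis=0)
        _, _, vt = np.linalg.svd(patch, full_matrices=False)
        normals[i] = vt[-1]
    return normals

def point_to_plane_icp_step(source, target, target_normals):
    """One linearised point-to-plane ICP update; returns a 4x4 transform."""
    _, nn = cKDTree(target).query(source)            # nearest-neighbour association
    q, n = target[nn], target_normals[nn]
    A = np.hstack([np.cross(source, n), n])          # rows: [ p x n , n ]
    b = -np.einsum("ij,ij->i", source - q, n)        # -(p - q) . n
    x, *_ = np.linalg.lstsq(A, b, rcond=None)        # [alpha, beta, gamma, t]
    ax, ay, az = x[:3]
    R = np.array([[1, -az, ay], [az, 1, -ax], [-ay, ax, 1.0]])  # small-angle rotation
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, x[3:]
    return T

# Toy check: a curved "seafloor" patch and a copy shifted by a small known offset.
rng = np.random.default_rng(0)
x, y = rng.uniform(-1, 1, (2, 4000))
tgt = np.column_stack([x, y, 0.3 * np.sin(2 * x) + 0.3 * np.cos(2 * y)])
src = tgt + np.array([0.01, -0.01, 0.02])
T = point_to_plane_icp_step(src, tgt, estimate_normals(tgt))
print(np.round(T[:3, 3], 3))   # roughly the negative of the applied offset (one iteration)
```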
Osada, Edward; Sośnica, Krzysztof; Borkowski, Andrzej; Owczarek-Wesołowska, Magdalena; Gromczak, Anna
2017-06-24
Terrestrial laser scanning is an efficient technique in providing highly accurate point clouds for various geoscience applications. The point clouds have to be transformed to a well-defined reference frame, such as the global Geodetic Reference System 1980. The transformation to the geocentric coordinate frame is based on estimating seven Helmert parameters using several GNSS (Global Navigation Satellite System) referencing points. This paper proposes a method for direct point cloud georeferencing that provides coordinates in the geocentric frame. The proposed method employs the vertical deflection from an external global Earth gravity model and thus demands a minimum number of GNSS measurements. The proposed method can be helpful when the number of georeferencing GNSS points is limited, for instance in city corridors. It needs only two georeferencing points. The validation of the method in a field test reveals that the differences between the classical georefencing and the proposed method amount at maximum to 7 mm with the standard deviation of 8 mm for all of three coordinate components. The proposed method may serve as an alternative for the laser scanning data georeferencing, especially when the number of GNSS points is insufficient for classical methods.
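For context, the seven-parameter (Helmert) similarity transformation used in such georeferencing maps local scanner coordinates x to geocentric coordinates X via X = c + (1 + s)Rx; the sketch below merely applies it with invented parameters, since estimating them (and the vertical-deflection correction) from GNSS ties is the substantive part of the paper and is not shown.

```python
import numpy as np

def rotation_matrix(rx, ry, rz):
    """Rotation from three Euler angles in radians (Rz @ Ry @ Rx)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def helmert(points, translation, angles, scale_ppm):
    """Apply X = c + (1 + s) * R * x to an (N, 3) point cloud."""
    R = rotation_matrix(*angles)
    return translation + (1.0 + scale_ppm * 1e-6) * points @ R.T

# Invented parameters: a scan-local cloud shifted into a geocentric-like frame.
local = np.random.default_rng(0).uniform(-50, 50, (1000, 3))
geocentric = helmert(local,
                     translation=np.array([3835000.0, 1175000.0, 4935000.0]),
                     angles=np.radians([0.001, -0.002, 0.0015]),
                     scale_ppm=2.0)
print(geocentric[0])
```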
HORSESHOE CURVE IN GLACIER POINT ROAD NEAR GLACIER POINT. HALF ...
HORSESHOE CURVE IN GLACIER POINT ROAD NEAR GLACIER POINT. HALF DOME AT CENTER REAR. SAME VIEW AT CA-157-2. LOOKING NNE. GIS: N-37' 43 44.3 / W-119 34 14.1 - Glacier Point Road, Between Chinquapin Flat & Glacier Point, Yosemite Village, Mariposa County, CA
NASA Astrophysics Data System (ADS)
Schlesinger, Robert E.
1988-05-01
An anelastic three-dimensional model is used to investigate the effects of the stratospheric temperature lapse rate on cloud top height/temperature structure for strongly sheared, mature, isolated midlatitude thunderstorms. Three comparative experiments are performed, differing only with respect to the stratospheric stability. The assumed stratospheric lapse rate is 0 K km⁻¹ (isothermal) in the first experiment, 3 K km⁻¹ in the second, and -3 K km⁻¹ (inversion) in the third. Kinematic storm structure is very similar in all three cases, especially in the troposphere. A strong quasi-steady updraft evolves, splitting into a dominant cyclonic overshooting right-mover and a weaker anticyclonic left-mover that does not reach the tropopause. The strongest downdrafts occur at low to middle levels between the updrafts, and in the lower stratosphere a few kilometers upshear and downshear of the tapering updraft summit. Each storm shows a cloud-top thermal couplet, relatively cold near and upshear of the summit, and with a 'close-in' warm region downshear. Both cold and warm regions become warmer, with significant morphological changes and a lowering of the cloud summit, as stratospheric stability is increased, though the temperature spread is not greatly affected. The coldest and highest cloud-top points are nearly colocated in the absence of a stratospheric inversion, but the coldest point is offset well upshear of the summit when an inversion is present. The cold region as a whole in each case shows at least a transient 'V' shape, with the arms pointing downshear, although this shape is persistent only with the inversion. In the experiment with a 3 K km⁻¹ stratospheric lapse rate (weakest stability), the warm region is small and separates into two spots with secondary cold spots downshear of them. The warm region becomes larger, and remains single, as stratospheric stability increases. In each run, the warm regions are not accompanied by corresponding cloud-top height minima except very briefly. The cold cloud-top points are near or slightly downwind of relative vertical velocity maxima, usually positive, while the warm points are embedded in subsidence downwind of the principal cloud-top downdraft core. The storm-relative cloud-top horizontal wind fields are consistent with the 'V' shape of the cold region, showing strong diffluent flow directed downshear along the flanks from an upshear stagnation zone.
Multiseasonal Tree Crown Structure Mapping with Point Clouds from OTS Quadrocopter Systems
NASA Astrophysics Data System (ADS)
Hese, S.; Behrendt, F.
2017-08-01
OTS (Off The Shelf) quadrocopter systems provide a cost effective (below 2000 Euro), flexible and mobile platform for high resolution point cloud mapping. Various studies have shown the full potential of these small and flexible platforms. Especially in very tight and complex 3D environments, the automatic obstacle avoidance, low copter weight, long flight times and precise maneuvering are important advantages of these small OTS systems in comparison with larger octocopter systems. This study examines the potential of the DJI Phantom 4 Pro series and the Phantom 3A series for within-stand and forest tree crown 3D point cloud mapping, using both within-stand oblique imaging at different altitude levels and data captured from a nadir perspective. On a test site in Brandenburg/Germany, a beech crown was selected and measured at 3 different altitude levels in Point Of Interest (POI) mode with oblique data capturing, and one nadir mosaic was derived with 85/85 % overlap using Drone Deploy automatic mapping software. Three different flight campaigns were performed, one in September 2016 (leaf-on), one in March 2017 (leaf-off) and one in May 2017 (leaf-on), to derive point clouds from different crown structure and phenological situations, covering the leaf-on and leaf-off status of the tree crown. After height correction, the point clouds were used with GPS georeferencing to calculate voxel-based densities on 50 × 10 × 10 cm voxel definitions, using a topological network of chessboard image objects in 0.5 m height steps in an object-based image processing environment. Comparison between leaf-off and leaf-on status was done on volume pixel definitions, comparing the attributed point densities per volume and plotting the resulting values as a function of distance to the crown center. In the leaf-off status, SFM (structure from motion) algorithms clearly identified the central stem and also secondary branch systems. While the penetration into the crown structure is limited in the leaf-on status (the point cloud is mainly a description of the interpolated crown surface), the visibility of the internal crown structure in the leaf-off status allows mapping of the internal tree structure up to, and stopping at, the secondary branch level. When combined, the leaf-on and leaf-off point clouds generate a comprehensive tree crown structure description that allows a low cost and detailed 3D crown structure mapping and potentially precise biomass mapping and/or internal structural differentiation of deciduous tree species types. Compared to TLS (Terrestrial Laser Scanning) based measurements, the costs are negligible and in the range of 1500-2500 €. This suggests the approach for low cost but fine scale in-situ applications and/or projects where TLS measurements cannot be derived, and for less dense forest stands where POI flights can be performed. This study used the in-copter GPS measurements for georeferencing. Better absolute georeferencing results will be obtained with DGPS reference points. The study, however, clearly demonstrates the potential of OTS very low cost copter systems and the image-attributed GPS measurements of the copter for the automatic calculation of complex 3D point clouds in a multi-temporal tree crown mapping context.
3D Central Line Extraction of Fossil Oyster Shells
NASA Astrophysics Data System (ADS)
Djuricic, A.; Puttonen, E.; Harzhauser, M.; Mandic, O.; Székely, B.; Pfeifer, N.
2016-06-01
Photogrammetry provides a powerful tool to digitally document protected, inaccessible, and rare fossils. This saves manpower in relation to current documentation practice and makes the fragile specimens more available for paleontological analysis and public education. In this study, high resolution orthophotos (0.5 mm) and digital surface models (1 mm) are used to define fossil boundaries that are then used as an input to automatically extract fossil length information via central lines. In general, central lines are widely used in geosciences as they ease observation, monitoring and evaluation of object dimensions. Here, the 3D central lines are used in a novel paleontological context to study fossilized oyster shells with photogrammetric and LiDAR-obtained 3D point cloud data. 3D central lines of 1121 Crassostrea gryphoides oysters of various shapes and sizes were computed in the study. Central line calculation included: i) Delaunay triangulation between the fossil shell boundary points and formation of the Voronoi diagram; ii) extraction of Voronoi vertices and construction of a connected graph tree from them; iii) reduction of the graph to the longest possible central line via Dijkstra's algorithm; iv) extension of the longest central line to the shell boundary and smoothing by an adjustment of a cubic spline curve; and v) integration of the central line into the corresponding 3D point cloud. The resulting longest path estimate for the 3D central line is a size parameter that can be applied in oyster shell age determination in both paleontological and biological applications. Our investigation evaluates the ability and performance of the central line method to measure shell sizes accurately by comparing automatically extracted central lines with manually collected reference data used in paleontological analysis. Our results show that the automatically obtained central line length overestimated the manually collected reference by 1.5% in the test set, which is deemed sufficient for the selected paleontological application, namely shell age determination.
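A much-simplified 2D sketch of steps i)-iii) follows: Voronoi vertices of the boundary points, a graph over the interior ridges, and the longest shortest path found with Dijkstra's algorithm. It relies on SciPy, NetworkX and Matplotlib conveniences, skips the spline smoothing and the lift back into the 3D point cloud, and uses an ellipse as a stand-in for a digitised shell outline.

```python
import numpy as np
import networkx as nx
from scipy.spatial import Voronoi
from matplotlib.path import Path

def central_line(boundary):
    """Approximate 2D central line of a closed outline given as an (N, 2) array."""
    vor = Voronoi(boundary)
    inside = Path(boundary).contains_points(vor.vertices)   # keep interior vertices only

    graph = nx.Graph()
    for a, b in vor.ridge_vertices:
        if a >= 0 and b >= 0 and inside[a] and inside[b]:
            w = np.linalg.norm(vor.vertices[a] - vor.vertices[b])
            graph.add_edge(a, b, weight=w)

    # Longest shortest path ("diameter") found with two Dijkstra sweeps.
    comp = graph.subgraph(max(nx.connected_components(graph), key=len))
    start = next(iter(comp.nodes))
    far1 = max(nx.single_source_dijkstra_path_length(comp, start, weight="weight").items(),
               key=lambda kv: kv[1])[0]
    lengths = nx.single_source_dijkstra_path_length(comp, far1, weight="weight")
    far2 = max(lengths.items(), key=lambda kv: kv[1])[0]
    path = nx.dijkstra_path(comp, far1, far2, weight="weight")
    return vor.vertices[path], lengths[far2]        # central-line vertices and length

# Toy outline: an elongated ellipse standing in for a digitised shell boundary.
t = np.linspace(0, 2 * np.pi, 300, endpoint=False)
outline = np.column_stack([10 * np.cos(t), 2 * np.sin(t)])
line, length = central_line(outline)
print("central line length: %.2f (major axis is 20)" % length)
```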
Automatic Extraction of Road Markings from Mobile Laser Scanning Data
NASA Astrophysics Data System (ADS)
Ma, H.; Pei, Z.; Wei, Z.; Zhong, R.
2017-09-01
Road markings, as critical features in the high-definition maps required by Advanced Driver Assistance Systems (ADAS) and self-driving technology, have important functions in providing guidance and information to moving cars. A mobile laser scanning (MLS) system is an effective way to obtain the 3D information of the road surface, including road markings, at highway speeds and at less than traditional survey costs. This paper presents a novel method to automatically extract road markings from MLS point clouds. Ground points are first filtered from the raw input point clouds using a neighborhood elevation consistency method. The basic assumption of the method is that the road surface is smooth: points with small elevation differences from their neighborhood are considered to be ground points. The ground points are then partitioned into a set of profiles according to the trajectory data. The intensity histogram of the points in each profile is generated to find intensity jumps, using a threshold that varies inversely with the laser distance. The separated points are used as seed points for intensity-based region growing so as to obtain complete road markings. We use a point cloud template-matching method to refine the road marking candidates by removing noise clusters with low correlation coefficients. In an experiment with an MLS point set covering about 2 kilometres of a city center, our method provides a promising solution to road marking extraction from MLS data.
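The sketch below illustrates the intensity part of the pipeline in a generic way: bright seed points taken from an intensity percentile, then intensity-constrained region growing over a k-d tree. The ground filtering, trajectory-based profile partitioning, distance-dependent thresholds and template matching are omitted, and all thresholds and the toy scene are invented.

```python
import numpy as np
from scipy.spatial import cKDTree
from collections import deque

def extract_markings(xyz, intensity, seed_percentile=98, grow_tol=0.15, radius=0.10):
    """Return indices of candidate road-marking points among ground points.

    xyz       : (N, 3) ground points (already filtered and roughly planar).
    intensity : (N,) LiDAR return intensity.
    Seeds are the brightest points (above a global percentile here); region
    growing adds neighbours whose intensity stays within grow_tol of the seed.
    """
    seeds = np.flatnonzero(intensity > np.percentile(intensity, seed_percentile))
    tree = cKDTree(xyz)
    marked = np.zeros(len(xyz), dtype=bool)
    for s in seeds:
        if marked[s]:
            continue
        queue, marked[s] = deque([s]), True
        while queue:
            p = queue.popleft()
            for q in tree.query_ball_point(xyz[p], r=radius):
                if not marked[q] and abs(intensity[q] - intensity[s]) <= grow_tol * intensity[s]:
                    marked[q] = True
                    queue.append(q)
    return np.flatnonzero(marked)

# Toy scene: dark asphalt with one bright painted stripe across the road.
rng = np.random.default_rng(0)
pts = rng.uniform([0, 0, 0], [10, 4, 0.02], (20000, 3))
inten = rng.normal(20, 3, len(pts))
stripe = (pts[:, 1] > 1.9) & (pts[:, 1] < 2.1)
inten[stripe] = rng.normal(80, 5, stripe.sum())
idx = extract_markings(pts, inten)
print("precision %.2f, recall %.2f" % (stripe[idx].mean(),
                                       np.isin(np.flatnonzero(stripe), idx).mean()))
```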
Saba, Luca; Banchhor, Sumit K; Suri, Harman S; Londhe, Narendra D; Araki, Tadashi; Ikeda, Nobutaka; Viskovic, Klaudija; Shafique, Shoaib; Laird, John R; Gupta, Ajay; Nicolaides, Andrew; Suri, Jasjit S
2016-08-01
This study presents AtheroCloud™ - a novel cloud-based smart carotid intima-media thickness (cIMT) measurement tool using B-mode ultrasound for stroke/cardiovascular risk assessment and its stratification. This is an anytime-anywhere clinical tool for routine screening and multi-center clinical trials. In this pilot study, the physician can upload ultrasound scans in one of the following formats (DICOM, JPEG, BMP, PNG, GIF or TIFF) directly into the proprietary cloud of AtheroPoint from the local server of the physician's office. They can then run the intelligent and automated AtheroCloud™ cIMT measurements in point-of-care settings in less than five seconds per image, while saving the vascular reports in the cloud. We statistically benchmark AtheroCloud™ cIMT readings against sonographer (a registered vascular technologist) readings and manual measurements derived from the tracings of the radiologist. One hundred patients (75 M/25 F, mean age: 68±11 years; IRB approved, Toho University, Japan) provided left/right common carotid artery (CCA) scans (200 ultrasound scans; Toshiba, Tokyo, Japan), collected using a 7.5 MHz transducer. The measured cIMTs for the L/R carotid were as follows (in mm): (i) AtheroCloud™ (0.87±0.20, 0.77±0.20); (ii) sonographer (0.97±0.26, 0.89±0.29) and (iii) manual (0.90±0.20, 0.79±0.20), respectively. The coefficient of correlation (CC) between sonographer and manual for L/R cIMT was 0.74 (P<0.0001) and 0.65 (P<0.0001), while that between AtheroCloud™ and manual was 0.96 (P<0.0001) and 0.97 (P<0.0001), respectively. We observed that 91.15% of the population in AtheroCloud™ had a mean cIMT error less than 0.11 mm, compared to the sonographer's 68.31%. The area under the curve for receiver operating characteristics was 0.99 for AtheroCloud™ against 0.81 for the sonographer. Our Framingham Risk Score stratified the population into three bins as follows: 39% in the low-risk, 70.66% in the medium-risk and 10.66% in the high-risk bins. Statistical tests were performed to demonstrate the consistency, reliability and accuracy of the results. The proposed AtheroCloud™ system is a completely reliable, automated, fast (3-5 seconds depending upon the image size, with an internet speed of 180 Mbps), accurate, and intelligent web-based clinical tool for multi-center clinical trials and routine telemedicine clinical care.
HNSciCloud - Overview and technical Challenges
NASA Astrophysics Data System (ADS)
Gasthuber, Martin; Meinhard, Helge; Jones, Robert
2017-10-01
HEP is only one of many sciences with sharply increasing compute requirements that cannot be met by profiting from Moore's law alone. Commercial clouds potentially allow for realising larger economies of scale. While some small-scale experience requiring dedicated effort has been collected, public cloud resources have not yet been integrated with the standard workflows of science organisations in their private data centres; in addition, European science has not yet ramped up to significant scale. The HELIX NEBULA Science Cloud project - HNSciCloud, partly funded by the European Commission, addresses these points. Ten organisations under CERN's leadership, covering particle physics, bioinformatics, photon science and other sciences, have joined to procure public cloud resources as well as dedicated development efforts towards this integration. The HNSciCloud project faces the challenge of accelerating developments performed by the selected commercial providers. In order to guarantee cost-efficient usage of IaaS resources across a wide range of scientific communities, the technical requirements had to be carefully constructed. With respect to current IaaS offerings, data-intensive science is the biggest challenge; other points that need to be addressed concern identity federations, network connectivity and how to match the business practices of large IaaS providers with those of public research organisations. The first section of this paper gives an overview of the project and explains the findings so far. The last section explains the key points of the technical requirements and presents first results of the procurers' experience with the services in comparison to their 'on-premise' infrastructure.
3D change detection at street level using mobile laser scanning point clouds and terrestrial images
NASA Astrophysics Data System (ADS)
Qin, Rongjun; Gruen, Armin
2014-04-01
Automatic change detection and geo-database updating in the urban environment are difficult tasks. There has been much research on detecting changes with satellite and aerial images, but studies have rarely been performed at the street level, which is complex in its 3D geometry. Contemporary geo-databases include 3D street-level objects, which demand frequent data updating. Terrestrial images provide rich texture information for change detection, but change detection with terrestrial images from different epochs sometimes faces problems with illumination changes, perspective distortions and unreliable 3D geometry caused by the limited performance of automatic image matchers, while mobile laser scanning (MLS) data acquired from different epochs provide accurate 3D geometry for change detection, but are very expensive to acquire periodically. This paper proposes a new method for change detection at street level by using a combination of MLS point clouds and terrestrial images: the accurate but expensive MLS data acquired from an early epoch serves as the reference, and terrestrial images or photogrammetric images captured from an image-based mobile mapping system (MMS) at a later epoch are used to detect the geometrical changes between different epochs. The method will automatically mark the possible changes in each view, which provides a cost-efficient method for frequent data updating. The methodology is divided into several steps. In the first step, the point clouds are recorded by the MLS system and processed, with data cleaned and classified by semi-automatic means. In the second step, terrestrial images or mobile mapping images at a later epoch are taken and registered to the point cloud, and then the point clouds are projected on each image by a weighted window based z-buffering method for view dependent 2D triangulation. In the next step, stereo pairs of the terrestrial images are rectified and re-projected between each other to check the geometrical consistency between point clouds and stereo images. Finally, an over-segmentation based graph cut optimization is carried out, taking into account the color, depth and class information to compute the changed area in the image space. The proposed method is invariant to light changes, robust to small co-registration errors between images and point clouds, and can be applied straightforwardly to 3D polyhedral models. This method can be used for 3D street data updating, city infrastructure management and damage monitoring in complex urban scenes.
NASA Astrophysics Data System (ADS)
Tomljenovic, Ivan; Tiede, Dirk; Blaschke, Thomas
2016-10-01
In the past two decades Object-Based Image Analysis (OBIA) has established itself as an efficient approach for the classification and extraction of information from remote sensing imagery and, increasingly, from non-image sources such as Airborne Laser Scanner (ALS) point clouds. ALS data are represented as a point cloud with recorded multiple returns and intensities. In our work, we combined OBIA with ALS point cloud data in order to identify and extract buildings as 2D polygons representing roof outlines in a top-down mapping approach. We rasterized the ALS data into a height raster for the generation of a Digital Surface Model (DSM) and a derived Digital Elevation Model (DEM). Objects were then generated in conjunction with point statistics from the linked point cloud. Using class modelling methods, we generated the final target class of objects representing buildings. The approach was developed for a test area in Biberach an der Riß (Germany). To demonstrate adaptation-free transferability to another data set, the algorithm was applied "as is" to the ISPRS benchmark data set of Toronto (Canada). The obtained results show high accuracies for the initial study area (thematic accuracies of around 98%, geometric accuracy above 80%). The very high performance within the ISPRS benchmark without any modification of the algorithm and without any adaptation of parameters is particularly noteworthy.
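The rasterization step can be sketched as follows. This is a minimal numpy illustration rather than the authors' workflow: points are assumed to arrive as an N x 3 array of x, y, z coordinates, the cell size is a placeholder, the DSM keeps the highest return per cell, and subtracting a ground model (DEM) would then yield a normalized DSM from which building candidates can be thresholded.

    import numpy as np

    def rasterize_dsm(points, cell=1.0):
        """Grid x, y, z points into a height raster (DSM) using the maximum z per cell."""
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        col = ((x - x.min()) / cell).astype(int)
        row = ((y - y.min()) / cell).astype(int)
        dsm = np.full((row.max() + 1, col.max() + 1), np.nan)
        for r, c, h in zip(row, col, z):
            if np.isnan(dsm[r, c]) or h > dsm[r, c]:
                dsm[r, c] = h
        return dsm

    # A normalized DSM (nDSM = DSM - DEM) leaves only above-ground heights, from
    # which building candidates could be thresholded, e.g. nDSM > 2.5 m (placeholder).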
Automatic Road Sign Inventory Using Mobile Mapping Systems
NASA Astrophysics Data System (ADS)
Soilán, M.; Riveiro, B.; Martínez-Sánchez, J.; Arias, P.
2016-06-01
The periodic inspection of certain infrastructure features plays a key role in road network safety and preservation, and in developing optimal maintenance planning that minimizes the life-cycle cost of the inspected features. Mobile Mapping Systems (MMS) use laser scanner technology to collect dense and precise three-dimensional point clouds that gather both geometric and radiometric information of the road network. Furthermore, time-stamped RGB imagery that is synchronized with the MMS trajectory is also available. In this paper a methodology for the automatic detection and classification of road signs from point cloud and imagery data provided by a LYNX Mobile Mapper System is presented. First, road signs are detected in the point cloud. Subsequently, the inventory is enriched with geometrical and contextual data such as orientation or distance to the trajectory. Finally, semantic content is given to the detected road signs. As the point cloud resolution is insufficient for this task, RGB imagery is used: the 3D points are projected onto the corresponding images and the RGB data within the bounding box defined by the projected points are analysed. The methodology was tested in urban and road environments in Spain, obtaining global recall results greater than 95% and F-scores greater than 90%. In this way, inventory data are obtained in a fast, reliable manner and can be applied to improve maintenance planning of the road network or to feed a Spatial Information System (SIS), making road sign information available for use in a Smart City context.
Comparison of the different approaches to generate holograms from data acquired with a Kinect sensor
NASA Astrophysics Data System (ADS)
Kang, Ji-Hoon; Leportier, Thibault; Ju, Byeong-Kwon; Song, Jin Dong; Lee, Kwang-Hoon; Park, Min-Chul
2017-05-01
Data of real scenes acquired in real time with a Kinect sensor can be processed with different approaches to generate a hologram. 3D models can be generated from a point cloud or a mesh representation. The advantage of the point cloud approach is that the computation process is well established, since it involves only diffraction and propagation of point sources between parallel planes. On the other hand, the mesh representation makes it possible to reduce the number of elements needed to represent the object. Then, even though the computation time for the contribution of a single element increases compared with that of a simple point, the total computation time can be reduced significantly. However, the algorithm is more complex, since propagation of elemental polygons between non-parallel planes must be implemented. Finally, since a depth map of the scene is acquired at the same time as the intensity image, a depth-layer approach can also be adopted. This technique is appropriate for fast computation, since propagation of an optical wavefront from one plane to another can be handled efficiently with the fast Fourier transform. Fast computation with the depth-layer approach is convenient for real-time applications, but the point cloud method is more appropriate when high resolution is needed. In this study, since the Kinect can be used to obtain both a point cloud and a depth map, we examine the different approaches that can be adopted for hologram computation and compare their performance.
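The plane-to-plane propagation that makes the depth-layer approach fast is typically the angular spectrum method, which needs only one forward and one inverse FFT per layer. The numpy sketch below is a generic illustration, not the authors' code; wavelength, pixel pitch and propagation distance are hypothetical inputs.

    import numpy as np

    def angular_spectrum_propagate(field, wavelength, pitch, distance):
        """Propagate a complex optical field between two parallel planes using
        the angular spectrum method (one FFT, one inverse FFT)."""
        ny, nx = field.shape
        fx = np.fft.fftfreq(nx, d=pitch)
        fy = np.fft.fftfreq(ny, d=pitch)
        FX, FY = np.meshgrid(fx, fy)
        arg = 1.0 / wavelength**2 - FX**2 - FY**2
        mask = arg > 0                                  # drop evanescent components
        kz = 2 * np.pi * np.sqrt(np.where(mask, arg, 0.0))
        H = np.where(mask, np.exp(1j * kz * distance), 0.0)
        return np.fft.ifft2(np.fft.fft2(field) * H)

    # Each depth layer extracted from the Kinect depth map would be propagated to
    # the hologram plane and the layer contributions summed coherently.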
NASA Astrophysics Data System (ADS)
Welch, R. M.; Sengupta, S. K.; Kuo, K. S.
1988-04-01
Statistical measures of the spatial distributions of gray levels (cloud reflectivities) are determined for LANDSAT Multispectral Scanner digital data. Textural properties for twelve stratocumulus cloud fields, seven cumulus fields, and two cirrus fields are examined using the Spatial Gray Level Co-Occurrence Matrix method. The co-occurrence statistics are computed for pixel separations ranging from 57 m to 29 km and at angles of 0°, 45°, 90° and 135°. Nine different textural measures are used to define the cloud field spatial relationships. However, the measures of contrast and correlation appear to be most useful in distinguishing cloud structure. Cloud field macrotexture describes general cloud field characteristics at distances greater than the size of typical cloud elements. It is determined from the spatial asymptotic values of the texture measures. The slope of the texture curves at small distances provides a measure of the microtexture of individual cloud cells. Cloud fields composed primarily of small cells have very steep slopes and reach their asymptotic values at short distances from the origin. As the cells composing the cloud field grow larger, the slope becomes more gradual and the asymptotic distance increases accordingly. Low asymptotic values of correlation show that stratocumulus cloud fields have no large-scale organized structure. Besides the ability to distinguish cloud field structure, texture appears to be a potentially valuable tool in cloud classification. Stratocumulus clouds are characterized by low values of angular second moment and large values of entropy. Cirrus clouds appear to have extremely low values of contrast, low values of entropy, and very large values of correlation. Finally, we propose that sampled high spatial resolution satellite data be used in conjunction with coarser resolution operational satellite data to detect and identify cloud field structure and directionality and to locate regions of subresolution-scale cloud contamination.
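The co-occurrence statistics described above are straightforward to reproduce. The numpy sketch below builds a gray-level co-occurrence matrix for a single displacement and derives the contrast, angular second moment, entropy and correlation measures; the quantization to 16 levels and the restriction to non-negative offsets are simplifications of ours, not choices taken from the paper.

    import numpy as np

    def glcm_features(img, dx=1, dy=0, levels=16):
        """Gray-level co-occurrence matrix for one displacement (dx, dy >= 0)
        plus four classic texture measures."""
        q = (img.astype(float) / img.max() * (levels - 1)).astype(int)  # quantize gray levels
        ref = q[:q.shape[0] - dy, :q.shape[1] - dx]
        nbr = q[dy:, dx:]
        glcm = np.zeros((levels, levels))
        np.add.at(glcm, (ref.ravel(), nbr.ravel()), 1)
        p = glcm / glcm.sum()                                           # joint probabilities
        i, j = np.indices(p.shape)
        contrast = np.sum(p * (i - j) ** 2)
        asm = np.sum(p ** 2)                                            # angular second moment
        entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))
        mu_i, mu_j = np.sum(i * p), np.sum(j * p)
        sd_i = np.sqrt(np.sum((i - mu_i) ** 2 * p))
        sd_j = np.sqrt(np.sum((j - mu_j) ** 2 * p))
        correlation = np.sum((i - mu_i) * (j - mu_j) * p) / (sd_i * sd_j)
        return contrast, asm, entropy, correlation

Evaluating these measures over increasing pixel separations reproduces the kind of texture curves whose slopes and asymptotic values the paper uses to describe micro- and macrotexture.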
Abraham, Leandro; Bromberg, Facundo; Forradellas, Raymundo
2018-04-01
Muscle activation level is currently being captured using impractical and expensive devices which make their use in telemedicine settings extremely difficult. To address this issue, a prototype is presented of a non-invasive, easy-to-install system for the estimation of a discrete level of muscle activation of the biceps muscle from 3D point clouds captured with RGB-D cameras. A methodology is proposed that uses the ensemble of shape functions point cloud descriptor for the geometric characterization of 3D point clouds, together with support vector machines to learn a classifier that, based on this geometric characterization for some points of view of the biceps, provides a model for the estimation of muscle activation for all neighboring points of view. This results in a classifier that is robust to small perturbations in the point of view of the capturing device, greatly simplifying the installation process for end-users. In the discrimination of five levels of effort with values up to the maximum voluntary contraction (MVC) of the biceps muscle (3800 g), the best variant of the proposed methodology achieved mean absolute errors of about 9.21% MVC - an acceptable performance for telemedicine settings where the electric measurement of muscle activation is impractical. The results prove that the correlations between the external geometry of the arm and biceps muscle activation are strong enough to consider computer vision and supervised learning an alternative with great potential for practical applications in tele-physiotherapy. Copyright © 2018 Elsevier Ltd. All rights reserved.
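The learning stage described above, a support vector machine trained on geometric descriptors of the arm, can be sketched with scikit-learn. The data below are random placeholders standing in for the ensemble-of-shape-functions histograms (640 bins in PCL's implementation) and the five discrete effort levels; the kernel and hyperparameters are illustrative assumptions, not the authors' settings.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Hypothetical stand-in data: one ESF-style descriptor per capture and one
    # of five discrete muscle-activation levels as the label.
    rng = np.random.default_rng(0)
    X = rng.random((500, 640))            # descriptors from many points of view
    y = rng.integers(0, 5, size=500)      # discrete effort levels

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))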
TEMPERATURE DISTRIBUTION IN A DIFFUSION CLOUD CHAMBER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slavic, I.; Szymakowski, J.; Stachorska, D.
1961-03-01
A diffusion cloud chamber with working conditions within a pressure range from 10 mm Hg to 2 atmospheres and at variable boundary surface temperatures over a wide interval is described. A simple procedure is described for cooling and thermoregulating the bottom of the chamber by means of a vapor flow of liquid air, which makes it possible to reach temperatures as low as -120 deg C with a stability better than plus or minus 1 deg C. A method for the measurement of temperature distribution by means of a thermistor is described, and a number of curves of the observed temperature gradient, dependent on the boundary surface temperature, are given. An analysis of other factors influencing the stable operation of the diffusion cloud chamber was made. (auth)
Retrieval of effective cloud field parameters from radiometric data
NASA Astrophysics Data System (ADS)
Paulescu, Marius; Badescu, Viorel; Brabec, Marek
2017-06-01
Clouds play a key role in establishing the Earth's climate. Real cloud fields vary widely and are complex in both morphological and microphysical senses. Consequently, the numerical description of the cloud field is a critical task for accurate climate modeling. This study explores the feasibility of retrieving the effective cloud field parameters (namely the cloud aspect ratio and cloud factor) from systematic radiometric measurements at high frequency (a measurement is taken every 15 s). Two different procedures are proposed, evaluated, and discussed with respect to both physical and numerical restrictions. Neither procedure is classified as best; therefore, their specific advantages and weaknesses are discussed. It is shown that the relationship between the cloud shade and point cloudiness computed using the estimated cloud field parameters recovers the typical relationship derived from measurements.
NASA Astrophysics Data System (ADS)
Grochocka, M.
2013-12-01
Mobile laser scanning is a dynamically developing measurement technology that is becoming increasingly widespread in acquiring three-dimensional spatial information. Continuous technical progress, based on the use of new tools and the development of the technology, and thus better use of existing resources, reveals new horizons for the extensive use of MLS technology. Mobile laser scanning systems are usually used for mapping linear objects, and in particular for the inventory of roads, railways, bridges, shorelines, shafts, tunnels, and even geometrically complex urban spaces. The measurement is carried out from the perspective of the object's use, yet does not interfere with movement or ongoing work. This paper presents the initial results of the segmentation of data acquired by MLS. The data used in this work were obtained as part of an inventory measurement of railway line infrastructure. The point clouds were measured using profile scanners installed on a railway platform. To process the data, the tools of the open-source Point Cloud Library (PCL) were used. These tools provide templated programming libraries. PCL is an open, independent project, operating on a large scale, for processing 2D/3D images and point clouds. The PCL software is released under the terms of the BSD license (Berkeley Software Distribution License), which means it is free for commercial and research use. The article presents a number of issues related to the use of this software and its capabilities. Data segmentation is based on the pcl_segmentation library templates, which contain algorithms for segmenting a point cloud into separate clusters. These algorithms are best suited to processing point clouds consisting of a number of spatially isolated regions. The template library performs cluster extraction based on model fitting by the sample consensus method for various parametric models (planes, cylinders, spheres, lines, etc.). Most of the mathematical operations are carried out with the Eigen library, a set of templates for linear algebra.
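Since PCL itself is C++, the clustering idea behind pcl_segmentation can also be conveyed with a short language-neutral sketch. The Python code below mimics Euclidean cluster extraction with a k-d tree: points within a distance tolerance of one another are grown into clusters, and clusters below a minimum size are dropped. Tolerance and minimum size are placeholder values.

    import numpy as np
    from scipy.spatial import cKDTree

    def euclidean_clusters(points, tol=0.5, min_size=50):
        """Group points into clusters whose members lie within `tol` of a
        neighbour, mimicking PCL's Euclidean cluster extraction."""
        tree = cKDTree(points)
        unvisited = set(range(len(points)))
        clusters = []
        while unvisited:
            seed = unvisited.pop()
            frontier, cluster = [seed], [seed]
            while frontier:
                idx = frontier.pop()
                for nb in tree.query_ball_point(points[idx], tol):
                    if nb in unvisited:
                        unvisited.remove(nb)
                        frontier.append(nb)
                        cluster.append(nb)
            if len(cluster) >= min_size:
                clusters.append(cluster)
        return clusters

For model-based extraction (planes, cylinders, spheres), PCL's sample consensus segmentation plays the role that this distance-based clustering plays for spatially isolated regions.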
NASA Astrophysics Data System (ADS)
Vadman, M.; Bemis, S. P.
2017-12-01
Even at high tectonic rates, detection of possible off-fault plastic/aseismic deformation and variability in far-field strain accumulation requires high spatial resolution data and likely decades of measurements. Due to the influence that variability in interseismic deformation could have on the timing, size, and location of future earthquakes and the calculation of modern geodetic estimates of strain, we attempt to use historical aerial photographs to constrain deformation through time across a locked fault. Modern photo-based 3D reconstruction techniques facilitate the creation of dense point clouds from historical aerial photograph collections. We use these tools to generate a time series of high-resolution point clouds that span 10-20 km across the Carrizo Plain segment of the San Andreas fault. We chose this location due to the high tectonic rates along the San Andreas fault and lack of vegetation, which may obscure tectonic signals. We use ground control points collected with differential GPS to establish scale and georeference the aerial photograph-derived point clouds. With a locked fault assumption, point clouds can be co-registered (to one another and/or the 1.7 km wide B4 airborne lidar dataset) along the fault trace to calculate relative displacements away from the fault. We use CloudCompare to compute 3D surface displacements, which reflect the interseismic strain accumulation that occurred in the time interval between photo collections. As expected, we do not observe clear surface displacements along the primary fault trace in our comparisons of the B4 lidar data against the aerial photograph-derived point clouds. However, there may be small scale variations within the lidar swath area that represent near-fault plastic deformation. With large-scale historical photographs available for the Carrizo Plain extending back to at least the 1940s, we can potentially sample nearly half the interseismic period since the last major earthquake on this portion of this fault (1857). Where sufficient aerial photograph coverage is available, this approach has the potential to illuminate complex fault zone processes for this and other major strike-slip faults.
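The displacement computation itself is done in CloudCompare; underneath such comparisons is a cloud-to-cloud nearest-neighbour distance, sketched below with scipy under the assumption that the two epochs have already been co-registered along the fault trace. Coherent offsets that grow with distance from the trace would then reflect interseismic strain accumulation.

    import numpy as np
    from scipy.spatial import cKDTree

    def cloud_to_cloud_distance(reference, compared):
        """Distance from each point of the `compared` epoch to its nearest
        neighbour in the `reference` epoch (both N x 3 arrays)."""
        distances, _ = cKDTree(reference).query(compared)
        return distances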
Big Geo Data Services: From More Bytes to More Barrels
NASA Astrophysics Data System (ADS)
Misev, Dimitar; Baumann, Peter
2016-04-01
The data deluge is affecting the oil and gas industry just as much as many other industries. However, aside from the sheer volume there is the challenge of data variety, such as regular and irregular grids, multi-dimensional space/time grids, point clouds, and TINs and other meshes. A uniform conceptualization for modelling and serving them could save substantial effort, such as the proverbial "department of reformatting". The notion of a coverage can actually accomplish this. Its abstract model in ISO 19123, together with the concrete, interoperable OGC Coverage Implementation Schema (CIS), which is currently under adoption as ISO 19123-2, provides a common platform for representing any n-D grid type, point clouds, and general meshes. This is paired with the OGC Web Coverage Service (WCS) together with its datacube analytics language, the OGC Web Coverage Processing Service (WCPS). The OGC WCS Core Reference Implementation, rasdaman, relies on Array Database technology, i.e. a NewSQL/NoSQL approach. It supports the grid part of coverages, with installations of 100+ TB known and single queries parallelized across 1,000+ cloud nodes. Recent research attempts to address the point cloud and mesh part through a unified query model. The envisioned Holy Grail is that these approaches can be merged into a single service interface at some time. We present both the grid and the point cloud / mesh approaches and discuss status, implementation, standardization, and research perspectives, including a live demo.
Pursuit Eye-Movements in Curve Driving Differentiate between Future Path and Tangent Point Models
Lappi, Otto; Pekkanen, Jami; Itkonen, Teemu H.
2013-01-01
For nearly 20 years, looking at the tangent point on the road edge has been prominent in models of visual orientation in curve driving. It is the most common interpretation of the commonly observed pattern of car drivers looking through a bend, or at the apex of the curve. Indeed, in the visual science literature, visual orientation towards the inside of a bend has become known as “tangent point orientation”. Yet, it remains to be empirically established whether it is the tangent point the drivers are looking at, or whether some other reference point on the road surface, or several reference points, are being targeted in addition to, or instead of, the tangent point. Recently discovered optokinetic pursuit eye-movements during curve driving can provide complementary evidence over and above traditional gaze-position measures. This paper presents the first detailed quantitative analysis of pursuit eye movements elicited by curvilinear optic flow in real driving. The data implicates the far zone beyond the tangent point as an important gaze target area during steady-state cornering. This is in line with the future path steering models, but difficult to reconcile with any pure tangent point steering model. We conclude that the tangent point steering models do not provide a general explanation of eye movement and steering during a curve driving sequence and cannot be considered uncritically as the default interpretation when the gaze position distribution is observed to be situated in the region of the curve apex. PMID:23894300
Stochastic Surface Mesh Reconstruction
NASA Astrophysics Data System (ADS)
Ozendi, M.; Akca, D.; Topan, H.
2018-05-01
A generic and practical methodology is presented for 3D surface mesh reconstruction from terrestrial laser scanner (TLS) derived point clouds. It has two main steps. The first step develops an anisotropic point error model, which is capable of computing the theoretical precision of the 3D coordinates of each individual point in the point cloud. The magnitude and direction of the errors are represented in the form of error ellipsoids. The second step focuses on the stochastic surface mesh reconstruction. It exploits the previously determined error ellipsoids by computing a point-wise quality measure, which takes into account the semi-diagonal axis length of the error ellipsoid. Only the points with the smallest errors are used in the surface triangulation; the remaining ones are automatically discarded.
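The error ellipsoids and the point-wise quality measure can be illustrated with a small numpy sketch, assuming a 3 x 3 covariance matrix is available for every point from the error model. The confidence scaling factor, the use of the largest semi-axis as the quality measure, and the fixed retention fraction are assumptions made here for illustration, not the authors' exact formulation.

    import numpy as np

    def error_ellipsoid_semi_axes(cov, scale=1.0):
        """Semi-axis lengths and directions of a point's error ellipsoid,
        from the eigen-decomposition of its 3x3 covariance matrix."""
        eigval, eigvec = np.linalg.eigh(cov)       # eigenvalues in ascending order
        semi_axes = scale * np.sqrt(eigval)        # standard deviations along the principal axes
        return semi_axes, eigvec

    def keep_most_precise(covariances, fraction=0.5):
        """Indices of the points whose largest semi-axis (a simple quality
        measure) is among the smallest `fraction`; only these points would
        enter the surface triangulation."""
        quality = [error_ellipsoid_semi_axes(c)[0][-1] for c in covariances]
        order = np.argsort(quality)
        return order[: int(len(order) * fraction)]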
Automatic extraction of pavement markings on streets from point cloud data of mobile LiDAR
NASA Astrophysics Data System (ADS)
Gao, Yang; Zhong, Ruofei; Tang, Tao; Wang, Liuzhao; Liu, Xianlin
2017-08-01
Pavement markings provide an important foundation as they help keep road users safe. Accurate and comprehensive information about pavement markings assists road regulators and is useful in developing driverless technology. Mobile light detection and ranging (LiDAR) systems offer new opportunities to collect and process accurate pavement marking information. Mobile LiDAR systems can directly obtain the three-dimensional (3D) coordinates and intensity of objects in a fast and efficient way. RGB attribute information for the data points can be obtained from the panoramic camera in the system. In this paper, we present a novel processing method to automatically extract pavement markings using multiple attributes of the laser scanning point cloud from mobile LiDAR data. The method utilizes a differential grayscale of the RGB color, the laser pulse reflection intensity, and the differential intensity to identify and extract pavement markings. Point cloud density is used to remove noise, and morphological operations eliminate remaining errors. We tested the method on different sections of roads in Beijing, China, and Buffalo, NY, USA. The results indicated that both correctness (p) and completeness (r) were higher than 90%. The method can be applied to extract pavement markings from the huge point clouds produced by mobile LiDAR.
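A minimal sketch of the extraction logic, assuming the point cloud has already been rasterised into a laser intensity image and a differential RGB grayscale image: thresholds are applied to both attributes and a morphological opening removes small spurious blobs. The threshold values and the 3 x 3 structuring element are placeholders, not values from the paper.

    import numpy as np
    from scipy import ndimage

    def extract_markings(intensity_img, gray_diff_img, t_int=0.7, t_gray=0.2):
        """Binary mask of candidate pavement markings from a rasterised laser
        intensity image and a differential RGB grayscale image; small blobs
        are removed with a morphological opening."""
        candidate = (intensity_img > t_int) & (gray_diff_img > t_gray)
        cleaned = ndimage.binary_opening(candidate, structure=np.ones((3, 3)))
        return cleaned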
A novel point cloud registration using 2D image features
NASA Astrophysics Data System (ADS)
Lin, Chien-Chou; Tai, Yen-Chou; Lee, Jhong-Jin; Chen, Yong-Sheng
2017-01-01
Since a 3D scanner captures only one view of a 3D object at a time, 3D registration of multiple scenes is the key issue of 3D modeling. This paper presents a novel and efficient 3D registration method based on 2D local feature matching. The proposed method transforms the point clouds into 2D bearing angle images and then uses a 2D feature-based matching method, SURF, to find matching pixel pairs between two images. The corresponding points of the 3D point clouds can be obtained from those pixel pairs. Since the corresponding pairs are sorted by the distance between matching features, only the top half of the corresponding pairs are used to find the optimal rotation matrix by least-squares approximation. In this paper, the optimal rotation matrix is derived by the orthogonal Procrustes method (an SVD-based approach). The 3D model of an object can therefore be reconstructed by aligning the point clouds with the optimal transformation matrix. Experimental results show that the accuracy of the proposed method is close to that of ICP, but the computation cost is reduced significantly; the performance is six times faster than the generalized-ICP algorithm. Furthermore, while ICP requires high alignment similarity between two scenes, the proposed method is robust to larger differences in viewing angle.
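The rotation estimation named above, orthogonal Procrustes solved with an SVD, has a compact standard form. The numpy sketch below assumes the corresponding 3D points have already been recovered from the matched SURF pixel pairs; it is the textbook Kabsch construction rather than the authors' code.

    import numpy as np

    def procrustes_rigid(src, dst):
        """Least-squares rotation R and translation t such that
        dst ~= src @ R.T + t, via the SVD-based orthogonal Procrustes solution."""
        src_c = src - src.mean(axis=0)
        dst_c = dst - dst.mean(axis=0)
        U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
        R = Vt.T @ D @ U.T
        t = dst.mean(axis=0) - src.mean(axis=0) @ R.T
        return R, t

Keeping only the top half of the sorted correspondences, as the abstract describes, simply means calling this routine on that subset of matched points.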
Drogue tracking using 3D flash lidar for autonomous aerial refueling
NASA Astrophysics Data System (ADS)
Chen, Chao-I.; Stettner, Roger
2011-06-01
Autonomous aerial refueling (AAR) is an important capability for an unmanned aerial vehicle (UAV) to increase its flying range and endurance without increasing its size. This paper presents a novel tracking method that utilizes both 2D intensity and 3D point-cloud data acquired with a 3D Flash LIDAR sensor to establish relative position and orientation between the receiver vehicle and drogue during an aerial refueling process. Unlike classic, vision-based sensors, a 3D Flash LIDAR sensor can provide 3D point-cloud data in real time without motion blur, in the day or night, and is capable of imaging through fog and clouds. The proposed method segments out the drogue through 2D analysis and estimates the center of the drogue from 3D point-cloud data for flight trajectory determination. A level-set front propagation routine is first employed to identify the target of interest and establish its silhouette information. Sufficient domain knowledge, such as the size of the drogue and the expected operable distance, is integrated into our approach to quickly eliminate unlikely target candidates. A statistical analysis along with a random sample consensus (RANSAC) is performed on the target to reduce noise and estimate the center of the drogue after all 3D points on the drogue are identified. The estimated center and drogue silhouette serve as the seed points to efficiently locate the target in the next frame.
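The centre estimation that combines a statistical analysis with RANSAC can be sketched generically as a RANSAC-style sphere fit. The linear formulation |p|^2 = 2 c·p + (r^2 - |c|^2) turns each minimal sample into a small least-squares problem; the iteration count and inlier threshold below are placeholders, and the authors' exact estimator is not specified in the abstract.

    import numpy as np

    def fit_sphere(pts):
        """Least-squares sphere through >= 4 points using the linear form
        |p|^2 = 2 c.p + (r^2 - |c|^2)."""
        A = np.hstack([2 * pts, np.ones((len(pts), 1))])
        b = np.sum(pts ** 2, axis=1)
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        center = sol[:3]
        radius = np.sqrt(sol[3] + center @ center)
        return center, radius

    def ransac_sphere(pts, iters=200, thresh=0.02, seed=0):
        """Repeatedly fit a sphere to 4 random points, keep the model with the
        most inliers, then refit on those inliers."""
        rng = np.random.default_rng(seed)
        best_inliers = np.zeros(len(pts), dtype=bool)
        for _ in range(iters):
            c, r = fit_sphere(pts[rng.choice(len(pts), 4, replace=False)])
            residual = np.abs(np.linalg.norm(pts - c, axis=1) - r)
            inliers = residual < thresh
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        if best_inliers.sum() < 4:                      # fall back to all points
            best_inliers[:] = True
        return fit_sphere(pts[best_inliers])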
Thermodynamic and cloud parameter retrieval using infrared spectral data
NASA Technical Reports Server (NTRS)
Zhou, Daniel K.; Smith, William L., Sr.; Liu, Xu; Larar, Allen M.; Huang, Hung-Lung A.; Li, Jun; McGill, Matthew J.; Mango, Stephen A.
2005-01-01
High-resolution infrared radiance spectra obtained from near nadir observations provide atmospheric, surface, and cloud property information. A fast radiative transfer model, including cloud effects, is used for atmospheric profile and cloud parameter retrieval. The retrieval algorithm is presented along with its application to recent field experiment data from the NPOESS Airborne Sounding Testbed - Interferometer (NAST-I). The retrieval accuracy dependence on cloud properties is discussed. It is shown that relatively accurate temperature and moisture retrievals can be achieved below optically thin clouds. For optically thick clouds, accurate temperature and moisture profiles down to cloud top level are obtained. For both optically thin and thick cloud situations, the cloud top height can be retrieved with an accuracy of approximately 1.0 km. Preliminary NAST-I retrieval results from the recent Atlantic-THORPEX Regional Campaign (ATReC) are presented and compared with coincident observations obtained from dropsondes and the nadir-pointing Cloud Physics Lidar (CPL).
The estimation of branching curves in the presence of subject-specific random effects.
Elmi, Angelo; Ratcliffe, Sarah J; Guo, Wensheng
2014-12-20
Branching curves are a technique for modeling curves that change trajectory at a change (branching) point. Currently, the estimation framework is limited to independent data, and smoothing splines are used for estimation. This article aims to extend the branching curve framework to the longitudinal data setting where the branching point varies by subject. If the branching point is modeled as a random effect, then the longitudinal branching curve framework is a semiparametric nonlinear mixed effects model. Given existing issues with using random effects within a smoothing spline, we express the model as a B-spline based semiparametric nonlinear mixed effects model. Simple, clever smoothness constraints are enforced on the B-splines at the change point. The method is applied to Women's Health data where we model the shape of the labor curve (cervical dilation measured longitudinally) before and after treatment with oxytocin (a labor stimulant). Copyright © 2014 John Wiley & Sons, Ltd.
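Although the framework above is methodological, the model class can be summarised schematically. The display below is a hedged sketch in our own notation (not the authors'): subject i has a pre-branch curve f_0, a random branching point tau_i and a post-branch deviation f_1, both expanded in B-splines, with continuity and first-derivative smoothness at the branch imposed by constraining f_1.

    y_i(t) = f_0(t) + \mathbf{1}\{t > \tau_i\}\, f_1(t - \tau_i) + b_i + \varepsilon_i(t),
    \qquad f_k(u) = \sum_j \beta_{kj} B_j(u),
    \tau_i \sim N(\mu_\tau, \sigma_\tau^2), \qquad b_i \sim N(0, \sigma_b^2), \qquad \varepsilon_i(t) \sim N(0, \sigma^2),
    \text{with } f_1(0) = 0 \text{ and } f_1'(0) = 0 \text{ enforced through linear constraints on the B-spline coefficients } \beta_1.

Because f_1 vanishes to first order at zero, the fitted trajectory stays continuous and smooth at each subject's own change point while still allowing the post-treatment shape to differ.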
THE INFLUENCE OF NONUNIFORM CLOUD COVER ON TRANSIT TRANSMISSION SPECTRA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Line, Michael R.; Parmentier, Vivien, E-mail: mrline@ucsc.edu
2016-03-20
We model the impact of nonuniform cloud cover on transit transmission spectra. Patchy clouds exist in nearly every solar system atmosphere, brown dwarfs, and transiting exoplanets. Our major findings suggest that fractional cloud coverage can exactly mimic high mean molecular weight atmospheres and vice versa over certain wavelength regions, in particular, over the Hubble Space Telescope (HST) Wide Field Camera 3 (WFC3) bandpass (1.1–1.7 μm). We also find that patchy cloud coverage exhibits a signature that is different from uniform global clouds. Furthermore, we explain analytically why the “patchy cloud-high mean molecular weight” degeneracy exists. We also explore the degeneracy of nonuniform cloud coverage in atmospheric retrievals on both synthetic and real planets. We find from retrievals on a synthetic solar composition hot Jupiter with patchy clouds and a cloud-free high mean molecular weight warm Neptune that both cloud-free high mean molecular weight atmospheres and partially cloudy atmospheres can explain the data equally well. Another key finding is that the HST WFC3 transit transmission spectra of two well-observed objects, the hot Jupiter HD 189733b and the warm Neptune HAT-P-11b, can be explained well by solar composition atmospheres with patchy clouds without the need to invoke high mean molecular weight or global clouds. The degeneracy between high molecular weight and solar composition partially cloudy atmospheres can be broken by observing the molecular Rayleigh scattering differences between the two. Furthermore, the signature of partially cloudy limbs also appears as a ∼100 ppm residual in the ingress and egress of the transit light curves, provided that the transit timing is known to seconds.
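The degeneracy discussed above can be summarised by the standard linear mixing of the two limb spectra (a generic formulation in our notation, not an equation quoted from the paper):

    D_{\mathrm{obs}}(\lambda) \;=\; f\, D_{\mathrm{cloudy}}(\lambda) \;+\; (1 - f)\, D_{\mathrm{clear}}(\lambda),

where f is the cloudy fraction of the terminator. A gray cloud deck flattens D_cloudy, so an intermediate f mutes the molecular features of D_clear much as a larger mean molecular weight shrinks the scale height, which is why the two scenarios can fit WFC3 data equally well.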
Effects of Phase Separation Behavior on Morphology and Performance of Polycarbonate Membranes
Idris, Alamin; Man, Zakaria; Maulud, Abdulhalim S.; Khan, Muhammad Saad
2017-01-01
The phase separation behavior of bisphenol-A-polycarbonate (PC), dissolved in N-methyl-2-pyrrolidone and dichloromethane solvents in coagulant water, was studied by the cloud point method. The respective cloud point data were determined by titration against water at room temperature and the characteristic binodal curves for the ternary systems were plotted. Further, the physical properties such as viscosity, refractive index, and density of the solution were measured. The critical polymer concentrations were determined from the viscosity measurements. PC/NMP and PC/DCM membranes were fabricated by the dry-wet phase inversion technique and characterized for their morphology, structure, and thermal stability using field emission scanning electron microscopy, Fourier transform infrared spectroscopy, and thermogravimetric analysis, respectively. The membranes’ performances were tested for their permeance to CO2, CH4, and N2 gases at 24 ± 0.5 °C with varying feed pressures from 2 to 10 bar. The PC/DCM membranes appeared to be asymmetric dense membrane types with appreciable thermal stability, whereas the PC/NMP membranes were observed to be asymmetric with porous structures exhibiting 4.18% and 9.17% decrease in the initial and maximum degradation temperatures, respectively. The ideal CO2/N2 and CO2/CH4 selectivities of the PC/NMP membrane decreased with the increase in feed pressures, while for the PC/DCM membrane, the average ideal CO2/N2 and CO2/CH4 selectivities were found to be 25.1 ± 0.8 and 21.1 ± 0.6, respectively. Therefore, the PC/DCM membranes with dense morphologies are appropriate for gas separation applications. PMID:28379173
Heidarizadi, Elham; Tabaraki, Reza
2016-01-01
A sensitive cloud point extraction method for the simultaneous determination of trace amounts of sunset yellow (SY), allura red (AR) and brilliant blue (BB) by spectrophotometry was developed. The effects of experimental parameters such as Triton X-100 concentration, KCl concentration and initial pH on the extraction efficiency of the dyes were optimized using response surface methodology (RSM) with a Doehlert design. Experimental data were evaluated by applying RSM integrating a desirability function approach. The optimum conditions for the simultaneous extraction of SY, AR and BB were: Triton X-100 concentration 0.0635 mol/L, KCl concentration 0.11 mol/L and pH 4, with a maximum overall desirability D of 0.95. Correspondingly, the maximum extraction efficiencies of SY, AR and BB were 100%, 92.23% and 95.69%, respectively. At the optimal conditions, the measured extraction efficiencies were 99.8%, 92.48% and 95.96% for SY, AR and BB, respectively. These values were only 0.2%, 0.25% and 0.27% different from the predicted values, suggesting that the desirability function approach with RSM was a useful technique for simultaneous dye extraction. Under optimum conditions, linear calibration curves were obtained in the ranges of 0.02-4 μg/mL for SY, 0.025-2.5 μg/mL for AR and 0.02-4 μg/mL for BB. Detection limits based on three times the standard deviation of the blank (3Sb) were 0.009, 0.01 and 0.007 μg/mL (n=10) for SY, AR and BB, respectively. The method was successfully used for the simultaneous determination of the dyes in different food samples. Copyright © 2015 Elsevier B.V. All rights reserved.
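For reference, the two figures of merit quoted above follow their usual definitions (our notation, not reproduced from the paper): the overall desirability is the geometric mean of the individual desirabilities of the three dyes, and the detection limit follows the 3Sb criterion relative to the calibration slope,

    D = \left(d_{\mathrm{SY}}\, d_{\mathrm{AR}}\, d_{\mathrm{BB}}\right)^{1/3}, \qquad \mathrm{LOD} = \frac{3\, s_b}{m},

where s_b is the standard deviation of the blank measurements (n = 10 here) and m is the slope of the corresponding calibration curve.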
Spir, Lívia Genovez; Ataide, Janaína Artem; De Lencastre Novaes, Letícia Celia; Moriel, Patrícia; Mazzola, Priscila Gava; De Borba Gurpilhares, Daniela; Silveira, Edgar; Pessoa, Adalberto; Tambourgi, Elias Basile
2015-01-01
Bromelain is a set of proteolytic enzymes found in pineapple (Ananas comosus) tissues such as the stem, fruit and leaves. Because of its proteolytic activity, bromelain has potential applications in the cosmetic, pharmaceutical, and food industries. The present study focused on the recovery of bromelain from pineapple peel by liquid-liquid extraction in aqueous two-phase micellar systems (ATPMS), using Triton X-114 (TX-114) and McIlvaine buffer, in the absence and presence of the electrolytes CaCl2 and KI; the cloud points of the generated extraction systems were studied by plotting binodal curves. Based on the cloud points, three temperatures were selected for extraction: 30, 33, and 36°C for systems in the absence of salts; 40, 43, and 46°C in the presence of KI; and 24, 27, and 30°C in the presence of CaCl2. Total protein and enzymatic activities were analyzed to monitor bromelain. Using the ATPMS chosen for extraction (0.5 M KI with 3% TX-114, at pH 6.0 and 40°C), the stability of the bromelain extract was assessed after incorporation into three cosmetic bases: an anhydrous gel, a cream, and a cream-gel formulation. The cream-gel formulation proved to be the most appropriate base for conveying bromelain, and its optimal storage conditions were found to be 4.0 ± 0.5°C. The selected ATPMS enabled the extraction of a biomolecule with high added value from waste and its incorporation into a cosmetic formulation, allowing further cosmetic potential to be explored. © 2015 American Institute of Chemical Engineers.
Li, Jiekang; Li, Guirong; Han, Qian
2016-12-05
In this paper, two kinds of salophens (Sal) with different solubilities, Sal1 and Sal2, were synthesized; both can combine with uranyl to form stable complexes: [UO2(2+)-Sal1] and [UO2(2+)-Sal2]. [UO2(2+)-Sal1] was used as a ligand to extract uranium from complex samples by dual cloud point extraction (dCPE), and [UO2(2+)-Sal2] was used as a catalyst for the determination of uranium by a photocatalytic resonance fluorescence (RF) method. The photocatalytic effect of [UO2(2+)-Sal2] on the oxidation of pyronine Y (PRY) by potassium bromate, which leads to a decrease in the RF intensity of PRY, was studied. The decrease in RF intensity of the reaction system (ΔF) is proportional to the concentration of uranium (c), and a novel photocatalytic RF method was developed for the determination of trace uranium (VI) after dCPE. The combination of the photocatalytic RF technique and the dCPE procedure endows the presented method with enhanced sensitivity and selectivity. Under optimal conditions, the calibration curve is linear from 0.067 to 6.57 ng/mL; the linear regression equation was ΔF = 438.0 c (ng/mL) + 175.6 with a correlation coefficient r = 0.9981. The limit of detection was 0.066 ng/mL. The proposed method was successfully applied to the separation and determination of uranium in real samples with recoveries of 95.0-103.5%. The mechanisms of the indicator reaction and dCPE are discussed. Copyright © 2016 Elsevier B.V. All rights reserved.
Ishikawa, N; Taki, K; Hojo, Y; Hagino, Y; Shigei, T
1978-09-01
Dog heart-lung preparations were prepared. The "equilibrium point", defined as the point at which the cardiac output (CO) curve and the venous return (VR) curve cross when CO and VR are plotted against right atrial pressure, was recorded directly using an X-Y recorder. The CO curve was obtained, as a locus of the equilibrium point, by raising and lowering the level of blood in the venous reservoir (competence test). The effect of this procedure is to increase or decrease the mean systemic pressure and to cause a corresponding parallel shift in the VR curve. The VR curve was obtained by changing myocardial contractility. When heart failure was induced by pentobarbital or by chloroform, the equilibrium point shifted downwards to the right, depicting the VR curve. During development of the failure, the slopes of the CO curves decreased gradually. The effects of cinobufagin and norepinephrine were also analyzed. Use of the X-Y recorder enabled us to establish uniform experimental conditions more easily and to follow the effects of drugs continuously on a diagram equating the CO and VR curves (Guyton's scheme).
Cloud condensation nuclei near marine cumulus
NASA Technical Reports Server (NTRS)
Hudson, James G.
1993-01-01
Extensive airborne measurements of cloud condensation nucleus (CCN) spectra and condensation nuclei below, in, between, and above the cumulus clouds near Hawaii point to important aerosol-cloud interactions. Consistent particle concentrations of 200/cu cm were found above the marine boundary layer and within the noncloudy marine boundary layer. Lower and more variable CCN concentrations within the cloudy boundary layer, especially very close to the clouds, appear to be a result of cloud scavenging processes. Gravitational coagulation of cloud droplets may be the principal cause of this difference in the vertical distribution of CCN. The results suggest a reservoir of CCN in the free troposphere which can act as a source for the marine boundary layer.