Extracting cross sections and water levels of vegetated ditches from LiDAR point clouds
NASA Astrophysics Data System (ADS)
Roelens, Jennifer; Dondeyne, Stefaan; Van Orshoven, Jos; Diels, Jan
2016-12-01
The hydrologic response of a catchment is sensitive to the morphology of the drainage network. Dimensions of bigger channels are usually well known, however, geometrical data for man-made ditches is often missing as there are many and small. Aerial LiDAR data offers the possibility to extract these small geometrical features. Analysing the three-dimensional point clouds directly will maintain the highest degree of information. A longitudinal and cross-sectional buffer were used to extract the cross-sectional profile points from the LiDAR point cloud. The profile was represented by spline functions fitted through the minimum envelop of the extracted points. The cross-sectional ditch profiles were classified for the presence of water and vegetation based on the normalized difference water index and the spatial characteristics of the points along the profile. The normalized difference water index was created using the RGB and intensity data coupled to the LiDAR points. The mean vertical deviation of 0.14 m found between the extracted and reference cross sections could mainly be attributed to the occurrence of water and partly to vegetation on the banks. In contrast to the cross-sectional area, the extracted width was not influenced by the environment (coefficient of determination R2 = 0.87). Water and vegetation influenced the extracted ditch characteristics, but the proposed method is still robust and therefore facilitates input data acquisition and improves accuracy of spatially explicit hydrological models.
Determination of Cd in urine by cloud point extraction-tungsten coil atomic absorption spectrometry.
Donati, George L; Pharr, Kathryn E; Calloway, Clifton P; Nóbrega, Joaquim A; Jones, Bradley T
2008-09-15
Cadmium concentrations in human urine are typically at or below the 1 microgL(-1) level, so only a handful of techniques may be appropriate for this application. These include sophisticated methods such as graphite furnace atomic absorption spectrometry and inductively coupled plasma mass spectrometry. While tungsten coil atomic absorption spectrometry is a simpler and less expensive technique, its practical detection limits often prohibit the detection of Cd in normal urine samples. In addition, the nature of the urine matrix often necessitates accurate background correction techniques, which would add expense and complexity to the tungsten coil instrument. This manuscript describes a cloud point extraction method that reduces matrix interference while preconcentrating Cd by a factor of 15. Ammonium pyrrolidinedithiocarbamate and Triton X-114 are used as complexing agent and surfactant, respectively, in the extraction procedure. Triton X-114 forms an extractant coacervate surfactant-rich phase that is denser than water, so the aqueous supernatant is easily removed leaving the metal-containing surfactant layer intact. A 25 microL aliquot of this preconcentrated sample is placed directly onto the tungsten coil for analysis. The cloud point extraction procedure allows for simple background correction based either on the measurement of absorption at a nearby wavelength, or measurement of absorption at a time in the atomization step immediately prior to the onset of the Cd signal. Seven human urine samples are analyzed by this technique and the results are compared to those found by the inductively coupled plasma mass spectrometry analysis of the same samples performed at a different institution. The limit of detection for Cd in urine is 5 ngL(-1) for cloud point extraction tungsten coil atomic absorption spectrometry. The accuracy of the method is determined with a standard reference material (toxic metals in freeze-dried urine) and the determined values agree with the reported levels at the 95% confidence level.
Cloud point extraction of Δ9-tetrahydrocannabinol from cannabis resin.
Ameur, S; Haddou, B; Derriche, Z; Canselier, J P; Gourdon, C
2013-04-01
A cloud point extraction coupled with high performance liquid chromatography (HPLC/UV) method was developed for the determination of Δ(9)-tetrahydrocannabinol (THC) in micellar phase. The nonionic surfactant "Dowfax 20B102" was used to extract and pre-concentrate THC from cannabis resin, prior to its determination with a HPLC-UV system (diode array detector) with isocratic elution. The parameters and variables affecting the extraction were investigated. Under optimum conditions (1 wt.% Dowfax 20B102, 1 wt.% Na2SO4, T = 318 K, t = 30 min), this method yielded a quite satisfactory recovery rate (~81 %). The limit of detection was 0.04 μg mL(-1), and the relative standard deviation was less than 2 %. Compared with conventional solid-liquid extraction, this new method avoids the use of volatile organic solvents, therefore is environmentally safer.
Classification of Aerial Photogrammetric 3d Point Clouds
NASA Astrophysics Data System (ADS)
Becker, C.; Häni, N.; Rosinskaya, E.; d'Angelo, E.; Strecha, C.
2017-05-01
We present a powerful method to extract per-point semantic class labels from aerial photogrammetry data. Labelling this kind of data is important for tasks such as environmental modelling, object classification and scene understanding. Unlike previous point cloud classification methods that rely exclusively on geometric features, we show that incorporating color information yields a significant increase in accuracy in detecting semantic classes. We test our classification method on three real-world photogrammetry datasets that were generated with Pix4Dmapper Pro, and with varying point densities. We show that off-the-shelf machine learning techniques coupled with our new features allow us to train highly accurate classifiers that generalize well to unseen data, processing point clouds containing 10 million points in less than 3 minutes on a desktop computer.
Chen, Ligang; Zhao, Qi; Jin, Haiyan; Zhang, Xiaopan; Xu, Yang; Yu, Aimin; Zhang, Hanqi; Ding, Lan
2010-04-15
A method based on coupling of cloud point extraction (CPE) with high performance liquid chromatography separation and ultraviolet detection was developed for determination of xanthohumol in beer. The nonionic surfactant Triton X-114 was chosen as the extraction medium. The parameters affecting the CPE were evaluated and optimized. The highest extraction yield of xanthohumol was obtained with 2.5% of Triton X-114 (v/v) at pH 5.0, 15% of sodium chloride (w/v), 70 degrees C of equilibrium temperature and 10 min of equilibrium time. Under these conditions, the limit of detection of xanthohumol is 0.003 mg L(-1). The intra- and inter-day precisions expressed as relative standard deviations are 4.6% and 6.3%, respectively. The proposed method was successfully applied for determination of xanthohumol in various beer samples. The contents of xanthohumol in these samples are in the range of 0.052-0.628 mg L(-1), and the recoveries ranging from 90.7% to 101.9% were obtained. The developed method was demonstrated to be efficient, green, rapid and inexpensive for extraction and determination of xanthohumol in beer. (c) 2010 Elsevier B.V. All rights reserved.
Kachangoon, Rawikan; Vichapong, Jitlada; Burakham, Rodjana; Santaladchaiyakit, Yanawath; Srijaranai, Supalax
2018-05-12
An effective pre-concentration method, namely amended-cloud point extraction (CPE), has been developed for the extraction and pre-concentration of neonicotinoid insecticide residues. The studied analytes including clothianidin, imidacloprid, acetamiprid, thiamethoxam and thiacloprid were chosen as a model compound. The amended-CPE procedure included two cloud point processes. Triton™ X-114 was used to extract neonicotinoid residues into the surfactant-rich phase and then the analytes were transferred into an alkaline solution with the help of ultrasound energy. The extracts were then analyzed by high-performance liquid chromatography (HPLC) coupled with a monolithic column. Several factors influencing the extraction efficiency were studied such as kind and concentration of surfactant, type and content of salts, kind and concentration of back extraction agent, and incubation temperature and time. Enrichment factors (EFs) were found in the range of 20⁻333 folds. The limits of detection of the studied neonicotinoids were in the range of 0.0003⁻0.002 µg mL −1 which are below the maximum residue limits (MRLs) established by the European Union (EU). Good repeatability was obtained with relative standard deviations lower than 1.92% and 4.54% for retention time ( t R ) and peak area, respectively. The developed extraction method was successfully applied for the analysis of water samples. No detectable residues of neonicotinoids in the studied samples were found.
Zhao, Lingling; Zhong, Shuxian; Fang, Keming; Qian, Zhaosheng; Chen, Jianrong
2012-11-15
A dual-cloud point extraction (d-CPE) procedure has been developed for simultaneous pre-concentration and separation of heavy metal ions (Cd2+, Co2+, Ni2+, Pb2+, Zn2+, and Cu2+ ion) in water samples by inductively coupled plasma optical emission spectrometry (ICP-OES). The procedure is based on forming complexes of metal ion with 8-hydroxyquinoline (8-HQ) into the as-formed Triton X-114 surfactant rich phase. Instead of direct injection or analysis, the surfactant rich phase containing the complexes was treated by nitric acid, and the detected ions were back extracted again into aqueous phase at the second cloud point extraction stage, and finally determined by ICP-OES. Under the optimum conditions (pH=7.0, Triton X-114=0.05% (w/v), 8-HQ=2.0×10(-4) mol L(-1), HNO3=0.8 mol L(-1)), the detection limits for Cd2+, Co2+, Ni2+, Pb2+, Zn2+, and Cu2+ ions were 0.01, 0.04, 0.01, 0.34, 0.05, and 0.04 μg L(-1), respectively. Relative standard deviation (RSD) values for 10 replicates at 100 μg L(-1) were lower than 6.0%. The proposed method could be successfully applied to the determination of Cd2+, Co2+, Ni2+, Pb2+, Zn2+, and Cu2+ ion in water samples. Copyright © 2012 Elsevier B.V. All rights reserved.
Rahimi, Marzieh; Hashemi, Payman; Nazari, Fariba
2014-05-15
A cold column trapping-cloud point extraction (CCT-CPE) method coupled to high performance liquid chromatography (HPLC) was developed for preconcentration and determination of curcumin in human urine. A nonionic surfactant, Triton X-100, was used as the extraction medium. In the proposed method, a low surfactant concentration of 0.4% v/v and a short heating time of only 2min at 70°C were sufficient for quantitative extraction of the analyte. For the separation of the extraction phase, the resulted cloudy solution was passed through a packed trapping column that was cooled to 0 °C. The temperature of the CCT column was then increased to 25°C and the surfactant rich phase was desorbed with 400μL ethanol to be directly injected into HPLC for the analysis. The effects of different variables such as pH, surfactant concentration, cloud point temperature and time were investigated and optimum conditions were established by a central composite design (response surface) method. A limit of detection of 0.066mgL(-1) curcumin and a linear range of 0.22-100mgL(-1) with a determination coefficient of 0.9998 were obtained for the method. The average recovery and relative standard deviation for six replicated analysis were 101.0% and 2.77%, respectively. The CCT-CPE technique was faster than a conventional CPE method requiring a lower concentration of the surfactant and lower temperatures with no need for the centrifugation. The proposed method was successfully applied to the analysis of curcumin in human urine samples. Copyright © 2014 Elsevier B.V. All rights reserved.
Heydari, Rouhollah; Elyasi, Najmeh S
2014-10-01
A novel, simple, and effective ion-pair cloud-point extraction coupled with a gradient high-performance liquid chromatography method was developed for determination of thiamine (vitamin B1 ), niacinamide (vitamin B3 ), pyridoxine (vitamin B6 ), and riboflavin (vitamin B2 ) in plasma and urine samples. The extraction and separation of vitamins were achieved based on an ion-pair formation approach between these ionizable analytes and 1-heptanesulfonic acid sodium salt as an ion-pairing agent. Influential variables on the ion-pair cloud-point extraction efficiency, such as the ion-pairing agent concentration, ionic strength, pH, volume of Triton X-100, extraction temperature, and incubation time have been fully evaluated and optimized. Water-soluble vitamins were successfully extracted by 1-heptanesulfonic acid sodium salt (0.2% w/v) as ion-pairing agent with Triton X-100 (4% w/v) as surfactant phase at 50°C for 10 min. The calibration curves showed good linearity (r(2) > 0.9916) and precision in the concentration ranges of 1-50 μg/mL for thiamine and niacinamide, 5-100 μg/mL for pyridoxine, and 0.5-20 μg/mL for riboflavin. The recoveries were in the range of 78.0-88.0% with relative standard deviations ranging from 6.2 to 8.2%. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Arain, Salma Aslam; Kazi, Tasneem G.; Afridi, Hassan Imran; Abbasi, Abdul Rasool; Panhwar, Abdul Haleem; Naeemullah; Shanker, Bhawani; Arain, Mohammad Balal
2014-12-01
An efficient, innovative preconcentration method, dual-cloud point extraction (d-CPE) has been developed for the extraction and preconcentration of copper (Cu2+) in serum samples of different viral hepatitis patients prior to couple with flame atomic absorption spectrometry (FAAS). The d-CPE procedure was based on forming complexes of elemental ions with complexing reagent 1-(2-pyridylazo)-2-naphthol (PAN), and subsequent entrapping the complexes in nonionic surfactant (Triton X-114). Then the surfactant rich phase containing the metal complexes was treated with aqueous nitric acid solution, and metal ions were back extracted into the aqueous phase, as second cloud point extraction stage, and finally determined by flame atomic absorption spectrometry using conventional nebulization. The multivariate strategy was applied to estimate the optimum values of experimental variables for the recovery of Cu2+ using d-CPE. In optimum experimental conditions, the limit of detection and the enrichment factor were 0.046 μg L-1 and 78, respectively. The validity and accuracy of proposed method were checked by analysis of Cu2+ in certified sample of serum (CRM) by d-CPE and conventional CPE procedure on same CRM. The proposed method was successfully applied to the determination of Cu2+ in serum samples of different viral hepatitis patients and healthy controls.
Investigation of cloud point extraction for the analysis of metallic nanoparticles in a soil matrix
Hadri, Hind El; Hackley, Vincent A.
2017-01-01
The characterization of manufactured nanoparticles (MNPs) in environmental samples is necessary to assess their behavior, fate and potential toxicity. Several techniques are available, but the limit of detection (LOD) is often too high for environmentally relevant concentrations. Therefore, pre-concentration of MNPs is an important component in the sample preparation step, in order to apply analytical tools with a LOD higher than the ng kg−1 level. The objective of this study was to explore cloud point extraction (CPE) as a viable method to pre-concentrate gold nanoparticles (AuNPs), as a model MNP, spiked into a soil extract matrix. To that end, different extraction conditions and surface coatings were evaluated in a simple matrix. The CPE method was then applied to soil extract samples spiked with AuNPs. Total gold, determined by inductively coupled plasma mass spectrometry (ICP-MS) following acid digestion, yielded a recovery greater than 90 %. The first known application of single particle ICP-MS and asymmetric flow field-flow fractionation to evaluate the preservation of the AuNP physical state following CPE extraction is demonstrated. PMID:28507763
Wen, Yingying; Li, Jinhua; Liu, Junshen; Lu, Wenhui; Ma, Jiping; Chen, Lingxin
2013-07-01
A dual cloud point extraction (dCPE) off-line enrichment procedure coupled with a hydrodynamic-electrokinetic two-step injection online enrichment technique was successfully developed for simultaneous preconcentration of trace phenolic estrogens (hexestrol, dienestrol, and diethylstilbestrol) in water samples followed by micellar electrokinetic chromatography (MEKC) analysis. Several parameters affecting the extraction and online injection conditions were optimized. Under optimal dCPE-two-step injection-MEKC conditions, detection limits of 7.9-8.9 ng/mL and good linearity in the range from 0.05 to 5 μg/mL with correlation coefficients R(2) ≥ 0.9990 were achieved. Satisfactory recoveries ranging from 83 to 108% were obtained with lake and tap water spiked at 0.1 and 0.5 μg/mL, respectively, with relative standard deviations (n = 6) of 1.3-3.1%. This method was demonstrated to be convenient, rapid, cost-effective, and environmentally benign, and could be used as an alternative to existing methods for analyzing trace residues of phenolic estrogens in water samples.
LIDAR Point Cloud Data Extraction and Establishment of 3D Modeling of Buildings
NASA Astrophysics Data System (ADS)
Zhang, Yujuan; Li, Xiuhai; Wang, Qiang; Liu, Jiang; Liang, Xin; Li, Dan; Ni, Chundi; Liu, Yan
2018-01-01
This paper takes the method of Shepard’s to deal with the original LIDAR point clouds data, and generate regular grid data DSM, filters the ground point cloud and non ground point cloud through double least square method, and obtains the rules of DSM. By using region growing method for the segmentation of DSM rules, the removal of non building point cloud, obtaining the building point cloud information. Uses the Canny operator to extract the image segmentation is needed after the edges of the building, uses Hough transform line detection to extract the edges of buildings rules of operation based on the smooth and uniform. At last, uses E3De3 software to establish the 3D model of buildings.
Joint classification and contour extraction of large 3D point clouds
NASA Astrophysics Data System (ADS)
Hackel, Timo; Wegner, Jan D.; Schindler, Konrad
2017-08-01
We present an effective and efficient method for point-wise semantic classification and extraction of object contours of large-scale 3D point clouds. What makes point cloud interpretation challenging is the sheer size of several millions of points per scan and the non-grid, sparse, and uneven distribution of points. Standard image processing tools like texture filters, for example, cannot handle such data efficiently, which calls for dedicated point cloud labeling methods. It turns out that one of the major drivers for efficient computation and handling of strong variations in point density, is a careful formulation of per-point neighborhoods at multiple scales. This allows, both, to define an expressive feature set and to extract topologically meaningful object contours. Semantic classification and contour extraction are interlaced problems. Point-wise semantic classification enables extracting a meaningful candidate set of contour points while contours help generating a rich feature representation that benefits point-wise classification. These methods are tailored to have fast run time and small memory footprint for processing large-scale, unstructured, and inhomogeneous point clouds, while still achieving high classification accuracy. We evaluate our methods on the semantic3d.net benchmark for terrestrial laser scans with >109 points.
Arain, Salma Aslam; Kazi, Tasneem G; Afridi, Hassan Imran; Abbasi, Abdul Rasool; Panhwar, Abdul Haleem; Naeemullah; Shanker, Bhawani; Arain, Mohammad Balal
2014-12-10
An efficient, innovative preconcentration method, dual-cloud point extraction (d-CPE) has been developed for the extraction and preconcentration of copper (Cu(2+)) in serum samples of different viral hepatitis patients prior to couple with flame atomic absorption spectrometry (FAAS). The d-CPE procedure was based on forming complexes of elemental ions with complexing reagent 1-(2-pyridylazo)-2-naphthol (PAN), and subsequent entrapping the complexes in nonionic surfactant (Triton X-114). Then the surfactant rich phase containing the metal complexes was treated with aqueous nitric acid solution, and metal ions were back extracted into the aqueous phase, as second cloud point extraction stage, and finally determined by flame atomic absorption spectrometry using conventional nebulization. The multivariate strategy was applied to estimate the optimum values of experimental variables for the recovery of Cu(2+) using d-CPE. In optimum experimental conditions, the limit of detection and the enrichment factor were 0.046μgL(-1) and 78, respectively. The validity and accuracy of proposed method were checked by analysis of Cu(2+) in certified sample of serum (CRM) by d-CPE and conventional CPE procedure on same CRM. The proposed method was successfully applied to the determination of Cu(2+) in serum samples of different viral hepatitis patients and healthy controls. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Poux, F.; Neuville, R.; Billen, R.
2017-08-01
Reasoning from information extraction given by point cloud data mining allows contextual adaptation and fast decision making. However, to achieve this perceptive level, a point cloud must be semantically rich, retaining relevant information for the end user. This paper presents an automatic knowledge-based method for pre-processing multi-sensory data and classifying a hybrid point cloud from both terrestrial laser scanning and dense image matching. Using 18 features including sensor's biased data, each tessera in the high-density point cloud from the 3D captured complex mosaics of Germigny-des-prés (France) is segmented via a colour multi-scale abstraction-based featuring extracting connectivity. A 2D surface and outline polygon of each tessera is generated by a RANSAC plane extraction and convex hull fitting. Knowledge is then used to classify every tesserae based on their size, surface, shape, material properties and their neighbour's class. The detection and semantic enrichment method shows promising results of 94% correct semantization, a first step toward the creation of an archaeological smart point cloud.
Kori, Shivpoojan; Parmar, Ankush; Goyal, Jony; Sharma, Shweta
2018-02-01
A procedure for the determination of Eszopiclone (ESZ) from complex matrices i.e. in vitro (spiked matrices), as well as in vivo (mice model) was developed using cloud point extraction coupled with microwave-assisted back-extraction (CPE-MABE). Analytical measurements have been carried using UV-Visible, HPLC and MS techniques. The proposed method has been validated according to ICH guidelines and legitimate reproducible and reliability of protocol is assessed through intraday and inter-day precision <3.61% and <4.70%, respectively. Limit of detection has been obtained as 0.083μg/mL and 0.472μg/mL respectively, for HPLC and UV-Visible techniques, corresponding to assessed linearity range. The coaservate phase in CPE was back extracted under microwaves exposure, with isooctane at pre-concentration factor ~50 when 5mL of sample solution was pre-concentrated to 0.1mL. Under optimized conditions i.e. Aqueous-Triton X-114 4% (w/v), pH4.0, NaCl 4% (w/v) and equilibrium temperature of 45°C for 20min, average extraction recovery has been obtained between 89.8 and 99.2% and 84.0-99.2% from UV-Visible and HPLC analysis, respectively. The method has been successfully applied to the pharmacokinetic estimation (post intraperitoneal administration) of ESZ in mice. MS analysis precisely depicted the presence of active N‑desmethyl zopiclone in impales as well as in mice plasma. Copyright © 2018 Elsevier B.V. All rights reserved.
Filik, Hayati; Sener, Izzet; Cekiç, Sema Demirci; Kiliç, Emine; Apak, Reşat
2006-06-01
In the present paper, conventional spectrophotometry in conjunction with cloud point extraction-preconcentration were investigated as alternative methods for paracetamol (PCT) assay in urine samples. Cloud point extraction (CPE) was employed for the preconcentration of p-aminophenol (PAP) prior to spectrophotometric determination using the non-ionic surfactant Triton X-114 (TX-114) as an extractant. The developed methods were based on acidic hydrolysis of PCT to PAP, which reacted at room temperature with 25,26,27,28-tetrahydroxycalix[4]arene (CAL4) in the presence of an oxidant (KIO(4)) to form an blue colored product. The PAP-CAL4 blue dye formed was subsequently entrapped in the surfactant micelles of Triton X-114. Cloud point phase separation with the aid of Triton X-114 induced by addition of Na(2)SO(4) solution was performed at room temperature as an advantage over other CPE assays requiring elevated temperatures. The 580 nm-absorbance maximum of the formed product was shifted bathochromically to 590 nm with CPE. The working range of 1.5-12 microg ml(-1) achieved by conventional spectrophotometry was reduced down to 0.14-1.5 microg ml(-1) with cloud point extraction, which was lower than those of most literature flow-through assays that also suffer from nonspecific absorption in the UV region. By preconcentrating 10 ml sample solution, a detection limit as low as 40.0 ng ml(-1) was obtained after a single-step extraction, achieving a preconcentration factor of 10. The stoichiometric composition of the dye was found to be 1 : 4 (PAP : CAL4). The impact of a number of parameters such as concentrations of CAL4, KIO(4), Triton X-100 (TX-100), and TX-114, extraction temperature, time periods for incubation and centrifugation, and sample volume were investigated in detail. The determination of PAP in the presence of paracetamol in micellar systems under these conditions is limited. The established procedures were successfully adopted for the determination of PCT in urine samples. Since the drug is rapidly absorbed and excreted largely in urine and its high doses have been associated with lethal hepatic necrosis and renal failure, development of a rapid, sensitive and selective assay of PCT is of vital importance for fast urinary screening and antidote administration before applying more sophisticated, but costly and laborious hyphenated instrumental techniques of HPLC-SPE-NMR-MS.
Towards 3D Matching of Point Clouds Derived from Oblique and Nadir Airborne Imagery
NASA Astrophysics Data System (ADS)
Zhang, Ming
Because of the low-expense high-efficient image collection process and the rich 3D and texture information presented in the images, a combined use of 2D airborne nadir and oblique images to reconstruct 3D geometric scene has a promising market for future commercial usage like urban planning or first responders. The methodology introduced in this thesis provides a feasible way towards fully automated 3D city modeling from oblique and nadir airborne imagery. In this thesis, the difficulty of matching 2D images with large disparity is avoided by grouping the images first and applying the 3D registration afterward. The procedure starts with the extraction of point clouds using a modified version of the RIT 3D Extraction Workflow. Then the point clouds are refined by noise removal and surface smoothing processes. Since the point clouds extracted from different image groups use independent coordinate systems, there are translation, rotation and scale differences existing. To figure out these differences, 3D keypoints and their features are extracted. For each pair of point clouds, an initial alignment and a more accurate registration are applied in succession. The final transform matrix presents the parameters describing the translation, rotation and scale requirements. The methodology presented in the thesis has been shown to behave well for test data. The robustness of this method is discussed by adding artificial noise to the test data. For Pictometry oblique aerial imagery, the initial alignment provides a rough alignment result, which contains a larger offset compared to that of test data because of the low quality of the point clouds themselves, but it can be further refined through the final optimization. The accuracy of the final registration result is evaluated by comparing it to the result obtained from manual selection of matched points. Using the method introduced, point clouds extracted from different image groups could be combined with each other to build a more complete point cloud, or be used as a complement to existing point clouds extracted from other sources. This research will both improve the state of the art of 3D city modeling and inspire new ideas in related fields.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warner-Schmid, D.; Hoshi, Suwaru; Armstrong, D.W.
Aqueous solutions of nonionic surfactants are known to undergo phase separations at elevated temperatures. This phenomenon is known as clouding,' and the temperature at which it occurs is refereed to as the cloud point. Permethylhydroxypropyl-[beta]-cyclodextrin (PMHP-[beta]-CD) was synthesized and aqueous solutions containing it were found to undergo similar cloud-point behavior. Factors that affect the phase separation of PMHP-[beta]-CD were investigated. Subsequently, the cloud-point extractions of several aromatic compounds (i.e., acetanilide, aniline, 2,2[prime]-dihydroxybiphenyl, N-methylaniline, 2-naphthol, o-nitroaniline, m-nitroaniline, p-nitroaniline, nitrobenzene, o-nitrophenol, m-nitrophenol, p-nitrophenol, 4-phenazophenol, 3-phenylphenol, and 2-phenylbenzimidazole) from dilute aqueous solution were evaluated. Although the extraction efficiency of the compounds varied, mostmore » can be quantitatively extracted if sufficient PMHP-[beta]-CD is used. For those few compounds that are not extracted (e.g., o-nitroacetanilide), the cloud-point procedure may be an effective one-step isolation or purification method. 18 refs., 2 figs., 3 tabs.« less
The Segmentation of Point Clouds with K-Means and ANN (artifical Neural Network)
NASA Astrophysics Data System (ADS)
Kuçak, R. A.; Özdemir, E.; Erol, S.
2017-05-01
Segmentation of point clouds is recently used in many Geomatics Engineering applications such as the building extraction in urban areas, Digital Terrain Model (DTM) generation and the road or urban furniture extraction. Segmentation is a process of dividing point clouds according to their special characteristic layers. The present paper discusses K-means and self-organizing map (SOM) which is a type of ANN (Artificial Neural Network) segmentation algorithm which treats the segmentation of point cloud. The point clouds which generate with photogrammetric method and Terrestrial Lidar System (TLS) were segmented according to surface normal, intensity and curvature. Thus, the results were evaluated. LIDAR (Light Detection and Ranging) and Photogrammetry are commonly used to obtain point clouds in many remote sensing and geodesy applications. By photogrammetric method or LIDAR method, it is possible to obtain point cloud from terrestrial or airborne systems. In this study, the measurements were made with a Leica C10 laser scanner in LIDAR method. In photogrammetric method, the point cloud was obtained from photographs taken from the ground with a 13 MP non-metric camera.
Scan Line Based Road Marking Extraction from Mobile LiDAR Point Clouds.
Yan, Li; Liu, Hua; Tan, Junxiang; Li, Zan; Xie, Hong; Chen, Changjun
2016-06-17
Mobile Mapping Technology (MMT) is one of the most important 3D spatial data acquisition technologies. The state-of-the-art mobile mapping systems, equipped with laser scanners and named Mobile LiDAR Scanning (MLS) systems, have been widely used in a variety of areas, especially in road mapping and road inventory. With the commercialization of Advanced Driving Assistance Systems (ADASs) and self-driving technology, there will be a great demand for lane-level detailed 3D maps, and MLS is the most promising technology to generate such lane-level detailed 3D maps. Road markings and road edges are necessary information in creating such lane-level detailed 3D maps. This paper proposes a scan line based method to extract road markings from mobile LiDAR point clouds in three steps: (1) preprocessing; (2) road points extraction; (3) road markings extraction and refinement. In preprocessing step, the isolated LiDAR points in the air are removed from the LiDAR point clouds and the point clouds are organized into scan lines. In the road points extraction step, seed road points are first extracted by Height Difference (HD) between trajectory data and road surface, then full road points are extracted from the point clouds by moving least squares line fitting. In the road markings extraction and refinement step, the intensity values of road points in a scan line are first smoothed by a dynamic window median filter to suppress intensity noises, then road markings are extracted by Edge Detection and Edge Constraint (EDEC) method, and the Fake Road Marking Points (FRMPs) are eliminated from the detected road markings by segment and dimensionality feature-based refinement. The performance of the proposed method is evaluated by three data samples and the experiment results indicate that road points are well extracted from MLS data and road markings are well extracted from road points by the applied method. A quantitative study shows that the proposed method achieves an average completeness, correctness, and F-measure of 0.96, 0.93, and 0.94, respectively. The time complexity analysis shows that the scan line based road markings extraction method proposed in this paper provides a promising alternative for offline road markings extraction from MLS data.
Line segment extraction for large scale unorganized point clouds
NASA Astrophysics Data System (ADS)
Lin, Yangbin; Wang, Cheng; Cheng, Jun; Chen, Bili; Jia, Fukai; Chen, Zhonggui; Li, Jonathan
2015-04-01
Line segment detection in images is already a well-investigated topic, although it has received considerably less attention in 3D point clouds. Benefiting from current LiDAR devices, large-scale point clouds are becoming increasingly common. Most human-made objects have flat surfaces. Line segments that occur where pairs of planes intersect give important information regarding the geometric content of point clouds, which is especially useful for automatic building reconstruction and segmentation. This paper proposes a novel method that is capable of accurately extracting plane intersection line segments from large-scale raw scan points. The 3D line-support region, namely, a point set near a straight linear structure, is extracted simultaneously. The 3D line-support region is fitted by our Line-Segment-Half-Planes (LSHP) structure, which provides a geometric constraint for a line segment, making the line segment more reliable and accurate. We demonstrate our method on the point clouds of large-scale, complex, real-world scenes acquired by LiDAR devices. We also demonstrate the application of 3D line-support regions and their LSHP structures on urban scene abstraction.
Favre-Réguillon, Alain; Draye, Micheline; Lebuzit, Gérard; Thomas, Sylvie; Foos, Jacques; Cote, Gérard; Guy, Alain
2004-06-17
Cloud point extraction (CPE) was used to extract and separate lanthanum(III) and gadolinium(III) nitrate from an aqueous solution. The methodology used is based on the formation of lanthanide(III)-8-hydroxyquinoline (8-HQ) complexes soluble in a micellar phase of non-ionic surfactant. The lanthanide(III) complexes are then extracted into the surfactant-rich phase at a temperature above the cloud point temperature (CPT). The structure of the non-ionic surfactant, and the chelating agent-metal molar ratio are identified as factors determining the extraction efficiency and selectivity. In an aqueous solution containing equimolar concentrations of La(III) and Gd(III), extraction efficiency for Gd(III) can reach 96% with a Gd(III)/La(III) selectivity higher than 30 using Triton X-114. Under those conditions, a Gd(III) decontamination factor of 50 is obtained.
Nong, Chunyan; Niu, Zongliang; Li, Pengyao; Wang, Chunping; Li, Wanyu; Wen, Yingying
2017-04-15
Dual-cloud point extraction (dCPE) was successfully developed for simultaneous extraction of trace sulfonamides (SAs) including sulfamerazine (SMZ), sulfadoxin (SDX), sulfathiazole (STZ) in urine and water samples. Several parameters affecting the extraction were optimized, such as sample pH, concentration of Triton X-114, extraction temperature and time, centrifugation rate and time, back-extraction solution pH, back-extraction temperature and time, back-extraction centrifugation rate and time. High performance liquid chromatography (HPLC) was applied for the SAs analysis. Under the optimum extraction and detection conditions, successful separation of the SAs was achieved within 9min, and excellent analytical performances were attained. Good linear relationships (R 2 ≥0.9990) between peak area and concentration for SMZ and STZ were optimized from 0.02 to 10μg/mL, for SDX from 0.01 to 10μg/mL. Detection limits of 3.0-6.2ng/mL were achieved. Satisfactory recoveries ranging from 85 to 108% were determined with urine, lake and tap water spiked at 0.2, 0.5 and 1μg/mL, respectively, with relative standard deviations (RSDs, n=6) of 1.5-7.7%. This method was demonstrated to be convenient, rapid, cost-effective and environmentally benign, and could be used as an alternative tool to existing methods for analysing trace residues of SAs in urine and water samples. Copyright © 2017 Elsevier B.V. All rights reserved.
3D local feature BKD to extract road information from mobile laser scanning point clouds
NASA Astrophysics Data System (ADS)
Yang, Bisheng; Liu, Yuan; Dong, Zhen; Liang, Fuxun; Li, Bijun; Peng, Xiangyang
2017-08-01
Extracting road information from point clouds obtained through mobile laser scanning (MLS) is essential for autonomous vehicle navigation, and has hence garnered a growing amount of research interest in recent years. However, the performance of such systems is seriously affected due to varying point density and noise. This paper proposes a novel three-dimensional (3D) local feature called the binary kernel descriptor (BKD) to extract road information from MLS point clouds. The BKD consists of Gaussian kernel density estimation and binarization components to encode the shape and intensity information of the 3D point clouds that are fed to a random forest classifier to extract curbs and markings on the road. These are then used to derive road information, such as the number of lanes, the lane width, and intersections. In experiments, the precision and recall of the proposed feature for the detection of curbs and road markings on an urban dataset and a highway dataset were as high as 90%, thus showing that the BKD is accurate and robust against varying point density and noise.
Arain, Salma Aslam; Kazi, Tasneem Gul; Afridi, Hassan Imran; Arain, Mariam Shahzadi; Panhwar, Abdul Haleem; Khan, Naeemullah; Baig, Jameel Ahmed; Shah, Faheem
2016-04-01
A simple and rapid dispersive liquid-liquid microextraction procedure based on ionic liquid assisted microemulsion (IL-µE-DLLME) combined with cloud point extraction has been developed for preconcentration copper (Cu(2+)) in drinking water and serum samples of adolescent female hepatitits C (HCV) patients. In this method a ternary system was developed to form microemulsion (µE) by phase inversion method (PIM), using ionic liquid, 1-butyl-3-methylimidazolium hexafluorophosphate ([C4mim][PF6]) and nonionic surfactant, TX-100 (as a stabilizer in aqueous media). The Ionic liquid microemulsion (IL-µE) was evaluated through visual assessment, optical light microscope and spectrophotometrically. The Cu(2+) in real water and aqueous acid digested serum samples were complexed with 8-hydroxyquinoline (oxine) and extracted into IL-µE medium. The phase separation of stable IL-µE was carried out by the micellar cloud point extraction approach. The influence of of different parameters such as pH, oxine concentration, centrifugation time and rate were investigated. At optimized experimental conditions, the limit of detection and enhancement factor were found to be 0.132 µg/L and 70 respectively, with relative standard deviation <5%. In order to validate the developed method, certified reference materials (SLRS-4 Riverine water) and human serum (Sero-M10181) were analyzed. The resulting data indicated a non-significant difference in obtained and certified values of Cu(2+). The developed procedure was successfully applied for the preconcentration and determination of trace levels of Cu(2+) in environmental and biological samples. Copyright © 2015 Elsevier Inc. All rights reserved.
Gürkan, Ramazan; Korkmaz, Sema; Altunay, Nail
2016-08-01
A new ultrasonic-thermostatic-assisted cloud point extraction procedure (UTA-CPE) was developed for preconcentration at the trace levels of vanadium (V) and molybdenum (Mo) in milk, vegetables and foodstuffs prior to determination via flame atomic absorption spectrometry (FAAS). The method is based on the ion-association of stable anionic oxalate complexes of V(V) and Mo(VI) with [9-(diethylamino)benzo[a]phenoxazin-5-ylidene]azanium; sulfate (Nile blue A) at pH 4.5, and then extraction of the formed ion-association complexes into micellar phase of polyoxyethylene(7.5)nonylphenyl ether (PONPE 7.5). The UTA-CPE is greatly simplified and accelerated compared to traditional cloud point extraction (CPE). The analytical parameters optimized are solution pH, the concentrations of complexing reagents (oxalate and Nile blue A), the PONPE 7.5 concentration, electrolyte concentration, sample volume, temperature and ultrasonic power. Under the optimum conditions, the calibration curves for Mo(VI) and V(V) are obtained in the concentration range of 3-340µgL(-1) and 5-250µgL(-1) with high sensitivity enhancement factors (EFs) of 145 and 115, respectively. The limits of detection (LODs) for Mo(VI) and V(V) are 0.86 and 1.55µgL(-1), respectively. The proposed method demonstrated good performances such as relative standard deviations (as RSD %) (≤3.5%) and spiked recoveries (95.7-102.3%). The accuracy of the method was assessed by analysis of two standard reference materials (SRMs) and recoveries of spiked solutions. The method was successfully applied into the determination of trace amounts of Mo(VI) and V(V) in milk, vegetables and foodstuffs with satisfactory results. Copyright © 2016 Elsevier B.V. All rights reserved.
Terrain Extraction by Integrating Terrestrial Laser Scanner Data and Spectral Information
NASA Astrophysics Data System (ADS)
Lau, C. L.; Halim, S.; Zulkepli, M.; Azwan, A. M.; Tang, W. L.; Chong, A. K.
2015-10-01
The extraction of true terrain points from unstructured laser point cloud data is an important process in order to produce an accurate digital terrain model (DTM). However, most of these spatial filtering methods just utilizing the geometrical data to discriminate the terrain points from nonterrain points. The point cloud filtering method also can be improved by using the spectral information available with some scanners. Therefore, the objective of this study is to investigate the effectiveness of using the three-channel (red, green and blue) of the colour image captured from built-in digital camera which is available in some Terrestrial Laser Scanner (TLS) for terrain extraction. In this study, the data acquisition was conducted at a mini replica landscape in Universiti Teknologi Malaysia (UTM), Skudai campus using Leica ScanStation C10. The spectral information of the coloured point clouds from selected sample classes are extracted for spectral analysis. The coloured point clouds which within the corresponding preset spectral threshold are identified as that specific feature point from the dataset. This process of terrain extraction is done through using developed Matlab coding. Result demonstrates that a higher spectral resolution passive image is required in order to improve the output. This is because low quality of the colour images captured by the sensor contributes to the low separability in spectral reflectance. In conclusion, this study shows that, spectral information is capable to be used as a parameter for terrain extraction.
NASA Astrophysics Data System (ADS)
Duarte, João; Gonçalves, Gil; Duarte, Diogo; Figueiredo, Fernando; Mira, Maria
2015-04-01
Photogrammetric Unmanned Aerial Vehicles (UAVs) and Terrestrial Laser Scanners (TLS) are two emerging technologies that allows the production of dense 3D point clouds of the sensed topographic surfaces. Although image-based stereo-photogrammetric point clouds could not, in general, compete on geometric quality over TLS point clouds, fully automated mapping solutions based on ultra-light UAVs (or drones) have recently become commercially available at very reasonable accuracy and cost for engineering and geological applications. The purpose of this paper is to compare the two point clouds generated by these two technologies, in order to automatize the manual process tasks commonly used to detect and represent the attitude of discontinuities (Stereographic projection: Schmidt net - Equal area). To avoid the difficulties of access and guarantee the data survey security conditions, this fundamental step in all geological/geotechnical studies, applied to the extractive industry and engineering works, has to be replaced by a more expeditious and reliable methodology. This methodology will allow, in a more actuated clear way, give answers to the needs of evaluation of rock masses, by mapping the structures present, which will reduce considerably the associated risks (investment, structures dimensioning, security, etc.). A case study of a dolerite outcrop locate in the center of Portugal (the dolerite outcrop is situated in the volcanic complex of Serra de Todo-o-Mundo, Casais Gaiola, intruded in Jurassic sandstones) will be used to assess this methodology. The results obtained show that the 3D point cloud produced by the Photogrammetric UAV platform has the appropriate geometric quality for extracting the parameters that define the discontinuities of the dolerite outcrops. Although, they are comparable to the manual extracted parameters, their quality is inferior to parameters extracted from the TLS point cloud.
Zhu, Hai-Zhen; Liu, Wei; Mao, Jian-Wei; Yang, Ming-Min
2008-04-28
4-Amino-4'-nitrobiphenyl, which is formed by catalytic effect of trichlorfon on sodium perborate oxidizing benzidine, is extracted with a cloud point extraction method and then detected using a high performance liquid chromatography with ultraviolet detection (HPLC-UV). Under the optimum experimental conditions, there was a linear relationship between trichlorfon in the concentration range of 0.01-0.2 mgL(-1) and the peak areas of 4-amino-4'-nitrobiphenyl (r=0.996). Limit of detection was 2.0 microgL(-1), recoveries of spiked water and cabbage samples ranged between 95.4-103 and 85.2-91.2%, respectively. It was proved that the cloud point extraction (CPE) method was simple, cheap, and environment friendly than extraction with organic solvents and had more effective extraction yield.
Optimal Information Extraction of Laser Scanning Dataset by Scale-Adaptive Reduction
NASA Astrophysics Data System (ADS)
Zang, Y.; Yang, B.
2018-04-01
3D laser technology is widely used to collocate the surface information of object. For various applications, we need to extract a good perceptual quality point cloud from the scanned points. To solve the problem, most of existing methods extract important points based on a fixed scale. However, geometric features of 3D object come from various geometric scales. We propose a multi-scale construction method based on radial basis function. For each scale, important points are extracted from the point cloud based on their importance. We apply a perception metric Just-Noticeable-Difference to measure degradation of each geometric scale. Finally, scale-adaptive optimal information extraction is realized. Experiments are undertaken to evaluate the effective of the proposed method, suggesting a reliable solution for optimal information extraction of object.
Hierarchical extraction of urban objects from mobile laser scanning data
NASA Astrophysics Data System (ADS)
Yang, Bisheng; Dong, Zhen; Zhao, Gang; Dai, Wenxia
2015-01-01
Point clouds collected in urban scenes contain a huge number of points (e.g., billions), numerous objects with significant size variability, complex and incomplete structures, and variable point densities, raising great challenges for the automated extraction of urban objects in the field of photogrammetry, computer vision, and robotics. This paper addresses these challenges by proposing an automated method to extract urban objects robustly and efficiently. The proposed method generates multi-scale supervoxels from 3D point clouds using the point attributes (e.g., colors, intensities) and spatial distances between points, and then segments the supervoxels rather than individual points by combining graph based segmentation with multiple cues (e.g., principal direction, colors) of the supervoxels. The proposed method defines a set of rules for merging segments into meaningful units according to types of urban objects and forms the semantic knowledge of urban objects for the classification of objects. Finally, the proposed method extracts and classifies urban objects in a hierarchical order ranked by the saliency of the segments. Experiments show that the proposed method is efficient and robust for extracting buildings, streetlamps, trees, telegraph poles, traffic signs, cars, and enclosures from mobile laser scanning (MLS) point clouds, with an overall accuracy of 92.3%.
Scan Line Based Road Marking Extraction from Mobile LiDAR Point Clouds†
Yan, Li; Liu, Hua; Tan, Junxiang; Li, Zan; Xie, Hong; Chen, Changjun
2016-01-01
Mobile Mapping Technology (MMT) is one of the most important 3D spatial data acquisition technologies. The state-of-the-art mobile mapping systems, equipped with laser scanners and named Mobile LiDAR Scanning (MLS) systems, have been widely used in a variety of areas, especially in road mapping and road inventory. With the commercialization of Advanced Driving Assistance Systems (ADASs) and self-driving technology, there will be a great demand for lane-level detailed 3D maps, and MLS is the most promising technology to generate such lane-level detailed 3D maps. Road markings and road edges are necessary information in creating such lane-level detailed 3D maps. This paper proposes a scan line based method to extract road markings from mobile LiDAR point clouds in three steps: (1) preprocessing; (2) road points extraction; (3) road markings extraction and refinement. In preprocessing step, the isolated LiDAR points in the air are removed from the LiDAR point clouds and the point clouds are organized into scan lines. In the road points extraction step, seed road points are first extracted by Height Difference (HD) between trajectory data and road surface, then full road points are extracted from the point clouds by moving least squares line fitting. In the road markings extraction and refinement step, the intensity values of road points in a scan line are first smoothed by a dynamic window median filter to suppress intensity noises, then road markings are extracted by Edge Detection and Edge Constraint (EDEC) method, and the Fake Road Marking Points (FRMPs) are eliminated from the detected road markings by segment and dimensionality feature-based refinement. The performance of the proposed method is evaluated by three data samples and the experiment results indicate that road points are well extracted from MLS data and road markings are well extracted from road points by the applied method. A quantitative study shows that the proposed method achieves an average completeness, correctness, and F-measure of 0.96, 0.93, and 0.94, respectively. The time complexity analysis shows that the scan line based road markings extraction method proposed in this paper provides a promising alternative for offline road markings extraction from MLS data. PMID:27322279
Yamaki, Regina Terumi; Nunes, Luana Sena; de Oliveira, Hygor Rodrigues; Araújo, André S; Bezerra, Marcos Almeida; Lemos, Valfredo Azevedo
2011-01-01
The synthesis and characterization of the reagent 2-(5-bromothiazolylazo)-4-chlorophenol and its application in the development of a preconcentration procedure for cobalt determination using flame atomic absorption spectrometry after cloud point extraction is presented. This procedure is based on cobalt complexing and entrapment of the metal chelates into micelles of a surfactant-rich phase of Triton X-114. The preconcentration procedure was optimized by using a response surface methodology through the application of the Box-Behnken matrix. Under optimum conditions, the procedure determined the presence of cobalt with an LOD of 2.8 microg/L and LOQ of 9.3 microg/L. The enrichment factor obtained was 25. The precision was evaluated as the RSD, which was 5.5% for 10 microg/L cobalt and 6.9% for 30 microg/L. The accuracy of the procedure was assessed by comparing the results with those found using inductively coupled plasma-optical emission spectrometry. After validation, the procedure was applied to the determination of cobalt in pharmaceutical preparation samples containing cobalamin (vitamin B12).
Liu, Jing-fu; Liu, Rui; Yin, Yong-guang; Jiang, Gui-bin
2009-03-28
Capable of preserving the sizes and shapes of nanomaterials during the phase transferring, Triton X-114 based cloud point extraction provides a general, simple, and cost-effective route for reversible concentration/separation or dispersion of various nanomaterials in the aqueous phase.
Automatic Extraction of Road Markings from Mobile Laser-Point Cloud Using Intensity Data
NASA Astrophysics Data System (ADS)
Yao, L.; Chen, Q.; Qin, C.; Wu, H.; Zhang, S.
2018-04-01
With the development of intelligent transportation, road's high precision information data has been widely applied in many fields. This paper proposes a concise and practical way to extract road marking information from point cloud data collected by mobile mapping system (MMS). The method contains three steps. Firstly, road surface is segmented through edge detection from scan lines. Then the intensity image is generated by inverse distance weighted (IDW) interpolation and the road marking is extracted by using adaptive threshold segmentation based on integral image without intensity calibration. Moreover, the noise is reduced by removing a small number of plaque pixels from binary image. Finally, point cloud mapped from binary image is clustered into marking objects according to Euclidean distance, and using a series of algorithms including template matching and feature attribute filtering for the classification of linear markings, arrow markings and guidelines. Through processing the point cloud data collected by RIEGL VUX-1 in case area, the results show that the F-score of marking extraction is 0.83, and the average classification rate is 0.9.
Vicente, Filipa A; Cardoso, Inês S; Sintra, Tânia E; Lemus, Jesus; Marques, Eduardo F; Ventura, Sónia P M; Coutinho, João A P
2017-09-21
Aqueous micellar two-phase systems (AMTPS) hold a large potential for cloud point extraction of biomolecules but are yet poorly studied and characterized, with few phase diagrams reported for these systems, hence limiting their use in extraction processes. This work reports a systematic investigation of the effect of different surface-active ionic liquids (SAILs)-covering a wide range of molecular properties-upon the clouding behavior of three nonionic Tergitol surfactants. Two different effects of the SAILs on the cloud points and mixed micelle size have been observed: ILs with a more hydrophilic character and lower critical packing parameter (CPP < 1 / 2 ) lead to the formation of smaller micelles and concomitantly increase the cloud points; in contrast, ILs with a more hydrophobic character and higher CPP (CPP ≥ 1) induce significant micellar growth and a decrease in the cloud points. The latter effect is particularly interesting and unusual for it was accepted that cloud point reduction is only induced by inorganic salts. The effects of nonionic surfactant concentration, SAIL concentration, pH, and micelle ζ potential are also studied and rationalized.
The registration of non-cooperative moving targets laser point cloud in different view point
NASA Astrophysics Data System (ADS)
Wang, Shuai; Sun, Huayan; Guo, Huichao
2018-01-01
Non-cooperative moving target multi-view cloud registration is the key technology of 3D reconstruction of laser threedimension imaging. The main problem is that the density changes greatly and noise exists under different acquisition conditions of point cloud. In this paper, firstly, the feature descriptor is used to find the most similar point cloud, and then based on the registration algorithm of region segmentation, the geometric structure of the point is extracted by the geometric similarity between point and point, The point cloud is divided into regions based on spectral clustering, feature descriptors are created for each region, searching to find the most similar regions in the most similar point of view cloud, and then aligning the pair of point clouds by aligning their minimum bounding boxes. Repeat the above steps again until registration of all point clouds is completed. Experiments show that this method is insensitive to the density of point clouds and performs well on the noise of laser three-dimension imaging.
Shi, Zhihong; Zhu, Xiaomin; Zhang, Hongyi
2007-08-15
In this paper, a micelle-mediated extraction and cloud point preconcentration method was developed for the determination of less hydrophobic compounds aesculin and aesculetin in Cortex fraxini by HPLC. Non-ionic surfactant oligoethylene glycol monoalkyl ether (Genapol X-080) was employed as the extraction solvent. Various experimental conditions were investigated to optimize the extraction process. Under optimum conditions, i.e. 5% Genapol X-080 (w/v), pH 1.0, liquid/solid ratio of 400:1 (ml/g), ultrasonic-assisted extraction for 30 min, the extraction yield reached the highest value. For the preconcentration of aesculin and aesculetin by cloud point extraction (CPE), the solution was incubated in a thermostatic water bath at 55 degrees C for 30 min, and 20% NaCl (w/v) was added to the solution to facilitate the phase separation and increase the preconcentration factor during the CPE process. Compared with methanol, which was used in Chinese Pharmacopoeia (2005 edition) for the extraction of C. fraxini, the extraction efficiency of 5% Genapol X-080 reached higher value.
Applications of 3D-EDGE Detection for ALS Point Cloud
NASA Astrophysics Data System (ADS)
Ni, H.; Lin, X. G.; Zhang, J. X.
2017-09-01
Edge detection has been one of the major issues in the field of remote sensing and photogrammetry. With the fast development of sensor technology of laser scanning system, dense point clouds have become increasingly common. Precious 3D-edges are able to be detected from these point clouds and a great deal of edge or feature line extraction methods have been proposed. Among these methods, an easy-to-use 3D-edge detection method, AGPN (Analyzing Geometric Properties of Neighborhoods), has been proposed. The AGPN method detects edges based on the analysis of geometric properties of a query point's neighbourhood. The AGPN method detects two kinds of 3D-edges, including boundary elements and fold edges, and it has many applications. This paper presents three applications of AGPN, i.e., 3D line segment extraction, ground points filtering, and ground breakline extraction. Experiments show that the utilization of AGPN method gives a straightforward solution to these applications.
Speciation and Determination of Low Concentration of Iron in Beer Samples by Cloud Point Extraction
ERIC Educational Resources Information Center
Khalafi, Lida; Doolittle, Pamela; Wright, John
2018-01-01
A laboratory experiment is described in which students determine the concentration and speciation of iron in beer samples using cloud point extraction and absorbance spectroscopy. The basis of determination is the complexation between iron and 2-(5-bromo-2- pyridylazo)-5-diethylaminophenol (5-Br-PADAP) as a colorimetric reagent in an aqueous…
Castor, José Martín Rosas; Portugal, Lindomar; Ferrer, Laura; Hinojosa-Reyes, Laura; Guzmán-Mar, Jorge Luis; Hernández-Ramírez, Aracely; Cerdà, Víctor
2016-08-01
A simple, inexpensive and rapid method was proposed for the determination of bioaccessible arsenic in corn and rice samples using an in vitro bioaccessibility assay. The method was based on the preconcentration of arsenic by cloud point extraction (CPE) using o,o-diethyldithiophosphate (DDTP) complex, which was generated from an in vitro extract using polyethylene glycol tert-octylphenyl ether (Triton X-114) as a surfactant prior to its detection by atomic fluorescence spectrometry with a hydride generation system (HG-AFS). The CPE method was optimized by a multivariate approach (two-level full factorial and Doehlert designs). A photo-oxidation step of the organic species prior to HG-AFS detection was included for the accurate quantification of the total As. The limit of detection was 1.34μgkg(-1) and 1.90μgkg(-1) for rice and corn samples, respectively. The accuracy of the method was confirmed by analyzing certified reference material ERM BC-211 (rice powder). The corn and rice samples that were analyzed showed a high bioaccessible arsenic content (72-88% and 54-96%, respectively), indicating a potential human health risk. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kassem, Mohammed A.; Amin, Alaa S.
2015-02-01
A new method to estimate rhodium in different samples at trace levels had been developed. Rhodium was complexed with 5-(4‧-nitro-2‧,6‧-dichlorophenylazo)-6-hydroxypyrimidine-2,4-dione (NDPHPD) as a complexing agent in an aqueous medium and concentrated by using Triton X-114 as a surfactant. The investigated rhodium complex was preconcentrated with cloud point extraction process using the nonionic surfactant Triton X-114 to extract rhodium complex from aqueous solutions at pH 4.75. After the phase separation at 50 °C, the surfactant-rich phase was heated again at 100 °C to remove water after decantation and the remaining phase was dissolved using 0.5 mL of acetonitrile. Under optimum conditions, the calibration curve was linear for the concentration range of 0.5-75 ng mL-1 and the detection limit was 0.15 ng mL-1 of the original solution. The enhancement factor of 500 was achieved for 250 mL samples containing the analyte and relative standard deviations were ⩽1.50%. The method was found to be highly selective, fairly sensitive, simple, rapid and economical and safely applied for rhodium determination in different complex materials such as synthetic mixture of alloys and environmental water samples.
Giebułtowicz, Joanna; Kojro, Grzegorz; Piotrowski, Roman; Kułakowski, Piotr; Wroczyński, Piotr
2016-09-05
Cloud-point extraction (CPE) is attracting increasing interest in a number of analytical fields, including bioanalysis, as it provides a simple, safe and environmentally-friendly sample preparation technique. However, there are only few reports on the application of this extraction technique in liquid chromatography-electrospray ionization-tandem mass spectrometry (LC-ESI-MS/MS) analysis. In this study, CPE was used for the isolation of antazoline from human plasma. To date, only one method of antazoline isolation from plasma exists-liquid-liquid extraction (LLE). The aim of this study was to prove the compatibility of CPE and LC-ESI-MS/MS and the applicability of CPE to the determination of antazoline in spiked human plasma and clinical samples. Antazoline was isolated from human plasma using Triton X-114 as a surfactant. Xylometazoline was used as an internal standard. NaOH concentration, temperature and Triton X-114 concentration were optimized. The absolute matrix effect was carefully investigated. All validation experiments met international acceptance criteria and no significant relative matrix effect was observed. The compatibility of CPE and LC-ESI-MS/MS was confirmed using clinical plasma samples. The determination of antazoline concentration in human plasma in the range 10-2500ngmL(-1) by the CPE method led to results which are equivalent to those obtained by the widely used liquid-liquid extraction method. Copyright © 2016 Elsevier B.V. All rights reserved.
Contextual Classification of Point Cloud Data by Exploiting Individual 3d Neigbourhoods
NASA Astrophysics Data System (ADS)
Weinmann, M.; Schmidt, A.; Mallet, C.; Hinz, S.; Rottensteiner, F.; Jutzi, B.
2015-03-01
The fully automated analysis of 3D point clouds is of great importance in photogrammetry, remote sensing and computer vision. For reliably extracting objects such as buildings, road inventory or vegetation, many approaches rely on the results of a point cloud classification, where each 3D point is assigned a respective semantic class label. Such an assignment, in turn, typically involves statistical methods for feature extraction and machine learning. Whereas the different components in the processing workflow have extensively, but separately been investigated in recent years, the respective connection by sharing the results of crucial tasks across all components has not yet been addressed. This connection not only encapsulates the interrelated issues of neighborhood selection and feature extraction, but also the issue of how to involve spatial context in the classification step. In this paper, we present a novel and generic approach for 3D scene analysis which relies on (i) individually optimized 3D neighborhoods for (ii) the extraction of distinctive geometric features and (iii) the contextual classification of point cloud data. For a labeled benchmark dataset, we demonstrate the beneficial impact of involving contextual information in the classification process and that using individual 3D neighborhoods of optimal size significantly increases the quality of the results for both pointwise and contextual classification.
Automatic extraction of discontinuity orientation from rock mass surface 3D point cloud
NASA Astrophysics Data System (ADS)
Chen, Jianqin; Zhu, Hehua; Li, Xiaojun
2016-10-01
This paper presents a new method for extracting discontinuity orientation automatically from rock mass surface 3D point cloud. The proposed method consists of four steps: (1) automatic grouping of discontinuity sets using an improved K-means clustering method, (2) discontinuity segmentation and optimization, (3) discontinuity plane fitting using Random Sample Consensus (RANSAC) method, and (4) coordinate transformation of discontinuity plane. The method is first validated by the point cloud of a small piece of a rock slope acquired by photogrammetry. The extracted discontinuity orientations are compared with measured ones in the field. Then it is applied to a publicly available LiDAR data of a road cut rock slope at Rockbench repository. The extracted discontinuity orientations are compared with the method proposed by Riquelme et al. (2014). The results show that the presented method is reliable and of high accuracy, and can meet the engineering needs.
NASA Astrophysics Data System (ADS)
Sturdivant, E. J.; Lentz, E. E.; Thieler, E. R.; Remsen, D.; Miner, S.
2016-12-01
Characterizing the vulnerability of coastal systems to storm events, chronic change and sea-level rise can be improved with high-resolution data that capture timely snapshots of biogeomorphology. Imagery acquired with unmanned aerial systems (UAS) coupled with structure from motion (SfM) photogrammetry can produce high-resolution topographic and visual reflectance datasets that rival or exceed lidar and orthoimagery. Here we compare SfM-derived data to lidar and visual imagery for their utility in a) geomorphic feature extraction and b) land cover classification for coastal habitat assessment. At a beach and wetland site on Cape Cod, Massachusetts, we used UAS to capture photographs over a 15-hectare coastal area with a resulting pixel resolution of 2.5 cm. We used standard SfM processing in Agisoft PhotoScan to produce an elevation point cloud, an orthomosaic, and a digital elevation model (DEM). The SfM-derived products have a horizontal uncertainty of +/- 2.8 cm. Using the point cloud in an extraction routine developed for lidar data, we determined the position of shorelines, dune crests, and dune toes. We used the output imagery and DEM to map land cover with a pixel-based supervised classification. The dense and highly precise SfM point cloud enabled extraction of geomorphic features with greater detail than with lidar. The feature positions are reported with near-continuous coverage and sub-meter accuracy. The orthomosaic image produced with SfM provides visual reflectance with higher resolution than those available from aerial flight surveys, which enables visual identification of small features and thus aids the training and validation of the automated classification. We find that the high-resolution and correspondingly high density of UAS data requires some simple modifications to existing measurement techniques and processing workflows, and that the types of data and the quality provided is equivalent to, and in some cases surpasses, that of data collected using other methods.
Pan, Tao; Deng, Tao; Zeng, Xinying; Dong, Wei; Yu, Shuijing
2016-01-01
The biological treatment of polycyclic aromatic hydrocarbons is an important issue. Most microbes have limited practical applications because of the poor bioavailability of polycyclic aromatic hydrocarbons. In this study, the extractive biodegradation of phenanthrene by Sphingomonas polyaromaticivorans was conducted by introducing the cloud point system. The cloud point system is composed of a mixture of (40 g/L) Brij 30 and Tergitol TMN-3, which are nonionic surfactants, in equal proportions. After phenanthrene degradation, a higher wet cell weight and lower phenanthrene residue were obtained in the cloud point system than that in the control system. According to the results of high-performance liquid chromatography, the residual phenanthrene preferred to partition from the dilute phase into the coacervate phase. The concentration of residual phenanthrene in the dilute phase (below 0.001 mg/L) is lower than its solubility in water (1.18 mg/L) after extractive biodegradation. Therefore, dilute phase detoxification was achieved, thus indicating that the dilute phase could be discharged without causing phenanthrene pollution. Bioavailability was assessed by introducing the apparent logP in the cloud point system. Apparent logP decreased significantly, thus indicating that the bioavailability of phenanthrene increased remarkably in the system. This study provides a potential application of biological treatment in water and soil contaminated by phenanthrene.
Extracting valley-ridge lines from point-cloud-based 3D fingerprint models.
Pang, Xufang; Song, Zhan; Xie, Wuyuan
2013-01-01
3D fingerprinting is an emerging technology with the distinct advantage of touchless operation. More important, 3D fingerprint models contain more biometric information than traditional 2D fingerprint images. However, current approaches to fingerprint feature detection usually must transform the 3D models to a 2D space through unwrapping or other methods, which might introduce distortions. A new approach directly extracts valley-ridge features from point-cloud-based 3D fingerprint models. It first applies the moving least-squares method to fit a local paraboloid surface and represent the local point cloud area. It then computes the local surface's curvatures and curvature tensors to facilitate detection of the potential valley and ridge points. The approach projects those points to the most likely valley-ridge lines, using statistical means such as covariance analysis and cross correlation. To finally extract the valley-ridge lines, it grows the polylines that approximate the projected feature points and removes the perturbations between the sampled points. Experiments with different 3D fingerprint models demonstrate this approach's feasibility and performance.
Point Cloud Oriented Shoulder Line Extraction in Loess Hilly Area
NASA Astrophysics Data System (ADS)
Min, Li; Xin, Yang; Liyang, Xiong
2016-06-01
Shoulder line is the significant line in hilly area of Loess Plateau in China, dividing the surface into positive and negative terrain (P-N terrains). Due to the point cloud vegetation removal methods of P-N terrains are different, there is an imperative need for shoulder line extraction. In this paper, we proposed an automatic shoulder line extraction method based on point cloud. The workflow is as below: (i) ground points were selected by using a grid filter in order to remove most of noisy points. (ii) Based on DEM interpolated by those ground points, slope was mapped and classified into two classes (P-N terrains), using Natural Break Classified method. (iii) The common boundary between two slopes is extracted as shoulder line candidate. (iv) Adjust the filter gird size and repeat step i-iii until the shoulder line candidate matches its real location. (v) Generate shoulder line of the whole area. Test area locates in Madigou, Jingbian County of Shaanxi Province, China. A total of 600 million points are acquired in the test area of 0.23km2, using Riegl VZ400 3D Laser Scanner in August 2014. Due to the limit Granted computing performance, the test area is divided into 60 blocks and 13 of them around the shoulder line were selected for filter grid size optimizing. The experiment result shows that the optimal filter grid size varies in diverse sample area, and a power function relation exists between filter grid size and point density. The optimal grid size was determined by above relation and shoulder lines of 60 blocks were then extracted. Comparing with the manual interpretation results, the accuracy of the whole result reaches 85%. This method can be applied to shoulder line extraction in hilly area, which is crucial for point cloud denoising and high accuracy DEM generation.
Automatic Rail Extraction and Celarance Check with a Point Cloud Captured by Mls in a Railway
NASA Astrophysics Data System (ADS)
Niina, Y.; Honma, R.; Honma, Y.; Kondo, K.; Tsuji, K.; Hiramatsu, T.; Oketani, E.
2018-05-01
Recently, MLS (Mobile Laser Scanning) has been successfully used in a road maintenance. In this paper, we present the application of MLS for the inspection of clearance along railway tracks of West Japan Railway Company. Point clouds around the track are captured by MLS mounted on a bogie and rail position can be determined by matching the shape of the ideal rail head with respect to the point cloud by ICP algorithm. A clearance check is executed automatically with virtual clearance model laid along the extracted rail. As a result of evaluation, the accuracy of extracting rail positions is less than 3 mm. With respect to the automatic clearance check, the objects inside the clearance and the ones related to a contact line is successfully detected by visual confirmation.
Automatic extraction of pavement markings on streets from point cloud data of mobile LiDAR
NASA Astrophysics Data System (ADS)
Gao, Yang; Zhong, Ruofei; Tang, Tao; Wang, Liuzhao; Liu, Xianlin
2017-08-01
Pavement markings provide an important foundation as they help to keep roads users safe. Accurate and comprehensive information about pavement markings assists the road regulators and is useful in developing driverless technology. Mobile light detection and ranging (LiDAR) systems offer new opportunities to collect and process accurate pavement markings’ information. Mobile LiDAR systems can directly obtain the three-dimensional (3D) coordinates of an object, thus defining spatial data and the intensity of (3D) objects in a fast and efficient way. The RGB attribute information of data points can be obtained based on the panoramic camera in the system. In this paper, we present a novel method process to automatically extract pavement markings using multiple attribute information of the laser scanning point cloud from the mobile LiDAR data. This method process utilizes a differential grayscale of RGB color, laser pulse reflection intensity, and the differential intensity to identify and extract pavement markings. We utilized point cloud density to remove the noise and used morphological operations to eliminate the errors. In the application, we tested our method process on different sections of roads in Beijing, China, and Buffalo, NY, USA. The results indicated that both correctness (p) and completeness (r) were higher than 90%. The method process of this research can be applied to extract pavement markings from huge point cloud data produced by mobile LiDAR.
Ohashi, Akira; Tsuguchi, Akira; Imura, Hisanori; Ohashi, Kousaburo
2004-07-01
The cloud point extraction behavior of aluminum(III) with 8-quinolinol (HQ) or 2-methyl-8-quinolinol (HMQ) and Triton X-100 was investigated in the absence and presence of 3,5-dichlorophenol (Hdcp). Aluminum(III) was almost extracted with HQ and 4(v/v)% Triton X-100 above pH 5.0, but was not extracted with HMQ-Triton X-100. However, in the presence of Hdcp, it was almost quantitatively extracted with HMQ-Triton X-100. The synergistic effect of Hdcp on the extraction of aluminum(III) with HMQ and Triton X-100 may be caused by the formation of a mixed-ligand complex, Al(dcp)(MQ)2.
Temporal Analysis and Automatic Calibration of the Velodyne HDL-32E LiDAR System
NASA Astrophysics Data System (ADS)
Chan, T. O.; Lichti, D. D.; Belton, D.
2013-10-01
At the end of the first quarter of 2012, more than 600 Velodyne LiDAR systems had been sold worldwide for various robotic and high-accuracy survey applications. The ultra-compact Velodyne HDL-32E LiDAR has become a predominant sensor for many applications that require lower sensor size/weight and cost. For high accuracy applications, cost-effective calibration methods with minimal manual intervention are always desired by users. However, the calibrations are complicated by the Velodyne LiDAR's narrow vertical field of view and the very highly time-variant nature of its measurements. In the paper, the temporal stability of the HDL-32E is first analysed as the motivation for developing a new, automated calibration method. This is followed by a detailed description of the calibration method that is driven by a novel segmentation method for extracting vertical cylindrical features from the Velodyne point clouds. The proposed segmentation method utilizes the Velodyne point cloud's slice-like nature and first decomposes the point clouds into 2D layers. Then the layers are treated as 2D images and are processed with the Generalized Hough Transform which extracts the points distributed in circular patterns from the point cloud layers. Subsequently, the vertical cylindrical features can be readily extracted from the whole point clouds based on the previously extracted points. The points are passed to the calibration that estimates the cylinder parameters and the LiDAR's additional parameters simultaneously by constraining the segmented points to fit to the cylindrical geometric model in such a way the weighted sum of the adjustment residuals are minimized. The proposed calibration is highly automatic and this allows end users to obtain the time-variant additional parameters instantly and frequently whenever there are vertical cylindrical features presenting in scenes. The methods were verified with two different real datasets, and the results suggest that up to 78.43% accuracy improvement for the HDL-32E can be achieved using the proposed calibration method.
Kassem, Mohammed A; Amin, Alaa S
2015-02-05
A new method to estimate rhodium in different samples at trace levels had been developed. Rhodium was complexed with 5-(4'-nitro-2',6'-dichlorophenylazo)-6-hydroxypyrimidine-2,4-dione (NDPHPD) as a complexing agent in an aqueous medium and concentrated by using Triton X-114 as a surfactant. The investigated rhodium complex was preconcentrated with cloud point extraction process using the nonionic surfactant Triton X-114 to extract rhodium complex from aqueous solutions at pH 4.75. After the phase separation at 50°C, the surfactant-rich phase was heated again at 100°C to remove water after decantation and the remaining phase was dissolved using 0.5mL of acetonitrile. Under optimum conditions, the calibration curve was linear for the concentration range of 0.5-75ngmL(-1) and the detection limit was 0.15ngmL(-1) of the original solution. The enhancement factor of 500 was achieved for 250mL samples containing the analyte and relative standard deviations were ⩽1.50%. The method was found to be highly selective, fairly sensitive, simple, rapid and economical and safely applied for rhodium determination in different complex materials such as synthetic mixture of alloys and environmental water samples. Copyright © 2014 Elsevier B.V. All rights reserved.
Vertical Optical Scanning with Panoramic Vision for Tree Trunk Reconstruction
Berveglieri, Adilson; Liang, Xinlian; Honkavaara, Eija
2017-01-01
This paper presents a practical application of a technique that uses a vertical optical flow with a fisheye camera to generate dense point clouds from a single planimetric station. Accurate data can be extracted to enable the measurement of tree trunks or branches. The images that are collected with this technique can be oriented in photogrammetric software (using fisheye models) and used to generate dense point clouds, provided that some constraints on the camera positions are adopted. A set of images was captured in a forest plot in the experiments. Weighted geometric constraints were imposed in the photogrammetric software to calculate the image orientation, perform dense image matching, and accurately generate a 3D point cloud. The tree trunks in the scenes were reconstructed and mapped in a local reference system. The accuracy assessment was based on differences between measured and estimated trunk diameters at different heights. Trunk sections from an image-based point cloud were also compared to the corresponding sections that were extracted from a dense terrestrial laser scanning (TLS) point cloud. Cylindrical fitting of the trunk sections allowed the assessment of the accuracies of the trunk geometric shapes in both clouds. The average difference between the cylinders that were fitted to the photogrammetric cloud and those to the TLS cloud was less than 1 cm, which indicates the potential of the proposed technique. The point densities that were obtained with vertical optical scanning were 1/3 less than those that were obtained with TLS. However, the point density can be improved by using higher resolution cameras. PMID:29207468
Vertical Optical Scanning with Panoramic Vision for Tree Trunk Reconstruction.
Berveglieri, Adilson; Tommaselli, Antonio M G; Liang, Xinlian; Honkavaara, Eija
2017-12-02
This paper presents a practical application of a technique that uses a vertical optical flow with a fisheye camera to generate dense point clouds from a single planimetric station. Accurate data can be extracted to enable the measurement of tree trunks or branches. The images that are collected with this technique can be oriented in photogrammetric software (using fisheye models) and used to generate dense point clouds, provided that some constraints on the camera positions are adopted. A set of images was captured in a forest plot in the experiments. Weighted geometric constraints were imposed in the photogrammetric software to calculate the image orientation, perform dense image matching, and accurately generate a 3D point cloud. The tree trunks in the scenes were reconstructed and mapped in a local reference system. The accuracy assessment was based on differences between measured and estimated trunk diameters at different heights. Trunk sections from an image-based point cloud were also compared to the corresponding sections that were extracted from a dense terrestrial laser scanning (TLS) point cloud. Cylindrical fitting of the trunk sections allowed the assessment of the accuracies of the trunk geometric shapes in both clouds. The average difference between the cylinders that were fitted to the photogrammetric cloud and those to the TLS cloud was less than 1 cm, which indicates the potential of the proposed technique. The point densities that were obtained with vertical optical scanning were 1/3 less than those that were obtained with TLS. However, the point density can be improved by using higher resolution cameras.
Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method
Shen, Yueqian; Lindenbergh, Roderik; Wang, Jinhu
2016-01-01
A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system having the disadvantage that this registration step introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. As feature points either targets or virtual points corresponding to some reconstructable feature in the scene are used. The new method is implemented on two scans sampling a masonry laboratory building before and after seismic testing, that resulted in damages in the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud to cloud change analysis demonstrating the potential of the new method for structural analysis. PMID:28029121
Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method.
Shen, Yueqian; Lindenbergh, Roderik; Wang, Jinhu
2016-12-24
A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system having the disadvantage that this registration step introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. As feature points either targets or virtual points corresponding to some reconstructable feature in the scene are used. The new method is implemented on two scans sampling a masonry laboratory building before and after seismic testing, that resulted in damages in the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud to cloud change analysis demonstrating the potential of the new method for structural analysis.
Performance Evaluation of sUAS Equipped with Velodyne HDL-32E LiDAR Sensor
NASA Astrophysics Data System (ADS)
Jozkow, G.; Wieczorek, P.; Karpina, M.; Walicka, A.; Borkowski, A.
2017-08-01
The Velodyne HDL-32E laser scanner is used more frequently as main mapping sensor in small commercial UASs. However, there is still little information about the actual accuracy of point clouds collected with such UASs. This work evaluates empirically the accuracy of the point cloud collected with such UAS. Accuracy assessment was conducted in four aspects: impact of sensors on theoretical point cloud accuracy, trajectory reconstruction quality, and internal and absolute point cloud accuracies. Theoretical point cloud accuracy was evaluated by calculating 3D position error knowing errors of used sensors. The quality of trajectory reconstruction was assessed by comparing position and attitude differences from forward and reverse EKF solution. Internal and absolute accuracies were evaluated by fitting planes to 8 point cloud samples extracted for planar surfaces. In addition, the absolute accuracy was also determined by calculating point 3D distances between LiDAR UAS and reference TLS point clouds. Test data consisted of point clouds collected in two separate flights performed over the same area. Executed experiments showed that in tested UAS, the trajectory reconstruction, especially attitude, has significant impact on point cloud accuracy. Estimated absolute accuracy of point clouds collected during both test flights was better than 10 cm, thus investigated UAS fits mapping-grade category.
Three-dimensional reconstruction of indoor whole elements based on mobile LiDAR point cloud data
NASA Astrophysics Data System (ADS)
Gong, Yuejian; Mao, Wenbo; Bi, Jiantao; Ji, Wei; He, Zhanjun
2014-11-01
Ground-based LiDAR is one of the most effective city modeling tools at present, which has been widely used for three-dimensional reconstruction of outdoor objects. However, as for indoor objects, there are some technical bottlenecks due to lack of GPS signal. In this paper, based on the high-precision indoor point cloud data which was obtained by LiDAR, an international advanced indoor mobile measuring equipment, high -precision model was fulfilled for all indoor ancillary facilities. The point cloud data we employed also contain color feature, which is extracted by fusion with CCD images. Thus, it has both space geometric feature and spectral information which can be used for constructing objects' surface and restoring color and texture of the geometric model. Based on Autodesk CAD platform and with help of PointSence plug, three-dimensional reconstruction of indoor whole elements was realized. Specifically, Pointools Edit Pro was adopted to edit the point cloud, then different types of indoor point cloud data was processed, including data format conversion, outline extracting and texture mapping of the point cloud model. Finally, three-dimensional visualization of the real-world indoor was completed. Experiment results showed that high-precision 3D point cloud data obtained by indoor mobile measuring equipment can be used for indoor whole elements' 3-d reconstruction and that methods proposed in this paper can efficiently realize the 3 -d construction of indoor whole elements. Moreover, the modeling precision could be controlled within 5 cm, which was proved to be a satisfactory result.
Model for Semantically Rich Point Cloud Data
NASA Astrophysics Data System (ADS)
Poux, F.; Neuville, R.; Hallot, P.; Billen, R.
2017-10-01
This paper proposes an interoperable model for managing high dimensional point clouds while integrating semantics. Point clouds from sensors are a direct source of information physically describing a 3D state of the recorded environment. As such, they are an exhaustive representation of the real world at every scale: 3D reality-based spatial data. Their generation is increasingly fast but processing routines and data models lack of knowledge to reason from information extraction rather than interpretation. The enhanced smart point cloud developed model allows to bring intelligence to point clouds via 3 connected meta-models while linking available knowledge and classification procedures that permits semantic injection. Interoperability drives the model adaptation to potentially many applications through specialized domain ontologies. A first prototype is implemented in Python and PostgreSQL database and allows to combine semantic and spatial concepts for basic hybrid queries on different point clouds.
Mapping Urban Tree Canopy Cover Using Fused Airborne LIDAR and Satellite Imagery Data
NASA Astrophysics Data System (ADS)
Parmehr, Ebadat G.; Amati, Marco; Fraser, Clive S.
2016-06-01
Urban green spaces, particularly urban trees, play a key role in enhancing the liveability of cities. The availability of accurate and up-to-date maps of tree canopy cover is important for sustainable development of urban green spaces. LiDAR point clouds are widely used for the mapping of buildings and trees, and several LiDAR point cloud classification techniques have been proposed for automatic mapping. However, the effectiveness of point cloud classification techniques for automated tree extraction from LiDAR data can be impacted to the point of failure by the complexity of tree canopy shapes in urban areas. Multispectral imagery, which provides complementary information to LiDAR data, can improve point cloud classification quality. This paper proposes a reliable method for the extraction of tree canopy cover from fused LiDAR point cloud and multispectral satellite imagery data. The proposed method initially associates each LiDAR point with spectral information from the co-registered satellite imagery data. It calculates the normalised difference vegetation index (NDVI) value for each LiDAR point and corrects tree points which have been misclassified as buildings. Then, region growing of tree points, taking the NDVI value into account, is applied. Finally, the LiDAR points classified as tree points are utilised to generate a canopy cover map. The performance of the proposed tree canopy cover mapping method is experimentally evaluated on a data set of airborne LiDAR and WorldView 2 imagery covering a suburb in Melbourne, Australia.
[Determination of biphenyl ether herbicides in water using HPLC with cloud-point extraction].
He, Cheng-Yan; Li, Yuan-Qian; Wang, Shen-Jiao; Ouyang, Hua-Xue; Zheng, Bo
2010-01-01
To determine residues of multiple biphenyl ether herbicides simultaneously in water using high performance liquid chromatography (HPLC) with cloud-point extraction. The residues of eight biphenyl ether herbicides (including bentazone, fomesafen, acifluorfen, aclonifen, bifenox, fluoroglycofenethy, nitrofen, oxyfluorfen) in water samples were extracted with cloud-point extraction of Triton X-114. The analytes were separated and determined using reverse phase HPLC with ultraviolet detector at 300 nm. Optimized conditions for the pretreatment of water samples and the parameters of chromatographic separation applied. There was a good linear correlation between the concentration and the peak area of the analytes in the range of 0.05-2.00 mg/L (r = 0.9991-0.9998). Except bentazone, the spiked recoveries of the biphenyl ether herbicides in the water samples ranged from 80.1% to 100.9%, with relative standard deviations ranging from 2.70% to 6.40%. The detection limit of the method ranged from 0.10 microg/L to 0.50 microg/L. The proposed method is simple, rapid and sensitive, and can meet the requirements of determination of multiple biphenyl ether herbicides simultaneously in natural waters.
Automatic Extraction of Road Markings from Mobile Laser Scanning Data
NASA Astrophysics Data System (ADS)
Ma, H.; Pei, Z.; Wei, Z.; Zhong, R.
2017-09-01
Road markings as critical feature in high-defination maps, which are Advanced Driver Assistance System (ADAS) and self-driving technology required, have important functions in providing guidance and information to moving cars. Mobile laser scanning (MLS) system is an effective way to obtain the 3D information of the road surface, including road markings, at highway speeds and at less than traditional survey costs. This paper presents a novel method to automatically extract road markings from MLS point clouds. Ground points are first filtered from raw input point clouds using neighborhood elevation consistency method. The basic assumption of the method is that the road surface is smooth. Points with small elevation-difference between neighborhood are considered to be ground points. Then ground points are partitioned into a set of profiles according to trajectory data. The intensity histogram of points in each profile is generated to find intensity jumps in certain threshold which inversely to laser distance. The separated points are used as seed points to region grow based on intensity so as to obtain road mark of integrity. We use the point cloud template-matching method to refine the road marking candidates via removing the noise clusters with low correlation coefficient. During experiment with a MLS point set of about 2 kilometres in a city center, our method provides a promising solution to the road markings extraction from MLS data.
The potential of cloud point system as a novel two-phase partitioning system for biotransformation.
Wang, Zhilong
2007-05-01
Although the extractive biotransformation in two-phase partitioning systems have been studied extensively, such as the water-organic solvent two-phase system, the aqueous two-phase system, the reverse micelle system, and the room temperature ionic liquid, etc., this has not yet resulted in a widespread industrial application. Based on the discussion of the main obstacles, an exploitation of a cloud point system, which has already been applied in a separation field known as a cloud point extraction, as a novel two-phase partitioning system for biotransformation, is reviewed by analysis of some topical examples. At the end of the review, the process control and downstream processing in the application of the novel two-phase partitioning system for biotransformation are also briefly discussed.
Datum Feature Extraction and Deformation Analysis Method Based on Normal Vector of Point Cloud
NASA Astrophysics Data System (ADS)
Sun, W.; Wang, J.; Jin, F.; Liang, Z.; Yang, Y.
2018-04-01
In order to solve the problem lacking applicable analysis method in the application of three-dimensional laser scanning technology to the field of deformation monitoring, an efficient method extracting datum feature and analysing deformation based on normal vector of point cloud was proposed. Firstly, the kd-tree is used to establish the topological relation. Datum points are detected by tracking the normal vector of point cloud determined by the normal vector of local planar. Then, the cubic B-spline curve fitting is performed on the datum points. Finally, datum elevation and the inclination angle of the radial point are calculated according to the fitted curve and then the deformation information was analyzed. The proposed approach was verified on real large-scale tank data set captured with terrestrial laser scanner in a chemical plant. The results show that the method could obtain the entire information of the monitor object quickly and comprehensively, and reflect accurately the datum feature deformation.
Pan, Tao; Ren, Suizhou; Xu, Meiying; Sun, Guoping; Guo, Jun
2013-07-01
The biological treatment of triphenylmethane dyes is an important issue. Most microbes have limited practical application because they cannot completely detoxicate these dyes. In this study, the extractive biodecolorization of triphenylmethane dyes by Aeromonas hydrophila DN322p was carried out by introducing the cloud point system. The cloud point system is composed of a mixture of nonionic surfactants (20 g/L) Brij 30 and Tergitol TMN-3 in equal proportions. After the decolorization of crystal violet, a higher wet cell weight was obtained in the cloud point system than that of the control system. Based on the results of thin-layer chromatography, the residual crystal violet and its decolorized product, leuco crystal violet, preferred to partition into the coacervate phase. Therefore, the detoxification of the dilute phase was achieved, which indicated that the dilute phase could be discharged without causing dye pollution. The extractive biodecolorization of three other triphenylmethane dyes was also examined in this system. The decolorization of malachite green and brilliant green was similar to that of crystal violet. Only ethyl violet achieved a poor decolorization rate because DN322p decolorized it via adsorption but did not convert it into its leuco form. This study provides potential application of biological treatment in triphenylmethane dye wastewater.
D Modeling of Components of a Garden by Using Point Cloud Data
NASA Astrophysics Data System (ADS)
Kumazakia, R.; Kunii, Y.
2016-06-01
Laser measurement is currently applied to several tasks such as plumbing management, road investigation through mobile mapping systems, and elevation model utilization through airborne LiDAR. Effective laser measurement methods have been well-documented in civil engineering, but few attempts have been made to establish equally effective methods in landscape engineering. By using point cloud data acquired through laser measurement, the aesthetic landscaping of Japanese gardens can be enhanced. This study focuses on simple landscape simulations for pruning and rearranging trees as well as rearranging rocks, lanterns, and other garden features by using point cloud data. However, such simulations lack concreteness. Therefore, this study considers the construction of a library of garden features extracted from point cloud data. The library would serve as a resource for creating new gardens and simulating gardens prior to conducting repairs. Extracted garden features are imported as 3ds Max objects, and realistic 3D models are generated by using a material editor system. As further work toward the publication of a 3D model library, file formats for tree crowns and trunks should be adjusted. Moreover, reducing the size of created models is necessary. Models created using point cloud data are informative because simply shaped garden features such as trees are often seen in the 3D industry.
NASA Astrophysics Data System (ADS)
Zhang, Yuyan; Guo, Quanli; Wang, Zhenchun; Yang, Degong
2018-03-01
This paper proposes a non-contact, non-destructive evaluation method for the surface damage of high-speed sliding electrical contact rails. The proposed method establishes a model of damage identification and calculation. A laser scanning system is built to obtain the 3D point cloud data of the rail surface. In order to extract the damage region of the rail surface, the 3D point cloud data are processed using iterative difference, nearest neighbours search and a data registration algorithm. The curvature of the point cloud data in the damage region is mapped to RGB color information, which can directly reflect the change trend of the curvature of the point cloud data in the damage region. The extracted damage region is divided into three prism elements by a method of triangulation. The volume and mass of a single element are calculated by the method of geometric segmentation. Finally, the total volume and mass of the damage region are obtained by the principle of superposition. The proposed method is applied to several typical injuries and the results are discussed. The experimental results show that the algorithm can identify damage shapes and calculate damage mass with milligram precision, which are useful for evaluating the damage in a further research stage.
Automatic pole-like object modeling via 3D part-based analysis of point cloud
NASA Astrophysics Data System (ADS)
He, Liu; Yang, Haoxiang; Huang, Yuchun
2016-10-01
Pole-like objects, including trees, lampposts and traffic signs, are indispensable part of urban infrastructure. With the advance of vehicle-based laser scanning (VLS), massive point cloud of roadside urban areas becomes applied in 3D digital city modeling. Based on the property that different pole-like objects have various canopy parts and similar trunk parts, this paper proposed the 3D part-based shape analysis to robustly extract, identify and model the pole-like objects. The proposed method includes: 3D clustering and recognition of trunks, voxel growing and part-based 3D modeling. After preprocessing, the trunk center is identified as the point that has local density peak and the largest minimum inter-cluster distance. Starting from the trunk centers, the remaining points are iteratively clustered to the same centers of their nearest point with higher density. To eliminate the noisy points, cluster border is refined by trimming boundary outliers. Then, candidate trunks are extracted based on the clustering results in three orthogonal planes by shape analysis. Voxel growing obtains the completed pole-like objects regardless of overlaying. Finally, entire trunk, branch and crown part are analyzed to obtain seven feature parameters. These parameters are utilized to model three parts respectively and get signal part-assembled 3D model. The proposed method is tested using the VLS-based point cloud of Wuhan University, China. The point cloud includes many kinds of trees, lampposts and other pole-like posters under different occlusions and overlaying. Experimental results show that the proposed method can extract the exact attributes and model the roadside pole-like objects efficiently.
NASA Astrophysics Data System (ADS)
Ghasemi, Elham; Kaykhaii, Massoud
2016-07-01
A novel, green, simple and fast method was developed for spectrophotometric determination of Malachite green, Crystal violet, and Rhodamine B in water samples based on Micro-cloud Point extraction (MCPE) at room temperature. This is the first report on the application of MCPE on dyes. In this method, to reach the cloud point at room temperature, the MCPE procedure was carried out in brine using Triton X-114 as a non-ionic surfactant. The factors influencing the extraction efficiency were investigated and optimized. Under the optimized condition, calibration curves were found to be linear in the concentration range of 0.06-0.60 mg/L, 0.10-0.80 mg/L, and 0.03-0.30 mg/L with the enrichment factors of 29.26, 85.47 and 28.36, respectively for Malachite green, Crystal violet, and Rhodamine B. Limit of detections were between 2.2 and 5.1 μg/L.
Ghasemi, Elham; Kaykhaii, Massoud
2016-07-05
A novel, green, simple and fast method was developed for spectrophotometric determination of Malachite green, Crystal violet, and Rhodamine B in water samples based on Micro-cloud Point extraction (MCPE) at room temperature. This is the first report on the application of MCPE on dyes. In this method, to reach the cloud point at room temperature, the MCPE procedure was carried out in brine using Triton X-114 as a non-ionic surfactant. The factors influencing the extraction efficiency were investigated and optimized. Under the optimized condition, calibration curves were found to be linear in the concentration range of 0.06-0.60mg/L, 0.10-0.80mg/L, and 0.03-0.30mg/L with the enrichment factors of 29.26, 85.47 and 28.36, respectively for Malachite green, Crystal violet, and Rhodamine B. Limit of detections were between 2.2 and 5.1μg/L. Copyright © 2016 Elsevier B.V. All rights reserved.
Space Subdivision in Indoor Mobile Laser Scanning Point Clouds Based on Scanline Analysis.
Zheng, Yi; Peter, Michael; Zhong, Ruofei; Oude Elberink, Sander; Zhou, Quan
2018-06-05
Indoor space subdivision is an important aspect of scene analysis that provides essential information for many applications, such as indoor navigation and evacuation route planning. Until now, most proposed scene understanding algorithms have been based on whole point clouds, which has led to complicated operations, high computational loads and low processing speed. This paper presents novel methods to efficiently extract the location of openings (e.g., doors and windows) and to subdivide space by analyzing scanlines. An opening detection method is demonstrated that analyses the local geometric regularity in scanlines to refine the extracted opening. Moreover, a space subdivision method based on the extracted openings and the scanning system trajectory is described. Finally, the opening detection and space subdivision results are saved as point cloud labels which will be used for further investigations. The method has been tested on a real dataset collected by ZEB-REVO. The experimental results validate the completeness and correctness of the proposed method for different indoor environment and scanning paths.
Smart Point Cloud: Definition and Remaining Challenges
NASA Astrophysics Data System (ADS)
Poux, F.; Hallot, P.; Neuville, R.; Billen, R.
2016-10-01
Dealing with coloured point cloud acquired from terrestrial laser scanner, this paper identifies remaining challenges for a new data structure: the smart point cloud. This concept arises with the statement that massive and discretized spatial information from active remote sensing technology is often underused due to data mining limitations. The generalisation of point cloud data associated with the heterogeneity and temporality of such datasets is the main issue regarding structure, segmentation, classification, and interaction for an immediate understanding. We propose to use both point cloud properties and human knowledge through machine learning to rapidly extract pertinent information, using user-centered information (smart data) rather than raw data. A review of feature detection, machine learning frameworks and database systems indexed both for mining queries and data visualisation is studied. Based on existing approaches, we propose a new 3-block flexible framework around device expertise, analytic expertise and domain base reflexion. This contribution serves as the first step for the realisation of a comprehensive smart point cloud data structure.
NASA Astrophysics Data System (ADS)
Tomljenovic, Ivan; Tiede, Dirk; Blaschke, Thomas
2016-10-01
In the past two decades Object-Based Image Analysis (OBIA) established itself as an efficient approach for the classification and extraction of information from remote sensing imagery and, increasingly, from non-image based sources such as Airborne Laser Scanner (ALS) point clouds. ALS data is represented in the form of a point cloud with recorded multiple returns and intensities. In our work, we combined OBIA with ALS point cloud data in order to identify and extract buildings as 2D polygons representing roof outlines in a top down mapping approach. We performed rasterization of the ALS data into a height raster for the purpose of the generation of a Digital Surface Model (DSM) and a derived Digital Elevation Model (DEM). Further objects were generated in conjunction with point statistics from the linked point cloud. With the use of class modelling methods, we generated the final target class of objects representing buildings. The approach was developed for a test area in Biberach an der Riß (Germany). In order to point out the possibilities of the adaptation-free transferability to another data set, the algorithm has been applied ;as is; to the ISPRS Benchmarking data set of Toronto (Canada). The obtained results show high accuracies for the initial study area (thematic accuracies of around 98%, geometric accuracy of above 80%). The very high performance within the ISPRS Benchmark without any modification of the algorithm and without any adaptation of parameters is particularly noteworthy.
Drawing and Landscape Simulation for Japanese Garden by Using Terrestrial Laser Scanner
NASA Astrophysics Data System (ADS)
Kumazaki, R.; Kunii, Y.
2015-05-01
Recently, many laser scanners are applied for various measurement fields. This paper investigates that it was useful to use the terrestrial laser scanner in the field of landscape architecture and examined a usage in Japanese garden. As for the use of 3D point cloud data in the Japanese garden, it is the visual use such as the animations. Therefore, some applications of the 3D point cloud data was investigated that are as follows. Firstly, ortho image of the Japanese garden could be outputted for the 3D point cloud data. Secondly, contour lines of the Japanese garden also could be extracted, and drawing was became possible. Consequently, drawing of Japanese garden was realized more efficiency due to achievement of laborsaving. Moreover, operation of the measurement and drawing could be performed without technical skills, and any observers can be operated. Furthermore, 3D point cloud data could be edited, and some landscape simulations that extraction and placement of tree or some objects were became possible. As a result, it can be said that the terrestrial laser scanner will be applied in landscape architecture field more widely.
Facets : a Cloudcompare Plugin to Extract Geological Planes from Unstructured 3d Point Clouds
NASA Astrophysics Data System (ADS)
Dewez, T. J. B.; Girardeau-Montaut, D.; Allanic, C.; Rohmer, J.
2016-06-01
Geological planar facets (stratification, fault, joint…) are key features to unravel the tectonic history of rock outcrop or appreciate the stability of a hazardous rock cliff. Measuring their spatial attitude (dip and strike) is generally performed by hand with a compass/clinometer, which is time consuming, requires some degree of censoring (i.e. refusing to measure some features judged unimportant at the time), is not always possible for fractures higher up on the outcrop and is somewhat hazardous. 3D virtual geological outcrop hold the potential to alleviate these issues. Efficiently segmenting massive 3D point clouds into individual planar facets, inside a convenient software environment was lacking. FACETS is a dedicated plugin within CloudCompare v2.6.2 (http://cloudcompare.org/ ) implemented to perform planar facet extraction, calculate their dip and dip direction (i.e. azimuth of steepest decent) and report the extracted data in interactive stereograms. Two algorithms perform the segmentation: Kd-Tree and Fast Marching. Both divide the point cloud into sub-cells, then compute elementary planar objects and aggregate them progressively according to a planeity threshold into polygons. The boundaries of the polygons are adjusted around segmented points with a tension parameter, and the facet polygons can be exported as 3D polygon shapefiles towards third party GIS software or simply as ASCII comma separated files. One of the great features of FACETS is the capability to explore planar objects but also 3D points with normals with the stereogram tool. Poles can be readily displayed, queried and manually segmented interactively. The plugin blends seamlessly into CloudCompare to leverage all its other 3D point cloud manipulation features. A demonstration of the tool is presented to illustrate these different features. While designed for geological applications, FACETS could be more widely applied to any planar objects.
Instantaneous Coastline Extraction from LIDAR Point Cloud and High Resolution Remote Sensing Imagery
NASA Astrophysics Data System (ADS)
Li, Y.; Zhoing, L.; Lai, Z.; Gan, Z.
2018-04-01
A new method was proposed for instantaneous waterline extraction in this paper, which combines point cloud geometry features and image spectral characteristics of the coastal zone. The proposed method consists of follow steps: Mean Shift algorithm is used to segment the coastal zone of high resolution remote sensing images into small regions containing semantic information;Region features are extracted by integrating LiDAR data and the surface area of the image; initial waterlines are extracted by α-shape algorithm; a region growing algorithm with is taking into coastline refinement, with a growth rule integrating the intensity and topography of LiDAR data; moothing the coastline. Experiments are conducted to demonstrate the efficiency of the proposed method.
Hierarchical Regularization of Polygons for Photogrammetric Point Clouds of Oblique Images
NASA Astrophysics Data System (ADS)
Xie, L.; Hu, H.; Zhu, Q.; Wu, B.; Zhang, Y.
2017-05-01
Despite the success of multi-view stereo (MVS) reconstruction from massive oblique images in city scale, only point clouds and triangulated meshes are available from existing MVS pipelines, which are topologically defect laden, free of semantical information and hard to edit and manipulate interactively in further applications. On the other hand, 2D polygons and polygonal models are still the industrial standard. However, extraction of the 2D polygons from MVS point clouds is still a non-trivial task, given the fact that the boundaries of the detected planes are zigzagged and regularities, such as parallel and orthogonal, cannot preserve. Aiming to solve these issues, this paper proposes a hierarchical polygon regularization method for the photogrammetric point clouds from existing MVS pipelines, which comprises of local and global levels. After boundary points extraction, e.g. using alpha shapes, the local level is used to consolidate the original points, by refining the orientation and position of the points using linear priors. The points are then grouped into local segments by forward searching. In the global level, regularities are enforced through a labeling process, which encourage the segments share the same label and the same label represents segments are parallel or orthogonal. This is formulated as Markov Random Field and solved efficiently. Preliminary results are made with point clouds from aerial oblique images and compared with two classical regularization methods, which have revealed that the proposed method are more powerful in abstracting a single building and is promising for further 3D polygonal model reconstruction and GIS applications.
NASA Astrophysics Data System (ADS)
Bonduel, M.; Bassier, M.; Vergauwen, M.; Pauwels, P.; Klein, R.
2017-11-01
The use of Building Information Modeling (BIM) for existing buildings based on point clouds is increasing. Standardized geometric quality assessment of the BIMs is needed to make them more reliable and thus reusable for future users. First, available literature on the subject is studied. Next, an initial proposal for a standardized geometric quality assessment is presented. Finally, this method is tested and evaluated with a case study. The number of specifications on BIM relating to existing buildings is limited. The Levels of Accuracy (LOA) specification of the USIBD provides definitions and suggestions regarding geometric model accuracy, but lacks a standardized assessment method. A deviation analysis is found to be dependent on (1) the used mathematical model, (2) the density of the point clouds and (3) the order of comparison. Results of the analysis can be graphical and numerical. An analysis on macro (building) and micro (BIM object) scale is necessary. On macro scale, the complete model is compared to the original point cloud and vice versa to get an overview of the general model quality. The graphical results show occluded zones and non-modeled objects respectively. Colored point clouds are derived from this analysis and integrated in the BIM. On micro scale, the relevant surface parts are extracted per BIM object and compared to the complete point cloud. Occluded zones are extracted based on a maximum deviation. What remains is classified according to the LOA specification. The numerical results are integrated in the BIM with the use of object parameters.
Detection and Classification of Pole-Like Objects from Mobile Mapping Data
NASA Astrophysics Data System (ADS)
Fukano, K.; Masuda, H.
2015-08-01
Laser scanners on a vehicle-based mobile mapping system can capture 3D point-clouds of roads and roadside objects. Since roadside objects have to be maintained periodically, their 3D models are useful for planning maintenance tasks. In our previous work, we proposed a method for detecting cylindrical poles and planar plates in a point-cloud. However, it is often required to further classify pole-like objects into utility poles, streetlights, traffic signals and signs, which are managed by different organizations. In addition, our previous method may fail to extract low pole-like objects, which are often observed in urban residential areas. In this paper, we propose new methods for extracting and classifying pole-like objects. In our method, we robustly extract a wide variety of poles by converting point-clouds into wireframe models and calculating cross-sections between wireframe models and horizontal cutting planes. For classifying pole-like objects, we subdivide a pole-like object into five subsets by extracting poles and planes, and calculate feature values of each subset. Then we apply a supervised machine learning method using feature variables of subsets. In our experiments, our method could achieve excellent results for detection and classification of pole-like objects.
NASA Astrophysics Data System (ADS)
Okyay, U.; Glennie, C. L.; Khan, S.
2017-12-01
Owing to the advent of terrestrial laser scanners (TLS), high-density point cloud data has become increasingly available to the geoscience research community. Research groups have started producing their own point clouds for various applications, gradually shifting their emphasis from obtaining the data towards extracting more and meaningful information from the point clouds. Extracting fracture properties from three-dimensional data in a (semi-)automated manner has been an active area of research in geosciences. Several studies have developed various processing algorithms for extracting only planar surfaces. In comparison, (semi-)automated identification of fracture traces at the outcrop scale, which could be used for mapping fracture distribution have not been investigated frequently. Understanding the spatial distribution and configuration of natural fractures is of particular importance, as they directly influence fluid-flow through the host rock. Surface roughness, typically defined as the deviation of a natural surface from a reference datum, has become an important metric in geoscience research, especially with the increasing density and accuracy of point clouds. In the study presented herein, a surface roughness model was employed to identify fracture traces and their distribution on an ophiolite outcrop in Oman. Surface roughness calculations were performed using orthogonal distance regression over various grid intervals. The results demonstrated that surface roughness could identify outcrop-scale fracture traces from which fracture distribution and density maps can be generated. However, considering outcrop conditions and properties and the purpose of the application, the definition of an adequate grid interval for surface roughness model and selection of threshold values for distribution maps are not straightforward and require user intervention and interpretation.
Automatic Recognition of Indoor Navigation Elements from Kinect Point Clouds
NASA Astrophysics Data System (ADS)
Zeng, L.; Kang, Z.
2017-09-01
This paper realizes automatically the navigating elements defined by indoorGML data standard - door, stairway and wall. The data used is indoor 3D point cloud collected by Kinect v2 launched in 2011 through the means of ORB-SLAM. By contrast, it is cheaper and more convenient than lidar, but the point clouds also have the problem of noise, registration error and large data volume. Hence, we adopt a shape descriptor - histogram of distances between two randomly chosen points, proposed by Osada and merges with other descriptor - in conjunction with random forest classifier to recognize the navigation elements (door, stairway and wall) from Kinect point clouds. This research acquires navigation elements and their 3-d location information from each single data frame through segmentation of point clouds, boundary extraction, feature calculation and classification. Finally, this paper utilizes the acquired navigation elements and their information to generate the state data of the indoor navigation module automatically. The experimental results demonstrate a high recognition accuracy of the proposed method.
NASA Astrophysics Data System (ADS)
Wu, Peng; Zhang, Yunchang; Lv, Yi; Hou, Xiandeng
2006-12-01
A simple, low cost and highly sensitive method based on cloud point extraction (CPE) for separation/preconcentration and thermospray flame quartz furnace atomic absorption spectrometry was proposed for the determination of ultratrace cadmium in water and urine samples. The analytical procedure involved the formation of analyte-entrapped surfactant micelles by mixing the analyte solution with an ammonium pyrrolidinedithiocarbamate (APDC) solution and a Triton X-114 solution. When the temperature of the system was higher than the cloud point of Triton X-114, the complex of cadmium-PDC entered the surfactant-rich phase and thus separation of the analyte from the matrix was achieved. Under optimal chemical and instrumental conditions, the limit of detection was 0.04 μg/L for cadmium with a sample volume of 10 mL. The analytical results of cadmium in water and urine samples agreed well with those by ICP-MS.
Towards semi-automatic rock mass discontinuity orientation and set analysis from 3D point clouds
NASA Astrophysics Data System (ADS)
Guo, Jiateng; Liu, Shanjun; Zhang, Peina; Wu, Lixin; Zhou, Wenhui; Yu, Yinan
2017-06-01
Obtaining accurate information on rock mass discontinuities for deformation analysis and the evaluation of rock mass stability is important. Obtaining measurements for high and steep zones with the traditional compass method is difficult. Photogrammetry, three-dimensional (3D) laser scanning and other remote sensing methods have gradually become mainstream methods. In this study, a method that is based on a 3D point cloud is proposed to semi-automatically extract rock mass structural plane information. The original data are pre-treated prior to segmentation by removing outlier points. The next step is to segment the point cloud into different point subsets. Various parameters, such as the normal, dip/direction and dip, can be calculated for each point subset after obtaining the equation of the best fit plane for the relevant point subset. A cluster analysis (a point subset that satisfies some conditions and thus forms a cluster) is performed based on the normal vectors by introducing the firefly algorithm (FA) and the fuzzy c-means (FCM) algorithm. Finally, clusters that belong to the same discontinuity sets are merged and coloured for visualization purposes. A prototype system is developed based on this method to extract the points of the rock discontinuity from a 3D point cloud. A comparison with existing software shows that this method is feasible. This method can provide a reference for rock mechanics, 3D geological modelling and other related fields.
a Voxel-Based Filtering Algorithm for Mobile LIDAR Data
NASA Astrophysics Data System (ADS)
Qin, H.; Guan, G.; Yu, Y.; Zhong, L.
2018-04-01
This paper presents a stepwise voxel-based filtering algorithm for mobile LiDAR data. In the first step, to improve computational efficiency, mobile LiDAR points, in xy-plane, are first partitioned into a set of two-dimensional (2-D) blocks with a given block size, in each of which all laser points are further organized into an octree partition structure with a set of three-dimensional (3-D) voxels. Then, a voxel-based upward growing processing is performed to roughly separate terrain from non-terrain points with global and local terrain thresholds. In the second step, the extracted terrain points are refined by computing voxel curvatures. This voxel-based filtering algorithm is comprehensively discussed in the analyses of parameter sensitivity and overall performance. An experimental study performed on multiple point cloud samples, collected by different commercial mobile LiDAR systems, showed that the proposed algorithm provides a promising solution to terrain point extraction from mobile point clouds.
NASA Astrophysics Data System (ADS)
Ferraz, A.; Painter, T. H.; Saatchi, S.; Bormann, K. J.
2016-12-01
Fusion of multi-temporal Airborne Snow Observatory (ASO) lidar data for mountainous vegetation ecosystems studies The NASA Jet Propulsion Laboratory developed the Airborne Snow Observatory (ASO), a coupled scanning lidar system and imaging spectrometer, to quantify the spatial distribution of snow volume and dynamics over mountains watersheds (Painter et al., 2015). To do this, ASO weekly over-flights mountainous areas during snowfall and snowmelt seasons. In addition, there are additional flights in snow-off conditions to calculate Digital Terrain Models (DTM). In this study, we focus on the reliability of ASO lidar data to characterize the 3D forest vegetation structure. The density of a single point cloud acquisition is of nearly 1 pt/m2, which is not optimal to properly characterize vegetation. However, ASO covers a given study site up to 14 times a year that enables computing a high-resolution point cloud by merging single acquisitions. In this study, we present a method to automatically register ASO multi-temporal lidar 3D point clouds. Although flight specifications do not change between acquisition dates, lidar datasets might have significant planimetric shifts due to inaccuracies in platform trajectory estimation introduced by the GPS system and drifts of the IMU. There are a large number of methodologies that address the problem of 3D data registration (Gressin et al., 2013). Briefly, they look for common primitive features in both datasets such as buildings corners, structures like electric poles, DTM breaklines or deformations. However, they are not suited for our experiment. First, single acquisition point clouds have low density that makes the extraction of primitive features difficult. Second, the landscape significantly changes between flights due to snowfall and snowmelt. Therefore, we developed a method to automatically register point clouds using tree apexes as keypoints because they are features that are supposed to experience little change during winter season. We applied the method to 14 lidar datasets (12 snow-on and 2 snow-off) acquired over the Tuolumne River Basin (California) in the year of 2014. To assess the reliability of the merged point cloud, we analyze the quality of vegetation related products such as canopy height models (CHM) and vertical vegetation profiles.
NASA Astrophysics Data System (ADS)
Li, Jiekang; Li, Guirong; Han, Qian
2016-12-01
In this paper, two kinds of salophens (Sal) with different solubilities, Sal1 and Sal2, have been respectively synthesized, and they all can combine with uranyl to form stable complexes: [UO22 +-Sal1] and [UO22 +-Sal2]. Among them, [UO22 +-Sal1] was used as ligand to extract uranium in complex samples by dual cloud point extraction (dCPE), and [UO22 +-Sal2] was used as catalyst for the determination of uranium by photocatalytic resonance fluorescence (RF) method. The photocatalytic characteristic of [UO22 +-Sal2] on the oxidized pyronine Y (PRY) by potassium bromate which leads to the decrease of RF intensity of PRY were studied. The reduced value of RF intensity of reaction system (ΔF) is in proportional to the concentration of uranium (c), and a novel photo-catalytic RF method was developed for the determination of trace uranium (VI) after dCPE. The combination of photo-catalytic RF techniques and dCPE procedure endows the presented methods with enhanced sensitivity and selectivity. Under optimal conditions, the linear calibration curves range for 0.067 to 6.57 ng mL- 1, the linear regression equation was ΔF = 438.0 c (ng mL- 1) + 175.6 with the correlation coefficient r = 0.9981. The limit of detection was 0.066 ng mL- 1. The proposed method was successfully applied for the separation and determination of uranium in real samples with the recoveries of 95.0-103.5%. The mechanisms of the indicator reaction and dCPE are discussed.
Traffic sign detection in MLS acquired point clouds for geometric and image-based semantic inventory
NASA Astrophysics Data System (ADS)
Soilán, Mario; Riveiro, Belén; Martínez-Sánchez, Joaquín; Arias, Pedro
2016-04-01
Nowadays, mobile laser scanning has become a valid technology for infrastructure inspection. This technology permits collecting accurate 3D point clouds of urban and road environments and the geometric and semantic analysis of data became an active research topic in the last years. This paper focuses on the detection of vertical traffic signs in 3D point clouds acquired by a LYNX Mobile Mapper system, comprised of laser scanning and RGB cameras. Each traffic sign is automatically detected in the LiDAR point cloud, and its main geometric parameters can be automatically extracted, therefore aiding the inventory process. Furthermore, the 3D position of traffic signs are reprojected on the 2D images, which are spatially and temporally synced with the point cloud. Image analysis allows for recognizing the traffic sign semantics using machine learning approaches. The presented method was tested in road and urban scenarios in Galicia (Spain). The recall results for traffic sign detection are close to 98%, and existing false positives can be easily filtered after point cloud projection. Finally, the lack of a large, publicly available Spanish traffic sign database is pointed out.
Semantic Segmentation of Building Elements Using Point Cloud Hashing
NASA Astrophysics Data System (ADS)
Chizhova, M.; Gurianov, A.; Hess, M.; Luhmann, T.; Brunn, A.; Stilla, U.
2018-05-01
For the interpretation of point clouds, the semantic definition of extracted segments from point clouds or images is a common problem. Usually, the semantic of geometrical pre-segmented point cloud elements are determined using probabilistic networks and scene databases. The proposed semantic segmentation method is based on the psychological human interpretation of geometric objects, especially on fundamental rules of primary comprehension. Starting from these rules the buildings could be quite well and simply classified by a human operator (e.g. architect) into different building types and structural elements (dome, nave, transept etc.), including particular building parts which are visually detected. The key part of the procedure is a novel method based on hashing where point cloud projections are transformed into binary pixel representations. A segmentation approach released on the example of classical Orthodox churches is suitable for other buildings and objects characterized through a particular typology in its construction (e.g. industrial objects in standardized enviroments with strict component design allowing clear semantic modelling).
Image-Based Airborne LiDAR Point Cloud Encoding for 3d Building Model Retrieval
NASA Astrophysics Data System (ADS)
Chen, Yi-Chen; Lin, Chao-Hung
2016-06-01
With the development of Web 2.0 and cyber city modeling, an increasing number of 3D models have been available on web-based model-sharing platforms with many applications such as navigation, urban planning, and virtual reality. Based on the concept of data reuse, a 3D model retrieval system is proposed to retrieve building models similar to a user-specified query. The basic idea behind this system is to reuse these existing 3D building models instead of reconstruction from point clouds. To efficiently retrieve models, the models in databases are compactly encoded by using a shape descriptor generally. However, most of the geometric descriptors in related works are applied to polygonal models. In this study, the input query of the model retrieval system is a point cloud acquired by Light Detection and Ranging (LiDAR) systems because of the efficient scene scanning and spatial information collection. Using Point clouds with sparse, noisy, and incomplete sampling as input queries is more difficult than that by using 3D models. Because that the building roof is more informative than other parts in the airborne LiDAR point cloud, an image-based approach is proposed to encode both point clouds from input queries and 3D models in databases. The main goal of data encoding is that the models in the database and input point clouds can be consistently encoded. Firstly, top-view depth images of buildings are generated to represent the geometry surface of a building roof. Secondly, geometric features are extracted from depth images based on height, edge and plane of building. Finally, descriptors can be extracted by spatial histograms and used in 3D model retrieval system. For data retrieval, the models are retrieved by matching the encoding coefficients of point clouds and building models. In experiments, a database including about 900,000 3D models collected from the Internet is used for evaluation of data retrieval. The results of the proposed method show a clear superiority over related methods.
Tunnel Point Cloud Filtering Method Based on Elliptic Cylindrical Model
NASA Astrophysics Data System (ADS)
Zhua, Ningning; Jiaa, Yonghong; Luo, Lun
2016-06-01
The large number of bolts and screws that attached to the subway shield ring plates, along with the great amount of accessories of metal stents and electrical equipments mounted on the tunnel walls, make the laser point cloud data include lots of non-tunnel section points (hereinafter referred to as non-points), therefore affecting the accuracy for modeling and deformation monitoring. This paper proposed a filtering method for the point cloud based on the elliptic cylindrical model. The original laser point cloud data was firstly projected onto a horizontal plane, and a searching algorithm was given to extract the edging points of both sides, which were used further to fit the tunnel central axis. Along the axis the point cloud was segmented regionally, and then fitted as smooth elliptic cylindrical surface by means of iteration. This processing enabled the automatic filtering of those inner wall non-points. Experiments of two groups showed coincident results, that the elliptic cylindrical model based method could effectively filter out the non-points, and meet the accuracy requirements for subway deformation monitoring. The method provides a new mode for the periodic monitoring of tunnel sections all-around deformation in subways routine operation and maintenance.
Rapid, semi-automatic fracture and contact mapping for point clouds, images and geophysical data
NASA Astrophysics Data System (ADS)
Thiele, Samuel T.; Grose, Lachlan; Samsu, Anindita; Micklethwaite, Steven; Vollgger, Stefan A.; Cruden, Alexander R.
2017-12-01
The advent of large digital datasets from unmanned aerial vehicle (UAV) and satellite platforms now challenges our ability to extract information across multiple scales in a timely manner, often meaning that the full value of the data is not realised. Here we adapt a least-cost-path solver and specially tailored cost functions to rapidly interpolate structural features between manually defined control points in point cloud and raster datasets. We implement the method in the geographic information system QGIS and the point cloud and mesh processing software CloudCompare. Using these implementations, the method can be applied to a variety of three-dimensional (3-D) and two-dimensional (2-D) datasets, including high-resolution aerial imagery, digital outcrop models, digital elevation models (DEMs) and geophysical grids. We demonstrate the algorithm with four diverse applications in which we extract (1) joint and contact patterns in high-resolution orthophotographs, (2) fracture patterns in a dense 3-D point cloud, (3) earthquake surface ruptures of the Greendale Fault associated with the Mw7.1 Darfield earthquake (New Zealand) from high-resolution light detection and ranging (lidar) data, and (4) oceanic fracture zones from bathymetric data of the North Atlantic. The approach improves the consistency of the interpretation process while retaining expert guidance and achieves significant improvements (35-65 %) in digitisation time compared to traditional methods. Furthermore, it opens up new possibilities for data synthesis and can quantify the agreement between datasets and an interpretation.
Continuously Deformation Monitoring of Subway Tunnel Based on Terrestrial Point Clouds
NASA Astrophysics Data System (ADS)
Kang, Z.; Tuo, L.; Zlatanova, S.
2012-07-01
The deformation monitoring of subway tunnel is of extraordinary necessity. Therefore, a method for deformation monitoring based on terrestrial point clouds is proposed in this paper. First, the traditional adjacent stations registration is replaced by sectioncontrolled registration, so that the common control points can be used by each station and thus the error accumulation avoided within a section. Afterwards, the central axis of the subway tunnel is determined through RANSAC (Random Sample Consensus) algorithm and curve fitting. Although with very high resolution, laser points are still discrete and thus the vertical section is computed via the quadric fitting of the vicinity of interest, instead of the fitting of the whole model of a subway tunnel, which is determined by the intersection line rotated about the central axis of tunnel within a vertical plane. The extraction of the vertical section is then optimized using RANSAC for the purpose of filtering out noises. Based on the extracted vertical sections, the volume of tunnel deformation is estimated by the comparison between vertical sections extracted at the same position from different epochs of point clouds. Furthermore, the continuously extracted vertical sections are deployed to evaluate the convergent tendency of the tunnel. The proposed algorithms are verified using real datasets in terms of accuracy and computation efficiency. The experimental result of fitting accuracy analysis shows the maximum deviation between interpolated point and real point is 1.5 mm, and the minimum one is 0.1 mm; the convergent tendency of the tunnel was detected by the comparison of adjacent fitting radius. The maximum error is 6 mm, while the minimum one is 1 mm. The computation cost of vertical section abstraction is within 3 seconds/section, which proves high efficiency..
Automated extraction and analysis of rock discontinuity characteristics from 3D point clouds
NASA Astrophysics Data System (ADS)
Bianchetti, Matteo; Villa, Alberto; Agliardi, Federico; Crosta, Giovanni B.
2016-04-01
A reliable characterization of fractured rock masses requires an exhaustive geometrical description of discontinuities, including orientation, spacing, and size. These are required to describe discontinuum rock mass structure, perform Discrete Fracture Network and DEM modelling, or provide input for rock mass classification or equivalent continuum estimate of rock mass properties. Although several advanced methodologies have been developed in the last decades, a complete characterization of discontinuity geometry in practice is still challenging, due to scale-dependent variability of fracture patterns and difficult accessibility to large outcrops. Recent advances in remote survey techniques, such as terrestrial laser scanning and digital photogrammetry, allow a fast and accurate acquisition of dense 3D point clouds, which promoted the development of several semi-automatic approaches to extract discontinuity features. Nevertheless, these often need user supervision on algorithm parameters which can be difficult to assess. To overcome this problem, we developed an original Matlab tool, allowing fast, fully automatic extraction and analysis of discontinuity features with no requirements on point cloud accuracy, density and homogeneity. The tool consists of a set of algorithms which: (i) process raw 3D point clouds, (ii) automatically characterize discontinuity sets, (iii) identify individual discontinuity surfaces, and (iv) analyse their spacing and persistence. The tool operates in either a supervised or unsupervised mode, starting from an automatic preliminary exploration data analysis. The identification and geometrical characterization of discontinuity features is divided in steps. First, coplanar surfaces are identified in the whole point cloud using K-Nearest Neighbor and Principal Component Analysis algorithms optimized on point cloud accuracy and specified typical facet size. Then, discontinuity set orientation is calculated using Kernel Density Estimation and principal vector similarity criteria. Poles to points are assigned to individual discontinuity objects using easy custom vector clustering and Jaccard distance approaches, and each object is segmented into planar clusters using an improved version of the DBSCAN algorithm. Modal set orientations are then recomputed by cluster-based orientation statistics to avoid the effects of biases related to cluster size and density heterogeneity of the point cloud. Finally, spacing values are measured between individual discontinuity clusters along scanlines parallel to modal pole vectors, whereas individual feature size (persistence) is measured using 3D convex hull bounding boxes. Spacing and size are provided both as raw population data and as summary statistics. The tool is optimized for parallel computing on 64bit systems, and a Graphic User Interface (GUI) has been developed to manage data processing, provide several outputs, including reclassified point clouds, tables, plots, derived fracture intensity parameters, and export to modelling software tools. We present test applications performed both on synthetic 3D data (simple 3D solids) and real case studies, validating the results with existing geomechanical datasets.
Georeferencing UAS Derivatives Through Point Cloud Registration with Archived Lidar Datasets
NASA Astrophysics Data System (ADS)
Magtalas, M. S. L. Y.; Aves, J. C. L.; Blanco, A. C.
2016-10-01
Georeferencing gathered images is a common step before performing spatial analysis and other processes on acquired datasets using unmanned aerial systems (UAS). Methods of applying spatial information to aerial images or their derivatives is through onboard GPS (Global Positioning Systems) geotagging, or through tying of models through GCPs (Ground Control Points) acquired in the field. Currently, UAS (Unmanned Aerial System) derivatives are limited to meter-levels of accuracy when their generation is unaided with points of known position on the ground. The use of ground control points established using survey-grade GPS or GNSS receivers can greatly reduce model errors to centimeter levels. However, this comes with additional costs not only with instrument acquisition and survey operations, but also in actual time spent in the field. This study uses a workflow for cloud-based post-processing of UAS data in combination with already existing LiDAR data. The georeferencing of the UAV point cloud is executed using the Iterative Closest Point algorithm (ICP). It is applied through the open-source CloudCompare software (Girardeau-Montaut, 2006) on a `skeleton point cloud'. This skeleton point cloud consists of manually extracted features consistent on both LiDAR and UAV data. For this cloud, roads and buildings with minimal deviations given their differing dates of acquisition are considered consistent. Transformation parameters are computed for the skeleton cloud which could then be applied to the whole UAS dataset. In addition, a separate cloud consisting of non-vegetation features automatically derived using CANUPO classification algorithm (Brodu and Lague, 2012) was used to generate a separate set of parameters. Ground survey is done to validate the transformed cloud. An RMSE value of around 16 centimeters was found when comparing validation data to the models georeferenced using the CANUPO cloud and the manual skeleton cloud. Cloud-to-cloud distance computations of CANUPO and manual skeleton clouds were obtained with values for both equal to around 0.67 meters at 1.73 standard deviation.
Ohashi, Akira; Ito, Hiromi; Kanai, Chikako; Imura, Hisanori; Ohashi, Kousaburo
2005-01-30
The cloud point extraction behavior of iron(III) and vanadium(V) using 8-quinolinol derivatives (HA) such as 8-quinolinol (HQ), 2-methyl-8-quinolinol (HMQ), 5-butyloxymethyl-8-quinolinol (HO(4)Q), 5-hexyloxymethyl-8-quinolinol (HO(6)Q), and 2-methyl-5-octyloxymethyl-8-quinolinol (HMO(8)Q) and Triton X-100 solution was investigated. Iron(III) was extracted with HA and 4% (v/v) Triton X-100 in the pH range of 1.70-5.44. Above pH 4.0, more than 95% of iron(III) was extracted with HQ, HMQ, and HMO(8)Q. Vanadium(V) was also extracted with HA and 4% (v/v) Triton X-100 in the pH range of 2.07-5.00, and the extractability increased in the following order of HMQ < HQ < HO(4)Q < HO(6)Q. The cloud point extraction was applied to the determination of iron(III) in the riverine water reference by a graphite furnace atomic absorption spectroscopy. When 1.25 x 10(-3)M HMQ and 1% (v/v) Triton X-100 were used, the found values showed a good agreement with the certified ones within the 2% of the R.S.D. Moreover, the effect of an alkyl group on the solubility of 5-alkyloxymethyl-8-quinolinol and 2-methyl-5-alkyloxymethyl-8-quinolinol in 4% (v/v) Triton X-100 at 25 degrees C was also investigated.
Wang, Yunsheng; Weinacker, Holger; Koch, Barbara
2008-01-01
A procedure for both vertical canopy structure analysis and 3D single tree modelling based on Lidar point cloud is presented in this paper. The whole area of research is segmented into small study cells by a raster net. For each cell, a normalized point cloud whose point heights represent the absolute heights of the ground objects is generated from the original Lidar raw point cloud. The main tree canopy layers and the height ranges of the layers are detected according to a statistical analysis of the height distribution probability of the normalized raw points. For the 3D modelling of individual trees, individual trees are detected and delineated not only from the top canopy layer but also from the sub canopy layer. The normalized points are resampled into a local voxel space. A series of horizontal 2D projection images at the different height levels are then generated respect to the voxel space. Tree crown regions are detected from the projection images. Individual trees are then extracted by means of a pre-order forest traversal process through all the tree crown regions at the different height levels. Finally, 3D tree crown models of the extracted individual trees are reconstructed. With further analyses on the 3D models of individual tree crowns, important parameters such as crown height range, crown volume and crown contours at the different height levels can be derived. PMID:27879916
3D reconstruction of wooden member of ancient architecture from point clouds
NASA Astrophysics Data System (ADS)
Zhang, Ruiju; Wang, Yanmin; Li, Deren; Zhao, Jun; Song, Daixue
2006-10-01
This paper presents a 3D reconstruction method to model wooden member of ancient architecture from point clouds based on improved deformable model. Three steps are taken to recover the shape of wooden member. Firstly, Hessian matrix is adopted to compute the axe of wooden member. Secondly, an initial model of wooden member is made by contour orthogonal to its axis. Thirdly, an accurate model is got through the coupling effect between the initial model and the point clouds of the wooden member according to the theory of improved deformable model. Every step and algorithm is studied and described in the paper. Using the point clouds captured from Forbidden City of China, shaft member and beam member are taken as examples to test the method proposed in the paper. Results show the efficiency and robustness of the method addressed in the literature to model the wooden member of ancient architecture.
Sun, Mei; Wu, Qianghua
2010-04-15
A cloud point extraction (CPE) method for the preconcentration of ultra-trace aluminum in human albumin prior to its determination by graphite furnace atomic absorption spectrometry (GFAAS) had been developed in this paper. The CPE method was based on the complex of Al(III) with 1-(2-pyridylazo)-2-naphthol (PAN) and Triton X-114 was used as non-ionic surfactant. The main factors affecting cloud point extraction efficiency, such as pH of solution, concentration and kind of complexing agent, concentration of non-ionic surfactant, equilibration temperature and time, were investigated in detail. An enrichment factor of 34.8 was obtained for the preconcentration of Al(III) with 10 mL solution. Under the optimal conditions, the detection limit of Al(III) was 0.06 ng mL(-1). The relative standard deviation (n=7) of sample was 3.6%, values of recovery of aluminum were changed from 92.3% to 94.7% for three samples. This method is simple, accurate, sensitive and can be applied to the determination of ultra-trace aluminum in human albumin. 2009 Elsevier B.V. All rights reserved.
Nazar, Muhammad Faizan; Shah, Syed Sakhawat; Eastoe, Julian; Khan, Asad Muhammad; Shah, Afzal
2011-11-15
A viable cost-effective approach employing mixtures of non-ionic surfactants Triton X-114/Triton X-100 (TX-114/TX-100), and subsequent cloud point extraction (CPE), has been utilized to concentrate and recycle inorganic nanoparticles (NPs) in aqueous media. Gold Au- and palladium Pd-NPs have been pre-synthesized in aqueous phases and stabilized by sodium 2-mercaptoethanesulfonate (MES) ligands, then dispersed in aqueous non-ionic surfactant mixtures. Heating the NP-micellar systems induced cloud point phase separations, resulting in concentration of the NPs in lower phases after the transition. For the Au-NPs UV/vis absorption has been used to quantify the recovery and recycle efficiency after five repeated CPE cycles. Transmission electron microscopy (TEM) was used to investigate NP size, shape, and stability. The results showed that NPs are preserved after the recovery processes, but highlight a potential limitation, in that further particle growth can occur in the condensed phases. Copyright © 2011 Elsevier Inc. All rights reserved.
Naeemullah; Kazi, Tasneem G; Shah, Faheem; Afridi, Hassan I; Baig, Jameel Ahmed; Soomro, Abdul Sattar
2013-01-01
A simple method for the preconcentration of cadmium (Cd) and nickel (Ni) in drinking and wastewater samples was developed. Cloud point extraction has been used for the preconcentration of both metals, after formation of complexes with 8-hydroxyquinoline (8-HQ) and extraction with the surfactant octylphenoxypolyethoxyethanol (Triton X-114). Dilution of the surfactant-rich phase with acidified ethanol was performed after phase separation, and the Cd and Ni contents were measured by flame atomic absorption spectrometry. The experimental variables, such as pH, amounts of reagents (8-HQ and Triton X-114), temperature, incubation time, and sample volume, were optimized. After optimization of the complexation and extraction conditions, enhancement factors of 80 and 61, with LOD values of 0.22 and 0.52 microg/L, were obtained for Cd and Ni, respectively. The proposed method was applied satisfactorily for the determination of both elements in drinking and wastewater samples.
Automatic extraction of blocks from 3D point clouds of fractured rock
NASA Astrophysics Data System (ADS)
Chen, Na; Kemeny, John; Jiang, Qinghui; Pan, Zhiwen
2017-12-01
This paper presents a new method for extracting blocks and calculating block size automatically from rock surface 3D point clouds. Block size is an important rock mass characteristic and forms the basis for several rock mass classification schemes. The proposed method consists of four steps: 1) the automatic extraction of discontinuities using an improved Ransac Shape Detection method, 2) the calculation of discontinuity intersections based on plane geometry, 3) the extraction of block candidates based on three discontinuities intersecting one another to form corners, and 4) the identification of "true" blocks using an improved Floodfill algorithm. The calculated block sizes were compared with manual measurements in two case studies, one with fabricated cardboard blocks and the other from an actual rock mass outcrop. The results demonstrate that the proposed method is accurate and overcomes the inaccuracies, safety hazards, and biases of traditional techniques.
NASA Astrophysics Data System (ADS)
Wang, Yongbo; Sheng, Yehua; Lu, Guonian; Tian, Peng; Zhang, Kai
2008-04-01
Surface reconstruction is an important task in the field of 3d-GIS, computer aided design and computer graphics (CAD & CG), virtual simulation and so on. Based on available incremental surface reconstruction methods, a feature-constrained surface reconstruction approach for point cloud is presented. Firstly features are extracted from point cloud under the rules of curvature extremes and minimum spanning tree. By projecting local sample points to the fitted tangent planes and using extracted features to guide and constrain the process of local triangulation and surface propagation, topological relationship among sample points can be achieved. For the constructed models, a process named consistent normal adjustment and regularization is adopted to adjust normal of each face so that the correct surface model is achieved. Experiments show that the presented approach inherits the convenient implementation and high efficiency of traditional incremental surface reconstruction method, meanwhile, it avoids improper propagation of normal across sharp edges, which means the applicability of incremental surface reconstruction is greatly improved. Above all, appropriate k-neighborhood can help to recognize un-sufficient sampled areas and boundary parts, the presented approach can be used to reconstruct both open and close surfaces without additional interference.
The Feasibility of 3d Point Cloud Generation from Smartphones
NASA Astrophysics Data System (ADS)
Alsubaie, N.; El-Sheimy, N.
2016-06-01
This paper proposes a new technique for increasing the accuracy of direct geo-referenced image-based 3D point cloud generated from low-cost sensors in smartphones. The smartphone's motion sensors are used to directly acquire the Exterior Orientation Parameters (EOPs) of the captured images. These EOPs, along with the Interior Orientation Parameters (IOPs) of the camera/ phone, are used to reconstruct the image-based 3D point cloud. However, because smartphone motion sensors suffer from poor GPS accuracy, accumulated drift and high signal noise, inaccurate 3D mapping solutions often result. Therefore, horizontal and vertical linear features, visible in each image, are extracted and used as constraints in the bundle adjustment procedure. These constraints correct the relative position and orientation of the 3D mapping solution. Once the enhanced EOPs are estimated, the semi-global matching algorithm (SGM) is used to generate the image-based dense 3D point cloud. Statistical analysis and assessment are implemented herein, in order to demonstrate the feasibility of 3D point cloud generation from the consumer-grade sensors in smartphones.
Duan, Zhugeng; Zhao, Dan; Zeng, Yuan; Zhao, Yujin; Wu, Bingfang; Zhu, Jianjun
2015-01-01
Topography affects forest canopy height retrieval based on airborne Light Detection and Ranging (LiDAR) data a lot. This paper proposes a method for correcting deviations caused by topography based on individual tree crown segmentation. The point cloud of an individual tree was extracted according to crown boundaries of isolated individual trees from digital orthophoto maps (DOMs). Normalized canopy height was calculated by subtracting the elevation of centres of gravity from the elevation of point cloud. First, individual tree crown boundaries are obtained by carrying out segmentation on the DOM. Second, point clouds of the individual trees are extracted based on the boundaries. Third, precise DEM is derived from the point cloud which is classified by a multi-scale curvature classification algorithm. Finally, a height weighted correction method is applied to correct the topological effects. The method is applied to LiDAR data acquired in South China, and its effectiveness is tested using 41 field survey plots. The results show that the terrain impacts the canopy height of individual trees in that the downslope side of the tree trunk is elevated and the upslope side is depressed. This further affects the extraction of the location and crown of individual trees. A strong correlation was detected between the slope gradient and the proportions of returns with height differences more than 0.3, 0.5 and 0.8 m in the total returns, with coefficient of determination R2 of 0.83, 0.76, and 0.60 (n = 41), respectively. PMID:26016907
Li, Jiekang; Li, Guirong; Han, Qian
2016-12-05
In this paper, two kinds of salophens (Sal) with different solubilities, Sal1 and Sal2, have been respectively synthesized, and they all can combine with uranyl to form stable complexes: [UO2(2+)-Sal1] and [UO2(2+)-Sal2]. Among them, [UO2(2+)-Sal1] was used as ligand to extract uranium in complex samples by dual cloud point extraction (dCPE), and [UO2(2+)-Sal2] was used as catalyst for the determination of uranium by photocatalytic resonance fluorescence (RF) method. The photocatalytic characteristic of [UO2(2+)-Sal2] on the oxidized pyronine Y (PRY) by potassium bromate which leads to the decrease of RF intensity of PRY were studied. The reduced value of RF intensity of reaction system (ΔF) is in proportional to the concentration of uranium (c), and a novel photo-catalytic RF method was developed for the determination of trace uranium (VI) after dCPE. The combination of photo-catalytic RF techniques and dCPE procedure endows the presented methods with enhanced sensitivity and selectivity. Under optimal conditions, the linear calibration curves range for 0.067 to 6.57ngmL(-1), the linear regression equation was ΔF=438.0 c (ngmL(-1))+175.6 with the correlation coefficient r=0.9981. The limit of detection was 0.066ngmL(-1). The proposed method was successfully applied for the separation and determination of uranium in real samples with the recoveries of 95.0-103.5%. The mechanisms of the indicator reaction and dCPE are discussed. Copyright © 2016 Elsevier B.V. All rights reserved.
Chen, Chen-Wen; Hsu, Wen-Chan; Lu, Ya-Chen; Weng, Jing-Ru; Feng, Chia-Hsien
2018-02-15
Parabens are common preservatives and environmental hormones. As such, possible detrimental health effects could be amplified through their widespread use in foods, cosmetics, and pharmaceutical products. Thus, the determination of parabens in such products is of particular importance. This study explored vortex-assisted dispersive liquid-liquid microextraction techniques based on the solidification of a floating organic drop (VA-DLLME-SFO) and salt-assisted cloud point extraction (SA-CPE) for paraben extraction. Microanalysis was performed using a capillary liquid chromatography-ultraviolet detection system. These techniques were modified successfully to determine four parabens in 19 commercial products. The regression equations of these parabens exhibited good linearity (r 2 =0.998, 0.1-10μg/mL), good precision (RSD<5%) and accuracy (RE<5%), reduced reagent consumption and reaction times (<6min), and excellent sample versatility. VA-DLLME-SFO was also particularly convenient due to the use of a solidified extract. Thus, the VA-DLLME-SFO technique was better suited to the extraction of parabens from complex matrices. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kang, Zhizhong
2013-10-01
This paper presents a new approach to automatic registration of terrestrial laser scanning (TLS) point clouds utilizing a novel robust estimation method by an efficient BaySAC (BAYes SAmpling Consensus). The proposed method directly generates reflectance images from 3D point clouds, and then using SIFT algorithm extracts keypoints to identify corresponding image points. The 3D corresponding points, from which transformation parameters between point clouds are computed, are acquired by mapping the 2D ones onto the point cloud. To remove false accepted correspondences, we implement a conditional sampling method to select the n data points with the highest inlier probabilities as a hypothesis set and update the inlier probabilities of each data point using simplified Bayes' rule for the purpose of improving the computation efficiency. The prior probability is estimated by the verification of the distance invariance between correspondences. The proposed approach is tested on four data sets acquired by three different scanners. The results show that, comparing with the performance of RANSAC, BaySAC leads to less iterations and cheaper computation cost when the hypothesis set is contaminated with more outliers. The registration results also indicate that, the proposed algorithm can achieve high registration accuracy on all experimental datasets.
Cloud Point Extraction for Electroanalysis: Anodic Stripping Voltammetry of Cadmium
Rusinek, Cory A.; Bange, Adam; Papautsky, Ian; Heineman, William R.
2016-01-01
Cloud point extraction (CPE) is a well-established technique for the pre-concentration of hydrophobic species from water without the use of organic solvents. Subsequent analysis is then typically performed via atomic absorption spectroscopy (AAS), UV-Vis spectroscopy, or high performance liquid chromatography (HPLC). However, the suitability of CPE for electroanalytical methods such as stripping voltammetry has not been reported. We demonstrate the use of CPE for electroanalysis using the determination of cadmium (Cd2+) by anodic stripping voltammetry (ASV) as a representative example. Rather than using the chelating agents which are commonly used in CPE to form a hydrophobic, extractable metal complex, we used iodide and sulfuric acid to neutralize the charge on Cd2+ to form an extractable ion pair. Triton X-114 was chosen as the surfactant for the extraction because its cloud point temperature is near room temperature (22–25° C). Bare glassy carbon (GC), bismuth-coated glassy carbon (Bi-GC), and mercury-coated glassy carbon (Hg-GC) electrodes were compared for the CPE-ASV. A detection limit for Cd2+ of 1.7 nM (0.2 ppb) was obtained with the Hg-GC electrode. Comparison of ASV analysis without CPE was also investigated and a 20x decrease (4.0 ppb) in the detection limit was observed. The suitability of this procedure for the analysis of tap and river water samples was also demonstrated. This simple, versatile, environmentally friendly and cost-effective extraction method is potentially applicable to a wide variety of transition metals and organic compounds that are amenable to detection by electroanalytical methods. PMID:25996561
Investigation of the coupling of the momentum distribution of a BEC with its collective of modes
NASA Astrophysics Data System (ADS)
Henn, Emanuel; Tavares, Pedro; Fritsch, Amilson; Vivanco, Franklin; Telles, Gustavo; Bagnato, Vanderlei
In our group we have a strong research line on quantum turbulence and the general investigation of Bose-Einstein condensates (BEC) subjected to oscillatory excitations. Inside this research line we investigate first the behavior of the normal modes of the BEC under this excitation and observe a non-linear behavior in the amplitude of the quadrupolar mode. Also, inside this same procedure of investigation we study the momentum distribution of a BEC to understand if it is possible to extract Kolmogorov like excitation spectra which would point to a turbulent state of matter. The condensate is perturbed, and we let it evolve in-trap after which we perform standard time-of- flight absorption imaging. The momentum distribution is extracted and analyzed as a function of the in-trap free evolution time for a 2D projected cloud. We show that the momentum distribution has its features varying periodically with the same frequency as the quadrupolar mode displayed by the atomic gas hinting at a strong coupling of both. The main consequence of that one cannot be assertive about the quantitative features of the extract spectrum of momentum and we can only rely on its qualitative features. Financial Support: FAPESP, CNPq.
Mohd, N I; Zain, N N M; Raoov, M; Mohamad, S
2018-04-01
A new cloud point methodology was successfully used for the extraction of carcinogenic pesticides in milk samples as a prior step to their determination by spectrophotometry. In this work, non-ionic silicone surfactant, also known as 3-(3-hydroxypropyl-heptatrimethylxyloxane), was chosen as a green extraction solvent because of its structure and properties. The effect of different parameters, such as the type of surfactant, concentration and volume of surfactant, pH, salt, temperature, incubation time and water content on the cloud point extraction of carcinogenic pesticides such as atrazine and propazine, was studied in detail and a set of optimum conditions was established. A good correlation coefficient ( R 2 ) in the range of 0.991-0.997 for all calibration curves was obtained. The limit of detection was 1.06 µg l -1 (atrazine) and 1.22 µg l -1 (propazine), and the limit of quantitation was 3.54 µg l -1 (atrazine) and 4.07 µg l -1 (propazine). Satisfactory recoveries in the range of 81-108% were determined in milk samples at 5 and 1000 µg l -1 , respectively, with low relative standard deviation, n = 3 of 0.301-7.45% in milk matrices. The proposed method is very convenient, rapid, cost-effective and environmentally friendly for food analysis.
NASA Astrophysics Data System (ADS)
Sun, Z.; Xu, Y.; Hoegner, L.; Stilla, U.
2018-05-01
In this work, we propose a classification method designed for the labeling of MLS point clouds, with detrended geometric features extracted from the points of the supervoxel-based local context. To achieve the analysis of complex 3D urban scenes, acquired points of the scene should be tagged with individual labels of different classes. Thus, assigning a unique label to the points of an object that belong to the same category plays an essential role in the entire 3D scene analysis workflow. Although plenty of studies in this field have been reported, this work is still a challenging task. Specifically, in this work: 1) A novel geometric feature extraction method, detrending the redundant and in-salient information in the local context, is proposed, which is proved to be effective for extracting local geometric features from the 3D scene. 2) Instead of using individual point as basic element, the supervoxel-based local context is designed to encapsulate geometric characteristics of points, providing a flexible and robust solution for feature extraction. 3) Experiments using complex urban scene with manually labeled ground truth are conducted, and the performance of proposed method with respect to different methods is analyzed. With the testing dataset, we have obtained a result of 0.92 for overall accuracy for assigning eight semantic classes.
Assessment of different models for computing the probability of a clear line of sight
NASA Astrophysics Data System (ADS)
Bojin, Sorin; Paulescu, Marius; Badescu, Viorel
2017-12-01
This paper is focused on modeling the morphological properties of the cloud fields in terms of the probability of a clear line of sight (PCLOS). PCLOS is defined as the probability that a line of sight between observer and a given point of the celestial vault goes freely without intersecting a cloud. A variety of PCLOS models assuming the cloud shape hemisphere, semi-ellipsoid and ellipsoid are tested. The effective parameters (cloud aspect ratio and absolute cloud fraction) are extracted from high-resolution series of sunshine number measurements. The performance of the PCLOS models is evaluated from the perspective of their ability in retrieving the point cloudiness. The advantages and disadvantages of the tested models are discussed, aiming to a simplified parameterization of PCLOS models.
A portable foot-parameter-extracting system
NASA Astrophysics Data System (ADS)
Zhang, MingKai; Liang, Jin; Li, Wenpan; Liu, Shifan
2016-03-01
In order to solve the problem of automatic foot measurement in garment customization, a new automatic footparameter- extracting system based on stereo vision, photogrammetry and heterodyne multiple frequency phase shift technology is proposed and implemented. The key technologies applied in the system are studied, including calibration of projector, alignment of point clouds, and foot measurement. Firstly, a new projector calibration algorithm based on plane model has been put forward to get the initial calibration parameters and a feature point detection scheme of calibration board image is developed. Then, an almost perfect match of two clouds is achieved by performing a first alignment using the Sampled Consensus - Initial Alignment algorithm (SAC-IA) and refining the alignment using the Iterative Closest Point algorithm (ICP). Finally, the approaches used for foot-parameterextracting and the system scheme are presented in detail. Experimental results show that the RMS error of the calibration result is 0.03 pixel and the foot parameter extracting experiment shows the feasibility of the extracting algorithm. Compared with the traditional measurement method, the system can be more portable, accurate and robust.
NASA Astrophysics Data System (ADS)
Wang, Yongzhi; Ma, Yuqing; Zhu, A.-xing; Zhao, Hui; Liao, Lixia
2018-05-01
Facade features represent segmentations of building surfaces and can serve as a building framework. Extracting facade features from three-dimensional (3D) point cloud data (3D PCD) is an efficient method for 3D building modeling. By combining the advantages of 3D PCD and two-dimensional optical images, this study describes the creation of a highly accurate building facade feature extraction method from 3D PCD with a focus on structural information. The new extraction method involves three major steps: image feature extraction, exploration of the mapping method between the image features and 3D PCD, and optimization of the initial 3D PCD facade features considering structural information. Results show that the new method can extract the 3D PCD facade features of buildings more accurately and continuously. The new method is validated using a case study. In addition, the effectiveness of the new method is demonstrated by comparing it with the range image-extraction method and the optical image-extraction method in the absence of structural information. The 3D PCD facade features extracted by the new method can be applied in many fields, such as 3D building modeling and building information modeling.
Yang, Xiupei; Jia, Zhihui; Yang, Xiaocui; Li, Gu; Liao, Xiangjun
2017-03-01
A cloud point extraction (CPE) method was used as a pre-concentration strategy prior to the determination of trace levels of silver in water by flame atomic absorption spectrometry (FAAS) The pre-concentration is based on the clouding phenomena of non-ionic surfactant, triton X-114, with Ag (I)/diethyldithiocarbamate (DDTC) complexes in which the latter is soluble in a micellar phase composed by the former. When the temperature increases above its cloud point, the Ag (I)/DDTC complexes are extracted into the surfactant-rich phase. The factors affecting the extraction efficiency including pH of the aqueous solution, concentration of the DDTC, amount of the surfactant, incubation temperature and time were investigated and optimized. Under the optimal experimental conditions, no interference was observed for the determination of 100 ng·mL -1 Ag + in the presence of various cations below their maximum concentrations allowed in this method, for instance, 50 μg·mL -1 for both Zn 2+ and Cu 2+ , 80 μg·mL -1 for Pb 2+ , 1000 μg·mL -1 for Mn 2+ , and 100 μg·mL -1 for both Cd 2+ and Ni 2+ . The calibration curve was linear in the range of 1-500 ng·mL -1 with a limit of detection (LOD) at 0.3 ng·mL -1 . The developed method was successfully applied for the determination of trace levels of silver in water samples such as river water and tap water.
NASA Astrophysics Data System (ADS)
Macher, H.; Grussenmeyer, P.; Landes, T.; Halin, G.; Chevrier, C.; Huyghe, O.
2017-08-01
The French collection of Plan-Reliefs, scale models of fortified towns, constitutes a precious testimony of the history of France. The aim of the URBANIA project is the valorisation and the diffusion of this Heritage through the creation of virtual models. The town scale model of Strasbourg at 1/600 currently exhibited in the Historical Museum of Strasbourg was selected as a case study. In this paper, the photogrammetric recording of this scale model is first presented. The acquisition protocol as well as the data post-processing are detailed. Then, the modelling of the city and more specially building blocks is investigated. Based on point clouds of the scale model, the extraction of roof elements is considered. It deals first with the segmentation of the point cloud into building blocks. Then, for each block, points belonging to roofs are identified and the extraction of chimney point clouds as well as roof ridges and roof planes is performed. Finally, the 3D parametric modelling of the building blocks is studied by considering roof polygons and polylines describing chimneys as input. In a future works section, the semantically enrichment and the potential usage scenarios of the scale model are envisaged.
Cloud-point detection using a portable thickness shear mode crystal resonator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mansure, A.J.; Spates, J.J.; Germer, J.W.
1997-08-01
The Thickness Shear Mode (TSM) crystal resonator monitors the crude oil by propagating a shear wave into the oil. The coupling of the shear wave and the crystal vibrations is a function of the viscosity of the oil. By driving the crystal with circuitry that incorporates feedback, it is possible to determine the change from Newtonian to non-Newtonian viscosity at the cloud point. A portable prototype TSM Cloud Point Detector (CPD) has performed flawlessly during field and lab tests proving the technique is less subjective or operator dependent than the ASTM standard. The TSM CPD, in contrast to standard viscositymore » techniques, makes the measurement in a closed container capable of maintaining up to 100 psi. The closed container minimizes losses of low molecular weight volatiles, allowing samples (25 ml) to be retested with the addition of chemicals. By cycling/thermal soaking the sample, the effects of thermal history can be investigated and eliminated as a source of confusion. The CPD is portable, suitable for shipping the field offices for use by personnel without special training or experience in cloud point measurements. As such, it can make cloud point data available without the delays and inconvenience of sending samples to special labs. The crystal resonator technology can be adapted to in-line monitoring of cloud point and deposition detection.« less
A hierarchical methodology for urban facade parsing from TLS point clouds
NASA Astrophysics Data System (ADS)
Li, Zhuqiang; Zhang, Liqiang; Mathiopoulos, P. Takis; Liu, Fangyu; Zhang, Liang; Li, Shuaipeng; Liu, Hao
2017-01-01
The effective and automated parsing of building facades from terrestrial laser scanning (TLS) point clouds of urban environments is an important research topic in the GIS and remote sensing fields. It is also challenging because of the complexity and great variety of the available 3D building facade layouts as well as the noise and data missing of the input TLS point clouds. In this paper, we introduce a novel methodology for the accurate and computationally efficient parsing of urban building facades from TLS point clouds. The main novelty of the proposed methodology is that it is a systematic and hierarchical approach that considers, in an adaptive way, the semantic and underlying structures of the urban facades for segmentation and subsequent accurate modeling. Firstly, the available input point cloud is decomposed into depth planes based on a data-driven method; such layer decomposition enables similarity detection in each depth plane layer. Secondly, the labeling of the facade elements is performed using the SVM classifier in combination with our proposed BieS-ScSPM algorithm. The labeling outcome is then augmented with weak architectural knowledge. Thirdly, least-squares fitted normalized gray accumulative curves are applied to detect regular structures, and a binarization dilation extraction algorithm is used to partition facade elements. A dynamic line-by-line division is further applied to extract the boundaries of the elements. The 3D geometrical façade models are then reconstructed by optimizing facade elements across depth plane layers. We have evaluated the performance of the proposed method using several TLS facade datasets. Qualitative and quantitative performance comparisons with several other state-of-the-art methods dealing with the same facade parsing problem have demonstrated its superiority in performance and its effectiveness in improving segmentation accuracy.
Yue, Chun-Hua; Zheng, Li-Tao; Guo, Qi-Ming; Li, Kun-Ping
2014-05-01
To establish a new method for the extraction and separation of curcuminoids from Curcuma longa rhizome by cloud-point preconcentration using microemulsions as solvent. The spectrophotometry was used to detect the solubility of curcumin in different oil phase, emulsifier and auxiliary emulsifier, and the microemulsion prescription was used for false three-phase figure optimization. The extraction process was optimized by uniform experiment design. The curcuminoids were separated from microemulsion extract by cloud-point preconcentration. Oil phase was oleic acid ethyl ester; Emulsifier was OP emulsifier; Auxiliary emulsifier was polyethylene glycol(peg) 400; The quantity of emulsifier to auxiliary emulsifier was the ratio of 5: 1; Microemulsion prescription was water-oleic acid ethyl ester-mixed emulsifier (0.45:0.1:0.45). The optimum extraction process was: time for 12.5 min, temperature of 52 degrees C, power of 360 W, frequency of 400 kHz, and the liquid-solid ratio of 40:1. The extraction rate of curcuminoids was 92.17% and 86.85% in microemulsion and oil phase, respectively. Curcuminoids is soluble in this microemulsion prescription with good extraction rate. This method is simple and suitable for curcuminoids extraction from Curcuma longa rhizome.
Section Curve Reconstruction and Mean-Camber Curve Extraction of a Point-Sampled Blade Surface
Li, Wen-long; Xie, He; Li, Qi-dong; Zhou, Li-ping; Yin, Zhou-ping
2014-01-01
The blade is one of the most critical parts of an aviation engine, and a small change in the blade geometry may significantly affect the dynamics performance of the aviation engine. Rapid advancements in 3D scanning techniques have enabled the inspection of the blade shape using a dense and accurate point cloud. This paper proposes a new method to achieving two common tasks in blade inspection: section curve reconstruction and mean-camber curve extraction with the representation of a point cloud. The mathematical morphology is expanded and applied to restrain the effect of the measuring defects and generate an ordered sequence of 2D measured points in the section plane. Then, the energy and distance are minimized to iteratively smoothen the measured points, approximate the section curve and extract the mean-camber curve. In addition, a turbine blade is machined and scanned to observe the curvature variation, energy variation and approximation error, which demonstrates the availability of the proposed method. The proposed method is simple to implement and can be applied in aviation casting-blade finish inspection, large forging-blade allowance inspection and visual-guided robot grinding localization. PMID:25551467
Section curve reconstruction and mean-camber curve extraction of a point-sampled blade surface.
Li, Wen-long; Xie, He; Li, Qi-dong; Zhou, Li-ping; Yin, Zhou-ping
2014-01-01
The blade is one of the most critical parts of an aviation engine, and a small change in the blade geometry may significantly affect the dynamics performance of the aviation engine. Rapid advancements in 3D scanning techniques have enabled the inspection of the blade shape using a dense and accurate point cloud. This paper proposes a new method to achieving two common tasks in blade inspection: section curve reconstruction and mean-camber curve extraction with the representation of a point cloud. The mathematical morphology is expanded and applied to restrain the effect of the measuring defects and generate an ordered sequence of 2D measured points in the section plane. Then, the energy and distance are minimized to iteratively smoothen the measured points, approximate the section curve and extract the mean-camber curve. In addition, a turbine blade is machined and scanned to observe the curvature variation, energy variation and approximation error, which demonstrates the availability of the proposed method. The proposed method is simple to implement and can be applied in aviation casting-blade finish inspection, large forging-blade allowance inspection and visual-guided robot grinding localization.
Image Capture with Synchronized Multiple-Cameras for Extraction of Accurate Geometries
NASA Astrophysics Data System (ADS)
Koehl, M.; Delacourt, T.; Boutry, C.
2016-06-01
This paper presents a project of recording and modelling tunnels, traffic circles and roads from multiple sensors. The aim is the representation and the accurate 3D modelling of a selection of road infrastructures as dense point clouds in order to extract profiles and metrics from it. Indeed, these models will be used for the sizing of infrastructures in order to simulate exceptional convoy truck routes. The objective is to extract directly from the point clouds the heights, widths and lengths of bridges and tunnels, the diameter of gyrating and to highlight potential obstacles for a convoy. Light, mobile and fast acquisition approaches based on images and videos from a set of synchronized sensors have been tested in order to obtain useable point clouds. The presented solution is based on a combination of multiple low-cost cameras designed on an on-boarded device allowing dynamic captures. The experimental device containing GoPro Hero4 cameras has been set up and used for tests in static or mobile acquisitions. That way, various configurations have been tested by using multiple synchronized cameras. These configurations are discussed in order to highlight the best operational configuration according to the shape of the acquired objects. As the precise calibration of each sensor and its optics are major factors in the process of creation of accurate dense point clouds, and in order to reach the best quality available from such cameras, the estimation of the internal parameters of fisheye lenses of the cameras has been processed. Reference measures were also realized by using a 3D TLS (Faro Focus 3D) to allow the accuracy assessment.
Feature Relevance Assessment of Multispectral Airborne LIDAR Data for Tree Species Classification
NASA Astrophysics Data System (ADS)
Amiri, N.; Heurich, M.; Krzystek, P.; Skidmore, A. K.
2018-04-01
The presented experiment investigates the potential of Multispectral Laser Scanning (MLS) point clouds for single tree species classification. The basic idea is to simulate a MLS sensor by combining two different Lidar sensors providing three different wavelngthes. The available data were acquired in the summer 2016 at the same date in a leaf-on condition with an average point density of 37 points/m2. For the purpose of classification, we segmented the combined 3D point clouds consisiting of three different spectral channels into 3D clusters using Normalized Cut segmentation approach. Then, we extracted four group of features from the 3D point cloud space. Once a varity of features has been extracted, we applied forward stepwise feature selection in order to reduce the number of irrelevant or redundant features. For the classification, we used multinomial logestic regression with L1 regularization. Our study is conducted using 586 ground measured single trees from 20 sample plots in the Bavarian Forest National Park, in Germany. Due to lack of reference data for some rare species, we focused on four classes of species. The results show an improvement between 4-10 pp for the tree species classification by using MLS data in comparison to a single wavelength based approach. A cross validated (15-fold) accuracy of 0.75 can be achieved when all feature sets from three different spectral channels are used. Our results cleary indicates that the use of MLS point clouds has great potential to improve detailed forest species mapping.
NASA Astrophysics Data System (ADS)
Hoegner, L.; Tuttas, S.; Xu, Y.; Eder, K.; Stilla, U.
2016-06-01
This paper discusses the automatic coregistration and fusion of 3d point clouds generated from aerial image sequences and corresponding thermal infrared (TIR) images. Both RGB and TIR images have been taken from a RPAS platform with a predefined flight path where every RGB image has a corresponding TIR image taken from the same position and with the same orientation with respect to the accuracy of the RPAS system and the inertial measurement unit. To remove remaining differences in the exterior orientation, different strategies for coregistering RGB and TIR images are discussed: (i) coregistration based on 2D line segments for every single TIR image and the corresponding RGB image. This method implies a mainly planar scene to avoid mismatches; (ii) coregistration of both the dense 3D point clouds from RGB images and from TIR images by coregistering 2D image projections of both point clouds; (iii) coregistration based on 2D line segments in every single TIR image and 3D line segments extracted from intersections of planes fitted in the segmented dense 3D point cloud; (iv) coregistration of both the dense 3D point clouds from RGB images and from TIR images using both ICP and an adapted version based on corresponding segmented planes; (v) coregistration of both image sets based on point features. The quality is measured by comparing the differences of the back projection of homologous points in both corrected RGB and TIR images.
Building Facade Modeling Under Line Feature Constraint Based on Close-Range Images
NASA Astrophysics Data System (ADS)
Liang, Y.; Sheng, Y. H.
2018-04-01
To solve existing problems in modeling facade of building merely with point feature based on close-range images , a new method for modeling building facade under line feature constraint is proposed in this paper. Firstly, Camera parameters and sparse spatial point clouds data were restored using the SFM , and 3D dense point clouds were generated with MVS; Secondly, the line features were detected based on the gradient direction , those detected line features were fit considering directions and lengths , then line features were matched under multiple types of constraints and extracted from multi-image sequence. At last, final facade mesh of a building was triangulated with point cloud and line features. The experiment shows that this method can effectively reconstruct the geometric facade of buildings using the advantages of combining point and line features of the close - range image sequence, especially in restoring the contour information of the facade of buildings.
A Registration Method Based on Contour Point Cloud for 3D Whole-Body PET and CT Images
Yang, Qiyao; Wang, Zhiguo; Zhang, Guoxu
2017-01-01
The PET and CT fusion image, combining the anatomical and functional information, has important clinical meaning. An effective registration of PET and CT images is the basis of image fusion. This paper presents a multithread registration method based on contour point cloud for 3D whole-body PET and CT images. Firstly, a geometric feature-based segmentation (GFS) method and a dynamic threshold denoising (DTD) method are creatively proposed to preprocess CT and PET images, respectively. Next, a new automated trunk slices extraction method is presented for extracting feature point clouds. Finally, the multithread Iterative Closet Point is adopted to drive an affine transform. We compare our method with a multiresolution registration method based on Mattes Mutual Information on 13 pairs (246~286 slices per pair) of 3D whole-body PET and CT data. Experimental results demonstrate the registration effectiveness of our method with lower negative normalization correlation (NC = −0.933) on feature images and less Euclidean distance error (ED = 2.826) on landmark points, outperforming the source data (NC = −0.496, ED = 25.847) and the compared method (NC = −0.614, ED = 16.085). Moreover, our method is about ten times faster than the compared one. PMID:28316979
Cloud Point Extraction for Electroanalysis: Anodic Stripping Voltammetry of Cadmium.
Rusinek, Cory A; Bange, Adam; Papautsky, Ian; Heineman, William R
2015-06-16
Cloud point extraction (CPE) is a well-established technique for the preconcentration of hydrophobic species from water without the use of organic solvents. Subsequent analysis is then typically performed via atomic absorption spectroscopy (AAS), UV-vis spectroscopy, or high performance liquid chromatography (HPLC). However, the suitability of CPE for electroanalytical methods such as stripping voltammetry has not been reported. We demonstrate the use of CPE for electroanalysis using the determination of cadmium (Cd(2+)) by anodic stripping voltammetry (ASV). Rather than using the chelating agents which are commonly used in CPE to form a hydrophobic, extractable metal complex, we used iodide and sulfuric acid to neutralize the charge on Cd(2+) to form an extractable ion pair. This offers good selectivity for Cd(2+) as no interferences were observed from other heavy metal ions. Triton X-114 was chosen as the surfactant for the extraction because its cloud point temperature is near room temperature (22-25 °C). Bare glassy carbon (GC), bismuth-coated glassy carbon (Bi-GC), and mercury-coated glassy carbon (Hg-GC) electrodes were compared for the CPE-ASV. A detection limit for Cd(2+) of 1.7 nM (0.2 ppb) was obtained with the Hg-GC electrode. ASV with CPE gave a 20x decrease (4.0 ppb) in the detection limit compared to ASV without CPE. The suitability of this procedure for the analysis of tap and river water samples was demonstrated. This simple, versatile, environmentally friendly, and cost-effective extraction method is potentially applicable to a wide variety of transition metals and organic compounds that are amenable to detection by electroanalytical methods.
NASA Astrophysics Data System (ADS)
Michele, Mangiameli; Giuseppe, Mussumeci; Salvatore, Zito
2017-07-01
The Structure From Motion (SFM) is a technique applied to a series of photographs of an object that returns a 3D reconstruction made up by points in the space (point clouds). This research aims at comparing the results of the SFM approach with the results of a 3D laser scanning in terms of density and accuracy of the model. The experience was conducted by detecting several architectural elements (walls and portals of historical buildings) both with a 3D laser scanner of the latest generation and an amateur photographic camera. The point clouds acquired by laser scanner and those acquired by the photo camera have been systematically compared. In particular we present the experience carried out on the "Don Diego Pappalardo Palace" site in Pedara (Catania, Sicily).
Vial, Jessica; Bony, Sandrine; Dufresne, Jean-Louis; Roehrig, Romain
2016-12-01
Several studies have pointed out the dependence of low-cloud feedbacks on the strength of the lower-tropospheric convective mixing. By analyzing a series of single-column model experiments run by a climate model using two different convective parametrizations, this study elucidates the physical mechanisms through which marine boundary-layer clouds depend on this mixing in the present-day climate and under surface warming. An increased lower-tropospheric convective mixing leads to a reduction of low-cloud fraction. However, the rate of decrease strongly depends on how the surface latent heat flux couples to the convective mixing and to boundary-layer cloud radiative effects: (i) on the one hand, the latent heat flux is enhanced by the lower-tropospheric drying induced by the convective mixing, which damps the reduction of the low-cloud fraction, (ii) on the other hand, the latent heat flux is reduced as the lower troposphere stabilizes under the effect of reduced low-cloud radiative cooling, which enhances the reduction of the low-cloud fraction. The relative importance of these two different processes depends on the closure of the convective parameterization. The convective scheme that favors the coupling between latent heat flux and low-cloud radiative cooling exhibits a stronger sensitivity of low-clouds to convective mixing in the present-day climate, and a stronger low-cloud feedback in response to surface warming. In this model, the low-cloud feedback is stronger when the present-day convective mixing is weaker and when present-day clouds are shallower and more radiatively active. The implications of these insights for constraining the strength of low-cloud feedbacks observationally is discussed.
Bony, Sandrine; Dufresne, Jean‐Louis; Roehrig, Romain
2016-01-01
Abstract Several studies have pointed out the dependence of low‐cloud feedbacks on the strength of the lower‐tropospheric convective mixing. By analyzing a series of single‐column model experiments run by a climate model using two different convective parametrizations, this study elucidates the physical mechanisms through which marine boundary‐layer clouds depend on this mixing in the present‐day climate and under surface warming. An increased lower‐tropospheric convective mixing leads to a reduction of low‐cloud fraction. However, the rate of decrease strongly depends on how the surface latent heat flux couples to the convective mixing and to boundary‐layer cloud radiative effects: (i) on the one hand, the latent heat flux is enhanced by the lower‐tropospheric drying induced by the convective mixing, which damps the reduction of the low‐cloud fraction, (ii) on the other hand, the latent heat flux is reduced as the lower troposphere stabilizes under the effect of reduced low‐cloud radiative cooling, which enhances the reduction of the low‐cloud fraction. The relative importance of these two different processes depends on the closure of the convective parameterization. The convective scheme that favors the coupling between latent heat flux and low‐cloud radiative cooling exhibits a stronger sensitivity of low‐clouds to convective mixing in the present‐day climate, and a stronger low‐cloud feedback in response to surface warming. In this model, the low‐cloud feedback is stronger when the present‐day convective mixing is weaker and when present‐day clouds are shallower and more radiatively active. The implications of these insights for constraining the strength of low‐cloud feedbacks observationally is discussed. PMID:28239438
Csf Based Non-Ground Points Extraction from LIDAR Data
NASA Astrophysics Data System (ADS)
Shen, A.; Zhang, W.; Shi, H.
2017-09-01
Region growing is a classical method of point cloud segmentation. Based on the idea of collecting the pixels with similar properties to form regions, region growing is widely used in many fields such as medicine, forestry and remote sensing. In this algorithm, there are two core problems. One is the selection of seed points, the other is the setting of the growth constraints, in which the selection of the seed points is the foundation. In this paper, we propose a CSF (Cloth Simulation Filtering) based method to extract the non-ground seed points effectively. The experiments have shown that this method can obtain a group of seed spots compared with the traditional methods. It is a new attempt to extract seed points
NASA Astrophysics Data System (ADS)
Arunachalam, M. S.; Puli, Anil; Anuradha, B.
2016-07-01
In the present work continuous extraction of convective cloud optical information and reflectivity (MAX(Z) in dBZ) using online retrieval technique for time series data production from Doppler Weather Radar (DWR) located at Indian Meteorological Department, Chennai has been developed in MATLAB. Reflectivity measurements for different locations within the DWR range of 250 Km radii of circular disc area can be retrieved using this technique. It gives both time series reflectivity of point location and also Range Time Intensity (RTI) maps of reflectivity for the corresponding location. The Graphical User Interface (GUI) developed for the cloud reflectivity is user friendly; it also provides the convective cloud optical information such as cloud base height (CBH), cloud top height (CTH) and cloud optical depth (COD). This technique is also applicable for retrieving other DWR products such as Plan Position Indicator (Z, in dBZ), Plan Position Indicator (Z, in dBZ)-Close Range, Volume Velocity Processing (V, in knots), Plan Position Indicator (V, in m/s), Surface Rainfall Intensity (SRI, mm/hr), Precipitation Accumulation (PAC) 24 hrs at 0300UTC. Keywords: Reflectivity, cloud top height, cloud base, cloud optical depth
Ulusoy, Halil Ibrahim
2014-01-01
A new micelle-mediated extraction method was developed for preconcentration of ultratrace Hg(II) ions prior to spectrophotometric determination. 2-(2'-Thiazolylazo)-p-cresol (TAC) and Ponpe 7.5 were used as the chelating agent and nonionic surfactant, respectively. Hg(II) ions form a hydrophobic complex with TAC in a micelle medium. The main factors affecting cloud point extraction efficiency, such as pH of the medium, concentrations of TAC and Ponpe 7.5, and equilibration temperature and time, were investigated in detail. An overall preconcentration factor of 33.3 was obtained upon preconcentration of a 50 mL sample. The LOD obtained under the optimal conditions was 0.86 microg/L, and the RSD for five replicate measurements of 100 microg/L Hg(II) was 3.12%. The method was successfully applied to the determination of Hg in environmental water samples.
Sun, Mei; Liu, Guijian; Wu, Qianghua
2013-11-01
A new method was developed for the determination of organic and inorganic selenium in selenium-enriched rice by graphite furnace atomic absorption spectrometry detection after cloud point extraction. Effective separation of organic and inorganic selenium in selenium-enriched rice was achieved by sequentially extracting with water and cyclohexane. Under the optimised conditions, the limit of detection (LOD) was 0.08 μg L(-1), the relative standard deviation (RSD) was 2.1% (c=10.0 μg L(-1), n=11), and the enrichment factor for selenium was 82. Recoveries of inorganic selenium in the selenium-enriched rice samples were between 90.3% and 106.0%. The proposed method was successfully applied for the determination of organic and inorganic selenium as well as total selenium in selenium-enriched rice. Copyright © 2013 Elsevier Ltd. All rights reserved.
Localization of Pathology on Complex Architecture Building Surfaces
NASA Astrophysics Data System (ADS)
Sidiropoulos, A. A.; Lakakis, K. N.; Mouza, V. K.
2017-02-01
The technology of 3D laser scanning is considered as one of the most common methods for heritage documentation. The point clouds that are being produced provide information of high detail, both geometric and thematic. There are various studies that examine techniques of the best exploitation of this information. In this study, an algorithm of pathology localization, such as cracks and fissures, on complex building surfaces is being tested. The algorithm makes use of the points' position in the point cloud and tries to distinguish them in two groups-patterns; pathology and non-pathology. The extraction of the geometric information that is being used for recognizing the pattern of the points is being accomplished via Principal Component Analysis (PCA) in user-specified neighborhoods in the whole point cloud. The implementation of PCA leads to the definition of the normal vector at each point of the cloud. Two tests that operate separately examine both local and global geometric criteria among the points and conclude which of them should be categorized as pathology. The proposed algorithm was tested on parts of the Gazi Evrenos Baths masonry, which are located at the city of Giannitsa at Northern Greece.
Maalek, Reza; Lichti, Derek D; Ruwanpura, Janaka Y
2018-03-08
Automated segmentation of planar and linear features of point clouds acquired from construction sites is essential for the automatic extraction of building construction elements such as columns, beams and slabs. However, many planar and linear segmentation methods use scene-dependent similarity thresholds that may not provide generalizable solutions for all environments. In addition, outliers exist in construction site point clouds due to data artefacts caused by moving objects, occlusions and dust. To address these concerns, a novel method for robust classification and segmentation of planar and linear features is proposed. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a new robust clustering method, the robust complete linkage method. A robust method is also proposed to extract the points of flat-slab floors and/or ceilings independent of the aforementioned stages to improve computational efficiency. The applicability of the proposed method is evaluated in eight datasets acquired from a complex laboratory environment and two construction sites at the University of Calgary. The precision, recall, and accuracy of the segmentation at both construction sites were 96.8%, 97.7% and 95%, respectively. These results demonstrate the suitability of the proposed method for robust segmentation of planar and linear features of contaminated datasets, such as those collected from construction sites.
Maalek, Reza; Lichti, Derek D; Ruwanpura, Janaka Y
2018-01-01
Automated segmentation of planar and linear features of point clouds acquired from construction sites is essential for the automatic extraction of building construction elements such as columns, beams and slabs. However, many planar and linear segmentation methods use scene-dependent similarity thresholds that may not provide generalizable solutions for all environments. In addition, outliers exist in construction site point clouds due to data artefacts caused by moving objects, occlusions and dust. To address these concerns, a novel method for robust classification and segmentation of planar and linear features is proposed. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a new robust clustering method, the robust complete linkage method. A robust method is also proposed to extract the points of flat-slab floors and/or ceilings independent of the aforementioned stages to improve computational efficiency. The applicability of the proposed method is evaluated in eight datasets acquired from a complex laboratory environment and two construction sites at the University of Calgary. The precision, recall, and accuracy of the segmentation at both construction sites were 96.8%, 97.7% and 95%, respectively. These results demonstrate the suitability of the proposed method for robust segmentation of planar and linear features of contaminated datasets, such as those collected from construction sites. PMID:29518062
Automated Point Cloud Correspondence Detection for Underwater Mapping Using AUVs
NASA Technical Reports Server (NTRS)
Hammond, Marcus; Clark, Ashley; Mahajan, Aditya; Sharma, Sumant; Rock, Stephen
2015-01-01
An algorithm for automating correspondence detection between point clouds composed of multibeam sonar data is presented. This allows accurate initialization for point cloud alignment techniques even in cases where accurate inertial navigation is not available, such as iceberg profiling or vehicles with low-grade inertial navigation systems. Techniques from computer vision literature are used to extract, label, and match keypoints between "pseudo-images" generated from these point clouds. Image matches are refined using RANSAC and information about the vehicle trajectory. The resulting correspondences can be used to initialize an iterative closest point (ICP) registration algorithm to estimate accumulated navigation error and aid in the creation of accurate, self-consistent maps. The results presented use multibeam sonar data obtained from multiple overlapping passes of an underwater canyon in Monterey Bay, California. Using strict matching criteria, the method detects 23 between-swath correspondence events in a set of 155 pseudo-images with zero false positives. Using less conservative matching criteria doubles the number of matches but introduces several false positive matches as well. Heuristics based on known vehicle trajectory information are used to eliminate these.
Registration of Vehicle-Borne Point Clouds and Panoramic Images Based on Sensor Constellations.
Yao, Lianbi; Wu, Hangbin; Li, Yayun; Meng, Bin; Qian, Jinfei; Liu, Chun; Fan, Hongchao
2017-04-11
A mobile mapping system (MMS) is usually utilized to collect environmental data on and around urban roads. Laser scanners and panoramic cameras are the main sensors of an MMS. This paper presents a new method for the registration of the point clouds and panoramic images based on sensor constellation. After the sensor constellation was analyzed, a feature point, the intersection of the connecting line between the global positioning system (GPS) antenna and the panoramic camera with a horizontal plane, was utilized to separate the point clouds into blocks. The blocks for the central and sideward laser scanners were extracted with the segmentation feature points. Then, the point clouds located in the blocks were separated from the original point clouds. Each point in the blocks was used to find the accurate corresponding pixel in the relative panoramic images via a collinear function, and the position and orientation relationship amongst different sensors. A search strategy is proposed for the correspondence of laser scanners and lenses of panoramic cameras to reduce calculation complexity and improve efficiency. Four cases of different urban road types were selected to verify the efficiency and accuracy of the proposed method. Results indicate that most of the point clouds (with an average of 99.7%) were successfully registered with the panoramic images with great efficiency. Geometric evaluation results indicate that horizontal accuracy was approximately 0.10-0.20 m, and vertical accuracy was approximately 0.01-0.02 m for all cases. Finally, the main factors that affect registration accuracy, including time synchronization amongst different sensors, system positioning and vehicle speed, are discussed.
Galbeiro, Rafaela; Garcia, Samara; Gaubeur, Ivanise
2014-04-01
Cloud point extraction (CPE) was used to simultaneously preconcentrate trace-level cadmium, nickel and zinc for determination by flame atomic absorption spectrometry (FAAS). 1-(2-Pyridilazo)-2-naphthol (PAN) was used as a complexing agent, and the metal complexes were extracted from the aqueous phase by the surfactant Triton X-114 ((1,1,3,3-tetramethylbutyl)phenyl-polyethylene glycol). Under optimized complexation and extraction conditions, the limits of detection were 0.37μgL(-1) (Cd), 2.6μgL(-1) (Ni) and 2.3μgL(-1) (Zn). This extraction was quantitative with a preconcentration factor of 30 and enrichment factor estimated to be 42, 40 and 43, respectively. The method was applied to different complex samples, and the accuracy was evaluated by analyzing a water standard reference material (NIST SRM 1643e), yielding results in agreement with the certified values. Copyright © 2013 Elsevier GmbH. All rights reserved.
Heydari, Rouhollah; Hosseini, Mohammad; Zarabi, Sanaz
2015-01-01
In this paper, a simple and cost effective method was developed for extraction and pre-concentration of carmine in food samples by using cloud point extraction (CPE) prior to its spectrophotometric determination. Carmine was extracted from aqueous solution using Triton X-100 as extracting solvent. The effects of main parameters such as solution pH, surfactant and salt concentrations, incubation time and temperature were investigated and optimized. Calibration graph was linear in the range of 0.04-5.0 μg mL(-1) of carmine in the initial solution with regression coefficient of 0.9995. The limit of detection (LOD) and limit of quantification were 0.012 and 0.04 μg mL(-1), respectively. Relative standard deviation (RSD) at low concentration level (0.05 μg mL(-1)) of carmine was 4.8% (n=7). Recovery values in different concentration levels were in the range of 93.7-105.8%. The obtained results demonstrate the proposed method can be applied satisfactory to determine the carmine in food samples. Copyright © 2015 Elsevier B.V. All rights reserved.
Mixed micelle cloud point-magnetic dispersive μ-solid phase extraction of doxazosin and alfuzosin
NASA Astrophysics Data System (ADS)
Gao, Nannan; Wu, Hao; Chang, Yafen; Guo, Xiaozhen; Zhang, Lizhen; Du, Liming; Fu, Yunlong
2015-01-01
Mixed micelle cloud point extraction (MM-CPE) combined with magnetic dispersive μ-solid phase extraction (MD-μ-SPE) has been developed as a new approach for the extraction of doxazosin (DOX) and alfuzosin (ALF) prior to fluorescence analysis. The mixed micelle anionic surfactant sodium dodecyl sulfate and non-ionic polyoxyethylene(7.5)nonylphenylether was used as the extraction solvent in MM-CPE, and diatomite bonding Fe3O4 magnetic nanoparticles were used as the adsorbent in MD-μ-SPE. The method was based on MM-CPE of DOX and ALF in the surfactant-rich phase. Magnetic materials were used to retrieve the surfactant-rich phase, which easily separated from the aqueous phase under magnetic field. At optimum conditions, a linear relationship between DOX and ALF was obtained in the range of 5-300 ng mL-1, and the limits of detection were 0.21 and 0.16 ng mL-1, respectively. The proposed method was successfully applied for the determination of the drugs in pharmaceutical preparations, urine samples, and plasma samples.
Dağdeviren, Semahat; Altunay, Nail; Sayman, Yasin; Gürkan, Ramazan
2018-07-30
The study developed a new method for proline detection in honey, wine and fruit juice using ultrasound assisted-cloud point extraction (UA-CPE) and spectrophotometry. Initially, a quaternary complex was built, containing proline, histamine, Cu(II), and fluorescein at pH 5.5. Samples were treated with ethanol-water mixture before extraction and preconcentration, using an ultrasonic bath for 10 min at 40 °C (40 kHz, 300 W). After the optimization of variables affecting extraction efficiency, good linearity was obtained between 15 and 600 µg L -1 with sensitivity enhancement factor of 105. The limits of detection and quantification were 5.7 and 19.0 µg L -1 , respectively. The recovery percentage and relative standard deviations (RSD %) were between 95.3 and 103.3%, and 2.5 and 4.2%, respectively. The accuracy of the method was verified by the analysis of a standard reference material (SRM 2389a). Copyright © 2018 Elsevier Ltd. All rights reserved.
2.5D multi-view gait recognition based on point cloud registration.
Tang, Jin; Luo, Jian; Tjahjadi, Tardi; Gao, Yan
2014-03-28
This paper presents a method for modeling a 2.5-dimensional (2.5D) human body and extracting the gait features for identifying the human subject. To achieve view-invariant gait recognition, a multi-view synthesizing method based on point cloud registration (MVSM) to generate multi-view training galleries is proposed. The concept of a density and curvature-based Color Gait Curvature Image is introduced to map 2.5D data onto a 2D space to enable data dimension reduction by discrete cosine transform and 2D principle component analysis. Gait recognition is achieved via a 2.5D view-invariant gait recognition method based on point cloud registration. Experimental results on the in-house database captured by a Microsoft Kinect camera show a significant performance gain when using MVSM.
Automated estimation of leaf distribution for individual trees based on TLS point clouds
NASA Astrophysics Data System (ADS)
Koma, Zsófia; Rutzinger, Martin; Bremer, Magnus
2017-04-01
Light Detection and Ranging (LiDAR) especially the ground based LiDAR (Terrestrial Laser Scanning - TLS) is an operational used and widely available measurement tool supporting forest inventory updating and research in forest ecology. High resolution point clouds from TLS already represent single leaves which can be used for a more precise estimation of Leaf Area Index (LAI) and for higher accurate biomass estimation. However, currently the methodology for extracting single leafs from the unclassified point clouds for individual trees is still missing. The aim of this study is to present a novel segmentation approach in order to extract single leaves and derive features related to leaf morphology (such as area, slope, length and width) of each single leaf from TLS point cloud data. For the study two exemplary single trees were scanned in leaf-on condition on the university campus of Innsbruck during calm wind conditions. A northern red oak (Quercus rubra) was scanned by a discrete return recording Optech ILRIS-3D TLS scanner and a tulip tree (Liliodendron tulpifera) with Riegl VZ-6000 scanner. During the scanning campaign a reference dataset was measured parallel to scanning. In this case 230 leaves were randomly collected around the lower branches of the tree and photos were taken. The developed workflow steps were the following: in the first step normal vectors and eigenvalues were calculated based on the user specified neighborhood. Then using the direction of the largest eigenvalue outliers i.e. ghost points were removed. After that region growing segmentation based on the curvature and angles between normal vectors was applied on the filtered point cloud. On each segment a RANSAC plane fitting algorithm was applied in order to extract the segment based normal vectors. Using the related features of the calculated segments the stem and branches were labeled as non-leaf and other segments were classified as leaf. The validation of the different segmentation parameters was evaluated as the following: i) the sum area of the collected leaves and the point cloud, ii) the segmented leaf length-width ratio iii) the distribution of the leaf area for the segmented and the reference-ones were compared and the ideal parameter-set was found. The results show that the leaves can be captured with the developed workflow and the slope can be determined robustly for the segmented leaves. However, area, length and width values are systematically depending on the angle and the distance from the scanner. For correction of the systematic underestimation, more systematic measurement or LiDAR simulation is required for further detailed analysis. The results of leaf segmentation algorithm show high potential in generating more precise tree models with correctly located leaves in order to extract more precise input model for biological modeling of LAI or atmospheric corrections studies. The presented workflow also can be used in monitoring the change of angle of the leaves due to sun irradiation, water balance, and day-night rhythm.
NASA Astrophysics Data System (ADS)
Howle, J. F.; Bawden, G. W.; Hunter, L. E.; Rose, R. S.
2009-12-01
High resolution (centimeter level) three-dimensional point-cloud imagery of offset glacial outwash deposits were collected by using ground based tripod LiDAR (T-LiDAR) to characterize the cumulative fault slip across the recently identified Polaris fault (Hunter et al., 2009) near Truckee, California. The type-section site for the Polaris fault is located 6.5 km east of Truckee where progressive right-lateral displacement of middle to late Pleistocene deposits is evident. Glacial outwash deposits, aggraded during the Tioga glaciation, form a flat lying ‘fill’ terrace on both the north and south sides of the modern Truckee River. During the Tioga deglaciation melt water incised into the terrace producing fluvial scarps or terrace risers (Birkeland, 1964). Subsequently, the terrace risers on both banks have been right-laterally offset by the Polaris fault. By using T-LiDAR on an elevated tripod (4.25 m high), we collected 3D high-resolution (thousands of points per square meter; ± 4 mm) point-cloud imagery of the offset terrace risers. Vegetation was removed from the data using commercial software, and large protruding boulders were manually deleted to generate a bare-earth point-cloud dataset with an average data density of over 240 points per square meter. From the bare-earth point cloud we mathematically reconstructed a pristine terrace/scarp morphology on both sides of the fault, defined coupled sets of piercing points, and extracted a corresponding displacement vector. First, the Polaris fault was approximated as a vertical plane that bisects the offset terrace risers, as well as bisecting linear swales and tectonic depressions in the outwash terrace. Then, piercing points to the vertical fault plane were constructed from the geometry of the geomorphic elements on either side of the fault. On each side of the fault, the best-fit modeled outwash plane is projected laterally and the best-fit modeled terrace riser projected upward to a virtual intersection in space, creating a vector. These constructed vectors were projected to intersection with the fault plane, defining statistically significant piercing points. The distance between the coupled set of piercing points, within the plane of the fault, is the cumulative displacement vector. To assess the variability of the modeled geomorphic surfaces, including surface roughness and nonlinearity, we generated a suite of displacement models by systematically incorporating larger areas of the model domain symmetrically about the fault. Preliminary results of 10 models yield an average cumulative displacement of 5.6 m (1 Std Dev = 0.31 m). As previously described, Tioga deglaciation melt water incised into the outwash terrace leaving terrace risers that were subsequently offset by the Polaris fault. Therefore, the age of the Tioga outwash terrace represents a maximum limiting age of the tectonic displacement. Using regional age constraints of 15 to 13 kya for the Tioga outwash terrace (Benson et al., 1990; Clark and Gillespie, 1997; James et al., 2002) and the above model results, we estimate a preliminary minimum fault slip rate of 0.40 ± 0.05 mm/yr for the Polaris type-section site.
Sturdivant, Emily; Lentz, Erika; Thieler, E. Robert; Farris, Amy; Weber, Kathryn; Remsen, David P.; Miner, Simon; Henderson, Rachel
2017-01-01
The vulnerability of coastal systems to hazards such as storms and sea-level rise is typically characterized using a combination of ground and manned airborne systems that have limited spatial or temporal scales. Structure-from-motion (SfM) photogrammetry applied to imagery acquired by unmanned aerial systems (UAS) offers a rapid and inexpensive means to produce high-resolution topographic and visual reflectance datasets that rival existing lidar and imagery standards. Here, we use SfM to produce an elevation point cloud, an orthomosaic, and a digital elevation model (DEM) from data collected by UAS at a beach and wetland site in Massachusetts, USA. We apply existing methods to (a) determine the position of shorelines and foredunes using a feature extraction routine developed for lidar point clouds and (b) map land cover from the rasterized surfaces using a supervised classification routine. In both analyses, we experimentally vary the input datasets to understand the benefits and limitations of UAS-SfM for coastal vulnerability assessment. We find that (a) geomorphic features are extracted from the SfM point cloud with near-continuous coverage and sub-meter precision, better than was possible from a recent lidar dataset covering the same area; and (b) land cover classification is greatly improved by including topographic data with visual reflectance, but changes to resolution (when <50 cm) have little influence on the classification accuracy.
Holographic estimate of the meson cloud contribution to nucleon axial form factor
NASA Astrophysics Data System (ADS)
Ramalho, G.
2018-04-01
We use light-front holography to estimate the valence quark and the meson cloud contributions to the nucleon axial form factor. The free couplings of the holographic model are determined by the empirical data and by the information extracted from lattice QCD. The holographic model provides a good description of the empirical data when we consider a meson cloud mixture of about 30% in the physical nucleon state. The estimate of the valence quark contribution to the nucleon axial form factor compares well with the lattice QCD data for small pion masses. Our estimate of the meson cloud contribution to the nucleon axial form factor has a slower falloff with the square momentum transfer compared to typical estimates from quark models with meson cloud dressing.
Baig, Jameel A; Kazi, Tasneem G; Shah, Abdul Q; Arain, Mohammad B; Afridi, Hassan I; Kandhro, Ghulam A; Khan, Sumaira
2009-09-28
The simple and rapid pre-concentration techniques viz. cloud point extraction (CPE) and solid phase extraction (SPE) were applied for the determination of As(3+) and total inorganic arsenic (iAs) in surface and ground water samples. The As(3+) was formed complex with ammonium pyrrolidinedithiocarbamate (APDC) and extracted by surfactant-rich phases in the non-ionic surfactant Triton X-114, after centrifugation the surfactant-rich phase was diluted with 0.1 mol L(-1) HNO(3) in methanol. While total iAs in water samples was adsorbed on titanium dioxide (TiO(2)); after centrifugation, the solid phase was prepared to be slurry for determination. The extracted As species were determined by electrothermal atomic absorption spectrometry. The multivariate strategy was applied to estimate the optimum values of experimental factors for the recovery of As(3+) and total iAs by CPE and SPE. The standard addition method was used to validate the optimized methods. The obtained result showed sufficient recoveries for As(3+) and iAs (>98.0%). The concentration factor in both cases was found to be 40.
Multi-Scale Voxel Segmentation for Terrestrial Lidar Data within Marshes
NASA Astrophysics Data System (ADS)
Nguyen, C. T.; Starek, M. J.; Tissot, P.; Gibeaut, J. C.
2016-12-01
The resilience of marshes to a rising sea is dependent on their elevation response. Terrestrial laser scanning (TLS) is a detailed topographic approach for accurate, dense surface measurement with high potential for monitoring of marsh surface elevation response. The dense point cloud provides a 3D representation of the surface, which includes both terrain and non-terrain objects. Extraction of topographic information requires filtering of the data into like-groups or classes, therefore, methods must be incorporated to identify structure in the data prior to creation of an end product. A voxel representation of three-dimensional space provides quantitative visualization and analysis for pattern recognition. The objectives of this study are threefold: 1) apply a multi-scale voxel approach to effectively extract geometric features from the TLS point cloud data, 2) investigate the utility of K-means and Self Organizing Map (SOM) clustering algorithms for segmentation, and 3) utilize a variety of validity indices to measure the quality of the result. TLS data were collected at a marsh site along the central Texas Gulf Coast using a Riegl VZ 400 TLS. The site consists of both exposed and vegetated surface regions. To characterize structure of the point cloud, octree segmentation is applied to create a tree data structure of voxels containing the points. The flexibility of voxels in size and point density makes this algorithm a promising candidate to locally extract statistical and geometric features of the terrain including surface normal and curvature. The characteristics of the voxel itself such as the volume and point density are also computed and assigned to each point as are laser pulse characteristics. The features extracted from the voxelization are then used as input for clustering of the points using the K-means and SOM clustering algorithms. Optimal number of clusters are then determined based on evaluation of cluster separability criterions. Results for different combinations of the feature space vector and differences between K-means and SOM clustering will be presented. The developed method provides a novel approach for compressing TLS scene complexity in marshes, such as for vegetation biomass studies or erosion monitoring.
Point Cloud Based Change Detection - an Automated Approach for Cloud-based Services
NASA Astrophysics Data System (ADS)
Collins, Patrick; Bahr, Thomas
2016-04-01
The fusion of stereo photogrammetric point clouds with LiDAR data or terrain information derived from SAR interferometry has a significant potential for 3D topographic change detection. In the present case study latest point cloud generation and analysis capabilities are used to examine a landslide that occurred in the village of Malin in Maharashtra, India, on 30 July 2014, and affected an area of ca. 44.000 m2. It focuses on Pléiades high resolution satellite imagery and the Airbus DS WorldDEMTM as a product of the TanDEM-X mission. This case study was performed using the COTS software package ENVI 5.3. Integration of custom processes and automation is supported by IDL (Interactive Data Language). Thus, ENVI analytics is running via the object-oriented and IDL-based ENVITask API. The pre-event topography is represented by the WorldDEMTM product, delivered with a raster of 12 m x 12 m and based on the EGM2008 geoid (called pre-DEM). For the post-event situation a Pléiades 1B stereo image pair of the AOI affected was obtained. The ENVITask "GeneratePointCloudsByDenseImageMatching" was implemented to extract passive point clouds in LAS format from the panchromatic stereo datasets: • A dense image-matching algorithm is used to identify corresponding points in the two images. • A block adjustment is applied to refine the 3D coordinates that describe the scene geometry. • Additionally, the WorldDEMTM was input to constrain the range of heights in the matching area, and subsequently the length of the epipolar line. The "PointCloudFeatureExtraction" task was executed to generate the post-event digital surface model from the photogrammetric point clouds (called post-DEM). Post-processing consisted of the following steps: • Adding the geoid component (EGM 2008) to the post-DEM. • Pre-DEM reprojection to the UTM Zone 43N (WGS-84) coordinate system and resizing. • Subtraction of the pre-DEM from the post-DEM. • Filtering and threshold based classification of the DEM difference to analyze the surface changes in 3D. The automated point cloud generation and analysis introduced here can be embedded in virtually any existing geospatial workflow for operational applications. Three integration options were implemented in this case study: • Integration within any ArcGIS environment whether deployed on the desktop, in the cloud, or online. Execution uses a customized ArcGIS script tool. A Python script file retrieves the parameters from the user interface and runs the precompiled IDL code. That IDL code is used to interface between the Python script and the relevant ENVITasks. • Publishing the point cloud processing tasks as services via the ENVI Services Engine (ESE). ESE is a cloud-based image analysis solution to publish and deploy advanced ENVI image and data analytics to existing enterprise infrastructures. For this purpose the entire IDL code can be capsuled in a single ENVITask. • Integration in an existing geospatial workflow using the Python-to-IDL Bridge. This mechanism allows calling IDL code within Python on a user-defined platform. The results of this case study allow a 3D estimation of the topographic changes within the tectonically active and anthropogenically invaded Malin area after the landslide event. Accordingly, the point cloud analysis was correlated successfully with modelled displacement contours of the slope. Based on optical satellite imagery, such point clouds of high precision and density distribution can be obtained in a few minutes to support the operational monitoring of landslide processes.
Tiwari, Swapnil; Deb, Manas Kanti; Sen, Bhupendra K
2017-04-15
A new cloud point extraction (CPE) method for the determination of hexavalent chromium i.e. Cr(VI) in food samples is established with subsequent diffuse reflectance-Fourier transform infrared (DRS-FTIR) analysis. The method demonstrates enrichment of Cr(VI) after its complexation with 1,5-diphenylcarbazide. The reddish-violet complex formed showed λ max at 540nm. Micellar phase separation at cloud point temperature of non-ionic surfactant, Triton X-100 occurred and complex was entrapped in surfactant and analyzed using DRS-FTIR. Under optimized conditions, the limit of detection (LOD) and quantification (LOQ) were 1.22 and 4.02μgmL -1 , respectively. Excellent linearity with correlation coefficient value of 0.94 was found for the concentration range of 1-100μgmL -1 . At 10μgmL -1 the standard deviation for 7 replicate measurements was found to be 0.11μgmL -1 . The method was successfully applied to commercially marketed food stuffs, and good recoveries (81-112%) were obtained by spiking the real samples. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, Jianping; Yang, Bisheng; Chen, Chi; Huang, Ronggang; Dong, Zhen; Xiao, Wen
2018-02-01
Inaccurate exterior orientation parameters (EoPs) between sensors obtained by pre-calibration leads to failure of registration between panoramic image sequence and mobile laser scanning data. To address this challenge, this paper proposes an automatic registration method based on semantic features extracted from panoramic images and point clouds. Firstly, accurate rotation parameters between the panoramic camera and the laser scanner are estimated using GPS and IMU aided structure from motion (SfM). The initial EoPs of panoramic images are obtained at the same time. Secondly, vehicles in panoramic images are extracted by the Faster-RCNN as candidate primitives to be matched with potential corresponding primitives in point clouds according to the initial EoPs. Finally, translation between the panoramic camera and the laser scanner is refined by maximizing the overlapping area of corresponding primitive pairs based on the Particle Swarm Optimization (PSO), resulting in a finer registration between panoramic image sequences and point clouds. Two challenging urban scenes were experimented to assess the proposed method, and the final registration errors of these two scenes were both less than three pixels, which demonstrates a high level of automation, robustness and accuracy.
Duester, Lars; Fabricius, Anne-Lena; Jakobtorweihen, Sven; Philippe, Allan; Weigl, Florian; Wimmer, Andreas; Schuster, Michael; Nazar, Muhammad Faizan
2016-11-01
Coacervate-based techniques are intensively used in environmental analytical chemistry to enrich and extract different kinds of analytes. Most methods focus on the total content or the speciation of inorganic and organic substances. Size fractionation is less commonly addressed. Within coacervate-based techniques, cloud point extraction (CPE) is characterized by a phase separation of non-ionic surfactants dispersed in an aqueous solution when the respective cloud point temperature is exceeded. In this context, the feature article raises the following question: May CPE in future studies serve as a key tool (i) to enrich and extract nanoparticles (NPs) from complex environmental matrices prior to analyses and (ii) to preserve the colloidal status of unstable environmental samples? With respect to engineered NPs, a significant gap between environmental concentrations and size- and element-specific analytical capabilities is still visible. CPE may support efforts to overcome this "concentration gap" via the analyte enrichment. In addition, most environmental colloidal systems are known to be unstable, dynamic, and sensitive to changes of the environmental conditions during sampling and sample preparation. This delivers a so far unsolved "sample preparation dilemma" in the analytical process. The authors are of the opinion that CPE-based methods have the potential to preserve the colloidal status of these instable samples. Focusing on NPs, this feature article aims to support the discussion on the creation of a convention called the "CPE extractable fraction" by connecting current knowledge on CPE mechanisms and on available applications, via the uncertainties visible and modeling approaches available, with potential future benefits from CPE protocols.
Registration of Vehicle-Borne Point Clouds and Panoramic Images Based on Sensor Constellations
Yao, Lianbi; Wu, Hangbin; Li, Yayun; Meng, Bin; Qian, Jinfei; Liu, Chun; Fan, Hongchao
2017-01-01
A mobile mapping system (MMS) is usually utilized to collect environmental data on and around urban roads. Laser scanners and panoramic cameras are the main sensors of an MMS. This paper presents a new method for the registration of the point clouds and panoramic images based on sensor constellation. After the sensor constellation was analyzed, a feature point, the intersection of the connecting line between the global positioning system (GPS) antenna and the panoramic camera with a horizontal plane, was utilized to separate the point clouds into blocks. The blocks for the central and sideward laser scanners were extracted with the segmentation feature points. Then, the point clouds located in the blocks were separated from the original point clouds. Each point in the blocks was used to find the accurate corresponding pixel in the relative panoramic images via a collinear function, and the position and orientation relationship amongst different sensors. A search strategy is proposed for the correspondence of laser scanners and lenses of panoramic cameras to reduce calculation complexity and improve efficiency. Four cases of different urban road types were selected to verify the efficiency and accuracy of the proposed method. Results indicate that most of the point clouds (with an average of 99.7%) were successfully registered with the panoramic images with great efficiency. Geometric evaluation results indicate that horizontal accuracy was approximately 0.10–0.20 m, and vertical accuracy was approximately 0.01–0.02 m for all cases. Finally, the main factors that affect registration accuracy, including time synchronization amongst different sensors, system positioning and vehicle speed, are discussed. PMID:28398256
Point Cloud Based Approach to Stem Width Extraction of Sorghum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Jihui; Zakhor, Avideh
A revolution in the field of genomics has produced vast amounts of data and furthered our understanding of the genotypephenotype map, but is currently constrained by manually intensive or limited phenotype data collection. We propose an algorithm to estimate stem width, a key characteristic used for biomass potential evaluation, from 3D point cloud data collected by a robot equipped with a depth sensor in a single pass in a standard field. The algorithm applies a two step alignment to register point clouds in different frames, a Frangi filter to identify stemlike objects in the point cloud and an orientation basedmore » filter to segment out and refine individual stems for width estimation. Individually, detected stems which are split due to occlusions are merged and then registered with previously found stems in previous camera frames in order to track temporally. We then refine the estimates to produce an accurate histogram of width estimates per plot. Since the plants in each plot are genetically identical, distributions of the stem width per plot can be useful in identifying genetically superior sorghum for biofuels.« less
Airborne LIDAR point cloud tower inclination judgment
NASA Astrophysics Data System (ADS)
liang, Chen; zhengjun, Liu; jianguo, Qian
2016-11-01
Inclined transmission line towers for the safe operation of the line caused a great threat, how to effectively, quickly and accurately perform inclined judgment tower of power supply company safety and security of supply has played a key role. In recent years, with the development of unmanned aerial vehicles, unmanned aerial vehicles equipped with a laser scanner, GPS, inertial navigation is one of the high-precision 3D Remote Sensing System in the electricity sector more and more. By airborne radar scan point cloud to visually show the whole picture of the three-dimensional spatial information of the power line corridors, such as the line facilities and equipment, terrain and trees. Currently, LIDAR point cloud research in the field has not yet formed an algorithm to determine tower inclination, the paper through the existing power line corridor on the tower base extraction, through their own tower shape characteristic analysis, a vertical stratification the method of combining convex hull algorithm for point cloud tower scarce two cases using two different methods for the tower was Inclined to judge, and the results with high reliability.
NASA Astrophysics Data System (ADS)
Poux, F.; Neuville, R.; Hallot, P.; Van Wersch, L.; Luczfalvy Jancsó, A.; Billen, R.
2017-05-01
While virtual copies of the real world tend to be created faster than ever through point clouds and derivatives, their working proficiency by all professionals' demands adapted tools to facilitate knowledge dissemination. Digital investigations are changing the way cultural heritage researchers, archaeologists, and curators work and collaborate to progressively aggregate expertise through one common platform. In this paper, we present a web application in a WebGL framework accessible on any HTML5-compatible browser. It allows real time point cloud exploration of the mosaics in the Oratory of Germigny-des-Prés, and emphasises the ease of use as well as performances. Our reasoning engine is constructed over a semantically rich point cloud data structure, where metadata has been injected a priori. We developed a tool that directly allows semantic extraction and visualisation of pertinent information for the end users. It leads to efficient communication between actors by proposing optimal 3D viewpoints as a basis on which interactions can grow.
Point Cloud Based Approach to Stem Width Extraction of Sorghum
Jin, Jihui; Zakhor, Avideh
2017-01-29
A revolution in the field of genomics has produced vast amounts of data and furthered our understanding of the genotypephenotype map, but is currently constrained by manually intensive or limited phenotype data collection. We propose an algorithm to estimate stem width, a key characteristic used for biomass potential evaluation, from 3D point cloud data collected by a robot equipped with a depth sensor in a single pass in a standard field. The algorithm applies a two step alignment to register point clouds in different frames, a Frangi filter to identify stemlike objects in the point cloud and an orientation basedmore » filter to segment out and refine individual stems for width estimation. Individually, detected stems which are split due to occlusions are merged and then registered with previously found stems in previous camera frames in order to track temporally. We then refine the estimates to produce an accurate histogram of width estimates per plot. Since the plants in each plot are genetically identical, distributions of the stem width per plot can be useful in identifying genetically superior sorghum for biofuels.« less
a Method for the Registration of Hemispherical Photographs and Tls Intensity Images
NASA Astrophysics Data System (ADS)
Schmidt, A.; Schilling, A.; Maas, H.-G.
2012-07-01
Terrestrial laser scanners generate dense and accurate 3D point clouds with minimal effort, which represent the geometry of real objects, while image data contains texture information of object surfaces. Based on the complementary characteristics of both data sets, a combination is very appealing for many applications, including forest-related tasks. In the scope of our research project, independent data sets of a plain birch stand have been taken by a full-spherical laser scanner and a hemispherical digital camera. Previously, both kinds of data sets have been considered separately: Individual trees were successfully extracted from large 3D point clouds, and so-called forest inventory parameters could be determined. Additionally, a simplified tree topology representation was retrieved. From hemispherical images, leaf area index (LAI) values, as a very relevant parameter for describing a stand, have been computed. The objective of our approach is to merge a 3D point cloud with image data in a way that RGB values are assigned to each 3D point. So far, segmentation and classification of TLS point clouds in forestry applications was mainly based on geometrical aspects of the data set. However, a 3D point cloud with colour information provides valuable cues exceeding simple statistical evaluation of geometrical object features and thus may facilitate the analysis of the scan data significantly.
2.5D Multi-View Gait Recognition Based on Point Cloud Registration
Tang, Jin; Luo, Jian; Tjahjadi, Tardi; Gao, Yan
2014-01-01
This paper presents a method for modeling a 2.5-dimensional (2.5D) human body and extracting the gait features for identifying the human subject. To achieve view-invariant gait recognition, a multi-view synthesizing method based on point cloud registration (MVSM) to generate multi-view training galleries is proposed. The concept of a density and curvature-based Color Gait Curvature Image is introduced to map 2.5D data onto a 2D space to enable data dimension reduction by discrete cosine transform and 2D principle component analysis. Gait recognition is achieved via a 2.5D view-invariant gait recognition method based on point cloud registration. Experimental results on the in-house database captured by a Microsoft Kinect camera show a significant performance gain when using MVSM. PMID:24686727
NASA Astrophysics Data System (ADS)
Nayak, M.; Beck, J.; Udrea, B.
This paper focuses on the aerospace application of a single beam laser rangefinder (LRF) for 3D imaging, shape detection, and reconstruction in the context of a space-based space situational awareness (SSA) mission scenario. The primary limitation to 3D imaging from LRF point clouds is the one-dimensional nature of the single beam measurements. A method that combines relative orbital motion and scanning attitude motion to generate point clouds has been developed and the design and characterization of multiple relative motion and attitude maneuver profiles are presented. The target resident space object (RSO) has the shape of a generic telecommunications satellite. The shape and attitude of the RSO are unknown to the chaser satellite however, it is assumed that the RSO is un-cooperative and has fixed inertial pointing. All sensors in the metrology chain are assumed ideal. A previous study by the authors used pure Keplerian motion to perform a similar 3D imaging mission at an asteroid. A new baseline for proximity operations maneuvers for LRF scanning, based on a waypoint adaptation of the Hill-Clohessy-Wiltshire (HCW) equations is examined. Propellant expenditure for each waypoint profile is discussed and combinations of relative motion and attitude maneuvers that minimize the propellant used to achieve a minimum required point cloud density are studied. Both LRF strike-point coverage and point cloud density are maximized; the capability for 3D shape registration and reconstruction from point clouds generated with a single beam LRF without catalog comparison is proven. Next, a method of using edge detection algorithms to process a point cloud into a 3D modeled image containing reconstructed shapes is presented. Weighted accuracy of edge reconstruction with respect to the true model is used to calculate a qualitative “ metric” that evaluates effectiveness of coverage. Both edge recognition algorithms and the metric are independent of point cloud densit- , therefore they are utilized to compare the quality of point clouds generated by various attitude and waypoint command profiles. The RSO model incorporates diverse irregular protruding shapes, such as open sensor covers, instrument pods and solar arrays, to test the limits of the algorithms. This analysis is used to mathematically prove that point clouds generated by a single-beam LRF can achieve sufficient edge recognition accuracy for SSA applications, with meaningful shape information extractable even from sparse point clouds. For all command profiles, reconstruction of RSO shapes from the point clouds generated with the proposed method are compared to the truth model and conclusions are drawn regarding their fidelity.
Invariant-feature-based adaptive automatic target recognition in obscured 3D point clouds
NASA Astrophysics Data System (ADS)
Khuon, Timothy; Kershner, Charles; Mattei, Enrico; Alverio, Arnel; Rand, Robert
2014-06-01
Target recognition and classification in a 3D point cloud is a non-trivial process due to the nature of the data collected from a sensor system. The signal can be corrupted by noise from the environment, electronic system, A/D converter, etc. Therefore, an adaptive system with a desired tolerance is required to perform classification and recognition optimally. The feature-based pattern recognition algorithm architecture as described below is particularly devised for solving a single-sensor classification non-parametrically. Feature set is extracted from an input point cloud, normalized, and classifier a neural network classifier. For instance, automatic target recognition in an urban area would require different feature sets from one in a dense foliage area. The figure above (see manuscript) illustrates the architecture of the feature based adaptive signature extraction of 3D point cloud including LIDAR, RADAR, and electro-optical data. This network takes a 3D cluster and classifies it into a specific class. The algorithm is a supervised and adaptive classifier with two modes: the training mode and the performing mode. For the training mode, a number of novel patterns are selected from actual or artificial data. A particular 3D cluster is input to the network as shown above for the decision class output. The network consists of three sequential functional modules. The first module is for feature extraction that extracts the input cluster into a set of singular value features or feature vector. Then the feature vector is input into the feature normalization module to normalize and balance it before being fed to the neural net classifier for the classification. The neural net can be trained by actual or artificial novel data until each trained output reaches the declared output within the defined tolerance. In case new novel data is added after the neural net has been learned, the training is then resumed until the neural net has incrementally learned with the new novel data. The associative memory capability of the neural net enables the incremental learning. The back propagation algorithm or support vector machine can be utilized for the classification and recognition.
Zhou, Jun; Sun, Jiang Bing; Xu, Xin Yu; Cheng, Zhao Hui; Zeng, Ping; Wang, Feng Qiao; Zhang, Qiong
2015-03-25
A simple, inexpensive and efficient method based on the mixed cloud point extraction (MCPE) combined with high performance liquid chromatography was developed for the simultaneous separation and determination of six flavonoids (rutin, hyperoside, quercetin-3-O-sophoroside, isoquercitrin, astragalin and quercetin) in Apocynum venetum leaf samples. The non-ionic surfactant Genapol X-080 and cetyl-trimethyl ammonium bromide (CTAB) was chosen as the mixed extracting solvent. Parameters that affect the MCPE processes, such as the content of Genapol X-080 and CTAB, pH, salt content, extraction temperature and time were investigated and optimized. Under the optimized conditions, the calibration curve for six flavonoids were all linear with the correlation coefficients greater than 0.9994. The intra-day and inter-day precision (RSD) were below 8.1% and the limits of detection (LOD) for the six flavonoids were 1.2-5.0 ng mL(-1) (S/N=3). The proposed method was successfully used to separate and determine the six flavonoids in A. venetum leaf samples. Copyright © 2015 Elsevier B.V. All rights reserved.
Hartmann, Georg; Baumgartner, Tanja; Schuster, Michael
2014-01-07
For the quantification of silver nanoparticles (Ag-NPs) in environmental samples using cloud point extraction (CPE) for selective enrichment, surface modification of the Ag-NPs and matrix effects can play a key role. In this work we validate CPE with respect to the influence of different coatings and naturally occurring matrix components. The Ag-NPs tested were functionalized with inorganic and organic compounds as well as with biomolecules. Commercially available NPs and NPs synthesized according to methods published in the literature were used. We found that CPE can extract almost all Ag-NPs tested with very good efficiencies (82-105%). Only Ag-NPs functionalized with BSA (bovine serum albumin), which is a protein with the function to keep colloids in solution, cannot be extracted. No or little effect of environmentally relevant salts, organic matter, and inorganic colloids on the CPE of AgNPs was found. Additionally we used CPE to observe the in situ formation of Ag-NPs produced by the reduction of Ag(+) with natural organic matter (NOM).
NASA Astrophysics Data System (ADS)
Yang, Bisheng; Dong, Zhen; Liu, Yuan; Liang, Fuxun; Wang, Yongjun
2017-04-01
In recent years, updating the inventory of road infrastructures based on field work is labor intensive, time consuming, and costly. Fortunately, vehicle-based mobile laser scanning (MLS) systems provide an efficient solution to rapidly capture three-dimensional (3D) point clouds of road environments with high flexibility and precision. However, robust recognition of road facilities from huge volumes of 3D point clouds is still a challenging issue because of complicated and incomplete structures, occlusions and varied point densities. Most existing methods utilize point or object based features to recognize object candidates, and can only extract limited types of objects with a relatively low recognition rate, especially for incomplete and small objects. To overcome these drawbacks, this paper proposes a semantic labeling framework by combing multiple aggregation levels (point-segment-object) of features and contextual features to recognize road facilities, such as road surfaces, road boundaries, buildings, guardrails, street lamps, traffic signs, roadside-trees, power lines, and cars, for highway infrastructure inventory. The proposed method first identifies ground and non-ground points, and extracts road surfaces facilities from ground points. Non-ground points are segmented into individual candidate objects based on the proposed multi-rule region growing method. Then, the multiple aggregation levels of features and the contextual features (relative positions, relative directions, and spatial patterns) associated with each candidate object are calculated and fed into a SVM classifier to label the corresponding candidate object. The recognition performance of combining multiple aggregation levels and contextual features was compared with single level (point, segment, or object) based features using large-scale highway scene point clouds. Comparative studies demonstrated that the proposed semantic labeling framework significantly improves road facilities recognition precision (90.6%) and recall (91.2%), particularly for incomplete and small objects.
3D micro-mapping: Towards assessing the quality of crowdsourcing to support 3D point cloud analysis
NASA Astrophysics Data System (ADS)
Herfort, Benjamin; Höfle, Bernhard; Klonner, Carolin
2018-03-01
In this paper, we propose a method to crowdsource the task of complex three-dimensional information extraction from 3D point clouds. We design web-based 3D micro tasks tailored to assess segmented LiDAR point clouds of urban trees and investigate the quality of the approach in an empirical user study. Our results for three different experiments with increasing complexity indicate that a single crowdsourcing task can be solved in a very short time of less than five seconds on average. Furthermore, the results of our empirical case study reveal that the accuracy, sensitivity and precision of 3D crowdsourcing are high for most information extraction problems. For our first experiment (binary classification with single answer) we obtain an accuracy of 91%, a sensitivity of 95% and a precision of 92%. For the more complex tasks of the second Experiment 2 (multiple answer classification) the accuracy ranges from 65% to 99% depending on the label class. Regarding the third experiment - the determination of the crown base height of individual trees - our study highlights that crowdsourcing can be a tool to obtain values with even higher accuracy in comparison to an automated computer-based approach. Finally, we found out that the accuracy of the crowdsourced results for all experiments is hardly influenced by characteristics of the input point cloud data and of the users. Importantly, the results' accuracy can be estimated using agreement among volunteers as an intrinsic indicator, which makes a broad application of 3D micro-mapping very promising.
Self-Similar Spin Images for Point Cloud Matching
NASA Astrophysics Data System (ADS)
Pulido, Daniel
The rapid growth of Light Detection And Ranging (Lidar) technologies that collect, process, and disseminate 3D point clouds have allowed for increasingly accurate spatial modeling and analysis of the real world. Lidar sensors can generate massive 3D point clouds of a collection area that provide highly detailed spatial and radiometric information. However, a Lidar collection can be expensive and time consuming. Simultaneously, the growth of crowdsourced Web 2.0 data (e.g., Flickr, OpenStreetMap) have provided researchers with a wealth of freely available data sources that cover a variety of geographic areas. Crowdsourced data can be of varying quality and density. In addition, since it is typically not collected as part of a dedicated experiment but rather volunteered, when and where the data is collected is arbitrary. The integration of these two sources of geoinformation can provide researchers the ability to generate products and derive intelligence that mitigate their respective disadvantages and combine their advantages. Therefore, this research will address the problem of fusing two point clouds from potentially different sources. Specifically, we will consider two problems: scale matching and feature matching. Scale matching consists of computing feature metrics of each point cloud and analyzing their distributions to determine scale differences. Feature matching consists of defining local descriptors that are invariant to common dataset distortions (e.g., rotation and translation). Additionally, after matching the point clouds they can be registered and processed further (e.g., change detection). The objective of this research is to develop novel methods to fuse and enhance two point clouds from potentially disparate sources (e.g., Lidar and crowdsourced Web 2.0 datasets). The scope of this research is to investigate both scale and feature matching between two point clouds. The specific focus of this research will be in developing a novel local descriptor based on the concept of self-similarity to aid in the scale and feature matching steps. An open problem in fusion is how best to extract features from two point clouds and then perform feature-based matching. The proposed approach for this matching step is the use of local self-similarity as an invariant measure to match features. In particular, the proposed approach is to combine the concept of local self-similarity with a well-known feature descriptor, Spin Images, and thereby define "Self-Similar Spin Images". This approach is then extended to the case of matching two points clouds in very different coordinate systems (e.g., a geo-referenced Lidar point cloud and stereo-image derived point cloud without geo-referencing). The use of Self-Similar Spin Images is again applied to address this problem by introducing a "Self-Similar Keyscale" that matches the spatial scales of two point clouds. Another open problem is how best to detect changes in content between two point clouds. A method is proposed to find changes between two point clouds by analyzing the order statistics of the nearest neighbors between the two clouds, and thereby define the "Nearest Neighbor Order Statistic" method. Note that the well-known Hausdorff distance is a special case as being just the maximum order statistic. Therefore, by studying the entire histogram of these nearest neighbors it is expected to yield a more robust method to detect points that are present in one cloud but not the other. This approach is applied at multiple resolutions. Therefore, changes detected at the coarsest level will yield large missing targets and at finer levels will yield smaller targets.
Hamraz, Hamid; Contreras, Marco A; Zhang, Jun
2017-07-28
Airborne laser scanning (LiDAR) point clouds over large forested areas can be processed to segment individual trees and subsequently extract tree-level information. Existing segmentation procedures typically detect more than 90% of overstory trees, yet they barely detect 60% of understory trees because of the occlusion effect of higher canopy layers. Although understory trees provide limited financial value, they are an essential component of ecosystem functioning by offering habitat for numerous wildlife species and influencing stand development. Here we model the occlusion effect in terms of point density. We estimate the fractions of points representing different canopy layers (one overstory and multiple understory) and also pinpoint the required density for reasonable tree segmentation (where accuracy plateaus). We show that at a density of ~170 pt/m² understory trees can likely be segmented as accurately as overstory trees. Given the advancements of LiDAR sensor technology, point clouds will affordably reach this required density. Using modern computational approaches for big data, the denser point clouds can efficiently be processed to ultimately allow accurate remote quantification of forest resources. The methodology can also be adopted for other similar remote sensing or advanced imaging applications such as geological subsurface modelling or biomedical tissue analysis.
NASA Astrophysics Data System (ADS)
Böhm, J.; Bredif, M.; Gierlinger, T.; Krämer, M.; Lindenberg, R.; Liu, K.; Michel, F.; Sirmacek, B.
2016-06-01
Current 3D data capturing as implemented on for example airborne or mobile laser scanning systems is able to efficiently sample the surface of a city by billions of unselective points during one working day. What is still difficult is to extract and visualize meaningful information hidden in these point clouds with the same efficiency. This is where the FP7 IQmulus project enters the scene. IQmulus is an interactive facility for processing and visualizing big spatial data. In this study the potential of IQmulus is demonstrated on a laser mobile mapping point cloud of 1 billion points sampling ~ 10 km of street environment in Toulouse, France. After the data is uploaded to the IQmulus Hadoop Distributed File System, a workflow is defined by the user consisting of retiling the data followed by a PCA driven local dimensionality analysis, which runs efficiently on the IQmulus cloud facility using a Spark implementation. Points scattering in 3 directions are clustered in the tree class, and are separated next into individual trees. Five hours of processing at the 12 node computing cluster results in the automatic identification of 4000+ urban trees. Visualization of the results in the IQmulus fat client helps users to appreciate the results, and developers to identify remaining flaws in the processing workflow.
NASA Astrophysics Data System (ADS)
Sirmacek, B.; Lindenbergh, R. C.; Menenti, M.
2013-10-01
Fusion of 3D airborne laser (LIDAR) data and terrestrial optical imagery can be applied in 3D urban modeling and model up-dating. The most challenging aspect of the fusion procedure is registering the terrestrial optical images on the LIDAR point clouds. In this article, we propose an approach for registering these two different data from different sensor sources. As we use iPhone camera images which are taken in front of the interested urban structure by the application user and the high resolution LIDAR point clouds of the acquired by an airborne laser sensor. After finding the photo capturing position and orientation from the iPhone photograph metafile, we automatically select the area of interest in the point cloud and transform it into a range image which has only grayscale intensity levels according to the distance from the image acquisition position. We benefit from local features for registering the iPhone image to the generated range image. In this article, we have applied the registration process based on local feature extraction and graph matching. Finally, the registration result is used for facade texture mapping on the 3D building surface mesh which is generated from the LIDAR point cloud. Our experimental results indicate possible usage of the proposed algorithm framework for 3D urban map updating and enhancing purposes.
Reconstruction of Consistent 3d CAD Models from Point Cloud Data Using a Priori CAD Models
NASA Astrophysics Data System (ADS)
Bey, A.; Chaine, R.; Marc, R.; Thibault, G.; Akkouche, S.
2011-09-01
We address the reconstruction of 3D CAD models from point cloud data acquired in industrial environments, using a pre-existing 3D model as an initial estimate of the scene to be processed. Indeed, this prior knowledge can be used to drive the reconstruction so as to generate an accurate 3D model matching the point cloud. We more particularly focus our work on the cylindrical parts of the 3D models. We propose to state the problem in a probabilistic framework: we have to search for the 3D model which maximizes some probability taking several constraints into account, such as the relevancy with respect to the point cloud and the a priori 3D model, and the consistency of the reconstructed model. The resulting optimization problem can then be handled using a stochastic exploration of the solution space, based on the random insertion of elements in the configuration under construction, coupled with a greedy management of the conflicts which efficiently improves the configuration at each step. We show that this approach provides reliable reconstructed 3D models by presenting some results on industrial data sets.
Alternative Methods for Estimating Plane Parameters Based on a Point Cloud
NASA Astrophysics Data System (ADS)
Stryczek, Roman
2017-12-01
Non-contact measurement techniques carried out using triangulation optical sensors are increasingly popular in measurements with the use of industrial robots directly on production lines. The result of such measurements is often a cloud of measurement points that is characterized by considerable measuring noise, presence of a number of points that differ from the reference model, and excessive errors that must be eliminated from the analysis. To obtain vector information points contained in the cloud that describe reference models, the data obtained during a measurement should be subjected to appropriate processing operations. The present paperwork presents an analysis of suitability of methods known as RANdom Sample Consensus (RANSAC), Monte Carlo Method (MCM), and Particle Swarm Optimization (PSO) for the extraction of the reference model. The effectiveness of the tested methods is illustrated by examples of measurement of the height of an object and the angle of a plane, which were made on the basis of experiments carried out at workshop conditions.
Automatic Monitoring of Tunnel Deformation Based on High Density Point Clouds Data
NASA Astrophysics Data System (ADS)
Du, L.; Zhong, R.; Sun, H.; Wu, Q.
2017-09-01
An automated method for tunnel deformation monitoring using high density point clouds data is presented. Firstly, the 3D point clouds data are converted to two-dimensional surface by projection on the XOY plane, the projection point set of central axis on XOY plane named Uxoy is calculated by combining the Alpha Shape algorithm with RANSAC (Random Sampling Consistency) algorithm, and then the projection point set of central axis on YOZ plane named Uyoz is obtained by highest and lowest points which are extracted by intersecting straight lines that through each point of Uxoy and perpendicular to the two -dimensional surface with the tunnel point clouds, Uxoy and Uyoz together form the 3D center axis finally. Secondly, the buffer of each cross section is calculated by K-Nearest neighbor algorithm, and the initial cross-sectional point set is quickly constructed by projection method. Finally, the cross sections are denoised and the section lines are fitted using the method of iterative ellipse fitting. In order to improve the accuracy of the cross section, a fine adjustment method is proposed to rotate the initial sectional plane around the intercept point in the horizontal and vertical direction within the buffer. The proposed method is used in Shanghai subway tunnel, and the deformation of each section in the direction of 0 to 360 degrees is calculated. The result shows that the cross sections becomes flat circles from regular circles due to the great pressure at the top of the tunnel
Methods from Information Extraction from LIDAR Intensity Data and Multispectral LIDAR Technology
NASA Astrophysics Data System (ADS)
Scaioni, M.; Höfle, B.; Baungarten Kersting, A. P.; Barazzetti, L.; Previtali, M.; Wujanz, D.
2018-04-01
LiDAR is a consolidated technology for topographic mapping and 3D reconstruction, which is implemented in several platforms On the other hand, the exploitation of the geometric information has been coupled by the use of laser intensity, which may provide additional data for multiple purposes. This option has been emphasized by the availability of sensors working on different wavelength, thus able to provide additional information for classification of surfaces and objects. Several applications ofmonochromatic and multi-spectral LiDAR data have been already developed in different fields: geosciences, agriculture, forestry, building and cultural heritage. The use of intensity data to extract measures of point cloud quality has been also developed. The paper would like to give an overview on the state-of-the-art of these techniques, and to present the modern technologies for the acquisition of multispectral LiDAR data. In addition, the ISPRS WG III/5 on `Information Extraction from LiDAR Intensity Data' has collected and made available a few open data sets to support scholars to do research on this field. This service is presented and data sets delivered so far as are described.
EUREC4A: A Field Campaign to Elucidate the Couplings Between Clouds, Convection and Circulation
NASA Astrophysics Data System (ADS)
Bony, Sandrine; Stevens, Bjorn; Ament, Felix; Bigorre, Sebastien; Chazette, Patrick; Crewell, Susanne; Delanoë, Julien; Emanuel, Kerry; Farrell, David; Flamant, Cyrille; Gross, Silke; Hirsch, Lutz; Karstensen, Johannes; Mayer, Bernhard; Nuijens, Louise; Ruppert, James H.; Sandu, Irina; Siebesma, Pier; Speich, Sabrina; Szczap, Frédéric; Totems, Julien; Vogel, Raphaela; Wendisch, Manfred; Wirth, Martin
2017-11-01
Trade-wind cumuli constitute the cloud type with the highest frequency of occurrence on Earth, and it has been shown that their sensitivity to changing environmental conditions will critically influence the magnitude and pace of future global warming. Research over the last decade has pointed out the importance of the interplay between clouds, convection and circulation in controling this sensitivity. Numerical models represent this interplay in diverse ways, which translates into different responses of trade-cumuli to climate perturbations. Climate models predict that the area covered by shallow cumuli at cloud base is very sensitive to changes in environmental conditions, while process models suggest the opposite. To understand and resolve this contradiction, we propose to organize a field campaign aimed at quantifying the physical properties of trade-cumuli (e.g., cloud fraction and water content) as a function of the large-scale environment. Beyond a better understanding of clouds-circulation coupling processes, the campaign will provide a reference data set that may be used as a benchmark for advancing the modelling and the satellite remote sensing of clouds and circulation. It will also be an opportunity for complementary investigations such as evaluating model convective parameterizations or studying the role of ocean mesoscale eddies in air-sea interactions and convective organization.
EUREC4A: A Field Campaign to Elucidate the Couplings Between Clouds, Convection and Circulation
NASA Astrophysics Data System (ADS)
Bony, Sandrine; Stevens, Bjorn; Ament, Felix; Bigorre, Sebastien; Chazette, Patrick; Crewell, Susanne; Delanoë, Julien; Emanuel, Kerry; Farrell, David; Flamant, Cyrille; Gross, Silke; Hirsch, Lutz; Karstensen, Johannes; Mayer, Bernhard; Nuijens, Louise; Ruppert, James H.; Sandu, Irina; Siebesma, Pier; Speich, Sabrina; Szczap, Frédéric; Totems, Julien; Vogel, Raphaela; Wendisch, Manfred; Wirth, Martin
Trade-wind cumuli constitute the cloud type with the highest frequency of occurrence on Earth, and it has been shown that their sensitivity to changing environmental conditions will critically influence the magnitude and pace of future global warming. Research over the last decade has pointed out the importance of the interplay between clouds, convection and circulation in controling this sensitivity. Numerical models represent this interplay in diverse ways, which translates into different responses of tradecumuli to climate perturbations. Climate models predict that the area covered by shallow cumuli at cloud base is very sensitive to changes in environmental conditions, while process models suggest the opposite. To understand and resolve this contradiction, we propose to organize a field campaign aimed at quantifying the physical properties of tradecumuli (e.g., cloud fraction and water content) as a function of the large-scale environment. Beyond a better understanding of clouds-circulation coupling processes, the campaign will provide a reference data set that may be used as a benchmark for advancing the modelling and the satellite remote sensing of clouds and circulation. It will also be an opportunity for complementary investigations such as evaluating model convective parameterizations or studying the role of ocean mesoscale eddies in air-sea interactions and convective organization.
NASA Astrophysics Data System (ADS)
Nieuwenhuizen, Th. M.; Allahverdyan, A. E.
2002-09-01
The Brownian motion of a quantum particle in a harmonic confining potential and coupled to harmonic quantum thermal bath is exactly solvable. Though this system presents at high temperatures a pedagogic example to explain the laws of thermodynamics, it is shown that at low enough temperatures the stationary state is non-Gibbsian due to an entanglement with the bath. In physical terms, this happens when the cloud of bath modes around the particle starts to play a nontrivial role, namely, when the bath temperature T is smaller than the coupling energy. Indeed, equilibrium thermodynamics of the total system, particle plus bath, does not imply standard equilibrium thermodynamics for the particle itself at low T. Various formulations of the second law are found to be invalid at low T. First, the Clausius inequality can be violated, because heat can be extracted from the zero point energy of the cloud of bath modes. Second, when the width of the confining potential is suddenly changed, there occurs a relaxation to equilibrium during which the entropy production is partly negative. In this process the energy put on the particle does not relax monotonically, but oscillates between particle and bath, even in the limit of strong damping. Third, for nonadiabatic changes of system parameters the rate of energy dissipation can be negative, and, out of equilibrium, cyclic processes are possible which extract work from the bath. Conditions are put forward under which perpetuum mobility of the second kind, having one or several work extraction cycles, enter the realm of condensed matter physics. Fourth, it follows that the equivalence between different formulations of the second law (e.g., those by Clausius and Thomson) can be violated at low temperatures. These effects are the consequence of quantum entanglement in the presence of the slightly off-equilibrium nature of the thermal bath, and become important when the characteristic quantum time scale ħ/kBT is larger than or comparable to other time scales of the system. They show that there is no general consensus between standard thermodynamics and quantum mechanics. The known agreements occur only due to the weak coupling limit, which does not pertain to low temperatures. Experimental setups for testing the effects are discussed.
Nieuwenhuizen, Th M; Allahverdyan, A E
2002-09-01
The Brownian motion of a quantum particle in a harmonic confining potential and coupled to harmonic quantum thermal bath is exactly solvable. Though this system presents at high temperatures a pedagogic example to explain the laws of thermodynamics, it is shown that at low enough temperatures the stationary state is non-Gibbsian due to an entanglement with the bath. In physical terms, this happens when the cloud of bath modes around the particle starts to play a nontrivial role, namely, when the bath temperature T is smaller than the coupling energy. Indeed, equilibrium thermodynamics of the total system, particle plus bath, does not imply standard equilibrium thermodynamics for the particle itself at low T. Various formulations of the second law are found to be invalid at low T. First, the Clausius inequality can be violated, because heat can be extracted from the zero point energy of the cloud of bath modes. Second, when the width of the confining potential is suddenly changed, there occurs a relaxation to equilibrium during which the entropy production is partly negative. In this process the energy put on the particle does not relax monotonically, but oscillates between particle and bath, even in the limit of strong damping. Third, for nonadiabatic changes of system parameters the rate of energy dissipation can be negative, and, out of equilibrium, cyclic processes are possible which extract work from the bath. Conditions are put forward under which perpetuum mobility of the second kind, having one or several work extraction cycles, enter the realm of condensed matter physics. Fourth, it follows that the equivalence between different formulations of the second law (e.g., those by Clausius and Thomson) can be violated at low temperatures. These effects are the consequence of quantum entanglement in the presence of the slightly off-equilibrium nature of the thermal bath, and become important when the characteristic quantum time scale variant Planck's over 2pi /k(B)T is larger than or comparable to other time scales of the system. They show that there is no general consensus between standard thermodynamics and quantum mechanics. The known agreements occur only due to the weak coupling limit, which does not pertain to low temperatures. Experimental setups for testing the effects are discussed.
Feature relevance assessment for the semantic interpretation of 3D point cloud data
NASA Astrophysics Data System (ADS)
Weinmann, M.; Jutzi, B.; Mallet, C.
2013-10-01
The automatic analysis of large 3D point clouds represents a crucial task in photogrammetry, remote sensing and computer vision. In this paper, we propose a new methodology for the semantic interpretation of such point clouds which involves feature relevance assessment in order to reduce both processing time and memory consumption. Given a standard benchmark dataset with 1.3 million 3D points, we first extract a set of 21 geometric 3D and 2D features. Subsequently, we apply a classifier-independent ranking procedure which involves a general relevance metric in order to derive compact and robust subsets of versatile features which are generally applicable for a large variety of subsequent tasks. This metric is based on 7 different feature selection strategies and thus addresses different intrinsic properties of the given data. For the example of semantically interpreting 3D point cloud data, we demonstrate the great potential of smaller subsets consisting of only the most relevant features with 4 different state-of-the-art classifiers. The results reveal that, instead of including as many features as possible in order to compensate for lack of knowledge, a crucial task such as scene interpretation can be carried out with only few versatile features and even improved accuracy.
- and Graph-Based Point Cloud Segmentation of 3d Scenes Using Perceptual Grouping Laws
NASA Astrophysics Data System (ADS)
Xu, Y.; Hoegner, L.; Tuttas, S.; Stilla, U.
2017-05-01
Segmentation is the fundamental step for recognizing and extracting objects from point clouds of 3D scene. In this paper, we present a strategy for point cloud segmentation using voxel structure and graph-based clustering with perceptual grouping laws, which allows a learning-free and completely automatic but parametric solution for segmenting 3D point cloud. To speak precisely, two segmentation methods utilizing voxel and supervoxel structures are reported and tested. The voxel-based data structure can increase efficiency and robustness of the segmentation process, suppressing the negative effect of noise, outliers, and uneven points densities. The clustering of voxels and supervoxel is carried out using graph theory on the basis of the local contextual information, which commonly conducted utilizing merely pairwise information in conventional clustering algorithms. By the use of perceptual laws, our method conducts the segmentation in a pure geometric way avoiding the use of RGB color and intensity information, so that it can be applied to more general applications. Experiments using different datasets have demonstrated that our proposed methods can achieve good results, especially for complex scenes and nonplanar surfaces of objects. Quantitative comparisons between our methods and other representative segmentation methods also confirms the effectiveness and efficiency of our proposals.
Street curb recognition in 3d point cloud data using morphological operations
NASA Astrophysics Data System (ADS)
Rodríguez-Cuenca, Borja; Concepción Alonso-Rodríguez, María; García-Cortés, Silverio; Ordóñez, Celestino
2015-04-01
Accurate and automatic detection of cartographic-entities saves a great deal of time and money when creating and updating cartographic databases. The current trend in remote sensing feature extraction is to develop methods that are as automatic as possible. The aim is to develop algorithms that can obtain accurate results with the least possible human intervention in the process. Non-manual curb detection is an important issue in road maintenance, 3D urban modeling, and autonomous navigation fields. This paper is focused on the semi-automatic recognition of curbs and street boundaries using a 3D point cloud registered by a mobile laser scanner (MLS) system. This work is divided into four steps. First, a coordinate system transformation is carried out, moving from a global coordinate system to a local one. After that and in order to simplify the calculations involved in the procedure, a rasterization based on the projection of the measured point cloud on the XY plane was carried out, passing from the 3D original data to a 2D image. To determine the location of curbs in the image, different image processing techniques such as thresholding and morphological operations were applied. Finally, the upper and lower edges of curbs are detected by an unsupervised classification algorithm on the curvature and roughness of the points that represent curbs. The proposed method is valid in both straight and curved road sections and applicable both to laser scanner and stereo vision 3D data due to the independence of its scanning geometry. This method has been successfully tested with two datasets measured by different sensors. The first dataset corresponds to a point cloud measured by a TOPCON sensor in the Spanish town of Cudillero. That point cloud comprises more than 6,000,000 points and covers a 400-meter street. The second dataset corresponds to a point cloud measured by a RIEGL sensor in the Austrian town of Horn. That point cloud comprises 8,000,000 points and represents a 160-meter street. The proposed method provides success rates in curb recognition of over 85% in both datasets.
Images from Galileo of the Venus cloud deck
Belton, M.J.S.; Gierasch, P.J.; Smith, M.D.; Helfenstein, P.; Schinder, P.J.; Pollack, James B.; Rages, K.A.; Ingersoll, A.P.; Klaasen, K.P.; Veverka, J.; Anger, C.D.; Carr, M.H.; Chapman, C.R.; Davies, M.E.; Fanale, F.P.; Greeley, R.; Greenberg, R.; Head, J. W.; Morrison, D.; Neukum, G.; Pilcher, C.B.
1991-01-01
Images of Venus taken at 418 (violet) and 986 [near-infrared (NIR)] nanometers show that the morphology and motions of large-scale features change with depth in the cloud deck. Poleward meridional velocities, seen in both spectral regions, are much reduced in the NIR. In the south polar region the markings in the two wavelength bands are strongly anticorrelated. The images follow the changing state of the upper cloud layer downwind of the subsolar point, and the zonal flow field shows a longitudinal periodicity that may be coupled to the formation of large-scale planetary waves. No optical lightning was detected.
Han, Quan; Huo, Yanyan; Wu, Jiangyan; He, Yaping; Yang, Xiaohui; Yang, Longhu
2017-03-24
A highly sensitive method based on cloud point extraction (CPE) separation/preconcentration and graphite furnace atomic absorption spectrometry (GFAAS) detection has been developed for the determination of ultra-trace amounts of rhodium in water samples. A new reagent, 2-(5-iodo-2-pyridylazo)-5-dimethylaminoaniline (5-I-PADMA), was used as the chelating agent and the nonionic surfactant TritonX-114 was chosen as extractant. In a HAc-NaAc buffer solution at pH 5.5, Rh(III) reacts with 5-I-PADMA to form a stable chelate by heating in a boiling water bath for 10 min. Subsequently, the chelate is extracted into the surfactant phase and separated from bulk water. The factors affecting CPE were investigated. Under the optimized conditions, the calibration graph was linear in the range of 0.1-6.0 ng/mL, the detection limit was 0.023 ng/mL for rhodium and relative standard deviation was 3.67% ( c = 1.0 ng/mL, n = 11).The method has been applied to the determination of trace rhodium in water samples with satisfactory results.
D Building Reconstruction by Multiview Images and the Integrated Application with Augmented Reality
NASA Astrophysics Data System (ADS)
Hwang, Jin-Tsong; Chu, Ting-Chen
2016-10-01
This study presents an approach wherein photographs with a high degree of overlap are clicked using a digital camera and used to generate three-dimensional (3D) point clouds via feature point extraction and matching. To reconstruct a building model, an unmanned aerial vehicle (UAV) is used to click photographs from vertical shooting angles above the building. Multiview images are taken from the ground to eliminate the shielding effect on UAV images caused by trees. Point clouds from the UAV and multiview images are generated via Pix4Dmapper. By merging two sets of point clouds via tie points, the complete building model is reconstructed. The 3D models are reconstructed using AutoCAD 2016 to generate vectors from the point clouds; SketchUp Make 2016 is used to rebuild a complete building model with textures. To apply 3D building models in urban planning and design, a modern approach is to rebuild the digital models; however, replacing the landscape design and building distribution in real time is difficult as the frequency of building replacement increases. One potential solution to these problems is augmented reality (AR). Using Unity3D and Vuforia to design and implement the smartphone application service, a markerless AR of the building model can be built. This study is aimed at providing technical and design skills related to urban planning, urban designing, and building information retrieval using AR.
Determination of total selenium in food samples by d-CPE and HG-AFS.
Wang, Mei; Zhong, Yizhou; Qin, Jinpeng; Zhang, Zehua; Li, Shan; Yang, Bingyi
2017-07-15
A dual-cloud point extraction (d-CPE) procedure was developed for the simultaneous preconcentration and determination of trace level Se in food samples by hydride generation-atomic fluorescence spectrometry (HG-AFS). The Se(IV) was complexed with ammonium pyrrolidinedithiocarbamate (APDC) in a Triton X-114 surfactant-rich phase, which was then treated with a mixture of 16% (v/v) HCl and 20% (v/v) H 2 O 2 . This converted the Se(IV)-APDC into free Se(IV), which was back extracted into an aqueous phase at the second cloud point extraction stage. This aqueous phase was analyzed directly by HG-AFS. Optimization of the experimental conditions gave a limit of detection of 0.023μgL -1 with an enhancement factor of 11.8 when 50mL of sample solution was preconcentrated to 3mL. The relative standard deviation was 4.04% (c=6.0μgL -1 , n=10). The proposed method was applied to determine the Se contents in twelve food samples with satisfactory recoveries of 95.6-105.2%. Copyright © 2016 Elsevier Ltd. All rights reserved.
Ordóñez, Celestino; Cabo, Carlos; Sanz-Ablanedo, Enoc
2017-01-01
Mobile laser scanning (MLS) is a modern and powerful technology capable of obtaining massive point clouds of objects in a short period of time. Although this technology is nowadays being widely applied in urban cartography and 3D city modelling, it has some drawbacks that need to be avoided in order to strengthen it. One of the most important shortcomings of MLS data is concerned with the fact that it provides an unstructured dataset whose processing is very time-consuming. Consequently, there is a growing interest in developing algorithms for the automatic extraction of useful information from MLS point clouds. This work is focused on establishing a methodology and developing an algorithm to detect pole-like objects and classify them into several categories using MLS datasets. The developed procedure starts with the discretization of the point cloud by means of a voxelization, in order to simplify and reduce the processing time in the segmentation process. In turn, a heuristic segmentation algorithm was developed to detect pole-like objects in the MLS point cloud. Finally, two supervised classification algorithms, linear discriminant analysis and support vector machines, were used to distinguish between the different types of poles in the point cloud. The predictors are the principal component eigenvalues obtained from the Cartesian coordinates of the laser points, the range of the Z coordinate, and some shape-related indexes. The performance of the method was tested in an urban area with 123 poles of different categories. Very encouraging results were obtained, since the accuracy rate was over 90%. PMID:28640189
Extracting Topological Relations Between Indoor Spaces from Point Clouds
NASA Astrophysics Data System (ADS)
Tran, H.; Khoshelham, K.; Kealy, A.; Díaz-Vilariño, L.
2017-09-01
3D models of indoor environments are essential for many application domains such as navigation guidance, emergency management and a range of indoor location-based services. The principal components defined in different BIM standards contain not only building elements, such as floors, walls and doors, but also navigable spaces and their topological relations, which are essential for path planning and navigation. We present an approach to automatically reconstruct topological relations between navigable spaces from point clouds. Three types of topological relations, namely containment, adjacency and connectivity of the spaces are modelled. The results of initial experiments demonstrate the potential of the method in supporting indoor navigation.
Exploring point-cloud features from partial body views for gender classification
NASA Astrophysics Data System (ADS)
Fouts, Aaron; McCoppin, Ryan; Rizki, Mateen; Tamburino, Louis; Mendoza-Schrock, Olga
2012-06-01
In this paper we extend a previous exploration of histogram features extracted from 3D point cloud images of human subjects for gender discrimination. Feature extraction used a collection of concentric cylinders to define volumes for counting 3D points. The histogram features are characterized by a rotational axis and a selected set of volumes derived from the concentric cylinders. The point cloud images are drawn from the CAESAR anthropometric database provided by the Air Force Research Laboratory (AFRL) Human Effectiveness Directorate and SAE International. This database contains approximately 4400 high resolution LIDAR whole body scans of carefully posed human subjects. Success from our previous investigation was based on extracting features from full body coverage which required integration of multiple camera images. With the full body coverage, the central vertical body axis and orientation are readily obtainable; however, this is not the case with a one camera view providing less than one half body coverage. Assuming that the subjects are upright, we need to determine or estimate the position of the vertical axis and the orientation of the body about this axis relative to the camera. In past experiments the vertical axis was located through the center of mass of torso points projected on the ground plane and the body orientation derived using principle component analysis. In a natural extension of our previous work to partial body views, the absence of rotational invariance about the cylindrical axis greatly increases the difficulty for gender classification. Even the problem of estimating the axis is no longer simple. We describe some simple feasibility experiments that use partial image histograms. Here, the cylindrical axis is assumed to be known. We also discuss experiments with full body images that explore the sensitivity of classification accuracy relative to displacements of the cylindrical axis. Our initial results provide the basis for further investigation of more complex partial body viewing problems and new methods for estimating the two position coordinates for the axis location and the unknown body orientation angle.
Madej, Katarzyna; Persona, Karolina; Wandas, Monika; Gomółka, Ewa
2013-10-18
A complex extraction system with the use of cloud-point extraction technique (CPE) was developed for sequential isolation of basic and acidic/neutral medicaments from human plasma/serum, screened by HPLC/DAD method. Eight model drugs (paracetamol, promazine, chlorpromazine, amitriptyline, salicyclic acid, opipramol, alprazolam and carbamazepine) were chosen for the study of optimal CPE conditions. The CPE technique consists in partition of an aqueous sample with addition of a surfactant into two phases: micelle-rich phase with the isolated compounds and water phase containing a surfactant below the critical micellar concentration, mainly under influence of temperature change. The proposed extraction system consists of two chief steps: isolation of basic compounds (from pH 12) and then isolation of acidic/neutral compounds (from pH 6) using surfactant Triton X-114 as the extraction medium. Extraction recovery varied from 25.2 to 107.9% with intra-day and inter-day precision (RSD %) ranged 0.88-1087 and 5.32-17.96, respectively. The limits of detection for the studied medicaments at λ 254nm corresponded to therapeutic or low toxic plasma concentration levels. Usefulness of the proposed CPE-HPLC/DAD method for toxicological drug screening was tested via its application to analysis of two serum samples taken from patients suspected of drug overdosing. Published by Elsevier B.V.
Extraction of Features from High-resolution 3D LiDaR Point-cloud Data
NASA Astrophysics Data System (ADS)
Keller, P.; Kreylos, O.; Hamann, B.; Kellogg, L. H.; Cowgill, E. S.; Yikilmaz, M. B.; Hering-Bertram, M.; Hagen, H.
2008-12-01
Airborne and tripod-based LiDaR scans are capable of producing new insight into geologic features by providing high-quality 3D measurements of the landscape. High-resolution LiDaR is a promising method for studying slip on faults, erosion, and other landscape-altering processes. LiDaR scans can produce up to several billion individual point returns associated with the reflection of a laser from natural and engineered surfaces; these point clouds are typically used to derive a high-resolution digital elevation model (DEM). Currently, there exist only few methods that can support the analysis of the data at full resolution and in the natural 3D perspective in which it was collected by working directly with the points. We are developing new algorithms for extracting features from LiDaR scans, and present method for determining the local curvature of a LiDaR data set, working directly with the individual point returns of a scan. Computing the curvature enables us to rapidly and automatically identify key features such as ridge-lines, stream beds, and edges of terraces. We fit polynomial surface patches via a moving least squares (MLS) approach to local point neighborhoods, determining curvature values for each point. The size of the local point neighborhood is defined by a user. Since both terrestrial and airborne LiDaR scans suffer from high noise, we apply additional pre- and post-processing smoothing steps to eliminate unwanted features. LiDaR data also captures objects like buildings and trees complicating greatly the task of extracting reliable curvature values. Hence, we use a stochastic approach to determine whether a point can be reliably used to estimate curvature or not. Additionally, we have developed a graph-based approach to establish connectivities among points that correspond to regions of high curvature. The result is an explicit description of ridge-lines, for example. We have applied our method to the raw point cloud data collected as part of the GeoEarthScope B-4 project on a section of the San Andreas Fault (Segment SA09). This section provides an excellent test site for our method as it exposes the fault clearly, contains few extraneous structures, and exhibits multiple dry stream-beds that have been off-set by motion on the fault.
NASA Astrophysics Data System (ADS)
Jensen, M. P.; Miller, M. A.; Wang, J.
2017-12-01
The first Intensive Observation Period of the DOE Aerosol and Cloud Experiments in the Eastern North Atlantic (ACE-ENA) took place from 21 June through 20 July 2017 involving the deployment of the ARM Gulfstream-159 (G-1) aircraft with a suite of in situ cloud and aerosol instrumentation in the vicinity of the ARM Climate Research Facility Eastern North Atlantic (ENA) site on Graciosa Island, Azores. Here we present preliminary analysis of the thermodynamic characteristics of the marine boundary layer and the variability of cloud properties for a mixed cloud field including both stratiform cloud layers and deeper cumulus elements. Analysis combines in situ atmospheric state observations from the G-1 with radiosonde profiles and surface meteorology from the ENA site in order to characterize the thermodynamic structure of the marine boundary layer including the coupling state and stability. Cloud/drizzle droplet size distributions measured in situ are combined with remote sensing observations from a scanning cloud radar, and vertically pointing cloud radar and lidar provide quantification of the macrophysical and microphysical properties of the mixed cloud field.
Indoor Navigation from Point Clouds: 3d Modelling and Obstacle Detection
NASA Astrophysics Data System (ADS)
Díaz-Vilariño, L.; Boguslawski, P.; Khoshelham, K.; Lorenzo, H.; Mahdjoubi, L.
2016-06-01
In the recent years, indoor modelling and navigation has become a research of interest because many stakeholders require navigation assistance in various application scenarios. The navigational assistance for blind or wheelchair people, building crisis management such as fire protection, augmented reality for gaming, tourism or training emergency assistance units are just some of the direct applications of indoor modelling and navigation. Navigational information is traditionally extracted from 2D drawings or layouts. Real state of indoors, including opening position and geometry for both windows and doors, and the presence of obstacles is commonly ignored. In this work, a real indoor-path planning methodology based on 3D point clouds is developed. The value and originality of the approach consist on considering point clouds not only for reconstructing semantically-rich 3D indoor models, but also for detecting potential obstacles in the route planning and using these for readapting the routes according to the real state of the indoor depictured by the laser scanner.
Comparative Analysis of Data Structures for Storing Massive Tins in a Dbms
NASA Astrophysics Data System (ADS)
Kumar, K.; Ledoux, H.; Stoter, J.
2016-06-01
Point cloud data are an important source for 3D geoinformation. Modern day 3D data acquisition and processing techniques such as airborne laser scanning and multi-beam echosounding generate billions of 3D points for simply an area of few square kilometers. With the size of the point clouds exceeding the billion mark for even a small area, there is a need for their efficient storage and management. These point clouds are sometimes associated with attributes and constraints as well. Storing billions of 3D points is currently possible which is confirmed by the initial implementations in Oracle Spatial SDO PC and the PostgreSQL Point Cloud extension. But to be able to analyse and extract useful information from point clouds, we need more than just points i.e. we require the surface defined by these points in space. There are different ways to represent surfaces in GIS including grids, TINs, boundary representations, etc. In this study, we investigate the database solutions for the storage and management of massive TINs. The classical (face and edge based) and compact (star based) data structures are discussed at length with reference to their structure, advantages and limitations in handling massive triangulations and are compared with the current solution of PostGIS Simple Feature. The main test dataset is the TIN generated from third national elevation model of the Netherlands (AHN3) with a point density of over 10 points/m2. PostgreSQL/PostGIS DBMS is used for storing the generated TIN. The data structures are tested with the generated TIN models to account for their geometry, topology, storage, indexing, and loading time in a database. Our study is useful in identifying what are the limitations of the existing data structures for storing massive TINs and what is required to optimise these structures for managing massive triangulations in a database.
NASA Astrophysics Data System (ADS)
Weinmann, M.; Müller, M. S.; Hillemann, M.; Reydel, N.; Hinz, S.; Jutzi, B.
2017-08-01
In this paper, we focus on UAV-borne laser scanning with the objective of densely sampling object surfaces in the local surrounding of the UAV. In this regard, using a line scanner which scans along the vertical direction and perpendicular to the flight direction results in a point cloud with low point density if the UAV moves fast. Using a line scanner which scans along the horizontal direction only delivers data corresponding to the altitude of the UAV and thus a low scene coverage. For these reasons, we present a concept and a system for UAV-borne laser scanning using multiple line scanners. Our system consists of a quadcopter equipped with horizontally and vertically oriented line scanners. We demonstrate the capabilities of our system by presenting first results obtained for a flight within an outdoor scene. Thereby, we use a downsampling of the original point cloud and different neighborhood types to extract fundamental geometric features which in turn can be used for scene interpretation with respect to linear, planar or volumetric structures.
NASA Astrophysics Data System (ADS)
Yu, P.; Wu, H.; Liu, C.; Xu, Z.
2018-04-01
Diagnosis of water leakage in metro tunnels is of great significance to the metro tunnel construction and the safety of metro operation. A method that integrates laser scanning and infrared thermal imaging is proposed for the diagnosis of water leakage. The diagnosis of water leakage in this paper is mainly divided into two parts: extraction of water leakage geometry information and extraction of water leakage attribute information. Firstly, the suspected water leakage is obtained by threshold segmentation based on the point cloud of tunnel. And the real water leakage is obtained by the auxiliary interpretation of infrared thermal images. Then, the characteristic of isotherm outline is expressed by solving Centroid Distance Function to determine the type of water leakage. Similarly, the location of leakage silt and the direction of crack are calculated by finding coordinates of feature points on Centroid Distance Function. Finally, a metro tunnel part in Shanghai was selected as the case area to make experiment and the result shown that the proposed method in this paper can be used to diagnosis water leakage disease completely and accurately.
Contrasting Cloud Composition Between Coupled and Decoupled Marine Boundary Layer Clouds
NASA Astrophysics Data System (ADS)
WANG, Z.; Mora, M.; Dadashazar, H.; MacDonald, A.; Crosbie, E.; Bates, K. H.; Coggon, M. M.; Craven, J. S.; Xian, P.; Campbell, J. R.; AzadiAghdam, M.; Woods, R. K.; Jonsson, H.; Flagan, R. C.; Seinfeld, J.; Sorooshian, A.
2016-12-01
Marine stratocumulus clouds often become decoupled from the vertical layer immediately above the ocean surface. This study contrasts cloud chemical composition between coupled and decoupled marine stratocumulus clouds. Cloud water and droplet residual particle composition were measured in clouds off the California coast during three airborne experiments in July-August of separate years (E-PEACE 2011, NiCE 2013, BOAS 2015). Decoupled clouds exhibited significantly lower overall mass concentrations in both cloud water and droplet residual particles, consistent with reduced cloud droplet number concentration and sub-cloud aerosol (Dp > 100 nm) number concentration, owing to detachment from surface sources. Non-refractory sub-micrometer aerosol measurements show that coupled clouds exhibit higher sulfate mass fractions in droplet residual particles, owing to more abundant precursor emissions from the ocean and ships. Consequently, decoupled clouds exhibited higher mass fractions of organics, nitrate, and ammonium in droplet residual particles, owing to effects of long-range transport from more distant sources. Total cloud water mass concentration in coupled clouds was dominated by sodium and chloride, and their mass fractions and concentrations exceeded those in decoupled clouds. Conversely, with the exception of sea salt constituents (e.g., Cl, Na, Mg, K), cloud water mass fractions of all species examined were higher in decoupled clouds relative to coupled clouds. These results suggest that an important variable is the extent to which clouds are coupled to the surface layer when interpreting microphysical data relevant to clouds and aerosol particles.
Ghate, Virendra P.; Albrecht, Bruce A.; Miller, Mark A.; ...
2014-01-13
Observations made during a 24-h period as part of the Variability of the American Monsoon Systems (VAMOS) Ocean–Cloud–Atmosphere–Land Study Regional Experiment (VOCALS-REx) are analyzed to study the radiation and turbulence associated with the stratocumulus-topped marine boundary layer (BL). The first 14 h exhibited a well-mixed (coupled) BL with an average cloud-top radiative flux divergence of ~130 W m 22; the BL was decoupled during the last 10 h with negligible radiative flux divergence. The averaged radiative cooling very close to the cloud top was -9.04 K h -1 in coupled conditions and -3.85 K h -1 in decoupled conditions. Thismore » is the first study that combined data from a vertically pointing Doppler cloud radar and a Doppler lidar to yield the vertical velocity structure of the entire BL. The averaged vertical velocity variance and updraft mass flux during coupled conditions were higher than those during decoupled conditions at all levels by a factor of 2 or more. The vertical velocity skewness was negative in the entire BL during coupled conditions, whereas it was weakly positive in the lower third of the BL and negative above during decoupled conditions. A formulation of velocity scale is proposed that includes the effect of cloud-top radiative cooling in addition to the surface buoyancy flux. When scaled by the velocity scale, the vertical velocity variance and coherent downdrafts had similar magnitude during the coupled and decoupled conditions. Finally, the coherent updrafts that exhibited a constant profile in the entire BL during both the coupled and decoupled conditions scaled well with the convective velocity scale to a value of ~0.5.« less
Semantic Segmentation of Indoor Point Clouds Using Convolutional Neural Network
NASA Astrophysics Data System (ADS)
Babacan, K.; Chen, L.; Sohn, G.
2017-11-01
As Building Information Modelling (BIM) thrives, geometry becomes no longer sufficient; an ever increasing variety of semantic information is needed to express an indoor model adequately. On the other hand, for the existing buildings, automatically generating semantically enriched BIM from point cloud data is in its infancy. The previous research to enhance the semantic content rely on frameworks in which some specific rules and/or features that are hand coded by specialists. These methods immanently lack generalization and easily break in different circumstances. On this account, a generalized framework is urgently needed to automatically and accurately generate semantic information. Therefore we propose to employ deep learning techniques for the semantic segmentation of point clouds into meaningful parts. More specifically, we build a volumetric data representation in order to efficiently generate the high number of training samples needed to initiate a convolutional neural network architecture. The feedforward propagation is used in such a way to perform the classification in voxel level for achieving semantic segmentation. The method is tested both for a mobile laser scanner point cloud, and a larger scale synthetically generated data. We also demonstrate a case study, in which our method can be effectively used to leverage the extraction of planar surfaces in challenging cluttered indoor environments.
Improvement of the cloud point extraction of uranyl ions by the addition of ionic liquids.
Gao, Song; Sun, Taoxiang; Chen, Qingde; Shen, Xinghai
2013-12-15
The cloud point extraction (CPE) of uranyl ions by different kinds of extractants in Triton X-114 (TX-114) micellar solution was investigated upon the addition of ionic liquids (ILs) with various anions, i.e., bromide (Br(-)), tetrafluoroborate (BF4(-)), hexafluorophosphate (PF6(-)) and bis[(trifluoromethyl)sulfonyl]imide (NTf2(-)). A significant increase of the extraction efficiency was found on the addition of NTf2(-) based ILs when using neutral extractant tri-octylphosphine oxide (TOPO), and the extraction efficiency kept high at both nearly neutral and high acidity. However, the CPE with acidic extractants, e.g., bis(2-ethylhexyl) phosphoric acid (HDEHP) and 8-hydroxyquinoline (8-HQ) which are only effective at nearly neutral condition, was not improved by ILs. The results of zeta potential and (19)F NMR measurements indicated that the anion NTf2(-) penetrated into the TX-114 micelles and was enriched in the surfactant-rich phase during the CPE process. Meanwhile, NTf2(-) may act as a counterion in the CPE of UO2(2+) by TOPO. Furthermore, the addition of IL increased the separation factor of UO2(2+) and La(3+), which implied that in the micelle TOPO, NTf2(-) and NO3(-) established a soft template for UO2(2+). Therefore, the combination of CPE and IL provided a supramolecular recognition to concentrate UO2(2+) efficiently and selectively. Copyright © 2013 Elsevier B.V. All rights reserved.
Supervised Outlier Detection in Large-Scale Mvs Point Clouds for 3d City Modeling Applications
NASA Astrophysics Data System (ADS)
Stucker, C.; Richard, A.; Wegner, J. D.; Schindler, K.
2018-05-01
We propose to use a discriminative classifier for outlier detection in large-scale point clouds of cities generated via multi-view stereo (MVS) from densely acquired images. What makes outlier removal hard are varying distributions of inliers and outliers across a scene. Heuristic outlier removal using a specific feature that encodes point distribution often delivers unsatisfying results. Although most outliers can be identified correctly (high recall), many inliers are erroneously removed (low precision), too. This aggravates object 3D reconstruction due to missing data. We thus propose to discriminatively learn class-specific distributions directly from the data to achieve high precision. We apply a standard Random Forest classifier that infers a binary label (inlier or outlier) for each 3D point in the raw, unfiltered point cloud and test two approaches for training. In the first, non-semantic approach, features are extracted without considering the semantic interpretation of the 3D points. The trained model approximates the average distribution of inliers and outliers across all semantic classes. Second, semantic interpretation is incorporated into the learning process, i.e. we train separate inlieroutlier classifiers per semantic class (building facades, roof, ground, vegetation, fields, and water). Performance of learned filtering is evaluated on several large SfM point clouds of cities. We find that results confirm our underlying assumption that discriminatively learning inlier-outlier distributions does improve precision over global heuristics by up to ≍ 12 percent points. Moreover, semantically informed filtering that models class-specific distributions further improves precision by up to ≍ 10 percent points, being able to remove very isolated building, roof, and water points while preserving inliers on building facades and vegetation.
Lost in Virtual Reality: Pathfinding Algorithms Detect Rock Fractures and Contacts in Point Clouds
NASA Astrophysics Data System (ADS)
Thiele, S.; Grose, L.; Micklethwaite, S.
2016-12-01
UAV-based photogrammetric and LiDAR techniques provide high resolution 3D point clouds and ortho-rectified photomontages that can capture surface geology in outstanding detail over wide areas. Automated and semi-automated methods are vital to extract full value from these data in practical time periods, though the nuances of geological structures and materials (natural variability in colour and geometry, soft and hard linkage, shadows and multiscale properties) make this a challenging task. We present a novel method for computer assisted trace detection in dense point clouds, using a lowest cost path solver to "follow" fracture traces and lithological contacts between user defined end points. This is achieved by defining a local neighbourhood network where each point in the cloud is linked to its neighbours, and then using a least-cost path algorithm to search this network and estimate the trace of the fracture or contact. A variety of different algorithms can then be applied to calculate the best fit plane, produce a fracture network, or map properties such as roughness, curvature and fracture intensity. Our prototype of this method (Fig. 1) suggests the technique is feasible and remarkably good at following traces under non-optimal conditions such as variable-shadow, partial occlusion and complex fracturing. Furthermore, if a fracture is initially mapped incorrectly, the user can easily provide further guidance by defining intermediate waypoints. Future development will include optimization of the algorithm to perform well on large point clouds and modifications that permit the detection of features such as step-overs. We also plan on implementing this approach in an interactive graphical user environment.
Automatic 3D Extraction of Buildings, Vegetation and Roads from LIDAR Data
NASA Astrophysics Data System (ADS)
Bellakaout, A.; Cherkaoui, M.; Ettarid, M.; Touzani, A.
2016-06-01
Aerial topographic surveys using Light Detection and Ranging (LiDAR) technology collect dense and accurate information from the surface or terrain; it is becoming one of the important tools in the geosciences for studying objects and earth surface. Classification of Lidar data for extracting ground, vegetation, and buildings is a very important step needed in numerous applications such as 3D city modelling, extraction of different derived data for geographical information systems (GIS), mapping, navigation, etc... Regardless of what the scan data will be used for, an automatic process is greatly required to handle the large amount of data collected because the manual process is time consuming and very expensive. This paper is presenting an approach for automatic classification of aerial Lidar data into five groups of items: buildings, trees, roads, linear object and soil using single return Lidar and processing the point cloud without generating DEM. Topological relationship and height variation analysis is adopted to segment, preliminary, the entire point cloud preliminarily into upper and lower contours, uniform and non-uniform surface, non-uniform surfaces, linear objects, and others. This primary classification is used on the one hand to know the upper and lower part of each building in an urban scene, needed to model buildings façades; and on the other hand to extract point cloud of uniform surfaces which contain roofs, roads and ground used in the second phase of classification. A second algorithm is developed to segment the uniform surface into buildings roofs, roads and ground, the second phase of classification based on the topological relationship and height variation analysis, The proposed approach has been tested using two areas : the first is a housing complex and the second is a primary school. The proposed approach led to successful classification results of buildings, vegetation and road classes.
Murugesan, Sivananth; Iyyaswami, Regupathi
2017-08-15
Low frequency sonic waves, less than 10kHz were introduced to assist cloud point extraction of polyhydroxyalkanoate from Cupriavidus necator present within the crude broth. Process parameters including surfactant system variables and sonication parameters were studied for their effect on extraction efficiency. Introduction of low frequency sonic waves assists in the dissolution of microbial cell wall by the surfactant micelles and release of cellular content, polyhydroxyalkanoate granules released were encapsulated by the micelle core which was confirmed by crotonic acid assay. In addition, sonic waves resulted in the separation of homogeneous surfactant and broth mixture into two distinct phases, top aqueous phase and polyhydroxyalkanoate enriched bottom surfactant rich phase. Mixed surfactant systems showed higher extraction efficiency compared to that of individual Triton X-100 concentrations, owing to increase in the hydrophobicity of the micellar core and its interaction with polyhydroxyalkanoate. Addition of salts to the mixed surfactant system induces screening of charged surfactant head groups and reduces inter-micellar repulsion, presence of ammonium ions lead to electrostatic repulsion and weaker cation sodium enhances the formation of micellar network. Addition of polyethylene glycol 8000 resulted in increasing interaction with the surfactant tails of the micelle core there by reducing the purity of polyhydroxyalkanoate. Copyright © 2017 Elsevier B.V. All rights reserved.
Hartmann, Georg; Schuster, Michael
2013-01-25
The determination of metallic nanoparticles in environmental samples requires sample pretreatment that ideally combines pre-concentration and species selectivity. With cloud point extraction (CPE) using the surfactant Triton X-114 we present a simple and cost effective separation technique that meets both criteria. Effective separation of ionic gold species and Au nanoparticles (Au-NPs) is achieved by using sodium thiosulphate as a complexing agent. The extraction efficiency for Au-NP ranged from 1.01 ± 0.06 (particle size 2 nm) to 0.52 ± 0.16 (particle size 150 nm). An enrichment factor of 80 and a low limit of detection of 5 ng L(-1) is achieved using electrothermal atomic absorption spectrometry (ET-AAS) for quantification. TEM measurements showed that the particle size is not affected by the CPE process. Natural organic matter (NOM) is tolerated up to a concentration of 10 mg L(-1). The precision of the method expressed as the standard deviation of 12 replicates at an Au-NP concentration of 100 ng L(-1) is 9.5%. A relation between particle concentration and the extraction efficiency was not observed. Spiking experiments showed a recovery higher than 91% for environmental water samples. Copyright © 2012 Elsevier B.V. All rights reserved.
Khan, Sumaira; Kazi, Tasneem G; Baig, Jameel A; Kolachi, Nida F; Afridi, Hassan I; Wadhwa, Sham Kumar; Shah, Abdul Q; Kandhro, Ghulam A; Shah, Faheem
2010-10-15
A cloud point extraction (CPE) method has been developed for the determination of trace quantity of vanadium ions in pharmaceutical formulations (PF), dialysate (DS) and parenteral solutions (PS). The CPE of vanadium (V) using 8-hydroxyquinoline (oxine) as complexing reagent and mediated by nonionic surfactant (Triton X-114) was investigated. The parameters that affect the extraction efficiency of CPE, such as pH of sample solution, concentration of oxine and Triton X-114, equilibration temperature and time period for shaking were investigated in detail. The validity of CPE of V was checked by standard addition method in real samples. The extracted surfactant-rich phase was diluted with nitric acid in ethanol, prior to subjecting electrothermal atomic absorption spectrometry. Under these conditions, the preconcentration of 50 mL sample solutions, allowed raising an enrichment factor of 125-fold. The lower limit of detection obtained under the optimal conditions was 42 ng/L. The proposed method has been successfully applied to the determination of trace quantity of V in various pharmaceutical preparations with satisfactory results. The concentration ranges of V in PF, DS and PS samples were found in the range of 10.5-15.2, 0.65-1.32 and 1.76-6.93 microg/L, respectively. 2010 Elsevier B.V. All rights reserved.
Zhao, Qi; Ding, Jie; Jin, Haiyan; Ding, Lan; Ren, Nanqi
2013-04-01
A method based on cloud point extraction (CPE) coupled with high-performance liquid chromatography separation and ultraviolet (UV) detection was developed to determine andrographolide and dehydroandrographolide in human plasma. The nonionic surfactant Triton X-114 was chosen as the extraction medium. Variable parameters affecting the CPE efficiency were evaluated and optimized, such as concentrations of Triton X-114 and NaCl, pH, equilibration temperature and equilibration time. A Zorbax SB C18 column (250 × 4.6 mm i.d., 5 µm) was used for separation of the two analytes at 30°C. The UV detection was performed at 254 nm. Under the optimum conditions, the limits of detection of andrographolide and dehydroandrographolide are 0.032 and 0.019 µg/mL, respectively. The intra-day and inter-day precisions expressed as relative standard deviation ranged from 3.2 to 7.3% and from 2.9 and 8.6%. The recoveries of andrographolide and dehydroandrographolide were in the range of 76.8-98.6% at three fortified concentrations of 0.1, 0.5 and 1.0 µg/mL. This method was efficient, environmentally friendly, rapid and inexpensive for the extraction and determination of andrographolide and dehydroandrographolide in human plasma.
Souto, Leonardo A V; Castro, André; Gonçalves, Luiz Marcos Garcia; Nascimento, Tiago P
2017-08-08
Natural landmarks are the main features in the next step of the research in localization of mobile robot platforms. The identification and recognition of these landmarks are crucial to better localize a robot. To help solving this problem, this work proposes an approach for the identification and recognition of natural marks included in the environment using images from RGB-D (Red, Green, Blue, Depth) sensors. In the identification step, a structural analysis of the natural landmarks that are present in the environment is performed. The extraction of edge points of these landmarks is done using the 3D point cloud obtained from the RGB-D sensor. These edge points are smoothed through the S l 0 algorithm, which minimizes the standard deviation of the normals at each point. Then, the second step of the proposed algorithm begins, which is the proper recognition of the natural landmarks. This recognition step is done as a real-time algorithm that extracts the points referring to the filtered edges and determines to which structure they belong to in the current scenario: stairs or doors. Finally, the geometrical characteristics that are intrinsic to the doors and stairs are identified. The approach proposed here has been validated with real robot experiments. The performed tests verify the efficacy of our proposed approach.
Castro, André; Nascimento, Tiago P.
2017-01-01
Natural landmarks are the main features in the next step of the research in localization of mobile robot platforms. The identification and recognition of these landmarks are crucial to better localize a robot. To help solving this problem, this work proposes an approach for the identification and recognition of natural marks included in the environment using images from RGB-D (Red, Green, Blue, Depth) sensors. In the identification step, a structural analysis of the natural landmarks that are present in the environment is performed. The extraction of edge points of these landmarks is done using the 3D point cloud obtained from the RGB-D sensor. These edge points are smoothed through the Sl0 algorithm, which minimizes the standard deviation of the normals at each point. Then, the second step of the proposed algorithm begins, which is the proper recognition of the natural landmarks. This recognition step is done as a real-time algorithm that extracts the points referring to the filtered edges and determines to which structure they belong to in the current scenario: stairs or doors. Finally, the geometrical characteristics that are intrinsic to the doors and stairs are identified. The approach proposed here has been validated with real robot experiments. The performed tests verify the efficacy of our proposed approach. PMID:28786925
DECONTAMINATION OF DDT-POLLUTED SOIL BY SOIL WASHING/CLOUD POINT EXTRACTION (R822832)
The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...
Stege, Patricia W; Sombra, Lorena L; Messina, Germán A; Martinez, Luis D; Silva, María F
2009-05-01
Many aromatic compounds can be found in the environment as a result of anthropogenic activities and some of them are highly toxic. The need to determine low concentrations of pollutants requires analytical methods with high sensitivity, selectivity, and resolution for application to soil, sediment, water, and other environmental samples. Complex sample preparation involving analyte isolation and enrichment is generally necessary before the final analysis. The present paper outlines a novel, simple, low-cost, and environmentally friendly method for the simultaneous determination of p-nitrophenol (PNP), p-aminophenol (PAP), and hydroquinone (HQ) by micellar electrokinetic capillary chromatography after preconcentration by cloud point extraction. Enrichment factors of 180 to 200 were achieved. The limits of detection of the analytes for the preconcentration of 50-ml sample volume were 0.10 microg L(-1) for PNP, 0.20 microg L(-1) for PAP, and 0.16 microg L(-1) for HQ. The optimized procedure was applied to the determination of phenolic pollutants in natural waters from San Luis, Argentina.
Monitoring Aircraft Motion at Airports by LIDAR
NASA Astrophysics Data System (ADS)
Toth, C.; Jozkow, G.; Koppanyi, Z.; Young, S.; Grejner-Brzezinska, D.
2016-06-01
Improving sensor performance, combined with better affordability, provides better object space observability, resulting in new applications. Remote sensing systems are primarily concerned with acquiring data of the static components of our environment, such as the topographic surface of the earth, transportation infrastructure, city models, etc. Observing the dynamic component of the object space is still rather rare in the geospatial application field; vehicle extraction and traffic flow monitoring are a few examples of using remote sensing to detect and model moving objects. Deploying a network of inexpensive LiDAR sensors along taxiways and runways can provide both geometrically and temporally rich geospatial data that aircraft body can be extracted from the point cloud, and then, based on consecutive point clouds motion parameters can be estimated. Acquiring accurate aircraft trajectory data is essential to improve aviation safety at airports. This paper reports about the initial experiences obtained by using a network of four Velodyne VLP- 16 sensors to acquire data along a runway segment.
Contrasting cloud composition between coupled and decoupled marine boundary layer clouds
NASA Astrophysics Data System (ADS)
Wang, Zhen; Mora Ramirez, Marco; Dadashazar, Hossein; MacDonald, Alex B.; Crosbie, Ewan; Bates, Kelvin H.; Coggon, Matthew M.; Craven, Jill S.; Lynch, Peng; Campbell, James R.; Azadi Aghdam, Mojtaba; Woods, Roy K.; Jonsson, Haflidi; Flagan, Richard C.; Seinfeld, John H.; Sorooshian, Armin
2016-10-01
Marine stratocumulus clouds often become decoupled from the vertical layer immediately above the ocean surface. This study contrasts cloud chemical composition between coupled and decoupled marine stratocumulus clouds for dissolved nonwater substances. Cloud water and droplet residual particle composition were measured in clouds off the California coast during three airborne experiments in July-August of separate years (Eastern Pacific Emitted Aerosol Cloud Experiment 2011, Nucleation in California Experiment 2013, and Biological and Oceanic Atmospheric Study 2015). Decoupled clouds exhibited significantly lower air-equivalent mass concentrations in both cloud water and droplet residual particles, consistent with reduced cloud droplet number concentration and subcloud aerosol (Dp > 100 nm) number concentration, owing to detachment from surface sources. Nonrefractory submicrometer aerosol measurements show that coupled clouds exhibit higher sulfate mass fractions in droplet residual particles, owing to more abundant precursor emissions from the ocean and ships. Consequently, decoupled clouds exhibited higher mass fractions of organics, nitrate, and ammonium in droplet residual particles, owing to effects of long-range transport from more distant sources. Sodium and chloride dominated in terms of air-equivalent concentration in cloud water for coupled clouds, and their mass fractions and concentrations exceeded those in decoupled clouds. Conversely, with the exception of sea-salt constituents (e.g., Cl, Na, Mg, and K), cloud water mass fractions of all species examined were higher in decoupled clouds relative to coupled clouds. Satellite and Navy Aerosol Analysis and Prediction System-based reanalysis data are compared with each other, and the airborne data to conclude that limitations in resolving boundary layer processes in a global model prevent it from accurately quantifying observed differences between coupled and decoupled cloud composition.
An Efficient Method for Automatic Road Extraction Based on Multiple Features from LiDAR Data
NASA Astrophysics Data System (ADS)
Li, Y.; Hu, X.; Guan, H.; Liu, P.
2016-06-01
The road extraction in urban areas is difficult task due to the complicated patterns and many contextual objects. LiDAR data directly provides three dimensional (3D) points with less occlusions and smaller shadows. The elevation information and surface roughness are distinguishing features to separate roads. However, LiDAR data has some disadvantages are not beneficial to object extraction, such as the irregular distribution of point clouds and lack of clear edges of roads. For these problems, this paper proposes an automatic road centerlines extraction method which has three major steps: (1) road center point detection based on multiple feature spatial clustering for separating road points from ground points, (2) local principal component analysis with least squares fitting for extracting the primitives of road centerlines, and (3) hierarchical grouping for connecting primitives into complete roads network. Compared with MTH (consist of Mean shift algorithm, Tensor voting, and Hough transform) proposed in our previous article, this method greatly reduced the computational cost. To evaluate the proposed method, the Vaihingen data set, a benchmark testing data provided by ISPRS for "Urban Classification and 3D Building Reconstruction" project, was selected. The experimental results show that our method achieve the same performance by less time in road extraction using LiDAR data.
Environmental conditions regulate the impact of plants on cloud formation
Zhao, D. F.; Buchholz, A.; Tillmann, R.; Kleist, E.; Wu, C.; Rubach, F.; Kiendler-Scharr, A.; Rudich, Y.; Wildt, J.; Mentel, Th. F.
2017-01-01
The terrestrial vegetation emits large amounts of volatile organic compounds (VOC) into the atmosphere, which on oxidation produce secondary organic aerosol (SOA). By acting as cloud condensation nuclei (CCN), SOA influences cloud formation and climate. In a warming climate, changes in environmental factors can cause stresses to plants, inducing changes of the emitted VOC. These can modify particle size and composition. Here we report how induced emissions eventually affect CCN activity of SOA, a key parameter in cloud formation. For boreal forest tree species, insect infestation by aphids causes additional VOC emissions which modifies SOA composition thus hygroscopicity and CCN activity. Moderate heat increases the total amount of constitutive VOC, which has a minor effect on hygroscopicity, but affects CCN activity by increasing the particles' size. The coupling of plant stresses, VOC composition and CCN activity points to an important impact of induced plant emissions on cloud formation and climate. PMID:28218253
Environmental conditions regulate the impact of plants on cloud formation.
Zhao, D F; Buchholz, A; Tillmann, R; Kleist, E; Wu, C; Rubach, F; Kiendler-Scharr, A; Rudich, Y; Wildt, J; Mentel, Th F
2017-02-20
The terrestrial vegetation emits large amounts of volatile organic compounds (VOC) into the atmosphere, which on oxidation produce secondary organic aerosol (SOA). By acting as cloud condensation nuclei (CCN), SOA influences cloud formation and climate. In a warming climate, changes in environmental factors can cause stresses to plants, inducing changes of the emitted VOC. These can modify particle size and composition. Here we report how induced emissions eventually affect CCN activity of SOA, a key parameter in cloud formation. For boreal forest tree species, insect infestation by aphids causes additional VOC emissions which modifies SOA composition thus hygroscopicity and CCN activity. Moderate heat increases the total amount of constitutive VOC, which has a minor effect on hygroscopicity, but affects CCN activity by increasing the particles' size. The coupling of plant stresses, VOC composition and CCN activity points to an important impact of induced plant emissions on cloud formation and climate.
Detailed Hydrographic Feature Extraction from High-Resolution LiDAR Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Danny L. Anderson
Detailed hydrographic feature extraction from high-resolution light detection and ranging (LiDAR) data is investigated. Methods for quantitatively evaluating and comparing such extractions are presented, including the use of sinuosity and longitudinal root-mean-square-error (LRMSE). These metrics are then used to quantitatively compare stream networks in two studies. The first study examines the effect of raster cell size on watershed boundaries and stream networks delineated from LiDAR-derived digital elevation models (DEMs). The study confirmed that, with the greatly increased resolution of LiDAR data, smaller cell sizes generally yielded better stream network delineations, based on sinuosity and LRMSE. The second study demonstrates amore » new method of delineating a stream directly from LiDAR point clouds, without the intermediate step of deriving a DEM. Direct use of LiDAR point clouds could improve efficiency and accuracy of hydrographic feature extractions. The direct delineation method developed herein and termed “mDn”, is an extension of the D8 method that has been used for several decades with gridded raster data. The method divides the region around a starting point into sectors, using the LiDAR data points within each sector to determine an average slope, and selecting the sector with the greatest downward slope to determine the direction of flow. An mDn delineation was compared with a traditional grid-based delineation, using TauDEM, and other readily available, common stream data sets. Although, the TauDEM delineation yielded a sinuosity that more closely matches the reference, the mDn delineation yielded a sinuosity that was higher than either the TauDEM method or the existing published stream delineations. Furthermore, stream delineation using the mDn method yielded the smallest LRMSE.« less
NASA Astrophysics Data System (ADS)
Michoud, Clément; Carrea, Dario; Augereau, Emmanuel; Cancouët, Romain; Costa, Stéphane; Davidson, Robert; Delacourt, Chirstophe; Derron, Marc-Henri; Jaboyedoff, Michel; Letortu, Pauline; Maquaire, Olivier
2013-04-01
Dieppe coastal cliffs, in Normandy, France, are mainly formed by sub-horizontal deposits of chalk and flintstone. Largely destabilized by an intense weathering and the Channel sea erosion, small and large rockfalls are regularly observed and contribute to retrogressive cliff processes. During autumn 2012, cliff and intertidal topographies have been acquired with a Terrestrial Laser Scanner (TLS) and a Mobile Laser Scanner (MLS), coupled with seafloor bathymetries realized with a multibeam echosounder (MBES). MLS is a recent development of laser scanning based on the same theoretical principles of aerial LiDAR, but using smaller, cheaper and portable devices. The MLS system, which is composed by an accurate dynamic positioning and orientation (INS) devices and a long range LiDAR, is mounted on a marine vessel; it is then possible to quickly acquire in motion georeferenced LiDAR point clouds with a resolution of about 15 cm. For example, it takes about 1 h to scan of shoreline of 2 km long. MLS is becoming a promising technique supporting erosion and rockfall assessments along the shores of lakes, fjords or seas. In this study, the MLS system used to acquire cliffs and intertidal areas of the Cap d'Ailly was composed by the INS Applanix POS-MV 320 V4 and the LiDAR Optech Ilirs LR. On the same day, three MLS scans with large overlaps (J1, J21 and J3) have been performed at ranges from 600 m at 4 knots (low tide) up to 200 m at 2.2 knots (up tide) with a calm sea at 2.5 Beaufort (small wavelets). Mean scan resolutions go from 26 cm for far scan (J1) to about 8.1 cm for close scan (J3). Moreover, one TLS point cloud on this test site has been acquired with a mean resolution of about 2.3 cm, using a Riegl LMS Z390i. In order to quantify the reliability of the methodology, comparisons between scans have been realized with the software Polyworks™, calculating shortest distances between points of one cloud and the interpolated surface of the reference point cloud. A MatLab™ routine was also written to extract interesting statistics. First, mean distances between points of the reference point clouds (J21) and its interpolated surface are about 0.35 cm with a standard deviation of 15 cm; errors introduced during the surface interpolation step, especially in vegetated areas, may explain those differences. Then, mean distances between J1's points (resp. J3) and the J21's reference surface are about 4 cm (resp. -17 cm) with a standard deviation of 53 cm (resp. 55 cm). After a best fit alignment of J1 and J3 on J21, mean distances between J1 (resp. J3) and the J21's reference surface decrease to about 0.15 cm (resp. 1.6 cm) with a standard deviation of 41 cm (resp. 21 cm). Finally, mean distances between the TLS point clouds and the J21's reference surface are about 3.2 cm with a standard deviation of 26 cm. In conclusion, MLS devices are able to quickly scan long shoreline with a resolution up to about 10 cm. The precision of the acquired data is relatively small enough to investigate on geomorphological features of coastal cliffs. The ability of the MLS technique to detect and monitor small and large rockfalls will be investigated thanks to new acquisitions of the Dieppe cliffs in a close future and enhanced adapted post-processing steps.
NASA Astrophysics Data System (ADS)
Büsing, Susanna; Guerin, Antoine; Derron, Marc-Henri; Jaboyedoff, Michel; Phillips, Marcia
2016-04-01
The study of permafrost is now attracting more and more researchers because the warming observed in the Alps since the beginning of last century is causing changes in active layer depth and in the thermal state of this climate indicator. In mountain regions, permafrost degradation is becoming critical for the whole population since slopes and rock walls are being destabilized, thus increasing risk for infrastructure and inhabitants of mountain valleys. To anticipate the triggering of future events better, it is necessary to improve understanding on the relation between permafrost thaw and slope instabilities. A rockfall of about 7000 m3 occurred in the upper part of the southeast face of the Piz Lischana (3105 m), in the Engadin Valley (Graubünden, Switzerland) around noon on 31 July 2011. Luckily, this event was filmed and ice could be observed on the failure plane after analysis of the images. In September 2014 and in the same area, another rockfall of 2340 m3 occurred along a prominent open fracture which was apparent since the failure of the rock mass in 2011. In order to characterize and analyze these two events, three 3D high density point clouds have been made using Structure from Motion (SfM) and LiDAR, one before and two after the September 2014 rockfall. For this purpose, 120 photos were taken during a helicopter flight in July 2014 to produce the first SfM point cloud, and more than 400 terrestrial photos were taken at the end of September to produce the second SfM point cloud. In July 2015 a third point cloud was created from three LiDAR scans, taken from two different positions. The point clouds were georeferenced with a 2 m resolution digital elevation model and compared to each other in order to calculate the volume of the rockfalls. A detailed structural analysis of the two rockfalls was made and compared to the geological structures of the whole southeast face. The structural analysis also allowed to improve the understanding of the failure mechanisms of the past events and to better assess the probability of future rockfalls. Furthermore, valuable information about the velocity of the failure mechanisms could be extracted from the July 2011 video, using a Particle Image Velocimetry method (Matlab script developed by Thielicke and Stamhuis, 2014). These results, combined with analyses of potential triggering factors (permafrost, freeze-thaw cycles, thermomechanical processes, rainfall, radiation, glacier decompression and seismics) show that many of them contributed towards destabilization. It seems that the "special" structural situation led to the failure of Piz Lischana, but it also highlights the influence of permafrost. This study also provided the opportunity to perform a comparison of both LiDAR - SfM. The point clouds have been analyzed regarding their general quality, the quality of their meshes, the quantity of instrumental noise, the point density of different discontinuities, the structural analysis and kinematic tests. Results show the SfM also allows detailed structural analysis and that a good choice of the parameters allows to approach the quality of the LiDAR data. However, several factors (focal length, variation of distance to object, image resolution) may increase the uncertainty of the photo alignment. This study confirms that the coupling of the two techniques is possible and provides reliable results. This shows that SfM is one of the possible cheap methods to monitor rock summits that are subject to permafrost thaw.
Accuracy Analysis of a Dam Model from Drone Surveys
Buffi, Giulia; Venturi, Sara
2017-01-01
This paper investigates the accuracy of models obtained by drone surveys. To this end, this work analyzes how the placement of ground control points (GCPs) used to georeference the dense point cloud of a dam affects the resulting three-dimensional (3D) model. Images of a double arch masonry dam upstream face are acquired from drone survey and used to build the 3D model of the dam for vulnerability analysis purposes. However, there still remained the issue of understanding the real impact of a correct GCPs location choice to properly georeference the images and thus, the model. To this end, a high number of GCPs configurations were investigated, building a series of dense point clouds. The accuracy of these resulting dense clouds was estimated comparing the coordinates of check points extracted from the model and their true coordinates measured via traditional topography. The paper aims at providing information about the optimal choice of GCPs placement not only for dams but also for all surveys of high-rise structures. The knowledge a priori of the effect of the GCPs number and location on the model accuracy can increase survey reliability and accuracy and speed up the survey set-up operations. PMID:28771185
Accuracy Analysis of a Dam Model from Drone Surveys.
Ridolfi, Elena; Buffi, Giulia; Venturi, Sara; Manciola, Piergiorgio
2017-08-03
This paper investigates the accuracy of models obtained by drone surveys. To this end, this work analyzes how the placement of ground control points (GCPs) used to georeference the dense point cloud of a dam affects the resulting three-dimensional (3D) model. Images of a double arch masonry dam upstream face are acquired from drone survey and used to build the 3D model of the dam for vulnerability analysis purposes. However, there still remained the issue of understanding the real impact of a correct GCPs location choice to properly georeference the images and thus, the model. To this end, a high number of GCPs configurations were investigated, building a series of dense point clouds. The accuracy of these resulting dense clouds was estimated comparing the coordinates of check points extracted from the model and their true coordinates measured via traditional topography. The paper aims at providing information about the optimal choice of GCPs placement not only for dams but also for all surveys of high-rise structures. The knowledge a priori of the effect of the GCPs number and location on the model accuracy can increase survey reliability and accuracy and speed up the survey set-up operations.
NASA Astrophysics Data System (ADS)
Cantrell, W. H.; Chandrakar, K. K.; Karki, S.; Kinney, G.; Shaw, R.
2017-12-01
Many of the climate impacts of boundary layer clouds are modulated by aerosol particles. As two examples, their interactions with incoming solar and upwelling terrestrial radiation and their propensity for precipitation are both governed by the population of aerosol particles upon which the cloud droplets formed. In turn, clouds are the primary removal mechanism for aerosol particles smaller than a few micrometers and larger than a few nanometers. Aspects of these interconnected phenomena are known in exquisite detail (e.g. Köhler theory), but other parts have not been as amenable to study in the laboratory (e.g. scavenging of aerosol particles by cloud droplets). As a complicating factor, boundary layer clouds are ubiquitously turbulent, which introduces fluctuations in the water vapor concentration and temperature, which govern the saturation ratio which mediates aerosol-cloud interactions. We have performed laboratory measurements of aerosol-cloud coupling and feedbacks, using Michigan Tech's Pi Chamber (Chang et al., 2016). In conditions representative of boundary layer clouds, our data suggest that the lifetime of most interstitial particles in the accumulation mode is governed by cloud activation - particles are removed from the Pi Chamber when they activate and settle out of the chamber as cloud droplets. As cloud droplets are removed, these interstitial particles activate until the initially polluted cloud cleans itself and all particulates are removed from the chamber. At that point, the cloud collapses. Our data also indicate that smaller particles, Dp < ˜ 20 nm are not activated, but are instead removed through diffusion, enhanced by the fact that droplets are moving relative to the suspended aerosol. I will discuss results from both warm (i.e. liquid water only) and mixed phase clouds, showing that cloud and aerosol properties are coupled through fluctuations in the supersaturation, and that threshold behaviors can be defined through the use of the Dämkohler number, the ratio of the characteristic turbulence timescale to the cloud's microphysical response time. Chang, K., et al., 2016. A laboratory facility to study gas-aerosol-cloud interactions in a turbulent environment: The Π Chamber. Bull. Amer. Meteor. Soc., doi:10.1175/BAMS-D-15-00203.1
Analysis, Thematic Maps and Data Mining from Point Cloud to Ontology for Software Development
NASA Astrophysics Data System (ADS)
Nespeca, R.; De Luca, L.
2016-06-01
The primary purpose of the survey for the restoration of Cultural Heritage is the interpretation of the state of building preservation. For this, the advantages of the remote sensing systems that generate dense point cloud (range-based or image-based) are not limited only to the acquired data. The paper shows that it is possible to extrapolate very useful information in diagnostics using spatial annotation, with the use of algorithms already implemented in open-source software. Generally, the drawing of degradation maps is the result of manual work, so dependent on the subjectivity of the operator. This paper describes a method of extraction and visualization of information, obtained by mathematical procedures, quantitative, repeatable and verifiable. The case study is a part of the east facade of the Eglise collégiale Saint-Maurice also called Notre Dame des Grâces, in Caromb, in southern France. The work was conducted on the matrix of information contained in the point cloud asci format. The first result is the extrapolation of new geometric descriptors. First, we create the digital maps with the calculated quantities. Subsequently, we have moved to semi-quantitative analyses that transform new data into useful information. We have written the algorithms for accurate selection, for the segmentation of point cloud, for automatic calculation of the real surface and the volume. Furthermore, we have created the graph of spatial distribution of the descriptors. This work shows that if we work during the data processing we can transform the point cloud into an enriched database: the use, the management and the data mining is easy, fast and effective for everyone involved in the restoration process.
Single shot laser speckle based 3D acquisition system for medical applications
NASA Astrophysics Data System (ADS)
Khan, Danish; Shirazi, Muhammad Ayaz; Kim, Min Young
2018-06-01
The state of the art techniques used by medical practitioners to extract the three-dimensional (3D) geometry of different body parts requires a series of images/frames such as laser line profiling or structured light scanning. Movement of the patients during scanning process often leads to inaccurate measurements due to sequential image acquisition. Single shot structured techniques are robust to motion but the prevalent challenges in single shot structured light methods are the low density and algorithm complexity. In this research, a single shot 3D measurement system is presented that extracts the 3D point cloud of human skin by projecting a laser speckle pattern using a single pair of images captured by two synchronized cameras. In contrast to conventional laser speckle 3D measurement systems that realize stereo correspondence by digital correlation of projected speckle patterns, the proposed system employs KLT tracking method to locate the corresponding points. The 3D point cloud contains no outliers and sufficient quality of 3D reconstruction is achieved. The 3D shape acquisition of human body parts validates the potential application of the proposed system in the medical industry.
Min-Cut Based Segmentation of Airborne LIDAR Point Clouds
NASA Astrophysics Data System (ADS)
Ural, S.; Shan, J.
2012-07-01
Introducing an organization to the unstructured point cloud before extracting information from airborne lidar data is common in many applications. Aggregating the points with similar features into segments in 3-D which comply with the nature of actual objects is affected by the neighborhood, scale, features and noise among other aspects. In this study, we present a min-cut based method for segmenting the point cloud. We first assess the neighborhood of each point in 3-D by investigating the local geometric and statistical properties of the candidates. Neighborhood selection is essential since point features are calculated within their local neighborhood. Following neighborhood determination, we calculate point features and determine the clusters in the feature space. We adapt a graph representation from image processing which is especially used in pixel labeling problems and establish it for the unstructured 3-D point clouds. The edges of the graph that are connecting the points with each other and nodes representing feature clusters hold the smoothness costs in the spatial domain and data costs in the feature domain. Smoothness costs ensure spatial coherence, while data costs control the consistency with the representative feature clusters. This graph representation formalizes the segmentation task as an energy minimization problem. It allows the implementation of an approximate solution by min-cuts for a global minimum of this NP hard minimization problem in low order polynomial time. We test our method with airborne lidar point cloud acquired with maximum planned post spacing of 1.4 m and a vertical accuracy 10.5 cm as RMSE. We present the effects of neighborhood and feature determination in the segmentation results and assess the accuracy and efficiency of the implemented min-cut algorithm as well as its sensitivity to the parameters of the smoothness and data cost functions. We find that smoothness cost that only considers simple distance parameter does not strongly conform to the natural structure of the points. Including shape information within the energy function by assigning costs based on the local properties may help to achieve a better representation for segmentation.
Time Series UAV Image-Based Point Clouds for Landslide Progression Evaluation Applications
Moussa, Adel; El-Sheimy, Naser; Habib, Ayman
2017-01-01
Landslides are major and constantly changing threats to urban landscapes and infrastructure. It is essential to detect and capture landslide changes regularly. Traditional methods for monitoring landslides are time-consuming, costly, dangerous, and the quality and quantity of the data is sometimes unable to meet the necessary requirements of geotechnical projects. This motivates the development of more automatic and efficient remote sensing approaches for landslide progression evaluation. Automatic change detection involving low-altitude unmanned aerial vehicle image-based point clouds, although proven, is relatively unexplored, and little research has been done in terms of accounting for volumetric changes. In this study, a methodology for automatically deriving change displacement rates, in a horizontal direction based on comparisons between extracted landslide scarps from multiple time periods, has been developed. Compared with the iterative closest projected point (ICPP) registration method, the developed method takes full advantage of automated geometric measuring, leading to fast processing. The proposed approach easily processes a large number of images from different epochs and enables the creation of registered image-based point clouds without the use of extensive ground control point information or further processing such as interpretation and image correlation. The produced results are promising for use in the field of landslide research. PMID:29057847
Time Series UAV Image-Based Point Clouds for Landslide Progression Evaluation Applications.
Al-Rawabdeh, Abdulla; Moussa, Adel; Foroutan, Marzieh; El-Sheimy, Naser; Habib, Ayman
2017-10-18
Landslides are major and constantly changing threats to urban landscapes and infrastructure. It is essential to detect and capture landslide changes regularly. Traditional methods for monitoring landslides are time-consuming, costly, dangerous, and the quality and quantity of the data is sometimes unable to meet the necessary requirements of geotechnical projects. This motivates the development of more automatic and efficient remote sensing approaches for landslide progression evaluation. Automatic change detection involving low-altitude unmanned aerial vehicle image-based point clouds, although proven, is relatively unexplored, and little research has been done in terms of accounting for volumetric changes. In this study, a methodology for automatically deriving change displacement rates, in a horizontal direction based on comparisons between extracted landslide scarps from multiple time periods, has been developed. Compared with the iterative closest projected point (ICPP) registration method, the developed method takes full advantage of automated geometric measuring, leading to fast processing. The proposed approach easily processes a large number of images from different epochs and enables the creation of registered image-based point clouds without the use of extensive ground control point information or further processing such as interpretation and image correlation. The produced results are promising for use in the field of landslide research.
Geometric identification and damage detection of structural elements by terrestrial laser scanner
NASA Astrophysics Data System (ADS)
Hou, Tsung-Chin; Liu, Yu-Wei; Su, Yu-Min
2016-04-01
In recent years, three-dimensional (3D) terrestrial laser scanning technologies with higher precision and higher capability are developing rapidly. The growing maturity of laser scanning has gradually approached the required precision as those have been provided by traditional structural monitoring technologies. Together with widely available fast computation for massive point cloud data processing, 3D laser scanning can serve as an efficient structural monitoring alternative for civil engineering communities. Currently most research efforts have focused on integrating/calculating the measured multi-station point cloud data, as well as modeling/establishing the 3D meshes of the scanned objects. Very little attention has been spent on extracting the information related to health conditions and mechanical states of structures. In this study, an automated numerical approach that integrates various existing algorithms for geometric identification and damage detection of structural elements were established. Specifically, adaptive meshes were employed for classifying the point cloud data of the structural elements, and detecting the associated damages from the calculated eigenvalues in each area of the structural element. Furthermore, kd-tree was used to enhance the searching efficiency of plane fitting which were later used for identifying the boundaries of structural elements. The results of geometric identification were compared with M3C2 algorithm provided by CloudCompare, as well as validated by LVDT measurements of full-scale reinforced concrete beams tested in laboratory. It shows that 3D laser scanning, through the established processing approaches of the point cloud data, can offer a rapid, nondestructive, remote, and accurate solution for geometric identification and damage detection of structural elements.
Building Facade Reconstruction by Fusing Terrestrial Laser Points and Images
Pu, Shi; Vosselman, George
2009-01-01
Laser data and optical data have a complementary nature for three dimensional feature extraction. Efficient integration of the two data sources will lead to a more reliable and automated extraction of three dimensional features. This paper presents a semiautomatic building facade reconstruction approach, which efficiently combines information from terrestrial laser point clouds and close range images. A building facade's general structure is discovered and established using the planar features from laser data. Then strong lines in images are extracted using Canny extractor and Hough transformation, and compared with current model edges for necessary improvement. Finally, textures with optimal visibility are selected and applied according to accurate image orientations. Solutions to several challenge problems throughout the collaborated reconstruction, such as referencing between laser points and multiple images and automated texturing, are described. The limitations and remaining works of this approach are also discussed. PMID:22408539
NASA Astrophysics Data System (ADS)
Dogon-yaro, M. A.; Kumar, P.; Rahman, A. Abdul; Buyuksalih, G.
2016-10-01
Timely and accurate acquisition of information on the condition and structural changes of urban trees serves as a tool for decision makers to better appreciate urban ecosystems and their numerous values which are critical to building up strategies for sustainable development. The conventional techniques used for extracting tree features include; ground surveying and interpretation of the aerial photography. However, these techniques are associated with some constraint, such as labour intensive field work, a lot of financial requirement, influences by weather condition and topographical covers which can be overcome by means of integrated airborne based LiDAR and very high resolution digital image datasets. This study presented a semi-automated approach for extracting urban trees from integrated airborne based LIDAR and multispectral digital image datasets over Istanbul city of Turkey. The above scheme includes detection and extraction of shadow free vegetation features based on spectral properties of digital images using shadow index and NDVI techniques and automated extraction of 3D information about vegetation features from the integrated processing of shadow free vegetation image and LiDAR point cloud datasets. The ability of the developed algorithms shows a promising result as an automated and cost effective approach to estimating and delineated 3D information of urban trees. The research also proved that integrated datasets is a suitable technology and a viable source of information for city managers to be used in urban trees management.
Zhao, Jiao; Lu, Yunhui; Fan, Chongyang; Wang, Jun; Yang, Yaling
2015-02-05
A novel and simple method for the sensitive determination of trace amounts of nitrite in human urine and blood has been developed by combination of cloud point extraction (CPE) and microplate assay. The method is based on the Griess reaction and the reaction product is extracted into nonionic surfactant Triton-X114 using CPE technique. In this study, decolorization treatment of urine and blood was applied to overcome the interference of matrix and enhance the sensitivity of nitrite detection. Multi-sample can be simultaneously detected thanks to a 96-well microplate technique. The effects of different operating parameters such as type of decolorizing agent, concentration of surfactant (Triton X-114), addition of (NH4)2SO4, extraction temperature and time, interfering elements were studied and optimum conditions were obtained. Under the optimum conditions, a linear calibration graph was obtained in the range of 10-400 ng mL(-1) of nitrite with limit of detection (LOD) of 2.5 ng mL(-1). The relative standard deviation (RSD) for determination of 100 ng mL(-1) of nitrite was 2.80%. The proposed method was successfully applied for the determination of nitrite in the urine and blood samples with recoveries of 92.6-101.2%. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Marques, Luís.; Roca Cladera, Josep; Tenedório, José António
2017-10-01
The use of multiple sets of images with high level of overlapping to extract 3D point clouds has increased progressively in recent years. There are two main fundamental factors in the origin of this progress. In first, the image matching algorithms has been optimised and the software available that supports the progress of these techniques has been constantly developed. In second, because of the emergent paradigm of smart cities which has been promoting the virtualization of urban spaces and their elements. The creation of 3D models for urban elements is extremely relevant for urbanists to constitute digital archives of urban elements and being especially useful for enrich maps and databases or reconstruct and analyse objects/areas through time, building and recreating scenarios and implementing intuitive methods of interaction. These characteristics assist, for example, higher public participation creating a completely collaborative solution system, envisioning processes, simulations and results. This paper is organized in two main topics. The first deals with technical data modelling obtained by terrestrial photographs: planning criteria for obtaining photographs, approving or rejecting photos based on their quality, editing photos, creating masks, aligning photos, generating tie points, extracting point clouds, generating meshes, building textures and exporting results. The application of these procedures results in 3D models for the visualization of urban elements of the city of Barcelona. The second concerns the use of Augmented Reality through mobile platforms allowing to understand the city origins and the relation with the actual city morphology, (en)visioning solutions, processes and simulations, making possible for the agents in several domains, to fundament their decisions (and understand them) achieving a faster and wider consensus.
First Steps to Automated Interior Reconstruction from Semantically Enriched Point Clouds and Imagery
NASA Astrophysics Data System (ADS)
Obrock, L. S.; Gülch, E.
2018-05-01
The automated generation of a BIM-Model from sensor data is a huge challenge for the modeling of existing buildings. Currently the measurements and analyses are time consuming, allow little automation and require expensive equipment. We do lack an automated acquisition of semantical information of objects in a building. We are presenting first results of our approach based on imagery and derived products aiming at a more automated modeling of interior for a BIM building model. We examine the building parts and objects visible in the collected images using Deep Learning Methods based on Convolutional Neural Networks. For localization and classification of building parts we apply the FCN8s-Model for pixel-wise Semantic Segmentation. We, so far, reach a Pixel Accuracy of 77.2 % and a mean Intersection over Union of 44.2 %. We finally use the network for further reasoning on the images of the interior room. We combine the segmented images with the original images and use photogrammetric methods to produce a three-dimensional point cloud. We code the extracted object types as colours of the 3D-points. We thus are able to uniquely classify the points in three-dimensional space. We preliminary investigate a simple extraction method for colour and material of building parts. It is shown, that the combined images are very well suited to further extract more semantic information for the BIM-Model. With the presented methods we see a sound basis for further automation of acquisition and modeling of semantic and geometric information of interior rooms for a BIM-Model.
Orientation of airborne laser scanning point clouds with multi-view, multi-scale image blocks.
Rönnholm, Petri; Hyyppä, Hannu; Hyyppä, Juha; Haggrén, Henrik
2009-01-01
Comprehensive 3D modeling of our environment requires integration of terrestrial and airborne data, which is collected, preferably, using laser scanning and photogrammetric methods. However, integration of these multi-source data requires accurate relative orientations. In this article, two methods for solving relative orientation problems are presented. The first method includes registration by minimizing the distances between of an airborne laser point cloud and a 3D model. The 3D model was derived from photogrammetric measurements and terrestrial laser scanning points. The first method was used as a reference and for validation. Having completed registration in the object space, the relative orientation between images and laser point cloud is known. The second method utilizes an interactive orientation method between a multi-scale image block and a laser point cloud. The multi-scale image block includes both aerial and terrestrial images. Experiments with the multi-scale image block revealed that the accuracy of a relative orientation increased when more images were included in the block. The orientations of the first and second methods were compared. The comparison showed that correct rotations were the most difficult to detect accurately by using the interactive method. Because the interactive method forces laser scanning data to fit with the images, inaccurate rotations cause corresponding shifts to image positions. However, in a test case, in which the orientation differences included only shifts, the interactive method could solve the relative orientation of an aerial image and airborne laser scanning data repeatedly within a couple of centimeters.
Orientation of Airborne Laser Scanning Point Clouds with Multi-View, Multi-Scale Image Blocks
Rönnholm, Petri; Hyyppä, Hannu; Hyyppä, Juha; Haggrén, Henrik
2009-01-01
Comprehensive 3D modeling of our environment requires integration of terrestrial and airborne data, which is collected, preferably, using laser scanning and photogrammetric methods. However, integration of these multi-source data requires accurate relative orientations. In this article, two methods for solving relative orientation problems are presented. The first method includes registration by minimizing the distances between of an airborne laser point cloud and a 3D model. The 3D model was derived from photogrammetric measurements and terrestrial laser scanning points. The first method was used as a reference and for validation. Having completed registration in the object space, the relative orientation between images and laser point cloud is known. The second method utilizes an interactive orientation method between a multi-scale image block and a laser point cloud. The multi-scale image block includes both aerial and terrestrial images. Experiments with the multi-scale image block revealed that the accuracy of a relative orientation increased when more images were included in the block. The orientations of the first and second methods were compared. The comparison showed that correct rotations were the most difficult to detect accurately by using the interactive method. Because the interactive method forces laser scanning data to fit with the images, inaccurate rotations cause corresponding shifts to image positions. However, in a test case, in which the orientation differences included only shifts, the interactive method could solve the relative orientation of an aerial image and airborne laser scanning data repeatedly within a couple of centimeters. PMID:22454569
Stability of Molasse: TLS for structural analysis in the valley of Gotteron-Fribourg, Switzerland
NASA Astrophysics Data System (ADS)
Ben Hammouda, Mariam; Jaboyedoff, Michel; Derron, Marc Henri; Bouaziz, Samir; Mazotti, Benoit
2016-04-01
The marine molasses of Fribourg (Switzerland) is an area where the cliff collapses and rockfalls are quite frequent and difficult to predict due to this particular lithology, a poorly consolidated greywacke. Because of some recent rockfall events, the situation became critical especially in the valley of Gotteron where a big block has slightly moved down and might destroy a house in case of rupture. The cliff made of jointed sandstone and thin layers of clay and siltstone presents many fractures, joints and massive cross bedding surfaces which increases the possibility of slab failure. This paper presents a detailed structural analysis of the cliff and the identification of the potential failure mechanisms. The methodology is about combining field observation and terrestrial LiDAR scanning point cloud in order to assess the stability of potential slope instabilities of molasses. Three LiDAR scans were done i) to extract discontinuity families depending to the dip and the dip direction of joints and ii) to run kinematic tests in order to identify responsible sets for each potential failure mechanisms. Raw point clouds were processed using IMAlign module of Polyworks and CloudCompare software. The structural analysis based on COLTOP 3D (Jaboyedoff et al. 2007) allowed the identification of four discontinuity sets that were not measured in the field. Two different failure mechanisms have been identified as critical: i) planar sliding which is the main responsible mechanism of the present fallen block and ii) wedge sliding. The planar sliding is defined by the discontinuity sets J1 and J5 with a direction parallel to the slope and with a steep dip angle. The wedges, defined by couples of discontinuity sets, contribute to increase cracks' opening and to the detachment of slabs. The use of TLS combined with field survey provides us a first interpretation of instabilities and a very promising structural analysis.
Fast rockfall hazard assessment along a road section using the new LYNX Mobile Mapper Lidar
NASA Astrophysics Data System (ADS)
Dario, Carrea; Celine, Longchamp; Michel, Jaboyedoff; Marc, Choffet; Marc-Henri, Derron; Clement, Michoud; Andrea, Pedrazzini; Dario, Conforti; Michael, Leslar; William, Tompkinson
2010-05-01
The terrestrial laser scanning (TLS) is an active remote sensing technique providing high resolution point clouds of the topography. The high resolution digital elevations models (HRDEM) derived of these point clouds are an important tool for the stability analysis of slopes. The LYNX Mobile Mapper is a new TLS generation developed by Optech. Its particularity is to be mounted on a vehicle and providing a 360° high density point cloud at 200-khz measurement rate in a very short acquisition time. It is composed of two sensors improving the resolution and reducing the laser shadowing. The spatial resolution is better than 10 cm at 10 m range and at a velocity of 50 km/h and the reflectivity of the signal is around 20% at a distance of 200 m. The Lidar is also equipped with a DGPS and an inertial measurement unit (IMU) which gives real time position and georeferences directly the point cloud. Thanks to its ability to provide a continuous data set from an extended area along a road, this TLS system is useful for rockfall hazard assessment. In addition, this new scanner decrease considerably the time spent in the field and the postprocessing is reduced thanks to resultant georeferenced data. Nevertheless, its application is limited to an area close to the road. The LYNX has been tested near Pontarlier (France) along roads sections affected by rockfall. Regarding to the tectonic context, the studied area is located in the Folded Jura mainly composed of limestone. The result is a very detailed point cloud with a point spacing of 4 cm. The LYNX presents detailed topography on which a structural analysis has been carried out using COLTOP-3D. It allows obtaining a full structural description along the road. In addition, kinematic tests coupled with probabilistic analysis give a susceptibility map of the road cut or natural cliffs above the road. Comparisons with field survey confirm the Lidar approach.
Current trends in sample preparation for cosmetic analysis.
Zhong, Zhixiong; Li, Gongke
2017-01-01
The widespread applications of cosmetics in modern life make their analysis particularly important from a safety point of view. There is a wide variety of restricted ingredients and prohibited substances that primarily influence the safety of cosmetics. Sample preparation for cosmetic analysis is a crucial step as the complex matrices may seriously interfere with the determination of target analytes. In this review, some new developments (2010-2016) in sample preparation techniques for cosmetic analysis, including liquid-phase microextraction, solid-phase microextraction, matrix solid-phase dispersion, pressurized liquid extraction, cloud point extraction, ultrasound-assisted extraction, and microwave digestion, are presented. Furthermore, the research and progress in sample preparation techniques and their applications in the separation and purification of allowed ingredients and prohibited substances are reviewed. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
An Indoor Slam Method Based on Kinect and Multi-Feature Extended Information Filter
NASA Astrophysics Data System (ADS)
Chang, M.; Kang, Z.
2017-09-01
Based on the frame of ORB-SLAM in this paper the transformation parameters between adjacent Kinect image frames are computed using ORB keypoints, from which priori information matrix and information vector are calculated. The motion update of multi-feature extended information filter is then realized. According to the point cloud data formed by depth image, ICP algorithm was used to extract the point features of the point cloud data in the scene and built an observation model while calculating a-posteriori information matrix and information vector, and weakening the influences caused by the error accumulation in the positioning process. Furthermore, this paper applied ORB-SLAM frame to realize autonomous positioning in real time in interior unknown environment. In the end, Lidar was used to get data in the scene in order to estimate positioning accuracy put forward in this paper.
Laser-based structural sensing and surface damage detection
NASA Astrophysics Data System (ADS)
Guldur, Burcu
Damage due to age or accumulated damage from hazards on existing structures poses a worldwide problem. In order to evaluate the current status of aging, deteriorating and damaged structures, it is vital to accurately assess the present conditions. It is possible to capture the in situ condition of structures by using laser scanners that create dense three-dimensional point clouds. This research investigates the use of high resolution three-dimensional terrestrial laser scanners with image capturing abilities as tools to capture geometric range data of complex scenes for structural engineering applications. Laser scanning technology is continuously improving, with commonly available scanners now capturing over 1,000,000 texture-mapped points per second with an accuracy of ~2 mm. However, automatically extracting meaningful information from point clouds remains a challenge, and the current state-of-the-art requires significant user interaction. The first objective of this research is to use widely accepted point cloud processing steps such as registration, feature extraction, segmentation, surface fitting and object detection to divide laser scanner data into meaningful object clusters and then apply several damage detection methods to these clusters. This required establishing a process for extracting important information from raw laser-scanned data sets such as the location, orientation and size of objects in a scanned region, and location of damaged regions on a structure. For this purpose, first a methodology for processing range data to identify objects in a scene is presented and then, once the objects from model library are correctly detected and fitted into the captured point cloud, these fitted objects are compared with the as-is point cloud of the investigated object to locate defects on the structure. The algorithms are demonstrated on synthetic scenes and validated on range data collected from test specimens and test-bed bridges. The second objective of this research is to combine useful information extracted from laser scanner data with color information, which provides information in the fourth dimension that enables detection of damage types such as cracks, corrosion, and related surface defects that are generally difficult to detect using only laser scanner data; moreover, the color information also helps to track volumetric changes on structures such as spalling. Although using images with varying resolution to detect cracks is an extensively researched topic, damage detection using laser scanners with and without color images is a new research area that holds many opportunities for enhancing the current practice of visual inspections. The aim is to combine the best features of laser scans and images to create an automatic and effective surface damage detection method, which will reduce the need for skilled labor during visual inspections and allow automatic documentation of related information. This work enables developing surface damage detection strategies that integrate existing condition rating criteria for a wide range damage types that are collected under three main categories: small deformations already existing on the structure (cracks); damage types that induce larger deformations, but where the initial topology of the structure has not changed appreciably (e.g., bent members); and large deformations where localized changes in the topology of the structure have occurred (e.g., rupture, discontinuities and spalling). The effectiveness of the developed damage detection algorithms are validated by comparing the detection results with the measurements taken from test specimens and test-bed bridges.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Wenhua; Fan, Jiwen; Easter, Richard C.
Aerosol-cloud interaction processes can be represented more physically with bin cloud microphysics relative to bulk microphysical parameterizations. However, due to computational power limitations in the past, bin cloud microphysics was often run with very simple aerosol treatments. The purpose of this study is to represent better aerosol-cloud interaction processes in the Chemistry version of Weather Research and Forecast model (WRF-Chem) at convection-permitting scales by coupling spectral-bin cloud microphysics (SBM) with the MOSAIC sectional aerosol model. A flexible interface is built that exchanges cloud and aerosol information between them. The interface contains a new bin aerosol activation approach, which replaces themore » treatments in the original SBM. It also includes the modified aerosol resuspension and in-cloud wet removal processes with the droplet loss tendencies and precipitation fluxes from SBM. The newly coupled system is evaluated for two marine stratocumulus cases over the Southeast Pacific Ocean with either a simplified aerosol setup or full-chemistry. We compare the aerosol activation process in the newly-coupled SBM-MOSAIC against the SBM simulation without chemistry using a simplified aerosol setup, and the results show consistent activation rates. A longer time simulation reinforces that aerosol resuspension through cloud drop evaporation plays an important role in replenishing aerosols and impacts cloud and precipitation in marine stratocumulus clouds. Evaluation of the coupled SBM-MOSAIC with full-chemistry using aircraft measurements suggests that the new model works realistically for the marine stratocumulus clouds, and improves the simulation of cloud microphysical properties compared to a simulation using MOSAIC coupled with the Morrison two-moment microphysics.« less
NASA Astrophysics Data System (ADS)
Gao, Wenhua; Fan, Jiwen; Easter, R. C.; Yang, Qing; Zhao, Chun; Ghan, Steven J.
2016-09-01
Aerosol-cloud interaction processes can be represented more physically with bin cloud microphysics relative to bulk microphysical parameterizations. However, due to computational power limitations in the past, bin cloud microphysics was often run with very simple aerosol treatments. The purpose of this study is to represent better aerosol-cloud interaction processes in the Chemistry version of Weather Research and Forecast model (WRF-Chem) at convection-permitting scales by coupling spectral-bin cloud microphysics (SBM) with the MOSAIC sectional aerosol model. A flexible interface is built that exchanges cloud and aerosol information between them. The interface contains a new bin aerosol activation approach, which replaces the treatments in the original SBM. It also includes the modified aerosol resuspension and in-cloud wet removal processes with the droplet loss tendencies and precipitation fluxes from SBM. The newly coupled system is evaluated for two marine stratocumulus cases over the Southeast Pacific Ocean with either a simplified aerosol setup or full-chemistry. We compare the aerosol activation process in the newly coupled SBM-MOSAIC against the SBM simulation without chemistry using a simplified aerosol setup, and the results show consistent activation rates. A longer time simulation reinforces that aerosol resuspension through cloud drop evaporation plays an important role in replenishing aerosols and impacts cloud and precipitation in marine stratocumulus clouds. Evaluation of the coupled SBM-MOSAIC with full-chemistry using aircraft measurements suggests that the new model works realistically for the marine stratocumulus clouds, and improves the simulation of cloud microphysical properties compared to a simulation using MOSAIC coupled with the Morrison two-moment microphysics.
Ahmed, Manan; Chin, Ying Hui; Guo, Xinxin; Zhao, Xing-Min
2017-05-01
The study of trace metals in the atmosphere and lake water is important due to their critical effects on humans, aquatic animals and the geochemical balance of ecosystems. The objective of this study was to investigate the concentration of trace metals in atmospheric and lake water samples during the rainy season (before and after precipitation) between November and December 2015. Typical methods of sample preparation for trace metal determination such as cloud point extraction, solid phase extraction and dispersive liquid-liquid micro-extraction are time-consuming and difficult to perform; therefore, there is a crucial need for development of more effective sample preparation procedure. A convection microwave assisted digestion procedure for extraction of trace metals was developed for use prior to inductively couple plasma-mass spectrometric determination. The result showed that metals like zinc (133.50-419.30μg/m 3 ) and aluminum (53.58-378.93μg/m 3 ) had higher concentrations in atmospheric samples as compared to lake samples before precipitation. On the other hand, the concentrations of zinc, aluminum, chromium and arsenic were significantly higher in lake samples after precipitation and lower in atmospheric samples. The relationship between physicochemical parameters (pH and turbidity) and heavy metal concentrations was investigated as well. Furthermore, enrichment factor analysis indicated that anthropogenic sources such as soil dust, biomass burning and fuel combustion influenced the metal concentrations in the atmosphere. Copyright © 2016. Published by Elsevier B.V.
Heidarizadi, Elham; Tabaraki, Reza
2016-01-01
A sensitive cloud point extraction method for simultaneous determination of trace amounts of sunset yellow (SY), allura red (AR) and brilliant blue (BB) by spectrophotometry was developed. Experimental parameters such as Triton X-100 concentration, KCl concentration and initial pH on extraction efficiency of dyes were optimized using response surface methodology (RSM) with a Doehlert design. Experimental data were evaluated by applying RSM integrating a desirability function approach. The optimum condition for extraction efficiency of SY, AR and BB simultaneously were: Triton X-100 concentration 0.0635 mol L(-1), KCl concentration 0.11 mol L(-1) and pH 4 with maximum overall desirability D of 0.95. Correspondingly, the maximum extraction efficiency of SY, AR and BB were 100%, 92.23% and 95.69%, respectively. At optimal conditions, extraction efficiencies were 99.8%, 92.48% and 95.96% for SY, AR and BB, respectively. These values were only 0.2%, 0.25% and 0.27% different from the predicted values, suggesting that the desirability function approach with RSM was a useful technique for simultaneously dye extraction. Linear calibration curves were obtained in the range of 0.02-4 for SY, 0.025-2.5 for AR and 0.02-4 μg mL(-1) for BB under optimum condition. Detection limit based on three times the standard deviation of the blank (3Sb) was 0.009, 0.01 and 0.007 μg mL(-1) (n=10) for SY, AR and BB, respectively. The method was successfully used for the simultaneous determination of the dyes in different food samples. Copyright © 2015 Elsevier B.V. All rights reserved.
Spir, Lívia Genovez; Ataide, Janaína Artem; De Lencastre Novaes, Letícia Celia; Moriel, Patrícia; Mazzola, Priscila Gava; De Borba Gurpilhares, Daniela; Silveira, Edgar; Pessoa, Adalberto; Tambourgi, Elias Basile
2015-01-01
Bromelain is a set of proteolytic enzymes found in pineapple (Ananas comosus) tissues such as stem, fruit and leaves. Because of its proteolytic activity, bromelain has potential applications in the cosmetic, pharmaceutical, and food industries. The present study focused on the recovery of bromelain from pineapple peel by liquid-liquid extraction in aqueous two-phase micellar systems (ATPMS), using Triton X-114 (TX-114) and McIlvaine buffer, in the absence and presence of electrolytes CaCl2 and KI; the cloud points of the generated extraction systems were studied by plotting binodal curves. Based on the cloud points, three temperatures were selected for extraction: 30, 33, and 36°C for systems in the absence of salts; 40, 43, and 46°C in the presence of KI; 24, 27, and 30°C in the presence of CaCl2 . Total protein and enzymatic activities were analyzed to monitor bromelain. Employing the ATPMS chosen for extraction (0.5 M KI with 3% TX-114, at pH 6.0, at 40°C), the bromelain extract stability was assessed after incorporation into three cosmetic bases: an anhydrous gel, a cream, and a cream-gel formulation. The cream-gel formulation presented as the most appropriate base to convey bromelain, and its optimal storage conditions were found to be 4.0 ± 0.5°C. The selected ATPMS enabled the extraction of a biomolecule with high added value from waste lined-up in a cosmetic formulation, allowing for exploration of further cosmetic potential. © 2015 American Institute of Chemical Engineers.
Microphysics in the Multi-Scale Modeling Systems with Unified Physics
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Chern, J.; Lamg, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.
2011-01-01
In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months, the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two-dimensions, and 1,000 x 1,000 km2 in three-dimensions. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale model can be run in grid size similar to cloud resolving model through nesting technique. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (l) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified weather research and forecast, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long and short wave radiative transfer and land processes and the explicit cloud-radiation, and cloud-surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, the microphysics developments of the multi-scale modeling system will be presented. In particular, the results from using multi-scale modeling system to study the heavy precipitation processes will be presented.
López-García, Ignacio; Vicente-Martínez, Yesica; Hernández-Córdoba, Manuel
2015-01-01
The cloud point extraction (CPE) of silver nanoparticles (AgNPs) by Triton X-114 allows chromium (III) ions to be transferred to the surfactant-rich phase, where they can be measured by electrothermal atomic absorption spectrometry. Using 20 mL sample and 50 μL Triton X-114 (30% w/v), the enrichment factor was 1150, and calibration graphs were obtained in the 5-100 ng L(-1) chromium range in the presence of 5 µg L(-1) AgNPs. Speciation of trivalent and hexavalent chromium was achieved by carrying out two CPE experiments, one of them in the presence of ethylenediaminetetraacetate. While in the first experiment, in absence of the complexing agent, the concentration of total chromium was obtained, the analytical signal measured in the presence of this chemical allowed the chromium (VI) concentration to be measured, being that of chromium (III) calculated by difference. The reliability of the procedure was verified by using three standard reference materials before applying to water, beer and wine samples. Copyright © 2014 Elsevier B.V. All rights reserved.
Pole-Like Road Furniture Detection in Sparse and Unevenly Distributed Mobile Laser Scanning Data
NASA Astrophysics Data System (ADS)
Li, F.; Lehtomäki, M.; Oude Elberink, S.; Vosselman, G.; Puttonen, E.; Kukko, A.; Hyyppä, J.
2018-05-01
Pole-like road furniture detection received much attention due to its traffic functionality in recent years. In this paper, we develop a framework to detect pole-like road furniture from sparse mobile laser scanning data. The framework is carried out in four steps. The unorganised point cloud is first partitioned. Then above ground points are clustered and roughly classified after removing ground points. A slicing check in combination with cylinder masking is proposed to extract pole-like road furniture candidates. Pole-like road furniture are obtained after occlusion analysis in the last stage. The average completeness and correctness of pole-like road furniture in sparse and unevenly distributed mobile laser scanning data was above 0.83. It is comparable to the state of art in the field of pole-like road furniture detection in mobile laser scanning data of good quality and is potentially of practical use in the processing of point clouds collected by autonomous driving platforms.
Patient identification using a near-infrared laser scanner
NASA Astrophysics Data System (ADS)
Manit, Jirapong; Bremer, Christina; Schweikard, Achim; Ernst, Floris
2017-03-01
We propose a new biometric approach where the tissue thickness of a person's forehead is used as a biometric feature. Given that the spatial registration of two 3D laser scans of the same human face usually produces a low error value, the principle of point cloud registration and its error metric can be applied to human classification techniques. However, by only considering the spatial error, it is not possible to reliably verify a person's identity. We propose to use a novel near-infrared laser-based head tracking system to determine an additional feature, the tissue thickness, and include this in the error metric. Using MRI as a ground truth, data from the foreheads of 30 subjects was collected from which a 4D reference point cloud was created for each subject. The measurements from the near-infrared system were registered with all reference point clouds using the ICP algorithm. Afterwards, the spatial and tissue thickness errors were extracted, forming a 2D feature space. For all subjects, the lowest feature distance resulted from the registration of a measurement and the reference point cloud of the same person. The combined registration error features yielded two clusters in the feature space, one from the same subject and another from the other subjects. When only the tissue thickness error was considered, these clusters were less distinct but still present. These findings could help to raise safety standards for head and neck cancer patients and lays the foundation for a future human identification technique.
NASA Astrophysics Data System (ADS)
Grochocka, M.
2013-12-01
Mobile laser scanning is dynamically developing measurement technology, which is becoming increasingly widespread in acquiring three-dimensional spatial information. Continuous technical progress based on the use of new tools, technology development, and thus the use of existing resources in a better way, reveals new horizons of extensive use of MLS technology. Mobile laser scanning system is usually used for mapping linear objects, and in particular the inventory of roads, railways, bridges, shorelines, shafts, tunnels, and even geometrically complex urban spaces. The measurement is done from the perspective of use of the object, however, does not interfere with the possibilities of movement and work. This paper presents the initial results of the segmentation data acquired by the MLS. The data used in this work was obtained as part of an inventory measurement infrastructure railway line. Measurement of point clouds was carried out using a profile scanners installed on the railway platform. To process the data, the tools of 'open source' Point Cloud Library was used. These tools allow to use templates of programming libraries. PCL is an open, independent project, operating on a large scale for processing 2D/3D image and point clouds. Software PCL is released under the terms of the BSD license (Berkeley Software Distribution License), which means it is a free for commercial and research use. The article presents a number of issues related to the use of this software and its capabilities. Segmentation data is based on applying the templates library pcl_ segmentation, which contains the segmentation algorithms to separate clusters. These algorithms are best suited to the processing point clouds, consisting of a number of spatially isolated regions. Template library performs the extraction of the cluster based on the fit of the model by the consensus method samples for various parametric models (planes, cylinders, spheres, lines, etc.). Most of the mathematical operation is carried out on the basis of Eigen library, a set of templates for linear algebra.
NASA Astrophysics Data System (ADS)
Ogura, Tomoo; Shiogama, Hideo; Watanabe, Masahiro; Yoshimori, Masakazu; Yokohata, Tokuta; Annan, James D.; Hargreaves, Julia C.; Ushigami, Naoto; Hirota, Kazuya; Someya, Yu; Kamae, Youichi; Tatebe, Hiroaki; Kimoto, Masahide
2017-12-01
This study discusses how much of the biases in top-of-atmosphere (TOA) radiation and clouds can be removed by parameter tuning in the present-day simulation of a climate model in the Coupled Model Inter-comparison Project phase 5 (CMIP5) generation. We used output of a perturbed parameter ensemble (PPE) experiment conducted with an atmosphere-ocean general circulation model (AOGCM) without flux adjustment. The Model for Interdisciplinary Research on Climate version 5 (MIROC5) was used for the PPE experiment. Output of the PPE was compared with satellite observation data to evaluate the model biases and the parametric uncertainty of the biases with respect to TOA radiation and clouds. The results indicate that removing or changing the sign of the biases by parameter tuning alone is difficult. In particular, the cooling bias of the shortwave cloud radiative effect at low latitudes could not be removed, neither in the zonal mean nor at each latitude-longitude grid point. The bias was related to the overestimation of both cloud amount and cloud optical thickness, which could not be removed by the parameter tuning either. However, they could be alleviated by tuning parameters such as the maximum cumulus updraft velocity at the cloud base. On the other hand, the bias of the shortwave cloud radiative effect in the Arctic was sensitive to parameter tuning. It could be removed by tuning such parameters as albedo of ice and snow both in the zonal mean and at each grid point. The obtained results illustrate the benefit of PPE experiments which provide useful information regarding effectiveness and limitations of parameter tuning. Implementing a shallow convection parameterization is suggested as a potential measure to alleviate the biases in radiation and clouds.
3-D Object Recognition from Point Cloud Data
NASA Astrophysics Data System (ADS)
Smith, W.; Walker, A. S.; Zhang, B.
2011-09-01
The market for real-time 3-D mapping includes not only traditional geospatial applications but also navigation of unmanned autonomous vehicles (UAVs). Massively parallel processes such as graphics processing unit (GPU) computing make real-time 3-D object recognition and mapping achievable. Geospatial technologies such as digital photogrammetry and GIS offer advanced capabilities to produce 2-D and 3-D static maps using UAV data. The goal is to develop real-time UAV navigation through increased automation. It is challenging for a computer to identify a 3-D object such as a car, a tree or a house, yet automatic 3-D object recognition is essential to increasing the productivity of geospatial data such as 3-D city site models. In the past three decades, researchers have used radiometric properties to identify objects in digital imagery with limited success, because these properties vary considerably from image to image. Consequently, our team has developed software that recognizes certain types of 3-D objects within 3-D point clouds. Although our software is developed for modeling, simulation and visualization, it has the potential to be valuable in robotics and UAV applications. The locations and shapes of 3-D objects such as buildings and trees are easily recognizable by a human from a brief glance at a representation of a point cloud such as terrain-shaded relief. The algorithms to extract these objects have been developed and require only the point cloud and minimal human inputs such as a set of limits on building size and a request to turn on a squaring option. The algorithms use both digital surface model (DSM) and digital elevation model (DEM), so software has also been developed to derive the latter from the former. The process continues through the following steps: identify and group 3-D object points into regions; separate buildings and houses from trees; trace region boundaries; regularize and simplify boundary polygons; construct complex roofs. Several case studies have been conducted using a variety of point densities, terrain types and building densities. The results have been encouraging. More work is required for better processing of, for example, forested areas, buildings with sides that are not at right angles or are not straight, and single trees that impinge on buildings. Further work may also be required to ensure that the buildings extracted are of fully cartographic quality. A first version will be included in production software later in 2011. In addition to the standard geospatial applications and the UAV navigation, the results have a further advantage: since LiDAR data tends to be accurately georeferenced, the building models extracted can be used to refine image metadata whenever the same buildings appear in imagery for which the GPS/IMU values are poorer than those for the LiDAR.
Reconstruction of 3d Models from Point Clouds with Hybrid Representation
NASA Astrophysics Data System (ADS)
Hu, P.; Dong, Z.; Yuan, P.; Liang, F.; Yang, B.
2018-05-01
The three-dimensional (3D) reconstruction of urban buildings from point clouds has long been an active topic in applications related to human activities. However, due to the structures significantly differ in terms of complexity, the task of 3D reconstruction remains a challenging issue especially for the freeform surfaces. In this paper, we present a new reconstruction algorithm which allows the 3D-models of building as a combination of regular structures and irregular surfaces, where the regular structures are parameterized plane primitives and the irregular surfaces are expressed as meshes. The extraction of irregular surfaces starts with an over-segmented method for the unstructured point data, a region growing approach based the adjacent graph of super-voxels is then applied to collapse these super-voxels, and the freeform surfaces can be clustered from the voxels filtered by a thickness threshold. To achieve these regular planar primitives, the remaining voxels with a larger flatness will be further divided into multiscale super-voxels as basic units, and the final segmented planes are enriched and refined in a mutually reinforcing manner under the framework of a global energy optimization. We have implemented the proposed algorithms and mainly tested on two point clouds that differ in point density and urban characteristic, and experimental results on complex building structures illustrated the efficacy of the proposed framework.
NASA Astrophysics Data System (ADS)
Soto-Ángeles, Alan Gustavo; Rodríguez-Hidalgo, María del Rosario; Soto-Figueroa, César; Vicente, Luis
2018-02-01
The thermoresponsive micellar phase behaviour that exhibits the Triton-X-100 micelles by temperature effect and addition of salt in the extraction process of metallic ions was explored from mesoscopic and experimental points. In the theoretical study, we analyse the formation of Triton-X-100 micelles, load and stabilization of dithizone molecules and metallic ions extraction inside the micellar core at room temperature; finally, a thermal analysis is presented. In the experimental study, the spectrophotometric outcomes confirm the solubility of the copper-dithizone complex in the micellar core, as well as the extraction of metallic ions of aqueous environment via a cloud-point at 332.2 K. The micellar solutions with salt present a low absorbance value compared with the micellar solutions without salt. The decrease in the absorbance value is attributed to a change in the size of hydrophobic region of colloidal micelles. All transitory stages of extraction process are discussed and analysed in this document.
First Prismatic Building Model Reconstruction from Tomosar Point Clouds
NASA Astrophysics Data System (ADS)
Sun, Y.; Shahzad, M.; Zhu, X.
2016-06-01
This paper demonstrates for the first time the potential of explicitly modelling the individual roof surfaces to reconstruct 3-D prismatic building models using spaceborne tomographic synthetic aperture radar (TomoSAR) point clouds. The proposed approach is modular and works as follows: it first extracts the buildings via DSM generation and cutting-off the ground terrain. The DSM is smoothed using BM3D denoising method proposed in (Dabov et al., 2007) and a gradient map of the smoothed DSM is generated based on height jumps. Watershed segmentation is then adopted to oversegment the DSM into different regions. Subsequently, height and polygon complexity constrained merging is employed to refine (i.e., to reduce) the retrieved number of roof segments. Coarse outline of each roof segment is then reconstructed and later refined using quadtree based regularization plus zig-zag line simplification scheme. Finally, height is associated to each refined roof segment to obtain the 3-D prismatic model of the building. The proposed approach is illustrated and validated over a large building (convention center) in the city of Las Vegas using TomoSAR point clouds generated from a stack of 25 images using Tomo-GENESIS software developed at DLR.
Efficient Open Source Lidar for Desktop Users
NASA Astrophysics Data System (ADS)
Flanagan, Jacob P.
Lidar --- Light Detection and Ranging --- is a remote sensing technology that utilizes a device similar to a rangefinder to determine a distance to a target. A laser pulse is shot at an object and the time it takes for the pulse to return in measured. The distance to the object is easily calculated using the speed property of light. For lidar, this laser is moved (primarily in a rotational movement usually accompanied by a translational movement) and records the distances to objects several thousands of times per second. From this, a 3 dimensional structure can be procured in the form of a point cloud. A point cloud is a collection of 3 dimensional points with at least an x, a y and a z attribute. These 3 attributes represent the position of a single point in 3 dimensional space. Other attributes can be associated with the points that include properties such as the intensity of the return pulse, the color of the target or even the time the point was recorded. Another very useful, post processed attribute is point classification where a point is associated with the type of object the point represents (i.e. ground.). Lidar has gained popularity and advancements in the technology has made its collection easier and cheaper creating larger and denser datasets. The need to handle this data in a more efficiently manner has become a necessity; The processing, visualizing or even simply loading lidar can be computationally intensive due to its very large size. Standard remote sensing and geographical information systems (GIS) software (ENVI, ArcGIS, etc.) was not originally built for optimized point cloud processing and its implementation is an afterthought and therefore inefficient. Newer, more optimized software for point cloud processing (QTModeler, TopoDOT, etc.) usually lack more advanced processing tools, requires higher end computers and are very costly. Existing open source lidar approaches the loading and processing of lidar in an iterative fashion that requires implementing batch coding and processing time that could take months for a standard lidar dataset. This project attempts to build a software with the best approach for creating, importing and exporting, manipulating and processing lidar, especially in the environmental field. Development of this software is described in 3 sections - (1) explanation of the search methods for efficiently extracting the "area of interest" (AOI) data from disk (file space), (2) using file space (for storage), budgeting memory space (for efficient processing) and moving between the two, and (3) method development for creating lidar products (usually raster based) used in environmental modeling and analysis (i.e.: hydrology feature extraction, geomorphological studies, ecology modeling, etc.).
Leong, Yoong Kit; Lan, John Chi-Wei; Loh, Hwei-San; Ling, Tau Chuan; Ooi, Chien Wei; Show, Pau Loke
2017-03-01
Polyhydroxyalkanoates (PHAs), a class of renewable and biodegradable green polymers, have gained attraction as a potential substitute for the conventional plastics due to the increasing concern towards environmental pollution as well as the rapidly depleting petroleum reserve. Nevertheless, the high cost of downstream processing of PHA has been a bottleneck for the wide adoption of PHAs. Among the options of PHAs recovery techniques, aqueous two-phase extraction (ATPE) outshines the others by having the advantages of providing a mild environment for bioseparation, being green and non-toxic, the capability to handle a large operating volume and easily scaled-up. Utilizing unique properties of thermo-responsive polymer which has decreasing solubility in its aqueous solution as the temperature rises, cloud point extraction (CPE) is an ATPE technique that allows its phase-forming component to be recycled and reused. A thorough literature review has shown that this is the first time isolation and recovery of PHAs from Cupriavidus necator H16 via CPE was reported. The optimum condition for PHAs extraction (recovery yield of 94.8% and purification factor of 1.42 fold) was achieved under the conditions of 20 wt/wt % ethylene oxide-propylene oxide (EOPO) with molecular weight of 3900 g/mol and 10 mM of sodium chloride addition at thermoseparating temperature of 60°C with crude feedstock limit of 37.5 wt/wt %. Recycling and reutilization of EOPO 3900 can be done at least twice with satisfying yield and PF. CPE has been demonstrated as an effective technique for the extraction of PHAs from microbial crude culture. Copyright © 2016 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
Point-Cloud Compression for Vehicle-Based Mobile Mapping Systems Using Portable Network Graphics
NASA Astrophysics Data System (ADS)
Kohira, K.; Masuda, H.
2017-09-01
A mobile mapping system is effective for capturing dense point-clouds of roads and roadside objects Point-clouds of urban areas, residential areas, and arterial roads are useful for maintenance of infrastructure, map creation, and automatic driving. However, the data size of point-clouds measured in large areas is enormously large. A large storage capacity is required to store such point-clouds, and heavy loads will be taken on network if point-clouds are transferred through the network. Therefore, it is desirable to reduce data sizes of point-clouds without deterioration of quality. In this research, we propose a novel point-cloud compression method for vehicle-based mobile mapping systems. In our compression method, point-clouds are mapped onto 2D pixels using GPS time and the parameters of the laser scanner. Then, the images are encoded in the Portable Networking Graphics (PNG) format and compressed using the PNG algorithm. In our experiments, our method could efficiently compress point-clouds without deteriorating the quality.
Tang, Tao; Qian, Kun; Shi, Tianyu; Wang, Fang; Li, Jianqiang; Cao, Yongsong
2010-11-08
A preconcentration technique known as cloud point extraction was developed for the determination of trace levels of triazole fungicides tricyclazole, triadimefon, tebuconazole and diniconazole in environmental waters. The triazole fungicides were extracted and preconcentrated using polyethylene glycol 600 monooleate (PEG600MO) as a low toxic and environmentally benign nonionic surfactant, and determined by high performance liquid chromatography/ultraviolet detection (HPLC-UV). The extraction conditions were optimized for the four triazole fungicides as follows: 2.0 wt% PEG600MO, 2.5 wt% Na(2)SO(4), equilibration at 45°C for 10 min, and centrifugation at 2000 rpm (533 × g) for 5 min. The triazole fungicides were well separated on a reversed-phase kromasil ODS C(18) column (250 mm × 4.6 mm, 5 μm) with gradient elution at ambient temperature and detected at 225 nm. The calibration range was 0.05-20 μg L(-1) for tricyclazole and 0.5-20 μg L(-1) for the other three classes of analytes with the correlation coefficients over 0.9992. Preconcentration factors were higher than 60-fold for the four selected fungicides. The limits of detection were 6.8-34.5 ng L(-1) (S/N=3) and the recoveries were 82.0-96.0% with the relative standard deviations of 2.8-7.8%. Copyright © 2010 Elsevier B.V. All rights reserved.
Chen, Miao; Xia, Qinghai; Liu, Mousheng; Yang, Yaling
2011-01-01
A cloud-point extraction (CPE) method using Triton X-114 (TX-114) nonionic surfactant was developed for the extraction and preconcentration of propyl gallate (PG), tertiary butyl hydroquinone (TBHQ), butylated hydroxyanisole (BHA), and butylated hydroxytoluene (BHT) from edible oils. The optimum conditions of CPE were 2.5% (v/v) TX-114, 0.5% (w/v) NaCl and 40 min equilibration time at 50 °C. The surfactant-rich phase was then analyzed by reversed-phase high-performance liquid chromatography with ultraviolet detection at 280 nm, using a gradient mobile phase consisting of methanol and 1.5% (v/v) acetic acid. Under the studied conditions, 4 synthetic phenolic antioxidants (SPAs) were successfully separated within 24 min. The limits of detection (LOD) were 1.9 ng mL(-1) for PG, 11 ng mL(-1) for TBHQ, 2.3 ng mL(-1) for BHA, and 5.9 ng mL(-1) for BHT. Recoveries of the SPAs spiked into edible oil were in the range 81% to 88%. The CPE method was shown to be potentially useful for the preconcentration of the target analytes, with a preconcentration factor of 14. Moreover, the method is simple, has high sensitivity, consumes much less solvent than traditional methods, and is environment-friendly. Practical Application: The method established in this article uses less organic solvent to extract SPAs from edible oils; it is simple, highly sensitive and results in no pollution to the environment.
Decay dynamics in the coupled-dipole model
NASA Astrophysics Data System (ADS)
Araújo, M. O.; Guerin, W.; Kaiser, R.
2018-06-01
Cooperative scattering in cold atoms has gained renewed interest, in particular in the context of single-photon superradiance, with the recent experimental observation of super- and subradiance in dilute atomic clouds. Numerical simulations to support experimental signatures of cooperative scattering are often limited by the number of dipoles which can be treated, well below the number of atoms in the experiments. In this paper, we provide systematic numerical studies aimed at matching the regime of dilute atomic clouds. We use a scalar coupled-dipole model in the low excitation limit and an exclusion volume to avoid density-related effects. Scaling laws for super- and subradiance are obtained and the limits of numerical studies are pointed out. We also illustrate the cooperative nature of light scattering by considering an incident laser field, where half of the beam has a ? phase shift. The enhanced subradiance obtained under such condition provides an additional signature of the role of coherence in the detected signal.
Applications of Panoramic Images: from 720° Panorama to Interior 3d Models of Augmented Reality
NASA Astrophysics Data System (ADS)
Lee, I.-C.; Tsai, F.
2015-05-01
A series of panoramic images are usually used to generate a 720° panorama image. Although panoramic images are typically used for establishing tour guiding systems, in this research, we demonstrate the potential of using panoramic images acquired from multiple sites to create not only 720° panorama, but also three-dimensional (3D) point clouds and 3D indoor models. Since 3D modeling is one of the goals of this research, the location of the panoramic sites needed to be carefully planned in order to maintain a robust result for close-range photogrammetry. After the images are acquired, panoramic images are processed into 720° panoramas, and these panoramas which can be used directly as panorama guiding systems or other applications. In addition to these straightforward applications, interior orientation parameters can also be estimated while generating 720° panorama. These parameters are focal length, principle point, and lens radial distortion. The panoramic images can then be processed with closerange photogrammetry procedures to extract the exterior orientation parameters and generate 3D point clouds. In this research, VisaulSFM, a structure from motion software is used to estimate the exterior orientation, and CMVS toolkit is used to generate 3D point clouds. Next, the 3D point clouds are used as references to create building interior models. In this research, Trimble Sketchup was used to build the model, and the 3D point cloud was added to the determining of locations of building objects using plane finding procedure. In the texturing process, the panorama images are used as the data source for creating model textures. This 3D indoor model was used as an Augmented Reality model replacing a guide map or a floor plan commonly used in an on-line touring guide system. The 3D indoor model generating procedure has been utilized in two research projects: a cultural heritage site at Kinmen, and Taipei Main Station pedestrian zone guidance and navigation system. The results presented in this paper demonstrate the potential of using panoramic images to generate 3D point clouds and 3D models. However, it is currently a manual and labor-intensive process. A research is being carried out to Increase the degree of automation of these procedures.
He, Ying; Liang, Bin; Yang, Jun; Li, Shunzhi; He, Jin
2017-08-11
The Iterative Closest Points (ICP) algorithm is the mainstream algorithm used in the process of accurate registration of 3D point cloud data. The algorithm requires a proper initial value and the approximate registration of two point clouds to prevent the algorithm from falling into local extremes, but in the actual point cloud matching process, it is difficult to ensure compliance with this requirement. In this paper, we proposed the ICP algorithm based on point cloud features (GF-ICP). This method uses the geometrical features of the point cloud to be registered, such as curvature, surface normal and point cloud density, to search for the correspondence relationships between two point clouds and introduces the geometric features into the error function to realize the accurate registration of two point clouds. The experimental results showed that the algorithm can improve the convergence speed and the interval of convergence without setting a proper initial value.
Liang, Bin; Yang, Jun; Li, Shunzhi; He, Jin
2017-01-01
The Iterative Closest Points (ICP) algorithm is the mainstream algorithm used in the process of accurate registration of 3D point cloud data. The algorithm requires a proper initial value and the approximate registration of two point clouds to prevent the algorithm from falling into local extremes, but in the actual point cloud matching process, it is difficult to ensure compliance with this requirement. In this paper, we proposed the ICP algorithm based on point cloud features (GF-ICP). This method uses the geometrical features of the point cloud to be registered, such as curvature, surface normal and point cloud density, to search for the correspondence relationships between two point clouds and introduces the geometric features into the error function to realize the accurate registration of two point clouds. The experimental results showed that the algorithm can improve the convergence speed and the interval of convergence without setting a proper initial value. PMID:28800096
Dong, Xiquan; Schwantes, Adam C.; Xi, Baike; ...
2015-06-10
Here, six coupled and decoupled marine boundary layer (MBL) clouds were chosen from the 19 month Atmospheric Radiation Measurement Mobile Facility data set over the Azores. Thresholds of liquid water potential temperature difference Δθ L < 0.5 K (>0.5 K) and total water mixing ratio difference Δq t < 0.5 g/kg (>0.5 g/kg) below the cloud base were used for selecting the coupled (decoupled) cases. A schematic diagram was given to demonstrate the coupled and decoupled MBL vertical structures and how they associate with nondrizzle, virga, and rain drizzle events. Out of a total of 2676 5 min samples, 34.5%more » were classified as coupled and 65.5% as decoupled, 36.2% as nondrizzle and 63.8% as drizzle (47.7% as virga and 16.1% as rain), and 33.4% as daytime and 66.6% as nighttime. The decoupled cloud layer is deeper (0.406 km) than coupled cloud layer (0.304 km), and its liquid water path and cloud droplet effective radius (r e) values (122.1 gm -2 and 13.0 µm) are higher than coupled ones (83.7 gm -2 and 10.4 µm). Conversely, decoupled stratocumuli have lower cloud droplet number concentration (N d) and surface cloud condensation nucleus (CCN) concentration (N CCN) (74.5 cm -3 and 150.9 cm -3) than coupled stratocumuli (111.7 cm -3 and 216.4 cm -3). The linear regressions between r e and N d with N CCN have demonstrated that coupled r e and N d strongly depend on N CCN and have higher correlations (-0.56 and 0.59) with N CCN than decoupled results (-0.14 and 0.25). The MBL cloud properties under nondrizzle and virga drizzle conditions are similar to each other but significantly different to those of rain drizzle.« less
PET attenuation correction for flexible MRI surface coils in hybrid PET/MRI using a 3D depth camera
NASA Astrophysics Data System (ADS)
Frohwein, Lynn J.; Heß, Mirco; Schlicher, Dominik; Bolwin, Konstantin; Büther, Florian; Jiang, Xiaoyi; Schäfers, Klaus P.
2018-01-01
PET attenuation correction for flexible MRI radio frequency surface coils in hybrid PET/MRI is still a challenging task, as position and shape of these coils conform to large inter-patient variabilities. The purpose of this feasibility study is to develop a novel method for the incorporation of attenuation information about flexible surface coils in PET reconstruction using the Microsoft Kinect V2 depth camera. The depth information is used to determine a dense point cloud of the coil’s surface representing the shape of the coil. From a CT template—acquired once in advance—surface information of the coil is extracted likewise and converted into a point cloud. The two point clouds are then registered using a combination of an iterative-closest-point (ICP) method and a partially rigid registration step. Using the transformation derived through the point clouds, the CT template is warped and thereby adapted to the PET/MRI scan setup. The transformed CT template is converted into an attenuation map from Hounsfield units into linear attenuation coefficients. The resulting fitted attenuation map is then integrated into the MRI-based patient-specific DIXON-based attenuation map of the actual PET/MRI scan. A reconstruction of phantom PET data acquired with the coil present in the field-of-view (FoV), but without the corresponding coil attenuation map, shows large artifacts in regions close to the coil. The overall count loss is determined to be around 13% compared to a PET scan without the coil present in the FoV. A reconstruction using the new μ-map resulted in strongly reduced artifacts as well as increased overall PET intensities with a remaining relative difference of about 1% to a PET scan without the coil in the FoV.
Rutzinger, Martin; Höfle, Bernhard; Hollaus, Markus; Pfeifer, Norbert
2008-01-01
Airborne laser scanning (ALS) is a remote sensing technique well-suited for 3D vegetation mapping and structure characterization because the emitted laser pulses are able to penetrate small gaps in the vegetation canopy. The backscattered echoes from the foliage, woody vegetation, the terrain, and other objects are detected, leading to a cloud of points. Higher echo densities (>20 echoes/m2) and additional classification variables from full-waveform (FWF) ALS data, namely echo amplitude, echo width and information on multiple echoes from one shot, offer new possibilities in classifying the ALS point cloud. Currently FWF sensor information is hardly used for classification purposes. This contribution presents an object-based point cloud analysis (OBPA) approach, combining segmentation and classification of the 3D FWF ALS points designed to detect tall vegetation in urban environments. The definition tall vegetation includes trees and shrubs, but excludes grassland and herbage. In the applied procedure FWF ALS echoes are segmented by a seeded region growing procedure. All echoes sorted descending by their surface roughness are used as seed points. Segments are grown based on echo width homogeneity. Next, segment statistics (mean, standard deviation, and coefficient of variation) are calculated by aggregating echo features such as amplitude and surface roughness. For classification a rule base is derived automatically from a training area using a statistical classification tree. To demonstrate our method we present data of three sites with around 500,000 echoes each. The accuracy of the classified vegetation segments is evaluated for two independent validation sites. In a point-wise error assessment, where the classification is compared with manually classified 3D points, completeness and correctness better than 90% are reached for the validation sites. In comparison to many other algorithms the proposed 3D point classification works on the original measurements directly, i.e. the acquired points. Gridding of the data is not necessary, a process which is inherently coupled to loss of data and precision. The 3D properties provide especially a good separability of buildings and terrain points respectively, if they are occluded by vegetation. PMID:27873771
NASA Astrophysics Data System (ADS)
Schwantes, Adam Christopher
Stratocumuli are a type of low clouds composed of individual convective elements that together form a continuous layer of clouds. Stratocumuli cover large regions of the Earth's surface, which make them important components in the Earth's radiation budget. Stratocumuli strongly reflect solar shortwave radiation, while weakly affecting outgoing longwave radiation. This leads to a strong radiative cooling effect that affects the Earth's radiation budget. Therefore it is important to investigate the mechanisms that affect the longevity of stratocumuli, so that their impact on the Earth's radiation budget can be fully understood. One mechanism that is currently being studied as influencing the lifetime of such cloud layers is boundary layer/surface coupling. It has been shown than in some regions (i.e. the west coast of South America) stratocumuli tend to break up when the boundary layer is decoupled with the surface, because they are cut off from their moisture source. This study will investigate the macro- and micro-physical properties of stratocumuli when boundary layers are either coupled to or decoupled from the surface. This will help advance understanding of the effects these macro- and micro-physical properties have on the lifetime of stratocumuli under different boundary layer conditions. This study used the Department of Energy Atmospheric Radiation Measurement (DOE ARM) mobile measurements facility (AMF) at the Azores site from June 2009 to December 2010. The measurements that were used include temperature profiles from radiosondes, cloud liquid water path (LWP) retrieved from the Microwave radiometer, and cloud base and top heights derived from W-band ARM Cloud Radar and lidar. Satellite images provided by the NASA Langley Research Center were also used to visually decipher cloud types over the region so that only single-layered stratocumuli cases are used in the study. To differentiate between coupled and decoupled cloud layers, two methods are used. The first method compares cloud base height and lifting condensation level (LCL) for surface air parcels. The second method uses potential temperature profiles to indicate whether a boundary layer is coupled or decoupled from the surface. The results from these two methods were then compared using select cases/samples when both methods classified a sample as coupled or decoupled. In this study, a total of seven coupled or decoupled cases (2-3 days long each) have been selected from the 19 month AMF dataset. Characteristics of the coupled and decoupled cases have been studied to identify similarities and differences. Furthermore, comparison results from this study have shown that there are similarities and differences between drizzling/non-drizzling stratocumulus clouds and decoupled/coupled stratocumulus clouds. Drizzling/decoupled stratocumuli tend to have higher LWP, cloud-droplet effective radius (re), cloud-top height, and cloud thickness values while non-drizzling/coupled stratocumuli have higher cloud-droplet number concentration (Nd) and cloud condensation nuclei concentration (NCCN) values. It was also determined that during daytime hours when stratocumuli are decoupled, they tend to be open cells, while coupled stratocumuli tend to be closed cells. Finally, decoupled nighttime stratocumuli were found to have higher LWPs compared to decoupled daytime stratocumuli, which resulted in the significant amount of heavy drizzle events occurring at night.
Multiple-Primitives Hierarchical Classification of Airborne Laser Scanning Data in Urban Areas
NASA Astrophysics Data System (ADS)
Ni, H.; Lin, X. G.; Zhang, J. X.
2017-09-01
A hierarchical classification method for Airborne Laser Scanning (ALS) data of urban areas is proposed in this paper. This method is composed of three stages among which three types of primitives are utilized, i.e., smooth surface, rough surface, and individual point. In the first stage, the input ALS data is divided into smooth surfaces and rough surfaces by employing a step-wise point cloud segmentation method. In the second stage, classification based on smooth surfaces and rough surfaces is performed. Points in the smooth surfaces are first classified into ground and buildings based on semantic rules. Next, features of rough surfaces are extracted. Then, points in rough surfaces are classified into vegetation and vehicles based on the derived features and Random Forests (RF). In the third stage, point-based features are extracted for the ground points, and then, an individual point classification procedure is performed to classify the ground points into bare land, artificial ground and greenbelt. Moreover, the shortages of the existing studies are analyzed, and experiments show that the proposed method overcomes these shortages and handles more types of objects.
Using Multi-Scale Modeling Systems and Satellite Data to Study the Precipitation Processes
NASA Technical Reports Server (NTRS)
Tao, Wei--Kuo; Chern, J.; Lamg, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.
2010-01-01
In recent years, exponentially increasing computer power extended Cloud Resolving Model (CRM) integrations from hours to months, the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two-dimensions, and 1,000 x 1,000 sq km in three-dimensions. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run in grid size similar to cloud resolving models through nesting technique. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model). (2) a regional scale model (a NASA unified weather research and forecast, W8F). (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long and short wave radiative transfer and land processes and the explicit cloud-radiation and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the results from using multi-scale modeling systems to study the interactions between clouds, precipitation, and aerosols will be presented. Also how to use the multi-satellite simulator to improve precipitation processes will be discussed.
Using Multi-Scale Modeling Systems to Study the Precipitation Processes
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo
2010-01-01
In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months, the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two-dimensions, and 1,000 x 1,000 km2 in three-dimensions. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale model can be run in grid size similar to cloud resolving model through nesting technique. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified weather research and forecast, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long and short wave radiative transfer and land processes and the explicit cloud-radiation, and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the results from using multi-scale modeling system to study the interactions between clouds, precipitation, and aerosols will be presented. Also how to use of the multi-satellite simulator to improve precipitation processes will be discussed.
Extraction and representation of common feature from uncertain facial expressions with cloud model.
Wang, Shuliang; Chi, Hehua; Yuan, Hanning; Geng, Jing
2017-12-01
Human facial expressions are key ingredient to convert an individual's innate emotion in communication. However, the variation of facial expressions affects the reliable identification of human emotions. In this paper, we present a cloud model to extract facial features for representing human emotion. First, the uncertainties in facial expression are analyzed in the context of cloud model. The feature extraction and representation algorithm is established under cloud generators. With forward cloud generator, facial expression images can be re-generated as many as we like for visually representing the extracted three features, and each feature shows different roles. The effectiveness of the computing model is tested on Japanese Female Facial Expression database. Three common features are extracted from seven facial expression images. Finally, the paper is concluded and remarked.
Evaluation Model for Pavement Surface Distress on 3d Point Clouds from Mobile Mapping System
NASA Astrophysics Data System (ADS)
Aoki, K.; Yamamoto, K.; Shimamura, H.
2012-07-01
This paper proposes a methodology to evaluate the pavement surface distress for maintenance planning of road pavement using 3D point clouds from Mobile Mapping System (MMS). The issue on maintenance planning of road pavement requires scheduled rehabilitation activities for damaged pavement sections to keep high level of services. The importance of this performance-based infrastructure asset management on actual inspection data is globally recognized. Inspection methodology of road pavement surface, a semi-automatic measurement system utilizing inspection vehicles for measuring surface deterioration indexes, such as cracking, rutting and IRI, have already been introduced and capable of continuously archiving the pavement performance data. However, any scheduled inspection using automatic measurement vehicle needs much cost according to the instruments' specification or inspection interval. Therefore, implementation of road maintenance work, especially for the local government, is difficult considering costeffectiveness. Based on this background, in this research, the methodologies for a simplified evaluation for pavement surface and assessment of damaged pavement section are proposed using 3D point clouds data to build urban 3D modelling. The simplified evaluation results of road surface were able to provide useful information for road administrator to find out the pavement section for a detailed examination and for an immediate repair work. In particular, the regularity of enumeration of 3D point clouds was evaluated using Chow-test and F-test model by extracting the section where the structural change of a coordinate value was remarkably achieved. Finally, the validity of the current methodology was investigated by conducting a case study dealing with the actual inspection data of the local roads.
Automated Feature Extraction of Foredune Morphology from Terrestrial Lidar Data
NASA Astrophysics Data System (ADS)
Spore, N.; Brodie, K. L.; Swann, C.
2014-12-01
Foredune morphology is often described in storm impact prediction models using the elevation of the dune crest and dune toe and compared with maximum runup elevations to categorize the storm impact and predicted responses. However, these parameters do not account for other foredune features that may make them more or less erodible, such as alongshore variations in morphology, vegetation coverage, or compaction. The goal of this work is to identify other descriptive features that can be extracted from terrestrial lidar data that may affect the rate of dune erosion under wave attack. Daily, mobile-terrestrial lidar surveys were conducted during a 6-day nor'easter (Hs = 4 m in 6 m water depth) along 20km of coastline near Duck, North Carolina which encompassed a variety of foredune forms in close proximity to each other. This abstract will focus on the tools developed for the automated extraction of the morphological features from terrestrial lidar data, while the response of the dune will be presented by Brodie and Spore as an accompanying abstract. Raw point cloud data can be dense and is often under-utilized due to time and personnel constraints required for analysis, since many algorithms are not fully automated. In our approach, the point cloud is first projected into a local coordinate system aligned with the coastline, and then bare earth points are interpolated onto a rectilinear 0.5 m grid creating a high resolution digital elevation model. The surface is analyzed by identifying features along each cross-shore transect. Surface curvature is used to identify the position of the dune toe, and then beach and berm morphology is extracted shoreward of the dune toe, and foredune morphology is extracted landward of the dune toe. Changes in, and magnitudes of, cross-shore slope, curvature, and surface roughness are used to describe the foredune face and each cross-shore transect is then classified using its pre-storm morphology for storm-response analysis.
a Hadoop-Based Algorithm of Generating dem Grid from Point Cloud Data
NASA Astrophysics Data System (ADS)
Jian, X.; Xiao, X.; Chengfang, H.; Zhizhong, Z.; Zhaohui, W.; Dengzhong, Z.
2015-04-01
Airborne LiDAR technology has proven to be the most powerful tools to obtain high-density, high-accuracy and significantly detailed surface information of terrain and surface objects within a short time, and from which the Digital Elevation Model of high quality can be extracted. Point cloud data generated from the pre-processed data should be classified by segmentation algorithms, so as to differ the terrain points from disorganized points, then followed by a procedure of interpolating the selected points to turn points into DEM data. The whole procedure takes a long time and huge computing resource due to high-density, that is concentrated on by a number of researches. Hadoop is a distributed system infrastructure developed by the Apache Foundation, which contains a highly fault-tolerant distributed file system (HDFS) with high transmission rate and a parallel programming model (Map/Reduce). Such a framework is appropriate for DEM generation algorithms to improve efficiency. Point cloud data of Dongting Lake acquired by Riegl LMS-Q680i laser scanner was utilized as the original data to generate DEM by a Hadoop-based algorithms implemented in Linux, then followed by another traditional procedure programmed by C++ as the comparative experiment. Then the algorithm's efficiency, coding complexity, and performance-cost ratio were discussed for the comparison. The results demonstrate that the algorithm's speed depends on size of point set and density of DEM grid, and the non-Hadoop implementation can achieve a high performance when memory is big enough, but the multiple Hadoop implementation can achieve a higher performance-cost ratio, while point set is of vast quantities on the other hand.
D Building FAÇADE Reconstruction Using Handheld Laser Scanning Data
NASA Astrophysics Data System (ADS)
Sadeghi, F.; Arefi, H.; Fallah, A.; Hahn, M.
2015-12-01
3D The three dimensional building modelling has been an interesting topic of research for decades and it seems that photogrammetry methods provide the only economic means to acquire truly 3D city data. According to the enormous developments of 3D building reconstruction with several applications such as navigation system, location based services and urban planning, the need to consider the semantic features (such as windows and doors) becomes more essential than ever, and therefore, a 3D model of buildings as block is not any more sufficient. To reconstruct the façade elements completely, we employed the high density point cloud data that obtained from the handheld laser scanner. The advantage of the handheld laser scanner with capability of direct acquisition of very dense 3D point clouds is that there is no need to derive three dimensional data from multi images using structure from motion techniques. This paper presents a grammar-based algorithm for façade reconstruction using handheld laser scanner data. The proposed method is a combination of bottom-up (data driven) and top-down (model driven) methods in which, at first the façade basic elements are extracted in a bottom-up way and then they are served as pre-knowledge for further processing to complete models especially in occluded and incomplete areas. The first step of data driven modelling is using the conditional RANSAC (RANdom SAmple Consensus) algorithm to detect façade plane in point cloud data and remove noisy objects like trees, pedestrians, traffic signs and poles. Then, the façade planes are divided into three depth layers to detect protrusion, indentation and wall points using density histogram. Due to an inappropriate reflection of laser beams from glasses, the windows appear like holes in point cloud data and therefore, can be distinguished and extracted easily from point cloud comparing to the other façade elements. Next step, is rasterizing the indentation layer that holds the windows and doors information. After rasterization process, the morphological operators are applied in order to remove small irrelevant objects. Next, the horizontal splitting lines are employed to determine floors and vertical splitting lines are employed to detect walls, windows, and doors. The windows, doors and walls elements which are named as terminals are clustered during classification process. Each terminal contains a special property as width. Among terminals, windows and doors are named the geometry tiles in definition of the vocabularies of grammar rules. Higher order structures that inferred by grouping the tiles resulted in the production rules. The rules with three dimensional modelled façade elements constitute formal grammar that is named façade grammar. This grammar holds all the information that is necessary to reconstruct façades in the style of the given building. Thus, it can be used to improve and complete façade reconstruction in areas with no or limited sensor data. Finally, a 3D reconstructed façade model is generated that the accuracy of its geometry size and geometry position depends on the density of the raw point cloud.
Titan's atmosphere (clouds and composition): new results
NASA Astrophysics Data System (ADS)
Griffith, C. A.
Titan's atmosphere potentially sports a cycle similar to the hydrologic one on Earth with clouds, rain and seas, but with methane playing the terrestrial role of water. Over the past ten years many independent efforts indicated no strong evidence for cloudiness until some unique spectra were analyzed in 1998 (Griffith et al.). These surprising observations displayed enhanced fluxes of 14-200 % on two nights at precisely the wavelengths (windows) that sense Titan's lower altitude where clouds might reside. The morphology of these enhancements in all 4 windows observed indicate that clouds covered ~6-9 % of Titan's surface and existed at ~15 km altitude. Here I discuss new observations recorded in 1999 aimed to further characterize Titan's clouds. While we find no evidence for a massive cloud system similar to the one observed previously, 1%-4% fluctuations in flux occur daily. These modulations, similar in wavelength and morphology to the more pronounced ones observed earlier, suggest the presence of clouds covering ≤1% of Titan's disk. The variations are too small to have been detected by most prior measurements. Repeated observations, spaced 30 minutes apart, indicate a temporal variability observable in the time scale of a couple of hours. The cloud heights hint that convection might govern their evolution. Their short lives point to the presence of rain.
Registration algorithm of point clouds based on multiscale normal features
NASA Astrophysics Data System (ADS)
Lu, Jun; Peng, Zhongtao; Su, Hang; Xia, GuiHua
2015-01-01
The point cloud registration technology for obtaining a three-dimensional digital model is widely applied in many areas. To improve the accuracy and speed of point cloud registration, a registration method based on multiscale normal vectors is proposed. The proposed registration method mainly includes three parts: the selection of key points, the calculation of feature descriptors, and the determining and optimization of correspondences. First, key points are selected from the point cloud based on the changes of magnitude of multiscale curvatures obtained by using principal components analysis. Then the feature descriptor of each key point is proposed, which consists of 21 elements based on multiscale normal vectors and curvatures. The correspondences in a pair of two point clouds are determined according to the descriptor's similarity of key points in the source point cloud and target point cloud. Correspondences are optimized by using a random sampling consistency algorithm and clustering technology. Finally, singular value decomposition is applied to optimized correspondences so that the rigid transformation matrix between two point clouds is obtained. Experimental results show that the proposed point cloud registration algorithm has a faster calculation speed, higher registration accuracy, and better antinoise performance.
Accuracy assessment of building point clouds automatically generated from iphone images
NASA Astrophysics Data System (ADS)
Sirmacek, B.; Lindenbergh, R.
2014-06-01
Low-cost sensor generated 3D models can be useful for quick 3D urban model updating, yet the quality of the models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method using multi-view iPhone images or an iPhone video file as an input. We register such automatically generated point cloud on a TLS point cloud of the same object to discuss accuracy, advantages and limitations of the iPhone generated point clouds. For the chosen example showcase, we have classified 1.23% of the iPhone point cloud points as outliers, and calculated the mean of the point to point distances to the TLS point cloud as 0.11 m. Since a TLS point cloud might also include measurement errors and noise, we computed local noise values for the point clouds from both sources. Mean (μ) and standard deviation (σ) of roughness histograms are calculated as (μ1 = 0.44 m., σ1 = 0.071 m.) and (μ2 = 0.025 m., σ2 = 0.037 m.) for the iPhone and TLS point clouds respectively. Our experimental results indicate possible usage of the proposed automatic 3D model generation framework for 3D urban map updating, fusion and detail enhancing, quick and real-time change detection purposes. However, further insights should be obtained first on the circumstances that are needed to guarantee a successful point cloud generation from smartphone images.
A graph signal filtering-based approach for detection of different edge types on airborne lidar data
NASA Astrophysics Data System (ADS)
Bayram, Eda; Vural, Elif; Alatan, Aydin
2017-10-01
Airborne Laser Scanning is a well-known remote sensing technology, which provides a dense and highly accurate, yet unorganized point cloud of earth surface. During the last decade, extracting information from the data generated by airborne LiDAR systems has been addressed by many studies in geo-spatial analysis and urban monitoring applications. However, the processing of LiDAR point clouds is challenging due to their irregular structure and 3D geometry. In this study, we propose a novel framework for the detection of the boundaries of an object or scene captured by LiDAR. Our approach is motivated by edge detection techniques in vision research and it is established on graph signal filtering which is an exciting and promising field of signal processing for irregular data types. Due to the convenient applicability of graph signal processing tools on unstructured point clouds, we achieve the detection of the edge points directly on 3D data by using a graph representation that is constructed exclusively to answer the requirements of the application. Moreover, considering the elevation data as the (graph) signal, we leverage aerial characteristic of the airborne LiDAR data. The proposed method can be employed both for discovering the jump edges on a segmentation problem and for exploring the crease edges on a LiDAR object on a reconstruction/modeling problem, by only adjusting the filter characteristics.
Curvature computation in volume-of-fluid method based on point-cloud sampling
NASA Astrophysics Data System (ADS)
Kassar, Bruno B. M.; Carneiro, João N. E.; Nieckele, Angela O.
2018-01-01
This work proposes a novel approach to compute interface curvature in multiphase flow simulation based on Volume of Fluid (VOF) method. It is well documented in the literature that curvature and normal vector computation in VOF may lack accuracy mainly due to abrupt changes in the volume fraction field across the interfaces. This may cause deterioration on the interface tension forces estimates, often resulting in inaccurate results for interface tension dominated flows. Many techniques have been presented over the last years in order to enhance accuracy in normal vectors and curvature estimates including height functions, parabolic fitting of the volume fraction, reconstructing distance functions, coupling Level Set method with VOF, convolving the volume fraction field with smoothing kernels among others. We propose a novel technique based on a representation of the interface by a cloud of points. The curvatures and the interface normal vectors are computed geometrically at each point of the cloud and projected onto the Eulerian grid in a Front-Tracking manner. Results are compared to benchmark data and significant reduction on spurious currents as well as improvement in the pressure jump are observed. The method was developed in the open source suite OpenFOAM® extending its standard VOF implementation, the interFoam solver.
Altunay, Nail; Gürkan, Ramazan
2015-05-15
A new cloud-point extraction (CPE) for the determination of antimony species in biological and beverages samples has been established with flame atomic absorption spectrometry (FAAS). The method is based on the fact that formation of the competitive ion-pairing complex of Sb(III) and Sb(V) with Victoria Pure Blue BO (VPB(+)) at pH 10. The antimony species were individually detected by FAAS. Under the optimized conditions, the calibration range for Sb(V) is 1-250 μg L(-1) with a detection limit of 0.25 μg L(-1) and sensitive enhancement factor of 76.3 while the calibration range for Sb(III) is 10-400 μg L(-1) with a detection limit of 5.15 μg L(-1) and sensitive enhancement factor of 48.3. The precision as a relative standard deviation is in range of 0.24-2.35%. The method was successfully applied to the speciative determination of antimony species in the samples. The validation was verified by analysis of certified reference materials (CRMs). Copyright © 2014 Elsevier Ltd. All rights reserved.
Gürkan, Ramazan; Kır, Ufuk; Altunay, Nail
2015-08-01
The determination of inorganic arsenic species in water, beverages and foods become crucial in recent years, because arsenic species are considered carcinogenic and found at high concentrations in the samples. This communication describes a new cloud-point extraction (CPE) method for the determination of low quantity of arsenic species in the samples, purchased from the local market by UV-Visible Spectrophotometer (UV-Vis). The method is based on selective ternary complex of As(V) with acridine orange (AOH(+)) being a versatile fluorescence cationic dye in presence of tartaric acid and polyethylene glycol tert-octylphenyl ether (Triton X-114) at pH 5.0. Under the optimized conditions, a preconcentration factor of 65 and detection limit (3S blank/m) of 1.14 μg L(-1) was obtained from the calibration curve constructed in the range of 4-450 μg L(-1) with a correlation coefficient of 0.9932 for As(V). The method is validated by the analysis of certified reference materials (CRMs). Copyright © 2015 Elsevier Ltd. All rights reserved.
Pole-Like Street Furniture Decompostion in Mobile Laser Scanning Data
NASA Astrophysics Data System (ADS)
Li, F.; Oude Elberink, S.; Vosselman, G.
2016-06-01
Automatic semantic interpretation of street furniture has become a popular topic in recent years. Current studies detect street furniture as connected components of points above the street level. Street furniture classification based on properties of such components suffers from large intra class variability of shapes and cannot deal with mixed classes like traffic signs attached to light poles. In this paper, we focus on the decomposition of point clouds of pole-like street furniture. A novel street furniture decomposition method is proposed, which consists of three steps: (i) acquirement of prior-knowledge, (ii) pole extraction, (iii) components separation. For the pole extraction, a novel global pole extraction approach is proposed to handle 3 different cases of street furniture. In the evaluation of results, which involves the decomposition of 27 different instances of street furniture, we demonstrate that our method decomposes mixed classes street furniture into poles and different components with respect to different functionalities.
Use of Assisted Photogrammetry for Indoor and Outdoor Navigation Purposes
NASA Astrophysics Data System (ADS)
Pagliari, D.; Cazzaniga, N. E.; Pinto, L.
2015-05-01
Nowadays, devices and applications that require navigation solutions are continuously growing. For instance, consider the increasing demand of mapping information or the development of applications based on users' location. In some case it could be sufficient an approximate solution (e.g. at room level), but in the large amount of cases a better solution is required. The navigation problem has been solved from a long time using Global Navigation Satellite System (GNSS). However, it can be unless in obstructed areas, such as in urban areas or inside buildings. An interesting low cost solution is photogrammetry, assisted using additional information to scale the photogrammetric problem and recovering a solution also in critical situation for image-based methods (e.g. poor textured surfaces). In this paper, the use of assisted photogrammetry has been tested for both outdoor and indoor scenarios. Outdoor navigation problem has been faced developing a positioning system with Ground Control Points extracted from urban maps as constrain and tie points automatically extracted from the images acquired during the survey. The proposed approach has been tested under different scenarios, recovering the followed trajectory with an accuracy of 0.20 m. For indoor navigation a solution has been thought to integrate the data delivered by Microsoft Kinect, by identifying interesting features on the RGB images and re-projecting them on the point clouds generated from the delivered depth maps. Then, these points have been used to estimate the rotation matrix between subsequent point clouds and, consequently, to recover the trajectory with few centimeters of error.
Liu, Jian; Liang, Huawei; Wang, Zhiling; Chen, Xiangcheng
2015-01-01
The quick and accurate understanding of the ambient environment, which is composed of road curbs, vehicles, pedestrians, etc., is critical for developing intelligent vehicles. The road elements included in this work are road curbs and dynamic road obstacles that directly affect the drivable area. A framework for the online modeling of the driving environment using a multi-beam LIDAR, i.e., a Velodyne HDL-64E LIDAR, which describes the 3D environment in the form of a point cloud, is reported in this article. First, ground segmentation is performed via multi-feature extraction of the raw data grabbed by the Velodyne LIDAR to satisfy the requirement of online environment modeling. Curbs and dynamic road obstacles are detected and tracked in different manners. Curves are fitted for curb points, and points are clustered into bundles whose form and kinematics parameters are calculated. The Kalman filter is used to track dynamic obstacles, whereas the snake model is employed for curbs. Results indicate that the proposed framework is robust under various environments and satisfies the requirements for online processing. PMID:26404290
Accuracy assessment of airborne LIDAR data and automated extraction of features
NASA Astrophysics Data System (ADS)
Cetin, Ali Fuat
Airborne LIDAR technology is becoming more widely used since it provides fast and dense irregularly spaced 3D point clouds. The coordinates produced as a result of calibration of the system are used for surface modeling and information extraction. In this research a new idea of LIDAR detectable targets is introduced. In the second part of this research, a new technique to delineate the edge of road pavements automatically using only LIDAR is presented. The accuracy of LIDAR data should be determined before exploitation for any information extraction to support a Geographic Information System (GIS) database. Until recently there was no definitive research to provide a methodology for common and practical assessment of both horizontal and vertical accuracy of LIDAR data for end users. The idea used in this research was to use targets of such a size and design so that the position of each target can be determined using the Least Squares Image Matching Technique. The technique used in this research can provide end users and data providers an easy way to evaluate the quality of the product, especially when there are accessible hard surfaces to install the targets. The results of the technique are determined to be in a reasonable range when the point spacing of the data is sufficient. To delineate the edge of pavements, trees and buildings are removed from the point cloud, and the road surfaces are segmented from the remaining terrain data. This is accomplished using the homogeneous nature of road surfaces in intensity and height. There are not many studies to delineate the edge of road pavement after the road surfaces are extracted. In this research, template matching techniques are used with criteria computed by Gray Level Co-occurrence Matrix (GLCM) properties, in order to locate seed pixels in the image. The seed pixels are then used for placement of the matched templates along the road. The accuracy of the delineated edge of pavement is determined by comparing the coordinates of reference points collected via photogrammetry with the coordinates of the nearest points along the delineated edge.
Topographic lidar survey of the Chandeleur Islands, Louisiana, February 6, 2012
Guy, Kristy K.; Plant, Nathaniel G.; Bonisteel-Cormier, Jamie M.
2014-01-01
This Data Series Report contains lidar elevation data collected February 6, 2012, for Chandeleur Islands, Louisiana. Point cloud data in lidar data exchange format (LAS) and bare earth digital elevation models (DEMs) in ERDAS Imagine raster format (IMG) are available as downloadable files. The point cloud data—data points described in three dimensions—were processed to extract bare earth data; therefore, the point cloud data are organized into the following classes: 1– and 17–unclassified, 2–ground, 9–water, and 10–breakline proximity. Digital Aerial Solutions, LLC, (DAS) was contracted by the U.S. Geological Survey (USGS) to collect and process these data. The lidar data were acquired at a horizontal spacing (or nominal pulse spacing) of 0.5 meters (m) or less. The USGS conducted two ground surveys in small areas on the Chandeleur Islands on February 5, 2012. DAS calculated a root mean square error (RMSEz) of 0.034 m by comparing the USGS ground survey point data to triangulated irregular network (TIN) models built from the lidar elevation data. This lidar survey was conducted to document the topography and topographic change of the Chandeleur Islands. The survey supports detailed studies of Louisiana, Mississippi and Alabama barrier islands that resolve annual and episodic changes in beaches, berms and dunes associated with processes driven by storms, sea-level rise, and even human restoration activities. These lidar data are available to Federal, State and local governments, emergency-response officials, resource managers, and the general public.
The observed influence of local anthropogenic pollution on northern Alaskan cloud properties
NASA Astrophysics Data System (ADS)
Maahn, Maximilian; de Boer, Gijs; Creamean, Jessie M.; Feingold, Graham; McFarquhar, Greg M.; Wu, Wei; Mei, Fan
2017-12-01
Due to their importance for the radiation budget, liquid-containing clouds are a key component of the Arctic climate system. Depending on season, they can cool or warm the near-surface air. The radiative properties of these clouds depend strongly on cloud drop sizes, which are governed in part by the availability of cloud condensation nuclei. Here, we investigate how cloud drop sizes are modified in the presence of local emissions from industrial facilities at the North Slope of Alaska. For this, we use aircraft in situ observations of clouds and aerosols from the 5th Department of Energy Atmospheric Radiation Measurement (DOE ARM) Program's Airborne Carbon Measurements (ACME-V) campaign obtained in summer 2015. Comparison of observations from an area with petroleum extraction facilities (Oliktok Point) with data from a reference area relatively free of anthropogenic sources (Utqiaġvik/Barrow) represents an opportunity to quantify the impact of local industrial emissions on cloud properties. In the presence of local industrial emissions, the mean effective radii of cloud droplets are reduced from 12.2 to 9.4 µm, which leads to suppressed drizzle production and precipitation. At the same time, concentrations of refractory black carbon and condensation nuclei are enhanced below the clouds. These results demonstrate that the effects of anthropogenic pollution on local climate need to be considered when planning Arctic industrial infrastructure in a warming environment.
Cloud microphysics modification with an online coupled COSMO-MUSCAT regional model
NASA Astrophysics Data System (ADS)
Sudhakar, D.; Quaas, J.; Wolke, R.; Stoll, J.; Muehlbauer, A. D.; Tegen, I.
2015-12-01
Abstract: The quantification of clouds, aerosols, and aerosol-cloud interactions in models, continues to be a challenge (IPCC, 2013). In this scenario two-moment bulk microphysical scheme is used to understand the aerosol-cloud interactions in the regional model COSMO (Consortium for Small Scale Modeling). The two-moment scheme in COSMO has been especially designed to represent aerosol effects on the microphysics of mixed-phase clouds (Seifert et al., 2006). To improve the model predictability, the radiation scheme has been coupled with two-moment microphysical scheme. Further, the cloud microphysics parameterization has been modified via coupling COSMO with MUSCAT (MultiScale Chemistry Aerosol Transport model, Wolke et al., 2004). In this study, we will be discussing the initial result from the online-coupled COSMO-MUSCAT model system with modified two-moment parameterization scheme along with COSP (CFMIP Observational Simulator Package) satellite simulator. This online coupled model system aims to improve the sub-grid scale process in the regional weather prediction scenario. The constant aerosol concentration used in the Seifert and Beheng, (2006) parameterizations in COSMO model has been replaced by aerosol concentration derived from MUSCAT model. The cloud microphysical process from the modified two-moment scheme is compared with stand-alone COSMO model. To validate the robustness of the model simulation, the coupled model system is integrated with COSP satellite simulator (Muhlbauer et al., 2012). Further, the simulations are compared with MODIS (Moderate Resolution Imaging Spectroradiometer) and ISCCP (International Satellite Cloud Climatology Project) satellite products.
Helicopter-based Photography for use in SfM over the West Greenland Ablation Zone
NASA Astrophysics Data System (ADS)
Mote, T. L.; Tedesco, M.; Astuti, I.; Cotten, D.; Jordan, T.; Rennermalm, A. K.
2015-12-01
Results of low-elevation high-resolution aerial photography from a helicopter are reported for a supraglacial watershed in West Greenland. Data were collected at the end of July 2015 over a supraglacial watershed terminating in the Kangerlussuaq region of Greenland and following the Utrecht University K-Transect of meteorological stations. The aerial photography reported here were complementary observations used to support hyperspectral measurements of albedo, discussed in the Greenland Ice sheet hydrology session of this AGU Fall meeting. A compact digital camera was installed inside a pod mounted on the side of the helicopter together with gyroscopes and accelerometers that were used to estimate the relative orientation. Continuous video was collected on 19 and 21 July flights, and frames extracted from the videos are used to create a series of aerial photos. Individual geo-located aerial photos were also taken on a 24 July flight. We demonstrate that by maintaining a constant flight elevation and a near constant ground speed, a helicopter with a mounted camera can produce 3-D structure of the ablation zone of the ice sheet at unprecedented spatial resolution of the order of 5 - 10 cm. By setting the intervalometer on the camera to 2 seconds, the images obtained provide sufficient overlap (>60%) for digital image alignment, even at a flight elevation of ~170m. As a result, very accurate point matching between photographs can be achieved and an extremely dense RGB encoded point cloud can be extracted. Overlapping images provide a series of stereopairs that can be used to create point cloud data consisting of 3 position and 3 color variables, X, Y, Z, R, G, and B. This point cloud is then used to create orthophotos or large scale digital elevation models, thus accurately displaying ice structure. The geo-referenced images provide a ground spatial resolution of approximately 6 cm, permitting analysis of detailed features, such as cryoconite holes, evolving small order streams, and cracks from hydrofracturing.
Pedestrian Pathfinding in Urban Environments: Preliminary Results
NASA Astrophysics Data System (ADS)
López-Pazos, G.; Balado, J.; Díaz-Vilariño, L.; Arias, P.; Scaioni, M.
2017-12-01
With the rise of urban population, many initiatives are focused upon the smart city concept, in which mobility of citizens arises as one of the main components. Updated and detailed spatial information of outdoor environments is needed to accurate path planning for pedestrians, especially for people with reduced mobility, in which physical barriers should be considered. This work presents a methodology to use point clouds to direct path planning. The starting point is a classified point cloud in which ground elements have been previously classified as roads, sidewalks, crosswalks, curbs and stairs. The remaining points compose the obstacle class. The methodology starts by individualizing ground elements and simplifying them into representative points, which are used as nodes in the graph creation. The region of influence of obstacles is used to refine the graph. Edges of the graph are weighted according to distance between nodes and according to their accessibility for wheelchairs. As a result, we obtain a very accurate graph representing the as-built environment. The methodology has been tested in a couple of real case studies and Dijkstra algorithm was used to pathfinding. The resulting paths represent the optimal according to motor skills and safety.
Towards a 3d Based Platform for Cultural Heritage Site Survey and Virtual Exploration
NASA Astrophysics Data System (ADS)
Seinturier, J.; Riedinger, C.; Mahiddine, A.; Peloso, D.; Boï, J.-M.; Merad, D.; Drap, P.
2013-07-01
This paper present a 3D platform that enables to make both cultural heritage site survey and its virtual exploration. It provides a single and easy way to use framework for merging multi scaled 3D measurements based on photogrammetry, documentation produced by experts and the knowledge of involved domains leaving the experts able to extract and choose the relevant information to produce the final survey. Taking into account the interpretation of the real world during the process of archaeological surveys is in fact the main goal of a survey. New advances in photogrammetry and the capability to produce dense 3D point clouds do not solve the problem of surveys. New opportunities for 3D representation are now available and we must to use them and find new ways to link geometry and knowledge. The new platform is able to efficiently manage and process large 3D data (points set, meshes) thanks to the implementation of space partition methods coming from the state of the art such as octrees and kd-trees and thus can interact with dense point clouds (thousands to millions of points) in real time. The semantisation of raw 3D data relies on geometric algorithms such as geodetic path computation, surface extraction from dense points cloud and geometrical primitive optimization. The platform provide an interface that enables expert to describe geometric representations of interesting objects like ashlar blocs, stratigraphic units or generic items (contour, lines, … ) directly onto the 3D representation of the site and without explicit links to underlying algorithms. The platform provide two ways for describing geometric representation. If oriented photographs are available, the expert can draw geometry on a photograph and the system computes its 3D representation by projection on the underlying mesh or the points cloud. If photographs are not available or if the expert wants to only use the 3D representation then he can simply draw objects shape on it. When 3D representations of objects of a surveyed site are extracted from the mesh, the link with domain related documentation is done by means of a set of forms designed by experts. Information from these forms are linked with geometry such that documentation can be attached to the viewed objects. Additional semantisation methods related to specific domains have been added to the platform. Beyond realistic rendering of surveyed site, the platform embeds non photorealistic rendering (NPR) algorithms. These algorithms enable to dynamically illustrate objects of interest that are related to knowledge with specific styles. The whole platform is implemented with a Java framework and relies on an actual and effective 3D engine that make available latest rendering methods. We illustrate this work on various photogrammetric survey, in medieval archaeology with the Shawbak castle in Jordan and in underwater archaeology on different marine sites.
Pion momentum distributions in the nucleon in chiral effective theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burkardt, Matthias R.; Hendricks, K. S.; Ji, Cheung Ryong
2013-03-01
We compute the light-cone momentum distributions of pions in the nucleon in chiral effective theory using both pseudovector and pseudoscalar pion--nucleon couplings. For the pseudovector coupling we identifymore » $$\\delta$$-function contributions associated with end-point singularities arising from the pion-nucleon rainbow diagrams, as well as from pion tadpole diagrams which are not present in the pseudoscalar model. Gauge invariance is demonstrated, to all orders in the pion mass, with the inclusion of Weinberg-Tomozawa couplings involving operator insertions at the $$\\pi NN$$ vertex. The results pave the way for phenomenological applications of pion cloud models that are manifestly consistent with the chiral symmetry properties of QCD.« less
Remote Sensing of Multiple Cloud Layer Heights Using Multi-Angular Measurements
NASA Technical Reports Server (NTRS)
Sinclair, Kenneth; Van Diedenhoven, Bastiaan; Cairns, Brian; Yorks, John; Wasilewski, Andrzej; Mcgill, Matthew
2017-01-01
Cloud top height (CTH) affects the radiative properties of clouds. Improved CTH observations will allow for improved parameterizations in large-scale models and accurate information on CTH is also important when studying variations in freezing point and cloud microphysics. NASAs airborne Research Scanning Polarimeter (RSP) is able to measure cloud top height using a novel multi-angular contrast approach. For the determination of CTH, a set of consecutive nadir reflectances is selected and the cross-correlations between this set and co-located sets at other viewing angles are calculated for a range of assumed cloud top heights, yielding a correlation profile. Under the assumption that cloud reflectances are isotropic, local peaks in the correlation profile indicate cloud layers. This technique can be applied to every RSP footprint and we demonstrate that detection of multiple peaks in the correlation profile allow retrieval of heights of multiple cloud layers within single RSP footprints. This paper provides an in-depth description of the architecture and performance of the RSPs CTH retrieval technique using data obtained during the Studies of Emissions and Atmospheric Composition, Clouds and Climate Coupling by Regional Surveys (SEAC(exp. 4)RS) campaign. RSP retrieved cloud heights are evaluated using collocated data from the Cloud Physics Lidar (CPL). The method's accuracy associated with the magnitude of correlation, optical thickness, cloud thickness and cloud height are explored. The technique is applied to measurements at a wavelength of 670 nm and 1880 nm and their combination. The 1880-nm band is virtually insensitive to the lower troposphere due to strong water vapor absorption.
Physical and numerical investigation of the flow induced vibration of the hydrofoil
NASA Astrophysics Data System (ADS)
Wu, Q.; Wang, G. Y.; Huang, B.
2016-11-01
The objective of this paper is to investigate the flow induced vibration of a flexible hydrofoil in cavitating flows via combined experimental and numerical studies. The experiments are presented for the modified NACA66 hydrofoil made of POM Polyacetate in the closed-loop cavitation tunnel at Beijing Institute of Technology. The high-speed camera and the single point Laser Doppler Vibrometer are applied to analyze the transient flow structures and the corresponding structural vibration characteristics. The hybrid coupled fluid structure interaction model is conducted to couple the incompressible and unsteady Reynolds Averaged Navier-Stokes solver with a simplified two-degree-of-freedom structural model. The k-ω SST turbulence model with the turbulence viscosity correction and the Zwart cavitation model are introduced to the present simulations. The results showed that with the decreasing of the cavitation number, the cavitating flows display incipient cavitation, sheet cavitation, cloud cavitation and supercavitation. The vibration magnitude increases dramatically for the cloud cavitation and decline for the supercavitation. The cloud cavitation development strongly affects the vibration response, which is corresponding to the periodically developing and shedding of the large-scale cloud cavity. The main frequency of the vibration amplitude is accordance with the cavity shedding frequency and other two frequencies of the vibration amplitude are corresponding to the natural frequencies of the bending and twisting modes.
Terrestrial laser scanning for geometry extraction and change monitoring of rubble mound breakwaters
NASA Astrophysics Data System (ADS)
Puente, I.; Lindenbergh, R.; González-Jorge, H.; Arias, P.
2014-05-01
Rubble mound breakwaters are coastal defense structures that protect harbors and beaches from the impacts of both littoral drift and storm waves. They occasionally break, leading to catastrophic damage to surrounding human populations and resulting in huge economic and environmental losses. Ensuring their stability is considered to be of vital importance and the major reason for setting up breakwater monitoring systems. Terrestrial laser scanning has been recognized as a monitoring technique of existing infrastructures. Its capability for measuring large amounts of accurate points in a short period of time is also well proven. In this paper we first introduce a method for the automatic extraction of face geometry of concrete cubic blocks, as typically used in breakwaters. Point clouds are segmented based on their orientation and location. Then we compare corresponding cuboids of three co-registered point clouds to estimate their transformation parameters over time. The first method is demonstrated on scan data from the Baiona breakwater (Spain) while the change detection is demonstrated on repeated scan data of concrete bricks, where the changing scenario was simulated. The application of the presented methodology has verified its effectiveness for outlining the 3D breakwater units and analyzing their changes at the millimeter level. Breakwater management activities could benefit from this initial version of the method in order to improve their productivity.
A Multi-scale Modeling System with Unified Physics to Study Precipitation Processes
NASA Astrophysics Data System (ADS)
Tao, W. K.
2017-12-01
In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months, the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two-dimensions, and 1,000 x 1,000 km2 in three-dimensions. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale model can be run in grid size similar to cloud resolving model through nesting technique. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified weather research and forecast, WRF), and (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF). The same microphysical processes, long and short wave radiative transfer and land processes and the explicit cloud-radiation, and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the results from using multi-scale modeling system to study the precipitation, processes and their sensitivity on model resolution and microphysics schemes will be presented. Also how to use of the multi-satellite simulator to improve precipitation processes will be discussed.
Using Multi-Scale Modeling Systems and Satellite Data to Study the Precipitation Processes
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Chern, J.; Lamg, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.
2011-01-01
In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months, the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two-dimensions, and 1,000 x 1,000 km2 in three-dimensions. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale model can be run in grid size similar to cloud resolving model through nesting technique. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (l) a cloud-resolving model (Goddard Cumulus Ensemble model, GCE model), (2) a regional scale model (a NASA unified weather research and forecast, WRF), (3) a coupled CRM and global model (Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long and short wave radiative transfer and land processes and the explicit cloud-radiation, and cloud-land surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, the recent developments and applications of the multi-scale modeling system will be presented. In particular, the results from using multi-scale modeling system to study the precipitating systems and hurricanes/typhoons will be presented. The high-resolution spatial and temporal visualization will be utilized to show the evolution of precipitation processes. Also how to use of the multi-satellite simulator tqimproy precipitation processes will be discussed.
A shape-based segmentation method for mobile laser scanning point clouds
NASA Astrophysics Data System (ADS)
Yang, Bisheng; Dong, Zhen
2013-07-01
Segmentation of mobile laser point clouds of urban scenes into objects is an important step for post-processing (e.g., interpretation) of point clouds. Point clouds of urban scenes contain numerous objects with significant size variability, complex and incomplete structures, and holes or variable point densities, raising great challenges for the segmentation of mobile laser point clouds. This paper addresses these challenges by proposing a shape-based segmentation method. The proposed method first calculates the optimal neighborhood size of each point to derive the geometric features associated with it, and then classifies the point clouds according to geometric features using support vector machines (SVMs). Second, a set of rules are defined to segment the classified point clouds, and a similarity criterion for segments is proposed to overcome over-segmentation. Finally, the segmentation output is merged based on topological connectivity into a meaningful geometrical abstraction. The proposed method has been tested on point clouds of two urban scenes obtained by different mobile laser scanners. The results show that the proposed method segments large-scale mobile laser point clouds with good accuracy and computationally effective time cost, and that it segments pole-like objects particularly well.
Liu, Wenyang; Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J; Sawant, Amit; Ruan, Dan
2015-11-01
To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). The authors have developed a level-set based surface reconstruction method on point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. By contrast to explicit/discrete meshing-type schemes, their continuous representation is particularly advantageous for subsequent surface registration and motion tracking by eliminating the need for maintaining explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiment, the authors generated a series of surfaces each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched area had different degree of noise and missing levels, since VisionRT has difficulties in detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated reconstructed surfaces by comparing against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiment, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground-truth, the authors qualitatively validated reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. On phantom point clouds, their method achieved submillimeter reconstruction RMSE under different configurations, demonstrating quantitatively the faith of the proposed method in preserving local structural properties of the underlying surface in the presence of noise and missing measurements, and its robustness toward variations of such characteristics. On point clouds from the human subject, the proposed method successfully reconstructed all patient surfaces, filling regions where raw point coordinate readings were missing. Within two comparable regions of interest in the chest area, similar mean curvature distributions were acquired from both their reconstructed surface and CT surface, with mean and standard deviation of (μrecon=-2.7×10(-3) mm(-1), σrecon=7.0×10(-3) mm(-1)) and (μCT=-2.5×10(-3) mm(-1), σCT=5.3×10(-3) mm(-1)), respectively. The agreement of local geometry properties between the reconstructed surfaces and the CT surface demonstrated the ability of the proposed method in faithfully representing the underlying patient surface. The authors have integrated and developed an accurate level-set based continuous surface reconstruction method on point clouds acquired by a 3D surface photogrammetry system. The proposed method has generated a continuous representation of the underlying phantom and patient surfaces with good robustness against noise and missing measurements. It serves as an important first step for further development of motion tracking methods during radiotherapy.
Remote Sensing of Cloud Properties using Ground-based Measurements of Zenith Radiance
NASA Technical Reports Server (NTRS)
Chiu, J. Christine; Marshak, Alexander; Knyazikhin, Yuri; Wiscombe, Warren J.; Barker, Howard W.; Barnard, James C.; Luo, Yi
2006-01-01
An extensive verification of cloud property retrievals has been conducted for two algorithms using zenith radiances measured by the Atmospheric Radiation Measurement (ARM) Program ground-based passive two-channel (673 and 870 nm) Narrow Field-Of-View Radiometer. The underlying principle of these algorithms is that clouds have nearly identical optical properties at these wavelengths, but corresponding spectral surface reflectances (for vegetated surfaces) differ significantly. The first algorithm, the RED vs. NIR, works for a fully three-dimensional cloud situation. It retrieves not only cloud optical depth, but also an effective radiative cloud fraction. Importantly, due to one-second time resolution of radiance measurements, we are able, for the first time, to capture detailed changes in cloud structure at the natural time scale of cloud evolution. The cloud optical depths tau retrieved by this algorithm are comparable to those inferred from both downward fluxes in overcast situations and microwave brightness temperatures for broken clouds. Moreover, it can retrieve tau for thin patchy clouds, where flux and microwave observations fail to detect them. The second algorithm, referred to as COUPLED, couples zenith radiances with simultaneous fluxes to infer 2. In general, the COUPLED and RED vs. NIR algorithms retrieve consistent values of tau. However, the COUPLED algorithm is more sensitive to the accuracies of measured radiance, flux, and surface reflectance than the RED vs. NIR algorithm. This is especially true for thick overcast clouds where it may substantially overestimate z.
Extraction of Profile Information from Cloud Contaminated Radiances. Appendixes 2
NASA Technical Reports Server (NTRS)
Smith, W. L.; Zhou, D. K.; Huang, H.-L.; Li, Jun; Liu, X.; Larar, A. M.
2003-01-01
Clouds act to reduce the signal level and may produce noise dependence on the complexity of the cloud properties and the manner in which they are treated in the profile retrieval process. There are essentially three ways to extract profile information from cloud contaminated radiances: (1) cloud-clearing using spatially adjacent cloud contaminated radiance measurements, (2) retrieval based upon the assumption of opaque cloud conditions, and (3) retrieval or radiance assimilation using a physically correct cloud radiative transfer model which accounts for the absorption and scattering of the radiance observed. Cloud clearing extracts the radiance arising from the clear air portion of partly clouded fields of view permitting soundings to the surface or the assimilation of radiances as in the clear field of view case. However, the accuracy of the clear air radiance signal depends upon the cloud height and optical property uniformity across the two fields of view used in the cloud clearing process. The assumption of opaque clouds within the field of view permits relatively accurate profiles to be retrieved down to near cloud top levels, the accuracy near the cloud top level being dependent upon the actual microphysical properties of the cloud. The use of a physically correct cloud radiative transfer model enables accurate retrievals down to cloud top levels and below semi-transparent cloud layers (e.g., cirrus). It should also be possible to assimilate cloudy radiances directly into the model given a physically correct cloud radiative transfer model using geometric and microphysical cloud parameters retrieved from the radiance spectra as initial cloud variables in the radiance assimilation process. This presentation reviews the above three ways to extract profile information from cloud contaminated radiances. NPOESS Airborne Sounder Testbed-Interferometer radiance spectra and Aqua satellite AIRS radiance spectra are used to illustrate how cloudy radiances can be used in the profile retrieval process.
NASA Astrophysics Data System (ADS)
López-García, Ignacio; Marín-Hernández, Juan José; Hernández-Córdoba, Manuel
2018-05-01
Vanadium (V) and vanadium (IV) in the presence of a small concentration of graphene oxide (0.05 mg mL-1) are quantitatively transferred to the coacervate obtained with Triton X-114 in a cloud point microextraction process. The surfactant-rich phase is directly injected into the graphite atomizer of an atomic absorption spectrometer. Using a 10-mL aliquot sample and 150 μL of a 15% Triton X-114 solution, the enrichment factor for the analyte is 103, which results in a detection limit of 0.02 μg L-1 vanadium. The separation of V(V) and V(IV) using an ion-exchanger allows speciation of the element at low concentrations. Data for seven reference water samples with certified vanadium contents confirm the reliability of the procedure. Several beer samples are also analyzed, those supplied as canned drinks showing low levels of tetravalent vanadium.
LSAH: a fast and efficient local surface feature for point cloud registration
NASA Astrophysics Data System (ADS)
Lu, Rongrong; Zhu, Feng; Wu, Qingxiao; Kong, Yanzi
2018-04-01
Point cloud registration is a fundamental task in high level three dimensional applications. Noise, uneven point density and varying point cloud resolutions are the three main challenges for point cloud registration. In this paper, we design a robust and compact local surface descriptor called Local Surface Angles Histogram (LSAH) and propose an effectively coarse to fine algorithm for point cloud registration. The LSAH descriptor is formed by concatenating five normalized sub-histograms into one histogram. The five sub-histograms are created by accumulating a different type of angle from a local surface patch respectively. The experimental results show that our LSAH is more robust to uneven point density and point cloud resolutions than four state-of-the-art local descriptors in terms of feature matching. Moreover, we tested our LSAH based coarse to fine algorithm for point cloud registration. The experimental results demonstrate that our algorithm is robust and efficient as well.
NASA Astrophysics Data System (ADS)
Jing, Ran; Gong, Zhaoning; Zhao, Wenji; Pu, Ruiliang; Deng, Lei
2017-12-01
Above-bottom biomass (ABB) is considered as an important parameter for measuring the growth status of aquatic plants, and is of great significance for assessing health status of wetland ecosystems. In this study, Structure from Motion (SfM) technique was used to rebuild the study area with high overlapped images acquired by an unmanned aerial vehicle (UAV). We generated orthoimages and SfM dense point cloud data, from which vegetation indices (VIs) and SfM point cloud variables including average height (HAVG), standard deviation of height (HSD) and coefficient of variation of height (HCV) were extracted. These VIs and SfM point cloud variables could effectively characterize the growth status of aquatic plants, and thus they could be used to develop a simple linear regression model (SLR) and a stepwise linear regression model (SWL) with field measured ABB samples of aquatic plants. We also utilized a decision tree method to discriminate different types of aquatic plants. The experimental results indicated that (1) the SfM technique could effectively process high overlapped UAV images and thus be suitable for the reconstruction of fine texture feature of aquatic plant canopy structure; and (2) an SWL model based on point cloud variables: HAVG, HSD, HCV and two VIs: NGRDI, ExGR as independent variables has produced the best predictive result of ABB of aquatic plants in the study area, with a coefficient of determination of 0.84 and a relative root mean square error of 7.13%. In this analysis, a novel method for the quantitative inversion of a growth parameter (i.e., ABB) of aquatic plants in wetlands was demonstrated.
Wheat Ear Detection in Plots by Segmenting Mobile Laser Scanner Data
NASA Astrophysics Data System (ADS)
Velumani, K.; Oude Elberink, S.; Yang, M. Y.; Baret, F.
2017-09-01
The use of Light Detection and Ranging (LiDAR) to study agricultural crop traits is becoming popular. Wheat plant traits such as crop height, biomass fractions and plant population are of interest to agronomists and biologists for the assessment of a genotype's performance in the environment. Among these performance indicators, plant population in the field is still widely estimated through manual counting which is a tedious and labour intensive task. The goal of this study is to explore the suitability of LiDAR observations to automate the counting process by the individual detection of wheat ears in the agricultural field. However, this is a challenging task owing to the random cropping pattern and noisy returns present in the point cloud. The goal is achieved by first segmenting the 3D point cloud followed by the classification of segments into ears and non-ears. In this study, two segmentation techniques: a) voxel-based segmentation and b) mean shift segmentation were adapted to suit the segmentation of plant point clouds. An ear classification strategy was developed to distinguish the ear segments from leaves and stems. Finally, the ears extracted by the automatic methods were compared with reference ear segments prepared by manual segmentation. Both the methods had an average detection rate of 85 %, aggregated over different flowering stages. The voxel-based approach performed well for late flowering stages (wheat crops aged 210 days or more) with a mean percentage accuracy of 94 % and takes less than 20 seconds to process 50,000 points with an average point density of 16 points/cm2. Meanwhile, the mean shift approach showed comparatively better counting accuracy of 95% for early flowering stage (crops aged below 225 days) and takes approximately 4 minutes to process 50,000 points.
NASA Astrophysics Data System (ADS)
Ghate, V. P.; Albrecht, B. A.; Fairall, C. W.; Miller, M. A.; Brewer, A.
2010-12-01
Turbulence in the stratocumulus topped marine boundary layer (BL) is an important factor that is closely connected to both the cloud macro- and micro-physical characteristics, which can substantially affect their radiaitve properties. Data collected by ship borne instruments on the R/V Ronald H. Brown on November 27, 2008 as a part of the VAMOS Ocean-Cloud-Atmosphere-Land-Study Regional Experiment (VOCALS-Rex) are analyzed to study the turbulence structure of a stratocumulus topped marine BL. The first half of the analyzed 24 hour period was characterized by a coupled BL topped by a precipitating stratocumulus cloud; the second half had clear sky conditions with a decoupled BL. The motion stabilized vertically pointing W-band Doppler cloud radar reported the full Doppler spectrum at a temporal and spatial resolution of 3 Hz and 25 m respectively. The collocated motion stabilized Doppler lidar was operating at 2 micron wavelength and reported the Signal to Noise Ratio (SNR) and Doppler velocity at temporal and spatial resolution of 2 Hz and 30 m respectively. Data from the cloud Doppler radar and Doppler lidar were combined to yield the turbulence structure of entire BL in both cloudy and clear sky conditions. Retrievals were performed to remove the contribution of precipitating drizzle drops to the mean Doppler velocity measured by the radar. Hourly profiles of vertical velocity variance suggested high BL variance during coupled BL conditions and low variance during decoupled BL conditions. Some of the terms in second and third moment budget of vertical velocity are calculated and their diurnal evolution is explored.
Expanding the Impact of Photogrammetric Topography Through Improved Data Archiving and Access
NASA Astrophysics Data System (ADS)
Crosby, C. J.; Arrowsmith, R.; Nandigam, V.
2016-12-01
Centimeter to decimeter-scale 2.5 to 3D sampling of the Earth surface topography coupled with the potential for photorealistic coloring of point clouds and texture mapping of meshes enables a wide range of science applications. Not only is the configuration and state of the surface as imaged valuable, but repeat surveys enable quantification of topographic change (erosion, deposition, and displacement) caused by various geologic processes. We are in an era of ubiquitous point clouds which come from both active sources such as laser scanners and radar as well as passive scene reconstruction via structure from motion (SfM) photogrammetry. With the decreasing costs of high-resolution topography (HRT) data collection, via methods such as SfM, the number of researchers collecting these data is increasing. These "long-tail" topographic data are of modest size but great value, and challenges exist to making them widely discoverable, shared, annotated, cited, managed and archived. Presently, there are no central repositories or services to support storage and curation of these datasets. The NSF funded OpenTopography (OT) employs cyberinfrastructure including large-scale data management, high-performance computing, and service-oriented architectures, to provide efficient online access to large HRT (mostly lidar) datasets, metadata, and processing tools. With over 200 datasets and 12,000 registered users, OT is well positioned to provide curation for community collected photogrammetric topographic data. OT is developing a "Community DataSpace", a service built on a low cost storage cloud (e.g. AWS S3) to make it easy for researchers to upload, curate, annotate and distribute their datasets. The system's ingestion workflow will extract metadata from data uploaded; validate it; assign a digital object identifier (DOI); and create a searchable catalog entry, before publishing via the OT portal. The OT Community DataSpace will enable wider discovery and utilization of these HRT datasets via the OT portal and sources that federate the OT data catalog, promote citations, and most importantly increase the impact of investments in data to catalyze scientific discovery.
Managing the explosion of high resolution topography in the geosciences
NASA Astrophysics Data System (ADS)
Crosby, Christopher; Nandigam, Viswanath; Arrowsmith, Ramon; Phan, Minh; Gross, Benjamin
2017-04-01
Centimeter to decimeter-scale 2.5 to 3D sampling of the Earth surface topography coupled with the potential for photorealistic coloring of point clouds and texture mapping of meshes enables a wide range of science applications. Not only is the configuration and state of the surface as imaged valuable, but repeat surveys enable quantification of topographic change (erosion, deposition, and displacement) caused by various geologic processes. We are in an era of ubiquitous point clouds that come from both active sources such as laser scanners and radar as well as passive scene reconstruction via structure from motion (SfM) photogrammetry. With the decreasing costs of high-resolution topography (HRT) data collection, via methods such as SfM and UAS-based laser scanning, the number of researchers collecting these data is increasing. These "long-tail" topographic data are of modest size but great value, and challenges exist to making them widely discoverable, shared, annotated, cited, managed and archived. Presently, there are no central repositories or services to support storage and curation of these datasets. The U.S. National Science Foundation funded OpenTopography (OT) Facility employs cyberinfrastructure including large-scale data management, high-performance computing, and service-oriented architectures, to provide efficient online access to large HRT (mostly lidar) datasets, metadata, and processing tools. With over 225 datasets and 15,000 registered users, OT is well positioned to provide curation for community collected high-resolution topographic data. OT has developed a "Community DataSpace", a service built on a low cost storage cloud (e.g. AWS S3) to make it easy for researchers to upload, curate, annotate and distribute their datasets. The system's ingestion workflow will extract metadata from data uploaded; validate it; assign a digital object identifier (DOI); and create a searchable catalog entry, before publishing via the OT portal. The OT Community DataSpace enables wider discovery and utilization of these HRT datasets via the OT portal and sources that federate the OT data catalog, promote citations, and most importantly increase the impact of investments in data to catalyzes scientific discovery.
Reconstruction of Building Outlines in Dense Urban Areas Based on LIDAR Data and Address Points
NASA Astrophysics Data System (ADS)
Jarzabek-Rychard, M.
2012-07-01
The paper presents a comprehensive method for automated extraction and delineation of building outlines in densely built-up areas. A novel approach to outline reconstruction is the use of geocoded building address points. They give information about building location thus highly reduce task complexity. Reconstruction process is executed on 3D point clouds acquired by airborne laser scanner. The method consists of three steps: building detection, delineation and contours refinement. The algorithm is tested against a data set that presents the old market town and its surroundings. The results are discussed and evaluated by comparison to reference cadastral data.
NASA Astrophysics Data System (ADS)
Betts, A. K.; Tawfik, A. B.; Desjardins, R. L.
2016-12-01
We use 600 station years of hourly data from 14 stations on the Canadian Prairies to map the warm season hydrometeorology. The months from April (after snowmelt) to September, have a very similar coupling between surface thermodynamics and opaque cloud cover, which has been calibrated to give cloud radiative forcing. We can derive both the mean diurnal ranges and the diurnal imbalances as a function of opaque cloud cover. For the monthly diurnal climate, we compute the coupling coefficients with opaque cloud cover and lagged precipitation. In April the diurnal cycle climate has memory of precipitation back to freeze-up in November. During the growing season months of June, July and August, there is memory of precipitation back to March. Monthly mean temperature depends strongly on cloud but little on precipitation, while monthly mean mixing ratio depends on precipitation, but rather little on cloud. The coupling coefficients to cloud and precipitation change with increasing monthly precipitation anomaly. This observational climate analysis provides a firm basis for model evaluation.
NASA Astrophysics Data System (ADS)
Alby, E.; Elter, R.; Ripoche, C.; Quere, N.; de Strasbourg, INSA
2013-07-01
In a geopolitical very complex context as the Gaza Strip it has to be dealt with an enhancement of an archaeological site. This site is the monastery of St. Hilarion. To enable a cultural appropriation of a place with several identified phases of occupation must undertake extensive archaeological excavation. Excavate in this geographical area is to implement emergency excavations, so the aim of such a project can be questioned for each mission. Real estate pressure is also a motivating setting the documentation because the large population density does not allow systematic studies of underground before construction projects. This is also during the construction of a road that the site was discovered. Site dimensions are 150 m by 80 m. It is located on a sand dune, 300 m from the sea. To implement the survey, four different levels of detail have been defined for terrestrial photogrammetry. The first level elements are similar to objects, capitals, fragment of columns, tiles for example. Modeling of small objects requires the acquisition of very dense point clouds (density: 1 point / 1 mm on average). The object must then be a maximum area of the sensor of the camera, while retaining in the field of view a reference pattern for the scaling of the point cloud generated. The pictures are taken at a short distance from the object, using the images at full resolution. The main obstacle to the modeling of objects is the presence of noise partly due to the studied materials (sand, smooth rock), which do not favor the detection of points of interest quality. Pretreatments of the cloud will be achieved meticulously since the ouster of points on a surface of a small object results in the formation of a hole with a lack of information, useful to resulting mesh. Level 2 focuses on the stratigraphic units such as mosaics. The monastery of St. Hilarion identifies thirteen floors of which has been documented years ago by silver photographs, scanned later. Modeling of pavements is to obtain a three-dimensional model of the mosaic in particular to analyze the subsidence, which it may be subjected. The dense point cloud can go beyond by including the geometric shapes of the pavement. The calculation mesh using high-density point cloud colorization allows cloud sufficient to final rendering. Levels 3 and 4 will allow the survey and representation of loci and sectors. Their modeling can be done by colored mesh or textured by a generic pattern but also by geometric primitives. This method requires the segmentation simple geometrical elements and creates a surface geometry by analysis of the sample points. Statistical tools allow the extraction plans meet the requirements of the operator can monitor quantitatively the quality of the final rendering. Each level has constraints on the accuracy of survey and types of representation especially from the point clouds, which are detailed in the complete article.
NASA Astrophysics Data System (ADS)
Wei, Hongqiang; Zhou, Guiyun; Zhou, Junjie
2018-04-01
The classification of leaf and wood points is an essential preprocessing step for extracting inventory measurements and canopy characterization of trees from the terrestrial laser scanning (TLS) data. The geometry-based approach is one of the widely used classification method. In the geometry-based method, it is common practice to extract salient features at one single scale before the features are used for classification. It remains unclear how different scale(s) used affect the classification accuracy and efficiency. To assess the scale effect on the classification accuracy and efficiency, we extracted the single-scale and multi-scale salient features from the point clouds of two oak trees of different sizes and conducted the classification on leaf and wood. Our experimental results show that the balanced accuracy of the multi-scale method is higher than the average balanced accuracy of the single-scale method by about 10 % for both trees. The average speed-up ratio of single scale classifiers over multi-scale classifier for each tree is higher than 30.
Light extraction block with curved surface
Levermore, Peter; Krall, Emory; Silvernail, Jeffrey; Rajan, Kamala; Brown, Julia J.
2016-03-22
Light extraction blocks, and OLED lighting panels using light extraction blocks, are described, in which the light extraction blocks include various curved shapes that provide improved light extraction properties compared to parallel emissive surface, and a thinner form factor and better light extraction than a hemisphere. Lighting systems described herein may include a light source with an OLED panel. A light extraction block with a three-dimensional light emitting surface may be optically coupled to the light source. The three-dimensional light emitting surface of the block may includes a substantially curved surface, with further characteristics related to the curvature of the surface at given points. A first radius of curvature corresponding to a maximum principal curvature k.sub.1 at a point p on the substantially curved surface may be greater than a maximum height of the light extraction block. A maximum height of the light extraction block may be less than 50% of a maximum width of the light extraction block. Surfaces with cross sections made up of line segments and inflection points may also be fit to approximated curves for calculating the radius of curvature.
Exploring the Inner Edge of the Habitable Zone with Fully Coupled Oceans
NASA Technical Reports Server (NTRS)
Way, M.J; Del Genio, A.D.; Kelley, M.; Aleinov, I.; Clune, T.
2015-01-01
The role of rotation in planetary atmospheres plays an important role in regulating atmospheric and oceanic heat flow, cloud formation and precipitation. Using the Goddard Institute for Space Studies (GISS) three dimension General Circulation Model (3D-GCM) we demonstrate how varying rotation rate and increasing the incident solar flux on a planet are related to each other and may allow the inner edge of the habitable zone to be much closer than many previous habitable zone studies have indicated. This is shown in particular for fully coupled ocean runs -- some of the first that have been utilized in this context. Results with a 100m mixed layer depth and our fully coupled ocean runs are compared with those of Yang et al. 2014, which demonstrates consistency across models. However, there are clear differences for rotations rates of 1-16x present earth day lengths between the mixed layer and fully couple ocean models, which points to the necessity of using fully coupled oceans whenever possible. The latter was recently demonstrated quite clearly by Hu & Yang 2014 in their aquaworld study with a fully coupled ocean when compared with similar mixed layer ocean studies and by Cullum et al. 2014. Atmospheric constituent amounts were also varied alongside adjustments to cloud parameterizations (results not shown here). While the latter have an effect on what a planet's global mean temperature is once the oceans reach equilibrium they do not qualitatively change the overall relationship between the globally averaged surface temperature and incident solar flux for rotation rates ranging from 1 to 256 times the present Earth day length. At the same time this study demonstrates that given the lack of knowledge about the atmospheric constituents and clouds on exoplanets there is still a large uncertainty as to where a planet will sit in a given star's habitable zone.
Detecting Inspection Objects of Power Line from Cable Inspection Robot LiDAR Data
Qin, Xinyan; Wu, Gongping; Fan, Fei
2018-01-01
Power lines are extending to complex environments (e.g., lakes and forests), and the distribution of power lines in a tower is becoming complicated (e.g., multi-loop and multi-bundle). Additionally, power line inspection is becoming heavier and more difficult. Advanced LiDAR technology is increasingly being used to solve these difficulties. Based on precise cable inspection robot (CIR) LiDAR data and the distinctive position and orientation system (POS) data, we propose a novel methodology to detect inspection objects surrounding power lines. The proposed method mainly includes four steps: firstly, the original point cloud is divided into single-span data as a processing unit; secondly, the optimal elevation threshold is constructed to remove ground points without the existing filtering algorithm, improving data processing efficiency and extraction accuracy; thirdly, a single power line and its surrounding data can be respectively extracted by a structured partition based on a POS data (SPPD) algorithm from “layer” to “block” according to power line distribution; finally, a partition recognition method is proposed based on the distribution characteristics of inspection objects, highlighting the feature information and improving the recognition effect. The local neighborhood statistics and the 3D region growing method are used to recognize different inspection objects surrounding power lines in a partition. Three datasets were collected by two CIR LIDAR systems in our study. The experimental results demonstrate that an average 90.6% accuracy and average 98.2% precision at the point cloud level can be achieved. The successful extraction indicates that the proposed method is feasible and promising. Our study can be used to obtain precise dimensions of fittings for modeling, as well as automatic detection and location of security risks, so as to improve the intelligence level of power line inspection. PMID:29690560
Detecting Inspection Objects of Power Line from Cable Inspection Robot LiDAR Data.
Qin, Xinyan; Wu, Gongping; Lei, Jin; Fan, Fei; Ye, Xuhui
2018-04-22
Power lines are extending to complex environments (e.g., lakes and forests), and the distribution of power lines in a tower is becoming complicated (e.g., multi-loop and multi-bundle). Additionally, power line inspection is becoming heavier and more difficult. Advanced LiDAR technology is increasingly being used to solve these difficulties. Based on precise cable inspection robot (CIR) LiDAR data and the distinctive position and orientation system (POS) data, we propose a novel methodology to detect inspection objects surrounding power lines. The proposed method mainly includes four steps: firstly, the original point cloud is divided into single-span data as a processing unit; secondly, the optimal elevation threshold is constructed to remove ground points without the existing filtering algorithm, improving data processing efficiency and extraction accuracy; thirdly, a single power line and its surrounding data can be respectively extracted by a structured partition based on a POS data (SPPD) algorithm from "layer" to "block" according to power line distribution; finally, a partition recognition method is proposed based on the distribution characteristics of inspection objects, highlighting the feature information and improving the recognition effect. The local neighborhood statistics and the 3D region growing method are used to recognize different inspection objects surrounding power lines in a partition. Three datasets were collected by two CIR LIDAR systems in our study. The experimental results demonstrate that an average 90.6% accuracy and average 98.2% precision at the point cloud level can be achieved. The successful extraction indicates that the proposed method is feasible and promising. Our study can be used to obtain precise dimensions of fittings for modeling, as well as automatic detection and location of security risks, so as to improve the intelligence level of power line inspection.
Automatic digital surface model (DSM) generation from aerial imagery data
NASA Astrophysics Data System (ADS)
Zhou, Nan; Cao, Shixiang; He, Hongyan; Xing, Kun; Yue, Chunyu
2018-04-01
Aerial sensors are widely used to acquire imagery for photogrammetric and remote sensing application. In general, the images have large overlapped region, which provide a lot of redundant geometry and radiation information for matching. This paper presents a POS supported dense matching procedure for automatic DSM generation from aerial imagery data. The method uses a coarse-to-fine hierarchical strategy with an effective combination of several image matching algorithms: image radiation pre-processing, image pyramid generation, feature point extraction and grid point generation, multi-image geometrically constraint cross-correlation (MIG3C), global relaxation optimization, multi-image geometrically constrained least squares matching (MIGCLSM), TIN generation and point cloud filtering. The image radiation pre-processing is used in order to reduce the effects of the inherent radiometric problems and optimize the images. The presented approach essentially consists of 3 components: feature point extraction and matching procedure, grid point matching procedure and relational matching procedure. The MIGCLSM method is used to achieve potentially sub-pixel accuracy matches and identify some inaccurate and possibly false matches. The feasibility of the method has been tested on different aerial scale images with different landcover types. The accuracy evaluation is based on the comparison between the automatic extracted DSMs derived from the precise exterior orientation parameters (EOPs) and the POS.
The observed influence of local anthropogenic pollution on northern Alaskan cloud properties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maahn, Maximilian; de Boer, Gijs; Creamean, Jessie M.
Due to their importance for the radiation budget, liquid-containing clouds are a key component of the Arctic climate system. Depending on season, they can cool or warm the near-surface air. The radiative properties of these clouds depend strongly on cloud drop sizes, which are governed in part by the availability of cloud condensation nuclei. Here, we investigate how cloud drop sizes are modified in the presence of local emissions from industrial facilities at the North Slope of Alaska. For this, we use aircraft in situ observations of clouds and aerosols from the 5th Department of Energy Atmospheric Radiation Measurement (DOE ARM)more » Program's Airborne Carbon Measurements (ACME-V) campaign obtained in summer 2015. Comparison of observations from an area with petroleum extraction facilities (Oliktok Point) with data from a reference area relatively free of anthropogenic sources (Utqiaġvik/Barrow) represents an opportunity to quantify the impact of local industrial emissions on cloud properties. In the presence of local industrial emissions, the mean effective radii of cloud droplets are reduced from 12.2 to 9.4 µm, which leads to suppressed drizzle production and precipitation. At the same time, concentrations of refractory black carbon and condensation nuclei are enhanced below the clouds. These results demonstrate that the effects of anthropogenic pollution on local climate need to be considered when planning Arctic industrial infrastructure in a warming environment.« less
The observed influence of local anthropogenic pollution on northern Alaskan cloud properties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maahn, Maximilian; de Boer, Gijs; Creamean, Jessie M.
Due to their importance for the radiation budget, liquid-containing clouds are a key component of the Arctic climate system. Depending on season, they can cool or warm the near-surface air. The radiative properties of these clouds depend strongly on cloud drop sizes, which are governed in part by the availability of cloud condensation nuclei. Here, we investigate how cloud drop sizes are modified in the presence of local emissions from industrial facilities at the North Slope of Alaska. For this, we use aircraft in situ observations of clouds and aerosols from the 5th Department of Energy Atmospheric Radiation Measurement (DOE ARM)more » Program's Airborne Carbon Measurements (ACME-V) campaign obtained in summer 2015. Comparison of observations from an area with petroleum extraction facilities (Oliktok Point) with data from a reference area relatively free of anthropogenic sources (Utqiagvik/Barrow) represents an opportunity to quantify the impact of local industrial emissions on cloud properties. In the presence of local industrial emissions, the mean effective radii of cloud droplets are reduced from 12.2 to 9.4 µm, which leads to suppressed drizzle production and precipitation. At the same time, concentrations of refractory black carbon and condensation nuclei are enhanced below the clouds. These results demonstrate that the effects of anthropogenic pollution on local climate need to be considered when planning Arctic industrial infrastructure in a warming environment.« less
The observed influence of local anthropogenic pollution on northern Alaskan cloud properties
Maahn, Maximilian; de Boer, Gijs; Creamean, Jessie M.; ...
2017-12-11
Due to their importance for the radiation budget, liquid-containing clouds are a key component of the Arctic climate system. Depending on season, they can cool or warm the near-surface air. The radiative properties of these clouds depend strongly on cloud drop sizes, which are governed in part by the availability of cloud condensation nuclei. Here, we investigate how cloud drop sizes are modified in the presence of local emissions from industrial facilities at the North Slope of Alaska. For this, we use aircraft in situ observations of clouds and aerosols from the 5th Department of Energy Atmospheric Radiation Measurement (DOE ARM)more » Program's Airborne Carbon Measurements (ACME-V) campaign obtained in summer 2015. Comparison of observations from an area with petroleum extraction facilities (Oliktok Point) with data from a reference area relatively free of anthropogenic sources (Utqiagvik/Barrow) represents an opportunity to quantify the impact of local industrial emissions on cloud properties. In the presence of local industrial emissions, the mean effective radii of cloud droplets are reduced from 12.2 to 9.4 µm, which leads to suppressed drizzle production and precipitation. At the same time, concentrations of refractory black carbon and condensation nuclei are enhanced below the clouds. These results demonstrate that the effects of anthropogenic pollution on local climate need to be considered when planning Arctic industrial infrastructure in a warming environment.« less
D Central Line Extraction of Fossil Oyster Shells
NASA Astrophysics Data System (ADS)
Djuricic, A.; Puttonen, E.; Harzhauser, M.; Mandic, O.; Székely, B.; Pfeifer, N.
2016-06-01
Photogrammetry provides a powerful tool to digitally document protected, inaccessible, and rare fossils. This saves manpower in relation to current documentation practice and makes the fragile specimens more available for paleontological analysis and public education. In this study, high resolution orthophoto (0.5 mm) and digital surface models (1 mm) are used to define fossil boundaries that are then used as an input to automatically extract fossil length information via central lines. In general, central lines are widely used in geosciences as they ease observation, monitoring and evaluation of object dimensions. Here, the 3D central lines are used in a novel paleontological context to study fossilized oyster shells with photogrammetric and LiDAR-obtained 3D point cloud data. 3D central lines of 1121 Crassostrea gryphoides oysters of various shapes and sizes were computed in the study. Central line calculation included: i) Delaunay triangulation between the fossil shell boundary points and formation of the Voronoi diagram; ii) extraction of Voronoi vertices and construction of a connected graph tree from them; iii) reduction of the graph to the longest possible central line via Dijkstra's algorithm; iv) extension of longest central line to the shell boundary and smoothing by an adjustment of cubic spline curve; and v) integration of the central line into the corresponding 3D point cloud. The resulting longest path estimate for the 3D central line is a size parameter that can be applied in oyster shell age determination both in paleontological and biological applications. Our investigation evaluates ability and performance of the central line method to measure shell sizes accurately by comparing automatically extracted central lines with manually collected reference data used in paleontological analysis. Our results show that the automatically obtained central line length overestimated the manually collected reference by 1.5% in the test set, which is deemed sufficient for the selected paleontological application, namely shell age determination.
GPU surface extraction using the closest point embedding
NASA Astrophysics Data System (ADS)
Kim, Mark; Hansen, Charles
2015-01-01
Isosurface extraction is a fundamental technique used for both surface reconstruction and mesh generation. One method to extract well-formed isosurfaces is a particle system; unfortunately, particle systems can be slow. In this paper, we introduce an enhanced parallel particle system that uses the closest point embedding as the surface representation to speedup the particle system for isosurface extraction. The closest point embedding is used in the Closest Point Method (CPM), a technique that uses a standard three dimensional numerical PDE solver on two dimensional embedded surfaces. To fully take advantage of the closest point embedding, it is coupled with a Barnes-Hut tree code on the GPU. This new technique produces well-formed, conformal unstructured triangular and tetrahedral meshes from labeled multi-material volume datasets. Further, this new parallel implementation of the particle system is faster than any known methods for conformal multi-material mesh extraction. The resulting speed-ups gained in this implementation can reduce the time from labeled data to mesh from hours to minutes and benefits users, such as bioengineers, who employ triangular and tetrahedral meshes
Altunay, Nail; Gürkan, Ramazan; Kır, Ufuk
2016-01-01
A new, low-cost, micellar-sensitive and selective spectrophotometric method was developed for the determination of inorganic arsenic (As) species in beverage samples. Vortex-assisted cloud-point extraction (VA-CPE) was used for the efficient pre-concentration of As(V) in the selected samples. The method is based on selective and sensitive ion-pairing of As(V) with acridine red (ARH(+)) in the presence of pyrogallol and sequential extraction into the micellar phase of Triton X-45 at pH 6.0. Under the optimised conditions, the calibration curve was highly linear in the range of 0.8-280 µg l(-1) for As(V). The limits of detection and quantification of the method were 0.25 and 0.83 µg l(-1), respectively. The method was successfully applied to the determination of trace As in the pre-treated and digested samples under microwave and ultrasonic power. As(V) and total As levels in the samples were spectrophotometrically determined after pre-concentration with VA-CPE at 494 nm before and after oxidation with acidic KMnO4. The As(III) levels were calculated from the difference between As(V) and total As levels. The accuracy of the method was demonstrated by analysis of two certified reference materials (CRMs) where the measured values for As were statistically within the 95% confidence limit for the certified values.
NASA Astrophysics Data System (ADS)
Li, T.; Wang, Z.; Peng, J.
2018-04-01
Aboveground biomass (AGB) estimation is critical for quantifying carbon stocks and essential for evaluating carbon cycle. In recent years, airborne LiDAR shows its great ability for highly-precision AGB estimation. Most of the researches estimate AGB by the feature metrics extracted from the canopy height distribution of the point cloud which calculated based on precise digital terrain model (DTM). However, if forest canopy density is high, the probability of the LiDAR signal penetrating the canopy is lower, resulting in ground points is not enough to establish DTM. Then the distribution of forest canopy height is imprecise and some critical feature metrics which have a strong correlation with biomass such as percentiles, maximums, means and standard deviations of canopy point cloud can hardly be extracted correctly. In order to address this issue, we propose a strategy of first reconstructing LiDAR feature metrics through Auto-Encoder neural network and then using the reconstructed feature metrics to estimate AGB. To assess the prediction ability of the reconstructed feature metrics, both original and reconstructed feature metrics were regressed against field-observed AGB using the multiple stepwise regression (MS) and the partial least squares regression (PLS) respectively. The results showed that the estimation model using reconstructed feature metrics improved R2 by 5.44 %, 18.09 %, decreased RMSE value by 10.06 %, 22.13 % and reduced RMSEcv by 10.00 %, 21.70 % for AGB, respectively. Therefore, reconstructing LiDAR point feature metrics has potential for addressing AGB estimation challenge in dense canopy area.
Franck, J.V.; Broadhead, P.S.; Skiff, E.W.
1959-07-14
A semiautomatic measuring projector particularly adapted for measurement of the coordinates of photographic images of particle tracks as prcduced in a bubble or cloud chamber is presented. A viewing screen aids the operator in selecting a particle track for measurement. After approximate manual alignment, an image scanning system coupled to a servo control provides automatic exact alignment of a track image with a reference point. The apparatus can follow along a track with a continuous motion while recording coordinate data at various selected points along the track. The coordinate data is recorded on punched cards for subsequent computer calculation of particle trajectory, momentum, etc.
Applicability Analysis of Cloth Simulation Filtering Algorithm for Mobile LIDAR Point Cloud
NASA Astrophysics Data System (ADS)
Cai, S.; Zhang, W.; Qi, J.; Wan, P.; Shao, J.; Shen, A.
2018-04-01
Classifying the original point clouds into ground and non-ground points is a key step in LiDAR (light detection and ranging) data post-processing. Cloth simulation filtering (CSF) algorithm, which based on a physical process, has been validated to be an accurate, automatic and easy-to-use algorithm for airborne LiDAR point cloud. As a new technique of three-dimensional data collection, the mobile laser scanning (MLS) has been gradually applied in various fields, such as reconstruction of digital terrain models (DTM), 3D building modeling and forest inventory and management. Compared with airborne LiDAR point cloud, there are some different features (such as point density feature, distribution feature and complexity feature) for mobile LiDAR point cloud. Some filtering algorithms for airborne LiDAR data were directly used in mobile LiDAR point cloud, but it did not give satisfactory results. In this paper, we explore the ability of the CSF algorithm for mobile LiDAR point cloud. Three samples with different shape of the terrain are selected to test the performance of this algorithm, which respectively yields total errors of 0.44 %, 0.77 % and1.20 %. Additionally, large area dataset is also tested to further validate the effectiveness of this algorithm, and results show that it can quickly and accurately separate point clouds into ground and non-ground points. In summary, this algorithm is efficient and reliable for mobile LiDAR point cloud.
Investigating the Accuracy of Point Clouds Generated for Rock Surfaces
NASA Astrophysics Data System (ADS)
Seker, D. Z.; Incekara, A. H.
2016-12-01
Point clouds which are produced by means of different techniques are widely used to model the rocks and obtain the properties of rock surfaces like roughness, volume and area. These point clouds can be generated by applying laser scanning and close range photogrammetry techniques. Laser scanning is the most common method to produce point cloud. In this method, laser scanner device produces 3D point cloud at regular intervals. In close range photogrammetry, point cloud can be produced with the help of photographs taken in appropriate conditions depending on developing hardware and software technology. Many photogrammetric software which is open source or not currently provide the generation of point cloud support. Both methods are close to each other in terms of accuracy. Sufficient accuracy in the mm and cm range can be obtained with the help of a qualified digital camera and laser scanner. In both methods, field work is completed in less time than conventional techniques. In close range photogrammetry, any part of rock surfaces can be completely represented owing to overlapping oblique photographs. In contrast to the proximity of the data, these two methods are quite different in terms of cost. In this study, whether or not point cloud produced by photographs can be used instead of point cloud produced by laser scanner device is investigated. In accordance with this purpose, rock surfaces which have complex and irregular shape located in İstanbul Technical University Ayazaga Campus were selected as study object. Selected object is mixture of different rock types and consists of both partly weathered and fresh parts. Study was performed on a part of 30m x 10m rock surface. 2D and 3D analysis were performed for several regions selected from the point clouds of the surface models. 2D analysis is area-based and 3D analysis is volume-based. Analysis conclusions showed that point clouds in both are similar and can be used as alternative to each other. This proved that point cloud produced using photographs which are both economical and enables to produce data in less time can be used in several studies instead of point cloud produced by laser scanner.
Three-dimension reconstruction based on spatial light modulator
NASA Astrophysics Data System (ADS)
Deng, Xuejiao; Zhang, Nanyang; Zeng, Yanan; Yin, Shiliang; Wang, Weiyu
2011-02-01
Three-dimension reconstruction, known as an important research direction of computer graphics, is widely used in the related field such as industrial design and manufacture, construction, aerospace, biology and so on. Via such technology we can obtain three-dimension digital point cloud from a two-dimension image, and then simulate the three-dimensional structure of the physical object for further study. At present, the obtaining of three-dimension digital point cloud data is mainly based on the adaptive optics system with Shack-Hartmann sensor and phase-shifting digital holography. Referring to surface fitting, there are also many available methods such as iterated discrete fourier transform, convolution and image interpolation, linear phase retrieval. The main problems we came across in three-dimension reconstruction are the extraction of feature points and arithmetic of curve fitting. To solve such problems, we can, first of all, calculate the relevant surface normal vector information of each pixel in the light source coordinate system, then these vectors are to be converted to the coordinates of image through the coordinate conversion, so the expectant 3D point cloud get arise. Secondly, after the following procedures of de-noising, repairing, the feature points can later be selected and fitted to get the fitting function of the surface topography by means of Zernike polynomial, so as to reconstruct the determinand's three-dimensional topography. In this paper, a new kind of three-dimension reconstruction algorithm is proposed, with the assistance of which, the topography can be estimated from its grayscale at different sample points. Moreover, the previous stimulation and the experimental results prove that the new algorithm has a strong capability to fit, especially for large-scale objects .
Automated real-time search and analysis algorithms for a non-contact 3D profiling system
NASA Astrophysics Data System (ADS)
Haynes, Mark; Wu, Chih-Hang John; Beck, B. Terry; Peterman, Robert J.
2013-04-01
The purpose of this research is to develop a new means of identifying and extracting geometrical feature statistics from a non-contact precision-measurement 3D profilometer. Autonomous algorithms have been developed to search through large-scale Cartesian point clouds to identify and extract geometrical features. These algorithms are developed with the intent of providing real-time production quality control of cold-rolled steel wires. The steel wires in question are prestressing steel reinforcement wires for concrete members. The geometry of the wire is critical in the performance of the overall concrete structure. For this research a custom 3D non-contact profilometry system has been developed that utilizes laser displacement sensors for submicron resolution surface profiling. Optimizations in the control and sensory system allow for data points to be collected at up to an approximate 400,000 points per second. In order to achieve geometrical feature extraction and tolerancing with this large volume of data, the algorithms employed are optimized for parsing large data quantities. The methods used provide a unique means of maintaining high resolution data of the surface profiles while keeping algorithm running times within practical bounds for industrial application. By a combination of regional sampling, iterative search, spatial filtering, frequency filtering, spatial clustering, and template matching a robust feature identification method has been developed. These algorithms provide an autonomous means of verifying tolerances in geometrical features. The key method of identifying the features is through a combination of downhill simplex and geometrical feature templates. By performing downhill simplex through several procedural programming layers of different search and filtering techniques, very specific geometrical features can be identified within the point cloud and analyzed for proper tolerancing. Being able to perform this quality control in real time provides significant opportunities in cost savings in both equipment protection and waste minimization.
NASA Astrophysics Data System (ADS)
Zou, Xiaoliang; Zhao, Guihua; Li, Jonathan; Yang, Yuanxi; Fang, Yong
2016-06-01
With the rapid developments of the sensor technology, high spatial resolution imagery and airborne Lidar point clouds can be captured nowadays, which make classification, extraction, evaluation and analysis of a broad range of object features available. High resolution imagery, Lidar dataset and parcel map can be widely used for classification as information carriers. Therefore, refinement of objects classification is made possible for the urban land cover. The paper presents an approach to object based image analysis (OBIA) combing high spatial resolution imagery and airborne Lidar point clouds. The advanced workflow for urban land cover is designed with four components. Firstly, colour-infrared TrueOrtho photo and laser point clouds were pre-processed to derive the parcel map of water bodies and nDSM respectively. Secondly, image objects are created via multi-resolution image segmentation integrating scale parameter, the colour and shape properties with compactness criterion. Image can be subdivided into separate object regions. Thirdly, image objects classification is performed on the basis of segmentation and a rule set of knowledge decision tree. These objects imagery are classified into six classes such as water bodies, low vegetation/grass, tree, low building, high building and road. Finally, in order to assess the validity of the classification results for six classes, accuracy assessment is performed through comparing randomly distributed reference points of TrueOrtho imagery with the classification results, forming the confusion matrix and calculating overall accuracy and Kappa coefficient. The study area focuses on test site Vaihingen/Enz and a patch of test datasets comes from the benchmark of ISPRS WG III/4 test project. The classification results show higher overall accuracy for most types of urban land cover. Overall accuracy is 89.5% and Kappa coefficient equals to 0.865. The OBIA approach provides an effective and convenient way to combine high resolution imagery and Lidar ancillary data for classification of urban land cover.
NASA Astrophysics Data System (ADS)
Bremer, Magnus; Schmidtner, Korbinian; Rutzinger, Martin
2015-04-01
The architecture of forest canopies is a key parameter for forest ecological issues helping to model the variability of wood biomass and foliage in space and time. In order to understand the nature of subpixel effects of optical space-borne sensors with coarse spatial resolution, hypothetical 3D canopy models are widely used for the simulation of radiative transfer in forests. Thereby, radiation is traced through the atmosphere and canopy geometries until it reaches the optical sensor. For a realistic simulation scene we decompose terrestrial laser scanning point cloud data of leaf-off larch forest plots in the Austrian Alps and reconstruct detailed model ready input data for radiative transfer simulations. The point clouds are pre-classified into primitive classes using Principle Component Analysis (PCA) using scale adapted radius neighbourhoods. Elongated point structures are extracted as tree trunks. The tree trunks are used as seeds for a Dijkstra-growing procedure, in order to obtain single tree segmentation in the interlinked canopies. For the optimized reconstruction of branching architectures as vector models, point cloud skeletonisation is used in combination with an iterative Dijkstra-growing and by applying distance constraints. This allows conducting a hierarchical reconstruction preferring the tree trunk and higher order branches and avoiding over-skeletonization effects. Based on the reconstructed branching architectures, larch needles are modelled based on the hierarchical level of branches and the geometrical openness of the canopy. For radiative transfer simulations, branch architectures are used as mesh geometries representing branches as cylindrical pipes. Needles are either used as meshes or as voxel-turbids. The presented workflow allows an automatic classification and single tree segmentation in interlinked canopies. The iterative Dijkstra-growing using distance constraints generated realistic reconstruction results. As the mesh representation of branches proved to be sufficient for the simulation approach, the modelling of huge amounts of needles is much more efficient in voxel-turbid representation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Wenyang; Cheung, Yam; Sabouri, Pouya
2015-11-15
Purpose: To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). Methods: The authors have developed a level-set based surface reconstruction method on point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. By contrast to explicit/discrete meshing-type schemes, their continuous representation is particularly advantageous for subsequent surface registration and motion tracking by eliminating the need for maintaining explicit point correspondences as in discretemore » models. The authors solve the proposed method with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiment, the authors generated a series of surfaces each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched area had different degree of noise and missing levels, since VisionRT has difficulties in detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated reconstructed surfaces by comparing against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiment, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground-truth, the authors qualitatively validated reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. Results: On phantom point clouds, their method achieved submillimeter reconstruction RMSE under different configurations, demonstrating quantitatively the faith of the proposed method in preserving local structural properties of the underlying surface in the presence of noise and missing measurements, and its robustness toward variations of such characteristics. On point clouds from the human subject, the proposed method successfully reconstructed all patient surfaces, filling regions where raw point coordinate readings were missing. Within two comparable regions of interest in the chest area, similar mean curvature distributions were acquired from both their reconstructed surface and CT surface, with mean and standard deviation of (μ{sub recon} = − 2.7 × 10{sup −3} mm{sup −1}, σ{sub recon} = 7.0 × 10{sup −3} mm{sup −1}) and (μ{sub CT} = − 2.5 × 10{sup −3} mm{sup −1}, σ{sub CT} = 5.3 × 10{sup −3} mm{sup −1}), respectively. The agreement of local geometry properties between the reconstructed surfaces and the CT surface demonstrated the ability of the proposed method in faithfully representing the underlying patient surface. Conclusions: The authors have integrated and developed an accurate level-set based continuous surface reconstruction method on point clouds acquired by a 3D surface photogrammetry system. The proposed method has generated a continuous representation of the underlying phantom and patient surfaces with good robustness against noise and missing measurements. It serves as an important first step for further development of motion tracking methods during radiotherapy.« less
Liu, Wenyang; Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J.; Sawant, Amit; Ruan, Dan
2015-01-01
Purpose: To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). Methods: The authors have developed a level-set based surface reconstruction method on point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. By contrast to explicit/discrete meshing-type schemes, their continuous representation is particularly advantageous for subsequent surface registration and motion tracking by eliminating the need for maintaining explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiment, the authors generated a series of surfaces each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched area had different degree of noise and missing levels, since VisionRT has difficulties in detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated reconstructed surfaces by comparing against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiment, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground-truth, the authors qualitatively validated reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. Results: On phantom point clouds, their method achieved submillimeter reconstruction RMSE under different configurations, demonstrating quantitatively the faith of the proposed method in preserving local structural properties of the underlying surface in the presence of noise and missing measurements, and its robustness toward variations of such characteristics. On point clouds from the human subject, the proposed method successfully reconstructed all patient surfaces, filling regions where raw point coordinate readings were missing. Within two comparable regions of interest in the chest area, similar mean curvature distributions were acquired from both their reconstructed surface and CT surface, with mean and standard deviation of (μrecon = − 2.7 × 10−3 mm−1, σrecon = 7.0 × 10−3 mm−1) and (μCT = − 2.5 × 10−3 mm−1, σCT = 5.3 × 10−3 mm−1), respectively. The agreement of local geometry properties between the reconstructed surfaces and the CT surface demonstrated the ability of the proposed method in faithfully representing the underlying patient surface. Conclusions: The authors have integrated and developed an accurate level-set based continuous surface reconstruction method on point clouds acquired by a 3D surface photogrammetry system. The proposed method has generated a continuous representation of the underlying phantom and patient surfaces with good robustness against noise and missing measurements. It serves as an important first step for further development of motion tracking methods during radiotherapy. PMID:26520747
NASA Astrophysics Data System (ADS)
Ratajczak, M.; Wężyk, P.
2015-12-01
Rapid development of terrestrial laser scanning (TLS) in recent years resulted in its recognition and implementation in many industries, including forestry and nature conservation. The use of the 3D TLS point clouds in the process of inventory of trees and stands, as well as in the determination of their biometric features (trunk diameter, tree height, crown base, number of trunk shapes), trees and lumber size (volume of trees) is slowly becoming a practice. In addition to the measurement precision, the primary added value of TLS is the ability to automate the processing of the clouds of points 3D in the direction of the extraction of selected features of trees and stands. The paper presents the original software (GNOM) for the automatic measurement of selected features of trees, based on the cloud of points obtained by the ground laser scanner FARO. With the developed algorithms (GNOM), the location of tree trunks on the circular research surface was specified and the measurement was performed; the measurement covered the DBH (l: 1.3m), further diameters of tree trunks at different heights of the tree trunk, base of the tree crown and volume of the tree trunk (the selection measurement method), as well as the tree crown. Research works were performed in the territory of the Niepolomice Forest in an unmixed pine stand (Pinussylvestris L.) on the circular surface with a radius of 18 m, within which there were 16 pine trees (14 of them were cut down). It was characterized by a two-storey and even-aged construction (147 years old) and was devoid of undergrowth. Ground scanning was performed just before harvesting. The DBH of 16 pine trees was specified in a fully automatic way, using the algorithm GNOM with an accuracy of +2.1%, as compared to the reference measurement by the DBH measurement device. The medium, absolute measurement error in the cloud of points - using semi-automatic methods "PIXEL" (between points) and PIPE (fitting the cylinder) in the FARO Scene 5.x., showed the error, 3.5% and 5.0%,.respectively The reference height was assumed as the measurement performed by the tape on the cut tree. The average error of automatic determination of the tree height by the algorithm GNOM based on the TLS point clouds amounted to 6.3% and was slightly higher than when using the manual method of measurements on profiles in the TerraScan (Terrasolid; the error of 5.6%). The relatively high value of the error may be mainly related to the small number of points TLS in the upper parts of crowns. The crown height measurement showed the error of +9.5%. The reference in this case was the tape measurement performed already on the trunks of cut pine trees. Processing the clouds of points by the algorithms GNOM for 16 analyzed trees took no longer than 10 min. (37 sec. /tree). The paper mainly showed the TLS measurement innovation and its high precision in acquiring biometric data in forestry, and at the same time also the further need to increase the degree of automation of processing the clouds of points 3D from terrestrial laser scanning.
Use of Vertical Aerial Images for Semi-Oblique Mapping
NASA Astrophysics Data System (ADS)
Poli, D.; Moe, K.; Legat, K.; Toschi, I.; Lago, F.; Remondino, F.
2017-05-01
The paper proposes a methodology for the use of the oblique sections of images from large-format photogrammetric cameras, by exploiting the effect of the central perspective geometry in the lateral parts of the nadir images ("semi-oblique" images). The point of origin of the investigation was the execution of a photogrammetric flight over Norcia (Italy), which was seriously damaged after the earthquake of 30/10/2016. Contrary to the original plan of oblique acquisitions, the flight was executed on 15/11/2017 using an UltraCam Eagle camera with focal length 80 mm, and combining two flight plans, rotated by 90º ("crisscross" flight). The images (GSD 5 cm) were used to extract a 2.5D DSM cloud, sampled to a XY-grid size of 2 GSD, a 3D point clouds with a mean spatial resolution of 1 GSD and a 3D mesh model at a resolution of 10 cm of the historic centre of Norcia for a quantitative assessment of the damages. From the acquired nadir images the "semi-oblique" images (forward, backward, left and right views) could be extracted and processed in a modified version of GEOBLY software for measurements and restitution purposes. The potential of such semi-oblique image acquisitions from nadir-view cameras is hereafter shown and commented.
Building Change Detection from Bi-Temporal Dense-Matching Point Clouds and Aerial Images.
Pang, Shiyan; Hu, Xiangyun; Cai, Zhongliang; Gong, Jinqi; Zhang, Mi
2018-03-24
In this work, a novel building change detection method from bi-temporal dense-matching point clouds and aerial images is proposed to address two major problems, namely, the robust acquisition of the changed objects above ground and the automatic classification of changed objects into buildings or non-buildings. For the acquisition of changed objects above ground, the change detection problem is converted into a binary classification, in which the changed area above ground is regarded as the foreground and the other area as the background. For the gridded points of each period, the graph cuts algorithm is adopted to classify the points into foreground and background, followed by the region-growing algorithm to form candidate changed building objects. A novel structural feature that was extracted from aerial images is constructed to classify the candidate changed building objects into buildings and non-buildings. The changed building objects are further classified as "newly built", "taller", "demolished", and "lower" by combining the classification and the digital surface models of two periods. Finally, three typical areas from a large dataset are used to validate the proposed method. Numerous experiments demonstrate the effectiveness of the proposed algorithm.
Confronting Models with Data: The GEWEX Cloud Systems Study
NASA Technical Reports Server (NTRS)
Randall, David; Curry, Judith; Duynkerke, Peter; Krueger, Steven; Moncrieff, Mitchell; Ryan, Brian; Starr, David OC.; Miller, Martin; Rossow, William; Tselioudis, George
2002-01-01
The GEWEX Cloud System Study (GCSS; GEWEX is the Global Energy and Water Cycle Experiment) was organized to promote development of improved parameterizations of cloud systems for use in climate and numerical weather prediction models, with an emphasis on the climate applications. The strategy of GCSS is to use two distinct kinds of models to analyze and understand observations of the behavior of several different types of clouds systems. Cloud-system-resolving models (CSRMs) have high enough spatial and temporal resolutions to represent individual cloud elements, but cover a wide enough range of space and time scales to permit statistical analysis of simulated cloud systems. Results from CSRMs are compared with detailed observations, representing specific cases based on field experiments, and also with statistical composites obtained from satellite and meteorological analyses. Single-column models (SCMs) are the surgically extracted column physics of atmospheric general circulation models. SCMs are used to test cloud parameterizations in an un-coupled mode, by comparison with field data and statistical composites. In the original GCSS strategy, data is collected in various field programs and provided to the CSRM Community, which uses the data to "certify" the CSRMs as reliable tools for the simulation of particular cloud regimes, and then uses the CSRMs to develop parameterizations, which are provided to the GCM Community. We report here the results of a re-thinking of the scientific strategy of GCSS, which takes into account the practical issues that arise in confronting models with data. The main elements of the proposed new strategy are a more active role for the large-scale modeling community, and an explicit recognition of the importance of data integration.
NASA Astrophysics Data System (ADS)
Korzeniowska, Karolina; Mandlburger, Gottfried; Klimczyk, Agata
2013-04-01
The paper presents an evaluation of different terrain point extraction algorithms for Airborne Laser Scanning (ALS) point clouds. The research area covers eight test sites in the Małopolska Province (Poland) with varying point density between 3-15points/m² and surface as well as land cover characteristics. In this paper the existing implementations of algorithms were considered. Approaches based on mathematical morphology, progressive densification, robust surface interpolation and segmentation were compared. From the group of morphological filters, the Progressive Morphological Filter (PMF) proposed by Zhang K. et al. (2003) in LIS software was evaluated. From the progressive densification filter methods developed by Axelsson P. (2000) the Martin Isenburg's implementation in LAStools software (LAStools, 2012) was chosen. The third group of methods are surface-based filters. In this study, we used the hierarchic robust interpolation approach by Kraus K., Pfeifer N. (1998) as implemented in SCOP++ (Trimble, 2012). The fourth group of methods works on segmentation. From this filtering concept the segmentation algorithm available in LIS was tested (Wichmann V., 2012). The main aim in executing the automatic classification for ground extraction was operating in default mode or with default parameters which were selected by the developers of the algorithms. It was assumed that the default settings were equivalent to the parameters on which the best results can be achieved. In case it was not possible to apply an algorithm in default mode, a combination of the available and most crucial parameters for ground extraction were selected. As a result of these analyses, several output LAS files with different ground classification were achieved. The results were described on the basis of qualitative and quantitative analyses, both being in a formal description. The classification differences were verified on point cloud data. Qualitative verification of ground extraction was made on the basis of a visual inspection of the results (Sithole G., Vosselman G., 2004; Meng X. et al., 2010). The results of these analyses were described as a graph using weighted assumption. The quantitative analyses were evaluated on a basis of Type I, Type II and Total errors (Sithole G., Vosselman G., 2003). The achieved results show that the analysed algorithms yield different classification accuracies depending on the landscape and land cover. The simplest terrain for ground extraction was flat rural area with sparse vegetation. The most difficult were mountainous areas with very dense vegetation where only a few ground points were available. Generally the LAStools algorithm gives good results in every type of terrain, but the ground surface is too smooth. The LIS Progressive Morphological Filter algorithm gives good results in forested flat and low slope areas. The surface-based algorithm from SCOP++ gives good results in mountainous areas - both forested and built-up because it better preserves steep slopes, sharp ridges and breaklines, but sometimes it fails to remove off-terrain objects from the ground class. The segmentation-based algorithm in LIS gives quite good results in built-up flat areas, but in forested areas it does not work well. Bibliography: Axelsson, P., 2000. DEM generation from laser scanner data using adaptive TIN models. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XXXIII (Pt. B4/1), 110- 117 Kraus, K., Pfeifer, N., 1998. Determination of terrain models in wooded areas with airborne laser scanner data. ISPRS Journal of Photogrammetry & Remote Sensing 53 (4), 193-203 LAStools website http://www.cs.unc.edu/~isenburg/lastools/ (verified in September 2012) Meng, X., Currit, N., Zhao, K., 2010. Ground Filtering Algorithms for Airborne LiDAR Data: A Review of Critical Issues. Remote Sensing 2, 833-860 Sithole, G., Vosselman, G., 2003. Report: ISPRS Comparison of Filters. Commission III, Working Group 3. Department of Geodesy, Faculty of Civil Engineering and Geosciences, Delft University of technology, The Netherlands Sithole, G., Vosselman, G., 2004. Experimental comparison of filter algorithms for bare-Earth extraction form airborne laser scanning point clouds. ISPRS Journal of Photogrammetry & Remote Sensing 59, 85-101 Trimble, 2012 http://www.trimble.com/geospatial/aerial-software.aspx (verified in November 2012) Wichmann, V., 2012. LIS Command Reference, LASERDATA GmbH, 1-231 Zhang, K., Chen, S.-C., Whitman, D., Shyu, M.-L., Yan, J., Zhang, C., 2003. A progressive morphological filter for removing non-ground measurements from airborne LIDAR data. IEEE Transactions on Geoscience and Remote Sensing, 41(4), 872-882
LiDAR Point Cloud and Stereo Image Point Cloud Fusion
2013-09-01
LiDAR point cloud (right) highlighting linear edge features ideal for automatic registration...point cloud (right) highlighting linear edge features ideal for automatic registration. Areas where topography is being derived, unfortunately, do...with the least amount of automatic correlation errors was used. The following graphic (Figure 12) shows the coverage of the WV1 stereo triplet as
LESTO: an Open Source GIS-based toolbox for LiDAR analysis
NASA Astrophysics Data System (ADS)
Franceschi, Silvia; Antonello, Andrea; Tonon, Giustino
2015-04-01
During the last five years different research institutes and private companies stared to implement new algorithms to analyze and extract features from LiDAR data but only a few of them also created a public available software. In the field of forestry there are different examples of software that can be used to extract the vegetation parameters from LiDAR data, unfortunately most of them are closed source (even if free), which means that the source code is not shared with the public for anyone to look at or make changes to. In 2014 we started the development of the library LESTO (LiDAR Empowered Sciences Toolbox Opensource): a set of modules for the analysis of LiDAR point cloud with an Open Source approach with the aim of improving the performance of the extraction of the volume of biomass and other vegetation parameters on large areas for mixed forest structures. LESTO contains a set of modules for data handling and analysis implemented within the JGrassTools spatial processing library. The main subsections are dedicated to 1) preprocessing of LiDAR raw data mainly in LAS format (utilities and filtering); 2) creation of raster derived products; 3) flight-lines identification and normalization of the intensity values; 4) tools for extraction of vegetation and buildings. The core of the LESTO library is the extraction of the vegetation parameters. We decided to follow the single tree based approach starting with the implementation of some of the most used algorithms in literature. These have been tweaked and applied on LiDAR derived raster datasets (DTM, DSM) as well as point clouds of raw data. The methods range between the simple extraction of tops and crowns from local maxima, the region growing method, the watershed method and individual tree segmentation on point clouds. The validation procedure consists in finding the matching between field and LiDAR-derived measurements at individual tree and plot level. An automatic validation procedure has been developed considering an Optimizer Algorithm based on Particle Swarm (PS) and a matching procedure which takes the position and the height of the extracted trees respect to the measured ones and iteratively tries to improve the candidate solution changing the models' parameters. Example of application of the LESTO tools will be presented on test sites. Test area consists in a series of circular sampling plots randomly selected from a 50x50 m regular grid within a buffer zone of 150 m from the forest road. Other studies on the same sites take as reference measurements of position, diameter, species and height and proposed allometric relationships. These allometric relationship were obtained for each species deriving the stem volume of single trees based on height and diameter at breast height. LESTO is integrated in the JGrassTools project and available for download at www.jgrasstools.org. A simple and easy to use graphical interface to run the models is available at https://github.com/moovida/STAGE/releases.
Satellite-derived vertical profiles of temperature and dew point for mesoscale weather forecast
NASA Astrophysics Data System (ADS)
Masselink, Thomas; Schluessel, P.
1995-12-01
Weather forecast-models need spatially high resolutioned vertical profiles of temperature and dewpoint for their initialisation. These profiles can be supplied by a combination of data from the Tiros-N Operational Vertical Sounder (TOVS) and the imaging Advanced Very High Resolution Radiometer (AVHRR) on board the NOAA polar orbiting sate!- lites. In cloudy cases the profiles derived from TOVS data only are of insufficient accuracy. The stanthrd deviations from radiosonde ascents or numerical weather analyses likely exceed 2 K in temperature and 5Kin dewpoint profiles. It will be shown that additional cloud information as retrieved from AVHIRR allows a significant improvement in theaccuracy of vertical profiles. The International TOVS Processing Package (ITPP) is coupled to an algorithm package called AVHRR Processing scheme Over cLouds, Land and Ocean (APOLLO) where parameters like cloud fraction and cloud-top temperature are determined with higher accuracy than obtained from TOVS retrieval alone. Furthermore, a split-window technique is applied to the cloud-free AVHRR imagery in order to derive more accurate surface temperatures than can be obtained from the pure TOVS retrieval. First results of the impact of AVHRR cloud detection on the quality of the profiles are presented. The temperature and humidity profiles of different retrieval approaches are validated against analyses of the European Centre for Medium-Range Weatherforecasts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Wenyang; Cheung, Yam; Sawant, Amit
2016-05-15
Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparsemore » regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) is reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against that from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. Conclusions: The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantifications.« less
Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan
2016-05-01
To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) is reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against that from the variational method. On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantifications.
Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan
2016-01-01
Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) is reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against that from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. Conclusions: The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantifications. PMID:27147347
Gaussian Radial Basis Function for Efficient Computation of Forest Indirect Illumination
NASA Astrophysics Data System (ADS)
Abbas, Fayçal; Babahenini, Mohamed Chaouki
2018-06-01
Global illumination of natural scenes in real time like forests is one of the most complex problems to solve, because the multiple inter-reflections between the light and material of the objects composing the scene. The major problem that arises is the problem of visibility computation. In fact, the computing of visibility is carried out for all the set of leaves visible from the center of a given leaf, given the enormous number of leaves present in a tree, this computation performed for each leaf of the tree which also reduces performance. We describe a new approach that approximates visibility queries, which precede in two steps. The first step is to generate point cloud representing the foliage. We assume that the point cloud is composed of two classes (visible, not-visible) non-linearly separable. The second step is to perform a point cloud classification by applying the Gaussian radial basis function, which measures the similarity in term of distance between each leaf and a landmark leaf. It allows approximating the visibility requests to extract the leaves that will be used to calculate the amount of indirect illumination exchanged between neighbor leaves. Our approach allows efficiently treat the light exchanges in the scene of a forest, it allows a fast computation and produces images of good visual quality, all this takes advantage of the immense power of computation of the GPU.
Dorninger, Peter; Pfeifer, Norbert
2008-01-01
Three dimensional city models are necessary for supporting numerous management applications. For the determination of city models for visualization purposes, several standardized workflows do exist. They are either based on photogrammetry or on LiDAR or on a combination of both data acquisition techniques. However, the automated determination of reliable and highly accurate city models is still a challenging task, requiring a workflow comprising several processing steps. The most relevant are building detection, building outline generation, building modeling, and finally, building quality analysis. Commercial software tools for building modeling require, generally, a high degree of human interaction and most automated approaches described in literature stress the steps of such a workflow individually. In this article, we propose a comprehensive approach for automated determination of 3D city models from airborne acquired point cloud data. It is based on the assumption that individual buildings can be modeled properly by a composition of a set of planar faces. Hence, it is based on a reliable 3D segmentation algorithm, detecting planar faces in a point cloud. This segmentation is of crucial importance for the outline detection and for the modeling approach. We describe the theoretical background, the segmentation algorithm, the outline detection, and the modeling approach, and we present and discuss several actual projects. PMID:27873931
On-line lab-in-syringe cloud point extraction for the spectrophotometric determination of antimony.
Frizzarin, Rejane M; Portugal, Lindomar A; Estela, José M; Rocha, Fábio R P; Cerdà, Victor
2016-02-01
Most of the procedures for antimony determination require time-consuming sample preparation (e.g. liquid-liquid extraction with organic solvents), which are harmful to the environment. Because of the high antimony toxicity, a rapid, sensitive and greener procedure for its determination becomes necessary. The goal of this work was to develop an analytical procedure exploiting for the first time the cloud point extraction on a lab-in-syringe flow system aiming at the spectrophotometric determination of antimony. The procedure was based on formation of an ion-pair between the antimony-iodide complex and H(+) followed by extraction with Triton X-114. The factorial design showed that the concentrations of ascorbic acid, H2SO4 and Triton X-114, as well as second and third order interactions were significant at the 95% confidence level. A Box-Behnken design was applied to obtain the response surfaces and to identify the critical values. System is robust at the 95% confidence level. A linear response was observed from 5 to 50 µg L(-1), described by the equation A=0.137+0.050C(Sb) (r=0.998). The detection limit (99.7% confidence level), the coefficient of variation (n=5; 15 µg L(-1)) and the sampling rate was estimated at 1.8 µg L(-1), 1.6% and 16 h(-1), respectively. The procedure allows quantification of antimony in the concentrations established by environmental legislation (6 µg L(-1)) and it was successfully applied to the determination of antimony in freshwater samples and antileishmanial drugs, yielding results in agreement with those obtained by HGFAAS at the 95% confidence level. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Dogon-Yaro, M. A.; Kumar, P.; Rahman, A. Abdul; Buyuksalih, G.
2016-09-01
Mapping of trees plays an important role in modern urban spatial data management, as many benefits and applications inherit from this detailed up-to-date data sources. Timely and accurate acquisition of information on the condition of urban trees serves as a tool for decision makers to better appreciate urban ecosystems and their numerous values which are critical to building up strategies for sustainable development. The conventional techniques used for extracting trees include ground surveying and interpretation of the aerial photography. However, these techniques are associated with some constraints, such as labour intensive field work and a lot of financial requirement which can be overcome by means of integrated LiDAR and digital image datasets. Compared to predominant studies on trees extraction mainly in purely forested areas, this study concentrates on urban areas, which have a high structural complexity with a multitude of different objects. This paper presented a workflow about semi-automated approach for extracting urban trees from integrated processing of airborne based LiDAR point cloud and multispectral digital image datasets over Istanbul city of Turkey. The paper reveals that the integrated datasets is a suitable technology and viable source of information for urban trees management. As a conclusion, therefore, the extracted information provides a snapshot about location, composition and extent of trees in the study area useful to city planners and other decision makers in order to understand how much canopy cover exists, identify new planting, removal, or reforestation opportunities and what locations have the greatest need or potential to maximize benefits of return on investment. It can also help track trends or changes to the urban trees over time and inform future management decisions.
NASA Astrophysics Data System (ADS)
Creamean, J.; Spada, N. J.; Kirpes, R.; Pratt, K.
2017-12-01
Aerosols that serve as ice nucleating particles (INPs) have the potential to modulate cloud microphysical properties. INPs can thus subsequently impact cloud radiative forcing in addition to modification of precipitation formation processes. In regions such as the Arctic, aerosol-cloud interactions are severely understudied yet have significant implications for surface radiation reaching the sea ice and snow surfaces. Further, uncertainties in model representations of heterogeneous ice nucleation are a significant hindrance to simulating Arctic mixed-phase cloud processes. Characterizing a combination of aerosol chemical, physical, and ice nucleating properties is pertinent to evaluating of the role of aerosols in altering Arctic cloud microphysics. We present preliminary results from an aerosol sampling campaign called INPOP (Ice Nucleating Particles at Oliktok Point), which took place at a U.S. Department of Energy's Atmospheric Radiation Measurement (DOE ARM) facility on the North Slope of Alaska. Three time- and size-resolved aerosol samplers were deployed from 1 Mar to 31 May 2017 and were co-located with routine measurements of aerosol number, size, chemical, and radiative property measurements conducted by DOE ARM at their Aerosol Observing System (AOS). Offline analysis of samples collected at a daily time resolution included composition and morphology via single-particle analysis and drop freezing measurements for INP concentrations, while analysis of 12-hourly samples included mass, optical, and elemental composition. We deliberate the possible influences on the aerosol and INP population from the Prudhoe Bay oilfield resource extraction and daily operations in addition to what may be local background or long-range transported aerosol. To our knowledge our results represent some of the first INP characterization measurements in an Arctic oilfield location and can be used as a benchmark for future INP characterization studies in Arctic locations impacted by local resource extraction pollution. Ultimately, these results can be used to evaluate the impacts of oil exploration activities on Arctic cloud aerosol composition and possible linkages to Arctic cloud ice formation.
Automatic Registration of TLS-TLS and TLS-MLS Point Clouds Using a Genetic Algorithm
Yan, Li; Xie, Hong; Chen, Changjun
2017-01-01
Registration of point clouds is a fundamental issue in Light Detection and Ranging (LiDAR) remote sensing because point clouds scanned from multiple scan stations or by different platforms need to be transformed to a uniform coordinate reference frame. This paper proposes an efficient registration method based on genetic algorithm (GA) for automatic alignment of two terrestrial LiDAR scanning (TLS) point clouds (TLS-TLS point clouds) and alignment between TLS and mobile LiDAR scanning (MLS) point clouds (TLS-MLS point clouds). The scanning station position acquired by the TLS built-in GPS and the quasi-horizontal orientation of the LiDAR sensor in data acquisition are used as constraints to narrow the search space in GA. A new fitness function to evaluate the solutions for GA, named as Normalized Sum of Matching Scores, is proposed for accurate registration. Our method is divided into five steps: selection of matching points, initialization of population, transformation of matching points, calculation of fitness values, and genetic operation. The method is verified using a TLS-TLS data set and a TLS-MLS data set. The experimental results indicate that the RMSE of registration of TLS-TLS point clouds is 3~5 mm, and that of TLS-MLS point clouds is 2~4 cm. The registration integrating the existing well-known ICP with GA is further proposed to accelerate the optimization and its optimizing time decreases by about 50%. PMID:28850100
Automatic Registration of TLS-TLS and TLS-MLS Point Clouds Using a Genetic Algorithm.
Yan, Li; Tan, Junxiang; Liu, Hua; Xie, Hong; Chen, Changjun
2017-08-29
Registration of point clouds is a fundamental issue in Light Detection and Ranging (LiDAR) remote sensing because point clouds scanned from multiple scan stations or by different platforms need to be transformed to a uniform coordinate reference frame. This paper proposes an efficient registration method based on genetic algorithm (GA) for automatic alignment of two terrestrial LiDAR scanning (TLS) point clouds (TLS-TLS point clouds) and alignment between TLS and mobile LiDAR scanning (MLS) point clouds (TLS-MLS point clouds). The scanning station position acquired by the TLS built-in GPS and the quasi-horizontal orientation of the LiDAR sensor in data acquisition are used as constraints to narrow the search space in GA. A new fitness function to evaluate the solutions for GA, named as Normalized Sum of Matching Scores, is proposed for accurate registration. Our method is divided into five steps: selection of matching points, initialization of population, transformation of matching points, calculation of fitness values, and genetic operation. The method is verified using a TLS-TLS data set and a TLS-MLS data set. The experimental results indicate that the RMSE of registration of TLS-TLS point clouds is 3~5 mm, and that of TLS-MLS point clouds is 2~4 cm. The registration integrating the existing well-known ICP with GA is further proposed to accelerate the optimization and its optimizing time decreases by about 50%.
NASA Astrophysics Data System (ADS)
Wang, Weixing; Wang, Zhiwei; Han, Ya; Li, Shuang; Zhang, Xin
2015-03-01
In order to ensure safety, long term stability and quality control in modern tunneling operations, the acquisition of geotechnical information about encountered rock conditions and detailed installed support information is required. The limited space and time in an operational tunnel environment make the acquiring data challenging. The laser scanning in a tunneling environment, however, shows a great potential. The surveying and mapping of tunnels are crucial for the optimal use after construction and in routine inspections. Most of these applications focus on the geometric information of the tunnels extracted from the laser scanning data. There are two kinds of applications widely discussed: deformation measurement and feature extraction. The traditional deformation measurement in an underground environment is performed with a series of permanent control points installed around the profile of an excavation, which is unsuitable for a global consideration of the investigated area. Using laser scanning for deformation analysis provides many benefits as compared to traditional monitoring techniques. The change in profile is able to be fully characterized and the areas of the anomalous movement can easily be separated from overall trends due to the high density of the point cloud data. Furthermore, monitoring with a laser scanner does not require the permanent installation of control points, therefore the monitoring can be completed more quickly after excavation, and the scanning is non-contact, hence, no damage is done during the installation of temporary control points. The main drawback of using the laser scanning for deformation monitoring is that the point accuracy of the original data is generally the same magnitude as the smallest level of deformations that are to be measured. To overcome this, statistical techniques and three dimensional image processing techniques for the point clouds must be developed. For safely, effectively and easily control the problem of Over Underbreak detection of road and solve the problemof the roadway data collection difficulties, this paper presents a new method of continuous section extraction and Over Underbreak detection of road based on 3D laser scanning technology and image processing, the method is divided into the following three steps: based on Canny edge detection, local axis fitting, continuous extraction section and Over Underbreak detection of section. First, after Canny edge detection, take the least-squares curve fitting method to achieve partial fitting in axis. Then adjust the attitude of local roadway that makes the axis of the roadway be consistent with the direction of the extraction reference, and extract section along the reference direction. Finally, we compare the actual cross-sectional view and the cross-sectional design to complete Overbreak detected. Experimental results show that the proposed method have a great advantage in computing costs and ensure cross-section orthogonal intercept terms compared with traditional detection methods.
Automatic Classification of Trees from Laser Scanning Point Clouds
NASA Astrophysics Data System (ADS)
Sirmacek, B.; Lindenbergh, R.
2015-08-01
Development of laser scanning technologies has promoted tree monitoring studies to a new level, as the laser scanning point clouds enable accurate 3D measurements in a fast and environmental friendly manner. In this paper, we introduce a probability matrix computation based algorithm for automatically classifying laser scanning point clouds into 'tree' and 'non-tree' classes. Our method uses the 3D coordinates of the laser scanning points as input and generates a new point cloud which holds a label for each point indicating if it belongs to the 'tree' or 'non-tree' class. To do so, a grid surface is assigned to the lowest height level of the point cloud. The grids are filled with probability values which are calculated by checking the point density above the grid. Since the tree trunk locations appear with very high values in the probability matrix, selecting the local maxima of the grid surface help to detect the tree trunks. Further points are assigned to tree trunks if they appear in the close proximity of trunks. Since heavy mathematical computations (such as point cloud organization, detailed shape 3D detection methods, graph network generation) are not required, the proposed algorithm works very fast compared to the existing methods. The tree classification results are found reliable even on point clouds of cities containing many different objects. As the most significant weakness, false detection of light poles, traffic signs and other objects close to trees cannot be prevented. Nevertheless, the experimental results on mobile and airborne laser scanning point clouds indicate the possible usage of the algorithm as an important step for tree growth observation, tree counting and similar applications. While the laser scanning point cloud is giving opportunity to classify even very small trees, accuracy of the results is reduced in the low point density areas further away than the scanning location. These advantages and disadvantages of two laser scanning point cloud sources are discussed in detail.
NASA Astrophysics Data System (ADS)
Griffith, C. A.; Hall, J. L.; Geballe, T. R.
2000-10-01
Titan's atmosphere potentially sports a cycle similar to the hydrologic one on Earth with clouds, rain and seas, but with methane playing the terrestrial role of water. Over the past ten years many independent efforts indicated no strong evidence for cloudiness until some unique spectra were analyzed in 1998 (Griffith et al.). These surprising observations displayed enhanced fluxes of 14-200% on two nights at precisely the wavelengths (windows) that sense Titan's lower altitude where clouds might reside. The morphology of these enhancements in all 4 windows observed indicate that clouds covered ~6-9% of Titan's surface and existed at ~15 km altitude. Here I discuss new observations recorded in 1999 aimed to further characterize Titan's clouds. While we find no evidence for a massive cloud system similar to the one observed previously, 1%-4% fluctuations in flux occur daily. These modulations, similar in wavelength and morphology to the more pronounced ones observed earlier, suggest the presence of clouds covering <=1% of Titan's disk. The variations are too small to have been detected by most prior measurements. Repeated observations, spaced 30 minutes apart, indicate a temporal variability observable in the time scale of a couple of hours. The cloud heights hint that convection governs their evolutions. Their short lives point to the presence of rain. C. A. Griffith and J. L. Hall are supported by the NASA Planetary Astronomy Program NAG5-6790.
Visual Data Analysis for Satellites
NASA Technical Reports Server (NTRS)
Lau, Yee; Bhate, Sachin; Fitzpatrick, Patrick
2008-01-01
The Visual Data Analysis Package is a collection of programs and scripts that facilitate visual analysis of data available from NASA and NOAA satellites, as well as dropsonde, buoy, and conventional in-situ observations. The package features utilities for data extraction, data quality control, statistical analysis, and data visualization. The Hierarchical Data Format (HDF) satellite data extraction routines from NASA's Jet Propulsion Laboratory were customized for specific spatial coverage and file input/output. Statistical analysis includes the calculation of the relative error, the absolute error, and the root mean square error. Other capabilities include curve fitting through the data points to fill in missing data points between satellite passes or where clouds obscure satellite data. For data visualization, the software provides customizable Generic Mapping Tool (GMT) scripts to generate difference maps, scatter plots, line plots, vector plots, histograms, timeseries, and color fill images.
This study implemented first, second and glaciations aerosol indirect effects (AIE) on resolved clouds in the two-way coupled WRF-CMAQ modeling system by including parameterizations for both cloud drop and ice number concentrations on the basis of CMAQ predicted aerosol distribu...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson, Troy A
2011-08-01
This dissertation explores lanthanide speciation in liquid solution systems related to separation schemes involving the acidic ligands: bis(2-ethylhexyl) phosphoric acid (HDEHP), lactate, and 8-hydroxyquinoline. Equilibrium speciation of neodymium (Nd 3+), sodium (Na+), HDEHP, water, and lactate in the TALSPEAK liquid-liquid extraction system was explored under varied Nd 3+ loading of HDEHP in the organic phase and through extraction from aqueous HCl and lactate media. System speciation was probed through vapor pressure osmometry, visible and Fourier Transform Infrared (FTIR) spectroscopy, 22Na and 13C labeled lactate radiotracer distribution measurements, Karl Fischer titrations, and equilibrium pH measurements. Distribution of Nd 3+, Na +,more » lactate, and equilibrium pH were modeled using the SXLSQI software to obtain logKNd and logKNa extraction constants under selected conditions. Results showed that high Nd 3+ loading of the HDEHP led to Nd 3+ speciation that departs from the ion exchange mechanism and includes formation of highly aggregated, polynuclear [NdLactate(DEHP) 2] x; (with x > 1). By substituting lanthanum (La 3+) for Nd 3+ in this system, NMR scoping experiments using 23Na, 31P nuclei and 13C labeled lactate were performed. Results indicated that this technique is sensitive to changes in system speciation, and that further experiments are warranted. In a homogeneous system representing the TALSPEAK aqueous phase, Lactate protonation behavior at various temperatures was characterized using a combination of potentiometric titration and modeling with the Hyperquad computer program. The temperature dependent deprotonation behavior of lactate showed little change with temperature at 2.0 M NaCl ionic strength. Cloud point extraction is a non-traditional separation technique that starts with a homogeneous phase that becomes heterogeneous by the micellization of surfactants through the increase of temperature. To better understand the behavior of europium (Eu 3+) and 8-hydroxyquinoline under cloud point extraction conditions, potentiometric and spectrophotometric titrations coupled with modeling with Hyperquad and SQUAD computer programs were performed to assess europium (Eu 3+) and 8-hydroxyquinoline speciation. Experiments in both water and a 1wt% Triton X-114/water mixed solvent were compared to understand the effect of Triton X-114 on the system speciation. Results indicated that increased solvation of 8-hydroxyquinoline by the mixed solvent lead to more stable complexes involving 8-hydroxyquinoline than in water, whereas competition between hydroxide and Triton X-114 for Eu 3+ led to lower stability hydrolysis complexes in the mixed solvent than in water. Lanthanide speciation is challenging due to the trivalent oxidation state that leads to multiple ligand complexes, including some mixed complexes. The complexity of the system demands well-designed and precise experiments that capture the nuances of the chemistry. This work increased the understanding of lanthanide speciation in the explored systems, but more work is required to produce a comprehensive understanding of the speciation involved.« less
NASA Astrophysics Data System (ADS)
Wei, Wei; Li, Wenhong; Deng, Yi; Yang, Song; Jiang, Jonathan H.; Huang, Lei; Liu, W. Timothy
2018-04-01
This study investigates dynamical and thermodynamical coupling between the North Atlantic subtropical high (NASH), marine boundary layer (MBL) clouds, and the local sea surface temperatures (SSTs) over the North Atlantic in boreal summer for 1984-2009 using NCEP/DOE Reanalysis 2 dataset, various cloud data, and the Hadley Centre sea surface temperature. On interannual timescales, the summer mean subtropical MBL clouds to the southeast of the NASH is actively coupled with the NASH and local SSTs: a stronger (weaker) NASH is often accompanied with an increase (a decrease) of MBL clouds and abnormally cooler (warmer) SSTs along the southeast flank of the NASH. To understand the physical processes between the NASH and the MBL clouds, the authors conduct a data diagnostic analysis and implement a numerical modeling investigation using an idealized anomalous atmospheric general circulation model (AGCM). Results suggest that significant northeasterly anomalies in the southeast flank of the NASH associated with an intensified NASH tend to induce stronger cold advection and coastal upwelling in the MBL cloud region, reducing the boundary surface temperature. Meanwhile, warm advection associated with the easterly anomalies from the African continent leads to warming over the MBL cloud region at 700 hPa. Such warming and the surface cooling increase the atmospheric static stability, favoring growth of the MBL clouds. The anomalous diabatic cooling associated with the growth of the MBL clouds dynamically excites an anomalous anticyclone to its north and contributes to strengthening of the NASH circulation in its southeast flank. The dynamical and thermodynamical couplings and their associated variations in the NASH, MBL clouds, and SSTs constitute an important aspect of the summer climate variability over the North Atlantic.
NASA Astrophysics Data System (ADS)
Park, Joong Yong; Tuell, Grady
2010-04-01
The Data Processing System (DPS) of the Coastal Zone Mapping and Imaging Lidar (CZMIL) has been designed to automatically produce a number of novel environmental products through the fusion of Lidar, spectrometer, and camera data in a single software package. These new products significantly transcend use of the system as a bathymeter, and support use of CZMIL as a complete coastal and benthic mapping tool. The DPS provides a spinning globe capability for accessing data files; automated generation of combined topographic and bathymetric point clouds; a fully-integrated manual editor and data analysis tool; automated generation of orthophoto mosaics; automated generation of reflectance data cubes from the imaging spectrometer; a coupled air-ocean spectral optimization model producing images of chlorophyll and CDOM concentrations; and a fusion based capability to produce images and classifications of the shallow water seafloor. Adopting a multitasking approach, we expect to achieve computation of the point clouds, DEMs, and reflectance images at a 1:1 processing to acquisition ratio.
NASA Astrophysics Data System (ADS)
Cura, Rémi; Perret, Julien; Paparoditis, Nicolas
2017-05-01
In addition to more traditional geographical data such as images (rasters) and vectors, point cloud data are becoming increasingly available. Such data are appreciated for their precision and true three-Dimensional (3D) nature. However, managing point clouds can be difficult due to scaling problems and specificities of this data type. Several methods exist but are usually fairly specialised and solve only one aspect of the management problem. In this work, we propose a comprehensive and efficient point cloud management system based on a database server that works on groups of points (patches) rather than individual points. This system is specifically designed to cover the basic needs of point cloud users: fast loading, compressed storage, powerful patch and point filtering, easy data access and exporting, and integrated processing. Moreover, the proposed system fully integrates metadata (like sensor position) and can conjointly use point clouds with other geospatial data, such as images, vectors, topology and other point clouds. Point cloud (parallel) processing can be done in-base with fast prototyping capabilities. Lastly, the system is built on open source technologies; therefore it can be easily extended and customised. We test the proposed system with several billion points obtained from Lidar (aerial and terrestrial) and stereo-vision. We demonstrate loading speeds in the ˜50 million pts/h per process range, transparent-for-user and greater than 2 to 4:1 compression ratio, patch filtering in the 0.1 to 1 s range, and output in the 0.1 million pts/s per process range, along with classical processing methods, such as object detection.
NASA Astrophysics Data System (ADS)
Zhao, Yu; Shi, Chen-Xiao; Kwon, Ki-Chul; Piao, Yan-Ling; Piao, Mei-Lan; Kim, Nam
2018-03-01
We propose a fast calculation method for a computer-generated hologram (CGH) of real objects that uses a point cloud gridding method. The depth information of the scene is acquired using a depth camera and the point cloud model is reconstructed virtually. Because each point of the point cloud is distributed precisely to the exact coordinates of each layer, each point of the point cloud can be classified into grids according to its depth. A diffraction calculation is performed on the grids using a fast Fourier transform (FFT) to obtain a CGH. The computational complexity is reduced dramatically in comparison with conventional methods. The feasibility of the proposed method was confirmed by numerical and optical experiments.
New particle dependant parameterizations of heterogeneous freezing processes.
NASA Astrophysics Data System (ADS)
Diehl, Karoline; Mitra, Subir K.
2014-05-01
For detailed investigations of cloud microphysical processes an adiabatic air parcel model with entrainment is used. It represents a spectral bin model which explicitly solves the microphysical equations. The initiation of the ice phase is parameterized and describes the effects of different types of ice nuclei (mineral dust, soot, biological particles) in immersion, contact, and deposition modes. As part of the research group INUIT (Ice Nuclei research UnIT), existing parameterizations have been modified for the present studies and new parameterizations have been developed mainly on the basis of the outcome of INUIT experiments. Deposition freezing in the model is dependant on the presence of dry particles and on ice supersaturation. The description of contact freezing combines the collision kernel of dry particles with the fraction of frozen drops as function of temperature and particle size. A new parameterization of immersion freezing has been coupled to the mass of insoluble particles contained in the drops using measured numbers of ice active sites per unit mass. Sensitivity studies have been performed with a convective temperature and dew point profile and with two dry aerosol particle number size distributions. Single and coupled freezing processes are studied with different types of ice nuclei (e.g., bacteria, illite, kaolinite, feldspar). The strength of convection is varied so that the simulated cloud reaches different levels of temperature. As a parameter to evaluate the results the ice water fraction is selected which is defined as the relation of the ice water content to the total water content. Ice water fractions between 0.1 and 0.9 represent mixed-phase clouds, larger than 0.9 ice clouds. The results indicate the sensitive parameters for the formation of mixed-phase and ice clouds are: 1. broad particle number size distribution with high number of small particles, 2. temperatures below -25°C, 3. specific mineral dust particles as ice nuclei such as illite or montmorillonite. Coupled cases of deposition and contact freezing show that they are hardly in competition because of differences in the preferred particle sizes. In the contact mode, small particles are less efficient for collisions as well as less efficient as ice nuclei so that these are available for deposition freezing. On the other hand, immersion freezing is the dominant process when it is coupled with deposition freezing. As it is initiated earlier the formed ice particles consume water vapor for growing. The competition of combined contact and immersion freezing leads to lower ice water contents because more ice particles are formed via the immersion mode. In general, ice clouds and mixed-phase clouds with high ice water fractions are not directly the result of primary ice formation but of secondary ice formation and growth of ice particles at the expense of liquid drops.
Numerical Coupling and Simulation of Point-Mass System with the Turbulent Fluid Flow
NASA Astrophysics Data System (ADS)
Gao, Zheng
A computational framework that combines the Eulerian description of the turbulence field with a Lagrangian point-mass ensemble is proposed in this dissertation. Depending on the Reynolds number, the turbulence field is simulated using Direct Numerical Simulation (DNS) or eddy viscosity model. In the meanwhile, the particle system, such as spring-mass system and cloud droplets, are modeled using the ordinary differential system, which is stiff and hence poses a challenge to the stability of the entire system. This computational framework is applied to the numerical study of parachute deceleration and cloud microphysics. These two distinct problems can be uniformly modeled with Partial Differential Equations (PDEs) and Ordinary Differential Equations (ODEs), and numerically solved in the same framework. For the parachute simulation, a novel porosity model is proposed to simulate the porous effects of the parachute canopy. This model is easy to implement with the projection method and is able to reproduce Darcy's law observed in the experiment. Moreover, the impacts of using different versions of k-epsilon turbulence model in the parachute simulation have been investigated and conclude that the standard and Re-Normalisation Group (RNG) model may overestimate the turbulence effects when Reynolds number is small while the Realizable model has a consistent performance with both large and small Reynolds number. For another application, cloud microphysics, the cloud entrainment-mixing problem is studied in the same numerical framework. Three sets of DNS are carried out with both decaying and forced turbulence. The numerical result suggests a new way parameterize the cloud mixing degree using the dynamical measures. The numerical experiments also verify the negative relationship between the droplets number concentration and the vorticity field. The results imply that the gravity has fewer impacts on the forced turbulence than the decaying turbulence. In summary, the proposed framework can be used to solve a physics problem that involves turbulence field and point-mass system, and therefore has a broad application.
First observations of volcanic eruption clouds from L1 by DSCOVR/EPIC
NASA Astrophysics Data System (ADS)
Carn, S. A.; Krotkov, N. A.; Taylor, S.; Fisher, B. L.; Li, C.; Hughes, E. J.; Bhartia, P. K.; Prata, F.
2016-12-01
Volcanic emissions of sulfur dioxide (SO2) and ash have been measured by ultraviolet (UV) sensors on US and European polar-orbiting satellites since the late 1970s. Although successful, the main limitation of these UV observations from low-Earth orbit has been poor temporal resolution. Timeliness can be crucial when detecting hazardous volcanic eruption clouds that threaten aviation, and most operational geostationary satellites cannot detect SO2, a key tracer of volcanic plumes. In 2015, the launch of the Earth Polychromatic Imaging Camera (EPIC) aboard the Deep Space Climate Observatory (DSCOVR) provided the first opportunity to observe volcanic clouds from the L1 Lagrange point. EPIC is a 10-band spectroradiometer spanning UV to near-IR wavelengths with two UV channels sensitive to SO2, and a ground resolution of 25 km. The unique L1 vantage point provides continuous observations of the sunlit Earth disk, potentially offering multiple daily observations of volcanic SO2 and ash clouds in the EPIC field of view. When coupled with complementary retrievals from polar-orbiting UV and infrared (IR) sensors such as the Ozone Monitoring Instrument (OMI), the Ozone Mapping and Profiler Suite (OMPS), and the Atmospheric Infrared Sounder (AIRS), the increased observation frequency afforded by DSCOVR/EPIC will permit more timely volcanic eruption detection, improved trajectory modeling, and novel analyses of the temporal evolution of volcanic clouds. We demonstrate the sensitivity of EPIC UV radiances to volcanic clouds using examples from the first year of EPIC observations including the December 2015 paroxysmal eruption of Etna volcano (Italy). When combined with OMI and OMPS measurements, the EPIC SO2 data permit hourly tracking of the Etna eruption cloud as it drifts away from the volcano. We also describe ongoing efforts to adapt existing UV backscatter (BUV) algorithms to produce operational EPIC SO2 and Ash Index (AI) products.
DSCOVR/EPIC observations of SO2 reveal dynamics of young volcanic eruption clouds
NASA Astrophysics Data System (ADS)
Carn, S. A.; Krotkov, N. A.; Taylor, S.; Fisher, B. L.; Li, C.; Bhartia, P. K.; Prata, F. J.
2017-12-01
Volcanic emissions of sulfur dioxide (SO2) and ash have been measured by ultraviolet (UV) and infrared (IR) sensors on US and European polar-orbiting satellites since the late 1970s. Although successful, the main limitation of these observations from low Earth orbit (LEO) is poor temporal resolution (once per day at low latitudes). Furthermore, most currently operational geostationary satellites cannot detect SO2, a key tracer of volcanic plumes, limiting our ability to elucidate processes in fresh, rapidly evolving volcanic eruption clouds. In 2015, the launch of the Earth Polychromatic Imaging Camera (EPIC) aboard the Deep Space Climate Observatory (DSCOVR) provided the first opportunity to observe volcanic clouds from the L1 Lagrange point. EPIC is a 10-band spectroradiometer spanning UV to near-IR wavelengths with two UV channels sensitive to SO2, and a ground resolution of 25 km. The unique L1 vantage point provides continuous observations of the sunlit Earth disk, from sunrise to sunset, offering multiple daily observations of volcanic SO2 and ash clouds in the EPIC field of view. When coupled with complementary retrievals from polar-orbiting UV and IR sensors such as the Ozone Monitoring Instrument (OMI), the Ozone Mapping and Profiler Suite (OMPS), and the Atmospheric Infrared Sounder (AIRS), we demonstrate how the increased observation frequency afforded by DSCOVR/EPIC permits more timely volcanic eruption detection and novel analyses of the temporal evolution of volcanic clouds. Although EPIC has detected several mid- to high-latitude volcanic eruptions since launch, we focus on recent eruptions of Bogoslof volcano (Aleutian Islands, AK, USA). A series of EPIC exposures from May 28-29, 2017, uniquely captures the evolution of SO2 mass in a young Bogoslof eruption cloud, showing separation of SO2- and ice-rich regions of the cloud. We show how analyses of these sequences of EPIC SO2 data can elucidate poorly understood processes in transient eruption clouds, such as the relative roles of H2S oxidation and ice scavenging in modifying volcanic SO2 emissions. Detection of these relatively small events also proves EPIC's ability to provide timely detection of volcanic clouds in the upper troposphere and lower stratosphere.
Automatic building extraction from LiDAR data fusion of point and grid-based features
NASA Astrophysics Data System (ADS)
Du, Shouji; Zhang, Yunsheng; Zou, Zhengrong; Xu, Shenghua; He, Xue; Chen, Siyang
2017-08-01
This paper proposes a method for extracting buildings from LiDAR point cloud data by combining point-based and grid-based features. To accurately discriminate buildings from vegetation, a point feature based on the variance of normal vectors is proposed. For a robust building extraction, a graph cuts algorithm is employed to combine the used features and consider the neighbor contexture information. As grid feature computing and a graph cuts algorithm are performed on a grid structure, a feature-retained DSM interpolation method is proposed in this paper. The proposed method is validated by the benchmark ISPRS Test Project on Urban Classification and 3D Building Reconstruction and compared to the state-art-of-the methods. The evaluation shows that the proposed method can obtain a promising result both at area-level and at object-level. The method is further applied to the entire ISPRS dataset and to a real dataset of the Wuhan City. The results show a completeness of 94.9% and a correctness of 92.2% at the per-area level for the former dataset and a completeness of 94.4% and a correctness of 95.8% for the latter one. The proposed method has a good potential for large-size LiDAR data.
Tri-stereo Pleiades images-derived digital surface models for tectonic geomorphology studies
NASA Astrophysics Data System (ADS)
Ferry, Matthieu; Le Roux-Mallouf, Romain; Ritz, Jean-François; Berthet, Théo; Peyret, Michel; Vernant, Philippe; Maréchal, Anaïs; Cattin, Rodolphe; Mazzotti, Stéphane; Poujol, Antoine
2014-05-01
Very high resolution digital elevation models are a key component of modern quantitative geomorphology. In parallel to high-precision but time-consuming kinematic GPS and/or total station surveys and dense coverage but expensive LiDAR campaigns, we explore the usability of affordable, flexible, wide coverage digital surface models (DSMs) derived from Pleiades tri-stereo optical images. We present two different approaches to extract DSM from a triplet of images. The first relies on the photogrammetric extraction of 3 DSMs from the 3 possible stereo couples and subsequent merge based on the best correlation score. The second takes advantage of simultaneous correlation over the 3 images to derive a point cloud. We further extract DSM from panchromatic 0.5 m resolution images and multispectral 2 m resolution images to test for correlation and noise and determine optimal correlation window size and achievable resolution. Georeferencing is also assessed by comparing raw coordinates derived from Pleiades Rational Polynomial Coefficients to ground control points. Primary images appear to be referenced within ~15 m over flat areas where parallax is minimal while derived DSMs and associated orthorectified images show a much improved referencing within ~5 m of GCPs. In order to assess the adequacy of Pleiades DSMs for tectonic geomorphology, we present examples from case studies along the Trougout normal fault (Morocco), the Hovd strike-slip fault (Mongolia), the Denali strike-slip fault (USA and Canada) and the Main Frontal Thrust (Bhutan). In addition to proposing a variety of tectonic contexts, these examples cover a wide range of climatic conditions (semi-arid, arctic and tropical), vegetation covers (bare earth, sparse Mediterranean, homogeneous arctic pine, varied tropical forest), lithological natures and related erosion rates. The capacity of derived DSMs is demonstrated to characterize geomorphic markers of active deformation such as marine and alluvial terraces, stream gullies, alluvial fans and fluvio-glacial deposits in terms of vertical (from DSMs) and horizontal (from orthorectified optical images) offsets. Values extracted from Pleiades DSMs compare well to field measurements in terms of relief and slope, which suggests effort and resources necessary for field topography could be significantly reduced, especially in poorly accessible areas.
Carbon Dioxide Clouds at High Altitude in the Tropics and in an Early Dense Martian Atmosphere
NASA Technical Reports Server (NTRS)
Colaprete, Anthony; Toon, Owen B.
2001-01-01
We use a time dependent, microphysical cloud model to study the formation of carbon dioxide clouds in the Martian atmosphere. Laboratory studies by Glandor et al. show that high critical supersaturations are required for cloud particle nucleation and that surface kinetic growth is not limited. These conditions, which are similar to those for cirrus clouds on Earth, lead to the formation of carbon dioxide ice particles with radii greater than 500 micrometers and concentrations of less than 0.1 cm(exp -3) for typical atmospheric conditions. Within the current Martian atmosphere, CO2 cloud formation is possible at the poles during winter and at high altitudes in the tropics during periods of increased atmospheric dust loading. In both cases, temperature perturbations of several degrees below the CO2 saturation temperature are required to nucleate new cloud particles suggesting that dynamical processes are the most common initiators of carbon dioxide clouds rather than diabatic cooling. The microphysical cloud model, coupled to a two-stream radiative transfer model, is used to reexamine the impact of CO2 clouds on the surface temperature within a dense CO2 atmosphere. The formation of carbon dioxide clouds leads to a warmer surface than what would be expected for clear sky conditions. The amount of warming is sensitive to the presence of dust and water vapor in the atmosphere, both of which act to dampen cloud effects. The radiative warming associated with cloud formation, as well as latent heating, work to dissipate the clouds when present. Thus, clouds never last for periods much longer than several days, limiting their overall effectiveness for warming the surface. The time average cloud optical depth is approximately unity leading to a 5-10 K warming, depending on the surface pressure. However, the surface temperature does not rise about the freezing point of liquid water even for pressures as high as 5 bars, at a solar luminosity of 75% the current value.
A cost-effective laser scanning method for mapping stream channel geometry and roughness
NASA Astrophysics Data System (ADS)
Lam, Norris; Nathanson, Marcus; Lundgren, Niclas; Rehnström, Robin; Lyon, Steve
2015-04-01
In this pilot project, we combine an Arduino Uno and SICK LMS111 outdoor laser ranging camera to acquire high resolution topographic area scans for a stream channel. The microprocessor and imaging system was installed in a custom gondola and suspended from a wire cable system. To demonstrate the systems capabilities for capturing stream channel topography, a small stream (< 2m wide) in the Krycklan Catchment Study was temporarily diverted and scanned. Area scans along the stream channel resulted in a point spacing of 4mm and a point cloud density of 5600 points/m2 for the 5m by 2m area. A grain size distribution of the streambed material was extracted from the point cloud using a moving window, local maxima search algorithm. The median, 84th and 90th percentiles (common metrics to describe channel roughness) of this distribution were found to be within the range of measured values while the largest modelled element was approximately 35% smaller than its measured counterpart. The laser scanning system captured grain sizes between 30mm and 255mm (coarse gravel/pebbles and boulders based on the Wentworth (1922) scale). This demonstrates that our system was capable of resolving both large-scale geometry (e.g. bed slope and stream channel width) and small-scale channel roughness elements (e.g. coarse gravel/pebbles and boulders) for the study area. We further show that the point cloud resolution is suitable for estimating ecohydraulic parameters such as Manning's n and hydraulic radius. Although more work is needed to fine-tune our system's design, these preliminary results are encouraging, specifically for those with a limited operational budget.
Processing Uav and LIDAR Point Clouds in Grass GIS
NASA Astrophysics Data System (ADS)
Petras, V.; Petrasova, A.; Jeziorska, J.; Mitasova, H.
2016-06-01
Today's methods of acquiring Earth surface data, namely lidar and unmanned aerial vehicle (UAV) imagery, non-selectively collect or generate large amounts of points. Point clouds from different sources vary in their properties such as number of returns, density, or quality. We present a set of tools with applications for different types of points clouds obtained by a lidar scanner, structure from motion technique (SfM), and a low-cost 3D scanner. To take advantage of the vertical structure of multiple return lidar point clouds, we demonstrate tools to process them using 3D raster techniques which allow, for example, the development of custom vegetation classification methods. Dense point clouds obtained from UAV imagery, often containing redundant points, can be decimated using various techniques before further processing. We implemented and compared several decimation techniques in regard to their performance and the final digital surface model (DSM). Finally, we will describe the processing of a point cloud from a low-cost 3D scanner, namely Microsoft Kinect, and its application for interaction with physical models. All the presented tools are open source and integrated in GRASS GIS, a multi-purpose open source GIS with remote sensing capabilities. The tools integrate with other open source projects, specifically Point Data Abstraction Library (PDAL), Point Cloud Library (PCL), and OpenKinect libfreenect2 library to benefit from the open source point cloud ecosystem. The implementation in GRASS GIS ensures long term maintenance and reproducibility by the scientific community but also by the original authors themselves.
Filtering Airborne LIDAR Data by AN Improved Morphological Method Based on Multi-Gradient Analysis
NASA Astrophysics Data System (ADS)
Li, Y.
2013-05-01
The technology of airborne Light Detection And Ranging (LIDAR) is capable of acquiring dense and accurate 3D geospatial data. Although many related efforts have been made by a lot of researchers in the last few years, LIDAR data filtering is still a challenging task, especially for area with high relief or hybrid geographic features. In order to address the bare-ground extraction from LIDAR point clouds of complex landscapes, a novel morphological filtering algorithm is proposed based on multi-gradient analysis in terms of the characteristic of LIDAR data distribution in this paper. Firstly, point clouds are organized by an index mesh. Then, the multigradient of each point is calculated using the morphological method. And, objects are removed gradually by choosing some points to carry on an improved opening operation constrained by multi-gradient iteratively. 15 sample data provided by ISPRS Working Group III/3 are employed to test the filtering algorithm proposed. These sample data include those environments that may lead to filtering difficulty. Experimental results show that filtering algorithm proposed by this paper is of high adaptability to various scenes including urban and rural areas. Omission error, commission error and total error can be simultaneously controlled in a relatively small interval. This algorithm can efficiently remove object points while preserves ground points to a great degree.
Section-Based Tree Species Identification Using Airborne LIDAR Point Cloud
NASA Astrophysics Data System (ADS)
Yao, C.; Zhang, X.; Liu, H.
2017-09-01
The application of LiDAR data in forestry initially focused on mapping forest community, particularly and primarily intended for largescale forest management and planning. Then with the smaller footprint and higher sampling density LiDAR data available, detecting individual tree overstory, estimating crowns parameters and identifying tree species are demonstrated practicable. This paper proposes a section-based protocol of tree species identification taking palm tree as an example. Section-based method is to detect objects through certain profile among different direction, basically along X-axis or Y-axis. And this method improve the utilization of spatial information to generate accurate results. Firstly, separate the tree points from manmade-object points by decision-tree-based rules, and create Crown Height Mode (CHM) by subtracting the Digital Terrain Model (DTM) from the digital surface model (DSM). Then calculate and extract key points to locate individual trees, thus estimate specific tree parameters related to species information, such as crown height, crown radius, and cross point etc. Finally, with parameters we are able to identify certain tree species. Comparing to species information measured on ground, the portion correctly identified trees on all plots could reach up to 90.65 %. The identification result in this research demonstrate the ability to distinguish palm tree using LiDAR point cloud. Furthermore, with more prior knowledge, section-based method enable the process to classify trees into different classes.
a Global Registration Algorithm of the Single-Closed Ring Multi-Stations Point Cloud
NASA Astrophysics Data System (ADS)
Yang, R.; Pan, L.; Xiang, Z.; Zeng, H.
2018-04-01
Aimed at the global registration problem of the single-closed ring multi-stations point cloud, a formula in order to calculate the error of rotation matrix was constructed according to the definition of error. The global registration algorithm of multi-station point cloud was derived to minimize the error of rotation matrix. And fast-computing formulas of transformation matrix with whose implementation steps and simulation experiment scheme was given. Compared three different processing schemes of multi-station point cloud, the experimental results showed that the effectiveness of the new global registration method was verified, and it could effectively complete the global registration of point cloud.
NASA Astrophysics Data System (ADS)
Fast, J. D.; Berg, L. K.; Schmid, B.; Alexander, M. L. L.; Bell, D.; D'Ambro, E.; Hubbe, J. M.; Liu, J.; Mei, F.; Pekour, M. S.; Pinterich, T.; Schobesberger, S.; Shilling, J.; Springston, S. R.; Thornton, J. A.; Tomlinson, J. M.; Wang, J.; Zelenyuk, A.
2016-12-01
Cumulus convection is an important component in the atmospheric radiation budget and hydrologic cycle over the southern Great Plains and over many regions of the world, particularly during the summertime growing season when intense turbulence induced by surface radiation couples the land surface to clouds. Current convective cloud parameterizations, however, contain uncertainties resulting from insufficient coincident data that couples cloud macrophysical and microphysical properties to inhomogeneity in surface layer, boundary layer, and aerosol properties. We describe the measurement strategy and preliminary findings from the recent Holistic Interactions of Shallow Clouds, Aerosols, and Land-Ecosystems (HI-SCALE) campaign conducted in May and September of 2016 in the vicinity of the DOE's Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) site located in Oklahoma. The goal of the HI-SCALE campaign is to provide a detailed set of aircraft and surface measurements needed to obtain a more complete understanding and improved parameterizations of the lifecycle of shallow clouds. The sampling is done in two periods, one in the spring and the other in the late summer to take advantage of variations in the "greenness" for various types of vegetation, new particle formation, anthropogenic enhancement of biogenic secondary organic aerosol (SOA), and other aerosol properties. The aircraft measurements will be coupled with extensive routine ARM SGP measurements as well as Large Eddy Simulation (LES), cloud resolving, and cloud-system resolving models. Through these integrated analyses and modeling studies, the affects of inhomogeneity in land use, vegetation, soil moisture, convective eddies, and aerosol properties on the evolution of shallow clouds will be determined, including the feedbacks of cloud radiative effects.
NASA Astrophysics Data System (ADS)
Xu, Y.; Sun, Z.; Boerner, R.; Koch, T.; Hoegner, L.; Stilla, U.
2018-04-01
In this work, we report a novel way of generating ground truth dataset for analyzing point cloud from different sensors and the validation of algorithms. Instead of directly labeling large amount of 3D points requiring time consuming manual work, a multi-resolution 3D voxel grid for the testing site is generated. Then, with the help of a set of basic labeled points from the reference dataset, we can generate a 3D labeled space of the entire testing site with different resolutions. Specifically, an octree-based voxel structure is applied to voxelize the annotated reference point cloud, by which all the points are organized by 3D grids of multi-resolutions. When automatically annotating the new testing point clouds, a voting based approach is adopted to the labeled points within multiple resolution voxels, in order to assign a semantic label to the 3D space represented by the voxel. Lastly, robust line- and plane-based fast registration methods are developed for aligning point clouds obtained via various sensors. Benefiting from the labeled 3D spatial information, we can easily create new annotated 3D point clouds of different sensors of the same scene directly by considering the corresponding labels of 3D space the points located, which would be convenient for the validation and evaluation of algorithms related to point cloud interpretation and semantic segmentation.
Kartal Temel, Nuket; Gürkan, Ramazan
2018-03-01
A novel ultrasound assisted-cloud point extraction method was developed for preconcentration and determination of V(V) in beverage samples. After complexation by pyrogallol in presence of safranin T at pH 6.0, V(V) ions as ternary complex are extracted into the micellar phase of Triton X-114. The complex was monitored at 533 nm by spectrophotometry. The matrix effect on the recovery of V(V) from the spiked samples at 50 μg L-1 was evaluated. In optimized conditions, the limits of detection and quantification of the method, respectively, was 0.58 and 1.93 μg L-1 in linear range of 2-500 μg L-1 with sensitivity enhancement and preconcentration factors of 47.7 and 40 for preconcentration from 15 mL of sample solution. The recoveries from spiked samples were in range of 93.8-103.2% with a relative standard deviation ranging from 2.6% to 4.1% (25, 100 and 250 μg L-1, n: 5). The accuracy was verified by analysis of two certified samples, and the results were in a good agreement with the certified values. The intra-day and inter-day precision were tested by reproducibility (as 3.3-3.4%) and repeatability (as 3.4-4.1%) analysis for five replicate measurements of V(V) in quality control samples spiked with 5, 10 and 15 μg L-1. Trace V(V) contents of the selected beverage samples by the developed method were successfully determined.
The One to Multiple Automatic High Accuracy Registration of Terrestrial LIDAR and Optical Images
NASA Astrophysics Data System (ADS)
Wang, Y.; Hu, C.; Xia, G.; Xue, H.
2018-04-01
The registration of ground laser point cloud and close-range image is the key content of high-precision 3D reconstruction of cultural relic object. In view of the requirement of high texture resolution in the field of cultural relic at present, The registration of point cloud and image data in object reconstruction will result in the problem of point cloud to multiple images. In the current commercial software, the two pairs of registration of the two kinds of data are realized by manually dividing point cloud data, manual matching point cloud and image data, manually selecting a two - dimensional point of the same name of the image and the point cloud, and the process not only greatly reduces the working efficiency, but also affects the precision of the registration of the two, and causes the problem of the color point cloud texture joint. In order to solve the above problems, this paper takes the whole object image as the intermediate data, and uses the matching technology to realize the automatic one-to-one correspondence between the point cloud and multiple images. The matching of point cloud center projection reflection intensity image and optical image is applied to realize the automatic matching of the same name feature points, and the Rodrigo matrix spatial similarity transformation model and weight selection iteration are used to realize the automatic registration of the two kinds of data with high accuracy. This method is expected to serve for the high precision and high efficiency automatic 3D reconstruction of cultural relic objects, which has certain scientific research value and practical significance.
Brute Force Matching Between Camera Shots and Synthetic Images from Point Clouds
NASA Astrophysics Data System (ADS)
Boerner, R.; Kröhnert, M.
2016-06-01
3D point clouds, acquired by state-of-the-art terrestrial laser scanning techniques (TLS), provide spatial information about accuracies up to several millimetres. Unfortunately, common TLS data has no spectral information about the covered scene. However, the matching of TLS data with images is important for monoplotting purposes and point cloud colouration. Well-established methods solve this issue by matching of close range images and point cloud data by fitting optical camera systems on top of laser scanners or rather using ground control points. The approach addressed in this paper aims for the matching of 2D image and 3D point cloud data from a freely moving camera within an environment covered by a large 3D point cloud, e.g. a 3D city model. The key advantage of the free movement affects augmented reality applications or real time measurements. Therefore, a so-called real image, captured by a smartphone camera, has to be matched with a so-called synthetic image which consists of reverse projected 3D point cloud data to a synthetic projection centre whose exterior orientation parameters match the parameters of the image, assuming an ideal distortion free camera.
Dynamics of the CRRES barium releases in the magnetosphere
NASA Technical Reports Server (NTRS)
Fuselier, S. A.; Mende, S. B.; Geller, S. P.; Miller, M.; Hoffman, R. A.; Wygant, J. R.; Pongratz, M.; Meredith, N. P.; Anderson, R. R.
1994-01-01
The Combined Release and Radiation Effects Satellite (CRRES) G-2, G-3, and G-4 ionized and neutral barium cloud positions are triangulated from ground-based optical data. From the time history of the ionized cloud motion perpendicular to the magnetic field, the late time coupling of the ionized cloud with the collisionless ambient plasma in the magnetosphere is investigated for each of the releases. The coupling of the ionized clouds with the ambient medium is quantitatively consistent with predictions from theory in that the coupling time increases with increasing distance from the Earth. Quantitative comparison with simple theory for the couping time also yields reasonable agreement. Other effects not predicted by the theory are discussed in the context of the observations.
An Approach of Web-based Point Cloud Visualization without Plug-in
NASA Astrophysics Data System (ADS)
Ye, Mengxuan; Wei, Shuangfeng; Zhang, Dongmei
2016-11-01
With the advances in three-dimensional laser scanning technology, the demand for visualization of massive point cloud is increasingly urgent, but a few years ago point cloud visualization was limited to desktop-based solutions until the introduction of WebGL, several web renderers are available. This paper addressed the current issues in web-based point cloud visualization, and proposed a method of web-based point cloud visualization without plug-in. The method combines ASP.NET and WebGL technologies, using the spatial database PostgreSQL to store data and the open web technologies HTML5 and CSS3 to implement the user interface, a visualization system online for 3D point cloud is developed by Javascript with the web interactions. Finally, the method is applied to the real case. Experiment proves that the new model is of great practical value which avoids the shortcoming of the existing WebGIS solutions.
NASA Astrophysics Data System (ADS)
Zlinszky, András; Schroiff, Anke; Otepka, Johannes; Mandlburger, Gottfried; Pfeifer, Norbert
2014-05-01
LIDAR point clouds hold valuable information for land cover and vegetation analysis, not only in the spatial distribution of the points but also in their various attributes. However, LIDAR point clouds are rarely used for visual interpretation, since for most users, the point cloud is difficult to interpret compared to passive optical imagery. Meanwhile, point cloud viewing software is available allowing interactive 3D interpretation, but typically only one attribute at a time. This results in a large number of points with the same colour, crowding the scene and often obscuring detail. We developed a scheme for mapping information from multiple LIDAR point attributes to the Red, Green, and Blue channels of a widely used LIDAR data format, which are otherwise mostly used to add information from imagery to create "photorealistic" point clouds. The possible combinations of parameters are therefore represented in a wide range of colours, but relative differences in individual parameter values of points can be well understood. The visualization was implemented in OPALS software, using a simple and robust batch script, and is viewer independent since the information is stored in the point cloud data file itself. In our case, the following colour channel assignment delivered best results: Echo amplitude in the Red, echo width in the Green and normalized height above a Digital Terrain Model in the Blue channel. With correct parameter scaling (but completely without point classification), points belonging to asphalt and bare soil are dark red, low grassland and crop vegetation are bright red to yellow, shrubs and low trees are green and high trees are blue. Depending on roof material and DTM quality, buildings are shown from red through purple to dark blue. Erroneously high or low points, or points with incorrect amplitude or echo width usually have colours contrasting from terrain or vegetation. This allows efficient visual interpretation of the point cloud in planar, profile and 3D views since it reduces crowding of the scene and delivers intuitive contextual information. The resulting visualization has proved useful for vegetation analysis for habitat mapping, and can also be applied as a first step for point cloud level classification. An interactive demonstration of the visualization script is shown during poster attendance, including the opportunity to view your own point cloud sample files.
Rosnell, Tomi; Honkavaara, Eija
2012-01-01
The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one of the methods is based on Bae Systems’ SOCET SET classical commercial photogrammetric software and another is built using Microsoft®’s Photosynth™ service available in the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, but also some artifacts were detected. The point clouds from the Photosynth processing were sparser and noisier, which is to a large extent due to the fact that the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and that with rigorous processing it is possible to reach results that are consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations for properties of imaging sensor, data collection and processing of UAV image data to ensure accurate point cloud generation. PMID:22368479
Rosnell, Tomi; Honkavaara, Eija
2012-01-01
The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one of the methods is based on Bae Systems' SOCET SET classical commercial photogrammetric software and another is built using Microsoft(®)'s Photosynth™ service available in the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, but also some artifacts were detected. The point clouds from the Photosynth processing were sparser and noisier, which is to a large extent due to the fact that the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and that with rigorous processing it is possible to reach results that are consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations for properties of imaging sensor, data collection and processing of UAV image data to ensure accurate point cloud generation.
Michez, Adrien; Piégay, Hervé; Lejeune, Philippe; Claessens, Hugues
2017-11-01
Riparian buffers are of major concern for land and water resource managers despite their relatively low spatial coverage. In Europe, this concern has been acknowledged by different environmental directives which recommend multi-scale monitoring (from local to regional scales). Remote sensing methods could be a cost-effective alternative to field-based monitoring, to build replicable "wall-to-wall" monitoring strategies of large river networks and associated riparian buffers. The main goal of our study is to extract and analyze various parameters of the riparian buffers of up to 12,000 km of river in southern Belgium (Wallonia) from three-dimensional (3D) point clouds based on LiDAR and photogrammetric surveys to i) map riparian buffers parameters on different scales, ii) interpret the regional patterns of the riparian buffers and iii) propose new riparian buffer management indicators. We propose different strategies to synthesize and visualize relevant information at different spatial scales ranging from local (<10 km) to regional scale (>12,000 km). Our results showed that the selected parameters had a clear regional pattern. The reaches of Ardenne ecoregion have channels with the highest flow widths and shallowest depths. In contrast, the reaches of the Loam ecoregion have the narrowest and deepest flow channels. Regional variability in channel width and depth is used to locate management units potentially affected by human impact. Riparian forest of the Loam ecoregion is characterized by the lowest longitudinal continuity and mean tree height, underlining significant human disturbance. As the availability of 3D point clouds at the regional scale is constantly growing, our study proposes reproducible methods which can be integrated into regional monitoring by land managers. With LiDAR still being relatively expensive to acquire, the use of photogrammetric point clouds combined with LiDAR data is a cost-effective means to update the characterization of the riparian forest conditions. Copyright © 2017 Elsevier Ltd. All rights reserved.
a Fast Method for Measuring the Similarity Between 3d Model and 3d Point Cloud
NASA Astrophysics Data System (ADS)
Zhang, Zongliang; Li, Jonathan; Li, Xin; Lin, Yangbin; Zhang, Shanxin; Wang, Cheng
2016-06-01
This paper proposes a fast method for measuring the partial Similarity between 3D Model and 3D point Cloud (SimMC). It is crucial to measure SimMC for many point cloud-related applications such as 3D object retrieval and inverse procedural modelling. In our proposed method, the surface area of model and the Distance from Model to point Cloud (DistMC) are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted distance of the distances between points sampled from model and point cloud. Similarly, Distance from point Cloud to Model (DistCM) is defined as the average distance of the distances between points in point cloud and model. In order to reduce huge computational burdens brought by calculation of DistCM in some traditional methods, we define SimMC as the ratio of weighted surface area of model to DistMC. Compared to those traditional SimMC measuring methods that are only able to measure global similarity, our method is capable of measuring partial similarity by employing distance-weighted strategy. Moreover, our method is able to be faster than other partial similarity assessment methods. We demonstrate the superiority of our method both on synthetic data and laser scanning data.
Capturing Revolute Motion and Revolute Joint Parameters with Optical Tracking
NASA Astrophysics Data System (ADS)
Antonya, C.
2017-12-01
Optical tracking of users and various technical systems are becoming more and more popular. It consists of analysing sequence of recorded images using video capturing devices and image processing algorithms. The returned data contains mainly point-clouds, coordinates of markers or coordinates of point of interest. These data can be used for retrieving information related to the geometry of the objects, but also to extract parameters for the analytical model of the system useful in a variety of computer aided engineering simulations. The parameter identification of joints deals with extraction of physical parameters (mainly geometric parameters) for the purpose of constructing accurate kinematic and dynamic models. The input data are the time-series of the marker’s position. The least square method was used for fitting the data into different geometrical shapes (ellipse, circle, plane) and for obtaining the position and orientation of revolute joins.
Motion Estimation System Utilizing Point Cloud Registration
NASA Technical Reports Server (NTRS)
Chen, Qi (Inventor)
2016-01-01
A system and method of estimation motion of a machine is disclosed. The method may include determining a first point cloud and a second point cloud corresponding to an environment in a vicinity of the machine. The method may further include generating a first extended gaussian image (EGI) for the first point cloud and a second EGI for the second point cloud. The method may further include determining a first EGI segment based on the first EGI and a second EGI segment based on the second EGI. The method may further include determining a first two dimensional distribution for points in the first EGI segment and a second two dimensional distribution for points in the second EGI segment. The method may further include estimating motion of the machine based on the first and second two dimensional distributions.
A holistic image segmentation framework for cloud detection and extraction
NASA Astrophysics Data System (ADS)
Shen, Dan; Xu, Haotian; Blasch, Erik; Horvath, Gregory; Pham, Khanh; Zheng, Yufeng; Ling, Haibin; Chen, Genshe
2013-05-01
Atmospheric clouds are commonly encountered phenomena affecting visual tracking from air-borne or space-borne sensors. Generally clouds are difficult to detect and extract because they are complex in shape and interact with sunlight in a complex fashion. In this paper, we propose a clustering game theoretic image segmentation based approach to identify, extract, and patch clouds. In our framework, the first step is to decompose a given image containing clouds. The problem of image segmentation is considered as a "clustering game". Within this context, the notion of a cluster is equivalent to a classical equilibrium concept from game theory, as the game equilibrium reflects both the internal and external (e.g., two-player) cluster conditions. To obtain the evolutionary stable strategies, we explore three evolutionary dynamics: fictitious play, replicator dynamics, and infection and immunization dynamics (InImDyn). Secondly, we use the boundary and shape features to refine the cloud segments. This step can lower the false alarm rate. In the third step, we remove the detected clouds and patch the empty spots by performing background recovery. We demonstrate our cloud detection framework on a video clip provides supportive results.
Pointo - a Low Cost Solution to Point Cloud Processing
NASA Astrophysics Data System (ADS)
Houshiar, H.; Winkler, S.
2017-11-01
With advance in technology access to data especially 3D point cloud data becomes more and more an everyday task. 3D point clouds are usually captured with very expensive tools such as 3D laser scanners or very time consuming methods such as photogrammetry. Most of the available softwares for 3D point cloud processing are designed for experts and specialists in this field and are usually very large software packages containing variety of methods and tools. This results in softwares that are usually very expensive to acquire and also very difficult to use. Difficulty of use is caused by complicated user interfaces that is required to accommodate a large list of features. The aim of these complex softwares is to provide a powerful tool for a specific group of specialist. However they are not necessary required by the majority of the up coming average users of point clouds. In addition to complexity and high costs of these softwares they generally rely on expensive and modern hardware and only compatible with one specific operating system. Many point cloud customers are not point cloud processing experts or willing to spend the high acquisition costs of these expensive softwares and hardwares. In this paper we introduce a solution for low cost point cloud processing. Our approach is designed to accommodate the needs of the average point cloud user. To reduce the cost and complexity of software our approach focuses on one functionality at a time in contrast with most available softwares and tools that aim to solve as many problems as possible at the same time. Our simple and user oriented design improve the user experience and empower us to optimize our methods for creation of an efficient software. In this paper we introduce Pointo family as a series of connected softwares to provide easy to use tools with simple design for different point cloud processing requirements. PointoVIEWER and PointoCAD are introduced as the first components of the Pointo family to provide a fast and efficient visualization with the ability to add annotation and documentation to the point clouds.
Wake coupling to full potential rotor analysis code
NASA Technical Reports Server (NTRS)
Torres, Francisco J.; Chang, I-Chung; Oh, Byung K.
1990-01-01
The wake information from a helicopter forward flight code is coupled with two transonic potential rotor codes. The induced velocities for the near-, mid-, and far-wake geometries are extracted from a nonlinear rigid wake of a standard performance and analysis code. These, together with the corresponding inflow angles, computation points, and azimuth angles, are then incorporated into the transonic potential codes. The coupled codes can then provide an improved prediction of rotor blade loading at transonic speeds.
Validity of association rules extracted by healthcare-data-mining.
Takeuchi, Hiroshi; Kodama, Naoki
2014-01-01
A personal healthcare system used with cloud computing has been developed. It enables a daily time-series of personal health and lifestyle data to be stored in the cloud through mobile devices. The cloud automatically extracts personally useful information, such as rules and patterns concerning the user's lifestyle and health condition embedded in their personal big data, by using healthcare-data-mining. This study has verified that the extracted rules on the basis of a daily time-series data stored during a half- year by volunteer users of this system are valid.
Railway Tunnel Clearance Inspection Method Based on 3D Point Cloud from Mobile Laser Scanning
Zhou, Yuhui; Wang, Shaohua; Mei, Xi; Yin, Wangling; Lin, Chunfeng; Mao, Qingzhou
2017-01-01
Railway tunnel clearance is directly related to the safe operation of trains and upgrading of freight capacity. As more and more railway are put into operation and the operation is continuously becoming faster, the railway tunnel clearance inspection should be more precise and efficient. In view of the problems existing in traditional tunnel clearance inspection methods, such as low density, slow speed and a lot of manual operations, this paper proposes a tunnel clearance inspection approach based on 3D point clouds obtained by a mobile laser scanning system (MLS). First, a dynamic coordinate system for railway tunnel clearance inspection has been proposed. A rail line extraction algorithm based on 3D linear fitting is implemented from the segmented point cloud to establish a dynamic clearance coordinate system. Second, a method to seamlessly connect all rail segments based on the railway clearance restrictions, and a seamless rail alignment is formed sequentially from the middle tunnel section to both ends. Finally, based on the rail alignment and the track clearance coordinate system, different types of clearance frames are introduced for intrusion operation with the tunnel section to realize the tunnel clearance inspection. By taking the Shuanghekou Tunnel of the Chengdu–Kunming Railway as an example, when the clearance inspection is carried out by the method mentioned herein, its precision can reach 0.03 m, and difference types of clearances can be effectively calculated. This method has a wide application prospects. PMID:28880232
DeWitt, Jessica D.; Warner, Timothy A.; Chirico, Peter G.; Bergstresser, Sarah E.
2017-01-01
For areas of the world that do not have access to lidar, fine-scale digital elevation models (DEMs) can be photogrammetrically created using globally available high-spatial resolution stereo satellite imagery. The resultant DEM is best termed a digital surface model (DSM) because it includes heights of surface features. In densely vegetated conditions, this inclusion can limit its usefulness in applications requiring a bare-earth DEM. This study explores the use of techniques designed for filtering lidar point clouds to mitigate the elevation artifacts caused by above ground features, within the context of a case study of Prince William Forest Park, Virginia, USA. The influences of land cover and leaf-on vs. leaf-off conditions are investigated, and the accuracy of the raw photogrammetric DSM extracted from leaf-on imagery was between that of a lidar bare-earth DEM and the Shuttle Radar Topography Mission DEM. Although the filtered leaf-on photogrammetric DEM retains some artifacts of the vegetation canopy and may not be useful for some applications, filtering procedures significantly improved the accuracy of the modeled terrain. The accuracy of the DSM extracted in leaf-off conditions was comparable in most areas to the lidar bare-earth DEM and filtering procedures resulted in accuracy comparable of that to the lidar DEM.
Study of Huizhou architecture component point cloud in surface reconstruction
NASA Astrophysics Data System (ADS)
Zhang, Runmei; Wang, Guangyin; Ma, Jixiang; Wu, Yulu; Zhang, Guangbin
2017-06-01
Surface reconfiguration softwares have many problems such as complicated operation on point cloud data, too many interaction definitions, and too stringent requirements for inputing data. Thus, it has not been widely popularized so far. This paper selects the unique Huizhou Architecture chuandou wooden beam framework as the research object, and presents a complete set of implementation in data acquisition from point, point cloud preprocessing and finally implemented surface reconstruction. Firstly, preprocessing the acquired point cloud data, including segmentation and filtering. Secondly, the surface’s normals are deduced directly from the point cloud dataset. Finally, the surface reconstruction is studied by using Greedy Projection Triangulation Algorithm. Comparing the reconstructed model with the three-dimensional surface reconstruction softwares, the results show that the proposed scheme is more smooth, time efficient and portable.
NASA Astrophysics Data System (ADS)
Lague, D.
2014-12-01
High Resolution Topographic (HRT) datasets are predominantly stored and analyzed as 2D raster grids of elevations (i.e., Digital Elevation Models). Raster grid processing is common in GIS software and benefits from a large library of fast algorithms dedicated to geometrical analysis, drainage network computation and topographic change measurement. Yet, all instruments or methods currently generating HRT datasets (e.g., ALS, TLS, SFM, stereo satellite imagery) output natively 3D unstructured point clouds that are (i) non-regularly sampled, (ii) incomplete (e.g., submerged parts of river channels are rarely measured), and (iii) include 3D elements (e.g., vegetation, vertical features such as river banks or cliffs) that cannot be accurately described in a DEM. Interpolating the raw point cloud onto a 2D grid generally results in a loss of position accuracy, spatial resolution and in more or less controlled interpolation. Here I demonstrate how studying earth surface topography and processes directly on native 3D point cloud datasets offers several advantages over raster based methods: point cloud methods preserve the accuracy of the original data, can better handle the evaluation of uncertainty associated to topographic change measurements and are more suitable to study vegetation characteristics and steep features of the landscape. In this presentation, I will illustrate and compare Point Cloud based and Raster based workflows with various examples involving ALS, TLS and SFM for the analysis of bank erosion processes in bedrock and alluvial rivers, rockfall statistics (including rockfall volume estimate directly from point clouds) and the interaction of vegetation/hydraulics and sedimentation in salt marshes. These workflows use 2 recently published algorithms for point cloud classification (CANUPO) and point cloud comparison (M3C2) now implemented in the open source software CloudCompare.
An efficient cloud detection method for high resolution remote sensing panchromatic imagery
NASA Astrophysics Data System (ADS)
Li, Chaowei; Lin, Zaiping; Deng, Xinpu
2018-04-01
In order to increase the accuracy of cloud detection for remote sensing satellite imagery, we propose an efficient cloud detection method for remote sensing satellite panchromatic images. This method includes three main steps. First, an adaptive intensity threshold value combined with a median filter is adopted to extract the coarse cloud regions. Second, a guided filtering process is conducted to strengthen the textural features difference and then we conduct the detection process of texture via gray-level co-occurrence matrix based on the acquired texture detail image. Finally, the candidate cloud regions are extracted by the intersection of two coarse cloud regions above and we further adopt an adaptive morphological dilation to refine them for thin clouds in boundaries. The experimental results demonstrate the effectiveness of the proposed method.
Using Mobile Laser Scanning Data for Features Extraction of High Accuracy Driving Maps
NASA Astrophysics Data System (ADS)
Yang, Bisheng; Liu, Yuan; Liang, Fuxun; Dong, Zhen
2016-06-01
High Accuracy Driving Maps (HADMs) are the core component of Intelligent Drive Assistant Systems (IDAS), which can effectively reduce the traffic accidents due to human error and provide more comfortable driving experiences. Vehicle-based mobile laser scanning (MLS) systems provide an efficient solution to rapidly capture three-dimensional (3D) point clouds of road environments with high flexibility and precision. This paper proposes a novel method to extract road features (e.g., road surfaces, road boundaries, road markings, buildings, guardrails, street lamps, traffic signs, roadside-trees, power lines, vehicles and so on) for HADMs in highway environment. Quantitative evaluations show that the proposed algorithm attains an average precision and recall in terms of 90.6% and 91.2% in extracting road features. Results demonstrate the efficiencies and feasibilities of the proposed method for extraction of road features for HADMs.
Compression of 3D Point Clouds Using a Region-Adaptive Hierarchical Transform.
De Queiroz, Ricardo; Chou, Philip A
2016-06-01
In free-viewpoint video, there is a recent trend to represent scene objects as solids rather than using multiple depth maps. Point clouds have been used in computer graphics for a long time and with the recent possibility of real time capturing and rendering, point clouds have been favored over meshes in order to save computation. Each point in the cloud is associated with its 3D position and its color. We devise a method to compress the colors in point clouds which is based on a hierarchical transform and arithmetic coding. The transform is a hierarchical sub-band transform that resembles an adaptive variation of a Haar wavelet. The arithmetic encoding of the coefficients assumes Laplace distributions, one per sub-band. The Laplace parameter for each distribution is transmitted to the decoder using a custom method. The geometry of the point cloud is encoded using the well-established octtree scanning. Results show that the proposed solution performs comparably to the current state-of-the-art, in many occasions outperforming it, while being much more computationally efficient. We believe this work represents the state-of-the-art in intra-frame compression of point clouds for real-time 3D video.
Block Adjustment and Image Matching of WORLDVIEW-3 Stereo Pairs and Accuracy Evaluation
NASA Astrophysics Data System (ADS)
Zuo, C.; Xiao, X.; Hou, Q.; Li, B.
2018-05-01
WorldView-3, as a high-resolution commercial earth observation satellite, which is launched by Digital Global, provides panchromatic imagery of 0.31 m resolution. The positioning accuracy is less than 3.5 meter CE90 without ground control, which can use for large scale topographic mapping. This paper presented the block adjustment for WorldView-3 based on RPC model and achieved the accuracy of 1 : 2000 scale topographic mapping with few control points. On the base of stereo orientation result, this paper applied two kinds of image matching algorithm for DSM extraction: LQM and SGM. Finally, this paper compared the accuracy of the point cloud generated by the two image matching methods with the reference data which was acquired by an airborne laser scanner. The results showed that the RPC adjustment model of WorldView-3 image with small number of GCPs could satisfy the requirement of Chinese Surveying and Mapping regulations for 1 : 2000 scale topographic maps. And the point cloud result obtained through WorldView-3 stereo image matching had higher elevation accuracy, the RMS error of elevation for bare ground area is 0.45 m, while for buildings the accuracy can almost reach 1 meter.
Improved ATLAS HammerCloud Monitoring for Local Site Administration
NASA Astrophysics Data System (ADS)
Böhler, M.; Elmsheuser, J.; Hönig, F.; Legger, F.; Mancinelli, V.; Sciacca, G.
2015-12-01
Every day hundreds of tests are run on the Worldwide LHC Computing Grid for the ATLAS, and CMS experiments in order to evaluate the performance and reliability of the different computing sites. All this activity is steered, controlled, and monitored by the HammerCloud testing infrastructure. Sites with failing functionality tests are auto-excluded from the ATLAS computing grid, therefore it is essential to provide a detailed and well organized web interface for the local site administrators such that they can easily spot and promptly solve site issues. Additional functionality has been developed to extract and visualize the most relevant information. The site administrators can now be pointed easily to major site issues which lead to site blacklisting as well as possible minor issues that are usually not conspicuous enough to warrant the blacklisting of a specific site, but can still cause undesired effects such as a non-negligible job failure rate. This paper summarizes the different developments and optimizations of the HammerCloud web interface and gives an overview of typical use cases.
NASA Astrophysics Data System (ADS)
Tandon, Neil F.; Cane, Mark A.
2017-06-01
In a suite of idealized experiments with the Community Atmospheric Model version 3 coupled to a slab ocean, we show that the atmospheric circulation response to CO2 increase is sensitive to extratropical cloud feedback that is potentially nonlinear. Doubling CO2 produces a poleward shift of the Southern Hemisphere (SH) midlatitude jet that is driven primarily by cloud shortwave feedback and modulated by ice albedo feedback, in agreement with earlier studies. More surprisingly, for CO2 increases smaller than 25 %, the SH jet shifts equatorward. Nonlinearities are also apparent in the Northern Hemisphere, but with less zonal symmetry. Baroclinic instability theory and climate feedback analysis suggest that as the CO2 forcing amplitude is reduced, there is a transition from a regime in which cloud and circulation changes are largely decoupled to a regime in which they are highly coupled. In the dynamically coupled regime, there is an apparent cancellation between cloud feedback due to warming and cloud feedback due to the shifting jet, and this allows the ice albedo feedback to dominate in the high latitudes. The extent to which dynamical coupling effects exceed thermodynamic forcing effects is strongly influenced by cloud microphysics: an alternate model configuration with slightly increased cloud liquid (LIQ) produces poleward jet shifts regardless of the amplitude of CO2 forcing. Altering the cloud microphysics also produces substantial spread in the circulation response to CO2 doubling: the LIQ configuration produces a poleward SH jet shift approximately twice that produced under the default configuration. Analysis of large ensembles of the Canadian Earth System Model version 2 demonstrates that nonlinear, cloud-coupled jet shifts are also possible in comprehensive models. We still expect a poleward trend in SH jet latitude for timescales on which CO2 increases by more than 25 %. But on shorter timescales, our results give good reason to expect significant equatorward deviations. We also discuss the implications for understanding the circulation response to small external forcings from other sources, such as the solar cycle.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, W; Sawant, A; Ruan, D
2016-06-15
Purpose: Surface photogrammetry (e.g. VisionRT, C-Rad) provides a noninvasive way to obtain high-frequency measurement for patient motion monitoring in radiotherapy. This work aims to develop a real-time surface reconstruction method on the acquired point clouds, whose acquisitions are subject to noise and missing measurements. In contrast to existing surface reconstruction methods that are usually computationally expensive, the proposed method reconstructs continuous surfaces with comparable accuracy in real-time. Methods: The key idea in our method is to solve and propagate a sparse linear relationship from the point cloud (measurement) manifold to the surface (reconstruction) manifold, taking advantage of the similarity inmore » local geometric topology in both manifolds. With consistent point cloud acquisition, we propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, building the point correspondences by the iterative closest point (ICP) method. To accommodate changing noise levels and/or presence of inconsistent occlusions, we further propose a modified sparse regression (MSR) model to account for the large and sparse error built by ICP, with a Laplacian prior. We evaluated our method on both clinical acquired point clouds under consistent conditions and simulated point clouds with inconsistent occlusions. The reconstruction accuracy was evaluated w.r.t. root-mean-squared-error, by comparing the reconstructed surfaces against those from the variational reconstruction method. Results: On clinical point clouds, both the SR and MSR models achieved sub-millimeter accuracy, with mean reconstruction time reduced from 82.23 seconds to 0.52 seconds and 0.94 seconds, respectively. On simulated point cloud with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent performance despite the introduced occlusions. Conclusion: We have developed a real-time and robust surface reconstruction method on point clouds acquired by photogrammetry systems. It serves an important enabling step for real-time motion tracking in radiotherapy. This work is supported in part by NIH grant R01 CA169102-02.« less
Mingus Discontinuous Multiphysics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pat Notz, Dan Turner
Mingus provides hybrid coupled local/non-local mechanics analysis capabilities that extend several traditional methods to applications with inherent discontinuities. Its primary features include adaptations of solid mechanics, fluid dynamics and digital image correlation that naturally accommodate dijointed data or irregular solution fields by assimilating a variety of discretizations (such as control volume finite elements, peridynamics and meshless control point clouds). The goal of this software is to provide an analysis framework form multiphysics engineering problems with an integrated image correlation capability that can be used for experimental validation and model
FPFH-based graph matching for 3D point cloud registration
NASA Astrophysics Data System (ADS)
Zhao, Jiapeng; Li, Chen; Tian, Lihua; Zhu, Jihua
2018-04-01
Correspondence detection is a vital step in point cloud registration and it can help getting a reliable initial alignment. In this paper, we put forward an advanced point feature-based graph matching algorithm to solve the initial alignment problem of rigid 3D point cloud registration with partial overlap. Specifically, Fast Point Feature Histograms are used to determine the initial possible correspondences firstly. Next, a new objective function is provided to make the graph matching more suitable for partially overlapping point cloud. The objective function is optimized by the simulated annealing algorithm for final group of correct correspondences. Finally, we present a novel set partitioning method which can transform the NP-hard optimization problem into a O(n3)-solvable one. Experiments on the Stanford and UWA public data sets indicates that our method can obtain better result in terms of both accuracy and time cost compared with other point cloud registration methods.
From data to information and knowledge for geospatial applications
NASA Astrophysics Data System (ADS)
Schenk, T.; Csatho, B.; Yoon, T.
2006-12-01
An ever-increasing number of airborne and spaceborne data-acquisition missions with various sensors produce a glut of data. Sensory data rarely contains information in a explicit form such that an application can directly use it. The processing and analyzing of data constitutes a real bottleneck; therefore, automating the processes of gaining useful information and knowledge from the raw data is of paramount interest. This presentation is concerned with the transition from data to information and knowledge. With data we refer to the sensor output and we notice that data provide very rarely direct answers for applications. For example, a pixel in a digital image or a laser point from a LIDAR system (data) have no direct relationship with elevation changes of topographic surfaces or the velocity of a glacier (information, knowledge). We propose to employ the computer vision paradigm to extract information and knowledge as it pertains to a wide range of geoscience applications. After introducing the paradigm we describe the major steps to be undertaken for extracting information and knowledge from sensory input data. Features play an important role in this process. Thus we focus on extracting features and their perceptual organization to higher order constructs. We demonstrate these concepts with imaging data and laser point clouds. The second part of the presentation addresses the problem of combining data obtained by different sensors. An absolute prerequisite for successful fusion is to establish a common reference frame. We elaborate on the concept of sensor invariant features that allow the registration of such disparate data sets as aerial/satellite imagery, 3D laser point clouds, and multi/hyperspectral imagery. Fusion takes place on the data level (sensor registration) and on the information level. We show how fusion increases the degree of automation for reconstructing topographic surfaces. Moreover, fused information gained from the three sensors results in a more abstract surface representation with a rich set of explicit surface information that can be readily used by an analyst for applications such as change detection.
Extracting Objects for Aerial Manipulation on UAVs Using Low Cost Stereo Sensors
Ramon Soria, Pablo; Bevec, Robert; Arrue, Begoña C.; Ude, Aleš; Ollero, Aníbal
2016-01-01
Giving unmanned aerial vehicles (UAVs) the possibility to manipulate objects vastly extends the range of possible applications. This applies to rotary wing UAVs in particular, where their capability of hovering enables a suitable position for in-flight manipulation. Their manipulation skills must be suitable for primarily natural, partially known environments, where UAVs mostly operate. We have developed an on-board object extraction method that calculates information necessary for autonomous grasping of objects, without the need to provide the model of the object’s shape. A local map of the work-zone is generated using depth information, where object candidates are extracted by detecting areas different to our floor model. Their image projections are then evaluated using support vector machine (SVM) classification to recognize specific objects or reject bad candidates. Our method builds a sparse cloud representation of each object and calculates the object’s centroid and the dominant axis. This information is then passed to a grasping module. Our method works under the assumption that objects are static and not clustered, have visual features and the floor shape of the work-zone area is known. We used low cost cameras for creating depth information that cause noisy point clouds, but our method has proved robust enough to process this data and return accurate results. PMID:27187413
Extracting Objects for Aerial Manipulation on UAVs Using Low Cost Stereo Sensors.
Ramon Soria, Pablo; Bevec, Robert; Arrue, Begoña C; Ude, Aleš; Ollero, Aníbal
2016-05-14
Giving unmanned aerial vehicles (UAVs) the possibility to manipulate objects vastly extends the range of possible applications. This applies to rotary wing UAVs in particular, where their capability of hovering enables a suitable position for in-flight manipulation. Their manipulation skills must be suitable for primarily natural, partially known environments, where UAVs mostly operate. We have developed an on-board object extraction method that calculates information necessary for autonomous grasping of objects, without the need to provide the model of the object's shape. A local map of the work-zone is generated using depth information, where object candidates are extracted by detecting areas different to our floor model. Their image projections are then evaluated using support vector machine (SVM) classification to recognize specific objects or reject bad candidates. Our method builds a sparse cloud representation of each object and calculates the object's centroid and the dominant axis. This information is then passed to a grasping module. Our method works under the assumption that objects are static and not clustered, have visual features and the floor shape of the work-zone area is known. We used low cost cameras for creating depth information that cause noisy point clouds, but our method has proved robust enough to process this data and return accurate results.
Motion-Compensated Compression of Dynamic Voxelized Point Clouds.
De Queiroz, Ricardo L; Chou, Philip A
2017-05-24
Dynamic point clouds are a potential new frontier in visual communication systems. A few articles have addressed the compression of point clouds, but very few references exist on exploring temporal redundancies. This paper presents a novel motion-compensated approach to encoding dynamic voxelized point clouds at low bit rates. A simple coder breaks the voxelized point cloud at each frame into blocks of voxels. Each block is either encoded in intra-frame mode or is replaced by a motion-compensated version of a block in the previous frame. The decision is optimized in a rate-distortion sense. In this way, both the geometry and the color are encoded with distortion, allowing for reduced bit-rates. In-loop filtering is employed to minimize compression artifacts caused by distortion in the geometry information. Simulations reveal that this simple motion compensated coder can efficiently extend the compression range of dynamic voxelized point clouds to rates below what intra-frame coding alone can accommodate, trading rate for geometry accuracy.
Solubilization of phenanthrene above cloud point of Brij 30: a new application in biodegradation.
Pantsyrnaya, T; Delaunay, S; Goergen, J L; Guseva, E; Boudrant, J
2013-06-01
In the present study a new application of solubilization of phenanthrene above cloud point of Brij 30 in biodegradation was developed. It was shown that a temporal solubilization of phenanthrene above cloud point of Brij 30 (5wt%) permitted to obtain a stable increase of the solubility of phenanthrene even when the temperature was decreased to culture conditions of used microorganism Pseudomonas putida (28°C). A higher initial concentration of soluble phenanthrene was obtained after the cloud point treatment: 200 against 120μM without treatment. All soluble phenanthrene was metabolized and a higher final concentration of its major metabolite - 1-hydroxy-2-naphthoic acid - (160 against 85μM) was measured in the culture medium in the case of a preliminary cloud point treatment. Therefore a temporary solubilization at cloud point might have a perspective application in the enhancement of biodegradation of polycyclic aromatic hydrocarbons. Copyright © 2013 Elsevier Ltd. All rights reserved.
A portable low-cost 3D point cloud acquiring method based on structure light
NASA Astrophysics Data System (ADS)
Gui, Li; Zheng, Shunyi; Huang, Xia; Zhao, Like; Ma, Hao; Ge, Chao; Tang, Qiuxia
2018-03-01
A fast and low-cost method of acquiring 3D point cloud data is proposed in this paper, which can solve the problems of lack of texture information and low efficiency of acquiring point cloud data with only one pair of cheap cameras and projector. Firstly, we put forward a scene adaptive design method of random encoding pattern, that is, a coding pattern is projected onto the target surface in order to form texture information, which is favorable for image matching. Subsequently, we design an efficient dense matching algorithm that fits the projected texture. After the optimization of global algorithm and multi-kernel parallel development with the fusion of hardware and software, a fast acquisition system of point-cloud data is accomplished. Through the evaluation of point cloud accuracy, the results show that point cloud acquired by the method proposed in this paper has higher precision. What`s more, the scanning speed meets the demand of dynamic occasion and has better practical application value.
NASA Astrophysics Data System (ADS)
Chan, Christine S.; Ostertag, Michael H.; Akyürek, Alper Sinan; Šimunić Rosing, Tajana
2017-05-01
The Internet of Things envisions a web-connected infrastructure of billions of sensors and actuation devices. However, the current state-of-the-art presents another reality: monolithic end-to-end applications tightly coupled to a limited set of sensors and actuators. Growing such applications with new devices or behaviors, or extending the existing infrastructure with new applications, involves redesign and redeployment. We instead propose a modular approach to these applications, breaking them into an equivalent set of functional units (context engines) whose input/output transformations are driven by general-purpose machine learning, demonstrating an improvement in compute redundancy and computational complexity with minimal impact on accuracy. In conjunction with formal data specifications, or ontologies, we can replace application-specific implementations with a composition of context engines that use common statistical learning to generate output, thus improving context reuse. We implement interconnected context-aware applications using our approach, extracting user context from sensors in both healthcare and grid applications. We compare our infrastructure to single-stage monolithic implementations with single-point communications between sensor nodes and the cloud servers, demonstrating a reduction in combined system energy by 22-45%, and multiplying the battery lifetime of power-constrained devices by at least 22x, with easy deployment across different architectures and devices.
The Research on the Spectral Characteristics of Sea Fog Based on Caliop and Modis Data
NASA Astrophysics Data System (ADS)
Wan, J.; Su, J.; Liu, S.; Sheng, H.
2018-04-01
In view of that difficulty of distinguish between sea fog and low cloud by optical remote sensing mean, the research on spectral characteristics of sea fog is focused and carried out. The satellite laser radar CALIOP data and the high spectral MODIS data were obtained from May to December 2017, and the scattering coefficient and the vertical height information were extracted from the atmospheric attenuation of the lower star to extract the sea fog sample points, and the spectral response curve based on MODIS was formed to analyse the spectral response characteristics of the sea fog, thus providing a theoretical basis for the monitoring of sea fog with optical remote sensing image.
NASA Astrophysics Data System (ADS)
Hosseinzadeh-Nik, Zahra; Regele, Jonathan D.
2015-11-01
Dense compressible particle-laden flow, which has a complex nature, exists in various engineering applications. Shock waves impacting a particle cloud is a canonical problem to investigate this type of flow. It has been demonstrated that large flow unsteadiness is generated inside the particle cloud from the flow induced by the shock passage. It is desirable to develop models for the Reynolds stress to capture the energy contained in vortical structures so that volume-averaged models with point particles can be simulated accurately. However, the previous work used Euler equations, which makes the prediction of vorticity generation and propagation innacurate. In this work, a fully resolved two dimensional (2D) simulation using the compressible Navier-Stokes equations with a volume penalization method to model the particles has been performed with the parallel adaptive wavelet-collocation method. The results still show large unsteadiness inside and downstream of the particle cloud. A 1D model is created for the unclosed terms based upon these 2D results. The 1D model uses a two-phase simple low dissipation AUSM scheme (TSLAU) developed by coupled with the compressible two phase kinetic energy equation.
LiDAR Vegetation Investigation and Signature Analysis System (LVISA)
NASA Astrophysics Data System (ADS)
Höfle, Bernhard; Koenig, Kristina; Griesbaum, Luisa; Kiefer, Andreas; Hämmerle, Martin; Eitel, Jan; Koma, Zsófia
2015-04-01
Our physical environment undergoes constant changes in space and time with strongly varying triggers, frequencies, and magnitudes. Monitoring these environmental changes is crucial to improve our scientific understanding of complex human-environmental interactions and helps us to respond to environmental change by adaptation or mitigation. The three-dimensional (3D) description of the Earth surface features and the detailed monitoring of surface processes using 3D spatial data have gained increasing attention within the last decades, such as in climate change research (e.g., glacier retreat), carbon sequestration (e.g., forest biomass monitoring), precision agriculture and natural hazard management. In all those areas, 3D data have helped to improve our process understanding by allowing quantifying the structural properties of earth surface features and their changes over time. This advancement has been fostered by technological developments and increased availability of 3D sensing systems. In particular, LiDAR (light detection and ranging) technology, also referred to as laser scanning, has made significant progress and has evolved into an operational tool in environmental research and geosciences. The main result of LiDAR measurements is a highly spatially resolved 3D point cloud. Each point within the LiDAR point cloud has a XYZ coordinate associated with it and often additional information such as the strength of the returned backscatter. The point cloud provided by LiDAR contains rich geospatial, structural, and potentially biochemical information about the surveyed objects. To deal with the inherently unorganized datasets and the large data volume (frequently millions of XYZ coordinates) of LiDAR datasets, a multitude of algorithms for automatic 3D object detection (e.g., of single trees) and physical surface description (e.g., biomass) have been developed. However, so far the exchange of datasets and approaches (i.e., extraction algorithms) among LiDAR users lacks behind. We propose a novel concept, the LiDAR Vegetation Investigation and Signature Analysis System (LVISA), which shall enhance sharing of i) reference datasets of single vegetation objects with rich reference data (e.g., plant species, basic plant morphometric information) and ii) approaches for information extraction (e.g., single tree detection, tree species classification based on waveform LiDAR features). We will build an extensive LiDAR data repository for supporting the development and benchmarking of LiDAR-based object information extraction. The LiDAR Vegetation Investigation and Signature Analysis System (LVISA) uses international web service standards (Open Geospatial Consortium, OGC) for geospatial data access and also analysis (e.g., OGC Web Processing Services). This will allow the research community identifying plant object specific vegetation features from LiDAR data, while accounting for differences in LiDAR systems (e.g., beam divergence), settings (e.g., point spacing), and calibration techniques. It is the goal of LVISA to develop generic 3D information extraction approaches, which can be seamlessly transferred to other datasets, timestamps and also extraction tasks. The current prototype of LVISA can be visited and tested online via http://uni-heidelberg.de/lvisa. Video tutorials provide a quick overview and entry into the functionality of LVISA. We will present the current advances of LVISA and we will highlight future research and extension of LVISA, such as integrating low-cost LiDAR data and datasets acquired by highly temporal scanning of vegetation (e.g., continuous measurements). Everybody is invited to join the LVISA development and share datasets and analysis approaches in an interoperable way via the web-based LVISA geoportal.
Point clouds segmentation as base for as-built BIM creation
NASA Astrophysics Data System (ADS)
Macher, H.; Landes, T.; Grussenmeyer, P.
2015-08-01
In this paper, a three steps segmentation approach is proposed in order to create 3D models from point clouds acquired by TLS inside buildings. The three scales of segmentation are floors, rooms and planes composing the rooms. First, floor segmentation is performed based on analysis of point distribution along Z axis. Then, for each floor, room segmentation is achieved considering a slice of point cloud at ceiling level. Finally, planes are segmented for each room, and planes corresponding to ceilings and floors are identified. Results of each step are analysed and potential improvements are proposed. Based on segmented point clouds, the creation of as-built BIM is considered in a future work section. Not only the classification of planes into several categories is proposed, but the potential use of point clouds acquired outside buildings is also considered.
High-Precision Registration of Point Clouds Based on Sphere Feature Constraints.
Huang, Junhui; Wang, Zhao; Gao, Jianmin; Huang, Youping; Towers, David Peter
2016-12-30
Point cloud registration is a key process in multi-view 3D measurements. Its precision affects the measurement precision directly. However, in the case of the point clouds with non-overlapping areas or curvature invariant surface, it is difficult to achieve a high precision. A high precision registration method based on sphere feature constraint is presented to overcome the difficulty in the paper. Some known sphere features with constraints are used to construct virtual overlapping areas. The virtual overlapping areas provide more accurate corresponding point pairs and reduce the influence of noise. Then the transformation parameters between the registered point clouds are solved by an optimization method with weight function. In that case, the impact of large noise in point clouds can be reduced and a high precision registration is achieved. Simulation and experiments validate the proposed method.
High-Precision Registration of Point Clouds Based on Sphere Feature Constraints
Huang, Junhui; Wang, Zhao; Gao, Jianmin; Huang, Youping; Towers, David Peter
2016-01-01
Point cloud registration is a key process in multi-view 3D measurements. Its precision affects the measurement precision directly. However, in the case of the point clouds with non-overlapping areas or curvature invariant surface, it is difficult to achieve a high precision. A high precision registration method based on sphere feature constraint is presented to overcome the difficulty in the paper. Some known sphere features with constraints are used to construct virtual overlapping areas. The virtual overlapping areas provide more accurate corresponding point pairs and reduce the influence of noise. Then the transformation parameters between the registered point clouds are solved by an optimization method with weight function. In that case, the impact of large noise in point clouds can be reduced and a high precision registration is achieved. Simulation and experiments validate the proposed method. PMID:28042846
Pan, Tao; Liu, Chunyan; Zeng, Xinying; Xin, Qiao; Xu, Meiying; Deng, Yangwu; Dong, Wei
2017-06-01
A recent work has shown that hydrophobic organic compounds solubilized in the micelle phase of some nonionic surfactants present substrate toxicity to microorganisms with increasing bioavailability. However, in cloud point systems, biotoxicity is prevented, because the compounds are solubilized into a coacervate phase, thereby leaving a fraction of compounds with cells in a dilute phase. This study extends the understanding of the relationship between substrate toxicity and bioavailability of hydrophobic organic compounds solubilized in nonionic surfactant micelle phase and cloud point system. Biotoxicity experiments were conducted with naphthalene and phenanthrene in the presence of mixed nonionic surfactants Brij30 and TMN-3, which formed a micelle phase or cloud point system at different concentrations. Saccharomyces cerevisiae, unable to degrade these compounds, was used for the biotoxicity experiments. Glucose in the cloud point system was consumed faster than in the nonionic surfactant micelle phase, indicating that the solubilized compounds had increased toxicity to cells in the nonionic surfactant micelle phase. The results were verified by subsequent biodegradation experiments. The compounds were degraded faster by PAH-degrading bacterium in the cloud point system than in the micelle phase. All these results showed that biotoxicity of the hydrophobic organic compounds increases with bioavailability in the surfactant micelle phase but remains at a low level in the cloud point system. These results provide a guideline for the application of cloud point systems as novel media for microbial transformation or biodegradation.
NASA Astrophysics Data System (ADS)
Erfani, E.; Burls, N.
2017-12-01
The nature of local coupled ocean-atmosphere interactions within the tropics is determined by background conditions such as the depth of the equatorial thermocline, the water vapor content of the tropical atmosphere, and the radiative forcing of tropical clouds. These factors are set not only by the coupled tropical variability itself but also by extra-tropical conditions. For example, the strength of the cold tongue is ultimately controlled by the temperature of waters subducted in the extra-tropics and transported to the equator by the ocean subtropical cells (STCs). Similarly, inter-hemispheric asymmetries in extra-tropical atmospheric heating are communicated to the tropics affecting cross-equatorial heat transport and ITCZ position. Acknowledging from a fully coupled perspective the influence of both tropical and extra-tropical conditions, we are performing a suite of CESM experiments across which we systematically alter the strength of convective and stratus cloud feedbacks. By systematically exploring the sensitivity of the tropical coupled system to imposed changes in the strength of tropical and extra-tropical cloud feedbacks to CO2-induced warming this work aims to formalize our understanding of cloud controls on tropical climate.
Automatic 3d Building Model Generations with Airborne LiDAR Data
NASA Astrophysics Data System (ADS)
Yastikli, N.; Cetin, Z.
2017-11-01
LiDAR systems become more and more popular because of the potential use for obtaining the point clouds of vegetation and man-made objects on the earth surface in an accurate and quick way. Nowadays, these airborne systems have been frequently used in wide range of applications such as DEM/DSM generation, topographic mapping, object extraction, vegetation mapping, 3 dimensional (3D) modelling and simulation, change detection, engineering works, revision of maps, coastal management and bathymetry. The 3D building model generation is the one of the most prominent applications of LiDAR system, which has the major importance for urban planning, illegal construction monitoring, 3D city modelling, environmental simulation, tourism, security, telecommunication and mobile navigation etc. The manual or semi-automatic 3D building model generation is costly and very time-consuming process for these applications. Thus, an approach for automatic 3D building model generation is needed in a simple and quick way for many studies which includes building modelling. In this study, automatic 3D building models generation is aimed with airborne LiDAR data. An approach is proposed for automatic 3D building models generation including the automatic point based classification of raw LiDAR point cloud. The proposed point based classification includes the hierarchical rules, for the automatic production of 3D building models. The detailed analyses for the parameters which used in hierarchical rules have been performed to improve classification results using different test areas identified in the study area. The proposed approach have been tested in the study area which has partly open areas, forest areas and many types of the buildings, in Zekeriyakoy, Istanbul using the TerraScan module of TerraSolid. The 3D building model was generated automatically using the results of the automatic point based classification. The obtained results of this research on study area verified that automatic 3D building models can be generated successfully using raw LiDAR point cloud data.
[The application of wavelet analysis of remote detection of pollution clouds].
Zhang, J; Jiang, F
2001-08-01
The discrete wavelet transform (DWT) is used to analyse the spectra of pollution clouds in complicated environment and extract the small-features. The DWT is a time-frequency analysis technology, which detects the subtle small changes in the target spectrum. The results show that the DWT is a quite effective method to extract features of target-cloud and improve the reliability of monitoring alarm system.
Retrieval of cloud cover parameters from multispectral satellite images
NASA Technical Reports Server (NTRS)
Arking, A.; Childs, J. D.
1985-01-01
A technique is described for extracting cloud cover parameters from multispectral satellite radiometric measurements. Utilizing three channels from the AVHRR (Advanced Very High Resolution Radiometer) on NOAA polar orbiting satellites, it is shown that one can retrieve four parameters for each pixel: cloud fraction within the FOV, optical thickness, cloud-top temperature and a microphysical model parameter. The last parameter is an index representing the properties of the cloud particle and is determined primarily by the radiance at 3.7 microns. The other three parameters are extracted from the visible and 11 micron infrared radiances, utilizing the information contained in the two-dimensional scatter plot of the measured radiances. The solution is essentially one in which the distributions of optical thickness and cloud-top temperature are maximally clustered for each region, with cloud fraction for each pixel adjusted to achieve maximal clustering.
Towards a More Efficient Detection of Earthquake Induced FAÇADE Damages Using Oblique Uav Imagery
NASA Astrophysics Data System (ADS)
Duarte, D.; Nex, F.; Kerle, N.; Vosselman, G.
2017-08-01
Urban search and rescue (USaR) teams require a fast and thorough building damage assessment, to focus their rescue efforts accordingly. Unmanned aerial vehicles (UAV) are able to capture relevant data in a short time frame and survey otherwise inaccessible areas after a disaster, and have thus been identified as useful when coupled with RGB cameras for façade damage detection. Existing literature focuses on the extraction of 3D and/or image features as cues for damage. However, little attention has been given to the efficiency of the proposed methods which hinders its use in an urban search and rescue context. The framework proposed in this paper aims at a more efficient façade damage detection using UAV multi-view imagery. This was achieved directing all damage classification computations only to the image regions containing the façades, hence discarding the irrelevant areas of the acquired images and consequently reducing the time needed for such task. To accomplish this, a three-step approach is proposed: i) building extraction from the sparse point cloud computed from the nadir images collected in an initial flight; ii) use of the latter as proxy for façade location in the oblique images captured in subsequent flights, and iii) selection of the façade image regions to be fed to a damage classification routine. The results show that the proposed framework successfully reduces the extracted façade image regions to be assessed for damage 6 fold, hence increasing the efficiency of subsequent damage detection routines. The framework was tested on a set of UAV multi-view images over a neighborhood of the city of L'Aquila, Italy, affected in 2009 by an earthquake.
The Registration and Segmentation of Heterogeneous Laser Scanning Data
NASA Astrophysics Data System (ADS)
Al-Durgham, Mohannad M.
Light Detection And Ranging (LiDAR) mapping has been emerging over the past few years as a mainstream tool for the dense acquisition of three dimensional point data. Besides the conventional mapping missions, LiDAR systems have proven to be very useful for a wide spectrum of applications such as forestry, structural deformation analysis, urban mapping, and reverse engineering. The wide application scope of LiDAR lead to the development of many laser scanning technologies that are mountable on multiple platforms (i.e., airborne, mobile terrestrial, and tripod mounted), this caused variations in the characteristics and quality of the generated point clouds. As a result of the increased popularity and diversity of laser scanners, one should address the heterogeneous LiDAR data post processing (i.e., registration and segmentation) problems adequately. Current LiDAR integration techniques do not take into account the varying nature of laser scans originating from various platforms. In this dissertation, the author proposes a methodology designed particularly for the registration and segmentation of heterogeneous LiDAR data. A data characterization and filtering step is proposed to populate the points' attributes and remove non-planar LiDAR points. Then, a modified version of the Iterative Closest Point (ICP), denoted by the Iterative Closest Projected Point (ICPP) is designed for the registration of heterogeneous scans to remove any misalignments between overlapping strips. Next, a region-growing-based heterogeneous segmentation algorithm is developed to ensure the proper extraction of planar segments from the point clouds. Validation experiments show that the proposed heterogeneous registration can successfully align airborne and terrestrial datasets despite the great differences in their point density and their noise level. In addition, similar testes have been conducted to examine the heterogeneous segmentation and it is shown that one is able to identify common planar features in airborne and terrestrial data without resampling or manipulating the data in any way. The work presented in this dissertation provides a framework for the registration and segmentation of airborne and terrestrial laser scans which has a positive impact on the completeness of the scanned feature. Therefore, the derived products from these point clouds have higher accuracy as seen in the full manuscript.
Simulation of Asia Dust and Cloud Interaction Over Pacific Ocean During Pacdex
NASA Astrophysics Data System (ADS)
Long, X.; Huang, J.; Cheng, C.; Wang, W.
2007-12-01
The effect of dust plume on the Pacific cloud systems and the associated radiative forcing is an outstanding problem for understanding climate change. Many studies showing that dust aerosol might be a good absorber for solar radiation, at the same time dust aerosols could affect the cloud's formation and precipitation by its capability as cloud condensation nuclei (CCN) and ice forming nuclei (IFN). But the role of aerosols in clouds and precipitation is very complex. Simulation of interaction between cloud and dust aerosols requires recognition that the aerosol cloud system comprises coupled components of dynamics, aerosol and cloud microphysics, radiation processes. In this study, we investigated the interaction between dust aerosols and cloud with WRF which coupled with detailed cloud microphysics processes and dust process. The observed data of SACOL (Semi-Arid Climate and Environment Observatory of Lanzhou University) and PACDEX (Pacific Dust Experiment) is used as the initialization which include the vertical distributions and concentration of dust particles. Our results show that dust aerosol not only impacts cloud microphysical processes but also cloud microstructure; Dust aerosols can act as effective ice nuclei and intensify the ice-forming processes.
An Automatic Procedure for Combining Digital Images and Laser Scanner Data
NASA Astrophysics Data System (ADS)
Moussa, W.; Abdel-Wahab, M.; Fritsch, D.
2012-07-01
Besides improving both the geometry and the visual quality of the model, the integration of close-range photogrammetry and terrestrial laser scanning techniques directs at filling gaps in laser scanner point clouds to avoid modeling errors, reconstructing more details in higher resolution and recovering simple structures with less geometric details. Thus, within this paper a flexible approach for the automatic combination of digital images and laser scanner data is presented. Our approach comprises two methods for data fusion. The first method starts by a marker-free registration of digital images based on a point-based environment model (PEM) of a scene which stores the 3D laser scanner point clouds associated with intensity and RGB values. The PEM allows the extraction of accurate control information for the direct computation of absolute camera orientations with redundant information by means of accurate space resection methods. In order to use the computed relations between the digital images and the laser scanner data, an extended Helmert (seven-parameter) transformation is introduced and its parameters are estimated. Precedent to that, in the second method, the local relative orientation parameters of the camera images are calculated by means of an optimized Structure and Motion (SaM) reconstruction method. Then, using the determined transformation parameters results in having absolute oriented images in relation to the laser scanner data. With the resulting absolute orientations we have employed robust dense image reconstruction algorithms to create oriented dense image point clouds, which are automatically combined with the laser scanner data to form a complete detailed representation of a scene. Examples of different data sets are shown and experimental results demonstrate the effectiveness of the presented procedures.
A point particle model of lightly bound skyrmions
NASA Astrophysics Data System (ADS)
Gillard, Mike; Harland, Derek; Kirk, Elliot; Maybee, Ben; Speight, Martin
2017-04-01
A simple model of the dynamics of lightly bound skyrmions is developed in which skyrmions are replaced by point particles, each carrying an internal orientation. The model accounts well for the static energy minimizers of baryon number 1 ≤ B ≤ 8 obtained by numerical simulation of the full field theory. For 9 ≤ B ≤ 23, a large number of static solutions of the point particle model are found, all closely resembling size B subsets of a face centred cubic lattice, with the particle orientations dictated by a simple colouring rule. Rigid body quantization of these solutions is performed, and the spin and isospin of the corresponding ground states extracted. As part of the quantization scheme, an algorithm to compute the symmetry group of an oriented point cloud, and to determine its corresponding Finkelstein-Rubinstein constraints, is devised.
[The progress in speciation analysis of trace elements by atomic spectrometry].
Wang, Zeng-Huan; Wang, Xu-Nuo; Ke, Chang-Liang; Lin, Qin
2013-12-01
The main purpose of the present work is to review the different non-chromatographic methods for the speciation analysis of trace elements in geological, environmental, biological and medical areas. In this paper, the sample processing methods in speciation analysis were summarized, and the main strategies for non-chromatographic technique were evaluated. The basic principles of the liquid extractions proposed in the published literatures recently and their advantages and disadvantages were discussed, such as conventional solvent extraction, cloud point extraction, single droplet microextraction, and dispersive liquid-liquid microextraction. Solid phase extraction, as a non-chromatographic technique for speciation analysis, can be used in batch or in flow detection, and especially suitable for the online connection to atomic spectrometric detector. The developments and applications of sorbent materials filled in the columns of solid phase extraction were reviewed. The sorbents include chelating resins, nanometer materials, molecular and ion imprinted materials, and bio-sorbents. Other techniques, e. g. hydride generation technique and coprecipitation, were also reviewed together with their main applications.
Method for separating water soluble organics from a process stream by aqueous biphasic extraction
Chaiko, David J.; Mego, William A.
1999-01-01
A method for separating water-miscible organic species from a process stream by aqueous biphasic extraction is provided. An aqueous biphase system is generated by contacting a process stream comprised of water, salt, and organic species with an aqueous polymer solution. The organic species transfer from the salt-rich phase to the polymer-rich phase, and the phases are separated. Next, the polymer is recovered from the loaded polymer phase by selectively extracting the polymer into an organic phase at an elevated temperature, while the organic species remain in a substantially salt-free aqueous solution. Alternatively, the polymer is recovered from the loaded polymer by a temperature induced phase separation (cloud point extraction), whereby the polymer and the organic species separate into two distinct solutions. The method for separating water-miscible organic species is applicable to the treatment of industrial wastewater streams, including the extraction and recovery of complexed metal ions from salt solutions, organic contaminants from mineral processing streams, and colorants from spent dye baths.
Filtering Photogrammetric Point Clouds Using Standard LIDAR Filters Towards DTM Generation
NASA Astrophysics Data System (ADS)
Zhang, Z.; Gerke, M.; Vosselman, G.; Yang, M. Y.
2018-05-01
Digital Terrain Models (DTMs) can be generated from point clouds acquired by laser scanning or photogrammetric dense matching. During the last two decades, much effort has been paid to developing robust filtering algorithms for the airborne laser scanning (ALS) data. With the point cloud quality from dense image matching (DIM) getting better and better, the research question that arises is whether those standard Lidar filters can be used to filter photogrammetric point clouds as well. Experiments are implemented to filter two dense matching point clouds with different noise levels. Results show that the standard Lidar filter is robust to random noise. However, artefacts and blunders in the DIM points often appear due to low contrast or poor texture in the images. Filtering will be erroneous in these locations. Filtering the DIM points pre-processed by a ranking filter will bring higher Type II error (i.e. non-ground points actually labelled as ground points) but much lower Type I error (i.e. bare ground points labelled as non-ground points). Finally, the potential DTM accuracy that can be achieved by DIM points is evaluated. Two DIM point clouds derived by Pix4Dmapper and SURE are compared. On grassland dense matching generates points higher than the true terrain surface, which will result in incorrectly elevated DTMs. The application of the ranking filter leads to a reduced bias in the DTM height, but a slightly increased noise level.
3D face analysis by using Mesh-LBP feature
NASA Astrophysics Data System (ADS)
Wang, Haoyu; Yang, Fumeng; Zhang, Yuming; Wu, Congzhong
2017-11-01
Objective: Face Recognition is one of the widely application of image processing. Corresponding two-dimensional limitations, such as the pose and illumination changes, to a certain extent restricted its accurate rate and further development. How to overcome the pose and illumination changes and the effects of self-occlusion is the research hotspot and difficulty, also attracting more and more domestic and foreign experts and scholars to study it. 3D face recognition fusing shape and texture descriptors has become a very promising research direction. Method: Our paper presents a 3D point cloud based on mesh local binary pattern grid (Mesh-LBP), then feature extraction for 3D face recognition by fusing shape and texture descriptors. 3D Mesh-LBP not only retains the integrity of the 3D geometry, is also reduces the need for recognition process of normalization steps, because the triangle Mesh-LBP descriptor is calculated on 3D grid. On the other hand, in view of multi-modal consistency in face recognition advantage, construction of LBP can fusing shape and texture information on Triangular Mesh. In this paper, some of the operators used to extract Mesh-LBP, Such as the normal vectors of the triangle each face and vertex, the gaussian curvature, the mean curvature, laplace operator and so on. Conclusion: First, Kinect devices obtain 3D point cloud face, after the pretreatment and normalization, then transform it into triangular grid, grid local binary pattern feature extraction from face key significant parts of face. For each local face, calculate its Mesh-LBP feature with Gaussian curvature, mean curvature laplace operator and so on. Experiments on the our research database, change the method is robust and high recognition accuracy.
Point Cloud Management Through the Realization of the Intelligent Cloud Viewer Software
NASA Astrophysics Data System (ADS)
Costantino, D.; Angelini, M. G.; Settembrini, F.
2017-05-01
The paper presents a software dedicated to the elaboration of point clouds, called Intelligent Cloud Viewer (ICV), made in-house by AESEI software (Spin-Off of Politecnico di Bari), allowing to view point cloud of several tens of millions of points, also on of "no" very high performance systems. The elaborations are carried out on the whole point cloud and managed by means of the display only part of it in order to speed up rendering. It is designed for 64-bit Windows and is fully written in C ++ and integrates different specialized modules for computer graphics (Open Inventor by SGI, Silicon Graphics Inc), maths (BLAS, EIGEN), computational geometry (CGAL, Computational Geometry Algorithms Library), registration and advanced algorithms for point clouds (PCL, Point Cloud Library), advanced data structures (BOOST, Basic Object Oriented Supporting Tools), etc. ICV incorporates a number of features such as, for example, cropping, transformation and georeferencing, matching, registration, decimation, sections, distances calculation between clouds, etc. It has been tested on photographic and TLS (Terrestrial Laser Scanner) data, obtaining satisfactory results. The potentialities of the software have been tested by carrying out the photogrammetric survey of the Castel del Monte which was already available in previous laser scanner survey made from the ground by the same authors. For the aerophotogrammetric survey has been adopted a flight height of approximately 1000ft AGL (Above Ground Level) and, overall, have been acquired over 800 photos in just over 15 minutes, with a covering not less than 80%, the planned speed of about 90 knots.
Anthropogenic Sulfate, Clouds, and Climate Forcing
NASA Technical Reports Server (NTRS)
Ghan, Steven J.
1997-01-01
This research work is a joint effort between research groups at the Battelle Pacific Northwest Laboratory, Virginia Tech University, Georgia Institute of Technology, Brookhaven National Laboratory, and Texas A&M University. It has been jointly sponsored by the National Aeronautics and Space Administration, the U.S. Department of Energy, and the U.S. Environmental Protection Agency. In this research, a detailed tropospheric aerosol-chemistry model that predicts oxidant concentrations as well as concentrations of sulfur dioxide and sulfate aerosols has been coupled to a general circulation model that distinguishes between cloud water mass and cloud droplet number. The coupled model system has been first validated and then used to estimate the radiative impact of anthropogenic sulfur emissions. Both the direct radiative impact of the aerosols and their indirect impact through their influence on cloud droplet number are represented by distinguishing between sulfuric acid vapor and fresh and aged sulfate aerosols, and by parameterizing cloud droplet nucleation in terms of vertical velocity and the number concentration of aged sulfur aerosols. Natural sulfate aerosols, dust, and carbonaceous and nitrate aerosols and their influence on the radiative impact of anthropogenic sulfate aerosols, through competition as cloud condensation nuclei, will also be simulated. Parallel simulations with and without anthropogenic sulfur emissions are performed for a global domain. The objectives of the research are: To couple a state-of-the-art tropospheric aerosol-chemistry model with a global climate model. To use field and satellite measurements to evaluate the treatment of tropospheric chemistry and aerosol physics in the coupled model. To use the coupled model to simulate the radiative (and ultimately climatic) impacts of anthropogenic sulfur emissions.
NASA Astrophysics Data System (ADS)
Gézero, L.; Antunes, C.
2017-05-01
The digital terrain models (DTM) assume an essential role in all types of road maintenance, water supply and sanitation projects. The demand of such information is more significant in developing countries, where the lack of infrastructures is higher. In recent years, the use of Mobile LiDAR Systems (MLS) proved to be a very efficient technique in the acquisition of precise and dense point clouds. These point clouds can be a solution to obtain the data for the production of DTM in remote areas, due mainly to the safety, precision, speed of acquisition and the detail of the information gathered. However, the point clouds filtering and algorithms to separate "terrain points" from "no terrain points", quickly and consistently, remain a challenge that has caught the interest of researchers. This work presents a method to create the DTM from point clouds collected by MLS. The method is based in two interactive steps. The first step of the process allows reducing the cloud point to a set of points that represent the terrain's shape, being the distance between points inversely proportional to the terrain variation. The second step is based on the Delaunay triangulation of the points resulting from the first step. The achieved results encourage a wider use of this technology as a solution for large scale DTM production in remote areas.
Evaluation of terrestrial photogrammetric point clouds derived from thermal imagery
NASA Astrophysics Data System (ADS)
Metcalf, Jeremy P.; Olsen, Richard C.
2016-05-01
Computer vision and photogrammetric techniques have been widely applied to digital imagery producing high density 3D point clouds. Using thermal imagery as input, the same techniques can be applied to infrared data to produce point clouds in 3D space, providing surface temperature information. The work presented here is an evaluation of the accuracy of 3D reconstruction of point clouds produced using thermal imagery. An urban scene was imaged over an area at the Naval Postgraduate School, Monterey, CA, viewing from above as with an airborne system. Terrestrial thermal and RGB imagery were collected from a rooftop overlooking the site using a FLIR SC8200 MWIR camera and a Canon T1i DSLR. In order to spatially align each dataset, ground control points were placed throughout the study area using Trimble R10 GNSS receivers operating in RTK mode. Each image dataset is processed to produce a dense point cloud for 3D evaluation.
Applications and Improvement of a Coupled, Global and Cloud-Resolving Modeling System
NASA Technical Reports Server (NTRS)
Tao, W.-K.; Chern, J.; Atlas, R.
2005-01-01
Recently Grabowski (2001) and Khairoutdinov and Randall (2001) have proposed the use of 2D CFWs as a "super parameterization" [or multi-scale modeling framework (MMF)] to represent cloud processes within atmospheric general circulation models (GCMs). In the MMF, a fine-resolution 2D CRM takes the place of the single-column parameterization used in conventional GCMs. A prototype Goddard MMF based on the 2D Goddard Cumulus Ensemble (GCE) model and the Goddard finite volume general circulation model (fvGCM) is now being developed. The prototype includes the fvGCM run at 2.50 x 20 horizontal resolution with 32 vertical layers from the surface to 1 mb and the 2D (x-z) GCE using 64 horizontal and 32 vertical grid points with 4 km horizontal resolution and a cyclic lateral boundary. The time step for the 2D GCE would be 15 seconds, and the fvGCM-GCE coupling frequency would be 30 minutes (i.e. the fvGCM physical time step). We have successfully developed an fvGCM-GCE coupler for this prototype. Because the vertical coordinate of the fvGCM (a terrain-following floating Lagrangian coordinate) is different from that of the GCE (a z coordinate), vertical interpolations between the two coordinates are needed in the coupler. In interpolating fields from the GCE to fvGCM, we use an existing fvGCM finite- volume piecewise parabolic mapping (PPM) algorithm, which conserves the mass, momentum, and total energy. A new finite-volume PPM algorithm, which conserves the mass, momentum and moist static energy in the z coordinate, is being developed for interpolating fields from the fvGCM to the GCE. In the meeting, we will discuss the major differences between the two MMFs (i.e., the CSU MMF and the Goddard MMF). We will also present performance and critical issues related to the MMFs. In addition, we will present multi-dimensional cloud datasets (i.e., a cloud data library) generated by the Goddard MMF that will be provided to the global modeling community to help improve the representation and performance of moist processes in climate models and to improve our understanding of cloud processes globally (the software tools needed to produce cloud statistics and to identify various types of clouds and cloud systems from both high-resolution satellite and model data will be also presented).
NASA Astrophysics Data System (ADS)
Bolkas, Dimitrios; Martinez, Aaron
2018-01-01
Point-cloud coordinate information derived from terrestrial Light Detection And Ranging (LiDAR) is important for several applications in surveying and civil engineering. Plane fitting and segmentation of target-surfaces is an important step in several applications such as in the monitoring of structures. Reliable parametric modeling and segmentation relies on the underlying quality of the point-cloud. Therefore, understanding how point-cloud errors affect fitting of planes and segmentation is important. Point-cloud intensity, which accompanies the point-cloud data, often goes hand-in-hand with point-cloud noise. This study uses industrial particle boards painted with eight different colors (black, white, grey, red, green, blue, brown, and yellow) and two different sheens (flat and semi-gloss) to explore how noise and plane residuals vary with scanning geometry (i.e., distance and incidence angle) and target-color. Results show that darker colors, such as black and brown, can produce point clouds that are several times noisier than bright targets, such as white. In addition, semi-gloss targets manage to reduce noise in dark targets by about 2-3 times. The study of plane residuals with scanning geometry reveals that, in many of the cases tested, residuals decrease with increasing incidence angles, which can assist in understanding the distribution of plane residuals in a dataset. Finally, a scheme is developed to derive survey guidelines based on the data collected in this experiment. Three examples demonstrate that users should consider instrument specification, required precision of plane residuals, required point-spacing, target-color, and target-sheen, when selecting scanning locations. Outcomes of this study can aid users to select appropriate instrumentation and improve planning of terrestrial LiDAR data-acquisition.
Alonzo, Michael; Van Den Hoek, Jamon; Ahmed, Nabil
2016-10-11
The socio-ecological impacts of large scale resource extraction are frequently underreported in underdeveloped regions. The open-pit Grasberg mine in Papua, Indonesia, is one of the world's largest copper and gold extraction operations. Grasberg mine tailings are discharged into the lowland Ajkwa River deposition area (ADA) leading to forest inundation and degradation of water bodies critical to indigenous peoples. The extent of the changes and temporal linkages with mining activities are difficult to establish given restricted access to the region and persistent cloud cover. Here, we introduce remote sensing methods to "peer through" atmospheric contamination using a dense Landsat time series to simultaneously quantify forest loss and increases in estuarial suspended particulate matter (SPM) concentration. We identified 138 km 2 of forest loss between 1987 and 2014, an area >42 times larger than the mine itself. Between 1987 and 1998, the rate of disturbance was highly correlated (Pearson's r = 0.96) with mining activity. Following mine expansion and levee construction along the ADA in the mid-1990s, we recorded significantly (p < 0.05) higher SPM in the Ajkwa Estuary compared to neighboring estuaries. This research provides a means to quantify multiple modes of ecological damage from mine waste disposal or other disturbance events.
Alonzo, Michael; Van Den Hoek, Jamon; Ahmed, Nabil
2016-01-01
The socio-ecological impacts of large scale resource extraction are frequently underreported in underdeveloped regions. The open-pit Grasberg mine in Papua, Indonesia, is one of the world’s largest copper and gold extraction operations. Grasberg mine tailings are discharged into the lowland Ajkwa River deposition area (ADA) leading to forest inundation and degradation of water bodies critical to indigenous peoples. The extent of the changes and temporal linkages with mining activities are difficult to establish given restricted access to the region and persistent cloud cover. Here, we introduce remote sensing methods to “peer through” atmospheric contamination using a dense Landsat time series to simultaneously quantify forest loss and increases in estuarial suspended particulate matter (SPM) concentration. We identified 138 km2 of forest loss between 1987 and 2014, an area >42 times larger than the mine itself. Between 1987 and 1998, the rate of disturbance was highly correlated (Pearson’s r = 0.96) with mining activity. Following mine expansion and levee construction along the ADA in the mid-1990s, we recorded significantly (p < 0.05) higher SPM in the Ajkwa Estuary compared to neighboring estuaries. This research provides a means to quantify multiple modes of ecological damage from mine waste disposal or other disturbance events. PMID:27725748
NASA Astrophysics Data System (ADS)
Alonzo, Michael; van den Hoek, Jamon; Ahmed, Nabil
2016-10-01
The socio-ecological impacts of large scale resource extraction are frequently underreported in underdeveloped regions. The open-pit Grasberg mine in Papua, Indonesia, is one of the world’s largest copper and gold extraction operations. Grasberg mine tailings are discharged into the lowland Ajkwa River deposition area (ADA) leading to forest inundation and degradation of water bodies critical to indigenous peoples. The extent of the changes and temporal linkages with mining activities are difficult to establish given restricted access to the region and persistent cloud cover. Here, we introduce remote sensing methods to “peer through” atmospheric contamination using a dense Landsat time series to simultaneously quantify forest loss and increases in estuarial suspended particulate matter (SPM) concentration. We identified 138 km2 of forest loss between 1987 and 2014, an area >42 times larger than the mine itself. Between 1987 and 1998, the rate of disturbance was highly correlated (Pearson’s r = 0.96) with mining activity. Following mine expansion and levee construction along the ADA in the mid-1990s, we recorded significantly (p < 0.05) higher SPM in the Ajkwa Estuary compared to neighboring estuaries. This research provides a means to quantify multiple modes of ecological damage from mine waste disposal or other disturbance events.
A simple biota removal algorithm for 35 GHz cloud radar measurements
NASA Astrophysics Data System (ADS)
Kalapureddy, Madhu Chandra R.; Sukanya, Patra; Das, Subrata K.; Deshpande, Sachin M.; Pandithurai, Govindan; Pazamany, Andrew L.; Ambuj K., Jha; Chakravarty, Kaustav; Kalekar, Prasad; Krishna Devisetty, Hari; Annam, Sreenivas
2018-03-01
Cloud radar reflectivity profiles can be an important measurement for the investigation of cloud vertical structure (CVS). However, extracting intended meteorological cloud content from the measurement often demands an effective technique or algorithm that can reduce error and observational uncertainties in the recorded data. In this work, a technique is proposed to identify and separate cloud and non-hydrometeor echoes using the radar Doppler spectral moments profile measurements. The point and volume target-based theoretical radar sensitivity curves are used for removing the receiver noise floor and identified radar echoes are scrutinized according to the signal decorrelation period. Here, it is hypothesized that cloud echoes are observed to be temporally more coherent and homogenous and have a longer correlation period than biota. That can be checked statistically using ˜ 4 s sliding mean and standard deviation value of reflectivity profiles. The above step helps in screen out clouds critically by filtering out the biota. The final important step strives for the retrieval of cloud height. The proposed algorithm potentially identifies cloud height solely through the systematic characterization of Z variability using the local atmospheric vertical structure knowledge besides to the theoretical, statistical and echo tracing tools. Thus, characterization of high-resolution cloud radar reflectivity profile measurements has been done with the theoretical echo sensitivity curves and observed echo statistics for the true cloud height tracking (TEST). TEST showed superior performance in screening out clouds and filtering out isolated insects. TEST constrained with polarimetric measurements was found to be more promising under high-density biota whereas TEST combined with linear depolarization ratio and spectral width perform potentially to filter out biota within the highly turbulent shallow cumulus clouds in the convective boundary layer (CBL). This TEST technique is promisingly simple in realization but powerful in performance due to the flexibility in constraining, identifying and filtering out the biota and screening out the true cloud content, especially the CBL clouds. Therefore, the TEST algorithm is superior for screening out the low-level clouds that are strongly linked to the rainmaking mechanism associated with the Indian Summer Monsoon region's CVS.
Temporally consistent segmentation of point clouds
NASA Astrophysics Data System (ADS)
Owens, Jason L.; Osteen, Philip R.; Daniilidis, Kostas
2014-06-01
We consider the problem of generating temporally consistent point cloud segmentations from streaming RGB-D data, where every incoming frame extends existing labels to new points or contributes new labels while maintaining the labels for pre-existing segments. Our approach generates an over-segmentation based on voxel cloud connectivity, where a modified k-means algorithm selects supervoxel seeds and associates similar neighboring voxels to form segments. Given the data stream from a potentially mobile sensor, we solve for the camera transformation between consecutive frames using a joint optimization over point correspondences and image appearance. The aligned point cloud may then be integrated into a consistent model coordinate frame. Previously labeled points are used to mask incoming points from the new frame, while new and previous boundary points extend the existing segmentation. We evaluate the algorithm on newly-generated RGB-D datasets.
Critical infrastructure monitoring using UAV imagery
NASA Astrophysics Data System (ADS)
Maltezos, Evangelos; Skitsas, Michael; Charalambous, Elisavet; Koutras, Nikolaos; Bliziotis, Dimitris; Themistocleous, Kyriacos
2016-08-01
The constant technological evolution in Computer Vision enabled the development of new techniques which in conjunction with the use of Unmanned Aerial Vehicles (UAVs) may extract high quality photogrammetric products for several applications. Dense Image Matching (DIM) is a Computer Vision technique that can generate a dense 3D point cloud of an area or object. The use of UAV systems and DIM techniques is not only a flexible and attractive solution to produce accurate and high qualitative photogrammetric results but also is a major contribution to cost effectiveness. In this context, this study aims to highlight the benefits of the use of the UAVs in critical infrastructure monitoring applying DIM. A Multi-View Stereo (MVS) approach using multiple images (RGB digital aerial and oblique images), to fully cover the area of interest, is implemented. The application area is an Olympic venue in Attica, Greece, at an area of 400 acres. The results of our study indicate that the UAV+DIM approach respond very well to the increasingly greater demands for accurate and cost effective applications when provided with, a 3D point cloud and orthomosaic.
Hybrid Automatic Building Interpretation System
NASA Astrophysics Data System (ADS)
Pakzad, K.; Klink, A.; Müterthies, A.; Gröger, G.; Stroh, V.; Plümer, L.
2011-09-01
HABIS (Hybrid Automatic Building Interpretation System) is a system for an automatic reconstruction of building roofs used in virtual 3D building models. Unlike most of the commercially available systems, HABIS is able to work to a high degree automatically. The hybrid method uses different sources intending to exploit the advantages of the particular sources. 3D point clouds usually provide good height and surface data, whereas spatial high resolution aerial images provide important information for edges and detail information for roof objects like dormers or chimneys. The cadastral data provide important basis information about the building ground plans. The approach used in HABIS works with a multi-stage-process, which starts with a coarse roof classification based on 3D point clouds. After that it continues with an image based verification of these predicted roofs. In a further step a final classification and adjustment of the roofs is done. In addition some roof objects like dormers and chimneys are also extracted based on aerial images and added to the models. In this paper the used methods are described and some results are presented.
An automated 3D reconstruction method of UAV images
NASA Astrophysics Data System (ADS)
Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping
2015-10-01
In this paper a novel fully automated 3D reconstruction approach based on low-altitude unmanned aerial vehicle system (UAVs) images will be presented, which does not require previous camera calibration or any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost, time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large scene reconstruction by taking advantage of the flight-control data acquired by UAV. Image topology map can significantly reduce the running time of feature matching by limiting the combination of images. A high-resolution digital surface model of the study area is produced base on UAV point clouds by constructing the triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction of low-altitude UAV images, and has great potential for the acquisition of spatial information at large scales mapping, especially suitable for rapid response and precise modelling in disaster emergency.
3D Reconstruction of Irregular Buildings and Buddha Statues
NASA Astrophysics Data System (ADS)
Zhang, K.; Li, M.-j.
2014-04-01
Three-dimensional laser scanning could acquire object's surface data quickly and accurately. However, the post-processing of point cloud is not perfect and could be improved. Based on the study of 3D laser scanning technology, this paper describes the details of solutions to modelling irregular ancient buildings and Buddha statues in Jinshan Temple, which aiming at data acquisition, modelling and texture mapping, etc. In order to modelling irregular ancient buildings effectively, the structure of each building is extracted manually by point cloud and the textures are mapped by the software of 3ds Max. The methods clearly combine 3D laser scanning technology with traditional modelling methods, and greatly improves the efficiency and accuracy of the ancient buildings restored. On the other hand, the main idea of modelling statues is regarded as modelling objects in reverse engineering. The digital model of statues obtained is not just vivid, but also accurate in the field of surveying and mapping. On this basis, a 3D scene of Jinshan Temple is reconstructed, which proves the validity of the solutions.
Holistic Interactions of Shallow Clouds, Aerosols, and Land-Ecosystems (HI-SCALE) Science Plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fast, JD; Berg, LK
Cumulus convection is an important component in the atmospheric radiation budget and hydrologic cycle over the Southern Great Plains and over many regions of the world, particularly during the summertime growing season when intense turbulence induced by surface radiation couples the land surface to clouds. Current convective cloud parameterizations contain uncertainties resulting in part from insufficient coincident data that couples cloud macrophysical and microphysical properties to inhomogeneities in boundary layer and aerosol properties. The Holistic Interactions of Shallow Clouds, Aerosols, and Land-Ecosystems (HI-SCALE) campaign is designed to provide a detailed set of measurements that are needed to obtain a moremore » complete understanding of the life cycle of shallow clouds by coupling cloud macrophysical and microphysical properties to land surface properties, ecosystems, and aerosols. HI-SCALE consists of 2, 4-week intensive observational periods, one in the spring and the other in the late summer, to take advantage of different stages and distribution of “greenness” for various types of vegetation in the vicinity of the Atmospheric Radiation and Measurement (ARM) Climate Research Facility’s Southern Great Plains (SGP) site as well as aerosol properties that vary during the growing season. Most of the proposed instrumentation will be deployed on the ARM Aerial Facility (AAF) Gulfstream 1 (G-1) aircraft, including those that measure atmospheric turbulence, cloud water content and drop size distributions, aerosol precursor gases, aerosol chemical composition and size distributions, and cloud condensation nuclei concentrations. Routine ARM aerosol measurements made at the surface will be supplemented with aerosol microphysical properties measurements. The G-1 aircraft will complete transects over the SGP Central Facility at multiple altitudes within the boundary layer, within clouds, and above clouds.« less
a Gross Error Elimination Method for Point Cloud Data Based on Kd-Tree
NASA Astrophysics Data System (ADS)
Kang, Q.; Huang, G.; Yang, S.
2018-04-01
Point cloud data has been one type of widely used data sources in the field of remote sensing. Key steps of point cloud data's pro-processing focus on gross error elimination and quality control. Owing to the volume feature of point could data, existed gross error elimination methods need spend massive memory both in space and time. This paper employed a new method which based on Kd-tree algorithm to construct, k-nearest neighbor algorithm to search, settled appropriate threshold to determine with result turns out a judgement that whether target point is or not an outlier. Experimental results show that, our proposed algorithm will help to delete gross error in point cloud data and facilitate to decrease memory consumption, improve efficiency.
Strong-coupling Bose polarons out of equilibrium: Dynamical renormalization-group approach
NASA Astrophysics Data System (ADS)
Grusdt, Fabian; Seetharam, Kushal; Shchadilova, Yulia; Demler, Eugene
2018-03-01
When a mobile impurity interacts with a surrounding bath of bosons, it forms a polaron. Numerous methods have been developed to calculate how the energy and the effective mass of the polaron are renormalized by the medium for equilibrium situations. Here, we address the much less studied nonequilibrium regime and investigate how polarons form dynamically in time. To this end, we develop a time-dependent renormalization-group approach which allows calculations of all dynamical properties of the system and takes into account the effects of quantum fluctuations in the polaron cloud. We apply this method to calculate trajectories of polarons following a sudden quench of the impurity-boson interaction strength, revealing how the polaronic cloud around the impurity forms in time. Such trajectories provide additional information about the polaron's properties which are challenging to extract directly from the spectral function measured experimentally using ultracold atoms. At strong couplings, our calculations predict the appearance of trajectories where the impurity wavers back at intermediate times as a result of quantum fluctuations. Our method is applicable to a broader class of nonequilibrium problems. As a check, we also apply it to calculate the spectral function and find good agreement with experimental results. At very strong couplings, we predict that quantum fluctuations lead to the appearance of a dark continuum with strongly suppressed spectral weight at low energies. While our calculations start from an effective Fröhlich Hamiltonian describing impurities in a three-dimensional Bose-Einstein condensate, we also calculate the effects of additional terms in the Hamiltonian beyond the Fröhlich paradigm. We demonstrate that the main effect of these additional terms on the attractive side of a Feshbach resonance is to renormalize the coupling strength of the effective Fröhlich model.
Warrick, Jonathan; Ritchie, Andy; Adelman, Gabrielle; Adelman, Ken; Limber, Patrick W.
2017-01-01
Oblique aerial photograph surveys are commonly used to document coastal landscapes. Here it is shown that adequate overlap may exist in these photographic records to develop topographic models with Structure-from-Motion (SfM) photogrammetric techniques. Using photographs of Fort Funston, California, from the California Coastal Records Project, imagery were combined with ground control points in a four-dimensional analysis that produced topographic point clouds of the study area’s cliffs for 5 years spanning 2002 to 2010. Uncertainty was assessed by comparing point clouds with airborne LIDAR data, and these uncertainties were related to the number and spatial distribution of ground control points used in the SfM analyses. With six or more ground control points, the root mean squared errors between the SfM and LIDAR data were less than 0.30 m (minimum 1⁄4 0.18 m), and the mean systematic error was less than 0.10 m. The SfM results had several benefits over traditional airborne LIDAR in that they included point coverage on vertical- to-overhanging sections of the cliff and resulted in 10–100 times greater point densities. Time series of the SfM results revealed topographic changes, including landslides, rock falls, and the erosion of landslide talus along the Fort Funston beach. Thus, it was concluded that SfM photogrammetric techniques with historical oblique photographs allow for the extraction of useful quantitative information for mapping coastal topography and measuring coastal change. The new techniques presented here are likely applicable to many photograph collections and problems in the earth sciences.
Light Stops at Exceptional Points
NASA Astrophysics Data System (ADS)
Goldzak, Tamar; Mailybaev, Alexei A.; Moiseyev, Nimrod
2018-01-01
Almost twenty years ago, light was slowed down to less than 10-7 of its vacuum speed in a cloud of ultracold atoms of sodium. Upon a sudden turn-off of the coupling laser, a slow light pulse can be imprinted on cold atoms such that it can be read out and converted into a photon again. In this process, the light is stopped by absorbing it and storing its shape within the atomic ensemble. Alternatively, the light can be stopped at the band edge in photonic-crystal waveguides, where the group speed vanishes. Here, we extend the phenomenon of stopped light to the new field of parity-time (P T ) symmetric systems. We show that zero group speed in P T symmetric optical waveguides can be achieved if the system is prepared at an exceptional point, where two optical modes coalesce. This effect can be tuned for optical pulses in a wide range of frequencies and bandwidths, as we demonstrate in a system of coupled waveguides with gain and loss.
Object Detection using the Kinect
2012-03-01
Kinect camera and point cloud data from the Kinect’s structured light stereo system (figure 1). We obtain reasonable results using a single prototype...same manner we present in this report. For example, at Willow Garage , Steder uses a 3-D feature he developed to classify objects directly from point...detecting backpacks using the data available from the Kinect sensor. 4 3.1 Point Cloud Filtering Dense point clouds derived from stereo are notoriously
How Will Aerosol-Cloud Interactions Change in an Ice-Free Arctic Summer?
NASA Astrophysics Data System (ADS)
Gilgen, Anina; Katty Huang, Wan Ting; Ickes, Luisa; Lohmann, Ulrike
2016-04-01
Future temperatures in the Arctic are expected to increase more than the global mean temperature, which will lead to a pronounced retreat in Arctic sea ice. Before mid-century, most sea ice will likely have vanished in late Arctic summers. This will allow ships to cruise in the Arctic Ocean, e.g. to shorten their transport passage or to extract oil. Since both ships and open water emit aerosol particles and precursors, Arctic clouds and radiation may be affected via aerosol-cloud and cloud-radiation interactions. The change in radiation feeds back on temperature and sea ice retreat. In addition to aerosol particles, also the temperature and the open ocean as a humidity source should have a strong effect on clouds. The main goal of this study is to assess the impact of sea ice retreat on the Arctic climate with focus on aerosol emissions and cloud properties. To this purpose, we conducted ensemble runs with the global climate model ECHAM6-HAM2 under present-day and future (2050) conditions. ECHAM6-HAM2 was coupled with a mixed layer ocean model, which includes a sea ice model. To estimate Arctic aerosol emissions from ships, we used an elaborated ship emission inventory (Peters et al. 2011); changes in aerosol emissions from the ocean are calculated online. Preliminary results show that the sea salt aerosol and the dimethyl sulfide burdens over the Arctic Ocean significantly increase. While the ice water path decreases, the total water path increases. Due to the decrease in surface albedo, the cooling effect of the Arctic clouds becomes more important in 2050. Enhanced Arctic shipping has only a very small impact. The increase in the aersol burden due to shipping is less pronounced than the increase due to natural emissions even if the ship emissions are increased by a factor of ten. Hence, there is hardly an effect on clouds and radiation caused by shipping. References Peters et al. (2011), Atmos. Chem. Phys., 11, 5305-5320
A Modular Approach to Video Designation of Manipulation Targets for Manipulators
2014-05-12
side view of a ray going through a point cloud of a water bottle sitting on the ground. The bottom left image shows the same point cloud after it has...System (ROS), Point Cloud Library (PCL), and OpenRAVE were used to a great extent to help promote reusability of the code developed during this
NASA Astrophysics Data System (ADS)
Dipu, Sudhakar; Quaas, Johannes; Wolke, Ralf; Stoll, Jens; Mühlbauer, Andreas; Sourdeval, Odran; Salzmann, Marc; Heinold, Bernd; Tegen, Ina
2017-06-01
The regional atmospheric model Consortium for Small-scale Modeling (COSMO) coupled to the Multi-Scale Chemistry Aerosol Transport model (MUSCAT) is extended in this work to represent aerosol-cloud interactions. Previously, only one-way interactions (scavenging of aerosol and in-cloud chemistry) and aerosol-radiation interactions were included in this model. The new version allows for a microphysical aerosol effect on clouds. For this, we use the optional two-moment cloud microphysical scheme in COSMO and the online-computed aerosol information for cloud condensation nuclei concentrations (Cccn), replacing the constant Cccn profile. In the radiation scheme, we have implemented a droplet-size-dependent cloud optical depth, allowing now for aerosol-cloud-radiation interactions. To evaluate the models with satellite data, the Cloud Feedback Model Intercomparison Project Observation Simulator Package (COSP) has been implemented. A case study has been carried out to understand the effects of the modifications, where the modified modeling system is applied over the European domain with a horizontal resolution of 0.25° × 0.25°. To reduce the complexity in aerosol-cloud interactions, only warm-phase clouds are considered. We found that the online-coupled aerosol introduces significant changes for some cloud microphysical properties. The cloud effective radius shows an increase of 9.5 %, and the cloud droplet number concentration is reduced by 21.5 %.
Automatic Matching of Large Scale Images and Terrestrial LIDAR Based on App Synergy of Mobile Phone
NASA Astrophysics Data System (ADS)
Xia, G.; Hu, C.
2018-04-01
The digitalization of Cultural Heritage based on ground laser scanning technology has been widely applied. High-precision scanning and high-resolution photography of cultural relics are the main methods of data acquisition. The reconstruction with the complete point cloud and high-resolution image requires the matching of image and point cloud, the acquisition of the homonym feature points, the data registration, etc. However, the one-to-one correspondence between image and corresponding point cloud depends on inefficient manual search. The effective classify and management of a large number of image and the matching of large image and corresponding point cloud will be the focus of the research. In this paper, we propose automatic matching of large scale images and terrestrial LiDAR based on APP synergy of mobile phone. Firstly, we develop an APP based on Android, take pictures and record related information of classification. Secondly, all the images are automatically grouped with the recorded information. Thirdly, the matching algorithm is used to match the global and local image. According to the one-to-one correspondence between the global image and the point cloud reflection intensity image, the automatic matching of the image and its corresponding laser radar point cloud is realized. Finally, the mapping relationship between global image, local image and intensity image is established according to homonym feature point. So we can establish the data structure of the global image, the local image in the global image, the local image corresponding point cloud, and carry on the visualization management and query of image.
NASA Astrophysics Data System (ADS)
Vázquez Tarrío, Daniel; Borgniet, Laurent; Recking, Alain; Liebault, Frédéric; Vivier, Marie
2016-04-01
The present research is focused on the Vénéon river at Plan du Lac (Massif des Ecrins, France), an alpine braided gravel bed stream with a glacio-nival hydrological regime. It drains a catchment area of 316 km2. The present research is focused in a 2.5 km braided reach placed immediately upstream of a small hydropower dam. An airbone LIDAR survey was accomplished in October, 2014 by EDF (the company managing the small hydropower dam), and data coming from this LIDAR survey were available for the present research. Point density of the LIDAR-derived 3D-point cloud was between 20-50 points/m2, with a vertical precision of 2-3 cm over flat surfaces. Moreover, between April and Juin, 2015, we carried out a photogrammetrical campaign based in aerial images taken with an UAV-drone. The UAV-derived point-cloud has a point density of 200-300 points/m2, and a vertical precision over flat control surfaces comparable to that of the LIDAR point cloud (2-3 cm). Simultaneously to the UAV campaign, we took several Wolman samples with the aim of characterizing the grain size distribution of bed sediment. Wolman samples were taken following a geomorphological criterion (unit bars, head/tail of compound bars). Furthermore, some of the Wolman samples were repeated with the aim of defining the uncertainty of our sampling protocol. LIDAR and UAV-derived point clouds were treated in order to check whether both point-clouds were correctly co-aligned. After that, we estimated bed roughness using the detrended standard deviation of heights, in a 40-cm window. For all this data treatment we used CloudCompare. Then, we measured the distribution of roughness in the same geomorphological units where we took the Wolman samples, and we compared with the grain size distributions measured in the field: differences between UAV-point cloud roughness distributions and measured-grain size distribution (~1-2 cm) are in the same order of magnitude of the differences found between the repeated Wolman samples (~0.5-1.5 cm). Differences with LIDAR-derived roughness distributions are only slightly higher, which could be due to the lower point density of the LIDAR point clouds.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, X.; Klein, S. A.; Ma, H. -Y.
The Community Atmosphere Model (CAM) adopts Cloud Layers Unified By Binormals (CLUBB) scheme and an updated microphysics (MG2) scheme for a more unified treatment of cloud processes. This makes interactions between parameterizations tighter and more explicit. In this study, a cloudy planetary boundary layer (PBL) oscillation related to interaction between CLUBB and MG2 is identified in CAM. This highlights the need for consistency between the coupled subgrid processes in climate model development. This oscillation occurs most often in the marine cumulus cloud regime. The oscillation occurs only if the modeled PBL is strongly decoupled and precipitation evaporates below the cloud.more » Two aspects of the parameterized coupling assumptions between CLUBB and MG2 schemes cause the oscillation: (1) a parameterized relationship between rain evaporation and CLUBB's subgrid spatial variance of moisture and heat that induces an extra cooling in the lower PBL and (2) rain evaporation which happens at a too low an altitude because of the precipitation fraction parameterization in MG2. Either one of these two conditions can overly stabilize the PBL and reduce the upward moisture transport to the cloud layer so that the PBL collapses. Global simulations prove that turning off the evaporation-variance coupling and improving the precipitation fraction parameterization effectively reduces the cloudy PBL oscillation in marine cumulus clouds. By evaluating the causes of the oscillation in CAM, we have identified the PBL processes that should be examined in models having similar oscillations. This study may draw the attention of the modeling and observational communities to the issue of coupling between parameterized physical processes.« less
Zheng, X.; Klein, S. A.; Ma, H. -Y.; ...
2017-08-24
The Community Atmosphere Model (CAM) adopts Cloud Layers Unified By Binormals (CLUBB) scheme and an updated microphysics (MG2) scheme for a more unified treatment of cloud processes. This makes interactions between parameterizations tighter and more explicit. In this study, a cloudy planetary boundary layer (PBL) oscillation related to interaction between CLUBB and MG2 is identified in CAM. This highlights the need for consistency between the coupled subgrid processes in climate model development. This oscillation occurs most often in the marine cumulus cloud regime. The oscillation occurs only if the modeled PBL is strongly decoupled and precipitation evaporates below the cloud.more » Two aspects of the parameterized coupling assumptions between CLUBB and MG2 schemes cause the oscillation: (1) a parameterized relationship between rain evaporation and CLUBB's subgrid spatial variance of moisture and heat that induces an extra cooling in the lower PBL and (2) rain evaporation which happens at a too low an altitude because of the precipitation fraction parameterization in MG2. Either one of these two conditions can overly stabilize the PBL and reduce the upward moisture transport to the cloud layer so that the PBL collapses. Global simulations prove that turning off the evaporation-variance coupling and improving the precipitation fraction parameterization effectively reduces the cloudy PBL oscillation in marine cumulus clouds. By evaluating the causes of the oscillation in CAM, we have identified the PBL processes that should be examined in models having similar oscillations. This study may draw the attention of the modeling and observational communities to the issue of coupling between parameterized physical processes.« less
NASA Astrophysics Data System (ADS)
Maahn, M.; Acquistapace, C.; de Boer, G.; Cox, C.; Feingold, G.; Marke, T.; Williams, C. R.
2017-12-01
When acting as cloud condensation nuclei (CCN) or ice nucleating particles (INPs), aerosols have a strong potential to influence cloud properties. In particular, they can impact the number, size, and phase of cloud particles and potentially cloud lifetime through aerosol indirect and semi-direct effects. In polar regions, these effects are of great importance for the radiation budget due to the shortwave albedo and longwave emissivity of mixed-phase clouds. The Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) program operates two super sites equipped with state of the art ground-based remote sensing instruments in northern Alaska. The sites are both coastal and are highly correlated with respect to large scale synoptic patterns. While the site at Utqiaġvik (formerly known as Barrow) generally represents a relatively pristine Arctic environment lacking significant anthropogenic sources, the site at Oliktok Point, approximately 250 km to the east, is surrounded by the Prudhoe Bay Oil Field, which is the largest oil field in North America. Based on aircraft measurement, the authors recently showed that differences in the properties of liquid clouds properties between the sites can be attributed to local emissions associated with the industrial activities in the Prudhoe Bay region (Maahn et al. 2017, ACPD). However, aircraft measurements do not provide a representative sample of cloud properties due to temporal limitations in the amount of data. In order to investigate how frequently and to what extent liquid cloud properties and processes are modified, we use ground based remote sensing observations such as e.g., cloud radar, Doppler lidar, and microwave radiometer obtained continuously at the two sites. In this way, we are able to quantify inter-site differences with respect to cloud drizzle production, liquid water path, frequency of cloud occurrence, and cloud radiative properties. Turbulence and the coupling of clouds to the boundary layer is investigated in order to infer the potential role of locally emitted aerosols in modulating the observed differences.
NASA Astrophysics Data System (ADS)
Gong, Y.; Yang, Y.; Yang, X.
2018-04-01
For the purpose of extracting productions of some specific branching plants effectively and realizing its 3D reconstruction, Terrestrial LiDAR data was used as extraction source of production, and a 3D reconstruction method based on Terrestrial LiDAR technologies combined with the L-system was proposed in this article. The topology structure of the plant architectures was extracted using the point cloud data of the target plant with space level segmentation mechanism. Subsequently, L-system productions were obtained and the structural parameters and production rules of branches, which fit the given plant, was generated. A three-dimensional simulation model of target plant was established combined with computer visualization algorithm finally. The results suggest that the method can effectively extract a given branching plant topology and describes its production, realizing the extraction of topology structure by the computer algorithm for given branching plant and also simplifying the extraction of branching plant productions which would be complex and time-consuming by L-system. It improves the degree of automation in the L-system extraction of productions of specific branching plants, providing a new way for the extraction of branching plant production rules.
Fast Semantic Segmentation of 3d Point Clouds with Strongly Varying Density
NASA Astrophysics Data System (ADS)
Hackel, Timo; Wegner, Jan D.; Schindler, Konrad
2016-06-01
We describe an effective and efficient method for point-wise semantic classification of 3D point clouds. The method can handle unstructured and inhomogeneous point clouds such as those derived from static terrestrial LiDAR or photogammetric reconstruction; and it is computationally efficient, making it possible to process point clouds with many millions of points in a matter of minutes. The key issue, both to cope with strong variations in point density and to bring down computation time, turns out to be careful handling of neighborhood relations. By choosing appropriate definitions of a point's (multi-scale) neighborhood, we obtain a feature set that is both expressive and fast to compute. We evaluate our classification method both on benchmark data from a mobile mapping platform and on a variety of large, terrestrial laser scans with greatly varying point density. The proposed feature set outperforms the state of the art with respect to per-point classification accuracy, while at the same time being much faster to compute.
A new morphology algorithm for shoreline extraction from DEM data
NASA Astrophysics Data System (ADS)
Yousef, Amr H.; Iftekharuddin, Khan; Karim, Mohammad
2013-03-01
Digital elevation models (DEMs) are a digital representation of elevations at regularly spaced points. They provide an accurate tool to extract the shoreline profiles. One of the emerging sources of creating them is light detection and ranging (LiDAR) that can capture a highly dense cloud points with high resolution that can reach 15 cm and 100 cm in the vertical and horizontal directions respectively in short periods of time. In this paper we present a multi-step morphological algorithm to extract shorelines locations from the DEM data and a predefined tidal datum. Unlike similar approaches, it utilizes Lowess nonparametric regression to estimate the missing values within the DEM file. Also, it will detect and eliminate the outliers and errors that result from waves, ships, etc by means of anomality test with neighborhood constrains. Because, there might be some significant broken regions such as branches and islands, it utilizes a constrained morphological open and close to reduce these artifacts that can affect the extracted shorelines. In addition, it eliminates docks, bridges and fishing piers along the extracted shorelines by means of Hough transform. Based on a specific tidal datum, the algorithm will segment the DEM data into water and land objects. Without sacrificing the accuracy and the spatial details of the extracted boundaries, the algorithm should smooth and extract the shoreline profiles by tracing the boundary pixels between the land and the water segments. For given tidal values, we qualitatively assess the visual quality of the extracted shorelines by superimposing them on the available aerial photographs.
Takalo, Jouni; Timonen, Jussi; Sampo, Jouni; Rantala, Maaria; Siltanen, Samuli; Lassas, Matti
2014-11-01
A novel method is presented for distinguishing postal stamp forgeries and counterfeit banknotes from genuine samples. The method is based on analyzing differences in paper fibre networks. The main tool is a curvelet-based algorithm for measuring overall fibre orientation distribution and quantifying anisotropy. Using a couple of more appropriate parameters makes it possible to distinguish forgeries from genuine originals as concentrated point clouds in two- or three-dimensional parameter space. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Cellular convection in the atmosphere of Venus
NASA Technical Reports Server (NTRS)
Baker, R. D., II; Schubert, Gerald
1992-01-01
Among the most intriguing feature of the atmosphere of Venus is the presence of cellular structures near and downwind of the subpolar point. It has been suggested that the structures are atmospheric convection cells, but their breadth and thinness would pose a severe challenge to the dynamics of convection. It is proposed here that strongly penetrative convection into the stable regions above and below the neutrally stable cloud layer coupled with penetrative convection from the surface increases the vertical dimensions of the cells, thereby helping to explain their large horizontal extent.
A Voxel-Based Approach for Imaging Voids in Three-Dimensional Point Clouds
NASA Astrophysics Data System (ADS)
Salvaggio, Katie N.
Geographically accurate scene models have enormous potential beyond that of just simple visualizations in regard to automated scene generation. In recent years, thanks to ever increasing computational efficiencies, there has been significant growth in both the computer vision and photogrammetry communities pertaining to automatic scene reconstruction from multiple-view imagery. The result of these algorithms is a three-dimensional (3D) point cloud which can be used to derive a final model using surface reconstruction techniques. However, the fidelity of these point clouds has not been well studied, and voids often exist within the point cloud. Voids exist in texturally difficult areas, as well as areas where multiple views were not obtained during collection, constant occlusion existed due to collection angles or overlapping scene geometry, or in regions that failed to triangulate accurately. It may be possible to fill in small voids in the scene using surface reconstruction or hole-filling techniques, but this is not the case with larger more complex voids, and attempting to reconstruct them using only the knowledge of the incomplete point cloud is neither accurate nor aesthetically pleasing. A method is presented for identifying voids in point clouds by using a voxel-based approach to partition the 3D space. By using collection geometry and information derived from the point cloud, it is possible to detect unsampled voxels such that voids can be identified. This analysis takes into account the location of the camera and the 3D points themselves to capitalize on the idea of free space, such that voxels that lie on the ray between the camera and point are devoid of obstruction, as a clear line of sight is a necessary requirement for reconstruction. Using this approach, voxels are classified into three categories: occupied (contains points from the point cloud), free (rays from the camera to the point passed through the voxel), and unsampled (does not contain points and no rays passed through the area). Voids in the voxel space are manifested as unsampled voxels. A similar line-of-sight analysis can then be used to pinpoint locations at aircraft altitude at which the voids in the point clouds could theoretically be imaged. This work is based on the assumption that inclusion of more images of the void areas in the 3D reconstruction process will reduce the number of voids in the point cloud that were a result of lack of coverage. Voids resulting from texturally difficult areas will not benefit from more imagery in the reconstruction process, and thus are identified and removed prior to the determination of future potential imaging locations.
NASA Astrophysics Data System (ADS)
Sun, Z.; Cao, Y. K.
2015-08-01
The paper focuses on the versatility of data processing workflows ranging from BIM-based survey to structural analysis and reverse modeling. In China nowadays, a large number of historic architecture are in need of restoration, reinforcement and renovation. But the architects are not prepared for the conversion from the booming AEC industry to architectural preservation. As surveyors working with architects in such projects, we have to develop efficient low-cost digital survey workflow robust to various types of architecture, and to process the captured data for architects. Although laser scanning yields high accuracy in architectural heritage documentation and the workflow is quite straightforward, the cost and portability hinder it from being used in projects where budget and efficiency are of prime concern. We integrate Structure from Motion techniques with UAV and total station in data acquisition. The captured data is processed for various purposes illustrated with three case studies: the first one is as-built BIM for a historic building based on registered point clouds according to Ground Control Points; The second one concerns structural analysis for a damaged bridge using Finite Element Analysis software; The last one relates to parametric automated feature extraction from captured point clouds for reverse modeling and fabrication.
Photolysis frequency and cloud dynamics during DC3 and SEAC4RS
NASA Astrophysics Data System (ADS)
Hall, S. R.; Ullmann, K.; Madronich, S.; Hair, J. W.; Butler, C. F.; Fenn, M. A.
2013-12-01
Cloud shading plays a critical role in extending the lifetime of short-lived chemical species. During convection, photochemistry is reduced such that short-lived species may be transported from the boundary layer to the upper troposphere/ lower stratosphere. In the anvil outflow, shading continues within and below the cloud. However, near the highly scattering cloud top, the chemistry is greatly accelerated. In this rapidly evolving environment, accurate photolysis frequencies are required to study photochemical evolution of the complex composition. During the Deep Convective Clouds and Chemistry (DC3, 2012) and the Studies of Emissions and Atmospheric Composition, Clouds and Climate Coupling by Regional Surveys (SEAC4RS, 2013) campaigns, photolysis frequencies were determined by measurement of spectrally resolved actinic flux by the Charge-coupled device Actinic Flux Spectroradiometer (CAFS) on the NASA DC-8 and the HIAPER Airborne Radiation Package (HARP) on the NCAR G-V aircraft. Vertical flight profiles allowed in situ characterization of the radiation environment. Input of geometrical cloud characteristics into the Tropospheric Ultraviolet and Visible (TUV) Radiation was used to constrain cloud optical depths for more spatially and temporally stable conditions.
Classification by Using Multispectral Point Cloud Data
NASA Astrophysics Data System (ADS)
Liao, C. T.; Huang, H. H.
2012-07-01
Remote sensing images are generally recorded in two-dimensional format containing multispectral information. Also, the semantic information is clearly visualized, which ground features can be better recognized and classified via supervised or unsupervised classification methods easily. Nevertheless, the shortcomings of multispectral images are highly depending on light conditions, and classification results lack of three-dimensional semantic information. On the other hand, LiDAR has become a main technology for acquiring high accuracy point cloud data. The advantages of LiDAR are high data acquisition rate, independent of light conditions and can directly produce three-dimensional coordinates. However, comparing with multispectral images, the disadvantage is multispectral information shortage, which remains a challenge in ground feature classification through massive point cloud data. Consequently, by combining the advantages of both LiDAR and multispectral images, point cloud data with three-dimensional coordinates and multispectral information can produce a integrate solution for point cloud classification. Therefore, this research acquires visible light and near infrared images, via close range photogrammetry, by matching images automatically through free online service for multispectral point cloud generation. Then, one can use three-dimensional affine coordinate transformation to compare the data increment. At last, the given threshold of height and color information is set as threshold in classification.
Characterizing Sorghum Panicles using 3D Point Clouds
NASA Astrophysics Data System (ADS)
Lonesome, M.; Popescu, S. C.; Horne, D. W.; Pugh, N. A.; Rooney, W.
2017-12-01
To address demands of population growth and impacts of global climate change, plant breeders must increase crop yield through genetic improvement. However, plant phenotyping, the characterization of a plant's physical attributes, remains a primary bottleneck in modern crop improvement programs. 3D point clouds generated from terrestrial laser scanning (TLS) and unmanned aerial systems (UAS) based structure from motion (SfM) are a promising data source to increase the efficiency of screening plant material in breeding programs. This study develops and evaluates methods for characterizing sorghum (Sorghum bicolor) panicles (heads) in field plots from both TLS and UAS-based SfM point clouds. The TLS point cloud over experimental sorghum field at Texas A&M farm in Burleston County TX were collected using a FARO Focus X330 3D laser scanner. SfM point cloud was generated from UAS imagery captured using a Phantom 3 Professional UAS at 10m altitude and 85% image overlap. The panicle detection method applies point cloud reflectance, height and point density attributes characteristic of sorghum panicles to detect them and estimate their dimensions (panicle length and width) through image classification and clustering procedures. We compare the derived panicle counts and panicle sizes with field-based and manually digitized measurements in selected plots and study the strengths and limitations of each data source for sorghum panicle characterization.
Improving Subtropical Boundary Layer Cloudiness in the 2011 NCEP GFS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fletcher, J. K.; Bretherton, Christopher S.; Xiao, Heng
2014-09-23
The current operational version of National Centers for Environmental Prediction (NCEP) Global Forecasting System (GFS) shows significant low cloud bias. These biases also appear in the Coupled Forecast System (CFS), which is developed from the GFS. These low cloud biases degrade seasonal and longer climate forecasts, particularly of short-wave cloud radiative forcing, and affect predicted sea surface temperature. Reducing this bias in the GFS will aid the development of future CFS versions and contributes to NCEP's goal of unified weather and climate modelling. Changes are made to the shallow convection and planetary boundary layer parameterisations to make them more consistentmore » with current knowledge of these processes and to reduce the low cloud bias. These changes are tested in a single-column version of GFS and in global simulations with GFS coupled to a dynamical ocean model. In the single-column model, we focus on changing parameters that set the following: the strength of shallow cumulus lateral entrainment, the conversion of updraught liquid water to precipitation and grid-scale condensate, shallow cumulus cloud top, and the effect of shallow convection in stratocumulus environments. Results show that these changes improve the single-column simulations when compared to large eddy simulations, in particular through decreasing the precipitation efficiency of boundary layer clouds. These changes, combined with a few other model improvements, also reduce boundary layer cloud and albedo biases in global coupled simulations.« less
Direct Determinations of the πNN Coupling Constants
NASA Astrophysics Data System (ADS)
Ericson, T. E. O.; Loiseau, B.
1998-11-01
A novel extrapolation method has been used to deduce directly the charged πN N coupling constant from backward np differential scattering cross sections. The extracted value, g2c = 14.52(0.26) is higher than the indirectly deduced values obtained in nucleon-nucleon energy-dependent partial-wave analyses. Our preliminary direct value from a reanalysis of the GMO sum-rule points to an intermediate value of g2c about 13.97(30).
Efficient terrestrial laser scan segmentation exploiting data structure
NASA Astrophysics Data System (ADS)
Mahmoudabadi, Hamid; Olsen, Michael J.; Todorovic, Sinisa
2016-09-01
New technologies such as lidar enable the rapid collection of massive datasets to model a 3D scene as a point cloud. However, while hardware technology continues to advance, processing 3D point clouds into informative models remains complex and time consuming. A common approach to increase processing efficiently is to segment the point cloud into smaller sections. This paper proposes a novel approach for point cloud segmentation using computer vision algorithms to analyze panoramic representations of individual laser scans. These panoramas can be quickly created using an inherent neighborhood structure that is established during the scanning process, which scans at fixed angular increments in a cylindrical or spherical coordinate system. In the proposed approach, a selected image segmentation algorithm is applied on several input layers exploiting this angular structure including laser intensity, range, normal vectors, and color information. These segments are then mapped back to the 3D point cloud so that modeling can be completed more efficiently. This approach does not depend on pre-defined mathematical models and consequently setting parameters for them. Unlike common geometrical point cloud segmentation methods, the proposed method employs the colorimetric and intensity data as another source of information. The proposed algorithm is demonstrated on several datasets encompassing variety of scenes and objects. Results show a very high perceptual (visual) level of segmentation and thereby the feasibility of the proposed algorithm. The proposed method is also more efficient compared to Random Sample Consensus (RANSAC), which is a common approach for point cloud segmentation.
NASA Astrophysics Data System (ADS)
Sanchez, E. Y.; Colman Lerner, J. E.; Porta, A.; Jacovkis, P. M.
2013-11-01
Information on spatial and time dependent concentration patterns of hazardous substances, as well as on the potential effects on population, is necessary to assist in chemical emergency planning and response. To that end, some models predict transport and dispersion of hazardous substances, and others estimate potential effects upon exposed population. Taken together, both groups constitute a powerful tool to estimate vulnerable regions and to evaluate environmental impact upon affected populations. The development of methodologies and models with direct application to the context in which we live allows us to draft a more clear representation of the risk scenario and, hence, to obtain the adequate tools for an optimal response. By means of the recently developed DDC (Damage Differential Coupling) exposure model, it was possible to optimize, from both the qualitative and the quantitative points of view, the estimation of the population affected by a toxic cloud, because the DDC model has a very good capacity to couple with different atmospheric dispersion models able to provide data over time. In this way, DDC analyzes the different concentration profiles (output from the transport model) associating them with some reference concentration to identify risk zones. In this work we present a disaster scenario in Chicago (USA), by coupling DDC with two transport models of different complexity, showing the close relationship between a representative result and the run time of the models. In the same way, it becomes evident that knowing the time evolution of the toxic cloud and of the affected regions significantly improves the probability of taking the correct decisions on planning and response facing the emergency.
NASA Astrophysics Data System (ADS)
Gupta, Shaurya; Guha, Daipayan; Jakubovic, Raphael; Yang, Victor X. D.
2017-02-01
Computer-assisted navigation is used by surgeons in spine procedures to guide pedicle screws to improve placement accuracy and in some cases, to better visualize patient's underlying anatomy. Intraoperative registration is performed to establish a correlation between patient's anatomy and the pre/intra-operative image. Current algorithms rely on seeding points obtained directly from the exposed spinal surface to achieve clinically acceptable registration accuracy. Registration of these three dimensional surface point-clouds are prone to various systematic errors. The goal of this study was to evaluate the robustness of surgical navigation systems by looking at the relationship between the optical density of an acquired 3D point-cloud and the corresponding surgical navigation error. A retrospective review of a total of 48 registrations performed using an experimental structured light navigation system developed within our lab was conducted. For each registration, the number of points in the acquired point cloud was evaluated relative to whether the registration was acceptable, the corresponding system reported error and target registration error. It was demonstrated that the number of points in the point cloud neither correlates with the acceptance/rejection of a registration or the system reported error. However, a negative correlation was observed between the number of the points in the point-cloud and the corresponding sagittal angular error. Thus, system reported total registration points and accuracy are insufficient to gauge the accuracy of a navigation system and the operating surgeon must verify and validate registration based on anatomical landmarks prior to commencing surgery.
Using baryon octet magnetic moments and masses to fix the pion cloud contribution
Franz L. Gross; Ramalho, Gilberto T. F.; Tsushima, Kazuo
2010-05-12
In this study, using SU(3) symmetry to constrain themore » $$\\pi BB'$$ couplings, assuming SU(3) breaking comes only from one-loop pion cloud contributions, and using the the covariant spectator theory to describe the photon coupling to the quark core, we show how the experimental masses and magnetic moments of the baryon octet can be used to set a model independent constraint on the strength of the pion cloud contributions to the octet, and hence the nucleon, form factors at $Q^2=0$.« less
Study on Huizhou architecture of point cloud registration based on optimized ICP algorithm
NASA Astrophysics Data System (ADS)
Zhang, Runmei; Wu, Yulu; Zhang, Guangbin; Zhou, Wei; Tao, Yuqian
2018-03-01
In view of the current point cloud registration software has high hardware requirements, heavy workload and moltiple interactive definition, the source of software with better processing effect is not open, a two--step registration method based on normal vector distribution feature and coarse feature based iterative closest point (ICP) algorithm is proposed in this paper. This method combines fast point feature histogram (FPFH) algorithm, define the adjacency region of point cloud and the calculation model of the distribution of normal vectors, setting up the local coordinate system for each key point, and obtaining the transformation matrix to finish rough registration, the rough registration results of two stations are accurately registered by using the ICP algorithm. Experimental results show that, compared with the traditional ICP algorithm, the method used in this paper has obvious time and precision advantages for large amount of point clouds.
In this study, the shortwave cloud forcing (SWCF) and longwave cloud forcing (LWCF) are estimated with the newly developed two-way coupled WRF-CMAQ over the eastern United States. Preliminary indirect aerosol forcing has been successfully implemented in WRF-CMAQ. The comparisons...
NASA Astrophysics Data System (ADS)
Ge, Xuming
2017-08-01
The coarse registration of point clouds from urban building scenes has become a key topic in applications of terrestrial laser scanning technology. Sampling-based algorithms in the random sample consensus (RANSAC) model have emerged as mainstream solutions to address coarse registration problems. In this paper, we propose a novel combined solution to automatically align two markerless point clouds from building scenes. Firstly, the method segments non-ground points from ground points. Secondly, the proposed method detects feature points from each cross section and then obtains semantic keypoints by connecting feature points with specific rules. Finally, the detected semantic keypoints from two point clouds act as inputs to a modified 4PCS algorithm. Examples are presented and the results compared with those of K-4PCS to demonstrate the main contributions of the proposed method, which are the extension of the original 4PCS to handle heavy datasets and the use of semantic keypoints to improve K-4PCS in relation to registration accuracy and computational efficiency.
Automated Detection of Geomorphic Features in LiDAR Point Clouds of Various Spatial Density
NASA Astrophysics Data System (ADS)
Dorninger, Peter; Székely, Balázs; Zámolyi, András.; Nothegger, Clemens
2010-05-01
LiDAR, also referred to as laser scanning, has proved to be an important tool for topographic data acquisition. Terrestrial laser scanning allows for accurate (several millimeter) and high resolution (several centimeter) data acquisition at distances of up to some hundred meters. By contrast, airborne laser scanning allows for acquiring homogeneous data for large areas, albeit with lower accuracy (decimeter) and resolution (some ten points per square meter) compared to terrestrial laser scanning. Hence, terrestrial laser scanning is preferably used for precise data acquisition of limited areas such as landslides or steep structures, while airborne laser scanning is well suited for the acquisition of topographic data of huge areas or even country wide. Laser scanners acquire more or less homogeneously distributed point clouds. These points represent natural objects like terrain and vegetation and artificial objects like buildings, streets or power lines. Typical products derived from such data are geometric models such as digital surface models representing all natural and artificial objects and digital terrain models representing the geomorphic topography only. As the LiDAR technology evolves, the amount of data produced increases almost exponentially even in smaller projects. This means a considerable challenge for the end user of the data: the experimenter has to have enough knowledge, experience and computer capacity in order to manage the acquired dataset and to derive geomorphologically relevant information from the raw or intermediate data products. Additionally, all this information might need to be integrated with other data like orthophotos. In all theses cases, in general, interactive interpretation is necessary to determine geomorphic structures from such models to achieve effective data reduction. There is little support for the automatic determination of characteristic features and their statistical evaluation. From the lessons learnt from automated extraction and modeling of buildings (Dorninger & Pfeifer, 2008) we expected that similar generalizations for geomorphic features can be achieved. Our aim is to recognize as many features as possible from the point cloud in the same processing loop, if they can be geometrically described with appropriate accuracy (e.g., as a plane). For this, we propose to apply a segmentation process allowing determining connected, planar structures within a surface represented by a point cloud. It is based on a robust determination of local tangential planes for all points acquired (Nothegger & Dorninger, 2009). It assumes that for points, belonging to a distinct planar structure, similar tangential planes can be determined. In passing, points acquired at continuous such as vegetation can be identified and eliminated. The plane parameters are used to define a four-dimensional feature space which is used to determine seed-clusters globally for the whole are of interest. Starting from these seeds, all points defining a connected, planar region are assigned to a segment. Due to the design of the algorithm, millions of input points can be processed with acceptable processing time on standard computer systems. This allows for processing geomorphically representative areas at once. For each segment, numerous parameter are derived which can be used for further exploitation. These are, for example, location, area, aspect, slope, and roughness. To prove the applicability of our method for automated geomorphic terrain analysis, we used terrestrial and airborne laser scanning data, acquired at two locations. The data of the Doren landslide located in Vorarlberg, Austria, was acquired by a terrestrial Riegl LS-321 laser scanner in 2008, by a terrestrial Riegl LMS-Z420i laser scanner in 2009, and additionally by three airborne LiDAR measurement campaigns, organized by the Landesvermessungsamt Vorarlberg, Feldkirch, in 2003, 2006, and 2007. The measurement distance of the terrestrial measurements was considerably varying considerably because of the various base points that were needed to cover the whole landslide. The resulting point spacing is approximately 20 cm. The achievable accuracy was about 10 cm. The airborne data was acquired with mean point densities of 2 points per square-meter. The accuracy of this dataset was about 15 cm. The second testing site is an area of the Leithagebirge in Burgenland, Austria. The data was acquired by an airborne Riegl LMS-Q560 laser scanner mounted on a helicopter. The mean point density was 6-8 points per square with an accuracy better than 10 cm. We applied our processing chain on the datasets individually. First, they were transformed to local reference frames and fine adjustments of the individual scans respectively flight strips were applied. Subsequently, the local regression planes were determined for each point of the point clouds and planar features were extracted by means of the proposed approach. It turned out that even small displacements can be detected if the number of points used for the fit is enough to define a parallel but somewhat displaced plane. Smaller cracks and erosional incisions do not disturb the plane fitting, because mostly they are filtered out as outliers. A comparison of the different campaigns of the Doren site showed exciting matches of the detected geomorphic structures. Although the geomorphic structure of the Leithagebirge differs from the Doren landslide, and the scales of the two studies were also different, reliable results were achieved in both cases. Additionally, the approach turned out to be highly robust against points which were not located on the terrain. Hence, no false positives were determined within the dense vegetation above the terrain, while it was possible to cover the investigated areas completely with reliable planes. In some cases, however, some structures in the tree crowns were also recognized, but these small patches could be very well sorted out from the geomorphically relevant results. Consequently, it could be verified that a topographic surface can be properly represented by a set of distinct planar structures. Therefore, the subsequent interpretation of those planes with respect to geomorphic characteristics is acceptable. The additional in situ geological measurements verified some of our findings in the sense that similar primary directions could be found that were derived from the LiDAR data set and (Zámolyi et al., 2010, this volume). References: P. Dorninger, N. Pfeifer: "A Comprehensive Automated 3D Approach for Building Extraction, Reconstruction, and Regularization from Airborne Laser Scanning Point Clouds"; Sensors, 8 (2008), 11; 7323 - 7343. C. Nothegger, P. Dorninger: "3D Filtering of High-Resolution Terrestrial Laser Scanner Point Clouds for Cultural Heritage Documentation"; Photogrammetrie, Fernerkundung, Geoinformation, 1 (2009), 53 - 63. A. Zámolyi, B. Székely, G. Molnár, A. Roncat, P. Dorninger, A. Pocsai, M. Wyszyski, P. Drexel: "Comparison of LiDAR derived directional topographic features with geologic field evidence: a case study of Doren landslide (Vorarlberg, Austria)"; EGU General Assembly 2010, Vienna, Austria
Shi, Zongbo; Krom, Michael D; Bonneville, Steeve; Baker, Alex R; Jickells, Timothy D; Benning, Liane G
2009-09-01
The formation of iron (Fe) nanoperticles and increase in Fe reactivity in mineral dust during simulated cloud processing was investigated using high-resolution microscopy and chemical extraction methods. Cloud processing of dust was experimentally simulated via an alternation of acidic (pH 2) and circumneutral conditions (pH 5-6) over periods of 24 h each on presieved (<20 microm) Saharan soil and goethite suspensions. Microscopic analyses of the processed soil and goethite samples reveal the neo-formation of Fe-rich nanoparticle aggregates, which were not found initially. Similar Fe-rich nanoparticles were also observed in wet-deposited Saharen dusts from the western Mediterranean but not in dry-deposited dust from the eastern Mediterranean. Sequential Fe extraction of the soil samples indicated an increase in the proportion of chemically reactive Fe extractable by an ascorbate solution after simulated cloud processing. In addition, the sequential extractions on the Mediterranean dust samples revealed a higher content of reactive Fe in the wet-deposited dust compared to that of the dry-deposited dust These results suggestthat large variations of pH commonly reported in aerosol and cloud waters can trigger neo-formation of nanosize Fe particles and an increase in Fe reactivity in the dust
Object-Based Coregistration of Terrestrial Photogrammetric and ALS Point Clouds in Forested Areas
NASA Astrophysics Data System (ADS)
Polewski, P.; Erickson, A.; Yao, W.; Coops, N.; Krzystek, P.; Stilla, U.
2016-06-01
Airborne Laser Scanning (ALS) and terrestrial photogrammetry are methods applicable for mapping forested environments. While ground-based techniques provide valuable information about the forest understory, the measured point clouds are normally expressed in a local coordinate system, whose transformation into a georeferenced system requires additional effort. In contrast, ALS point clouds are usually georeferenced, yet the point density near the ground may be poor under dense overstory conditions. In this work, we propose to combine the strengths of the two data sources by co-registering the respective point clouds, thus enriching the georeferenced ALS point cloud with detailed understory information in a fully automatic manner. Due to markedly different sensor characteristics, coregistration methods which expect a high geometric similarity between keypoints are not suitable in this setting. Instead, our method focuses on the object (tree stem) level. We first calculate approximate stem positions in the terrestrial and ALS point clouds and construct, for each stem, a descriptor which quantifies the 2D and vertical distances to other stem centers (at ground height). Then, the similarities between all descriptor pairs from the two point clouds are calculated, and standard graph maximum matching techniques are employed to compute corresponding stem pairs (tiepoints). Finally, the tiepoint subset yielding the optimal rigid transformation between the terrestrial and ALS coordinate systems is determined. We test our method on simulated tree positions and a plot situated in the northern interior of the Coast Range in western Oregon, USA, using ALS data (76 x 121 m2) and a photogrammetric point cloud (33 x 35 m2) derived from terrestrial photographs taken with a handheld camera. Results on both simulated and real data show that the proposed stem descriptors are discriminative enough to derive good correspondences. Specifically, for the real plot data, 24 corresponding stems were coregistered with an average 2D position deviation of 66 cm.
Large-scale urban point cloud labeling and reconstruction
NASA Astrophysics Data System (ADS)
Zhang, Liqiang; Li, Zhuqiang; Li, Anjian; Liu, Fangyu
2018-04-01
The large number of object categories and many overlapping or closely neighboring objects in large-scale urban scenes pose great challenges in point cloud classification. In this paper, a novel framework is proposed for classification and reconstruction of airborne laser scanning point cloud data. To label point clouds, we present a rectified linear units neural network named ReLu-NN where the rectified linear units (ReLu) instead of the traditional sigmoid are taken as the activation function in order to speed up the convergence. Since the features of the point cloud are sparse, we reduce the number of neurons by the dropout to avoid over-fitting of the training process. The set of feature descriptors for each 3D point is encoded through self-taught learning, and forms a discriminative feature representation which is taken as the input of the ReLu-NN. The segmented building points are consolidated through an edge-aware point set resampling algorithm, and then they are reconstructed into 3D lightweight models using the 2.5D contouring method (Zhou and Neumann, 2010). Compared with deep learning approaches, the ReLu-NN introduced can easily classify unorganized point clouds without rasterizing the data, and it does not need a large number of training samples. Most of the parameters in the network are learned, and thus the intensive parameter tuning cost is significantly reduced. Experimental results on various datasets demonstrate that the proposed framework achieves better performance than other related algorithms in terms of classification accuracy and reconstruction quality.
NASA Astrophysics Data System (ADS)
Vericat, Damià; Narciso, Efrén; Béjar, Maria; Tena, Álvaro; Brasington, James; Gibbins, Chris; Batalla, Ramon J.
2014-05-01
Digital Terrain Models are fundamental to characterise landscapes, to support numerical modelling and to monitor topographic changes. Recent advances in topography, remote sensing and geomatics are providing new opportunities to obtain high density/quality and rapid topographic data. In this paper we present an integrated methodology to rapidly obtain reach scale topographic models of fluvial systems. This methodology has been tested and is being applied to develop event-scale terrain models of a 11-km river reach in the highly dynamic Upper Cinca (NE Iberian Peninsula). This research is conducted in the background of the project MorphSed. The methodology integrates (a) the acquisition of dense point clouds of the exposed floodplain (aerial photography and digital photogrammetry); (b) the registration of all observations to the same coordinate system (using RTK-GPS surveyed GCPs); (c) the acquisition of bathymetric data (using aDcp measurements integrated with RTK-GPS); (d) the intelligent decimation of survey observations (using the open source TopCat toolkit) and, finally, (e) data fusion (elaborating Digital Elevation Models). In this paper special emphasis is given to the acquisition and registration of point clouds. 3D point clouds are obtained from aerial photography and by means of automated digital photogrammetry. Aerial photographs are taken at 275 meters above the ground by means of a SLR digital camera manually operated from an autogyro. Four flight paths are defined in order to cover the 11 km long and 500 meters wide river reach. A total of 45 minutes are required to fly along these paths. Camera has been previously calibrated with the objective to ensure image resolution at around 5 cm. A total of 220 GCPs are deployed and RTK-GPS surveyed before the flight is conducted. Two people and one full workday are necessary to deploy and survey the full set of GCPs. Field data acquisition may be finalised in less than 2 days. Structure-from-Motion is subsequently applied in the lab using Agisoft PhotoScan, photographs are aligned and a 3d point cloud is generated. GCPs are used to geo-register all point clouds. This task may be time consuming since GCPs need to be identified in at least two of the pictures. A first automatic identification of GCPs positions is performed in the rest of the photos, although user supervision is necessary. Preliminary results show as geo-registration errors between 0.08 and and 0.10 meters can be obtained. The number of GCPs is being degraded and the quality of the point cloud assessed based on check points (the extracted GCPs). A critical analysis of GCPs density and scene locations is being performed (results in preparation). The results show that automated digital photogrammetry may provide new opportunities in the acquisition of topographic data at multiple temporal and spatial scales, being competitive with other more expensive techniques that, in turn, may require much more time to acquire field observations. SfM offers new opportunities to develop event-scale terrain models of fluvial systems suitable for hydraulic modelling and to study topographic change in highly dynamic environments.
Formation of the Oort Cloud: Coupling Dynamical and Collisional Evolutions of Cometesimals
NASA Astrophysics Data System (ADS)
Charnoz, S.; Morbidelli, A.
2002-09-01
Cometesimals are thought to be born in the region of Giant Planets region and were subsequently ejected to the Oort Cloud by gravitational scattering. Some recent works (Stern & Weisman, 2001 Nature 409) have emphasized that during this phase of violent ejection, random velocities among cometesimals become so high that the majority of kilometer-sized comets might have been destroyed by multiple violent collisions before they reach the Oort Cloud, resulting in a low mass Oort Cloud. We present a new approach which allows to couple dynamical and collisional evolutions. This study focuses on cometesimals starting from the Jupiter-Saturn region. We find that the rapid depletion of the disk, due to the gravitational-scattering exerted by the giant planets, prevents a large fraction of cometesimals from rapid collisional destruction. These conclusions support the classical scenario of Oort Cloud formation.
A Case Study of Reverse Engineering Integrated in an Automated Design Process
NASA Astrophysics Data System (ADS)
Pescaru, R.; Kyratsis, P.; Oancea, G.
2016-11-01
This paper presents a design methodology which automates the generation of curves extracted from the point clouds that have been obtained by digitizing the physical objects. The methodology is described on a product belonging to the industry of consumables, respectively a footwear type product that has a complex shape with many curves. The final result is the automated generation of wrapping curves, surfaces and solids according to the characteristics of the customer's foot, and to the preferences for the chosen model, which leads to the development of customized products.
Six Martian years of CO2 clouds survey by OMEGA/MEx.
NASA Astrophysics Data System (ADS)
Gondet, Brigitte; bibring, Jean-Pierre; Vincendon, Mathieu
2014-05-01
Mesospheric clouds have been detected first from Earth (Bell et al 1996 [1]), then from Mars orbit (MGS/TES and MOC, Clancy et al 1998 [2]). Their composition (CO2) was inferred from temperature. Similar detection and temperature-inferred composition was then performed by Spicam and PFS on board Mars Express (Monmessin et al [3], Formisano et al [4]., 2006). The first direct detection and characterization (altitude, composition, velocity) was performed by OMEGA/ Mars Express (then coupled to HRSC/ Mars Express, and confirmed by CRISM/MRO (Montmessin et al. [5], 2007, Maattanen et al [6]., Scholten et al. [7], 2010, Vincendon et al [8]., 2011). Omega is a very powerful tool for the study of CO2 clouds as it is able to unambiguously identify the CO2 composition of a cloud based on a near-IR spectral feature located at 4.26 μm [5],. Therefore since the beginning of the Mars Express mission (2004) OMEGA as done a systematic survey of these mesospheric clouds. Thanks to the orbit of Mars Express, we can observe this clouds from different altitudes (from apocenter to pericenter) and at different local times. We will present the result of 6 Martians years of observations and point out a correlation with the dust activity. We also observe that their time of appearance/disappearance varies slightly from year to year. We will mention also the existence of mesospheric H2O clouds. References [1] JF Bell. et al. JGR 1996; [2] RT Clancy et al., GRL 1998 [3] F. Montmessin et al. JGR 2006; [4] V. Formisano et al., Icarus 2006; [5] F. Montmessin et al JGR 2007 [6] A. Määttänen et al. Icarus 2010; [7] F. Scholten et al. PSS 2010; [8] M. Viencendon et al. JGR 2011
Mesospheric CO2 Clouds at Mars: Seven Martian Years Survey by OMEGA/MEX
NASA Astrophysics Data System (ADS)
Gondet, Brigitte; Bibring, Jean-Pierre
2016-04-01
Mesospheric clouds have been detected first from Earth (Bell et al 1996 [1]), then from Mars orbit (MGS/TES and MOC, Clancy et al 1998 [2]). Their composition (CO2) was inferred from temperature. Similar detection and temperature-inferred composition was then performed by Spicam and PFS on board Mars Express (Monmessin et al [3], Formisano et al [4]. 2006). The first direct detection and characterization (altitude, composition, velocity) was performed by OMEGA/ Mars Express (then coupled to HRSC/ Mars Express, and confirmed by CRISM/MRO (Montmessin et al. [5], 2007, Maattanen et al [6]. Scholten et al. [7], 2010, Vincendon et al [8], 2011). Omega is a very powerful tool for the study of CO2 clouds as it is able to unambiguously identify the CO2 composition of a cloud based on a near-IR spectral feature located at 4.26 μm [5] Therefore since the beginning of the Mars Express mission (2004) OMEGA as done a systematic survey of these mesospheric clouds. Thanks to the orbit of Mars Express, we can observe this clouds from different altitudes (from apocenter to pericenter) and at different local times. We will present the result of 7 Martians years of observations, point out a correlation with the dust activity and an irregular concentration of clouds from years to years. References [1] JF Bell. et al. JGR 1996; [2] RT Clancy et al., GRL 1998 [3] F. Montmessin et al. JGR 2006; [4] V. Formisano et al., Icarus 2006; [5] F. Montmessin et al JGR 2007 [6] A. Määttänen et al. Icarus 2010; [7] F. Scholten et al. PSS 2010; [8] M. Viencendon et al. JGR 2011
Superposition and alignment of labeled point clouds.
Fober, Thomas; Glinca, Serghei; Klebe, Gerhard; Hüllermeier, Eyke
2011-01-01
Geometric objects are often represented approximately in terms of a finite set of points in three-dimensional euclidean space. In this paper, we extend this representation to what we call labeled point clouds. A labeled point cloud is a finite set of points, where each point is not only associated with a position in three-dimensional space, but also with a discrete class label that represents a specific property. This type of model is especially suitable for modeling biomolecules such as proteins and protein binding sites, where a label may represent an atom type or a physico-chemical property. Proceeding from this representation, we address the question of how to compare two labeled points clouds in terms of their similarity. Using fuzzy modeling techniques, we develop a suitable similarity measure as well as an efficient evolutionary algorithm to compute it. Moreover, we consider the problem of establishing an alignment of the structures in the sense of a one-to-one correspondence between their basic constituents. From a biological point of view, alignments of this kind are of great interest, since mutually corresponding molecular constituents offer important information about evolution and heredity, and can also serve as a means to explain a degree of similarity. In this paper, we therefore develop a method for computing pairwise or multiple alignments of labeled point clouds. To this end, we proceed from an optimal superposition of the corresponding point clouds and construct an alignment which is as much as possible in agreement with the neighborhood structure established by this superposition. We apply our methods to the structural analysis of protein binding sites.
NASA Technical Reports Server (NTRS)
Wang, J.; Biasca, R.; Liewer, P. C.
1996-01-01
Although the existence of the critical ionization velocity (CIV) is known from laboratory experiments, no agreement has been reached as to whether CIV exists in the natural space environment. In this paper we move towards more realistic models of CIV and present the first fully three-dimensional, electromagnetic particle-in-cell Monte-Carlo collision (PIC-MCC) simulations of typical space-based CIV experiments. In our model, the released neutral gas is taken to be a spherical cloud traveling across a magnetized ambient plasma. Simulations are performed for neutral clouds with various sizes and densities. The effects of the cloud parameters on ionization yield, wave energy growth, electron heating, momentum coupling, and the three-dimensional structure of the newly ionized plasma are discussed. The simulations suggest that the quantitative characteristics of momentum transfers among the ion beam, neutral cloud, and plasma waves is the key indicator of whether CIV can occur in space. The missing factors in space-based CIV experiments may be the conditions necessary for a continuous enhancement of the beam ion momentum. For a typical shaped charge release experiment, favorable CIV conditions may exist only in a very narrow, intermediate spatial region some distance from the release point due to the effects of the cloud density and size. When CIV does occur, the newly ionized plasma from the cloud forms a very complex structure due to the combined forces from the geomagnetic field, the motion induced emf, and the polarization. Hence the detection of CIV also critically depends on the sensor location.
Continuum Limit of Total Variation on Point Clouds
NASA Astrophysics Data System (ADS)
García Trillos, Nicolás; Slepčev, Dejan
2016-04-01
We consider point clouds obtained as random samples of a measure on a Euclidean domain. A graph representing the point cloud is obtained by assigning weights to edges based on the distance between the points they connect. Our goal is to develop mathematical tools needed to study the consistency, as the number of available data points increases, of graph-based machine learning algorithms for tasks such as clustering. In particular, we study when the cut capacity, and more generally total variation, on these graphs is a good approximation of the perimeter (total variation) in the continuum setting. We address this question in the setting of Γ-convergence. We obtain almost optimal conditions on the scaling, as the number of points increases, of the size of the neighborhood over which the points are connected by an edge for the Γ-convergence to hold. Taking of the limit is enabled by a transportation based metric which allows us to suitably compare functionals defined on different point clouds.
Point cloud registration from local feature correspondences-Evaluation on challenging datasets.
Petricek, Tomas; Svoboda, Tomas
2017-01-01
Registration of laser scans, or point clouds in general, is a crucial step of localization and mapping with mobile robots or in object modeling pipelines. A coarse alignment of the point clouds is generally needed before applying local methods such as the Iterative Closest Point (ICP) algorithm. We propose a feature-based approach to point cloud registration and evaluate the proposed method and its individual components on challenging real-world datasets. For a moderate overlap between the laser scans, the method provides a superior registration accuracy compared to state-of-the-art methods including Generalized ICP, 3D Normal-Distribution Transform, Fast Point-Feature Histograms, and 4-Points Congruent Sets. Compared to the surface normals, the points as the underlying features yield higher performance in both keypoint detection and establishing local reference frames. Moreover, sign disambiguation of the basis vectors proves to be an important aspect in creating repeatable local reference frames. A novel method for sign disambiguation is proposed which yields highly repeatable reference frames.
Cloud Condensation Nuclei in Cumulus Humilis - Selected Case Study During the CHAPS Campaign
NASA Astrophysics Data System (ADS)
Yu, X.; Berg, L. K.; Berkowitz, C. M.; Alexander, M. L.; Lee, Y.; Laskin, A.; Ogren, J. A.; Andrews, B.
2009-12-01
The Cumulus Humilis Aerosol Processing Study (CHAPS) provided a unique opportunity to study aerosol and cloud processing. Clouds play an active role in the processing and cycling of atmospheric constituents. Gases and particles can partition to cloud droplets by absorption and condensation as well as activation and pact scavenging. The Department of Energy (DOE) G-1 aircraft was used as one of the main platforms in CHAPS. Flight tracks were designed and implemented to characterize freshly emitted aerosols on cloud top and cloud base as well as with cloud, i.e., cumulus humilis (or fair-weather cumulus), in the vicinity of Oklahoma City. Measurements of interstitial aerosols and residuals of activated condensation cloud nuclei were conducted simultaneously. The interstitial aerosols were determined downstream of an isokinetic inlet; and the activated particles downstream of a counter-flow virtual impactor (CVI). The sampling line to the Aerodyne Aerosol Mass Spectrometer was switched between the isokinetic inlet and the CVI to allow characterization of interstitial particles out of clouds in contrast to particles activated in clouds. Trace gases including ozone, carbon monoxide, sulfur dioxide, and a series of volatile organic compounds (VOCs) were also measured as were key meteorological state parameters including liquid water content, cloud drop size, and dew point temperature were measured. This work will focus on studying CCN properties in cumulus humilis. Several approaches will be taken. The first is single particle analysis of particles collected by the Time-Resolved Aerosol Sampler (TRAC) by SEM/TEM coupled with EDX. We will specifically look into differences in particle properties such as chemical composition and morphology between activated and interstitial ones. The second analysis will link in situ measurements with the snap shots observations by TRAC. For instance, by looking into the characteristic m/z obtained by AMS vs. CO or isoprene, one can gain more insight into the role of primary and secondary organic aerosols in CCNs and background aerosols. Combined with observations of cloud properties, an improved picture of CCN activation in cumulus humilis can be made.
On the performance of metrics to predict quality in point cloud representations
NASA Astrophysics Data System (ADS)
Alexiou, Evangelos; Ebrahimi, Touradj
2017-09-01
Point clouds are a promising alternative for immersive representation of visual contents. Recently, an increased interest has been observed in the acquisition, processing and rendering of this modality. Although subjective and objective evaluations are critical in order to assess the visual quality of media content, they still remain open problems for point cloud representation. In this paper we focus our efforts on subjective quality assessment of point cloud geometry, subject to typical types of impairments such as noise corruption and compression-like distortions. In particular, we propose a subjective methodology that is closer to real-life scenarios of point cloud visualization. The performance of the state-of-the-art objective metrics is assessed by considering the subjective scores as the ground truth. Moreover, we investigate the impact of adopting different test methodologies by comparing them. Advantages and drawbacks of every approach are reported, based on statistical analysis. The results and conclusions of this work provide useful insights that could be considered in future experimentation.