Sample records for cloud point cp

  1. Effect of additives on the clouding and aggregation behavior of Triton X-100

    NASA Astrophysics Data System (ADS)

    Semwal, Divyam; Sen, Indrani Das; Jayaram, Radha V.

    2018-04-01

    The present study investigates the effect of additives such as CsNO3 and imidazolium ionic liquids on the cloud point (CP) of Triton X-100. Thermodynamic parameters of the clouding process were determined in order to understand the underlying interactions. The CP was found to increase with increasing concentration for most of the ionic liquids studied, reflecting the solubilization of the ionic liquids in the micellar phase. The thermodynamic parameters on the introduction of CsNO3 into the TX-100 - ionic liquid system help in understanding the different interactions occurring in the system. All ΔG values for clouding were found to be positive, and hence the process is non-spontaneous.
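
    For context, the thermodynamic parameters of clouding in studies like this are usually obtained from the pseudo-phase-separation treatment; a sketch of the standard relations (with x_s the surfactant mole fraction at the cloud point and T the cloud-point temperature), given here as an assumed convention rather than the paper's exact formulas, is:

```latex
% Pseudo-phase-separation relations for clouding (sketch; sign conventions vary)
\Delta G^{\circ}_{c} = RT \ln x_{s}
% \Delta H^{\circ}_{c} is taken from the slope of \ln x_{s} versus 1/T:
\Delta H^{\circ}_{c} = R\,\frac{\mathrm{d}(\ln x_{s})}{\mathrm{d}(1/T)}
\Delta S^{\circ}_{c} = \frac{\Delta H^{\circ}_{c} - \Delta G^{\circ}_{c}}{T}
```

    Under this convention, a positive ΔG°_c, as reported here, corresponds to a non-spontaneous clouding process.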

  2. Effect of the additives on clouding behavior and thermodynamics of coenzyme Q10-Kolliphor HS15 micelle aqueous solutions

    NASA Astrophysics Data System (ADS)

    Hu, Li; Zhang, Jing; Zhu, Chao; Pan, Hong-chun; Liu, Hong

    2017-11-01

    Herein we investigate the effect of different additives (electrolytes, amino acids, PEG, and sugars) on the cloud points (CP) of coenzyme Q10 (CoQ10) - Kolliphor HS15 (HS15) micelle aqueous solutions. The CP values decreased with increasing concentrations of electrolytes and sugars, in the order CPAl3+ < CPMg2+ < CPCa2+ < CPNa+ < CPK+ < CPNH4+ and CPdisaccharide < CPmonosaccharide. The presence of arginine and tryptophan significantly increased the CP, while the other amino acids reduced it. The depression of CP for the CoQ10-HS15 micelle solution with PEG depended on the molecular weight of the PEG. The relevant thermodynamic parameters were also evaluated and discussed.

  3. Effect of Cerium(III) and ionic liquids on the clouding behavior of Triton X-100 micelles

    NASA Astrophysics Data System (ADS)

    Sen, Indrani Das; Negi, Charu; Jayaram, Radha V.

    2018-04-01

    In the present study, the effect of Ce(III) on the clouding behavior of Triton X-100 has been investigated in the presence and absence of imidazolium-based ionic liquids of varying chain length and counter ions. Thermodynamic parameters of clouding were calculated to comprehend the underlying interactions between the surfactant and the additives. The cloud point (CP) of Triton X-100 was found to increase with the concentration of Ce(III) and with that of the ionic liquids studied. This increase of CP reflects the solubilization of the ionic liquids in the micellar solution.

  4. Synthesis and physical properties of new estolide esters

    USDA-ARS's Scientific Manuscript database

    Vegetable oil-based oils usually fail to meet the rigorous demands of industrial lubricants by not having acceptable low temperature properties, pour point (PP) and/or cloud point (CP). The oleic estolide was esterified with a series of 16 different alcohols that were either branched or straight-cha...

  5. Synthesis and physical properties of new coco-oleic estolide branched esters

    USDA-ARS's Scientific Manuscript database

    Oils derived from vegetable oils tend to not meet the standards for industrial lubricants because of unacceptable low temperature properties, pour point (PP), and/or cloud point (CP). However, a catalytic amount of perchloric acid with oleic and coconut (coco) fatty acids produced a coco-oleic estol...

  6. Impact of fatty ester composition on low temperature properties of biodiesel-petroleum diesel blends

    USDA-ARS's Scientific Manuscript database

    Several biodiesel fuels along with neat fatty acid methyl esters (FAMEs) commonly encountered in biodiesel were blended with ultra-low sulfur diesel (ULSD) fuel at low blend levels permitted by ASTM D975 (B1-B5) and cold flow properties such as cloud point (CP), cold filter plugging point (CFPP), an...

  7. Building a Citizen Pscientist: Advancing Patient-Centered Psoriasis Research by Empowering Patients as Contributors and Analysts.

    PubMed

    Sanchez, Isabelle M; Shankle, Lindsey; Wan, Marilyn T; Afifi, Ladan; Wu, Jashin J; Doris, Frank; Bridges, Alisha; Boas, Marc; Lafoy, Brian; Truman, Sarah; Orbai, Ana-Maria; Takeshita, Junko; Gelfand, Joel M; Armstrong, April W; Siegel, Michael P; Liao, Wilson

    2018-06-06

    To design and implement a novel cloud-based digital platform that allows psoriatic patients and researchers to engage in the research process. Citizen Pscientist (CP) was created by the National Psoriasis Foundation (NPF) to support and educate the global psoriatic disease community, where patients and researchers have the ability to analyze data. Psoriatic patients were invited to enroll in CP and contribute health data to a cloud database by responding to a 59-question online survey. They were then invited to perform their own analyses of the data using built-in visualization tools allowing for the creation of "discovery charts." These charts were posted on the CP website allowing for further discussion. As of May 2017, 3534 patients have enrolled in CP and have collectively contributed over 200,000 data points on their health status. Patients posted 70 discovery charts, generating 209 discussion comments. With the growing influence of the internet and technology in society, medical research can be enhanced by crowdsourcing and online patient portals. Patient discovery charts focused on the topics of psoriatic disease demographics, clinical features, environmental triggers, and quality of life. Patients noted that the CP platform adds to their well-being and allows them to express what research questions matter most to them in a direct and quantifiable way. The implementation of CP is a successful and novel method of allowing patients to engage in research. Thus, CP is an important tool to promote patient-centered psoriatic disease research.

  8. Technical note: Fu-Liou-Gu and Corti-Peter model performance evaluation for radiative retrievals from cirrus clouds

    NASA Astrophysics Data System (ADS)

    Lolli, Simone; Campbell, James R.; Lewis, Jasper R.; Gu, Yu; Welton, Ellsworth J.

    2017-06-01

    We compare, for the first time, the performance of a simplified atmospheric radiative transfer algorithm package, the Corti-Peter (CP) model, versus the more complex Fu-Liou-Gu (FLG) model, for resolving top-of-the-atmosphere radiative forcing characteristics from single-layer cirrus clouds obtained from the NASA Micro-Pulse Lidar Network database in 2010 and 2011 at Singapore and in Greenbelt, Maryland, USA, in 2012. Specifically, CP simplifies calculation of both clear-sky longwave and shortwave radiation through regression analysis applied to radiative calculations, which contributes significantly to differences between the two. The results of the intercomparison show that differences in annual net top-of-the-atmosphere (TOA) cloud radiative forcing can reach 65 %. This is particularly true when land surface temperatures are warmer than 288 K, where the CP regression analysis becomes less accurate. CP proves useful for first-order estimates of TOA cirrus cloud forcing, but may not be suitable for quantitative accuracy, including the absolute sign of cirrus cloud daytime TOA forcing that can readily oscillate around zero globally.

  9. Phase Behavior of Three PBX Elastomers in High-Pressure Chlorodifluoromethane

    NASA Astrophysics Data System (ADS)

    Lee, Byung-Chul

    2017-10-01

    The phase equilibrium behavior data are presented for three kinds of commercial polymer-bonded explosive (PBX) elastomers in chlorodifluoromethane (HCFC22). Levapren® ethylene-co-vinyl acetate (LP-EVA), HyTemp® alkyl acrylate copolymer (HT-ACM), and Viton® fluoroelastomer (VT-FE) were used as the PBX elastomers. For each elastomer + HCFC22 system, the cloud point (CP) and/or bubble point (BP) pressures were measured while varying the temperature and elastomer composition using a phase equilibrium apparatus fitted with a variable-volume view cell. The elastomers examined in this study indicated a lower critical solution temperature phase behavior in the HCFC22 solvent. LP-EVA showed the CPs at temperatures of 323 K to 343 K and at pressures of 3 MPa to 10 MPa, whereas HT-ACM showed the CPs at conditions between 338 K and 363 K and between 4 MPa and 12 MPa. For the LP-EVA and HT-ACM elastomers, the BP behavior was observed at temperatures below about 323 K. For the VT-FE + HCFC22 system, only the CP behavior was observed at temperatures between 323 K and 353 K and at pressures between 6 MPa and 21 MPa. As the elastomer composition increased, the CP pressure increased, reached a maximum value at a specific elastomer composition, and then remained almost constant.

  10. High definition clouds and precipitation for climate prediction -results from a unified German research initiative on high resolution modeling and observations

    NASA Astrophysics Data System (ADS)

    Rauser, F.

    2013-12-01

    We present results from the German BMBF initiative 'High Definition Clouds and Precipitation for advancing Climate Prediction - HD(CP)2'. This initiative addresses most of the problems discussed in this session in one unified approach: cloud physics, convection, boundary layer development, radiation, and subgrid variability are approached in one organizational framework. HD(CP)2 merges the observation and high-performance computing / model development communities to tackle a shared problem: how to improve the understanding of the most important subgrid-scale processes of cloud and precipitation physics, and how to utilize this knowledge for improved climate predictions. HD(CP)2 is a coordinated initiative to (i) realize, (ii) evaluate, and (iii) statistically characterize and exploit, for the purpose of both parameterization development and cloud / precipitation feedback analysis, ultra-high-resolution (100 m in the horizontal, 10-50 m in the vertical) regional hindcasts over time periods (3-15 y) and spatial scales (1000-1500 km) that are climatically meaningful. HD(CP)2 thus consists of three elements (the model development and simulations, their observational evaluation, and exploitation/synthesis to advance CP prediction), and its first three-year phase started on 1 October 2012. As a central part of HD(CP)2, the HD(CP)2 Observational Prototype Experiment (HOPE) was carried out in spring 2013. In this campaign, high-resolution measurements with a multitude of instruments from all major centers in Germany were carried out in a limited domain, to allow for unprecedented resolution and precision in the observation of microphysics parameters at a resolution that will allow for evaluation and improvement of ultra-high-resolution models.
At the same time, a local-area version of the new climate model ICON of the Max Planck Institute and the German weather service has been developed that allows for LES-type simulations at high resolution on limited domains. The advantage of modifying an existing, evolving climate model is to share insights from high-resolution runs directly with the large-scale modelers and to allow for easy intercomparison and evaluation later on. Within this presentation, we will give a short overview of HD(CP)2, show results from the observation campaign HOPE and the LES simulations of the same domain and conditions, and discuss how these will lead to an improved understanding and evaluation background for the efforts to improve fast physics in our climate model.

  11. The HD(CP)2 Observational Prototype Experiment (HOPE) - an overview

    NASA Astrophysics Data System (ADS)

    Macke, Andreas; Seifert, Patric; Baars, Holger; Barthlott, Christian; Beekmans, Christoph; Behrendt, Andreas; Bohn, Birger; Brueck, Matthias; Bühl, Johannes; Crewell, Susanne; Damian, Thomas; Deneke, Hartwig; Düsing, Sebastian; Foth, Andreas; Di Girolamo, Paolo; Hammann, Eva; Heinze, Rieke; Hirsikko, Anne; Kalisch, John; Kalthoff, Norbert; Kinne, Stefan; Kohler, Martin; Löhnert, Ulrich; Lakshmi Madhavan, Bomidi; Maurer, Vera; Muppa, Shravan Kumar; Schween, Jan; Serikov, Ilya; Siebert, Holger; Simmer, Clemens; Späth, Florian; Steinke, Sandra; Träumner, Katja; Trömel, Silke; Wehner, Birgit; Wieser, Andreas; Wulfmeyer, Volker; Xie, Xinxin

    2017-04-01

    The HD(CP)2 Observational Prototype Experiment (HOPE) was performed as a major 2-month field experiment in Jülich, Germany, in April and May 2013, followed by a smaller campaign in Melpitz, Germany, in September 2013. HOPE has been designed to provide an observational dataset for a critical evaluation of the new German community atmospheric icosahedral non-hydrostatic (ICON) model at the scale of the model simulations and further to provide information on land-surface-atmospheric boundary layer exchange, cloud and precipitation processes, as well as sub-grid variability and microphysical properties that are subject to parameterizations. HOPE focuses on the onset of clouds and precipitation in the convective atmospheric boundary layer. This paper summarizes the instrument set-ups, the intensive observation periods, and example results from both campaigns. HOPE-Jülich instrumentation included a radio sounding station, 4 Doppler lidars, 4 Raman lidars (3 of them provide temperature, 3 of them water vapour, and all of them particle backscatter data), 1 water vapour differential absorption lidar, 3 cloud radars, 5 microwave radiometers, 3 rain radars, 6 sky imagers, 99 pyranometers, and 5 sun photometers operated at different sites, some of them in synergy. The HOPE-Melpitz campaign combined ground-based remote sensing of aerosols and clouds with helicopter- and balloon-based in situ observations in the atmospheric column and at the surface. HOPE provided an unprecedented collection of atmospheric dynamical, thermodynamical, and micro- and macrophysical properties of aerosols, clouds, and precipitation with high spatial and temporal resolution within a cube of approximately 10 × 10 × 10 km3. HOPE data will significantly contribute to our understanding of boundary layer dynamics and the formation of clouds and precipitation. The datasets have been made available through a dedicated data portal. 
First applications of HOPE data for model evaluation have shown a general agreement between observed and modelled boundary layer height, turbulence characteristics, and cloud coverage, but they also point to significant differences that deserve further investigations from both the observational and the modelling perspective.

  12. Developing Present-day Proxy Cases Based on NARVAL Data for Investigating Low Level Cloud Responses to Future Climate Change.

    NASA Astrophysics Data System (ADS)

    Reilly, Stephanie

    2017-04-01

    The energy budget of the entire global climate is significantly influenced by the presence of boundary layer clouds. The main aim of the High Definition Clouds and Precipitation for Advancing Climate Prediction (HD(CP)2) project is to improve climate model predictions by means of process studies of clouds and precipitation. This study makes use of observed elevated moisture layers as a proxy of future changes in tropospheric humidity. The associated impact on radiative transfer triggers fast responses in boundary layer clouds, providing a framework for investigating this phenomenon. The investigation will be carried out using data gathered during the Next-generation Aircraft Remote-sensing for VALidation (NARVAL) South campaigns. Observational data will be combined with ECMWF reanalysis data to derive the large scale forcings for the Large Eddy Simulations (LES). Simulations will be generated for a range of elevated moisture layers, spanning a multi-dimensional phase space in depth, amplitude, elevation, and cloudiness. The NARVAL locations will function as anchor-points. The results of the large eddy simulations and the observations will be studied and compared in an attempt to determine how simulated boundary layer clouds react to changes in radiative transfer from the free troposphere. Preliminary LES results will be presented and discussed.

  13. Assemblage of Presolar Materials and Early Solar System Condensates in Chondritic Porous Interplanetary Dust Particles

    NASA Technical Reports Server (NTRS)

    Nguyen, A. N.; Nakamura-Messenger, K.; Messenger, S.; Keller, L. P.; Kloeck, W.

    2015-01-01

    Anhydrous chondritic porous interplanetary dust particles (CP IDPs) contain an assortment of highly primitive solar system components, molecular cloud matter, and presolar grains. These IDPs have largely escaped the parent body processing that has affected meteorites, suggesting cometary origins. Though the stardust abundance in CP IDPs is generally greater than in primitive meteorites, it can vary widely among individual CP IDPs. The average abundance of silicate stardust among isotopically primitive IDPs is approx. 375 ppm, while some have extreme abundances up to approx. 1.5%. H and N isotopic anomalies are common in CP IDPs, and the carrier of these anomalies has been traced to organic matter that experienced chemical reactions in cold molecular clouds or the outer protosolar disk. Significant variations in these anomalies may reflect different degrees of nebular processing. Refractory inclusions are commonly observed in carbonaceous chondrites. These inclusions are among the first solar system condensates and display 16O-rich isotopic compositions. Refractory grains have also been observed in the comet 81P/Wild-2 samples returned from the Stardust Mission and in CP IDPs, but they occur with much less frequency. Here we conduct coordinated mineralogical and isotopic analyses of CP IDPs that were previously characterized for their bulk chemistry, to study the distribution of primitive components and the degree of nebular alteration incurred.

  14. Improving the Cold Temperature Properties of Tallow-Based Methyl Ester Mixtures Using Fractionation, Blending, and Additives

    NASA Astrophysics Data System (ADS)

    Elwell, Caleb

    Beef tallow is a less common feedstock source for biodiesel than soy or canola oil, but it can have economic benefits in comparison to these traditional feedstocks. However, tallow methyl ester (TME) has the major disadvantage of poor cold temperature properties. Cloud point (CP) is a standard industry metric for evaluating the cold temperature performance of biodiesel and is directly related to the thermodynamic properties of the fuel's constituents. TME has a CP of 14.5°C, compared with 2.3°C for soy methyl ester (SME) and -8.3°C for canola methyl ester (CME). In this study, three methods were evaluated to reduce the CP of TME: fractionation, blending with SME and CME, and using polymer additives. TME fractionation (i.e. removal of specific methyl ester constituents) was simulated by creating FAME mixtures to match the FAME profiles of fractionated TME. The fractionation yield was found to be highest at the eutectic point of methyl palmitate (MP) and methyl stearate (MS), which was empirically determined to be at an MP/(MP+MS) ratio of approximately 82%. Since unmodified TME has an MP/(MP+MS) ratio of 59%, initially only MS should be removed to bring the ratio closer to the eutectic point, reducing CP while maximizing yield. Graphs relating yield (with 4:1 methyl stearate to methyl oleate carryover) to CP were produced to determine the economic viability of this approach. To evaluate the effect of blending TME with other methyl esters, SME and CME were blended with TME at blend ratios of 0 to 100%. Both the SME/TME and CME/TME blends exhibited decreased CPs with increasing levels of SME and CME. Although the CP of the SME/TME blends varied linearly with SME content, the CP of the CME/TME blends varied quadratically with CME content. To evaluate the potential of fuel additives to reduce the CP of TME, 11 different polymer additives were tested.
Although all of these additives were specifically marketed to enhance the cold temperature properties of petroleum diesel or biodiesel, only two of the additives had any significant effect on TME CP. The additive formulated by Meat & Livestock Australia (MLA) outperformed Evonik's Viscoplex 10-530. The MLA additive was investigated further and its effect on CP was characterized in pure TME and in CME/TME blends. When mixed into CME/TME blends, the MLA additive had a synergistic effect, producing lower CPs than the sum of the individual effects of mixing MLA into TME and blending CME with TME. To evaluate the cold temperature properties of TME blended with petroleum diesel, CPs of TME/diesel blends from 0 to 100% were measured. The TME/diesel blends were treated with the MLA additive to determine its effects under these blend conditions. The MLA additive also had a synergistic effect when mixed into TME/diesel blends. Finally, all three of the TME CP reduction methods were evaluated in an economic model to determine the conditions under which each method would be economically viable. Each of the CP reduction methods was compared using a common metric based on the cost of reducing the CP of 1 gallon of finished biodiesel by 1°C (i.e. $/gal/°C). Since the cost of each method depends on varying commodity prices, further development of the economic model (which was developed and tested with 2012 prices) to account for stochastic variation in commodity prices is recommended.
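
    The blend-ratio trends above (a linear CP response for SME/TME blends and a quadratic one for CME/TME blends) can be turned into a simple solver for the blend fraction needed to reach a target CP. The sketch below is illustrative only: the endpoint CPs (14.5°C for TME, 2.3°C for SME, -8.3°C for CME) come from the abstract, but the exact curve shapes and the `blend_for_target` helper are assumptions, not the study's fitted models.

```python
# Illustrative cloud point (CP) models vs. blend fraction x (0 = pure TME).
# Endpoint CPs are from the abstract; the precise curve shapes are assumed.

def cp_sme_blend(x, cp_tme=14.5, cp_sme=2.3):
    """Linear CP response, as reported for SME/TME blends (deg C)."""
    return cp_tme + (cp_sme - cp_tme) * x

def cp_cme_blend(x, cp_tme=14.5, cp_cme=-8.3):
    """Assumed quadratic CP response for CME/TME blends (deg C)."""
    return cp_tme + (cp_cme - cp_tme) * x ** 2

def blend_for_target(target_cp, model, lo=0.0, hi=1.0, iters=60):
    """Bisect for the blend fraction that reaches target_cp.

    Assumes model(x) decreases monotonically from model(lo) to model(hi).
    """
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if model(mid) > target_cp:
            lo = mid  # CP still too high; blend in more of the second ester
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A 50/50 SME/TME blend sits halfway between the endpoint CPs:
print(blend_for_target(8.4, cp_sme_blend))  # ~0.5
```

    The same bisection works for any monotonic CP model, so the quadratic CME/TME curve can be inverted with the identical helper.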

  15. Fuel Property Effects on the Cold Startability of Navy High-Speed Diesel Engines.

    DTIC Science & Technology

    1985-12-01

    Recovered fuel-property table for four test fuels (reconstructed from OCR fragments; the first property name is inferred from ASTM D 93 = flash point; distillation values truncated in the source):

    Property                        Method    Fuel 1    Fuel 2    Fuel 3    Fuel 4
    Flash point, °C                 D 93      77        64.4      85.0      40.6
    Cloud point, °C                 D 2500    -13       -18       +16       -5
    Pour point, °C                  D 97      -17       <-50      -17       <-50
    Kin. viscosity at 40 °C, m²/s   D 445     2.75e-6   1.43e-6   2.97e-6   0.75e-6
    Kin. viscosity at 5 °C, m²/s    D 445     6.65e-6   2.78e-6   8.44e-6   1.17e-6
    Distillation IBP, °C            D 2887    133.2     138.1     ...

    ... volumes, and k is the ratio of molal specific heats Cp/Cv. This process is independent of time. However, as the gas temperature is increased, the gas ...

  16. UCST-Type Thermoresponsive Polymers in Synthetic Lubricating Oil Polyalphaolefin (PAO)

    DOE PAGES

    Fu, Wenxin; Bai, Wei; Jiang, Sisi; ...

    2018-02-20

    Here, this article reports a family of UCST-type thermoresponsive polymers, poly(alkyl methacrylate)s with an appropriate alkyl pendant length, in an industrially important non-volatile organic liquid, polyalphaolefin (PAO). The cloud point (CP) can be readily tuned over a wide temperature range by changing the alkyl pendant length; at a concentration of 1 wt% and similar polymer molecular weights, the CP varies linearly with the (average) number of carbon atoms in the alkyl pendant. PAO solutions of ABA triblock copolymers, composed of a PAO-philic middle block and thermoresponsive outer blocks with appropriate block lengths, undergo thermoreversible sol-gel transitions at sufficiently high concentrations. The discovery of thermoresponsive polymers in PAO makes it possible to explore new applications by utilizing PAO's unique characteristics such as thermal stability, non-volatility, superior lubrication properties, etc. Lastly, two examples are presented: thermoresponsive physical gels for control of optical transmittance and injectable gel lubricants.

  18. The effect of the processing and formulation parameters on the size of nanoparticles based on block copolymers of poly(ethylene glycol) and poly(N-isopropylacrylamide) with and without hydrolytically sensitive groups.

    PubMed

    Neradovic, D; Soga, O; Van Nostrum, C F; Hennink, W E

    2004-05-01

    Block copolymers of poly(ethylene glycol) (PEG) as a hydrophilic block and poly(N-isopropylacrylamide) (PNIPAAm) or poly(NIPAAm-co-N-(2-hydroxypropyl)methacrylamide-dilactate) (poly(NIPAAm-co-HPMAm-dilactate)) as a thermosensitive block are able to self-assemble in water into nanoparticles above the cloud point (CP) of the thermosensitive block. The influence of processing and formulation parameters on the size of the nanoparticles was studied using dynamic light scattering. PNIPAAm-b-PEG 2000 polymers were not suitable for the formation of small and stable particles. Block copolymers with PEG 5000 and 10000 formed relatively small and stable particles in aqueous solutions at temperatures above the CP of the thermosensitive block. Their size decreased with increasing molecular weight of the thermosensitive block, with decreasing polymer concentration, and when water was used instead of phosphate buffered saline as the solvent. Extrusion and ultrasonication were inefficient methods for sizing down the polymeric nanoparticles. The heating rate of the polymer solutions was a dominant factor for the size of the nanoparticles. When an aqueous polymer solution was slowly heated through the CP, rather large particles (≥ 200 nm) were formed. Regardless of the polymer composition, small nanoparticles (50-70 nm) with a narrow size distribution were formed when a small volume of an aqueous polymer solution below the CP was added to a large volume of heated water. In this way the thermosensitive block copolymers rapidly pass their CP (a 'heat shock' procedure), resulting in small and stable nanoparticles.

  19. A comparison of two ground-based lightning detection networks against the satellite-based lightning imaging sensor (LIS)

    NASA Astrophysics Data System (ADS)

    Thompson, Kelsey B.

    We compared lightning stroke data from the ground-based World Wide Lightning Location Network (WWLLN) and from the ground-based Earth Networks Total Lightning Network (ENTLN) to lightning group data from the satellite-based Lightning Imaging Sensor (LIS) from 1 January 2010 through 30 June 2011. The region of study, about 39°S to 39°N latitude and 164°E to 17°W longitude, chosen to approximate the Geostationary Lightning Mapper (GLM) field of view, was considered in its entirety and then divided into four geographical sub-regions. We found the highest 18-month WWLLN coincidence percent (CP) value in the Pacific Ocean at 18.9% and the highest 18-month ENTLN CP value in North America at 63.3%. We found the lowest 18-month CP value for both WWLLN and ENTLN in South America, at 6.2% and 2.2% respectively. Daily CP values, and how often large-radiance LIS groups had a coincident stroke, varied. Coincidences between LIS groups and ENTLN strokes often resulted in more cloud than ground coincidences in North America and more ground than cloud coincidences in the other three sub-regions.
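
    A coincidence percent (CP) of this kind can be computed by checking, for each ground-network stroke, whether any LIS group falls within a time/distance matching window. The sketch below is a minimal illustration; the window values (0.33 s, 25 km) and the tuple layout are assumptions for this example, not the study's actual matching criteria.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))

def coincidence_percent(strokes, groups, dt_s=0.33, dr_km=25.0):
    """Percent of strokes with at least one LIS group inside the window.

    strokes, groups: iterables of (time_s, lat_deg, lon_deg) tuples.
    """
    hits = 0
    for t, lat, lon in strokes:
        # A stroke counts as coincident if any group is close in time AND space.
        if any(abs(t - tg) <= dt_s and haversine_km(lat, lon, latg, long_) <= dr_km
               for tg, latg, long_ in groups):
            hits += 1
    return 100.0 * hits / len(strokes)
```

    For example, with two strokes of which only one has a nearby, near-simultaneous LIS group, `coincidence_percent` returns 50.0.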

  20. Efficiently Multi-User Searchable Encryption Scheme with Attribute Revocation and Grant for Cloud Storage

    PubMed Central

    Wang, Shangping; Zhang, Xiaoxue; Zhang, Yaling

    2016-01-01

    Ciphertext-policy attribute-based encryption (CP-ABE) focuses on the problem of access control, while keyword-based searchable encryption schemes focus on quickly finding the files a user is interested in within cloud storage. Designing an encryption scheme that is both searchable and attribute-based is a new challenge. In this paper, we propose an efficient multi-user searchable attribute-based encryption scheme with attribute revocation and grant for cloud storage. In the new scheme, the attribute revocation and grant processes are delegated to a proxy server. Our scheme supports the simultaneous revocation and granting of multiple attributes, and it also provides a keyword-search function. The security of the proposed scheme is reduced to the bilinear Diffie-Hellman (BDH) assumption. Furthermore, the scheme is proven secure under the security model of indistinguishability against selective ciphertext-policy and chosen-plaintext attack (IND-sCP-CPA), and it is semantically secure under indistinguishability against chosen-keyword attack (IND-CKA) in the random oracle model. PMID:27898703

  2. Point-Cloud Compression for Vehicle-Based Mobile Mapping Systems Using Portable Network Graphics

    NASA Astrophysics Data System (ADS)

    Kohira, K.; Masuda, H.

    2017-09-01

    A mobile mapping system is effective for capturing dense point-clouds of roads and roadside objects. Point-clouds of urban areas, residential areas, and arterial roads are useful for maintenance of infrastructure, map creation, and automatic driving. However, the data size of point-clouds measured over large areas is enormous. A large storage capacity is required to store such point-clouds, and heavy loads are placed on the network if point-clouds are transferred through it. Therefore, it is desirable to reduce the data size of point-clouds without deterioration of quality. In this research, we propose a novel point-cloud compression method for vehicle-based mobile mapping systems. In our compression method, point-clouds are mapped onto 2D pixels using GPS time and the parameters of the laser scanner. The images are then encoded in the Portable Network Graphics (PNG) format and compressed using the PNG algorithm. In our experiments, our method could efficiently compress point-clouds without deteriorating the quality.
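
    The map-to-PNG idea can be sketched end to end with only the standard library: quantize each point's range into a 16-bit sample, place it in a 2D grid indexed by scan line and angle bin, and wrap the grid in a minimal grayscale PNG (PNG's DEFLATE step then provides the lossless compression). This is a simplified stand-in for the authors' scheme: their method maps points using GPS time and the scanner parameters, whereas the grid layout, quantization, and `max_range_m` below are assumptions for illustration.

```python
import struct
import zlib

def _chunk(tag, data):
    # PNG chunk layout: 4-byte length, tag, data, CRC32 over tag+data.
    return (struct.pack(">I", len(data)) + tag + data
            + struct.pack(">I", zlib.crc32(tag + data) & 0xFFFFFFFF))

def png_gray16(rows):
    """Encode a 2D list of 16-bit values as a grayscale PNG (bytes)."""
    h, w = len(rows), len(rows[0])
    # One filter byte (0 = None) per scanline, then big-endian 16-bit samples.
    raw = b"".join(b"\x00" + b"".join(struct.pack(">H", v) for v in row)
                   for row in rows)
    ihdr = struct.pack(">IIBBBBB", w, h, 16, 0, 0, 0, 0)  # 16-bit grayscale
    return (b"\x89PNG\r\n\x1a\n" + _chunk(b"IHDR", ihdr)
            + _chunk(b"IDAT", zlib.compress(raw, 9)) + _chunk(b"IEND", b""))

def pack_points(points, n_lines, n_bins, max_range_m=120.0):
    """Map (scan_line, angle_bin, range_m) points onto a 16-bit grid."""
    grid = [[0] * n_bins for _ in range(n_lines)]  # 0 = no laser return
    for line, abin, rng in points:
        grid[line][abin] = min(65535, int(rng / max_range_m * 65535))
    return grid

# Hypothetical points: (scan line derived from GPS time, angle bin, range in m)
pts = [(0, 0, 12.5), (0, 1, 12.6), (1, 0, 60.0)]
png = png_gray16(pack_points(pts, n_lines=2, n_bins=2))
```

    Because neighbouring pixels come from neighbouring laser returns, the image is locally smooth, which is exactly the redundancy PNG's filtering and DEFLATE stages exploit.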

  3. The registration of non-cooperative moving targets laser point cloud in different view point

    NASA Astrophysics Data System (ADS)

    Wang, Shuai; Sun, Huayan; Guo, Huichao

    2018-01-01

    Multi-view point cloud registration of non-cooperative moving targets is the key technology for 3D reconstruction in laser three-dimensional imaging. The main problems are that point density changes greatly and noise is present under different acquisition conditions. In this paper, a feature descriptor is first used to find the most similar point cloud. Then, in a registration algorithm based on region segmentation, the geometric structure of each point is extracted from the geometric similarity between points, the point cloud is divided into regions by spectral clustering, and a feature descriptor is created for each region. The most similar regions are searched for in the most similar neighboring-view point cloud, and the pair of point clouds is then aligned by aligning their minimum bounding boxes. These steps are repeated until all point clouds are registered. Experiments show that this method is insensitive to point cloud density and performs well under the noise of laser three-dimensional imaging.

  4. Bioluminescent system for dynamic imaging of cell and animal behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hara-Miyauchi, Chikako; Laboratory for Cell Function Dynamics, Brain Science Institute, RIKEN, Saitama 351-0198; Department of Biophysics and Biochemistry, Graduate School of Health Care Sciences, Tokyo Medical and Dental University, Tokyo 113-8510

    2012-03-09

    Highlights: We combined a yellow variant of GFP and firefly luciferase to make ffLuc-cp156. ffLuc-cp156 showed improved photon yield in cultured cells and transgenic mice. ffLuc-cp156 enabled video-rate bioluminescence imaging of freely-moving animals. ffLuc-cp156 mice enabled tracking real-time drug delivery in conscious animals. -- Abstract: The current utility of bioluminescence imaging is constrained by a low photon yield that limits temporal sensitivity. Here, we describe an imaging method that uses a chemiluminescent/fluorescent protein, ffLuc-cp156, which consists of a yellow variant of Aequorea GFP and firefly luciferase. We report an improvement in photon yield by over three orders of magnitude over current bioluminescent systems. We imaged cellular movement at high resolution including neuronal growth cones and microglial cell protrusions. Transgenic ffLuc-cp156 mice enabled video-rate bioluminescence imaging of freely moving animals, which may provide a reliable assay for drug distribution in behaving animals for pre-clinical studies.

  5. 78 FR 71710 - Notice of Application for Approval of Discontinuance or Modification of a Railroad Signal System

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-29

    ... installation of cab signals without wayside signaling between Control Point (CP) Kiski, Milepost (MP) LC 47.8, and CP Penn, MP LC 77.9, on the Conemaugh Line, Pittsburgh Division. CP Kiski, CP Harris, CP Beale, CP Sharp, and CP Etna will be upgraded from existing, legacy, relay-based signal systems to electronic...

  6. An Iterative Closest Points Algorithm for Registration of 3D Laser Scanner Point Clouds with Geometric Features.

    PubMed

    He, Ying; Liang, Bin; Yang, Jun; Li, Shunzhi; He, Jin

    2017-08-11

    The Iterative Closest Points (ICP) algorithm is the mainstream algorithm for accurate registration of 3D point cloud data. The algorithm requires a proper initial value and an approximate pre-alignment of the two point clouds to prevent it from falling into local extremes, but in practical point cloud matching it is difficult to ensure this requirement is met. In this paper, we propose an ICP algorithm based on point cloud features (GF-ICP). This method uses the geometrical features of the point clouds to be registered, such as curvature, surface normal and point cloud density, to search for correspondences between the two point clouds, and introduces these geometric features into the error function to realize accurate registration. The experimental results showed that the algorithm can improve the convergence speed and broaden the interval of convergence without requiring a proper initial value.
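
    The initial-value sensitivity the abstract describes is easy to see in a toy ICP loop. The sketch below is a generic 2D ICP (nearest-neighbour correspondences plus the closed-form optimal rigid motion, which in 2D reduces to an `atan2`), not the GF-ICP variant: it uses no geometric features, which is precisely why it needs a reasonable initial alignment.

```python
import math

def icp_2d(src, dst, iters=20):
    """Toy 2D ICP: match each source point to its nearest destination
    point, apply the closed-form optimal rotation + translation (the
    2D analogue of the SVD step), and repeat."""
    pts = list(src)
    n = float(len(pts))
    for _ in range(iters):
        pairs = [(p, min(dst, key=lambda q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2))
                 for p in pts]
        # Centroids of the matched source and destination sets.
        cx = sum(p[0] for p, _ in pairs) / n
        cy = sum(p[1] for p, _ in pairs) / n
        dx = sum(q[0] for _, q in pairs) / n
        dy = sum(q[1] for _, q in pairs) / n
        # Optimal rotation angle about the centroids.
        s_cross = sum((p[0] - cx) * (q[1] - dy) - (p[1] - cy) * (q[0] - dx)
                      for p, q in pairs)
        s_dot = sum((p[0] - cx) * (q[0] - dx) + (p[1] - cy) * (q[1] - dy)
                    for p, q in pairs)
        a = math.atan2(s_cross, s_dot)
        ca, sa = math.cos(a), math.sin(a)
        pts = [(dx + ca * (x - cx) - sa * (y - cy),
                dy + sa * (x - cx) + ca * (y - cy)) for x, y in pts]
    return pts
```

    With a small misalignment the nearest-neighbour correspondences are already correct and the loop snaps to the exact transform; with a large one, the mismatched correspondences are what drives plain ICP into the local extremes that feature-based variants such as GF-ICP aim to avoid.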

  7. An Iterative Closest Points Algorithm for Registration of 3D Laser Scanner Point Clouds with Geometric Features

    PubMed Central

    Liang, Bin; Yang, Jun; Li, Shunzhi; He, Jin

    2017-01-01

    The Iterative Closest Points (ICP) algorithm is the mainstream algorithm for accurate registration of 3D point cloud data. The algorithm requires a proper initial value and an approximate pre-alignment of the two point clouds to prevent it from falling into local extremes, but in practical point cloud matching it is difficult to ensure this requirement is met. In this paper, we propose an ICP algorithm based on point cloud features (GF-ICP). This method uses the geometrical features of the point clouds to be registered, such as curvature, surface normal and point cloud density, to search for correspondences between the two point clouds, and introduces these geometric features into the error function to realize accurate registration. The experimental results showed that the algorithm can improve the convergence speed and broaden the interval of convergence without requiring a proper initial value. PMID:28800096

  8. Reducing the CP content in broiler feeds: impact on animal performance, meat quality and nitrogen utilization.

    PubMed

    Belloir, P; Méda, B; Lambert, W; Corrent, E; Juin, H; Lessire, M; Tesseraud, S

    2017-11-01

    Reducing the dietary CP content is an efficient way to limit nitrogen excretion in broilers but, as reported in the literature, it often reduces performance, probably because of an inadequate provision of amino acids (AA). The aim of this study was to investigate the effect of decreasing the dietary CP content on animal performance, meat quality and nitrogen utilization in growing-finishing broilers, using an optimized dietary AA profile based on the ideal protein concept. Two experiments (1 and 2) were performed using 1-day-old PM3 Ross male broilers (1520 and 912 for experiments 1 and 2, respectively) using the minimum AA:Lys ratios proposed by Mack et al., with modifications for Thr and Arg: the digestible Thr (dThr):dLys ratio was increased from 63% to 68% and the dArg:dLys ratio was decreased from 112% to 108%. In experiment 1, the reduction of dietary CP from 19% to 15% (five treatments) did not alter feed intake or BW, but the feed conversion ratio was increased for the 16% and 15% CP diets (+2.4% and +3.6%, respectively), while in experiment 2 (three treatments: 19%, 17.5% and 16% CP) there was no effect of dietary CP on performance. In both experiments, dietary CP content did not affect breast meat yield. However, abdominal fat content (expressed as a percentage of BW) was increased by the decrease in CP content (up to +0.5 and +0.2 percentage points in experiments 1 and 2, respectively). In experiment 2, meat quality traits responded to dietary CP content, with a higher ultimate pH and lower lightness and drip loss values for the low CP diets. Nitrogen retention efficiency increased when reducing CP content in both experiments (+3.5 points per CP percentage point). The main consequences of this higher efficiency were a decrease in nitrogen excretion (-2.5 g N/kg BW gain) and in volatilization (expressed as a percentage of excretion: -5 points per CP percentage point).
    In conclusion, this study demonstrates that with an adapted AA profile it is possible to reduce dietary CP content to at least 17% in growing-finishing male broilers without altering animal performance or meat quality. Such a feeding strategy could therefore help improve the sustainability of broiler production, as it is an efficient way to reduce the environmental burden associated with nitrogen excretion.

  9. Automated localization of costophrenic recesses and costophrenic angle measurement on frontal chest radiographs

    NASA Astrophysics Data System (ADS)

    Maduskar, Pragnya; Hogeweg, Laurens; Philipsen, Rick; van Ginneken, Bram

    2013-03-01

    Computer aided detection (CAD) of tuberculosis (TB) on chest radiographs (CXR) is difficult because the disease has varied manifestations, like opacification, hilar elevation, and pleural effusions. We have developed a CAD research prototype for TB (CAD4TB v1.08, Diagnostic Image Analysis Group, Nijmegen, The Netherlands) which is trained to detect textural abnormalities inside unobscured lung fields. If the only abnormality visible on a CXR were a blunt costophrenic angle, caused by pleural fluid in the costophrenic recess, it would likely be missed by texture analysis in the lung fields. The goal of this work is therefore to detect the presence of blunt costophrenic (CP) angles caused by pleural effusion on chest radiographs. The CP angle is the angle formed by the hemidiaphragm and the chest wall; we define their intersection point as the CP angle point. We first detect the CP angle point automatically from a lung field segmentation by finding the foreground pixel of each lung with maximum y location. Patches are extracted around the CP angle point and boundary tracing is performed to detect 10 consecutive pixels along the hemidiaphragm and the chest wall, from which the CP angle is derived. We evaluate the method on a data set of 250 normal CXRs, 200 CXRs with only one or two blunt CP angles, and 200 CXRs with one or two blunt CP angles as well as other abnormalities. For these three groups, the CP angle location and angle measurements were accurate in 91%, 88%, and 92% of all cases, respectively. The average CP angles for the three groups are indeed different: 71.6° +/- 22.9°, 87.5° +/- 25.7°, and 87.7° +/- 25.3°, respectively.
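
    The final step, deriving the CP angle from the two traced pixel runs, amounts to measuring the angle between two boundary directions at the CP angle point. A minimal sketch of that geometry (the function name and point format are illustrative, not from the paper):

```python
import math

def cp_angle_deg(diaphragm_pts, chest_wall_pts, cp_point):
    """Angle in degrees at the CP angle point between the mean directions
    of the traced hemidiaphragm and chest-wall boundary pixels."""
    def mean_dir(pts):
        vx = sum(x - cp_point[0] for x, y in pts)
        vy = sum(y - cp_point[1] for x, y in pts)
        n = math.hypot(vx, vy)
        return vx / n, vy / n
    ax, ay = mean_dir(diaphragm_pts)
    bx, by = mean_dir(chest_wall_pts)
    # Clamp the dot product to guard against rounding just outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, ax * bx + ay * by))))

# 10 pixels along a horizontal hemidiaphragm and 10 up a vertical chest
# wall meeting at the CP angle point give a 90-degree (non-blunt) angle.
diaphragm = [(i, 0) for i in range(1, 11)]
wall = [(0, i) for i in range(1, 11)]
assert abs(cp_angle_deg(diaphragm, wall, (0, 0)) - 90.0) < 1e-9
```

    Averaging the traced pixels before taking the angle makes the measurement less sensitive to single-pixel jitter in the boundary trace.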

  10. The HD(CP)2 Observational Prototype Experiment HOPE - Overview and Examples

    NASA Astrophysics Data System (ADS)

    Macke, Andreas

    2017-04-01

    The "HD(CP)2 Observational Prototype Experiment" (HOPE) was executed as a major 2-month field experiment in Jülich, Germany, performed in April and May 2013, followed by a smaller campaign in Melpitz, Germany in September 2013. HOPE has been designed to provide information on land-surface-atmospheric boundary layer exchange, aerosol, cloud and precipitation pattern for process studies and model evaluation with a focuses on the onset of clouds and precipitation in the convective atmospheric boundary layer. HOPE-Jülich instrumentation included a radio sounding station, 4 Doppler lidars, 4 Raman lidars,1 water vapour differential absorption lidar, 3 cloud radars, 5 microwave radiometers, 3 rain radars, 6 sky imagers, 99 pyranometers, and 4 Sun photometers operated in synergy at different supersites. The HOPE-Melpitz campaign combined ground-based remote sensing of aerosols and clouds with helicopter- and ballon-based in-situ observations in the atmospheric column and at the surface. HOPE provided an unprecedented collection of atmospheric dynamical, thermodynamical, and micro- and macrophysical properties of aerosols, clouds and precipitation with high spatial and temporal resolution within a cube of approximately 10 x 10 x 10 km3. HOPE data will significantly contribute to our understanding of boundary layer dynamics and the formation of clouds and precipitation. The datasets are made available through the Standardized Atmospheric Measurement Data SAMD archive at https://icdc.cen.uni-hamburg.de/index.php?id=samd. The presentation is based on an overview paper in ACP where results published in an ACP HOPE special issue are summarized, see http://www.atmos-chem-phys.net/special_issue366.html. 
Citation: Macke, A., Seifert, P., Baars, H., Beekmans, C., Behrendt, A., Bohn, B., Bühl, J., Crewell, S., Damian, T., Deneke, H., Düsing, S., Foth, A., Di Girolamo, P., Hammann, E., Heinze, R., Hirsikko, A., Kalisch, J., Kalthoff, N., Kinne, S., Kohler, M., Löhnert, U., Madhavan, B. L., Maurer, V., Muppa, S. K., Schween, J., Serikov, I., Siebert, H., Simmer, C., Späth, F., Steinke, S., Träumner, K., Wehner, B., Wieser, A., Wulfmeyer, V., and Xie, X.: The HD(CP)2 Observational Prototype Experiment HOPE - An Overview, Atmos. Chem. Phys. Discuss., doi:10.5194/acp-2016-990, in review, 2016.

  11. Registration algorithm of point clouds based on multiscale normal features

    NASA Astrophysics Data System (ADS)

    Lu, Jun; Peng, Zhongtao; Su, Hang; Xia, GuiHua

    2015-01-01

    The point cloud registration technology for obtaining a three-dimensional digital model is widely applied in many areas. To improve the accuracy and speed of point cloud registration, a registration method based on multiscale normal vectors is proposed. The proposed method comprises three main parts: the selection of key points, the calculation of feature descriptors, and the determination and optimization of correspondences. First, key points are selected from the point cloud based on changes in the magnitude of multiscale curvatures obtained using principal component analysis. Then a feature descriptor is proposed for each key point, consisting of 21 elements based on multiscale normal vectors and curvatures. The correspondences between a pair of point clouds are determined according to the similarity of the descriptors of key points in the source and target point clouds. Correspondences are then optimized using a random sample consensus (RANSAC) algorithm and clustering. Finally, singular value decomposition is applied to the optimized correspondences to obtain the rigid transformation matrix between the two point clouds. Experimental results show that the proposed point cloud registration algorithm has a faster calculation speed, higher registration accuracy, and better anti-noise performance.
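
    The correspondence-determination step, matching key points by descriptor similarity, can be sketched generically. The ratio test below is an illustrative outlier filter in the spirit of the paper's RANSAC-based optimization, not the authors' exact criterion, and the 3-element descriptors stand in for the paper's 21-element ones:

```python
import math

def match_descriptors(desc_src, desc_dst, ratio=0.8):
    """Match each source key point to the destination key point with the
    most similar feature descriptor (smallest Euclidean distance),
    keeping a match only when it is clearly better than the runner-up."""
    matches = []
    for i, d in enumerate(desc_src):
        ranked = sorted((math.dist(d, e), j) for j, e in enumerate(desc_dst))
        if len(ranked) > 1 and ranked[0][0] < ratio * ranked[1][0]:
            matches.append((i, ranked[0][1]))
    return matches

# Two distinctive descriptors match their near-copies; the unmatched
# third destination descriptor attracts no correspondence.
src = [(0.0, 0.0, 1.0), (5.0, 5.0, 5.0)]
dst = [(0.0, 0.0, 1.1), (5.0, 5.0, 5.2), (9.0, 9.0, 9.0)]
assert match_descriptors(src, dst) == [(0, 0), (1, 1)]
```

    Ambiguous matches filtered out here are exactly the ones a subsequent RANSAC/clustering stage would otherwise have to reject before the SVD step.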

  12. Accuracy assessment of building point clouds automatically generated from iPhone images

    NASA Astrophysics Data System (ADS)

    Sirmacek, B.; Lindenbergh, R.

    2014-06-01

    Low-cost sensor generated 3D models can be useful for quick 3D urban model updating, yet the quality of the models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method using multi-view iPhone images or an iPhone video file as input. We register such an automatically generated point cloud onto a TLS point cloud of the same object to discuss the accuracy, advantages and limitations of iPhone-generated point clouds. For the chosen showcase, we classified 1.23% of the iPhone point cloud points as outliers, and calculated the mean of the point-to-point distances to the TLS point cloud as 0.11 m. Since a TLS point cloud might also include measurement errors and noise, we computed local noise values for the point clouds from both sources. The mean (μ) and standard deviation (σ) of the roughness histograms are (μ1 = 0.44 m, σ1 = 0.071 m) and (μ2 = 0.025 m, σ2 = 0.037 m) for the iPhone and TLS point clouds, respectively. Our experimental results indicate the possible use of the proposed automatic 3D model generation framework for 3D urban map updating, fusion and detail enhancing, and quick, real-time change detection purposes. However, further insight should first be obtained into the circumstances needed to guarantee successful point cloud generation from smartphone images.
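
    The cloud-to-cloud comparison reported above (the mean of point-to-point distances to the TLS reference) can be computed with a brute-force sketch; real clouds of millions of points would need a spatial index such as a k-d tree:

```python
import math

def mean_nn_distance(cloud, reference):
    """Mean over `cloud` of each point's distance to its nearest
    neighbour in `reference` (brute force, O(n*m))."""
    return sum(min(math.dist(p, q) for q in reference) for p in cloud) / len(cloud)

# Two points displaced vertically by 0.1 m and 0.3 m -> mean 0.2 m.
a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
b = [(0.0, 0.0, 0.1), (1.0, 0.0, 0.3)]
assert abs(mean_nn_distance(a, b) - 0.2) < 1e-12
```

    Note the measure is asymmetric: swapping `cloud` and `reference` generally gives a different value, which is why outlier removal before the comparison matters.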

  13. Validation of Accelerometer Cut-Points in Children With Cerebral Palsy Aged 4 to 5 Years.

    PubMed

    Keawutan, Piyapa; Bell, Kristie L; Oftedal, Stina; Davies, Peter S W; Boyd, Roslyn N

    2016-01-01

    To derive and validate triaxial accelerometer cut-points in children with cerebral palsy (CP) and compare these with previously established cut-points in children with typical development. Eighty-four children with CP aged 4 to 5 years wore the ActiGraph during a play-based gross motor function measure assessment that was video-taped for direct observation. Receiver operating characteristic and Bland-Altman plots were used for analyses. The ActiGraph had good classification accuracy in Gross Motor Function Classification System (GMFCS) levels III and V and fair classification accuracy in GMFCS levels I, II, and IV. These results support the use of the previously established cut-points for sedentary time of 820 counts per minute in children with CP aged 4 to 5 years across all functional abilities. The cut-point provides an objective measure of sedentary and active time in children with CP. The cut-point is applicable to group data but not for individual children.
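
    Applying such a cut-point to accelerometer output is a simple thresholding step. The sketch below assumes 1-minute epochs and treats counts at or below the cut-point as sedentary; the exact boundary handling is an assumption, not taken from the study:

```python
def classify_epochs(counts_per_min, cut_point=820):
    """Label each epoch sedentary or active against a counts-per-minute
    cut-point (820 cpm is the sedentary cut-point the study supports;
    whether the boundary itself counts as sedentary is assumed here)."""
    return ["sedentary" if c <= cut_point else "active" for c in counts_per_min]

assert classify_epochs([100, 820, 821, 2500]) == [
    "sedentary", "sedentary", "active", "active"]
```

    Summing the sedentary labels over a wear period then gives the group-level sedentary time the cut-point is validated for.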

  14. Bio-lubricants derived from waste cooking oil with improved oxidation stability and low-temperature properties.

    PubMed

    Li, Weimin; Wang, Xiaobo

    2015-01-01

    Waste cooking oil (WCO) was chemically modified via epoxidation using H2O2, followed by transesterification with methanol and branched alcohols (isooctanol, isotridecanol and isooctadecanol), to produce bio-lubricants with improved oxidative stability and low-temperature properties. Physicochemical properties of the synthesized bio-lubricants, such as pour point (PP), cloud point (CP), viscosity, viscosity index (VI), oxidative stability, and corrosion resistance, were determined according to standard methods. The synthesized bio-lubricants showed improved low-temperature flow performance compared with WCO, which can be attributed to the introduction of branched chains into their molecular structures. Moreover, the oxidation stability showed more than a 10-fold improvement over WCO due to the elimination of -C=C- bonds in the WCO molecule. Tribological performance of these bio-lubricants was also investigated using a four-ball friction and wear tester. Experimental results showed that the derivatives of WCO exhibited favorable physicochemical properties and tribological performance, making them good candidates for formulating eco-friendly lubricants.

  15. 77 FR 22840 - Petition for Waiver of Compliance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-17

    ... Control Point (CP) 55 to now include Minson siding; incorporating trackwork improvements and changes to... approved NJT's S&TC improvements between CP45 and CP70 (which includes CP Ross, Minson siding, and... future Pennsauken transfer station on the single track south of CP Ross at Milepost 4.9 will add time to...

  16. Performance Evaluation of sUAS Equipped with Velodyne HDL-32E LiDAR Sensor

    NASA Astrophysics Data System (ADS)

    Jozkow, G.; Wieczorek, P.; Karpina, M.; Walicka, A.; Borkowski, A.

    2017-08-01

    The Velodyne HDL-32E laser scanner is increasingly used as the main mapping sensor in small commercial UASs, yet there is still little information about the actual accuracy of point clouds collected with such systems. This work empirically evaluates the accuracy of the point cloud collected with such a UAS. The accuracy assessment covered four aspects: the impact of the sensors on theoretical point cloud accuracy, the quality of trajectory reconstruction, and the internal and absolute point cloud accuracies. Theoretical point cloud accuracy was evaluated by calculating the 3D position error from the known errors of the sensors used. The quality of trajectory reconstruction was assessed by comparing position and attitude differences between the forward and reverse EKF solutions. Internal and absolute accuracies were evaluated by fitting planes to 8 point cloud samples extracted from planar surfaces. In addition, the absolute accuracy was also determined by calculating 3D point distances between the UAS LiDAR point clouds and reference TLS point clouds. The test data consisted of point clouds collected in two separate flights performed over the same area. The experiments showed that in the tested UAS, trajectory reconstruction, especially attitude, has a significant impact on point cloud accuracy. The estimated absolute accuracy of the point clouds collected during both test flights was better than 10 cm; thus the investigated UAS fits the mapping-grade category.
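
    The internal-accuracy check, fitting planes to samples of planar surfaces, reduces to least-squares plane fitting and inspecting the residuals. A minimal sketch, solving the 3x3 normal equations for z = a*x + b*y + c by Cramer's rule (so it assumes near-horizontal, non-degenerate patches; tilted walls would need a PCA-based general plane fit):

```python
def fit_plane_rms(points):
    """Least-squares fit of z = a*x + b*y + c to 3D points, returning the
    coefficients and the RMS of the vertical residuals as a simple
    internal-accuracy measure."""
    n = float(len(points))
    sx = sum(x for x, y, z in points)
    sy = sum(y for x, y, z in points)
    sz = sum(z for x, y, z in points)
    sxx = sum(x * x for x, y, z in points)
    syy = sum(y * y for x, y, z in points)
    sxy = sum(x * y for x, y, z in points)
    sxz = sum(x * z for x, y, z in points)
    syz = sum(y * z for x, y, z in points)

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    r = [sxz, syz, sz]
    d = det3(A)

    def col(k):  # replace column k of A with r (Cramer's rule)
        return [[r[i] if j == k else A[i][j] for j in range(3)] for i in range(3)]

    a, b, c = det3(col(0)) / d, det3(col(1)) / d, det3(col(2)) / d
    rms = (sum((z - (a * x + b * y + c)) ** 2 for x, y, z in points) / n) ** 0.5
    return (a, b, c), rms
```

    For a truly planar patch the residual RMS reflects only sensor noise and trajectory error, which is what makes it usable as an internal accuracy figure.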

  17. Mitigating cold flow problems of biodiesel: Strategies with additives

    NASA Astrophysics Data System (ADS)

    Mohanan, Athira

    The present thesis explores the cold flow properties of biodiesel and the effect of vegetable oil derived compounds on the crystallization path, as well as the mechanisms at play at different stages and length scales. Model systems including triacylglycerol (TAG) oils and their derivatives, and a polymer, were tested with biodiesel. The goal was to acquire the fundamental knowledge that would help design cold flow improver (CFI) additives that effectively and simultaneously address the flow problems of biodiesel, particularly the cloud point (CP) and pour point (PP). The compounds were revealed to be fundamentally vegetable oil crystallization modifiers (VOCM), and the polymer was confirmed to be a pour point depressant (PPD). The results obtained with the VOCMs indicate that two cis-unsaturated moieties combined with a trans-/saturated fatty acid form a critical structural architecture for depressing the crystallization onset, by a mechanism wherein the straight chain promotes a first packing with the linear saturated FAMEs while the kinked moieties prevent further crystallization. The study of model binary systems made of a VOCM and a saturated FAME with DSC, XRD and PLM provided a complete phase diagram, including the thermal transformation lines, crystal structure and microstructure that impact the phase composition along the different crystallization stages, and elicited the competing effects of molecular mass, chain length mismatch and isomerism. The liquid-solid boundary is discussed in light of a simple thermodynamic model based on the Hildebrand equation and pair interactions. In order to test for synergies, the PP and CP of a biodiesel (Soy1500) supplemented with several VOCM and PLMA binary cocktails were measured using a specially designed method inspired by ASTM standards. The results were impressive: the combination of additives depressed CP and PP better than any single additive.
    The PLM and DSC results suggest that the cocktail additives are most effective when the right molecular structure and optimal concentration are provided. The cocktail mixture then achieves tiny crystals that are prevented from aggregating over an extended temperature range. The results of the study can be directly used for the design of functional and economical CFIs from vegetable oils and their derivatives.

  18. A shape-based segmentation method for mobile laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Dong, Zhen

    2013-07-01

    Segmentation of mobile laser point clouds of urban scenes into objects is an important step for post-processing (e.g., interpretation) of point clouds. Point clouds of urban scenes contain numerous objects with significant size variability, complex and incomplete structures, and holes or variable point densities, raising great challenges for segmentation. This paper addresses these challenges by proposing a shape-based segmentation method. The proposed method first calculates the optimal neighborhood size of each point to derive its associated geometric features, and then classifies the point clouds according to those features using support vector machines (SVMs). Second, a set of rules is defined to segment the classified point clouds, and a similarity criterion for segments is proposed to overcome over-segmentation. Finally, the segmentation output is merged based on topological connectivity into a meaningful geometrical abstraction. The proposed method has been tested on point clouds of two urban scenes obtained by different mobile laser scanners. The results show that the proposed method segments large-scale mobile laser point clouds with good accuracy and computationally efficient run time, and that it segments pole-like objects particularly well.

  19. Privacy-Preserving Location-Based Service Scheme for Mobile Sensing Data.

    PubMed

    Xie, Qingqing; Wang, Liangmin

    2016-11-25

    With the wide use of mobile sensing applications, more and more location-embedded data are collected and stored in mobile clouds, such as iCloud, Samsung cloud, etc. Using these data, the cloud service provider (CSP) can provide location-based services (LBS) for users. However, the mobile cloud is untrustworthy, and privacy concerns force sensitive locations to be stored on it in encrypted form, which makes providing efficient LBS over these data a great challenge. To solve this problem, we propose a privacy-preserving LBS scheme for mobile sensing data, based on the RSA (Rivest-Shamir-Adleman) algorithm and the ciphertext-policy attribute-based encryption (CP-ABE) scheme. The mobile cloud can perform location distance computation and comparison efficiently for authorized users, without leaking location privacy. Finally, theoretical security analysis and experimental evaluation demonstrate that our scheme is secure against the chosen plaintext attack (CPA) and efficient enough for practical applications in terms of user-side computation overhead.

  20. Privacy-Preserving Location-Based Service Scheme for Mobile Sensing Data †

    PubMed Central

    Xie, Qingqing; Wang, Liangmin

    2016-01-01

    With the wide use of mobile sensing applications, more and more location-embedded data are collected and stored in mobile clouds, such as iCloud, Samsung cloud, etc. Using these data, the cloud service provider (CSP) can provide location-based services (LBS) for users. However, the mobile cloud is untrustworthy, and privacy concerns force sensitive locations to be stored on it in encrypted form, which makes providing efficient LBS over these data a great challenge. To solve this problem, we propose a privacy-preserving LBS scheme for mobile sensing data, based on the RSA (Rivest-Shamir-Adleman) algorithm and the ciphertext-policy attribute-based encryption (CP-ABE) scheme. The mobile cloud can perform location distance computation and comparison efficiently for authorized users, without leaking location privacy. Finally, theoretical security analysis and experimental evaluation demonstrate that our scheme is secure against the chosen plaintext attack (CPA) and efficient enough for practical applications in terms of user-side computation overhead. PMID:27897984

  1. LSAH: a fast and efficient local surface feature for point cloud registration

    NASA Astrophysics Data System (ADS)

    Lu, Rongrong; Zhu, Feng; Wu, Qingxiao; Kong, Yanzi

    2018-04-01

    Point cloud registration is a fundamental task in high-level three-dimensional applications. Noise, uneven point density and varying point cloud resolutions are the three main challenges for point cloud registration. In this paper, we design a robust and compact local surface descriptor called the Local Surface Angles Histogram (LSAH) and propose an effective coarse-to-fine algorithm for point cloud registration. The LSAH descriptor is formed by concatenating five normalized sub-histograms into one histogram, each sub-histogram created by accumulating a different type of angle from a local surface patch. The experimental results show that LSAH is more robust to uneven point density and varying point cloud resolutions than four state-of-the-art local descriptors in terms of feature matching. Moreover, we tested our LSAH-based coarse-to-fine algorithm for point cloud registration; the experimental results demonstrate that the algorithm is both robust and efficient.
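
    While the paper's exact five angle types are not reproduced here, the normalized sub-histogram that LSAH concatenates can be sketched generically; the bin count and angle range below are assumptions, not the paper's parameters:

```python
import math

def angle_histogram(angles, bins=12, lo=0.0, hi=math.pi):
    """Accumulate angles (radians, assumed within [lo, hi]) into a
    normalized histogram -- the building block a descriptor like LSAH
    concatenates several of, one per angle type on the surface patch."""
    h = [0] * bins
    for a in angles:
        idx = min(int((a - lo) / (hi - lo) * bins), bins - 1)
        h[idx] += 1
    total = len(angles) or 1
    return [v / total for v in h]

def lsah_like(angle_lists, bins=12):
    """Concatenate one normalized sub-histogram per angle list into a
    single descriptor vector (5 lists -> a 5 * bins descriptor)."""
    desc = []
    for angles in angle_lists:
        desc.extend(angle_histogram(angles, bins))
    return desc

# Three angles over two bins: one falls in the first half, two in the second.
assert angle_histogram([0.0, math.pi / 2, math.pi], bins=2) == [1 / 3, 2 / 3]
```

    Normalizing each sub-histogram by its point count is what gives such descriptors their robustness to uneven point density and resolution: two patches sampled at different densities still produce comparable vectors.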

  2. Developing control points for halal slaughtering of poultry.

    PubMed

    Shahdan, I A; Regenstein, J M; Shahabuddin, A S M; Rahman, M T

    2016-07-01

    Halal (permissible or lawful) poultry meat production must meet industry, economic, and production needs, and government health requirements without compromising the Islamic religious requirements derived from the Qur'an and the Hadiths (the actions and sayings of the Prophet Muhammad, peace and blessings be upon him). Halal certification authorities may vary in their interpretation of these teachings, which leads to differences in halal slaughter requirements. The current study proposes 6 control points (CP) for halal poultry meat production based on the most commonly used halal production systems. CP 1 describes what is allowed and prohibited, such as blood and animal manure, and feed ingredients for halal poultry meat production. CP 2 describes the requirements for humane handling during lairage. CP 3 describes different methods for immobilizing poultry, when immobilization is used, such as water bath stunning. CP 4 describes the importance of intention, details of the halal slaughter, and the equipment permitted. CP 5 and CP 6 describe the requirements after the neck cut has been made such as the time needed before the carcasses can enter the scalding tank, and the potential for meat adulteration with fecal residues and blood. It is important to note that the proposed halal CP program is presented as a starting point for any individual halal certifying body to improve its practices. © 2016 Poultry Science Association Inc.

  3. Validation of accelerometer cut points in toddlers with and without cerebral palsy.

    PubMed

    Oftedal, Stina; Bell, Kristie L; Davies, Peter S W; Ware, Robert S; Boyd, Roslyn N

    2014-09-01

    The purpose of this study was to validate uni- and triaxial ActiGraph cut points for sedentary time in toddlers with cerebral palsy (CP) and typically developing children (TDC). Children (n = 103, 61 boys, mean age = 2 yr, SD = 6 months, range = 1 yr 6 months-3 yr) were divided into calibration (n = 65) and validation (n = 38) samples with separate analyses for TDC (n = 28) and ambulant (Gross Motor Function Classification System I-III, n = 51) and nonambulant (Gross Motor Function Classification System IV-V, n = 25) children with CP. An ActiGraph was worn during a videotaped assessment. Behavior was coded as sedentary or nonsedentary. Receiver operating characteristic-area under the curve analysis determined the classification accuracy of accelerometer data. Predictive validity was determined using the Bland-Altman analysis. Classification accuracy for uniaxial data was fair for the ambulatory CP and TDC group but poor for the nonambulatory CP group. Triaxial data showed good classification accuracy for all groups. The uniaxial ambulatory CP and TDC cut points significantly overestimated sedentary time (bias = -10.5%, 95% limits of agreement [LoA] = -30.2% to 9.1%; bias = -17.3%, 95% LoA = -44.3% to 8.3%). The triaxial ambulatory and nonambulatory CP and TDC cut points provided accurate group-level measures of sedentary time (bias = -1.5%, 95% LoA = -20% to 16.8%; bias = 2.1%, 95% LoA = -17.3% to 21.5%; bias = -5.1%, 95% LoA = -27.5% to 16.1%). Triaxial accelerometers provide useful group-level measures of sedentary time in children with CP across the spectrum of functional abilities and TDC. Uniaxial cut points are not recommended.

  4. The Segmentation of Point Clouds with K-Means and ANN (Artificial Neural Network)

    NASA Astrophysics Data System (ADS)

    Kuçak, R. A.; Özdemir, E.; Erol, S.

    2017-05-01

    Segmentation of point clouds has recently been used in many Geomatics Engineering applications, such as building extraction in urban areas, Digital Terrain Model (DTM) generation, and road or urban furniture extraction. Segmentation is the process of dividing point clouds according to their special characteristic layers. The present paper discusses point cloud segmentation with K-means and the self-organizing map (SOM), a type of ANN (Artificial Neural Network) algorithm. Point clouds generated with the photogrammetric method and with a Terrestrial Lidar System (TLS) were segmented according to surface normal, intensity and curvature, and the results were evaluated. LIDAR (Light Detection and Ranging) and photogrammetry are commonly used to obtain point clouds in many remote sensing and geodesy applications; with either method, point clouds can be obtained from terrestrial or airborne systems. In this study, the LIDAR measurements were made with a Leica C10 laser scanner. In the photogrammetric method, the point cloud was obtained from photographs taken from the ground with a 13 MP non-metric camera.
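
    A minimal k-means sketch over per-point feature vectors (e.g. surface normal components, intensity, curvature, as used above). The deterministic even-spread seeding is a simplification for reproducibility; typical implementations seed randomly:

```python
import math

def kmeans(points, k, iters=50):
    """Plain k-means for segmenting points by feature vectors. Seeds are
    spread evenly through the input instead of drawn at random, to keep
    the sketch deterministic."""
    centers = points[:: max(1, len(points) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: math.dist(p, centers[j]))
            clusters[nearest].append(p)
        # Move each center to its cluster mean; keep empty clusters' centers.
        centers = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
    return centers, clusters

# Two well-separated feature blobs split cleanly into two segments.
pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),
       (10.0, 10.0), (10.1, 10.0), (10.0, 10.1)]
centers, clusters = kmeans(pts, 2)
assert sorted(len(c) for c in clusters) == [3, 3]
```

    In practice the feature dimensions (normal, intensity, curvature) have very different scales, so they are usually normalized before clustering; a SOM adds a topology over the cluster centers on top of this basic scheme.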

  5. On mechanisms separating stars into normal and chemically peculiar

    NASA Astrophysics Data System (ADS)

    Glagolevskij, Yu. V.

    2017-10-01

    The paper argues in favor of the assumption that magnetic and non-magnetic protostars, from which CP stars were formed, are the objects that had rotation velocities of the parent cloud V smaller than a critical value V c . At V greater than the critical value, differential rotation emerges in the collapsing protostellar cloud, which twists magnetic lines of force into an' invisible' toroidal shape and disturbs the stability of the atmosphere. In magnetic protostars, the loss of angular momentum is due to magnetic braking, while in metallic protostars, the loss of rotation momentum occurs due to tidal interactions with a close component. HgMn stars are most likely not affected by some braking mechanism, but originated from the slowest protostellar rotators. The boundary of V c where the differential rotation occurs is not sharp. The slower the protostar rotates, the greater the probability of suppressing the differential rotation and the more likely the possibility of CP star birth.

  6. Applicability Analysis of Cloth Simulation Filtering Algorithm for Mobile LIDAR Point Cloud

    NASA Astrophysics Data System (ADS)

    Cai, S.; Zhang, W.; Qi, J.; Wan, P.; Shao, J.; Shen, A.

    2018-04-01

Classifying the original point clouds into ground and non-ground points is a key step in LiDAR (light detection and ranging) data post-processing. The cloth simulation filtering (CSF) algorithm, which is based on a physical process, has been validated as an accurate, automatic and easy-to-use algorithm for airborne LiDAR point clouds. As a new technique of three-dimensional data collection, mobile laser scanning (MLS) has gradually been applied in various fields, such as reconstruction of digital terrain models (DTM), 3D building modeling, and forest inventory and management. Compared with airborne LiDAR point clouds, mobile LiDAR point clouds have different characteristics (such as point density, distribution and complexity). Some filtering algorithms designed for airborne LiDAR data have been applied directly to mobile LiDAR point clouds, but they did not give satisfactory results. In this paper, we explore the ability of the CSF algorithm on mobile LiDAR point clouds. Three samples with different terrain shapes are selected to test the performance of this algorithm, which respectively yield total errors of 0.44 %, 0.77 % and 1.20 %. Additionally, a large-area dataset is also tested to further validate the effectiveness of this algorithm, and the results show that it can quickly and accurately separate point clouds into ground and non-ground points. In summary, this algorithm is efficient and reliable for mobile LiDAR point clouds.
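The total errors quoted above are a standard ground-filtering evaluation metric; a minimal sketch of how Type I, Type II and total errors are computed from a reference classification (our own illustration, not code from the CSF paper):

```python
import numpy as np

def filtering_errors(truth_ground, pred_ground):
    """Type I, Type II, and total error for a ground/non-ground classification.

    truth_ground, pred_ground: boolean arrays, True = ground point.
    Type I  = ground points wrongly rejected as non-ground.
    Type II = non-ground points wrongly accepted as ground.
    """
    truth_ground = np.asarray(truth_ground, bool)
    pred_ground = np.asarray(pred_ground, bool)
    n = truth_ground.size
    type1 = np.sum(truth_ground & ~pred_ground) / max(truth_ground.sum(), 1)
    type2 = np.sum(~truth_ground & pred_ground) / max((~truth_ground).sum(), 1)
    total = np.sum(truth_ground != pred_ground) / n
    return type1, type2, total

truth = np.array([1, 1, 1, 1, 0, 0, 0, 0], bool)
pred  = np.array([1, 1, 1, 0, 0, 0, 0, 1], bool)
t1, t2, tot = filtering_errors(truth, pred)  # 0.25, 0.25, 0.25
```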

  7. Investigating the Accuracy of Point Clouds Generated for Rock Surfaces

    NASA Astrophysics Data System (ADS)

    Seker, D. Z.; Incekara, A. H.

    2016-12-01

Point clouds produced by different techniques are widely used to model rocks and to obtain properties of rock surfaces such as roughness, volume and area. These point clouds can be generated by laser scanning and close-range photogrammetry. Laser scanning is the most common method: the laser scanner produces a 3D point cloud at regular intervals. In close-range photogrammetry, a point cloud can be produced from photographs taken in appropriate conditions, depending on the available hardware and software. Much photogrammetric software, open source or otherwise, currently supports point cloud generation. The two methods are close to each other in terms of accuracy: sufficient accuracy in the mm and cm range can be obtained with a qualified digital camera or laser scanner. In both methods, field work is completed in less time than with conventional techniques. In close-range photogrammetry, any part of a rock surface can be completely represented owing to overlapping oblique photographs. In contrast to the proximity of the data, the two methods are quite different in terms of cost. In this study, whether point clouds produced from photographs can be used instead of point clouds produced by a laser scanner is investigated. For this purpose, rock surfaces with complex and irregular shapes located on the İstanbul Technical University Ayazaga Campus were selected as the study object. The selected object is a mixture of different rock types and consists of both partly weathered and fresh parts. The study was performed on part of a 30 m x 10 m rock surface. 2D (area-based) and 3D (volume-based) analyses were performed for several regions selected from the point clouds of the surface models. The analyses showed that the point clouds from the two methods are similar and can be used as alternatives to each other. This proved that a point cloud produced from photographs, which is both economical and faster to produce, can be used in several studies instead of a point cloud produced by a laser scanner.

  8. Study on ice cloud optical thickness retrieval with MODIS IR spectral bands

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Li, Jun

    2005-01-01

The operational Moderate-Resolution Imaging Spectroradiometer (MODIS) products for cloud properties such as cloud-top pressure (CTP), effective cloud amount (ECA), cloud particle size (CPS), cloud optical thickness (COT), and cloud phase (CP) have been available to users globally. An approach to retrieve COT is investigated using the MODIS infrared (IR) window spectral bands (8.5 μm, 11 μm, and 12 μm). The COT retrieval from MODIS IR bands has the potential to provide microphysical properties with high spatial resolution at night. The results are compared with those from operational MODIS products derived from the visible (VIS) and near-infrared (NIR) bands during the day. The sensitivity of COT to MODIS spectral brightness temperature (BT) and BT difference (BTD) values is studied. A look-up table is created from a cloudy radiative transfer model accounting for cloud absorption and scattering for the cloud microphysical property retrieval. The potential applications and limitations are also discussed. This algorithm can be applied to future imager systems such as the Visible/Infrared Imager/Radiometer Suite (VIIRS) on the National Polar-orbiting Operational Environmental Satellite System (NPOESS) and the Advanced Baseline Imager (ABI) on the Geostationary Operational Environmental Satellite (GOES)-R.

  9. Explicit and Observation-based Aerosol Treatment in Tropospheric NO2 Retrieval over China from the Ozone Monitoring Instrument

    NASA Astrophysics Data System (ADS)

    Liu, M.; Lin, J.; Boersma, F.; Pinardi, G.; Wang, Y.; Chimot, J.; Wagner, T.; Xie, P.; Eskes, H.; Van Roozendael, M.; Hendrick, F.

    2017-12-01

Satellite retrieval of vertical column densities (VCDs) of tropospheric nitrogen dioxide (NO2) is influenced substantially by aerosols. Aerosols affect the retrieval of the "effective cloud fraction (CF)" and "effective cloud top pressure (CP)" that are used in the subsequent NO2 retrieval to account for the presence of clouds, and aerosol properties and vertical distributions directly affect the NO2 air mass factor (AMF) calculations. Our published POMINO algorithm uses a parallelized LIDORT-driven AMFv6 code to derive CF, CP and NO2 VCD. Daily information on aerosol optical properties is taken from GEOS-Chem simulations, with aerosol optical depth (AOD) further constrained by monthly MODIS AOD. However, the published algorithm does not include an observation-based constraint on aerosol vertical distribution. Here we construct a monthly climatological dataset of observed aerosol extinction profiles, based on Level-2 CALIOP data over 2007-2015, to further constrain aerosol vertical distributions. GEOS-Chem captures the temporal variations of CALIOP aerosol layer heights (ALH) but shows an overall underestimate of about 0.3 km. It tends to overestimate the aerosol extinction by 10% below 2 km and to underestimate it by 30% above 2 km, leading to a low bias of 10-30% in the retrieved tropospheric NO2 VCD. After adjusting GEOS-Chem aerosol extinction profiles with the CALIOP monthly ALH climatology, the retrieved NO2 VCDs increase by 4-16% over China on a monthly basis in 2012. The improved NO2 VCDs are better correlated with independent MAX-DOAS observations at three sites than POMINO and DOMINO are; especially for the polluted cases, R2 reaches 0.76 for the adjusted POMINO, much higher than that for the published POMINO (0.68) and DOMINO (0.38). The newly retrieved CP increases by 60 hPa on average, because of a stronger aerosol screening effect.
Compared to the CF used in DOMINO, which implicitly includes aerosol information, our improved CF is much lower and can reach a value of zero on actual cloud-free days. Overall, constraining aerosol vertical profiles greatly improves the retrievals of clouds and NO2 VCDs from satellite remote sensing. Our algorithm can be applied, with minimum modifications, to formaldehyde, sulfur dioxide and other species with similar retrieval methodologies.

  10. LiDAR Point Cloud and Stereo Image Point Cloud Fusion

    DTIC Science & Technology

    2013-09-01

LiDAR point cloud (right) highlighting linear edge features ideal for automatic registration. The stereo pair with the least amount of automatic correlation errors was used; Figure 12 shows the coverage of the WV1 stereo triplet.

  11. LIDAR Point Cloud Data Extraction and Establishment of 3D Modeling of Buildings

    NASA Astrophysics Data System (ADS)

    Zhang, Yujuan; Li, Xiuhai; Wang, Qiang; Liu, Jiang; Liang, Xin; Li, Dan; Ni, Chundi; Liu, Yan

    2018-01-01

This paper uses Shepard's method to process the original LIDAR point cloud data and generate a regular-grid DSM, filters ground and non-ground points with a double least squares method, and obtains the regularized DSM. A region growing method is then used to segment the regularized DSM and remove non-building points, yielding the building point cloud. The Canny operator is used to extract the building edges needed after segmentation, and Hough transform line detection is used to extract regular, smooth and uniform building edges. Finally, the E3De3 software is used to establish the 3D model of the buildings.
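Shepard's method mentioned above is inverse-distance-weighted interpolation of scattered points onto a regular grid; a brute-force sketch (grid cell size and the global, radius-free weighting are illustrative simplifications):

```python
import numpy as np

def shepard_grid(xyz, cell, power=2):
    """Resample scattered (x, y, z) points to a regular grid with Shepard's
    inverse-distance weighting. Minimal version: every grid node is the
    IDW-weighted mean height of all points (a real implementation would
    restrict the sum to a search radius)."""
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    xs = np.arange(x.min(), x.max() + cell, cell)
    ys = np.arange(y.min(), y.max() + cell, cell)
    gx, gy = np.meshgrid(xs, ys)
    dsm = np.empty(gx.shape)
    for i in range(gx.shape[0]):
        for j in range(gx.shape[1]):
            d = np.hypot(x - gx[i, j], y - gy[i, j])
            if d.min() < 1e-12:            # node coincides with a data point
                dsm[i, j] = z[d.argmin()]
            else:
                w = 1.0 / d ** power
                dsm[i, j] = np.sum(w * z) / np.sum(w)
    return xs, ys, dsm

# a flat plane at z = 5 should interpolate to ~5 everywhere
pts = np.array([[0, 0, 5.0], [1, 0, 5.0], [0, 1, 5.0], [1, 1, 5.0]])
xs, ys, dsm = shepard_grid(pts, cell=0.5)
```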

  12. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Wenyang; Cheung, Yam; Sawant, Amit

    2016-05-15

Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared error, by comparing its reconstruction results against that from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. Conclusions: The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantifications.
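The core idea of approximating a flattened target point cloud as a sparse linear combination of training clouds can be illustrated with a generic sparse solver; the sketch below uses orthogonal matching pursuit as a stand-in (the paper's actual solver, priors and data are not reproduced here):

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal matching pursuit (a stand-in sparse solver): y ≈ D @ c with
    at most n_nonzero nonzero coefficients. Columns of D play the role of
    flattened training clouds; y is the flattened target cloud, points
    assumed already in correspondence (e.g. via ICP)."""
    support, coef = [], np.zeros(D.shape[1])
    residual = y.astype(float).copy()
    for _ in range(n_nonzero):
        # greedily pick the column most correlated with the current residual
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k not in support:
            support.append(k)
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    coef[support] = sol
    return coef

# orthonormal dictionary -> exact recovery of a 2-sparse combination
rng = np.random.default_rng(0)
D, _ = np.linalg.qr(rng.standard_normal((60, 10)))
truth = np.zeros(10)
truth[[2, 7]] = [1.5, -0.8]
y = D @ truth
c = omp(D, y, n_nonzero=2)   # c recovers truth exactly here
```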

  13. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system.

    PubMed

    Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan

    2016-05-01

To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared error, by comparing its reconstruction results against that from the variational method. On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions.
The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantifications.

  14. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system

    PubMed Central

    Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan

    2016-01-01

Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared error, by comparing its reconstruction results against that from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time.
On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. Conclusions: The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantifications. PMID:27147347

  15. Automatic Registration of TLS-TLS and TLS-MLS Point Clouds Using a Genetic Algorithm

    PubMed Central

    Yan, Li; Xie, Hong; Chen, Changjun

    2017-01-01

Registration of point clouds is a fundamental issue in Light Detection and Ranging (LiDAR) remote sensing because point clouds scanned from multiple scan stations or by different platforms need to be transformed to a uniform coordinate reference frame. This paper proposes an efficient registration method based on a genetic algorithm (GA) for automatic alignment of two terrestrial LiDAR scanning (TLS) point clouds (TLS-TLS point clouds) and alignment between TLS and mobile LiDAR scanning (MLS) point clouds (TLS-MLS point clouds). The scanning station position acquired by the TLS built-in GPS and the quasi-horizontal orientation of the LiDAR sensor in data acquisition are used as constraints to narrow the search space in the GA. A new fitness function to evaluate the solutions for the GA, named the Normalized Sum of Matching Scores, is proposed for accurate registration. Our method is divided into five steps: selection of matching points, initialization of the population, transformation of matching points, calculation of fitness values, and genetic operation. The method is verified using a TLS-TLS data set and a TLS-MLS data set. The experimental results indicate that the RMSE of registration of TLS-TLS point clouds is 3-5 mm, and that of TLS-MLS point clouds is 2-4 cm. A registration integrating the well-known ICP with the GA is further proposed to accelerate the optimization, and its optimizing time decreases by about 50%. PMID:28850100
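As a hedged toy version of GA-based registration: the sketch below evolves a 2D rigid transform (theta, tx, ty) with elitist selection and Gaussian mutation, using mean squared distance between pre-matched point pairs as the fitness (the paper's Normalized Sum of Matching Scores, 3D parameterization and GPS constraints are not reproduced).

```python
import numpy as np

def rigid2d(params, pts):
    """Apply a 2D rigid transform (theta, tx, ty) to an (n, 2) array of points."""
    th, tx, ty = params
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    return pts @ R.T + [tx, ty]

def ga_register(src, dst, pop=80, gens=200, elite=20, seed=0):
    """Toy GA over (theta, tx, ty); fitness is the mean squared distance
    between matched point pairs (correspondences assumed given, as with
    pre-selected matching points)."""
    rng = np.random.default_rng(seed)
    cost = lambda p: np.mean(np.sum((rigid2d(p, src) - dst) ** 2, axis=1))
    population = rng.uniform([-np.pi, -2, -2], [np.pi, 2, 2], (pop, 3))
    for g in range(gens):
        order = np.argsort([cost(p) for p in population])
        parents = population[order[:elite]]              # elitism: keep the best
        sigma = 0.5 * 0.04 ** (g / (gens - 1))           # mutation decays 0.5 -> 0.02
        picks = rng.integers(0, elite, size=pop - elite)
        children = parents[picks] + rng.normal(0.0, sigma, (pop - elite, 3))
        population = np.vstack([parents, children])
    best = min(population, key=cost)
    return best, cost(best)

rng = np.random.default_rng(1)
src = rng.uniform(-1, 1, (30, 2))
true_params = np.array([0.3, 1.0, -0.5])
dst = rigid2d(true_params, src)
params, err = ga_register(src, dst)
```

With elitism the best cost is non-increasing, so the final error is small even though the search itself is stochastic.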

  16. Automatic Registration of TLS-TLS and TLS-MLS Point Clouds Using a Genetic Algorithm.

    PubMed

    Yan, Li; Tan, Junxiang; Liu, Hua; Xie, Hong; Chen, Changjun

    2017-08-29

Registration of point clouds is a fundamental issue in Light Detection and Ranging (LiDAR) remote sensing because point clouds scanned from multiple scan stations or by different platforms need to be transformed to a uniform coordinate reference frame. This paper proposes an efficient registration method based on a genetic algorithm (GA) for automatic alignment of two terrestrial LiDAR scanning (TLS) point clouds (TLS-TLS point clouds) and alignment between TLS and mobile LiDAR scanning (MLS) point clouds (TLS-MLS point clouds). The scanning station position acquired by the TLS built-in GPS and the quasi-horizontal orientation of the LiDAR sensor in data acquisition are used as constraints to narrow the search space in the GA. A new fitness function to evaluate the solutions for the GA, named the Normalized Sum of Matching Scores, is proposed for accurate registration. Our method is divided into five steps: selection of matching points, initialization of the population, transformation of matching points, calculation of fitness values, and genetic operation. The method is verified using a TLS-TLS data set and a TLS-MLS data set. The experimental results indicate that the RMSE of registration of TLS-TLS point clouds is 3-5 mm, and that of TLS-MLS point clouds is 2-4 cm. A registration integrating the well-known ICP with the GA is further proposed to accelerate the optimization, and its optimizing time decreases by about 50%.

  17. Automatic Classification of Trees from Laser Scanning Point Clouds

    NASA Astrophysics Data System (ADS)

    Sirmacek, B.; Lindenbergh, R.

    2015-08-01

Development of laser scanning technologies has promoted tree monitoring studies to a new level, as laser scanning point clouds enable accurate 3D measurements in a fast and environmentally friendly manner. In this paper, we introduce a probability matrix computation based algorithm for automatically classifying laser scanning point clouds into 'tree' and 'non-tree' classes. Our method uses the 3D coordinates of the laser scanning points as input and generates a new point cloud which holds a label for each point indicating whether it belongs to the 'tree' or 'non-tree' class. To do so, a grid surface is assigned to the lowest height level of the point cloud. The grid cells are filled with probability values which are calculated by checking the point density above each cell. Since tree trunk locations appear as very high values in the probability matrix, selecting the local maxima of the grid surface helps to detect the tree trunks. Further points are assigned to tree trunks if they appear in close proximity to trunks. Since heavy mathematical computations (such as point cloud organization, detailed 3D shape detection methods, graph network generation) are not required, the proposed algorithm works very fast compared to existing methods. The tree classification results are found to be reliable even on point clouds of cities containing many different objects. As the most significant weakness, false detection of light poles, traffic signs and other objects close to trees cannot be prevented. Nevertheless, the experimental results on mobile and airborne laser scanning point clouds indicate the possible usage of the algorithm as an important step for tree growth observation, tree counting and similar applications. While laser scanning point clouds give the opportunity to classify even very small trees, the accuracy of the results is reduced in low point density areas farther away from the scanning location. The advantages and disadvantages of the two laser scanning point cloud sources are discussed in detail.
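The grid-density/local-maxima idea above can be sketched as follows; the cell size, the 0.5 threshold and the 3x3 neighbourhood are illustrative assumptions, not values from the paper:

```python
import numpy as np

def detect_trunks(xyz, cell=1.0):
    """Probability-matrix sketch: grid the ground plane, count the points
    above each cell, normalise, and take local maxima as trunk cells."""
    x, y = xyz[:, 0], xyz[:, 1]
    ix = ((x - x.min()) / cell).astype(int)
    iy = ((y - y.min()) / cell).astype(int)
    P = np.zeros((ix.max() + 1, iy.max() + 1))
    np.add.at(P, (ix, iy), 1.0)          # point count per cell
    P /= P.max()                         # probability-like normalisation
    trunks = []
    for i in range(P.shape[0]):
        for j in range(P.shape[1]):
            window = P[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
            if P[i, j] > 0 and P[i, j] == window.max() and P[i, j] >= 0.5:
                trunks.append((i, j))
    return P, trunks

# two dense vertical "trunks" plus sparse noise points
rng = np.random.default_rng(0)
trunk1 = np.column_stack([np.full(200, 2.5), np.full(200, 2.5), rng.uniform(0, 10, 200)])
trunk2 = np.column_stack([np.full(200, 7.5), np.full(200, 7.5), rng.uniform(0, 10, 200)])
noise = rng.uniform(0, 10, (50, 3))
P, trunks = detect_trunks(np.vstack([trunk1, trunk2, noise]))
```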

  18. Georeferencing UAS Derivatives Through Point Cloud Registration with Archived Lidar Datasets

    NASA Astrophysics Data System (ADS)

    Magtalas, M. S. L. Y.; Aves, J. C. L.; Blanco, A. C.

    2016-10-01

Georeferencing gathered images is a common step before performing spatial analysis and other processes on datasets acquired using unmanned aerial systems (UAS). Methods of applying spatial information to aerial images or their derivatives include onboard GPS (Global Positioning System) geotagging and tying models to GCPs (Ground Control Points) acquired in the field. Currently, UAS derivatives are limited to meter-level accuracy when their generation is unaided by points of known position on the ground. The use of ground control points established using survey-grade GPS or GNSS receivers can greatly reduce model errors to centimeter levels. However, this comes with additional costs, not only for instrument acquisition and survey operations but also in actual time spent in the field. This study uses a workflow for cloud-based post-processing of UAS data in combination with already existing LiDAR data. The georeferencing of the UAV point cloud is executed using the Iterative Closest Point (ICP) algorithm, applied through the open-source CloudCompare software (Girardeau-Montaut, 2006) on a 'skeleton point cloud'. This skeleton point cloud consists of manually extracted features consistent in both the LiDAR and UAV data; roads and buildings with minimal deviations given the differing dates of acquisition are considered consistent. Transformation parameters are computed for the skeleton cloud and then applied to the whole UAS dataset. In addition, a separate cloud consisting of non-vegetation features automatically derived using the CANUPO classification algorithm (Brodu and Lague, 2012) was used to generate a separate set of parameters. A ground survey was done to validate the transformed clouds. An RMSE value of around 16 centimeters was found when comparing validation data to the models georeferenced using the CANUPO cloud and the manual skeleton cloud. Cloud-to-cloud distance computations for the CANUPO and manual skeleton clouds both gave values of around 0.67 meters at 1.73 standard deviation.
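A minimal point-to-point ICP of the kind applied above can be sketched as follows (brute-force nearest-neighbour matching plus a Kabsch solve; suitable only for small skeleton clouds, and not CloudCompare's actual implementation):

```python
import numpy as np

def kabsch(A, B):
    """Best-fit rotation R and translation t so that R @ a + t ≈ b for
    corresponded rows a of A and b of B."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    D = np.diag([1.0] * (A.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    return R, cb - R @ ca

def icp(src, dst, iters=30):
    """Minimal point-to-point ICP: pair each source point with its nearest
    destination point, solve Kabsch, transform, repeat."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbours (fine for small skeleton clouds)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
        matched = dst[d2.argmin(axis=1)]
        R, t = kabsch(cur, matched)
        cur = cur @ R.T + t
    return cur

rng = np.random.default_rng(0)
dst = rng.uniform(-5, 5, (100, 3))
theta = 0.03
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
src = dst @ Rz.T + [0.1, -0.05, 0.08]     # dst, slightly rotated and shifted
aligned = icp(src, dst)                   # recovers dst for this small offset
```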

  19. A scalable and multi-purpose point cloud server (PCS) for easier and faster point cloud data management and processing

    NASA Astrophysics Data System (ADS)

    Cura, Rémi; Perret, Julien; Paparoditis, Nicolas

    2017-05-01

In addition to more traditional geographical data such as images (rasters) and vectors, point cloud data are becoming increasingly available. Such data are appreciated for their precision and true three-dimensional (3D) nature. However, managing point clouds can be difficult due to scaling problems and the specificities of this data type. Several methods exist but are usually fairly specialised and solve only one aspect of the management problem. In this work, we propose a comprehensive and efficient point cloud management system based on a database server that works on groups of points (patches) rather than individual points. This system is specifically designed to cover the basic needs of point cloud users: fast loading, compressed storage, powerful patch and point filtering, easy data access and exporting, and integrated processing. Moreover, the proposed system fully integrates metadata (like sensor position) and can conjointly use point clouds with other geospatial data, such as images, vectors, topology and other point clouds. Point cloud (parallel) processing can be done in-base with fast prototyping capabilities. Lastly, the system is built on open source technologies; therefore it can be easily extended and customised. We test the proposed system with several billion points obtained from Lidar (aerial and terrestrial) and stereo-vision. We demonstrate loading speeds in the ~50 million pts/h per process range, transparent-to-the-user compression at ratios of 2:1 to 4:1 or better, patch filtering in the 0.1 to 1 s range, and output in the 0.1 million pts/s per process range, along with classical processing methods, such as object detection.

  20. Detections of Long Carbon Chains CH_{3}CCCCH, C_{6}H, LINEAR-C_{6}H_{2} and C_{7}H in the Low-Mass Star Forming Region L1527

    NASA Astrophysics Data System (ADS)

    Araki, Mitsunori; Takano, Shuro; Sakai, Nami; Yamamoto, Satoshi; Oyama, Takahiro; Kuze, Nobuhiko; Tsukiyama, Koichi

    2017-06-01

Carbon chains in the warm carbon chain chemistry (WCCC) region have been searched for in the 42-44 GHz region using the Green Bank 100 m telescope. The long carbon chains C_{7}H, C_{6}H, CH_{3}CCCCH, and linear-C_{6}H_{2}, and the cyclic species C_{3}H and C_{3}H_{2}O, have been detected in the low-mass star forming region L1527, which exhibits WCCC. C_{7}H was detected for the first time in molecular clouds. The column density of C_{7}H is derived to be 6.2 × 10^{10} cm^{-2} using the detected J = 24.5-23.5 and 25.5-24.5 rotational lines. The ^{2}Π_{1/2} electronic state of C_{6}H, lying 21.6 K above the ^{2}Π_{3/2} electronic ground state, and the K_a = 0 line of the para species of linear-C_{6}H_{2} were also detected for the first time in molecular clouds. The column densities of the ^{2}Π_{1/2} and ^{2}Π_{3/2} states of C_{6}H in L1527 were derived to be 1.6 × 10^{11} and 1.1 × 10^{12} cm^{-2}, respectively. The total column density of linear-C_{6}H_{2} is derived to be 1.86 × 10^{11} cm^{-2}. While the abundance ratios of carbon chains between L1527 and the starless dark cloud Taurus Molecular Cloud-1 Cyanopolyyne Peak (TMC-1 CP) show a decreasing trend with increasing carbon-chain length, and the column densities of CH_{3}CCCCH and C_{6}H follow this trend, the column densities of linear-C_{6}H_{2} and C_{7}H are as abundant as those in TMC-1 CP in spite of their long chain length, i.e., they do not follow the trend. The abundances of linear-C_{6}H_{2} and C_{7}H show that L1527 is as rich in long carbon chains as TMC-1 CP.

  1. Fast calculation method of computer-generated hologram using a depth camera with point cloud gridding

    NASA Astrophysics Data System (ADS)

    Zhao, Yu; Shi, Chen-Xiao; Kwon, Ki-Chul; Piao, Yan-Ling; Piao, Mei-Lan; Kim, Nam

    2018-03-01

We propose a fast calculation method for a computer-generated hologram (CGH) of real objects that uses a point cloud gridding method. The depth information of the scene is acquired using a depth camera and the point cloud model is reconstructed virtually. Because each point of the point cloud lies precisely at the coordinates of a particular layer, the points can be classified into grids according to their depth. A diffraction calculation is performed on the grids using a fast Fourier transform (FFT) to obtain the CGH. The computational complexity is reduced dramatically in comparison with conventional methods. The feasibility of the proposed method was confirmed by numerical and optical experiments.
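A hedged sketch of the gridding idea: snap each point to its nearest depth layer, then propagate each layer to the hologram plane with one FFT pair (here via the angular spectrum method). The grid size, pixel pitch, wavelength and layer count are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    """Propagate a sampled complex field a distance z (angular spectrum method)."""
    fx = np.fft.fftfreq(field.shape[0], d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))  # drop evanescent part
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

def cgh_from_layers(points, n=64, pitch=8e-6, wavelength=532e-9, n_layers=4):
    """Gridding sketch: snap each (row, col, z, amplitude) point to its nearest
    depth layer, then diffract every layer to the hologram plane with one FFT pair."""
    layer_z = np.linspace(points[:, 2].min(), points[:, 2].max(), n_layers)
    idx = np.abs(points[:, 2, None] - layer_z[None, :]).argmin(axis=1)
    holo = np.zeros((n, n), complex)
    for k, z in enumerate(layer_z):
        pts = points[idx == k]
        if len(pts) == 0:
            continue
        layer = np.zeros((n, n), complex)
        layer[pts[:, 0].astype(int), pts[:, 1].astype(int)] += pts[:, 3]
        holo += angular_spectrum(layer, wavelength, pitch, z)
    return holo

# three object points at two depths (pixel row, pixel col, depth in m, amplitude)
points = np.array([[20.0, 20.0, 0.05, 1.0],
                   [40.0, 30.0, 0.05, 0.8],
                   [32.0, 45.0, 0.10, 1.0]])
holo = cgh_from_layers(points)
```

Grouping points into shared layers is what reduces the cost: one FFT pair per occupied layer instead of one diffraction evaluation per point.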

  2. A Catalog of Architectural Tactics for Cyber-Foraging

    DTIC Science & Technology

    2015-01-06


  3. Physical and Mental Quality of Life (QOL) in Chronic Pancreatitis(CP): A Case-Control Study from the NAPS2 cohort

    PubMed Central

    Amann, Stephen T.; Yadav, Dhiraj; Barmada, M. Micheal; O’Connell, Michael; Kennard, Elizabeth D.; Anderson, Michelle; Baillie, John; Sherman, Stuart; Romagnuolo, Joseph; Hawes, Robert H.; AlKaade, Samer; Brand, Randall E.; Lewis, Michele D.; Gardner, Timothy B.; Gelrud, Andres; Money, Mary E.; Banks, Peter A.; Slivka, Adam; Whitcomb, David C

    2012-01-01

    Objectives: Define the quality of life (QOL) in chronic pancreatitis (CP) subjects. Methods: We studied 443 well phenotyped CP subjects and 611 controls prospectively enrolled from 20 US centers between 2000–2006 in the North American Pancreatitis Study 2 (NAPS2). Responses to the SF-12 questionnaire were used to calculate the Mental (MCS) and Physical (PCS) component summary scores with norm-based scoring (normal ≥50). QOL in CP subjects was compared with controls after controlling for demographic factors, drinking history, smoking and medical conditions. QOL in CP was also compared with known scores for several chronic conditions. Results: Both PCS (38±11.5 vs. 52±9.4) and MCS (44±11.5 vs. 51±9.2) were significantly lower in CP compared with controls (p<0.001). On multivariable analyses, compared to controls, a profound decrease in physical QOL (PCS 12.02 points lower) and a clinically significant decrease in mental QOL (MCS 4.24 points lower) was seen due to CP. QOL in CP was similar to (heart, kidney, liver, lung disease) or worse than (non-skin cancers, diabetes mellitus, hypertension, rheumatoid arthritis) other chronic conditions. Conclusions: The impact of CP on QOL appears substantial. The QOL in CP subjects appears to be worse than or similar to the QOL of many other chronic conditions. PMID:23357924

  4. Processing Uav and LIDAR Point Clouds in Grass GIS

    NASA Astrophysics Data System (ADS)

    Petras, V.; Petrasova, A.; Jeziorska, J.; Mitasova, H.

    2016-06-01

    Today's methods of acquiring Earth surface data, namely lidar and unmanned aerial vehicle (UAV) imagery, non-selectively collect or generate large amounts of points. Point clouds from different sources vary in their properties such as number of returns, density, or quality. We present a set of tools with applications for different types of point clouds obtained by a lidar scanner, structure from motion technique (SfM), and a low-cost 3D scanner. To take advantage of the vertical structure of multiple return lidar point clouds, we demonstrate tools to process them using 3D raster techniques which allow, for example, the development of custom vegetation classification methods. Dense point clouds obtained from UAV imagery, often containing redundant points, can be decimated using various techniques before further processing. We implemented and compared several decimation techniques in regard to their performance and the final digital surface model (DSM). Finally, we will describe the processing of a point cloud from a low-cost 3D scanner, namely Microsoft Kinect, and its application for interaction with physical models. All the presented tools are open source and integrated in GRASS GIS, a multi-purpose open source GIS with remote sensing capabilities. The tools integrate with other open source projects, specifically Point Data Abstraction Library (PDAL), Point Cloud Library (PCL), and OpenKinect libfreenect2 library to benefit from the open source point cloud ecosystem. The implementation in GRASS GIS ensures long term maintenance and reproducibility by the scientific community but also by the original authors themselves.
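
    One of the simplest decimation techniques of the kind compared in the paper is grid-based thinning: keep one representative point per occupied cell. The sketch below is illustrative (the cell size is a hypothetical parameter, and GRASS GIS offers several variants with different representative-point rules):

```python
import numpy as np

def grid_decimate(points, cell=0.5):
    """Keep the first point that falls into each occupied grid cell.
    points: (N, 3+) array; cell: cell edge length in the same units."""
    keys = np.floor(points[:, :3] / cell).astype(np.int64)
    # np.unique over rows returns the first index of each occupied cell
    _, first = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(first)]
```

    Swapping "first point" for the cell centroid or the lowest/highest point gives the other common variants, each trading detail for density differently in the resulting DSM.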

  5. a Global Registration Algorithm of the Single-Closed Ring Multi-Stations Point Cloud

    NASA Astrophysics Data System (ADS)

    Yang, R.; Pan, L.; Xiang, Z.; Zeng, H.

    2018-04-01

    Aimed at the global registration problem of a single-closed-ring, multi-station point cloud, a formula for the error of the rotation matrix is constructed according to the definition of error. A global registration algorithm for the multi-station point cloud is then derived by minimizing this rotation-matrix error, and fast-computing formulas for the transformation matrix are given together with their implementation steps and a simulation experiment scheme. Comparing three different processing schemes for multi-station point clouds, the experimental results verified the effectiveness of the new global registration method, which can effectively complete the global registration of the point cloud.
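
    The core observation behind closed-ring registration can be sketched as follows: composing the station-to-station rotations around the ring should yield the identity, so the rotation angle of the residual matrix measures the accumulated registration error. This is a simplified stand-in for the paper's error formula, not its exact definition:

```python
import numpy as np

def rot_z(a):
    """Rotation about the z axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def ring_closure_error(rotations):
    """Compose pairwise rotations around a single closed ring and return
    the rotation angle of the residual matrix (0 for perfect closure)."""
    R = np.eye(3)
    for Ri in rotations:
        R = R @ Ri
    # rotation angle from the trace: cos(theta) = (trace(R) - 1) / 2
    c = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return np.arccos(c)
```

    A global adjustment then distributes this closure error over the stations instead of letting it pile up on the last pairwise registration.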

  6. Generation of Ground Truth Datasets for the Analysis of 3d Point Clouds in Urban Scenes Acquired via Different Sensors

    NASA Astrophysics Data System (ADS)

    Xu, Y.; Sun, Z.; Boerner, R.; Koch, T.; Hoegner, L.; Stilla, U.

    2018-04-01

    In this work, we report a novel way of generating ground truth datasets for analyzing point clouds from different sensors and for the validation of algorithms. Instead of directly labeling a large amount of 3D points, which requires time-consuming manual work, a multi-resolution 3D voxel grid of the testing site is generated. Then, with the help of a set of basic labeled points from the reference dataset, we can generate a 3D labeled space of the entire testing site at different resolutions. Specifically, an octree-based voxel structure is applied to voxelize the annotated reference point cloud, by which all the points are organized in 3D grids of multiple resolutions. When automatically annotating new testing point clouds, a voting-based approach is applied to the labeled points within the multi-resolution voxels in order to assign a semantic label to the 3D space represented by each voxel. Lastly, robust line- and plane-based fast registration methods are developed for aligning point clouds obtained via various sensors. Benefiting from the labeled 3D spatial information, we can easily create new annotated 3D point clouds of different sensors of the same scene directly by considering the labels of the 3D space in which the points are located, which is convenient for the validation and evaluation of algorithms related to point cloud interpretation and semantic segmentation.
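
    The voting step can be sketched at a single resolution as below (the paper uses an octree with multiple resolutions; a flat voxel grid with a hypothetical cell size is used here for brevity):

```python
import numpy as np
from collections import Counter, defaultdict

def voxel_labels(ref_points, ref_labels, voxel=1.0):
    """Vote the labels of annotated reference points into voxels: each
    occupied voxel takes the majority label of the points inside it."""
    votes = defaultdict(Counter)
    for p, lab in zip(ref_points, ref_labels):
        key = tuple(np.floor(p / voxel).astype(int))
        votes[key][lab] += 1
    return {k: c.most_common(1)[0][0] for k, c in votes.items()}

def annotate(points, label_map, voxel=1.0, default=-1):
    """Transfer voxel labels to a new, unlabelled point cloud by lookup;
    points outside any labelled voxel get a default label."""
    return [label_map.get(tuple(np.floor(p / voxel).astype(int)), default)
            for p in points]
```

    Once the labelled 3D space exists, annotating a cloud from another sensor is a pure lookup, which is what makes the approach cheap compared to relabelling each new dataset by hand.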

  7. The One to Multiple Automatic High Accuracy Registration of Terrestrial LIDAR and Optical Images

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Hu, C.; Xia, G.; Xue, H.

    2018-04-01

    The registration of terrestrial laser point clouds with close-range images is a key step in the high-precision 3D reconstruction of cultural relics. Given the current demand for high texture resolution in this field, registering the point cloud with image data during object reconstruction leads to a one-point-cloud-to-multiple-images problem. In current commercial software, the registration of the two data types is achieved by manually segmenting the point cloud data, manually matching the point cloud with the image data, and manually selecting corresponding 2D points in the image and the point cloud. This process not only greatly reduces working efficiency but also limits the registration precision and causes seams in the colored point cloud texture. To solve these problems, this paper takes the whole-object image as intermediate data and uses matching techniques to realize an automatic one-to-one correspondence between the point cloud and multiple images. Matching the central-projection reflectance-intensity image of the point cloud against the optical images realizes automatic matching of corresponding feature points, and a Rodrigues-matrix spatial similarity transformation model with iterative weight selection realizes automatic, high-accuracy registration of the two kinds of data. This method is expected to serve the high-precision, high-efficiency automatic 3D reconstruction of cultural relics, and has both scientific research value and practical significance.
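
    Once corresponding points are matched, a spatial similarity transformation (scale, rotation, translation) can be estimated in closed form. The sketch below uses the standard SVD solution as a stand-in for the paper's Rodrigues-matrix formulation, and omits the iterative weight selection:

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate s, R, t such that dst ~= s * R @ src + t from matched
    point pairs (rows of src/dst), via SVD of the cross-covariance."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U @ Vt))  # enforce a proper rotation
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (A ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t
```

    In a robust pipeline this closed-form estimate would be wrapped in a reweighting loop so that poorly matched point pairs are progressively down-weighted.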

  8. A Secure and Verifiable Outsourced Access Control Scheme in Fog-Cloud Computing.

    PubMed

    Fan, Kai; Wang, Junxiong; Wang, Xin; Li, Hui; Yang, Yintang

    2017-07-24

    With the rapid development of big data and the Internet of things (IoT), the number of networking devices and the data volume are increasing dramatically. Fog computing, which extends cloud computing to the edge of the network, can effectively solve the bottleneck problems of data transmission and data storage. However, security and privacy challenges are also arising in the fog-cloud computing environment. Ciphertext-policy attribute-based encryption (CP-ABE) can be adopted to realize data access control in fog-cloud computing systems. In this paper, we propose a verifiable outsourced multi-authority access control scheme, named VO-MAACS. In our construction, most encryption and decryption computations are outsourced to fog devices and the computation results can be verified by using our verification method. Meanwhile, to address the revocation issue, we design an efficient user and attribute revocation method for it. Finally, analysis and simulation results show that our scheme is both secure and highly efficient.

  9. Longitudinal associations between conduct problems and depressive symptoms among girls and boys with early conduct problems.

    PubMed

    Poirier, Martine; Déry, Michèle; Temcheff, Caroline E; Toupin, Jean; Verlaan, Pierrette; Lemelin, Jean-Pascal

    2016-07-01

    Youth with conduct problems (CP) may experience high rates of depressive symptoms (DS). However, little is known about the direction of the longitudinal associations between CP and DS in this specific population. Although girls with CP appear at greater risk than boys for presenting comorbid depression, empirical research on gender differences in these associations is even sparser. The current study used autoregressive latent trajectory models to compare four perspectives with hypotheses regarding the longitudinal associations between CP and DS, while taking into account the evolution of both problems. We also examined gender differences in the longitudinal associations. A total of 345 children (40.6 % female) presenting with a high level of CP in early elementary school (mean age at study inception = 8.52; SD = .94) were evaluated annually over a four-year period (5 measurement time points). The results revealed that CP and DS were quite stable over time. Moreover, CP and DS showed strong covariation at each measurement time point, but only one significant positive cross-lagged association between the two processes, indicating that higher levels of DS at time 3 were associated with higher levels of CP 1 year later. No differences were observed in the longitudinal associations between CP and DS in boys and girls. Given the comorbidity and stability of CP and DS, these findings suggest that DS should be systematically evaluated among children with early clinically significant CP, and treatment plans should include interventions aimed at both CP and DS among children who present with both types of problems.

  10. 78 FR 68147 - Notice of Application for Approval of Discontinuance or Modification of a Railroad Signal System

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-13

    ... control system (TCS) on main tracks between Control Point (CP) Mount Morris, Milepost (MP) CC-26.2, and CP... are at CP South Kearsley, MP CC-33.54, an at-grade railroad crossing with the Grand Trunk Railway, and at CP Holly, CC-50.42, an at-grade railroad crossing with the Canadian National Railway. These...

  11. Brute Force Matching Between Camera Shots and Synthetic Images from Point Clouds

    NASA Astrophysics Data System (ADS)

    Boerner, R.; Kröhnert, M.

    2016-06-01

    3D point clouds acquired by state-of-the-art terrestrial laser scanning (TLS) techniques provide spatial information with accuracies of up to several millimetres. Unfortunately, common TLS data carry no spectral information about the covered scene. However, matching TLS data with images is important for monoplotting purposes and point cloud colouration. Well-established methods solve this issue by matching close-range images with point cloud data, either by mounting optical camera systems on top of the laser scanners or by using ground control points. The approach addressed in this paper aims at matching 2D image data with 3D point cloud data from a freely moving camera within an environment covered by a large 3D point cloud, e.g. a 3D city model. The key advantage of the free movement benefits augmented reality applications and real-time measurements. Therefore, a so-called real image, captured by a smartphone camera, has to be matched with a so-called synthetic image, which consists of 3D point cloud data back-projected to a synthetic projection centre whose exterior orientation parameters match those of the real image, assuming an ideal, distortion-free camera.
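
    Generating the synthetic image amounts to projecting the cloud through an ideal pinhole camera at the assumed exterior orientation. A minimal sketch, with hypothetical focal length and image size, and deliberately without a z-buffer or hole filling:

```python
import numpy as np

def synthetic_image(points, intensities, R, t, f=1000.0, w=640, h=480):
    """Project a 3D point cloud into a synthetic image with an ideal,
    distortion-free pinhole model. R, t map world to camera coordinates;
    f is the focal length in pixels."""
    cam = (R @ points.T).T + t
    in_front = cam[:, 2] > 0            # keep points in front of the camera
    cam, vals = cam[in_front], intensities[in_front]
    u = (f * cam[:, 0] / cam[:, 2] + w / 2).astype(int)
    v = (f * cam[:, 1] / cam[:, 2] + h / 2).astype(int)
    img = np.zeros((h, w))
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    img[v[ok], u[ok]] = vals[ok]        # later points overwrite; no z-buffer
    return img
```

    For a real pipeline, occlusion handling (z-buffering) and splatting of sparse points would be needed before the synthetic image is dense enough to match against a photograph.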

  12. An Approach of Web-based Point Cloud Visualization without Plug-in

    NASA Astrophysics Data System (ADS)

    Ye, Mengxuan; Wei, Shuangfeng; Zhang, Dongmei

    2016-11-01

    With the advances in three-dimensional laser scanning technology, the demand for visualization of massive point clouds is increasingly urgent. Until the introduction of WebGL, point cloud visualization was limited to desktop-based solutions; now several web renderers are available. This paper addresses the current issues in web-based point cloud visualization and proposes a method that requires no plug-in. The method combines ASP.NET and WebGL technologies, using the spatial database PostgreSQL to store data and the open web technologies HTML5 and CSS3 to implement the user interface; an online visualization system for 3D point clouds is developed in JavaScript with web interactions. Finally, the method is applied to a real case. Experiments prove that the new model is of great practical value and avoids the shortcomings of existing WebGIS solutions.

  13. Model for Semantically Rich Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Poux, F.; Neuville, R.; Hallot, P.; Billen, R.

    2017-10-01

    This paper proposes an interoperable model for managing high-dimensional point clouds while integrating semantics. Point clouds from sensors are a direct source of information physically describing the 3D state of the recorded environment. As such, they are an exhaustive representation of the real world at every scale: 3D reality-based spatial data. Their generation is increasingly fast, but processing routines and data models lack the knowledge to reason from information extraction rather than interpretation. The smart point cloud model developed here brings intelligence to point clouds via three connected meta-models, linking available knowledge and classification procedures to permit semantic injection. Interoperability drives the model's adaptation to potentially many applications through specialized domain ontologies. A first prototype, implemented in Python on a PostgreSQL database, makes it possible to combine semantic and spatial concepts in basic hybrid queries over different point clouds.

  14. Self-Similar Spin Images for Point Cloud Matching

    NASA Astrophysics Data System (ADS)

    Pulido, Daniel

    The rapid growth of Light Detection And Ranging (Lidar) technologies that collect, process, and disseminate 3D point clouds has allowed for increasingly accurate spatial modeling and analysis of the real world. Lidar sensors can generate massive 3D point clouds of a collection area that provide highly detailed spatial and radiometric information. However, a Lidar collection can be expensive and time consuming. Simultaneously, the growth of crowdsourced Web 2.0 data (e.g., Flickr, OpenStreetMap) has provided researchers with a wealth of freely available data sources that cover a variety of geographic areas. Crowdsourced data can be of varying quality and density. In addition, since it is typically not collected as part of a dedicated experiment but rather volunteered, when and where the data is collected is arbitrary. The integration of these two sources of geoinformation can provide researchers the ability to generate products and derive intelligence that mitigate their respective disadvantages and combine their advantages. Therefore, this research will address the problem of fusing two point clouds from potentially different sources. Specifically, we will consider two problems: scale matching and feature matching. Scale matching consists of computing feature metrics of each point cloud and analyzing their distributions to determine scale differences. Feature matching consists of defining local descriptors that are invariant to common dataset distortions (e.g., rotation and translation). Additionally, after matching the point clouds they can be registered and processed further (e.g., change detection). The objective of this research is to develop novel methods to fuse and enhance two point clouds from potentially disparate sources (e.g., Lidar and crowdsourced Web 2.0 datasets). The scope of this research is to investigate both scale and feature matching between two point clouds. 
The specific focus of this research will be in developing a novel local descriptor based on the concept of self-similarity to aid in the scale and feature matching steps. An open problem in fusion is how best to extract features from two point clouds and then perform feature-based matching. The proposed approach for this matching step is the use of local self-similarity as an invariant measure to match features. In particular, the proposed approach is to combine the concept of local self-similarity with a well-known feature descriptor, Spin Images, and thereby define "Self-Similar Spin Images". This approach is then extended to the case of matching two point clouds in very different coordinate systems (e.g., a geo-referenced Lidar point cloud and a stereo-image-derived point cloud without geo-referencing). The use of Self-Similar Spin Images is again applied to address this problem by introducing a "Self-Similar Keyscale" that matches the spatial scales of two point clouds. Another open problem is how best to detect changes in content between two point clouds. A method is proposed to find changes between two point clouds by analyzing the order statistics of the nearest neighbors between the two clouds, and thereby define the "Nearest Neighbor Order Statistic" method. Note that the well-known Hausdorff distance is a special case, being just the maximum order statistic. Therefore, by studying the entire histogram of these nearest neighbors it is expected to yield a more robust method to detect points that are present in one cloud but not the other. This approach is applied at multiple resolutions. Therefore, changes detected at the coarsest level will yield large missing targets and at finer levels will yield smaller targets.
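
    The nearest-neighbor order-statistic idea can be sketched directly: sort the nearest-neighbor distances from one cloud to the other, note that the maximum is the directed Hausdorff distance, and flag points in the upper tail as candidate changes. The quantile threshold below is a hypothetical choice for illustration:

```python
import numpy as np
from scipy.spatial import cKDTree

def nn_order_statistics(cloud_a, cloud_b):
    """Sorted nearest-neighbour distances from cloud A to cloud B.
    The last element is the directed Hausdorff distance."""
    d, _ = cKDTree(cloud_b).query(cloud_a)
    return np.sort(d)

def changed_points(cloud_a, cloud_b, quantile=0.95):
    """Flag points of A whose nearest-neighbour distance to B exceeds
    the given order statistic (threshold choice is illustrative)."""
    d, _ = cKDTree(cloud_b).query(cloud_a)
    return d > np.quantile(d, quantile)
```

    Studying the whole sorted distance vector, rather than only its maximum, is what makes the method less sensitive to a single outlier than the Hausdorff distance.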

  15. Simultaneous colour visualizations of multiple ALS point cloud attributes for land cover and vegetation analysis

    NASA Astrophysics Data System (ADS)

    Zlinszky, András; Schroiff, Anke; Otepka, Johannes; Mandlburger, Gottfried; Pfeifer, Norbert

    2014-05-01

    LIDAR point clouds hold valuable information for land cover and vegetation analysis, not only in the spatial distribution of the points but also in their various attributes. However, LIDAR point clouds are rarely used for visual interpretation, since for most users the point cloud is difficult to interpret compared to passive optical imagery. Meanwhile, point cloud viewing software allowing interactive 3D interpretation is available, but it typically displays only one attribute at a time. This results in a large number of points with the same colour, crowding the scene and often obscuring detail. We developed a scheme for mapping information from multiple LIDAR point attributes to the Red, Green, and Blue channels of a widely used LIDAR data format, channels which are otherwise mostly used to add information from imagery to create "photorealistic" point clouds. The possible combinations of parameters are therefore represented in a wide range of colours, and relative differences in individual parameter values of points can be well understood. The visualization was implemented in OPALS software, using a simple and robust batch script, and is viewer independent since the information is stored in the point cloud data file itself. In our case, the following colour channel assignment delivered the best results: echo amplitude in the Red, echo width in the Green, and normalized height above a Digital Terrain Model in the Blue channel. With correct parameter scaling (but completely without point classification), points belonging to asphalt and bare soil are dark red, low grassland and crop vegetation are bright red to yellow, shrubs and low trees are green, and high trees are blue. Depending on roof material and DTM quality, buildings are shown from red through purple to dark blue. Erroneously high or low points, or points with incorrect amplitude or echo width, usually have colours contrasting with terrain or vegetation. 
This allows efficient visual interpretation of the point cloud in planar, profile and 3D views since it reduces crowding of the scene and delivers intuitive contextual information. The resulting visualization has proved useful for vegetation analysis for habitat mapping, and can also be applied as a first step for point cloud level classification. An interactive demonstration of the visualization script is shown during poster attendance, including the opportunity to view your own point cloud sample files.
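
    The channel assignment described above can be sketched as a simple scaling step; the scaling ranges below are illustrative placeholders, not the values used by the authors:

```python
import numpy as np

def attribute_rgb(amplitude, echo_width, height_above_dtm,
                  amp_range=(0.0, 400.0), width_range=(1.0, 10.0),
                  h_range=(0.0, 30.0)):
    """Map echo amplitude -> Red, echo width -> Green and normalized
    height above the DTM -> Blue, each clipped and scaled to 8 bits."""
    def scale(x, lo, hi):
        return np.clip((x - lo) / (hi - lo) * 255, 0, 255).astype(np.uint8)
    return np.stack([scale(amplitude, *amp_range),
                     scale(echo_width, *width_range),
                     scale(height_above_dtm, *h_range)], axis=-1)
```

    Because the result is written into the ordinary RGB fields of the point file, any viewer that renders per-point colour shows the combined attributes with no special support.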

  16. Point Cloud Generation from Aerial Image Data Acquired by a Quadrocopter Type Micro Unmanned Aerial Vehicle and a Digital Still Camera

    PubMed Central

    Rosnell, Tomi; Honkavaara, Eija

    2012-01-01

    The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one of the methods is based on Bae Systems’ SOCET SET classical commercial photogrammetric software and another is built using Microsoft®’s Photosynth™ service available in the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, but also some artifacts were detected. The point clouds from the Photosynth processing were sparser and noisier, which is to a large extent due to the fact that the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and that with rigorous processing it is possible to reach results that are consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations for properties of imaging sensor, data collection and processing of UAV image data to ensure accurate point cloud generation. PMID:22368479

  17. Point cloud generation from aerial image data acquired by a quadrocopter type micro unmanned aerial vehicle and a digital still camera.

    PubMed

    Rosnell, Tomi; Honkavaara, Eija

    2012-01-01

    The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one of the methods is based on Bae Systems' SOCET SET classical commercial photogrammetric software and another is built using Microsoft(®)'s Photosynth™ service available in the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, but also some artifacts were detected. The point clouds from the Photosynth processing were sparser and noisier, which is to a large extent due to the fact that the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and that with rigorous processing it is possible to reach results that are consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations for properties of imaging sensor, data collection and processing of UAV image data to ensure accurate point cloud generation.

  18. a Fast Method for Measuring the Similarity Between 3d Model and 3d Point Cloud

    NASA Astrophysics Data System (ADS)

    Zhang, Zongliang; Li, Jonathan; Li, Xin; Lin, Yangbin; Zhang, Shanxin; Wang, Cheng

    2016-06-01

    This paper proposes a fast method for measuring the partial Similarity between a 3D Model and a 3D point Cloud (SimMC). Measuring SimMC is crucial for many point cloud-related applications such as 3D object retrieval and inverse procedural modelling. In our proposed method, the surface area of the model and the Distance from Model to point Cloud (DistMC) are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted average of the distances between points sampled from the model and the point cloud. Similarly, the Distance from point Cloud to Model (DistCM) is defined as the average of the distances between points in the point cloud and the model. In order to reduce the huge computational burden brought by the calculation of DistCM in some traditional methods, we define SimMC as the ratio of the weighted surface area of the model to DistMC. Compared to traditional SimMC measuring methods that are only able to measure global similarity, our method is capable of measuring partial similarity by employing a distance-weighted strategy. Moreover, our method is faster than other partial similarity assessment methods. We demonstrate the superiority of our method both on synthetic data and on laser scanning data.
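
    A simplified reading of these definitions can be sketched as below: DistMC as a weighted mean of distances from points sampled on the model surface to their nearest cloud point, and SimMC as the ratio of the model's surface area to DistMC, so closer clouds score higher. The uniform weighting is illustrative, not the authors' exact distance-weighted scheme:

```python
import numpy as np
from scipy.spatial import cKDTree

def sim_mc(model_samples, sample_weights, model_area, cloud):
    """SimMC sketch: model_samples are points sampled on the model
    surface, sample_weights their weights, model_area the model's
    surface area, cloud the 3D point cloud."""
    d, _ = cKDTree(cloud).query(model_samples)   # model-to-cloud distances
    dist_mc = np.average(d, weights=sample_weights)
    return model_area / (dist_mc + 1e-12)        # guard against zero distance
```

    Querying from fixed model samples into a KD-tree of the cloud avoids the per-cloud-point distance-to-surface computation that makes DistCM expensive.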

  19. Motion Estimation System Utilizing Point Cloud Registration

    NASA Technical Reports Server (NTRS)

    Chen, Qi (Inventor)

    2016-01-01

    A system and method for estimating motion of a machine are disclosed. The method may include determining a first point cloud and a second point cloud corresponding to an environment in a vicinity of the machine. The method may further include generating a first extended Gaussian image (EGI) for the first point cloud and a second EGI for the second point cloud. The method may further include determining a first EGI segment based on the first EGI and a second EGI segment based on the second EGI. The method may further include determining a first two-dimensional distribution for points in the first EGI segment and a second two-dimensional distribution for points in the second EGI segment. The method may further include estimating motion of the machine based on the first and second two-dimensional distributions.
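
    An extended Gaussian image is, in essence, a histogram of surface normal directions over the sphere. A coarse sketch with an illustrative longitude/latitude binning (real implementations often use geodesic binnings to avoid pole distortion):

```python
import numpy as np

def extended_gaussian_image(normals, n_bins=8):
    """Histogram unit surface normals over a polar-angle x azimuth grid.
    normals: (N, 3) array of (not necessarily unit) normal vectors."""
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    theta = np.arccos(np.clip(n[:, 2], -1.0, 1.0))   # polar angle in [0, pi]
    phi = np.arctan2(n[:, 1], n[:, 0]) + np.pi       # azimuth in [0, 2*pi]
    egi, _, _ = np.histogram2d(theta, phi,
                               bins=[n_bins, 2 * n_bins],
                               range=[[0, np.pi], [0, 2 * np.pi]])
    return egi
```

    Because the EGI depends only on normal directions, it is invariant to translation, which is what lets the rotation and translation parts of the motion be treated separately.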

  20. Pointo - a Low Cost Solution to Point Cloud Processing

    NASA Astrophysics Data System (ADS)

    Houshiar, H.; Winkler, S.

    2017-11-01

    With advances in technology, access to data, especially 3D point cloud data, becomes more and more an everyday task. 3D point clouds are usually captured with very expensive tools such as 3D laser scanners or with very time-consuming methods such as photogrammetry. Most of the available software for 3D point cloud processing is designed for experts and specialists in this field and usually comes as a very large package containing a variety of methods and tools. The result is software that is expensive to acquire and difficult to use, the difficulty being caused by the complicated user interfaces required to accommodate a long list of features. The aim of such complex software is to provide a powerful tool for a specific group of specialists, but most of those features are not required by the majority of upcoming average users of point clouds. In addition to their complexity and high cost, these packages generally rely on expensive, modern hardware and are often compatible with only one specific operating system. Many point cloud customers are not point cloud processing experts and are unwilling to bear the high acquisition costs of such software and hardware. In this paper we introduce a solution for low-cost point cloud processing. Our approach is designed to accommodate the needs of the average point cloud user. To reduce cost and complexity, our approach focuses on one functionality at a time, in contrast with most available software and tools that aim to solve as many problems as possible at once. This simple, user-oriented design improves the user experience and allows us to optimize our methods to create efficient software. In this paper we introduce the Pointo family, a series of connected programs providing easy-to-use tools with a simple design for different point cloud processing requirements. 
PointoVIEWER and PointoCAD are introduced as the first components of the Pointo family, providing fast and efficient visualization with the ability to add annotations and documentation to the point clouds.

  1. High-Altitude Data Assimilation System Experiments for the Northern Summer Mesosphere Season of 2007

    DTIC Science & Technology

    2009-01-01

    rates (Eckermann et al., 2007) which are important for accurately modeling and assimilating temperature at these altitudes (e.g., Sassi et al., 2005...clouds and mesopause temperatures. Geophysical Research Letters 34, L24808. Webster, S., Brown, A.R., Cameron, D.R., Jones, C.P., 2003. Improvements to the

  2. 78 FR 68146 - Notice of Application for Approval of Discontinuance or Modification of a Railroad Signal System

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-13

    ..., GA 30309. NS seeks approval of the proposed discontinuance of Control Point (CP) Oak and the discontinuance of the traffic control system (TCS) between CP Maumee, Milepost (MP) DY 1.2/CD 287.65, and Stanley... discontinued on the Oakdale Connection Track between CP 286, MP XA 286.90/CD 286.75, and CP Oak, MP XA 287.80...

  3. Study of Huizhou architecture component point cloud in surface reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Runmei; Wang, Guangyin; Ma, Jixiang; Wu, Yulu; Zhang, Guangbin

    2017-06-01

    Surface reconstruction software has many problems, such as complicated operations on point cloud data, too many interaction definitions, and overly stringent requirements on input data; it has therefore not been widely popularized so far. This paper selects the unique chuandou wooden beam framework of Huizhou architecture as the research object and presents a complete implementation covering data acquisition, point cloud preprocessing, and finally surface reconstruction. Firstly, the acquired point cloud data are preprocessed, including segmentation and filtering. Secondly, the surface normals are deduced directly from the point cloud dataset. Finally, surface reconstruction is studied using the Greedy Projection Triangulation algorithm. Comparing the reconstructed model with those from three-dimensional surface reconstruction software, the results show that the proposed scheme is smoother, more time-efficient, and more portable.
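
    The normal-deduction step mentioned above is commonly done by PCA over local neighbourhoods: the eigenvector of the neighbourhood covariance with the smallest eigenvalue approximates the surface normal. A standard sketch (neighbourhood size is a hypothetical parameter; the paper does not state its exact method):

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=10):
    """Per-point normals from PCA of the k nearest neighbours.
    Sign of each normal is arbitrary without a viewpoint to orient to."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nb in enumerate(idx):
        q = points[nb] - points[nb].mean(axis=0)
        # eigh returns eigenvalues in ascending order; take the smallest
        _, vec = np.linalg.eigh(q.T @ q)
        normals[i] = vec[:, 0]
    return normals
```

    Consistent normal orientation (e.g. flipping toward the scanner position) is usually required before triangulation algorithms such as greedy projection can use these normals.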

  4. What's the Point of a Raster? Advantages of 3D Point Cloud Processing over Raster Based Methods for Accurate Geomorphic Analysis of High Resolution Topography.

    NASA Astrophysics Data System (ADS)

    Lague, D.

    2014-12-01

    High Resolution Topographic (HRT) datasets are predominantly stored and analyzed as 2D raster grids of elevations (i.e., Digital Elevation Models). Raster grid processing is common in GIS software and benefits from a large library of fast algorithms dedicated to geometrical analysis, drainage network computation and topographic change measurement. Yet, all instruments or methods currently generating HRT datasets (e.g., ALS, TLS, SFM, stereo satellite imagery) natively output 3D unstructured point clouds that are (i) non-regularly sampled, (ii) incomplete (e.g., submerged parts of river channels are rarely measured), and (iii) include 3D elements (e.g., vegetation, vertical features such as river banks or cliffs) that cannot be accurately described in a DEM. Interpolating the raw point cloud onto a 2D grid generally results in a loss of position accuracy and spatial resolution, and in more or less controlled interpolation. Here I demonstrate how studying earth surface topography and processes directly on native 3D point cloud datasets offers several advantages over raster based methods: point cloud methods preserve the accuracy of the original data, can better handle the evaluation of uncertainty associated with topographic change measurements and are more suitable for studying vegetation characteristics and steep features of the landscape. In this presentation, I will illustrate and compare point cloud based and raster based workflows with various examples involving ALS, TLS and SFM for the analysis of bank erosion processes in bedrock and alluvial rivers, rockfall statistics (including rockfall volume estimates directly from point clouds) and the interaction of vegetation/hydraulics and sedimentation in salt marshes. These workflows use two recently published algorithms for point cloud classification (CANUPO) and point cloud comparison (M3C2) now implemented in the open source software CloudCompare.

  5. Compression of 3D Point Clouds Using a Region-Adaptive Hierarchical Transform.

    PubMed

    De Queiroz, Ricardo; Chou, Philip A

    2016-06-01

    In free-viewpoint video, there is a recent trend to represent scene objects as solids rather than using multiple depth maps. Point clouds have been used in computer graphics for a long time, and with the recent possibility of real-time capturing and rendering, point clouds have been favored over meshes in order to save computation. Each point in the cloud is associated with its 3D position and its color. We devise a method to compress the colors in point clouds which is based on a hierarchical transform and arithmetic coding. The transform is a hierarchical sub-band transform that resembles an adaptive variation of a Haar wavelet. The arithmetic encoding of the coefficients assumes Laplace distributions, one per sub-band. The Laplace parameter for each distribution is transmitted to the decoder using a custom method. The geometry of the point cloud is encoded using the well-established octree scanning. Results show that the proposed solution performs comparably to the current state-of-the-art, on many occasions outperforming it, while being much more computationally efficient. We believe this work represents the state-of-the-art in intra-frame compression of point clouds for real-time 3D video.
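    The hierarchical sub-band transform described above can be illustrated with a toy 1D version. This is only a sketch under simplifying assumptions (a linear chain of occupied voxels instead of an octree, one color channel, no quantization or arithmetic coding); the function names are ours, not the paper's:

```python
import math

def raht_pair(v1, w1, v2, w2):
    # Weight-adaptive Haar butterfly: weights count the points already
    # merged into each coefficient, which is the "region-adaptive" part.
    s1, s2 = math.sqrt(w1), math.sqrt(w2)
    n = math.sqrt(w1 + w2)
    low = (s1 * v1 + s2 * v2) / n    # weighted average, kept for the next level
    high = (s1 * v2 - s2 * v1) / n   # detail coefficient, to be entropy-coded
    return low, high, w1 + w2

def raht_forward(values):
    """Toy 1D region-adaptive Haar transform over a chain of occupied voxels."""
    coeffs = []                           # high-pass (detail) coefficients
    level = [(float(v), 1.0) for v in values]  # (value, weight) per voxel
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            (v1, w1), (v2, w2) = level[i], level[i + 1]
            low, high, w = raht_pair(v1, w1, v2, w2)
            coeffs.append(high)
            nxt.append((low, w))
        if len(level) % 2:                # unpaired voxel is promoted unchanged
            nxt.append(level[-1])
        level = nxt
    coeffs.append(level[0][0])            # final DC term
    return coeffs
```

For a constant-color region every detail coefficient is zero, which is what makes the coefficients cheap to entropy-code; the transform is orthonormal at each stage, so signal energy is preserved.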

  6. Lattice QCD spectroscopy for hadronic CP violation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    de Vries, Jordy; Mereghetti, Emanuele; Seng, Chien-Yeah

    Here, the interpretation of nuclear electric dipole moment (EDM) experiments is clouded by large theoretical uncertainties associated with nonperturbative matrix elements. In various beyond-the-Standard Model scenarios nuclear and diamagnetic atomic EDMs are expected to be dominated by CP-violating pion–nucleon interactions that arise from quark chromo-electric dipole moments. The corresponding CP-violating pion–nucleon coupling strengths are, however, poorly known. In this work we propose a strategy to calculate these couplings by using spectroscopic lattice QCD techniques. Instead of directly calculating the pion–nucleon coupling constants, a challenging task, we use chiral symmetry relations that link the pion–nucleon couplings to nucleon sigma terms and mass splittings that are significantly easier to calculate. In this work, we show that these relations are reliable up to next-to-next-to-leading order in the chiral expansion in both SU(2) and SU(3) chiral perturbation theory. We conclude with a brief discussion about practical details regarding the required lattice QCD calculations and the phenomenological impact of an improved understanding of CP-violating matrix elements.

  7. Lattice QCD spectroscopy for hadronic CP violation

    DOE PAGES

    de Vries, Jordy; Mereghetti, Emanuele; Seng, Chien-Yeah; ...

    2017-01-16

    Here, the interpretation of nuclear electric dipole moment (EDM) experiments is clouded by large theoretical uncertainties associated with nonperturbative matrix elements. In various beyond-the-Standard Model scenarios nuclear and diamagnetic atomic EDMs are expected to be dominated by CP-violating pion–nucleon interactions that arise from quark chromo-electric dipole moments. The corresponding CP-violating pion–nucleon coupling strengths are, however, poorly known. In this work we propose a strategy to calculate these couplings by using spectroscopic lattice QCD techniques. Instead of directly calculating the pion–nucleon coupling constants, a challenging task, we use chiral symmetry relations that link the pion–nucleon couplings to nucleon sigma terms and mass splittings that are significantly easier to calculate. In this work, we show that these relations are reliable up to next-to-next-to-leading order in the chiral expansion in both SU(2) and SU(3) chiral perturbation theory. We conclude with a brief discussion about practical details regarding the required lattice QCD calculations and the phenomenological impact of an improved understanding of CP-violating matrix elements.

  8. TH-AB-202-08: A Robust Real-Time Surface Reconstruction Method On Point Clouds Captured From a 3D Surface Photogrammetry System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, W; Sawant, A; Ruan, D

    2016-06-15

    Purpose: Surface photogrammetry (e.g. VisionRT, C-Rad) provides a noninvasive way to obtain high-frequency measurements for patient motion monitoring in radiotherapy. This work aims to develop a real-time surface reconstruction method for the acquired point clouds, whose acquisitions are subject to noise and missing measurements. In contrast to existing surface reconstruction methods, which are usually computationally expensive, the proposed method reconstructs continuous surfaces with comparable accuracy in real-time. Methods: The key idea in our method is to solve and propagate a sparse linear relationship from the point cloud (measurement) manifold to the surface (reconstruction) manifold, taking advantage of the similarity in local geometric topology in both manifolds. With consistent point cloud acquisition, we propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, building the point correspondences by the iterative closest point (ICP) method. To accommodate changing noise levels and/or the presence of inconsistent occlusions, we further propose a modified sparse regression (MSR) model to account for the large and sparse error introduced by ICP, with a Laplacian prior. We evaluated our method on both clinically acquired point clouds under consistent conditions and simulated point clouds with inconsistent occlusions. The reconstruction accuracy was evaluated w.r.t. root-mean-squared error, by comparing the reconstructed surfaces against those from the variational reconstruction method. Results: On clinical point clouds, both the SR and MSR models achieved sub-millimeter accuracy, with mean reconstruction time reduced from 82.23 seconds to 0.52 seconds and 0.94 seconds, respectively. On simulated point clouds with inconsistent occlusions, the MSR model demonstrated its advantage in achieving consistent performance despite the introduced occlusions.
Conclusion: We have developed a real-time and robust surface reconstruction method for point clouds acquired by photogrammetry systems. It serves as an important enabling step for real-time motion tracking in radiotherapy. This work is supported in part by NIH grant R01 CA169102-02.

  9. FPFH-based graph matching for 3D point cloud registration

    NASA Astrophysics Data System (ADS)

    Zhao, Jiapeng; Li, Chen; Tian, Lihua; Zhu, Jihua

    2018-04-01

    Correspondence detection is a vital step in point cloud registration and helps to obtain a reliable initial alignment. In this paper, we put forward an advanced point-feature-based graph matching algorithm to solve the initial alignment problem of rigid 3D point cloud registration with partial overlap. Specifically, Fast Point Feature Histograms are first used to determine the initial candidate correspondences. Next, a new objective function is provided to make the graph matching more suitable for partially overlapping point clouds. The objective function is optimized by a simulated annealing algorithm to obtain the final group of correct correspondences. Finally, we present a novel set partitioning method which can transform the NP-hard optimization problem into an O(n^3)-solvable one. Experiments on the Stanford and UWA public data sets indicate that our method obtains better results in terms of both accuracy and time cost compared with other point cloud registration methods.
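    As a rough illustration of the optimization step, the following toy simulated-annealing search selects the subset of candidate correspondences that best preserves pairwise distances (a rigid-motion invariant). It is a stand-in for, not a reproduction of, the paper's objective function and set partitioning scheme; all names, the scoring rule, and the cooling parameters are illustrative:

```python
import math, random

def pair_score(c1, c2, src, dst, eps=0.1):
    # +1 if the two correspondences preserve pairwise distance, else -1.
    ds = math.dist(src[c1[0]], src[c2[0]])
    dd = math.dist(dst[c1[1]], dst[c2[1]])
    return 1 if abs(ds - dd) < eps else -1

def score(sel, cand, src, dst):
    idx = sorted(sel)
    return sum(pair_score(cand[i], cand[j], src, dst)
               for a, i in enumerate(idx) for j in idx[a + 1:])

def anneal_correspondences(cand, src, dst, steps=2000, t0=2.0, seed=0):
    """Toy simulated annealing over subsets of candidate correspondences."""
    rng = random.Random(seed)
    sel = set(range(len(cand)))          # start with every candidate selected
    s = score(sel, cand, src, dst)
    best, best_s = set(sel), s
    t = t0
    for _ in range(steps):
        k = rng.randrange(len(cand))     # propose flipping one correspondence
        sel ^= {k}
        s2 = score(sel, cand, src, dst)
        if s2 >= s or rng.random() < math.exp((s2 - s) / t):
            s = s2                       # accept the move (Metropolis rule)
            if s > best_s:
                best, best_s = set(sel), s
        else:
            sel ^= {k}                   # reject: undo the flip
        t *= 0.995                       # geometric cooling schedule
    # Greedy polish: keep flipping while any single flip improves the score.
    improved = True
    while improved:
        improved = False
        for k in range(len(cand)):
            best ^= {k}
            s2 = score(best, cand, src, dst)
            if s2 > best_s:
                best_s, improved = s2, True
            else:
                best ^= {k}
    return best, best_s
```

With a handful of true matches plus a couple of outliers, the search keeps the geometrically consistent subset; the final greedy pass guarantees the returned subset is at least a local optimum regardless of the cooling schedule.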

  10. High Definition Clouds and Precipitation for advancing Climate Prediction (HD(CP)2): Large Eddy Simulation Study Over Germany

    NASA Astrophysics Data System (ADS)

    Dipankar, A.; Stevens, B. B.; Zängl, G.; Pondkule, M.; Brdar, S.

    2014-12-01

    The effect of clouds on large scale dynamics is represented in climate models through the parameterization of various processes, of which the parameterizations of shallow and deep convection are particularly uncertain. The atmospheric boundary layer, which controls the coupling to the surface, and which defines the scale of shallow convection, is typically 1 km in depth. Thus, simulations on an O(100 m) grid largely obviate the need for such parameterizations. By crossing this threshold of O(100 m) grid resolution one can begin thinking of large-eddy simulation (LES), wherein the sub-grid scale parameterizations have a sounder theoretical foundation. Substantial initiatives have been taken internationally to approach this threshold. For example, Miura et al., 2007 and Mirakawa et al., 2014 approach this threshold by doing global simulations, with (gradually) decreasing grid resolution, to understand the effect of cloud-resolving scales on the general circulation. Our strategy, on the other hand, is to take a big leap forward by fixing the resolution at O(100 m) and gradually increasing the domain size. We believe that breaking this threshold would greatly help in improving the parameterization schemes and reducing the uncertainty in climate predictions. To take this forward, the German Federal Ministry of Education and Research has initiated a project on HD(CP)2 that aims for a limited-area LES at resolution O(100 m) using the new unified modeling system ICON (Zängl et al., 2014). In the talk, results from the HD(CP)2 evaluation simulation will be shown that target a high resolution simulation over a small domain around Jülich, Germany. This site was chosen because the high resolution HD(CP)2 Observational Prototype Experiment took place in this region from 1.04.2013 to 31.05.2013, in order to critically evaluate the model. 
The nesting capability of ICON is used to gradually increase the resolution from the outermost domain, which is forced from the COSMO-DE data, to the innermost and finest resolution domain centered around Jülich (see Fig. 1, top panel). Furthermore, detailed analyses of the simulation results against the observation data will be presented. A representative figure showing the time series of column integrated water vapor (IWV) for both model and observation on 24.04.2013 is shown in the bottom panel of Fig. 1.

  11. Smart Point Cloud: Definition and Remaining Challenges

    NASA Astrophysics Data System (ADS)

    Poux, F.; Hallot, P.; Neuville, R.; Billen, R.

    2016-10-01

    Dealing with coloured point clouds acquired from terrestrial laser scanners, this paper identifies remaining challenges for a new data structure: the smart point cloud. This concept arises from the statement that massive and discretized spatial information from active remote sensing technology is often underused due to data mining limitations. The generalisation of point cloud data, associated with the heterogeneity and temporality of such datasets, is the main issue regarding structure, segmentation, classification, and interaction for an immediate understanding. We propose to use both point cloud properties and human knowledge through machine learning to rapidly extract pertinent information, using user-centered information (smart data) rather than raw data. Feature detection, machine learning frameworks, and database systems indexed both for mining queries and data visualisation are reviewed. Based on existing approaches, we propose a new flexible three-block framework around device expertise, analytic expertise and domain-based reflexion. This contribution serves as the first step towards the realisation of a comprehensive smart point cloud data structure.

  12. Motion-Compensated Compression of Dynamic Voxelized Point Clouds.

    PubMed

    De Queiroz, Ricardo L; Chou, Philip A

    2017-05-24

    Dynamic point clouds are a potential new frontier in visual communication systems. A few articles have addressed the compression of point clouds, but very few references exist on exploring temporal redundancies. This paper presents a novel motion-compensated approach to encoding dynamic voxelized point clouds at low bit rates. A simple coder breaks the voxelized point cloud at each frame into blocks of voxels. Each block is either encoded in intra-frame mode or is replaced by a motion-compensated version of a block in the previous frame. The decision is optimized in a rate-distortion sense. In this way, both the geometry and the color are encoded with distortion, allowing for reduced bit-rates. In-loop filtering is employed to minimize compression artifacts caused by distortion in the geometry information. Simulations reveal that this simple motion compensated coder can efficiently extend the compression range of dynamic voxelized point clouds to rates below what intra-frame coding alone can accommodate, trading rate for geometry accuracy.
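    The per-block decision "optimized in a rate-distortion sense" is conventionally a Lagrangian cost comparison between the two modes. A minimal sketch with illustrative numbers; the paper's actual distortion and rate models are not reproduced here:

```python
def rd_mode_decision(blocks, lam):
    """For each block, choose the mode minimising J = D + lambda * R.

    `blocks` is a list of dicts holding distortion (D) and rate (R)
    estimates for both modes; the field names and values are illustrative.
    """
    modes = []
    for b in blocks:
        j_intra = b["intra_d"] + lam * b["intra_r"]   # cost of intra coding
        j_inter = b["inter_d"] + lam * b["inter_r"]   # cost of motion compensation
        modes.append("intra" if j_intra <= j_inter else "inter")
    return modes
```

Raising `lam` penalises rate more heavily, pushing blocks toward the cheaper motion-compensated mode; lowering it favours the higher-fidelity intra mode.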

  13. Solubilization of phenanthrene above cloud point of Brij 30: a new application in biodegradation.

    PubMed

    Pantsyrnaya, T; Delaunay, S; Goergen, J L; Guseva, E; Boudrant, J

    2013-06-01

    In the present study a new application of the solubilization of phenanthrene above the cloud point of Brij 30 in biodegradation was developed. It was shown that temporary solubilization of phenanthrene above the cloud point of Brij 30 (5 wt%) made it possible to obtain a stable increase in the solubility of phenanthrene even when the temperature was decreased to the culture conditions of the microorganism used, Pseudomonas putida (28°C). A higher initial concentration of soluble phenanthrene was obtained after the cloud point treatment: 200 against 120 μM without treatment. All soluble phenanthrene was metabolized, and a higher final concentration of its major metabolite, 1-hydroxy-2-naphthoic acid (160 against 85 μM), was measured in the culture medium in the case of a preliminary cloud point treatment. Therefore temporary solubilization at the cloud point may have prospective application in the enhancement of the biodegradation of polycyclic aromatic hydrocarbons.

  14. A portable low-cost 3D point cloud acquiring method based on structure light

    NASA Astrophysics Data System (ADS)

    Gui, Li; Zheng, Shunyi; Huang, Xia; Zhao, Like; Ma, Hao; Ge, Chao; Tang, Qiuxia

    2018-03-01

    A fast and low-cost method of acquiring 3D point cloud data is proposed in this paper, which solves the problems of lacking texture information and low efficiency when acquiring point cloud data with only one pair of cheap cameras and a projector. Firstly, we put forward a scene-adaptive design method for a random encoding pattern; that is, a coding pattern is projected onto the target surface in order to form texture information that is favorable for image matching. Subsequently, we design an efficient dense matching algorithm that fits the projected texture. After global algorithm optimization and multi-kernel parallel development with the fusion of hardware and software, a fast point cloud data acquisition system is accomplished. Evaluation of point cloud accuracy shows that the point clouds acquired by the proposed method have higher precision. What's more, the scanning speed meets the demands of dynamic scenes and has good practical application value.
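    Once dense matching has paired pixels between the two calibrated, rectified cameras, depth recovery reduces to standard stereo triangulation. This is textbook geometry rather than the paper's specific calibration pipeline, and all numeric intrinsics below are illustrative:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Rectified-stereo triangulation: Z = f * B / d.

    focal_px     -- focal length in pixels
    baseline_m   -- distance between the two cameras, in metres
    disparity_px -- horizontal offset of a matched pixel pair
    """
    if disparity_px <= 0:
        raise ValueError("matched point must have positive disparity")
    return focal_px * baseline_m / disparity_px

def pixel_to_point(u, v, disparity_px, focal_px, baseline_m, cx, cy):
    """Back-project one matched pixel (u, v) to a 3D point (X, Y, Z)."""
    z = depth_from_disparity(focal_px, baseline_m, disparity_px)
    return ((u - cx) * z / focal_px, (v - cy) * z / focal_px, z)
```

Running this over every matched pixel yields the raw point cloud; the projected random pattern exists precisely to make the disparity map dense on otherwise textureless surfaces.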

  15. Physicochemical perspectives (aggregation, structure and dynamics) of interaction between pluronic (L31) and surfactant (SDS).

    PubMed

    Prameela, G K S; Phani Kumar, B V N; Pan, A; Aswal, V K; Subramanian, J; Mandal, A B; Moulik, S P

    2015-11-11

    The influence of the water soluble non-ionic tri-block copolymer PEO-PPO-PEO [poly(ethylene oxide)-poly(propylene oxide)-poly(ethylene oxide)] i.e., E2P16E2 (L31) on the microstructure and self-aggregation dynamics of the anionic surfactant sodium dodecylsulfate (SDS) in aqueous solution was investigated using cloud point (CP), isothermal titration calorimetry (ITC), high resolution nuclear magnetic resonance (NMR), electron paramagnetic resonance (EPR), and small-angle neutron scattering (SANS) measurements. CP provided the thermodynamic information on the Gibbs free energy, enthalpy, entropy and heat capacity changes pertaining to the phase separation of the system at elevated temperature. The ITC and NMR self-diffusion measurements helped to understand the nature of the binding isotherms of SDS in the presence of L31 in terms of the formation of mixed aggregates and free SDS micelles in solution. EPR analysis provided the micro-viscosity of the spin probe 5-DSA in terms of rotational correlation time. The SANS study indicated the presence of prolate ellipsoidal mixed aggregates, whose size increased with the increasing addition of L31. At a large [L31], SANS also revealed the progressive decreasing size of the ellipsoidal mixed aggregates of SDS-L31 into nearly globular forms with the increasing SDS addition. Wrapping of the spherical SDS micelles by L31 was also corroborated from (13)C NMR and SANS measurements.

  16. 78 FR 25344 - Notice of Application for Approval of Discontinuance or Modification of a Railroad Signal System

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-30

    ... Point (CP) South Olean at Milepost (MP) BR73.0 up to and including CP North Olean at MP BR66.49 on WNYP... Buffalo Line at grade at CP Olean in Olean, NY. All power-operated switches in the application area will...

  17. Reproducibility of Tactile Assessments for Children with Unilateral Cerebral Palsy

    ERIC Educational Resources Information Center

    Auld, Megan Louise; Ware, Robert S.; Boyd, Roslyn Nancy; Moseley, G. Lorimer; Johnston, Leanne Marie

    2012-01-01

    A systematic review identified tactile assessments used in children with cerebral palsy (CP), but their reproducibility is unknown. Sixteen children with unilateral CP and 31 typically developing children (TDC) were assessed 2-4 weeks apart. Test-retest percent agreements within one point for children with unilateral CP (and TDC) were…

  18. Joint classification and contour extraction of large 3D point clouds

    NASA Astrophysics Data System (ADS)

    Hackel, Timo; Wegner, Jan D.; Schindler, Konrad

    2017-08-01

    We present an effective and efficient method for point-wise semantic classification and extraction of object contours of large-scale 3D point clouds. What makes point cloud interpretation challenging is the sheer size of several millions of points per scan and the non-grid, sparse, and uneven distribution of points. Standard image processing tools like texture filters, for example, cannot handle such data efficiently, which calls for dedicated point cloud labeling methods. It turns out that one of the major drivers for efficient computation and handling of strong variations in point density is a careful formulation of per-point neighborhoods at multiple scales. This allows both to define an expressive feature set and to extract topologically meaningful object contours. Semantic classification and contour extraction are interlaced problems. Point-wise semantic classification enables extracting a meaningful candidate set of contour points, while contours help generate a rich feature representation that benefits point-wise classification. These methods are tailored to have fast run time and a small memory footprint for processing large-scale, unstructured, and inhomogeneous point clouds, while still achieving high classification accuracy. We evaluate our methods on the semantic3d.net benchmark for terrestrial laser scans with >10^9 points.

  19. Point clouds segmentation as base for as-built BIM creation

    NASA Astrophysics Data System (ADS)

    Macher, H.; Landes, T.; Grussenmeyer, P.

    2015-08-01

    In this paper, a three-step segmentation approach is proposed in order to create 3D models from point clouds acquired by TLS inside buildings. The three scales of segmentation are floors, rooms and the planes composing the rooms. First, floor segmentation is performed based on an analysis of the point distribution along the Z axis. Then, for each floor, room segmentation is achieved considering a slice of the point cloud at ceiling level. Finally, planes are segmented for each room, and the planes corresponding to ceilings and floors are identified. The results of each step are analysed and potential improvements are proposed. Based on the segmented point clouds, the creation of as-built BIM is considered in a future work section. Not only is the classification of planes into several categories proposed, but the potential use of point clouds acquired outside buildings is also considered.
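    The first step, floor segmentation from the point distribution along the Z axis, can be sketched as a peak search in a Z histogram: slab-like concentrations of points (floors and ceilings) show up as bins far denser than their neighbours. This is a simplified stand-in for the authors' method, with bin size and thresholds chosen arbitrarily:

```python
from collections import Counter

def z_histogram_peaks(z_values, bin_size=0.1, min_count=3):
    """Return Z levels (in metres) where points pile up into horizontal slabs."""
    bins = Counter(round(z / bin_size) for z in z_values)
    peaks = []
    for b, c in bins.items():
        # A peak bin must be dense enough and at least as dense as its neighbours.
        if c >= min_count and c >= bins.get(b - 1, 0) and c >= bins.get(b + 1, 0):
            peaks.append(b * bin_size)
    return sorted(peaks)
```

Points belonging to walls spread thinly over many Z bins and therefore never reach the peak threshold, which is what separates floor/ceiling slabs from vertical structure.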

  20. High-Precision Registration of Point Clouds Based on Sphere Feature Constraints.

    PubMed

    Huang, Junhui; Wang, Zhao; Gao, Jianmin; Huang, Youping; Towers, David Peter

    2016-12-30

    Point cloud registration is a key process in multi-view 3D measurements. Its precision directly affects the measurement precision. However, for point clouds with non-overlapping areas or curvature-invariant surfaces, it is difficult to achieve high precision. A high-precision registration method based on sphere feature constraints is presented in this paper to overcome this difficulty. Known sphere features with constraints are used to construct virtual overlapping areas. The virtual overlapping areas provide more accurate corresponding point pairs and reduce the influence of noise. Then the transformation parameters between the registered point clouds are solved by an optimization method with a weight function. In this way, the impact of large noise in the point clouds can be reduced and high-precision registration is achieved. Simulations and experiments validate the proposed method.

  1. High-Precision Registration of Point Clouds Based on Sphere Feature Constraints

    PubMed Central

    Huang, Junhui; Wang, Zhao; Gao, Jianmin; Huang, Youping; Towers, David Peter

    2016-01-01

    Point cloud registration is a key process in multi-view 3D measurements. Its precision directly affects the measurement precision. However, for point clouds with non-overlapping areas or curvature-invariant surfaces, it is difficult to achieve high precision. A high-precision registration method based on sphere feature constraints is presented in this paper to overcome this difficulty. Known sphere features with constraints are used to construct virtual overlapping areas. The virtual overlapping areas provide more accurate corresponding point pairs and reduce the influence of noise. Then the transformation parameters between the registered point clouds are solved by an optimization method with a weight function. In this way, the impact of large noise in the point clouds can be reduced and high-precision registration is achieved. Simulations and experiments validate the proposed method. PMID:28042846

  2. Biotoxicity and bioavailability of hydrophobic organic compounds solubilized in nonionic surfactant micelle phase and cloud point system.

    PubMed

    Pan, Tao; Liu, Chunyan; Zeng, Xinying; Xin, Qiao; Xu, Meiying; Deng, Yangwu; Dong, Wei

    2017-06-01

    A recent work has shown that hydrophobic organic compounds solubilized in the micelle phase of some nonionic surfactants present substrate toxicity to microorganisms with increasing bioavailability. However, in cloud point systems, biotoxicity is prevented, because the compounds are solubilized into a coacervate phase, thereby leaving a fraction of compounds with cells in a dilute phase. This study extends the understanding of the relationship between substrate toxicity and bioavailability of hydrophobic organic compounds solubilized in nonionic surfactant micelle phase and cloud point system. Biotoxicity experiments were conducted with naphthalene and phenanthrene in the presence of mixed nonionic surfactants Brij30 and TMN-3, which formed a micelle phase or cloud point system at different concentrations. Saccharomyces cerevisiae, unable to degrade these compounds, was used for the biotoxicity experiments. Glucose in the cloud point system was consumed faster than in the nonionic surfactant micelle phase, indicating that the solubilized compounds had increased toxicity to cells in the nonionic surfactant micelle phase. The results were verified by subsequent biodegradation experiments. The compounds were degraded faster by PAH-degrading bacterium in the cloud point system than in the micelle phase. All these results showed that biotoxicity of the hydrophobic organic compounds increases with bioavailability in the surfactant micelle phase but remains at a low level in the cloud point system. These results provide a guideline for the application of cloud point systems as novel media for microbial transformation or biodegradation.

  3. Nicotine induced CpG methylation of Pax6 binding motif in StAR promoter reduces the gene expression and cortisol production

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Tingting; Department of Pharmacology, Uniformed Services University of the Health Sciences, Bethesda, Maryland; Chen, Man

    Steroidogenic acute regulatory protein (StAR) mediates the rate-limiting step in the synthesis of steroid hormones, essential to fetal development. We have reported that StAR expression in the fetal adrenal is inhibited in a rat model of nicotine-induced intrauterine growth retardation (IUGR). Here, using primary human fetal adrenal cortex (pHFAC) cells and a human fetal adrenal cell line, NCI-H295A, we show that nicotine inhibits StAR expression and cortisol production in a dose- and time-dependent manner, and prolongs the inhibitory effect on cells proliferating over 5 passages after termination of nicotine treatment. Methylation detection within the StAR promoter region uncovers a single-site CpG methylation at nt -377 that is sensitive to nicotine treatment. Nicotine-induced alterations in the frequency of this point methylation correlate well with the levels of StAR expression, suggesting an important role of this single site in regulating StAR expression. Further studies using bioinformatics analysis and an siRNA approach reveal that the single CpG site is part of the Pax6 binding motif (CGCCTGA) in the StAR promoter. Luciferase activity assays validate that Pax6 increases StAR gene expression by binding to the glucagon G3-like motif (CGCCTGA) and that methylation of this site blocks Pax6 binding and thus suppresses StAR expression. These data identify a nicotine-sensitive CpG site at the Pax6 binding motif in the StAR promoter that may play a central role in regulating StAR expression. The results suggest an epigenetic mechanism that may explain how nicotine contributes to the onset of adult diseases or disorders such as metabolic syndrome via fetal programming. Highlights: Nicotine-induced StAR inhibition in two human adrenal cell models. Nicotine-induced single CpG site methylation in the StAR promoter. Persistent StAR inhibition and single CpG methylation after nicotine termination. Single CpG methylation located at the Pax6 binding motif regulates StAR expression.

  4. Filtering Photogrammetric Point Clouds Using Standard LIDAR Filters Towards DTM Generation

    NASA Astrophysics Data System (ADS)

    Zhang, Z.; Gerke, M.; Vosselman, G.; Yang, M. Y.

    2018-05-01

    Digital Terrain Models (DTMs) can be generated from point clouds acquired by laser scanning or photogrammetric dense matching. During the last two decades, much effort has been devoted to developing robust filtering algorithms for airborne laser scanning (ALS) data. With the quality of point clouds from dense image matching (DIM) getting better and better, the research question that arises is whether those standard Lidar filters can be used to filter photogrammetric point clouds as well. Experiments were carried out to filter two dense matching point clouds with different noise levels. Results show that the standard Lidar filter is robust to random noise. However, artefacts and blunders in the DIM points often appear due to low contrast or poor texture in the images, and filtering is erroneous in these locations. Filtering DIM points pre-processed by a ranking filter brings a higher Type II error (i.e. non-ground points labelled as ground points) but a much lower Type I error (i.e. bare ground points labelled as non-ground points). Finally, the potential DTM accuracy that can be achieved from DIM points is evaluated. Two DIM point clouds derived by Pix4Dmapper and SURE are compared. On grassland, dense matching generates points above the true terrain surface, which results in incorrectly elevated DTMs. The application of the ranking filter leads to a reduced bias in the DTM height, but a slightly increased noise level.
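    The Type I / Type II error rates used in the evaluation follow directly from the definitions in the abstract and can be computed from any labelled comparison. A small helper; the label strings are ours:

```python
def filter_errors(truth, pred):
    """Type I: bare ground labelled non-ground; Type II: non-ground labelled ground.

    truth, pred -- parallel lists of "ground" / "nonground" labels per point.
    Returns (type_i_rate, type_ii_rate).
    """
    t1 = sum(1 for t, p in zip(truth, pred) if t == "ground" and p == "nonground")
    t2 = sum(1 for t, p in zip(truth, pred) if t == "nonground" and p == "ground")
    n_ground = sum(1 for t in truth if t == "ground")
    n_nonground = len(truth) - n_ground
    return t1 / n_ground, t2 / n_nonground
```

The abstract's observation, that the ranking filter trades a lower Type I rate for a higher Type II rate, corresponds to the first returned value dropping while the second rises.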

  5. Estimating the Critical Point of Crowding in the Emergency Department for the Warning System

    NASA Astrophysics Data System (ADS)

    Chang, Y.; Pan, C.; Tseng, C.; Wen, J.

    2011-12-01

    The purpose of this study is to deduce a function from the admission/discharge rates of patient flow to estimate a "Critical Point" that provides a reference for warning systems in regards to crowding in the emergency department (ED) of a hospital or medical clinic. In this study, a model of "Input-Throughput-Output" was used in our established mathematical function to evaluate the critical point. The function is defined as dPin/dt = dPwait/dt + Cp×B + dPout/dt, where Pin = number of registered patients, Pwait = number of waiting patients, Cp = retention rate per bed (calculated for the critical point), B = number of licensed beds in the treatment area, and Pout = number of patients discharged from the treatment area. Using the average Cp of ED crowding, we could start the warning system at an appropriate time and then plan the necessary emergency response to facilitate the patient process more smoothly. It was concluded that ED crowding could be quantified using the average value of Cp and that this value could be used as a reference for medical staff to give optimal emergency medical treatment to patients. Therefore, additional practical work should be launched to collect more precise quantitative data.
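    Rearranging the flow-balance function above gives the retention rate per bed directly. A minimal sketch using discrete rate estimates over an observation interval (variable names follow the abstract; the numeric inputs are illustrative):

```python
def critical_point_cp(reg_rate, wait_rate, out_rate, beds):
    """Retention rate per bed, Cp, from the ED flow balance
    dPin/dt = dPwait/dt + Cp*B + dPout/dt (symbols as in the abstract).

    reg_rate  -- dPin/dt, newly registered patients per hour
    wait_rate -- dPwait/dt, change in waiting patients per hour
    out_rate  -- dPout/dt, patients discharged from treatment per hour
    beds      -- B, licensed beds in the treatment area
    """
    return (reg_rate - wait_rate - out_rate) / beds
```

Averaging Cp over successive intervals gives the reference value the abstract proposes for triggering the crowding warning system.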

  6. Impact of Surface Active Ionic Liquids on the Cloud Points of Nonionic Surfactants and the Formation of Aqueous Micellar Two-Phase Systems.

    PubMed

    Vicente, Filipa A; Cardoso, Inês S; Sintra, Tânia E; Lemus, Jesus; Marques, Eduardo F; Ventura, Sónia P M; Coutinho, João A P

    2017-09-21

    Aqueous micellar two-phase systems (AMTPS) hold a large potential for cloud point extraction of biomolecules but are as yet poorly studied and characterized, with few phase diagrams reported for these systems, hence limiting their use in extraction processes. This work reports a systematic investigation of the effect of different surface-active ionic liquids (SAILs)-covering a wide range of molecular properties-upon the clouding behavior of three nonionic Tergitol surfactants. Two different effects of the SAILs on the cloud points and mixed micelle size have been observed: ILs with a more hydrophilic character and lower critical packing parameter (CPP < 1/2) lead to the formation of smaller micelles and concomitantly increase the cloud points; in contrast, ILs with a more hydrophobic character and higher CPP (CPP ≥ 1) induce significant micellar growth and a decrease in the cloud points. The latter effect is particularly interesting and unusual, since it was previously accepted that cloud point reduction is induced only by inorganic salts. The effects of nonionic surfactant concentration, SAIL concentration, pH, and micelle ζ potential are also studied and rationalized.
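
    The critical packing parameter used above is conventionally defined as CPP = v / (a0 · lc), with v the surfactant tail volume, a0 the head-group area and lc the tail length; a one-line sketch (the numeric values below are illustrative, not measurements from this work):

```python
def critical_packing_parameter(v_tail, a0_head, l_tail):
    """CPP = v / (a0 * lc).  In the convention cited in the abstract,
    CPP < 1/2 favors small micelles, CPP >= 1 favors flat/inverted structures."""
    return v_tail / (a0_head * l_tail)
```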

  7. Point Cloud Management Through the Realization of the Intelligent Cloud Viewer Software

    NASA Astrophysics Data System (ADS)

    Costantino, D.; Angelini, M. G.; Settembrini, F.

    2017-05-01

    The paper presents a software package dedicated to the elaboration of point clouds, called Intelligent Cloud Viewer (ICV), made in-house by AESEI software (Spin-Off of Politecnico di Bari), which allows viewing point clouds of several tens of millions of points, even on systems without very high performance. The elaborations are carried out on the whole point cloud, while only part of it is displayed in order to speed up rendering. It is designed for 64-bit Windows, is fully written in C++, and integrates different specialized modules for computer graphics (Open Inventor by SGI, Silicon Graphics Inc.), maths (BLAS, EIGEN), computational geometry (CGAL, Computational Geometry Algorithms Library), registration and advanced algorithms for point clouds (PCL, Point Cloud Library), advanced data structures (BOOST, Basic Object Oriented Supporting Tools), etc. ICV incorporates a number of features such as cropping, transformation and georeferencing, matching, registration, decimation, sections, distance calculation between clouds, etc. It has been tested on photographic and TLS (Terrestrial Laser Scanner) data, obtaining satisfactory results. The potential of the software was tested by carrying out a photogrammetric survey of Castel del Monte, for which a previous laser scanner survey made from the ground by the same authors was already available. For the aerial photogrammetric survey, a flight height of approximately 1000 ft AGL (Above Ground Level) was adopted and, overall, more than 800 photos were acquired in just over 15 minutes, with coverage of not less than 80% at a planned speed of about 90 knots.

  8. Design of relative motion and attitude profiles for three-dimensional resident space object imaging with a laser rangefinder

    NASA Astrophysics Data System (ADS)

    Nayak, M.; Beck, J.; Udrea, B.

    This paper focuses on the aerospace application of a single beam laser rangefinder (LRF) for 3D imaging, shape detection, and reconstruction in the context of a space-based space situational awareness (SSA) mission scenario. The primary limitation to 3D imaging from LRF point clouds is the one-dimensional nature of the single beam measurements. A method that combines relative orbital motion and scanning attitude motion to generate point clouds has been developed, and the design and characterization of multiple relative motion and attitude maneuver profiles are presented. The target resident space object (RSO) has the shape of a generic telecommunications satellite. The shape and attitude of the RSO are unknown to the chaser satellite; however, it is assumed that the RSO is un-cooperative and has fixed inertial pointing. All sensors in the metrology chain are assumed ideal. A previous study by the authors used pure Keplerian motion to perform a similar 3D imaging mission at an asteroid. A new baseline for proximity operations maneuvers for LRF scanning, based on a waypoint adaptation of the Hill-Clohessy-Wiltshire (HCW) equations, is examined. Propellant expenditure for each waypoint profile is discussed, and combinations of relative motion and attitude maneuvers that minimize the propellant used to achieve a minimum required point cloud density are studied. Both LRF strike-point coverage and point cloud density are maximized; the capability for 3D shape registration and reconstruction from point clouds generated with a single beam LRF without catalog comparison is proven. Next, a method of using edge detection algorithms to process a point cloud into a 3D modeled image containing reconstructed shapes is presented. Weighted accuracy of edge reconstruction with respect to the true model is used to calculate a qualitative "metric" that evaluates effectiveness of coverage. Both the edge recognition algorithms and the metric are independent of point cloud density; therefore they are utilized to compare the quality of point clouds generated by various attitude and waypoint command profiles. The RSO model incorporates diverse irregular protruding shapes, such as open sensor covers, instrument pods and solar arrays, to test the limits of the algorithms. This analysis is used to mathematically prove that point clouds generated by a single-beam LRF can achieve sufficient edge recognition accuracy for SSA applications, with meaningful shape information extractable even from sparse point clouds. For all command profiles, reconstructions of RSO shapes from the point clouds generated with the proposed method are compared to the truth model and conclusions are drawn regarding their fidelity.

  9. An Efficient Method to Create Digital Terrain Models from Point Clouds Collected by Mobile LiDAR Systems

    NASA Astrophysics Data System (ADS)

    Gézero, L.; Antunes, C.

    2017-05-01

    Digital terrain models (DTM) assume an essential role in all types of road maintenance, water supply and sanitation projects. The demand for such information is more significant in developing countries, where the lack of infrastructure is greater. In recent years, the use of Mobile LiDAR Systems (MLS) has proved to be a very efficient technique for the acquisition of precise and dense point clouds. These point clouds can be a solution for obtaining the data needed to produce DTMs in remote areas, due mainly to the safety, precision, speed of acquisition and the detail of the information gathered. However, point cloud filtering and algorithms to separate "terrain points" from "non-terrain points", quickly and consistently, remain a challenge that has caught the interest of researchers. This work presents a method to create a DTM from point clouds collected by MLS. The method is based on two sequential steps. The first step reduces the point cloud to a set of points that represent the terrain's shape, with the distance between points inversely proportional to the terrain variation. The second step is based on the Delaunay triangulation of the points resulting from the first step. The achieved results encourage a wider use of this technology as a solution for large-scale DTM production in remote areas.
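
    The second step above, triangulating the reduced terrain points into a surface that can be sampled as a DTM, can be sketched with scipy; the four sample points are hypothetical, not data from the paper:

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

# Hypothetical terrain points (x, y, z) surviving the reduction step.
terrain = np.array([[0.0, 0.0, 0.0],
                    [1.0, 0.0, 1.0],
                    [0.0, 1.0, 1.0],
                    [1.0, 1.0, 2.0]])

tin = Delaunay(terrain[:, :2])                  # 2.5D triangulated irregular network
dtm = LinearNDInterpolator(tin, terrain[:, 2])  # linear interpolation over the TIN
z_center = dtm(np.array([[0.5, 0.5]]))[0]       # DTM height sampled at (0.5, 0.5)
```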

  10. Evaluation of terrestrial photogrammetric point clouds derived from thermal imagery

    NASA Astrophysics Data System (ADS)

    Metcalf, Jeremy P.; Olsen, Richard C.

    2016-05-01

    Computer vision and photogrammetric techniques have been widely applied to digital imagery producing high density 3D point clouds. Using thermal imagery as input, the same techniques can be applied to infrared data to produce point clouds in 3D space, providing surface temperature information. The work presented here is an evaluation of the accuracy of 3D reconstruction of point clouds produced using thermal imagery. An urban scene was imaged over an area at the Naval Postgraduate School, Monterey, CA, viewing from above as with an airborne system. Terrestrial thermal and RGB imagery were collected from a rooftop overlooking the site using a FLIR SC8200 MWIR camera and a Canon T1i DSLR. In order to spatially align each dataset, ground control points were placed throughout the study area using Trimble R10 GNSS receivers operating in RTK mode. Each image dataset is processed to produce a dense point cloud for 3D evaluation.

  11. 78 FR 47822 - Notice of Application for Approval of Discontinuance or Modification of a Railroad Signal System

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-06

    ..., San Pedro, CA 90731. POLA seeks approval of the proposed discontinuance of Control Point (CP) Transfer Junction at Milepost 1.2 on the Pacific Harbor Line, San Pedro Subdivision. CP Transfer Junction will... discontinuance of CP Transfer Junction. A copy of the petition, as well as any written communications concerning...

  12. Effect of target color and scanning geometry on terrestrial LiDAR point-cloud noise and plane fitting

    NASA Astrophysics Data System (ADS)

    Bolkas, Dimitrios; Martinez, Aaron

    2018-01-01

    Point-cloud coordinate information derived from terrestrial Light Detection And Ranging (LiDAR) is important for several applications in surveying and civil engineering. Plane fitting and segmentation of target-surfaces is an important step in several applications such as in the monitoring of structures. Reliable parametric modeling and segmentation relies on the underlying quality of the point-cloud. Therefore, understanding how point-cloud errors affect fitting of planes and segmentation is important. Point-cloud intensity, which accompanies the point-cloud data, often goes hand-in-hand with point-cloud noise. This study uses industrial particle boards painted with eight different colors (black, white, grey, red, green, blue, brown, and yellow) and two different sheens (flat and semi-gloss) to explore how noise and plane residuals vary with scanning geometry (i.e., distance and incidence angle) and target-color. Results show that darker colors, such as black and brown, can produce point clouds that are several times noisier than bright targets, such as white. In addition, semi-gloss targets manage to reduce noise in dark targets by about 2-3 times. The study of plane residuals with scanning geometry reveals that, in many of the cases tested, residuals decrease with increasing incidence angles, which can assist in understanding the distribution of plane residuals in a dataset. Finally, a scheme is developed to derive survey guidelines based on the data collected in this experiment. Three examples demonstrate that users should consider instrument specification, required precision of plane residuals, required point-spacing, target-color, and target-sheen, when selecting scanning locations. Outcomes of this study can aid users to select appropriate instrumentation and improve planning of terrestrial LiDAR data-acquisition.
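
    The plane fitting whose residuals are studied above can be sketched as a total-least-squares fit via SVD; this is a generic formulation, not the authors' exact procedure, and the sample points are invented:

```python
import numpy as np

def plane_residuals(points):
    """Fit the best plane to (N, 3) points by total least squares (SVD)
    and return each point's signed orthogonal residual; the residual
    standard deviation is a common proxy for point-cloud noise."""
    pts = np.asarray(points, float)
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]               # direction of least variance = plane normal
    return centered @ normal
```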

  13. Point Cloud Classification of Tesserae from Terrestrial Laser Data Combined with Dense Image Matching for Archaeological Information Extraction

    NASA Astrophysics Data System (ADS)

    Poux, F.; Neuville, R.; Billen, R.

    2017-08-01

    Reasoning from information extraction by point cloud data mining allows contextual adaptation and fast decision making. However, to achieve this perceptive level, a point cloud must be semantically rich, retaining the information relevant to the end user. This paper presents an automatic knowledge-based method for pre-processing multi-sensory data and classifying a hybrid point cloud from both terrestrial laser scanning and dense image matching. Using 18 features, including sensor bias data, each tessera in the high-density point cloud from the 3D captured complex mosaics of Germigny-des-Prés (France) is segmented via a colour-based multi-scale abstraction that extracts connectivity. A 2D surface and outline polygon of each tessera is generated by RANSAC plane extraction and convex hull fitting. Knowledge is then used to classify every tessera based on its size, surface, shape, material properties and its neighbours' classes. The detection and semantic enrichment method shows promising results of 94% correct semantization, a first step toward the creation of an archaeological smart point cloud.
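
    The outline-polygon step above, projecting a tessera's points onto its RANSAC plane and fitting a convex hull, can be sketched with scipy; the sample 2D coordinates are hypothetical:

```python
import numpy as np
from scipy.spatial import ConvexHull

# Hypothetical tessera points already projected onto their fitted plane (2D).
proj = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.5, 0.5]])

hull = ConvexHull(proj)
outline = proj[hull.vertices]   # outline polygon, counter-clockwise order
area = hull.volume              # for 2D input, ConvexHull.volume is the enclosed area
```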

  14. Temporally consistent segmentation of point clouds

    NASA Astrophysics Data System (ADS)

    Owens, Jason L.; Osteen, Philip R.; Daniilidis, Kostas

    2014-06-01

    We consider the problem of generating temporally consistent point cloud segmentations from streaming RGB-D data, where every incoming frame extends existing labels to new points or contributes new labels while maintaining the labels for pre-existing segments. Our approach generates an over-segmentation based on voxel cloud connectivity, where a modified k-means algorithm selects supervoxel seeds and associates similar neighboring voxels to form segments. Given the data stream from a potentially mobile sensor, we solve for the camera transformation between consecutive frames using a joint optimization over point correspondences and image appearance. The aligned point cloud may then be integrated into a consistent model coordinate frame. Previously labeled points are used to mask incoming points from the new frame, while new and previous boundary points extend the existing segmentation. We evaluate the algorithm on newly-generated RGB-D datasets.

  15. Traffic sign detection in MLS acquired point clouds for geometric and image-based semantic inventory

    NASA Astrophysics Data System (ADS)

    Soilán, Mario; Riveiro, Belén; Martínez-Sánchez, Joaquín; Arias, Pedro

    2016-04-01

    Nowadays, mobile laser scanning has become a valid technology for infrastructure inspection. This technology permits collecting accurate 3D point clouds of urban and road environments, and the geometric and semantic analysis of these data has become an active research topic in recent years. This paper focuses on the detection of vertical traffic signs in 3D point clouds acquired by a LYNX Mobile Mapper system, comprised of laser scanning and RGB cameras. Each traffic sign is automatically detected in the LiDAR point cloud, and its main geometric parameters can be automatically extracted, thereby aiding the inventory process. Furthermore, the 3D positions of the traffic signs are reprojected onto the 2D images, which are spatially and temporally synced with the point cloud. Image analysis allows for recognizing the traffic sign semantics using machine learning approaches. The presented method was tested in road and urban scenarios in Galicia (Spain). The recall results for traffic sign detection are close to 98%, and existing false positives can be easily filtered after point cloud projection. Finally, the lack of a large, publicly available Spanish traffic sign database is pointed out.
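
    Reprojecting a detected sign's 3D position onto a synced 2D image, as described above, is a standard pinhole-camera operation; a generic sketch in which K, R, t and the test point are invented values, not the LYNX calibration:

```python
import numpy as np

def project_to_image(K, R, t, point_3d):
    """Pinhole reprojection: pixel ~ K (R X + t).
    K is the 3x3 intrinsic matrix; (R, t) maps world to camera coordinates."""
    cam = R @ np.asarray(point_3d, float) + t   # world -> camera frame
    uvw = K @ cam                               # camera -> homogeneous pixel
    return uvw[:2] / uvw[2]                     # perspective division
```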

  16. A Gross Error Elimination Method for Point Cloud Data Based on Kd-Tree

    NASA Astrophysics Data System (ADS)

    Kang, Q.; Huang, G.; Yang, S.

    2018-04-01

    Point cloud data has become one of the most widely used data sources in the field of remote sensing. Key steps of point cloud pre-processing focus on gross error elimination and quality control. Owing to the volume of point cloud data, existing gross error elimination methods consume massive resources in both space and time. This paper employs a new method based on the Kd-tree algorithm: a Kd-tree is constructed, the k nearest neighbours of each point are searched, and an appropriate threshold is set to judge whether the target point is an outlier. Experimental results show that the proposed algorithm helps to delete gross errors in point cloud data, decreases memory consumption and improves efficiency.
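
    A minimal sketch of the pipeline described above: build a Kd-tree, query the k nearest neighbours of every point, and threshold the mean neighbour distance. The 2-sigma threshold here is an assumed choice; the paper does not state its exact rule:

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_outliers(points, k=4, n_sigma=2.0):
    """Drop points whose mean distance to their k nearest neighbours
    exceeds mean + n_sigma * std over the whole cloud (gross errors)."""
    pts = np.asarray(points, float)
    tree = cKDTree(pts)
    d, _ = tree.query(pts, k=k + 1)     # column 0 is the point itself (d = 0)
    mean_d = d[:, 1:].mean(axis=1)
    keep = mean_d <= mean_d.mean() + n_sigma * mean_d.std()
    return pts[keep]
```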

  17. 78 FR 11867 - CenterPoint Energy Gas Transmission Company, LLC; Notice of Request Under Blanket Authorization

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-20

    ... Act (NGA), and CenterPoint's blanket certificate authorized in Docket Nos. CP82-384-000 and CP82-384... Regulations under the NGA (18 CFR 157.205) file a protest to the request. If no protest is filed within the... pursuant to section 7 of the NGA. Persons who wish to comment only on the environmental review of this...

  18. A Secure and Verifiable Outsourced Access Control Scheme in Fog-Cloud Computing

    PubMed Central

    Fan, Kai; Wang, Junxiong; Wang, Xin; Li, Hui; Yang, Yintang

    2017-01-01

    With the rapid development of big data and the Internet of Things (IoT), the number of networked devices and the data volume are increasing dramatically. Fog computing, which extends cloud computing to the edge of the network, can effectively solve the bottleneck problems of data transmission and data storage. However, security and privacy challenges are also arising in the fog-cloud computing environment. Ciphertext-policy attribute-based encryption (CP-ABE) can be adopted to realize data access control in fog-cloud computing systems. In this paper, we propose a verifiable outsourced multi-authority access control scheme, named VO-MAACS. In our construction, most encryption and decryption computations are outsourced to fog devices and the computation results can be verified by using our verification method. Meanwhile, to address the revocation issue, we design an efficient user and attribute revocation method for it. Finally, analysis and simulation results show that our scheme is both secure and highly efficient. PMID:28737733

  19. Object Detection using the Kinect

    DTIC Science & Technology

    2012-03-01

    Kinect camera and point cloud data from the Kinect’s structured light stereo system (figure 1). We obtain reasonable results using a single prototype...same manner we present in this report. For example, at Willow Garage , Steder uses a 3-D feature he developed to classify objects directly from point...detecting backpacks using the data available from the Kinect sensor. 4 3.1 Point Cloud Filtering Dense point clouds derived from stereo are notoriously

  20. Tunnel Point Cloud Filtering Method Based on Elliptic Cylindrical Model

    NASA Astrophysics Data System (ADS)

    Zhu, Ningning; Jia, Yonghong; Luo, Lun

    2016-06-01

    The large number of bolts and screws attached to the subway shield ring plates, along with the many metal stents and electrical equipment accessories mounted on the tunnel walls, make the laser point cloud data include many non-tunnel-section points (hereinafter referred to as non-points), thereby affecting the accuracy of modeling and deformation monitoring. This paper proposes a filtering method for the point cloud based on an elliptic cylindrical model. The original laser point cloud data is first projected onto a horizontal plane, and a searching algorithm extracts the edge points of both sides, which are then used to fit the tunnel central axis. Along the axis the point cloud is segmented regionally and then fitted iteratively as a smooth elliptic cylindrical surface. This processing enables the automatic filtering of the inner-wall non-points. Two groups of experiments showed consistent results: the elliptic cylindrical model based method can effectively filter out the non-points and meet the accuracy requirements for subway deformation monitoring. The method provides a new mode for periodic all-around deformation monitoring of tunnel sections in routine subway operation and maintenance.
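
    The filtering idea, keeping only points close to the fitted section surface, can be sketched with a circular cross-section as a simplified stand-in for the paper's elliptic cylindrical model (all names and numbers below are invented):

```python
import numpy as np

def filter_section(points, center, radius, tol=0.05):
    """Keep 2D cross-section points within tol of an idealized circular wall.
    Bolts, screws and mounted equipment protrude from the wall, so their
    radial distance falls outside the tolerance band and they are dropped."""
    pts = np.asarray(points, float)
    r = np.linalg.norm(pts - np.asarray(center, float), axis=1)
    return pts[np.abs(r - radius) <= tol]
```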

  1. 78 FR 77789 - Petition for Waiver of Compliance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-24

    ... Subdivision, from Control Point (CP) Y901 and Kedzie may be made in accordance with signal indication and at... and from the CP Y901 with the ATC cut out and back-up moves; or, With the ATC cut out due to failure. 2. Operations on the Chicago Service Unit, Geneva Subdivision, from Kedzie and Park CP Y015, engines...

  2. 75 FR 82137 - Notice of Application for Approval of Discontinuance or Modification of a Railroad Signal System

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-29

    ...) on the Chicago Division near Gary, Indiana. The proposed discontinuance is from control point (CP) Kirk Yard Junction to, but not including, CP Stockton 2 on the Matteson Subdivision Main 1 and Main 2; and from CP Kirk Yard Junction to, but not including, Stockton 1 on the Lake Front Subdivision Main...

  3. 78 FR 46678 - Notice of Application for Approval of Discontinuance or Modification of a Railroad Signal System

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-01

    ..., GA 30309. NS seeks approval of the proposed discontinuance of Control Point (CP) CSXT Connection, at Milepost (MP) H 194.9 on the NS Roanoke District, Virginia Division, between Shenandoah and Roanoke, VA. CP... is that CP CSXT Connection is seldom used and no longer needed for railroad operations. A copy of the...

  4. A Modular Approach to Video Designation of Manipulation Targets for Manipulators

    DTIC Science & Technology

    2014-05-12

    side view of a ray going through a point cloud of a water bottle sitting on the ground. The bottom left image shows the same point cloud after it has...System (ROS), Point Cloud Library (PCL), and OpenRAVE were used to a great extent to help promote reusability of the code developed during this

  5. Automatic Matching of Large Scale Images and Terrestrial LIDAR Based on App Synergy of Mobile Phone

    NASA Astrophysics Data System (ADS)

    Xia, G.; Hu, C.

    2018-04-01

    The digitalization of Cultural Heritage based on ground laser scanning technology has been widely applied. High-precision scanning and high-resolution photography of cultural relics are the main methods of data acquisition. Reconstruction with a complete point cloud and high-resolution images requires the matching of image and point cloud, the acquisition of homologous feature points, data registration, etc. However, the one-to-one correspondence between an image and its corresponding point cloud depends on inefficient manual search. The effective classification and management of large numbers of images, and the matching of large images with their corresponding point clouds, are therefore the focus of this research. In this paper, we propose automatic matching of large scale images and terrestrial LiDAR based on app synergy of a mobile phone. Firstly, we develop an app based on Android to take pictures and record related classification information. Secondly, all the images are automatically grouped with the recorded information. Thirdly, a matching algorithm is used to match the global and local images. According to the one-to-one correspondence between the global image and the point cloud reflection intensity image, the automatic matching of an image and its corresponding LiDAR point cloud is realized. Finally, the mapping relationship between the global image, the local images and the intensity image is established according to homologous feature points, so that we can establish the data structure of the global image, the local images within the global image, and the point cloud corresponding to each local image, and carry out visual management and querying of the images.

  6. Using LIDAR and UAV-derived point clouds to evaluate surface roughness in a gravel-bed braided river (Vénéon river, French Alps)

    NASA Astrophysics Data System (ADS)

    Vázquez Tarrío, Daniel; Borgniet, Laurent; Recking, Alain; Liebault, Frédéric; Vivier, Marie

    2016-04-01

    The present research focuses on the Vénéon river at Plan du Lac (Massif des Ecrins, France), an alpine braided gravel-bed stream with a glacio-nival hydrological regime draining a catchment area of 316 km2. The study covers a 2.5 km braided reach located immediately upstream of a small hydropower dam. An airborne LIDAR survey was carried out in October 2014 by EDF (the company managing the small hydropower dam), and data from this survey were available for the present research. The point density of the LIDAR-derived 3D point cloud was between 20-50 points/m2, with a vertical precision of 2-3 cm over flat surfaces. Moreover, between April and June 2015, we carried out a photogrammetric campaign based on aerial images taken with a UAV. The UAV-derived point cloud has a point density of 200-300 points/m2 and a vertical precision over flat control surfaces comparable to that of the LIDAR point cloud (2-3 cm). Simultaneously with the UAV campaign, we took several Wolman samples with the aim of characterizing the grain size distribution of the bed sediment. Wolman samples were taken following a geomorphological criterion (unit bars, head/tail of compound bars). Furthermore, some of the Wolman samples were repeated with the aim of quantifying the uncertainty of our sampling protocol. The LIDAR and UAV-derived point clouds were processed to check that both were correctly co-aligned. After that, we estimated bed roughness using the detrended standard deviation of heights in a 40-cm window. All of this data processing was done with CloudCompare.
    We then measured the distribution of roughness in the same geomorphological units where we took the Wolman samples and compared it with the grain size distributions measured in the field: differences between the UAV-point-cloud roughness distributions and the measured grain size distributions (~1-2 cm) are of the same order of magnitude as the differences found between the repeated Wolman samples (~0.5-1.5 cm). Differences with the LIDAR-derived roughness distributions are only slightly higher, which could be due to the lower point density of the LIDAR point clouds.
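
    The roughness metric used above, the detrended standard deviation of heights within a window, can be sketched as a plane fit followed by the spread of the residuals; window selection (the 40-cm neighbourhood) is assumed to have happened already, and the sample points are invented:

```python
import numpy as np

def detrended_roughness(points):
    """Std of heights after removing the best-fit (least-squares) plane
    from the (N, 3) points of one window."""
    pts = np.asarray(points, float)
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]   # z ~ a*x + b*y + c
    coef, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    residuals = pts[:, 2] - A @ coef
    return residuals.std()
```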

  7. Corner-point criterion for assessing nonlinear image processing imagers

    NASA Astrophysics Data System (ADS)

    Landeau, Stéphane; Pigois, Laurent; Foing, Jean-Paul; Deshors, Gilles; Swiathy, Greggory

    2017-10-01

    Range performance modeling of optronics imagers attempts to characterize the ability to resolve details in the image. Today, digital image processing is systematically used in conjunction with the optoelectronic system to correct its defects or to exploit tiny detection signals to increase performance. In order to characterize these processes, which have adaptive and non-linear properties, it becomes necessary to stimulate the imagers with test patterns whose properties are similar to those of actual scene images, in terms of dynamic range, contours, texture and singular points. This paper presents an approach based on a Corner-Point (CP) resolution criterion, derived from the Probability of Correct Resolution (PCR) of binary fractal patterns. The fundamental principle lies in the correct perception of the direction of the single minority-value pixel among the three majority-value pixels of a 2×2 pixel block. The evaluation procedure considers the actual image as its multi-resolution CP transformation, taking the role of Ground Truth (GT). After a spatial registration between the degraded image and the original one, the degradation is statistically measured by comparing the GT with the CP transformation of the degraded image, in terms of localized PCR at the region of interest. The paper defines this CP criterion and presents the developed evaluation techniques, such as the measurement of the number of CPs resolved on the target, and the CP transformation and its inverse transform, which make it possible to reconstruct an image of the perceived CPs. This criterion is then compared with the standard Johnson criterion in the case of linear blur and noise degradation. The evaluation of an imaging system integrating an image display and visual perception is considered, with an analysis scheme combining two methods: a CP measurement for the highly non-linear part (imaging) with a real-signature test target, and conventional methods for the more linear part (displaying). The application to color imaging is also proposed, with a discussion of the choice of working color space depending on the type of image enhancement processing used.
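
    The core perception test above, locating the single minority-value pixel in a binary 2×2 block, can be sketched as follows (the function name and return convention are illustrative, not from the paper):

```python
import numpy as np

def corner_point(block):
    """Return (row, col) of the single minority-value pixel in a binary
    2x2 block, or None if the block has no 3-vs-1 split."""
    b = np.asarray(block).astype(int)
    ones = b.sum()
    if ones == 1:
        idx = np.argwhere(b == 1)[0]   # lone 1 among three 0s
    elif ones == 3:
        idx = np.argwhere(b == 0)[0]   # lone 0 among three 1s
    else:
        return None                    # 0-4, 4-0 or 2-2 split: no corner point
    return tuple(idx)
```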

  8. Vertical Optical Scanning with Panoramic Vision for Tree Trunk Reconstruction

    PubMed Central

    Berveglieri, Adilson; Liang, Xinlian; Honkavaara, Eija

    2017-01-01

    This paper presents a practical application of a technique that uses a vertical optical flow with a fisheye camera to generate dense point clouds from a single planimetric station. Accurate data can be extracted to enable the measurement of tree trunks or branches. The images that are collected with this technique can be oriented in photogrammetric software (using fisheye models) and used to generate dense point clouds, provided that some constraints on the camera positions are adopted. A set of images was captured in a forest plot in the experiments. Weighted geometric constraints were imposed in the photogrammetric software to calculate the image orientation, perform dense image matching, and accurately generate a 3D point cloud. The tree trunks in the scenes were reconstructed and mapped in a local reference system. The accuracy assessment was based on differences between measured and estimated trunk diameters at different heights. Trunk sections from an image-based point cloud were also compared to the corresponding sections that were extracted from a dense terrestrial laser scanning (TLS) point cloud. Cylindrical fitting of the trunk sections allowed the assessment of the accuracies of the trunk geometric shapes in both clouds. The average difference between the cylinders that were fitted to the photogrammetric cloud and those to the TLS cloud was less than 1 cm, which indicates the potential of the proposed technique. The point densities that were obtained with vertical optical scanning were 1/3 less than those that were obtained with TLS. However, the point density can be improved by using higher resolution cameras. PMID:29207468

  9. Vertical Optical Scanning with Panoramic Vision for Tree Trunk Reconstruction.

    PubMed

    Berveglieri, Adilson; Tommaselli, Antonio M G; Liang, Xinlian; Honkavaara, Eija

    2017-12-02

    This paper presents a practical application of a technique that uses a vertical optical flow with a fisheye camera to generate dense point clouds from a single planimetric station. Accurate data can be extracted to enable the measurement of tree trunks or branches. The images that are collected with this technique can be oriented in photogrammetric software (using fisheye models) and used to generate dense point clouds, provided that some constraints on the camera positions are adopted. A set of images was captured in a forest plot in the experiments. Weighted geometric constraints were imposed in the photogrammetric software to calculate the image orientation, perform dense image matching, and accurately generate a 3D point cloud. The tree trunks in the scenes were reconstructed and mapped in a local reference system. The accuracy assessment was based on differences between measured and estimated trunk diameters at different heights. Trunk sections from an image-based point cloud were also compared to the corresponding sections that were extracted from a dense terrestrial laser scanning (TLS) point cloud. Cylindrical fitting of the trunk sections allowed the assessment of the accuracies of the trunk geometric shapes in both clouds. The average difference between the cylinders that were fitted to the photogrammetric cloud and those to the TLS cloud was less than 1 cm, which indicates the potential of the proposed technique. The point densities that were obtained with vertical optical scanning were 1/3 less than those that were obtained with TLS. However, the point density can be improved by using higher resolution cameras.

  10. Automatic Recognition of Indoor Navigation Elements from Kinect Point Clouds

    NASA Astrophysics Data System (ADS)

    Zeng, L.; Kang, Z.

    2017-09-01

    This paper automatically recognizes the navigation elements defined by the IndoorGML data standard: doors, stairways and walls. The data used are indoor 3D point clouds collected with a Kinect v2 sensor by means of ORB-SLAM. Compared with lidar, this approach is cheaper and more convenient, but the resulting point clouds suffer from noise, registration errors and large data volumes. Hence, we adopt a shape descriptor, the histogram of distances between two randomly chosen points proposed by Osada, merge it with other descriptors, and use them in conjunction with a random forest classifier to recognize the navigation elements (doors, stairways and walls) in the Kinect point clouds. This research acquires the navigation elements and their 3D locations from each single data frame through point cloud segmentation, boundary extraction, feature calculation and classification. Finally, the acquired navigation elements and their information are used to generate the state data of the indoor navigation module automatically. The experimental results demonstrate a high recognition accuracy for the proposed method.
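The Osada-style distance histogram used here can be sketched in a few lines: sample random pairs of points, histogram their distances, and normalize. This is an illustrative reconstruction of the D2 shape distribution idea, not the authors' code; the random forest stage is omitted.

```python
import math
import random

def d2_descriptor(points, n_pairs=2000, n_bins=16, seed=0):
    """Osada-style D2 shape distribution: a normalized histogram of
    distances between randomly chosen pairs of 3D points."""
    rng = random.Random(seed)
    dists = [math.dist(*rng.sample(points, 2)) for _ in range(n_pairs)]
    dmax = max(dists) or 1.0          # normalize by the largest sampled distance
    hist = [0] * n_bins
    for d in dists:
        hist[min(int(d / dmax * n_bins), n_bins - 1)] += 1
    return [h / n_pairs for h in hist]
```

The resulting fixed-length vector can be concatenated with other descriptors and fed to a classifier such as a random forest.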

  11. Towards 3D Matching of Point Clouds Derived from Oblique and Nadir Airborne Imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Ming

    Because of the low-expense, high-efficiency image collection process and the rich 3D and texture information presented in the images, the combined use of 2D airborne nadir and oblique images to reconstruct 3D geometric scenes has promising prospects for future commercial uses such as urban planning and first response. The methodology introduced in this thesis provides a feasible way towards fully automated 3D city modeling from oblique and nadir airborne imagery. In this thesis, the difficulty of matching 2D images with large disparity is avoided by grouping the images first and applying 3D registration afterward. The procedure starts with the extraction of point clouds using a modified version of the RIT 3D Extraction Workflow. Then the point clouds are refined by noise removal and surface smoothing processes. Since the point clouds extracted from different image groups use independent coordinate systems, translation, rotation and scale differences exist between them. To determine these differences, 3D keypoints and their features are extracted. For each pair of point clouds, an initial alignment and a more accurate registration are applied in succession. The final transform matrix presents the parameters describing the translation, rotation and scale requirements. The methodology presented in the thesis has been shown to behave well for test data. The robustness of this method is discussed by adding artificial noise to the test data. For Pictometry oblique aerial imagery, the initial alignment provides a rough alignment result, which contains a larger offset compared to that of the test data because of the low quality of the point clouds themselves, but it can be further refined through the final optimization. The accuracy of the final registration result is evaluated by comparing it to the result obtained from manual selection of matched points.
Using the method introduced, point clouds extracted from different image groups can be combined with each other to build a more complete point cloud, or be used as a complement to existing point clouds extracted from other sources. This research will both improve the state of the art of 3D city modeling and inspire new ideas in related fields.

  12. Fast Semantic Segmentation of 3d Point Clouds with Strongly Varying Density

    NASA Astrophysics Data System (ADS)

    Hackel, Timo; Wegner, Jan D.; Schindler, Konrad

    2016-06-01

    We describe an effective and efficient method for point-wise semantic classification of 3D point clouds. The method can handle unstructured and inhomogeneous point clouds such as those derived from static terrestrial LiDAR or photogrammetric reconstruction; and it is computationally efficient, making it possible to process point clouds with many millions of points in a matter of minutes. The key issue, both to cope with strong variations in point density and to bring down computation time, turns out to be careful handling of neighborhood relations. By choosing appropriate definitions of a point's (multi-scale) neighborhood, we obtain a feature set that is both expressive and fast to compute. We evaluate our classification method both on benchmark data from a mobile mapping platform and on a variety of large, terrestrial laser scans with greatly varying point density. The proposed feature set outperforms the state of the art with respect to per-point classification accuracy, while at the same time being much faster to compute.
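A common family of neighborhood features for this kind of point-wise classification is built from the eigenvalues of the local covariance matrix; a minimal sketch (our naming, following the widely used linearity/planarity/scattering convention, not necessarily the paper's exact feature set) is:

```python
import numpy as np

def eigen_features(neighborhood):
    """Covariance-eigenvalue features of one point's neighborhood:
    (linearity, planarity, scattering), from the sorted eigenvalues
    l1 >= l2 >= l3 of the 3x3 covariance matrix."""
    X = np.asarray(neighborhood, dtype=float)
    l3, l2, l1 = np.linalg.eigvalsh(np.cov(X.T))  # eigvalsh is ascending
    l1 = max(l1, 1e-12)                           # guard against degenerate sets
    return (l1 - l2) / l1, (l2 - l3) / l1, l3 / l1
```

Computing such features for several neighborhood sizes (radii or k-nearest-neighbor counts) and concatenating them gives the kind of multi-scale descriptor the abstract refers to.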

  13. A Voxel-Based Approach for Imaging Voids in Three-Dimensional Point Clouds

    NASA Astrophysics Data System (ADS)

    Salvaggio, Katie N.

    Geographically accurate scene models have enormous potential beyond simple visualization with regard to automated scene generation. In recent years, thanks to ever increasing computational efficiencies, there has been significant growth in both the computer vision and photogrammetry communities pertaining to automatic scene reconstruction from multiple-view imagery. The result of these algorithms is a three-dimensional (3D) point cloud which can be used to derive a final model using surface reconstruction techniques. However, the fidelity of these point clouds has not been well studied, and voids often exist within the point cloud. Voids exist in texturally difficult areas, as well as areas where multiple views were not obtained during collection, constant occlusion existed due to collection angles or overlapping scene geometry, or in regions that failed to triangulate accurately. It may be possible to fill in small voids in the scene using surface reconstruction or hole-filling techniques, but this is not the case with larger, more complex voids, and attempting to reconstruct them using only the knowledge of the incomplete point cloud is neither accurate nor aesthetically pleasing. A method is presented for identifying voids in point clouds by using a voxel-based approach to partition the 3D space. By using collection geometry and information derived from the point cloud, it is possible to detect unsampled voxels such that voids can be identified. This analysis takes into account the location of the camera and the 3D points themselves to capitalize on the idea of free space, such that voxels that lie on the ray between the camera and point are devoid of obstruction, as a clear line of sight is a necessary requirement for reconstruction.
Using this approach, voxels are classified into three categories: occupied (contains points from the point cloud), free (rays from the camera to the point passed through the voxel), and unsampled (does not contain points and no rays passed through the area). Voids in the voxel space are manifested as unsampled voxels. A similar line-of-sight analysis can then be used to pinpoint locations at aircraft altitude at which the voids in the point clouds could theoretically be imaged. This work is based on the assumption that inclusion of more images of the void areas in the 3D reconstruction process will reduce the number of voids in the point cloud that were a result of lack of coverage. Voids resulting from texturally difficult areas will not benefit from more imagery in the reconstruction process, and thus are identified and removed prior to the determination of future potential imaging locations.
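The three-way voxel labeling described above (occupied / free / unsampled) can be sketched compactly. This is an illustrative toy, with our own names and a simple dense-sampling ray traversal rather than an exact voxel-walking (DDA) algorithm:

```python
import numpy as np

UNSAMPLED, FREE, OCCUPIED = 0, 1, 2

def classify_voxels(camera, points, grid_min, voxel_size, shape):
    """Label a voxel grid from one camera: OCCUPIED voxels contain points,
    FREE voxels are crossed by a camera-to-point sight line, and the rest
    stay UNSAMPLED (candidate voids)."""
    labels = np.zeros(shape, dtype=np.uint8)
    cam = np.asarray(camera, dtype=float)
    gmin = np.asarray(grid_min, dtype=float)
    dims = np.asarray(shape)

    def voxel_of(p):
        i = np.floor((p - gmin) / voxel_size).astype(int)
        return tuple(i) if ((i >= 0).all() and (i < dims).all()) else None

    for p in np.asarray(points, dtype=float):
        # Sample along the sight line at sub-voxel steps
        n = int(np.linalg.norm(p - cam) / (0.25 * voxel_size)) + 1
        for t in np.linspace(0.0, 1.0, n, endpoint=False):
            v = voxel_of(cam + t * (p - cam))
            if v is not None and labels[v] == UNSAMPLED:
                labels[v] = FREE        # clear line of sight through here
        v = voxel_of(p)
        if v is not None:
            labels[v] = OCCUPIED        # the measured point itself
    return labels
```

Unsampled voxels that survive after all cameras and points are processed mark the voids; a similar line-of-sight test from candidate aircraft positions then identifies viewpoints from which those voids could be imaged.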

  14. Classification by Using Multispectral Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Liao, C. T.; Huang, H. H.

    2012-07-01

    Remote sensing images are generally recorded in a two-dimensional format containing multispectral information, in which semantic content is clearly visualized, so ground features can be readily recognized and classified via supervised or unsupervised classification methods. Nevertheless, multispectral images depend strongly on light conditions, and their classification results lack three-dimensional semantic information. On the other hand, LiDAR has become a principal technology for acquiring high-accuracy point cloud data. The advantages of LiDAR are its high data acquisition rate, independence from light conditions and direct production of three-dimensional coordinates. However, compared with multispectral images, its disadvantage is the lack of multispectral information, which remains a challenge for ground feature classification from massive point cloud data. Consequently, by combining the advantages of both LiDAR and multispectral imagery, point cloud data with three-dimensional coordinates and multispectral information can provide an integrated solution for point cloud classification. This research therefore acquires visible light and near-infrared images via close-range photogrammetry and matches the images automatically through a free online service to generate a multispectral point cloud. A three-dimensional affine coordinate transformation is then used to compare the data increment. Finally, given thresholds on height and color information are used for classification.
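A threshold rule on height and spectral values, in the spirit described, can be sketched as follows. The attribute layout, labels and threshold values here are purely illustrative assumptions, not the paper's actual rule:

```python
def classify_points(points, ground_h=0.5, veg_ratio=1.2):
    """Toy threshold classification of multispectral points.
    Each point is (x, y, z, red, green, nir); thresholds are illustrative."""
    labels = []
    for x, y, z, red, green, nir in points:
        if z < ground_h:
            labels.append("ground")          # low height -> terrain
        elif nir > veg_ratio * red:
            labels.append("vegetation")      # strong NIR response
        else:
            labels.append("structure")       # elevated, weak NIR response
    return labels
```

Real pipelines would calibrate such thresholds per scene, or replace the rule with a trained classifier over the same attributes.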

  15. Line segment extraction for large scale unorganized point clouds

    NASA Astrophysics Data System (ADS)

    Lin, Yangbin; Wang, Cheng; Cheng, Jun; Chen, Bili; Jia, Fukai; Chen, Zhonggui; Li, Jonathan

    2015-04-01

    Line segment detection in images is already a well-investigated topic, although it has received considerably less attention in 3D point clouds. Benefiting from current LiDAR devices, large-scale point clouds are becoming increasingly common. Most human-made objects have flat surfaces. Line segments that occur where pairs of planes intersect give important information regarding the geometric content of point clouds, which is especially useful for automatic building reconstruction and segmentation. This paper proposes a novel method that is capable of accurately extracting plane intersection line segments from large-scale raw scan points. The 3D line-support region, namely, a point set near a straight linear structure, is extracted simultaneously. The 3D line-support region is fitted by our Line-Segment-Half-Planes (LSHP) structure, which provides a geometric constraint for a line segment, making the line segment more reliable and accurate. We demonstrate our method on the point clouds of large-scale, complex, real-world scenes acquired by LiDAR devices. We also demonstrate the application of 3D line-support regions and their LSHP structures on urban scene abstraction.
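The geometric core of the method, the line where two fitted planes intersect, is easy to state. A minimal sketch (planes given in Hessian-like form n·x = d; names are ours):

```python
import numpy as np

def plane_intersection_line(n1, d1, n2, d2):
    """Line where planes n1.x = d1 and n2.x = d2 meet.
    Returns (point_on_line, unit_direction), or None if the planes are parallel."""
    n1 = np.asarray(n1, dtype=float)
    n2 = np.asarray(n2, dtype=float)
    direction = np.cross(n1, n2)          # line direction is normal to both normals
    if np.linalg.norm(direction) < 1e-9:
        return None
    # Pin down one point by adding the constraint direction.x = 0
    A = np.vstack([n1, n2, direction])
    p = np.linalg.solve(A, np.array([d1, d2, 0.0]))
    return p, direction / np.linalg.norm(direction)
```

In a full pipeline, the infinite line would then be clipped to the extent of the supporting points to obtain the finite segment, and the nearby points would form the 3D line-support region.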

  16. Characterizing Sorghum Panicles using 3D Point Clouds

    NASA Astrophysics Data System (ADS)

    Lonesome, M.; Popescu, S. C.; Horne, D. W.; Pugh, N. A.; Rooney, W.

    2017-12-01

    To address the demands of population growth and the impacts of global climate change, plant breeders must increase crop yield through genetic improvement. However, plant phenotyping, the characterization of a plant's physical attributes, remains a primary bottleneck in modern crop improvement programs. 3D point clouds generated from terrestrial laser scanning (TLS) and unmanned aerial system (UAS) based structure from motion (SfM) are a promising data source for increasing the efficiency of screening plant material in breeding programs. This study develops and evaluates methods for characterizing sorghum (Sorghum bicolor) panicles (heads) in field plots from both TLS and UAS-based SfM point clouds. The TLS point cloud over an experimental sorghum field at the Texas A&M farm in Burleson County, TX, was collected using a FARO Focus X330 3D laser scanner. The SfM point cloud was generated from UAS imagery captured with a Phantom 3 Professional UAS at 10 m altitude and 85% image overlap. The panicle detection method applies point cloud reflectance, height and point density attributes characteristic of sorghum panicles to detect them and estimate their dimensions (panicle length and width) through image classification and clustering procedures. We compare the derived panicle counts and panicle sizes with field-based and manually digitized measurements in selected plots and study the strengths and limitations of each data source for sorghum panicle characterization.

  17. Low-Temperature Biodiesel Research Reveals Potential Key to Successful Blend Performance (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    Relatively low-cost solutions could improve reliability while making biodiesel blends an affordable option. While biodiesel has very low production costs and the potential to displace up to 10% of petroleum diesel, until now, issues with cold weather performance have prevented biodiesel blends from being widely adopted. Some biodiesel blends have exhibited unexplained low-temperature performance problems even at blend levels as low as 2% by volume. The most common low-temperature performance issue is vehicle stalling caused by fuel filter clogging, which prevents fuel from reaching the engine. Research at the National Renewable Energy Laboratory (NREL) reveals the properties responsible for these problems, clearing a path for the development of solutions and expanded use of energy-conserving and low-emissions alternative fuel. NREL researchers set out to study the unpredictable nature of biodiesel crystallization, the condition that impedes the flow of fuel in cold weather. Their research revealed for the first time that saturated monoglyceride impurities common to the biodiesel manufacturing process create crystals that can cause fuel filter clogging and other problems when cooling at slow rates. Biodiesel low-temperature operational problems are commonly referred to as 'precipitates above the cloud point (CP).' NREL's Advanced Biofuels team spiked distilled soy and animal fat-derived B100, as well as B20, B10, and B5 biodiesel blends with three saturated monoglycerides (SMGs) at concentration levels comparable to those of real-world fuels. Above a threshold or eutectic concentration, the SMGs (monomyristin, monopalmitin, and monostearin) were shown to significantly raise the biodiesel CP, and had an even greater impact on the final melting temperature.
Researchers discovered that upon cooling, monoglyceride initially precipitates as a metastable crystal, but it transforms over time or upon slight heating into a more stable crystal with a much lower solubility and higher melting temperature - and with increased potential to cause vehicle performance issues. This explains why fuel-filter clogging typically occurs over the course of long, repeated diurnal cooling cycles. The elevated final melting points mean that restarting vehicles with clogged filters can be difficult even after ambient temperatures have warmed to well above CP. By examining how biodiesel impurities affect filtration and crystallization during warming and cooling cycles, NREL researchers uncovered an explanation for poor biodiesel performance at low temperatures. The observation of a eutectic point, or a concentration below which SMGs have no effect, indicates that SMGs do not have to be completely removed from biodiesel to solve low-temperature performance problems.

  18. Arrests for child pornography production: data at two time points from a national sample of U.S. law enforcement agencies.

    PubMed

    Wolak, Janis; Finkelhor, David; Mitchell, Kimberly J; Jones, Lisa M

    2011-08-01

    This study collected information on arrests for child pornography (CP) production at two points (2000-2001 and 2006) from a national sample of more than 2,500 law enforcement agencies. In addition to providing descriptive data about an understudied crime, the authors examined whether trends in arrests suggested increasing CP production, shifts in victim populations, and challenges to law enforcement. Arrests for CP production more than doubled from an estimated 402 in 2000-2001 to an estimated 859 in 2006. Findings suggest the increase was related to increased law enforcement activity rather than to growth in the population of CP producers. Adolescent victims increased, but there was no increase in the proportion of arrest cases involving very young victims or violent images. Producers distributed images in 23% of arrest cases, a proportion that did not change over time. This suggests that much CP production may be primarily for private use. Proactive law enforcement operations increased, as did other features consistent with a robust law enforcement response.

  19. Efficient terrestrial laser scan segmentation exploiting data structure

    NASA Astrophysics Data System (ADS)

    Mahmoudabadi, Hamid; Olsen, Michael J.; Todorovic, Sinisa

    2016-09-01

    New technologies such as lidar enable the rapid collection of massive datasets to model a 3D scene as a point cloud. However, while hardware technology continues to advance, processing 3D point clouds into informative models remains complex and time consuming. A common approach to increase processing efficiency is to segment the point cloud into smaller sections. This paper proposes a novel approach for point cloud segmentation using computer vision algorithms to analyze panoramic representations of individual laser scans. These panoramas can be quickly created using an inherent neighborhood structure that is established during the scanning process, which scans at fixed angular increments in a cylindrical or spherical coordinate system. In the proposed approach, a selected image segmentation algorithm is applied on several input layers exploiting this angular structure, including laser intensity, range, normal vectors, and color information. These segments are then mapped back to the 3D point cloud so that modeling can be completed more efficiently. This approach does not depend on pre-defined mathematical models and consequently does not require setting parameters for them. Unlike common geometrical point cloud segmentation methods, the proposed method employs the colorimetric and intensity data as another source of information. The proposed algorithm is demonstrated on several datasets encompassing a variety of scenes and objects. Results show a very high perceptual (visual) level of segmentation and thereby the feasibility of the proposed algorithm. The proposed method is also more efficient compared to Random Sample Consensus (RANSAC), which is a common approach for point cloud segmentation.
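The panoramic representation exploits the scanner's fixed angular increments: each point maps to a (row, column) cell by its elevation and azimuth. A minimal sketch of that mapping (our own simplified version, storing one scalar attribute per cell; a real scan would fill range, intensity, normal and color layers):

```python
import numpy as np

def scan_to_panorama(points, values, h_res_deg=0.5, v_res_deg=0.5):
    """Project scanner-centered XYZ points onto a 2D panoramic grid using
    the scan's angular structure; each cell stores one attribute value."""
    pts = np.asarray(points, dtype=float)
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    az = np.degrees(np.arctan2(y, x)) % 360.0        # azimuth in [0, 360)
    el = np.degrees(np.arctan2(z, np.hypot(x, y)))   # elevation in [-90, 90]
    ncols = int(round(360.0 / h_res_deg))
    nrows = int(round(180.0 / v_res_deg)) + 1
    cols = np.rint(az / h_res_deg).astype(int) % ncols
    rows = np.rint((el + 90.0) / v_res_deg).astype(int)
    pano = np.full((nrows, ncols), np.nan)           # NaN marks empty cells
    pano[rows, cols] = np.asarray(values, dtype=float)
    return pano
```

Any 2D image segmentation algorithm can then be run on the stacked panorama layers, and the resulting labels mapped back to the 3D points through the same (row, column) indices.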

  20. Three-dimensional reconstruction of indoor whole elements based on mobile LiDAR point cloud data

    NASA Astrophysics Data System (ADS)

    Gong, Yuejian; Mao, Wenbo; Bi, Jiantao; Ji, Wei; He, Zhanjun

    2014-11-01

    Ground-based LiDAR is one of the most effective city modeling tools at present and has been widely used for three-dimensional reconstruction of outdoor objects. However, for indoor objects, there are technical bottlenecks due to the lack of GPS signal. In this paper, based on high-precision indoor point cloud data obtained by LiDAR with an advanced indoor mobile measuring system, high-precision models were constructed for all indoor ancillary facilities. The point cloud data we employed also contain color features, extracted by fusion with CCD images. Thus, the data have both spatial geometric features and spectral information, which can be used for constructing object surfaces and restoring the color and texture of the geometric model. Based on the Autodesk CAD platform and with the help of the PointSence plug-in, three-dimensional reconstruction of indoor whole elements was realized. Specifically, Pointools Edit Pro was adopted to edit the point cloud, and different types of indoor point cloud data were processed, including data format conversion, outline extraction and texture mapping of the point cloud model. Finally, three-dimensional visualization of the real-world indoor scene was completed. Experimental results showed that high-precision 3D point cloud data obtained by indoor mobile measuring equipment can be used for 3D reconstruction of indoor whole elements and that the methods proposed in this paper can efficiently realize this reconstruction. Moreover, the modeling precision could be controlled within 5 cm, which proved to be a satisfactory result.

  1. 3D point cloud analysis of structured light registration in computer-assisted navigation in spinal surgeries

    NASA Astrophysics Data System (ADS)

    Gupta, Shaurya; Guha, Daipayan; Jakubovic, Raphael; Yang, Victor X. D.

    2017-02-01

    Computer-assisted navigation is used by surgeons in spine procedures to guide pedicle screws, to improve placement accuracy and, in some cases, to better visualize the patient's underlying anatomy. Intraoperative registration is performed to establish a correspondence between the patient's anatomy and the pre-/intra-operative image. Current algorithms rely on seeding points obtained directly from the exposed spinal surface to achieve clinically acceptable registration accuracy. Registration of these three-dimensional surface point clouds is prone to various systematic errors. The goal of this study was to evaluate the robustness of surgical navigation systems by examining the relationship between the optical density of an acquired 3D point cloud and the corresponding surgical navigation error. A retrospective review was conducted of 48 registrations performed using an experimental structured light navigation system developed within our lab. For each registration, the number of points in the acquired point cloud was evaluated relative to whether the registration was acceptable, the corresponding system-reported error, and the target registration error. It was demonstrated that the number of points in the point cloud correlates neither with the acceptance/rejection of a registration nor with the system-reported error. However, a negative correlation was observed between the number of points in the point cloud and the corresponding sagittal angular error. Thus, the system-reported total registration points and accuracy are insufficient to gauge the accuracy of a navigation system, and the operating surgeon must verify and validate the registration based on anatomical landmarks prior to commencing surgery.

  2. Study on Huizhou architecture of point cloud registration based on optimized ICP algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Runmei; Wu, Yulu; Zhang, Guangbin; Zhou, Wei; Tao, Yuqian

    2018-03-01

    In view of the facts that current point cloud registration software has high hardware requirements, a heavy workload and multiple interactive definitions, and that the source code of software with better processing results is not open, a two-step registration method based on normal vector distribution features and a coarse-feature-based iterative closest point (ICP) algorithm is proposed in this paper. This method combines the fast point feature histogram (FPFH) algorithm, defines the adjacency region of the point cloud and a calculation model for the distribution of normal vectors, sets up a local coordinate system for each key point, and obtains the transformation matrix to finish rough registration; the rough registration results of two stations are then accurately registered using the ICP algorithm. Experimental results show that, compared with the traditional ICP algorithm, the method used in this paper has obvious time and precision advantages for large point clouds.
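The fine-registration stage, ICP, can be sketched in its simplest point-to-point form: repeatedly match each source point to its nearest target point, then solve for the rigid motion that best aligns the matches (Kabsch/SVD). This is a generic textbook sketch, not the paper's optimized variant, and it uses brute-force nearest neighbors for brevity:

```python
import numpy as np

def best_rigid(P, Q):
    """Least-squares rotation R and translation t mapping P onto Q (Kabsch)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def icp(src, dst, iters=20):
    """Point-to-point ICP: nearest-neighbor matching + pose update, repeated."""
    src, dst = np.asarray(src, dtype=float), np.asarray(dst, dtype=float)
    cur = src.copy()
    for _ in range(iters):
        d = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matches = dst[d.argmin(axis=1)]       # closest target for each point
        R, t = best_rigid(cur, matches)
        cur = cur @ R.T + t
    return cur
```

The two-step idea in the abstract supplies ICP with a good starting pose from coarse FPFH-based registration, which is what lets this local iteration converge to the right alignment.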

  3. Automatic markerless registration of point clouds with semantic-keypoint-based 4-points congruent sets

    NASA Astrophysics Data System (ADS)

    Ge, Xuming

    2017-08-01

    The coarse registration of point clouds from urban building scenes has become a key topic in applications of terrestrial laser scanning technology. Sampling-based algorithms in the random sample consensus (RANSAC) model have emerged as mainstream solutions to address coarse registration problems. In this paper, we propose a novel combined solution to automatically align two markerless point clouds from building scenes. Firstly, the method segments non-ground points from ground points. Secondly, the proposed method detects feature points from each cross section and then obtains semantic keypoints by connecting feature points with specific rules. Finally, the detected semantic keypoints from two point clouds act as inputs to a modified 4PCS algorithm. Examples are presented and the results compared with those of K-4PCS to demonstrate the main contributions of the proposed method, which are the extension of the original 4PCS to handle heavy datasets and the use of semantic keypoints to improve K-4PCS in relation to registration accuracy and computational efficiency.

  4. Object-Based Coregistration of Terrestrial Photogrammetric and ALS Point Clouds in Forested Areas

    NASA Astrophysics Data System (ADS)

    Polewski, P.; Erickson, A.; Yao, W.; Coops, N.; Krzystek, P.; Stilla, U.

    2016-06-01

    Airborne Laser Scanning (ALS) and terrestrial photogrammetry are methods applicable for mapping forested environments. While ground-based techniques provide valuable information about the forest understory, the measured point clouds are normally expressed in a local coordinate system, whose transformation into a georeferenced system requires additional effort. In contrast, ALS point clouds are usually georeferenced, yet the point density near the ground may be poor under dense overstory conditions. In this work, we propose to combine the strengths of the two data sources by co-registering the respective point clouds, thus enriching the georeferenced ALS point cloud with detailed understory information in a fully automatic manner. Due to markedly different sensor characteristics, coregistration methods which expect a high geometric similarity between keypoints are not suitable in this setting. Instead, our method focuses on the object (tree stem) level. We first calculate approximate stem positions in the terrestrial and ALS point clouds and construct, for each stem, a descriptor which quantifies the 2D and vertical distances to other stem centers (at ground height). Then, the similarities between all descriptor pairs from the two point clouds are calculated, and standard graph maximum matching techniques are employed to compute corresponding stem pairs (tiepoints). Finally, the tiepoint subset yielding the optimal rigid transformation between the terrestrial and ALS coordinate systems is determined. We test our method on simulated tree positions and a plot situated in the northern interior of the Coast Range in western Oregon, USA, using ALS data (76 x 121 m2) and a photogrammetric point cloud (33 x 35 m2) derived from terrestrial photographs taken with a handheld camera. Results on both simulated and real data show that the proposed stem descriptors are discriminative enough to derive good correspondences. 
Specifically, for the real plot data, 24 corresponding stems were coregistered with an average 2D position deviation of 66 cm.
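The stem descriptor idea, each stem characterized by its distances to neighboring stems, invariant to translation and rotation, can be sketched simply. This toy uses a greedy nearest-descriptor pairing as a stand-in for the graph maximum matching the paper employs; all names are illustrative:

```python
import numpy as np

def stem_descriptor(idx, stems, n=4):
    """Descriptor of one stem: sorted 2D distances (at ground height)
    to its n nearest neighboring stems."""
    s = np.asarray(stems, dtype=float)
    d = np.linalg.norm(s - s[idx], axis=1)
    return np.sort(d)[1:n + 1]           # skip the zero self-distance

def match_stems(A, B, n=4):
    """Greedy tiepoint pairing by descriptor distance (a simple stand-in
    for the maximum-matching step)."""
    dA = [stem_descriptor(i, A, n) for i in range(len(A))]
    dB = [stem_descriptor(j, B, n) for j in range(len(B))]
    pairs, used = [], set()
    for i, da in enumerate(dA):
        j = min((j for j in range(len(B)) if j not in used),
                key=lambda j: np.linalg.norm(da - dB[j]))
        used.add(j)
        pairs.append((i, j))
    return pairs
```

The matched stem pairs then serve as tiepoints from which a rigid transformation between the terrestrial and ALS coordinate systems can be estimated.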

  5. Large-scale urban point cloud labeling and reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Liqiang; Li, Zhuqiang; Li, Anjian; Liu, Fangyu

    2018-04-01

    The large number of object categories and many overlapping or closely neighboring objects in large-scale urban scenes pose great challenges in point cloud classification. In this paper, a novel framework is proposed for classification and reconstruction of airborne laser scanning point cloud data. To label point clouds, we present a rectified linear units neural network named ReLu-NN where the rectified linear units (ReLu) instead of the traditional sigmoid are taken as the activation function in order to speed up the convergence. Since the features of the point cloud are sparse, we reduce the number of neurons by the dropout to avoid over-fitting of the training process. The set of feature descriptors for each 3D point is encoded through self-taught learning, and forms a discriminative feature representation which is taken as the input of the ReLu-NN. The segmented building points are consolidated through an edge-aware point set resampling algorithm, and then they are reconstructed into 3D lightweight models using the 2.5D contouring method (Zhou and Neumann, 2010). Compared with deep learning approaches, the ReLu-NN introduced can easily classify unorganized point clouds without rasterizing the data, and it does not need a large number of training samples. Most of the parameters in the network are learned, and thus the intensive parameter tuning cost is significantly reduced. Experimental results on various datasets demonstrate that the proposed framework achieves better performance than other related algorithms in terms of classification accuracy and reconstruction quality.
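The forward pass of the ReLu-NN described above, fully connected layers with ReLU activations and a class-probability output, can be sketched in a few lines. This is a generic illustration of the architecture family, not the authors' trained network; dropout and the self-taught feature encoding are omitted:

```python
import numpy as np

def relu_nn_forward(x, weights, biases):
    """Forward pass of a small fully connected network: ReLU on hidden
    layers, softmax over class scores at the output."""
    a = np.asarray(x, dtype=float)
    for W, b in zip(weights[:-1], biases[:-1]):
        a = np.maximum(0.0, a @ W + b)    # ReLU hidden layer
    z = a @ weights[-1] + biases[-1]      # linear output scores
    e = np.exp(z - z.max())               # numerically stable softmax
    return e / e.sum()
```

Each 3D point's feature descriptor is fed through such a network, and the argmax of the output probabilities gives its predicted class label.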

  6. Early demethylation of non-CpG, CpC-rich, elements in the myogenin 5′-flanking region

    PubMed Central

    Fuso, Andrea; Ferraguti, Giampiero; Grandoni, Francesco; Ruggeri, Raffaella; Scarpa, Sigfrido; Strom, Roberto

    2010-01-01

    The dynamic changes and structural patterns of DNA methylation of genes without CpG islands are poorly characterized. The relevance of CpG to the non-CpG methylation equilibrium in transcriptional repression is unknown. In this work, we analyzed the DNA methylation pattern of the 5′-flanking of the myogenin gene, a positive regulator of muscle differentiation with no CpG island and low CpG density, in both C2C12 muscle satellite cells and embryonic muscle. Embryonic brain was studied as a non-expressing tissue. High levels of both CpG and non-CpG methylation were observed in non-expressing experimental conditions. Both CpG and non-CpG methylation rapidly dropped during muscle differentiation and myogenin transcriptional activation with active demethylation dynamics. Non-CpG demethylation occurred more rapidly than CpG demethylation. Demethylation spread from initially highly methylated short CpC-rich elements to a virtually unmethylated status. These short elements have a high CpC content and density, share some motifs and largely coincide with putative recognition sequences of some differentiation-related transcription factors. Our findings point to a dynamically controlled equilibrium between CpG and non-CpG active demethylation in the transcriptional control of tissue-specific genes. The short CpC-rich elements are new structural features of the methylation machinery, whose functions may include priming the complete demethylation of a transcriptionally crucial DNA region. PMID:20935518

  7. A keyword searchable attribute-based encryption scheme with attribute update for cloud storage.

    PubMed

    Wang, Shangping; Ye, Jian; Zhang, Yaling

    2018-01-01

    Ciphertext-policy attribute-based encryption (CP-ABE) is a new type of data encryption primitive that is well suited to cloud data storage because of its fine-grained access control. A keyword-based searchable encryption scheme enables users to quickly find interesting data stored in the cloud server without revealing any information about the searched keywords. In this work, we provide a keyword searchable attribute-based encryption scheme with attribute update for cloud storage, which combines an attribute-based encryption scheme with a keyword searchable encryption scheme. The new scheme supports user attribute updates; in particular, when a user's attribute needs to be updated, only that user's secret key components related to the attribute need to be updated, while other users' secret keys and the ciphertexts related to this attribute need not be updated, with the help of the cloud server. In addition, we outsource the operations with high computational cost to the cloud server to reduce the user's computational burden. Moreover, our scheme is proven semantically secure against chosen ciphertext-policy and chosen plaintext attacks in the generic bilinear group model, and it is also proven semantically secure against chosen keyword attacks under the bilinear Diffie-Hellman (BDH) assumption.

  9. Superposition and alignment of labeled point clouds.

    PubMed

    Fober, Thomas; Glinca, Serghei; Klebe, Gerhard; Hüllermeier, Eyke

    2011-01-01

    Geometric objects are often represented approximately in terms of a finite set of points in three-dimensional Euclidean space. In this paper, we extend this representation to what we call labeled point clouds. A labeled point cloud is a finite set of points, where each point is not only associated with a position in three-dimensional space, but also with a discrete class label that represents a specific property. This type of model is especially suitable for modeling biomolecules such as proteins and protein binding sites, where a label may represent an atom type or a physico-chemical property. Proceeding from this representation, we address the question of how to compare two labeled point clouds in terms of their similarity. Using fuzzy modeling techniques, we develop a suitable similarity measure as well as an efficient evolutionary algorithm to compute it. Moreover, we consider the problem of establishing an alignment of the structures in the sense of a one-to-one correspondence between their basic constituents. From a biological point of view, alignments of this kind are of great interest, since mutually corresponding molecular constituents offer important information about evolution and heredity, and can also serve as a means to explain a degree of similarity. In this paper, we therefore develop a method for computing pairwise or multiple alignments of labeled point clouds. To this end, we proceed from an optimal superposition of the corresponding point clouds and construct an alignment which is as much as possible in agreement with the neighborhood structure established by this superposition. We apply our methods to the structural analysis of protein binding sites.
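    As a rough illustration of comparing labeled point clouds (not the fuzzy, evolutionary-optimized measure of the paper), one can score each point against the nearest point in the other cloud carrying the same label; the Gaussian-style kernel and scale parameter here are illustrative assumptions:

```python
import math

def labeled_similarity(cloud_a, cloud_b, scale=1.0):
    """Label-aware similarity: each point is matched to the nearest
    point in the other cloud with the same label; distances are mapped
    to [0, 1] with a Gaussian-like kernel and averaged symmetrically."""
    def one_way(src, dst):
        scores = []
        for (p, lab) in src:
            same = [q for (q, l2) in dst if l2 == lab]
            if not same:
                scores.append(0.0)  # no partner with this label at all
                continue
            d = min(math.dist(p, q) for q in same)
            scores.append(math.exp(-(d / scale) ** 2))
        return sum(scores) / len(scores)
    return 0.5 * (one_way(cloud_a, cloud_b) + one_way(cloud_b, cloud_a))

a = [((0, 0, 0), "C"), ((1, 0, 0), "N")]
b = [((0, 0, 0), "C"), ((1, 0, 0), "N")]
print(labeled_similarity(a, b))  # 1.0 for identical labeled clouds
```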

  10. Automatic registration of terrestrial point clouds based on panoramic reflectance images and efficient BaySAC

    NASA Astrophysics Data System (ADS)

    Kang, Zhizhong

    2013-10-01

    This paper presents a new approach to automatic registration of terrestrial laser scanning (TLS) point clouds using a novel robust estimation method, an efficient BaySAC (BAYes SAmpling Consensus). The proposed method directly generates reflectance images from the 3D point clouds and then extracts keypoints with the SIFT algorithm to identify corresponding image points. The 3D corresponding points, from which transformation parameters between point clouds are computed, are acquired by mapping the 2D points onto the point cloud. To remove falsely accepted correspondences, we implement a conditional sampling method that selects the n data points with the highest inlier probabilities as a hypothesis set and updates the inlier probability of each data point using a simplified Bayes' rule, for the purpose of improving computational efficiency. The prior probability is estimated by verifying the distance invariance between correspondences. The proposed approach is tested on four data sets acquired by three different scanners. The results show that, compared with RANSAC, BaySAC requires fewer iterations and lower computational cost when the hypothesis set is contaminated with more outliers. The registration results also indicate that the proposed algorithm achieves high registration accuracy on all experimental datasets.
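    The conditional-sampling idea can be sketched on a toy 2D line-fitting problem: instead of drawing hypothesis sets at random as in RANSAC, always take the points with the highest current inlier probabilities and update those probabilities after each hypothesis test. This is a simplified illustration, not the paper's implementation; the update factors are arbitrary assumptions:

```python
def baysac_line(points, iters=50, tol=0.1):
    """Simplified BaySAC: deterministically pick the 2 points with the
    highest current inlier probabilities as the hypothesis set, then
    update each point's probability with a simplified Bayes-style rule."""
    prob = {i: 0.5 for i in range(len(points))}
    best, best_inliers = None, -1
    for _ in range(iters):
        i, j = sorted(prob, key=prob.get, reverse=True)[:2]
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:                      # degenerate vertical pair: demote it
            prob[i] *= 0.5
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [k for k, (x, y) in enumerate(points) if abs(y - (a * x + b)) < tol]
        if len(inliers) > best_inliers:
            best, best_inliers = (a, b), len(inliers)
        for k in prob:                    # support raises belief, misfit lowers it
            prob[k] = 0.9 * prob[k] if k in inliers else 0.3 * prob[k]
    return best, best_inliers

model, n_in = baysac_line([(0, 0), (1, 2), (2, 4), (3, 6), (5, -3)])
print(model, n_in)  # recovers y = 2x with 4 inliers
```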

  11. Developing disease resistance in CP-Cultivars

    USDA-ARS?s Scientific Manuscript database

    Disease resistance is an important selection criterion in the Canal Point (CP) Sugarcane Cultivar Development Program. Ratoon stunt (RSD, caused by Leifsonia xyli subsp. Xyli Evtsuhenko et al.), leaf scald (caused by Xanthomonas albilineans Ashby, Dowson), mosaic (caused by Sugarcane mosaic virus st...

  12. Continuum Limit of Total Variation on Point Clouds

    NASA Astrophysics Data System (ADS)

    García Trillos, Nicolás; Slepčev, Dejan

    2016-04-01

    We consider point clouds obtained as random samples of a measure on a Euclidean domain. A graph representing the point cloud is obtained by assigning weights to edges based on the distance between the points they connect. Our goal is to develop mathematical tools needed to study the consistency, as the number of available data points increases, of graph-based machine learning algorithms for tasks such as clustering. In particular, we study when the cut capacity, and more generally total variation, on these graphs is a good approximation of the perimeter (total variation) in the continuum setting. We address this question in the setting of Γ-convergence. We obtain almost optimal conditions on the scaling, as the number of points increases, of the size of the neighborhood over which the points are connected by an edge for the Γ-convergence to hold. Taking the limit is enabled by a transportation-based metric which allows us to suitably compare functionals defined on different point clouds.
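    The discrete object under study, graph total variation, is easy to compute directly on a sample. A small sketch on an ε-neighborhood graph with unit edge weights (the paper's normalization constants are omitted for simplicity):

```python
import math

def graph_total_variation(points, u, eps):
    """Graph total variation of a function u on an epsilon-neighbourhood
    graph: sum of |u_i - u_j| over all connected pairs (unit weights)."""
    tv = 0.0
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) < eps:
                tv += abs(u[i] - u[j])
    return tv

# Indicator of the left half of points on a line: the graph TV counts
# the edges cut by the jump, a discrete analogue of perimeter.
pts = [(x, 0.0) for x in range(6)]
u = [1, 1, 1, 0, 0, 0]
print(graph_total_variation(pts, u, eps=1.5))  # only edge (2,3) is cut -> 1.0
```

Enlarging ε connects more pairs, so the same jump cuts more edges; this is why the scaling of the neighborhood size matters for consistency.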

  13. Point cloud registration from local feature correspondences-Evaluation on challenging datasets.

    PubMed

    Petricek, Tomas; Svoboda, Tomas

    2017-01-01

    Registration of laser scans, or point clouds in general, is a crucial step of localization and mapping with mobile robots or in object modeling pipelines. A coarse alignment of the point clouds is generally needed before applying local methods such as the Iterative Closest Point (ICP) algorithm. We propose a feature-based approach to point cloud registration and evaluate the proposed method and its individual components on challenging real-world datasets. For a moderate overlap between the laser scans, the method provides a superior registration accuracy compared to state-of-the-art methods including Generalized ICP, 3D Normal-Distribution Transform, Fast Point-Feature Histograms, and 4-Points Congruent Sets. Compared to the surface normals, the points as the underlying features yield higher performance in both keypoint detection and establishing local reference frames. Moreover, sign disambiguation of the basis vectors proves to be an important aspect in creating repeatable local reference frames. A novel method for sign disambiguation is proposed which yields highly repeatable reference frames.

  14. Dynamic balance control in elders: gait initiation assessment as a screening tool

    NASA Technical Reports Server (NTRS)

    Chang, H.; Krebs, D. E.; Wall, C. C. (Principal Investigator)

    1999-01-01

    OBJECTIVE: To determine whether measurements of center of gravity-center of pressure separation (CG-CP moment arm) during gait initiation can differentiate healthy from disabled subjects with sufficient specificity and sensitivity to be useful as a screening test for dynamic balance in elderly patients. SUBJECTS: Three groups of elderly subjects (age, 74.97+/-6.56 yrs): healthy elders (HE, n = 21), disabled elders (DE, n = 20), and elders with vestibular hypofunction (VH, n = 18). DESIGN: Cross-sectional, intact-groups research design. Peak CG-CP moment arm measures how far the subject will tolerate the whole-body CG to deviate from the ground reaction force's CP; it represents dynamic balance control. Screening test cutoff points at 16 to 18 cm peak CG-CP moment arm predicted group membership. RESULTS: The magnitude of peak CG-CP moment arm was significantly greater in HE than in DE and VH subjects (p<.01) and was not different between the DE and VH groups. The peak CG-CP moment arm occurred at the end of single stance phase in all groups. As a screening test, the peak moment arm has greater than 50% sensitivity and specificity to discriminate the HE group from the DE and VH groups with peak CG-CP moment arm cutoff points between 16 and 18 cm. CONCLUSIONS: Examining dynamic balance through the use of the CG-CP moment arm during single stance in gait initiation discriminates between nondisabled and disabled older persons and warrants further investigation as a potential tool to identify people with balance dysfunction.
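    The screening rule described above reduces to a simple threshold on the peak CG-CP moment arm. A sketch, with the cutoff fixed at 17 cm inside the paper's 16-18 cm range (the specific value and labels are choices made here for illustration):

```python
def classify_balance(peak_moment_arm_cm, cutoff_cm=17.0):
    """Screening rule sketched from the study: healthy elders showed a
    larger peak CG-CP moment arm during gait initiation, so values above
    the cutoff suggest intact dynamic balance control."""
    return "healthy" if peak_moment_arm_cm > cutoff_cm else "at-risk"

print(classify_balance(20.5))  # healthy
print(classify_balance(14.2))  # at-risk
```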

  15. On the performance of metrics to predict quality in point cloud representations

    NASA Astrophysics Data System (ADS)

    Alexiou, Evangelos; Ebrahimi, Touradj

    2017-09-01

    Point clouds are a promising alternative for immersive representation of visual contents. Recently, an increased interest has been observed in the acquisition, processing and rendering of this modality. Although subjective and objective evaluations are critical in order to assess the visual quality of media content, they still remain open problems for point cloud representation. In this paper we focus our efforts on subjective quality assessment of point cloud geometry, subject to typical types of impairments such as noise corruption and compression-like distortions. In particular, we propose a subjective methodology that is closer to real-life scenarios of point cloud visualization. The performance of the state-of-the-art objective metrics is assessed by considering the subjective scores as the ground truth. Moreover, we investigate the impact of adopting different test methodologies by comparing them. Advantages and drawbacks of every approach are reported, based on statistical analysis. The results and conclusions of this work provide useful insights that could be considered in future experimentation.
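    A typical point-to-point geometry metric of the kind benchmarked in such studies can be sketched as a symmetric nearest-neighbour RMSE (an illustrative stand-in, not one of the specific metrics evaluated in the paper):

```python
import math

def symmetric_rmse(cloud_a, cloud_b):
    """Point-to-point geometry distortion: root-mean-square of
    nearest-neighbour distances, symmetrized by taking the worse
    of the two directions (brute-force search for clarity)."""
    def one_way(src, dst):
        errs = [min(math.dist(p, q) for q in dst) ** 2 for p in src]
        return math.sqrt(sum(errs) / len(errs))
    return max(one_way(cloud_a, cloud_b), one_way(cloud_b, cloud_a))

ref = [(0, 0, 0), (1, 0, 0), (2, 0, 0)]
noisy = [(0, 0.1, 0), (1, -0.1, 0), (2, 0.1, 0)]
print(round(symmetric_rmse(ref, noisy), 3))  # 0.1
```

Objective scores like this are then correlated against subjective mean opinion scores to judge how well the metric predicts perceived quality.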

  16. Semantic Segmentation of Building Elements Using Point Cloud Hashing

    NASA Astrophysics Data System (ADS)

    Chizhova, M.; Gurianov, A.; Hess, M.; Luhmann, T.; Brunn, A.; Stilla, U.

    2018-05-01

    For the interpretation of point clouds, the semantic definition of segments extracted from point clouds or images is a common problem. Usually, the semantics of geometrically pre-segmented point cloud elements are determined using probabilistic networks and scene databases. The proposed semantic segmentation method is based on the psychological human interpretation of geometric objects, especially on fundamental rules of primary comprehension. Starting from these rules, buildings can be quite well and simply classified by a human operator (e.g. an architect) into different building types and structural elements (dome, nave, transept etc.), including particular building parts which are visually detected. The key part of the procedure is a novel method based on hashing, in which point cloud projections are transformed into binary pixel representations. The segmentation approach, demonstrated on the example of classical Orthodox churches, is also suitable for other buildings and objects characterized by a particular typology in their construction (e.g. industrial objects in standardized environments with strict component design allowing clear semantic modelling).
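    The hashing step, projecting a point cloud into a binary pixel representation, can be sketched as a coarse occupancy grid packed into an integer (the resolution and packing scheme here are assumptions for illustration, not the paper's parameters):

```python
def binary_projection_hash(points, resolution=8):
    """Project a 3-D point cloud onto the XY plane and rasterize the
    occupancy into a resolution x resolution bit grid, packed as an int.
    Clouds with the same planar footprint hash to the same value."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0, x1 = min(xs), max(xs)
    y0, y1 = min(ys), max(ys)
    bits = 0
    for x, y, _ in points:
        col = min(int((x - x0) / (x1 - x0 + 1e-9) * resolution), resolution - 1)
        row = min(int((y - y0) / (y1 - y0 + 1e-9) * resolution), resolution - 1)
        bits |= 1 << (row * resolution + col)
    return bits

a = [(0, 0, 0), (1, 1, 5)]
b = [(0, 0, 2), (1, 1, 0)]   # same footprint, different heights
print(binary_projection_hash(a) == binary_projection_hash(b))  # True
```

Comparing such binary signatures across canonical projections gives a cheap first test of structural similarity before any finer classification.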

  17. Comparison of Uas-Based Photogrammetry Software for 3d Point Cloud Generation: a Survey Over a Historical Site

    NASA Astrophysics Data System (ADS)

    Alidoost, F.; Arefi, H.

    2017-11-01

    Nowadays, Unmanned Aerial System (UAS)-based photogrammetry offers an affordable, fast and effective approach to real-time acquisition of high-resolution geospatial information and automatic 3D modelling of objects for numerous applications such as topographic mapping, 3D city modelling, orthophoto generation, and cultural heritage preservation. In this paper, the capability of four different state-of-the-art software packages, namely 3DSurvey, Agisoft Photoscan, Pix4Dmapper Pro and SURE, is examined to generate high-density point clouds as well as a Digital Surface Model (DSM) over a historical site. The main steps of this study are image acquisition, point cloud generation, and accuracy assessment. The overlapping images are first captured using a quadcopter and are then processed by the different software packages to generate point clouds and DSMs. In order to evaluate the accuracy and quality of the point clouds and DSMs, both visual and geometric assessments are carried out and the comparison results are reported.

  18. Multiview point clouds denoising based on interference elimination

    NASA Astrophysics Data System (ADS)

    Hu, Yang; Wu, Qian; Wang, Le; Jiang, Huanyu

    2018-03-01

    Newly emerging low-cost depth sensors offer huge potential for three-dimensional (3-D) modeling, but their high noise prevents them from obtaining accurate results. Thus, we propose a method for denoising registered multiview point clouds with high noise. The proposed method aims to fully use redundant information to eliminate the interference among point clouds of different views in an iterative procedure. In each iteration, noisy points are either deleted or moved to their weighted average targets according to two cases. Simulated data and practical data captured by a Kinect v2 sensor were tested in qualitative and quantitative experiments. Results showed that the proposed method can effectively reduce noise and recover local features from highly noisy multiview point clouds with good robustness, compared to the truncated signed distance function and moving least squares (MLS). Moreover, the resulting low-noise point clouds can be further smoothed by MLS to achieve improved results. This study demonstrates the feasibility of obtaining fine 3-D models with high-noise devices, especially depth sensors such as the Kinect.

  19. Feature-based three-dimensional registration for repetitive geometry in machine vision

    PubMed Central

    Gong, Yuanzheng; Seibel, Eric J.

    2016-01-01

    As an important step in three-dimensional (3D) machine vision, 3D registration is a process of aligning two or multiple 3D point clouds that are collected from different perspectives together into a complete one. The most popular approach to register point clouds is to minimize the difference between these point clouds iteratively by Iterative Closest Point (ICP) algorithm. However, ICP does not work well for repetitive geometries. To solve this problem, a feature-based 3D registration algorithm is proposed to align the point clouds that are generated by vision-based 3D reconstruction. By utilizing texture information of the object and the robustness of image features, 3D correspondences can be retrieved so that the 3D registration of two point clouds is to solve a rigid transformation. The comparison of our method and different ICP algorithms demonstrates that our proposed algorithm is more accurate, efficient and robust for repetitive geometry registration. Moreover, this method can also be used to solve high depth uncertainty problem caused by little camera baseline in vision-based 3D reconstruction. PMID:28286703
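    Once 3D correspondences are retrieved, the rigid transformation can be solved in closed form with the Kabsch/Procrustes method via SVD, which is the standard approach for this step (the snippet below is a generic sketch of that method, not the authors' code):

```python
import numpy as np

def rigid_transform(src, dst):
    """Kabsch/Procrustes: least-squares rotation R and translation t
    mapping corresponding points src -> dst (R @ p + t ~= q)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Recover a 90-degree rotation about z plus a translation.
src = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
dst = src @ Rz.T + np.array([2.0, 3.0, 4.0])
R, t = rigid_transform(src, dst)
print(np.allclose(R, Rz), np.allclose(t, [2, 3, 4]))  # True True
```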

  20. An efficient global energy optimization approach for robust 3D plane segmentation of point clouds

    NASA Astrophysics Data System (ADS)

    Dong, Zhen; Yang, Bisheng; Hu, Pingbo; Scherer, Sebastian

    2018-03-01

    Automatic 3D plane segmentation is necessary for many applications including point cloud registration, building information model (BIM) reconstruction, simultaneous localization and mapping (SLAM), and point cloud compression. However, most of the existing 3D plane segmentation methods still suffer from low precision and recall, and inaccurate and incomplete boundaries, especially for low-quality point clouds collected by RGB-D sensors. To overcome these challenges, this paper formulates the plane segmentation problem as a global energy optimization because it is robust to high levels of noise and clutter. First, the proposed method divides the raw point cloud into multiscale supervoxels, and considers planar supervoxels and individual points corresponding to nonplanar supervoxels as basic units. Then, an efficient hybrid region growing algorithm is utilized to generate initial plane set by incrementally merging adjacent basic units with similar features. Next, the initial plane set is further enriched and refined in a mutually reinforcing manner under the framework of global energy optimization. Finally, the performances of the proposed method are evaluated with respect to six metrics (i.e., plane precision, plane recall, under-segmentation rate, over-segmentation rate, boundary precision, and boundary recall) on two benchmark datasets. Comprehensive experiments demonstrate that the proposed method obtained good performances both in high-quality TLS point clouds (i.e., http://SEMANTIC3D.NET)

  1. Indoor Modelling from Slam-Based Laser Scanner: Door Detection to Envelope Reconstruction

    NASA Astrophysics Data System (ADS)

    Díaz-Vilariño, L.; Verbree, E.; Zlatanova, S.; Diakité, A.

    2017-09-01

    Updated and detailed indoor models are increasingly demanded for various applications such as emergency management or navigational assistance. The consolidation of new portable and mobile acquisition systems has led to a higher availability of 3D point cloud data from indoors. In this work, we explore the combined use of point clouds and trajectories from a SLAM-based laser scanner to automate the reconstruction of building interiors. The methodology starts with door detection, since doors represent transitions from one indoor space to another, which provides an initial notion of the global configuration of the point cloud into building rooms. For this purpose, the trajectory is used to create a vertical point cloud profile in which doors are detected as local minima of vertical distances. As the point cloud and trajectory are related by time stamp, this feature is used to subdivide the point cloud into subspaces according to the location of the doors. The correspondence between subspaces and building rooms is not unambiguous: one subspace always corresponds to one room, but one room is not necessarily depicted by just one subspace, for example in the case of a room containing several doors in which the acquisition is performed in a discontinuous way. The labelling problem is formulated as a combinatorial approach solved as a minimum energy optimization. Once the point cloud is subdivided into building rooms, the envelope (formed by walls, ceilings and floors) is reconstructed for each space. The connectivity between spaces is included by adding the previously detected doors to the reconstructed model. The methodology is tested on a real case study.
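    Detecting doors as local minima of vertical distances along the trajectory can be sketched as follows; the profile values and the minimum-drop threshold are illustrative assumptions, not figures from the paper:

```python
def detect_doors(ceiling_heights, min_drop=0.3):
    """Flag trajectory samples whose vertical clearance is a local
    minimum and clearly lower than both neighbours -- the signature
    of passing under a door frame in a vertical point cloud profile."""
    doors = []
    for i in range(1, len(ceiling_heights) - 1):
        h = ceiling_heights[i]
        if h < ceiling_heights[i - 1] - min_drop and h < ceiling_heights[i + 1] - min_drop:
            doors.append(i)
    return doors

# Two rooms (2.8 m ceilings) joined by a 2.0 m door opening at sample 3.
profile = [2.8, 2.8, 2.8, 2.0, 2.8, 2.8]
print(detect_doors(profile))  # [3]
```

Because each profile sample carries a time stamp, a detected minimum splits the point cloud into the subspaces acquired before and after the door crossing.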

  2. a Point Cloud Classification Approach Based on Vertical Structures of Ground Objects

    NASA Astrophysics Data System (ADS)

    Zhao, Y.; Hu, Q.; Hu, W.

    2018-04-01

    This paper proposes a novel method for point cloud classification using the vertical structural characteristics of ground objects. Since urbanization is developing rapidly nowadays, urban ground objects also change frequently. Conventional photogrammetric methods cannot satisfy the requirements of updating ground object information efficiently, so LiDAR (Light Detection and Ranging) technology is employed to accomplish this task. LiDAR data, namely point cloud data, provide detailed three-dimensional coordinates of ground objects, but this kind of data is discrete and unorganized. To accomplish ground object classification with point clouds, we first construct horizontal grids and vertical layers to organize the point cloud data, then calculate vertical characteristics, including density and measures of dispersion, and form a characteristic curve for each grid. With the help of PCA processing and the K-means algorithm, we analyze the similarities and differences of the characteristic curves. Curves that have similar features are classified into the same class, and the point clouds corresponding to these curves are classified as well. The whole process is simple but effective, and this approach does not need the assistance of other data sources. In this study, point cloud data are classified into three classes: vegetation, buildings, and roads. When the horizontal grid spacing and vertical layer spacing are 3 m and 1 m respectively, the vertical characteristic is set as density, and the number of dimensions after PCA processing is 11, the overall precision of the classification result is about 86.31 %. The result can help us quickly understand the distribution of various ground objects.

  3. Integrated Change Detection and Classification in Urban Areas Based on Airborne Laser Scanning Point Clouds.

    PubMed

    Tran, Thi Huong Giang; Ressl, Camillo; Pfeifer, Norbert

    2018-02-03

    This paper suggests a new approach for change detection (CD) in 3D point clouds. It combines classification and CD in one step using machine learning. The point cloud data of both epochs are merged to compute features of four types: features describing the point distribution, a feature relating to relative terrain elevation, features specific to the multi-target capability of laser scanning, and features combining the point clouds of both epochs to identify changes. All these features are attached to the points, and training samples are then acquired to create the model for supervised classification, which is applied to the whole study area. The final results reach an overall accuracy of over 90% for both epochs across eight classes: lost tree, new tree, lost building, new building, changed ground, unchanged building, unchanged tree, and unchanged ground.

  4. A curvature-based weighted fuzzy c-means algorithm for point clouds de-noising

    NASA Astrophysics Data System (ADS)

    Cui, Xin; Li, Shipeng; Yan, Xiutian; He, Xinhua

    2018-04-01

    In order to remove the noise from three-dimensional scattered point clouds and smooth the data without damaging sharp geometric features, a novel algorithm is proposed in this paper. A feature-preserving weight is added to the fuzzy c-means algorithm, yielding a curvature-weighted fuzzy c-means clustering algorithm. Firstly, large-scale outliers are removed using statistics of the points within a radius r neighborhood. Then, the algorithm estimates the curvature of the point cloud data using a conicoid (paraboloid) fitting method and calculates the curvature feature value. Finally, the proposed clustering algorithm is applied to calculate the weighted cluster centers, which are regarded as the new points. The experimental results show that this approach efficiently handles noise of different scales and intensities in point clouds with high precision while preserving features, and that it is robust to different noise models.
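    The first stage, removing large-scale outliers from statistics of the r-radius neighborhood, can be sketched with a simple neighbor count (the thresholds are illustrative; the curvature-weighted clustering itself is not reproduced here):

```python
import math

def remove_outliers(points, r=1.0, min_neighbors=2):
    """Discard large-scale outliers: points with fewer than
    min_neighbors other points within radius r (brute force)."""
    kept = []
    for p in points:
        n = sum(1 for q in points if q is not p and math.dist(p, q) <= r)
        if n >= min_neighbors:
            kept.append(p)
    return kept

cloud = [(0, 0, 0), (0.3, 0, 0), (0, 0.4, 0), (10, 10, 10)]
print(remove_outliers(cloud))  # the isolated point at (10,10,10) is dropped
```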

  5. Evaluation of Methods for Coregistration and Fusion of Rpas-Based 3d Point Clouds and Thermal Infrared Images

    NASA Astrophysics Data System (ADS)

    Hoegner, L.; Tuttas, S.; Xu, Y.; Eder, K.; Stilla, U.

    2016-06-01

    This paper discusses the automatic coregistration and fusion of 3d point clouds generated from aerial image sequences and corresponding thermal infrared (TIR) images. Both RGB and TIR images have been taken from a RPAS platform with a predefined flight path where every RGB image has a corresponding TIR image taken from the same position and with the same orientation with respect to the accuracy of the RPAS system and the inertial measurement unit. To remove remaining differences in the exterior orientation, different strategies for coregistering RGB and TIR images are discussed: (i) coregistration based on 2D line segments for every single TIR image and the corresponding RGB image. This method implies a mainly planar scene to avoid mismatches; (ii) coregistration of both the dense 3D point clouds from RGB images and from TIR images by coregistering 2D image projections of both point clouds; (iii) coregistration based on 2D line segments in every single TIR image and 3D line segments extracted from intersections of planes fitted in the segmented dense 3D point cloud; (iv) coregistration of both the dense 3D point clouds from RGB images and from TIR images using both ICP and an adapted version based on corresponding segmented planes; (v) coregistration of both image sets based on point features. The quality is measured by comparing the differences of the back projection of homologous points in both corrected RGB and TIR images.

  6. Geospatial Field Methods: An Undergraduate Course Built Around Point Cloud Construction and Analysis to Promote Spatial Learning and Use of Emerging Technology in Geoscience

    NASA Astrophysics Data System (ADS)

    Bunds, M. P.

    2017-12-01

    Point clouds are a powerful data source in the geosciences, and the emergence of structure-from-motion (SfM) photogrammetric techniques has allowed them to be generated quickly and inexpensively. Consequently, applications of them as well as methods to generate, manipulate, and analyze them warrant inclusion in undergraduate curriculum. In a new course called Geospatial Field Methods at Utah Valley University, students in small groups use SfM to generate a point cloud from imagery collected with a small unmanned aerial system (sUAS) and use it as a primary data source for a research project. Before creating their point clouds, students develop needed technical skills in laboratory and class activities. The students then apply the skills to construct the point clouds, and the research projects and point cloud construction serve as a central theme for the class. Intended student outcomes for the class include: technical skills related to acquiring, processing, and analyzing geospatial data; improved ability to carry out a research project; and increased knowledge related to their specific project. To construct the point clouds, students first plan their field work by outlining the field site, identifying locations for ground control points (GCPs), and loading them onto a handheld GPS for use in the field. They also estimate sUAS flight elevation, speed, and the flight path grid spacing required to produce a point cloud with the resolution required for their project goals. In the field, the students place the GCPs using handheld GPS, and survey the GCP locations using post-processed-kinematic (PPK) or real-time-kinematic (RTK) methods. The students pilot the sUAS and operate its camera according to the parameters that they estimated in planning their field work. Data processing includes obtaining accurate locations for the PPK/RTK base station and GCPs, and SfM processing with Agisoft Photoscan. 
The resulting point clouds are rasterized into digital surface models, assessed for accuracy, and analyzed in Geographic Information System software. Student projects have included mapping and analyzing landslide morphology, fault scarps, and earthquake ground surface rupture. Students have praised the geospatial skills they learn, whereas helping them stay on schedule to finish their projects is a challenge.
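    The flight-planning estimate mentioned above typically rests on the standard ground sampling distance relation GSD = pixel_size × H / f, solved for the flying height H. A sketch with hypothetical camera parameters (not the course's actual equipment):

```python
def flight_altitude_for_gsd(gsd_m, focal_mm, pixel_size_um):
    """Standard photogrammetric relation GSD = pixel_size * H / f,
    solved for the flying height H (all quantities converted to metres)."""
    return gsd_m * (focal_mm * 1e-3) / (pixel_size_um * 1e-6)

# Hypothetical camera: 8.8 mm focal length, 2.4 um pixels, 2 cm/px target GSD.
print(round(flight_altitude_for_gsd(0.02, 8.8, 2.4), 1))  # ~73.3 m
```

Students would pair this with overlap requirements to derive flight speed and grid spacing for the sUAS mission.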

  7. Neutral D→KK^{*} Decays as Discovery Channels for Charm CP Violation.

    PubMed

    Nierste, Ulrich; Schacht, Stefan

    2017-12-22

    We point out that the CP asymmetries in the decays D^{0}→K_{S}K^{*0} and D^{0}→K_{S}K̄^{*0} are potential discovery channels for charm CP violation in the standard model. We stress that no flavor tagging is necessary: the untagged CP asymmetry a_{CP}^{dir}(D/D̄→K_{S}K^{*0}) is essentially equal to the tagged one, so that the untagged measurement comes with a significant statistical gain. Depending on the relevant strong phase, |a_{CP}^{dir,untag}| can be as large as 0.003. The CP asymmetry is dominantly generated by exchange diagrams and does not require nonvanishing penguin amplitudes. While the CP asymmetry is smaller than in the case of D^{0}→K_{S}K_{S}, the experimental analysis is more efficient due to the prompt decay K^{*0}→K^{+}π^{-}. One may further search for favorable strong phases in the Dalitz plot in the vicinity of the K^{*0} peak.
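    For reference, the direct CP asymmetry quoted above follows the textbook definition, comparing partial widths of a decay and its CP conjugate (stated here generically, not taken verbatim from the paper):

```latex
a_{CP}^{\mathrm{dir}}(D \to f) \;=\;
\frac{\Gamma(D^{0} \to f) \,-\, \Gamma(\bar{D}^{0} \to \bar{f})}
     {\Gamma(D^{0} \to f) \,+\, \Gamma(\bar{D}^{0} \to \bar{f})}
```

A nonzero value requires two interfering amplitudes with both different weak (CP-violating) and different strong (CP-conserving) phases.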

  8. Building a LiDAR point cloud simulator: Testing algorithms for high resolution topographic change

    NASA Astrophysics Data System (ADS)

    Carrea, Dario; Abellán, Antonio; Derron, Marc-Henri; Jaboyedoff, Michel

    2014-05-01

    Terrestrial laser scanning (TLS) is becoming a common tool in the geosciences, with applications ranging from the generation of high-resolution 3D models to the monitoring of unstable slopes and the quantification of morphological changes. Nevertheless, like every measurement technique, TLS still has some limitations that are not clearly understood and that affect the accuracy of the dataset (point cloud). A challenge in LiDAR research is to understand the influence of instrumental parameters on measurement errors during LiDAR acquisition. Indeed, several critical parameters affect scan quality at different ranges: the existence of shadow areas, the spatial resolution (point density), the diameter of the laser beam, the incidence angle and the single-point accuracy. The objective of this study is to test the main limitations of different algorithms usually applied to point cloud data treatment, from alignment to monitoring. To this end, we built, in the MATLAB environment, a LiDAR point cloud simulator able to recreate the multiple sources of error related to instrumental settings that we normally observe in real datasets. In a first step, we characterized the error of a single laser pulse by modelling the influence of range and incidence angle on single-point accuracy. In a second step, we simulated the scanning part of the system in order to analyze the effects of shifting and angular errors. Other parameters were added to the point cloud simulator, such as point spacing and acquisition window, in order to create point clouds of simple and/or complex geometries. We tested the influence of point density and viewpoint on Iterative Closest Point (ICP) alignment, and on deformation-tracking algorithms with the same point cloud geometry, in order to determine alignment and deformation-detection thresholds.
    We also generated a series of high-resolution point clouds in order to model small changes in different environments (erosion, landslide monitoring, etc.), and we then tested the use of filtering techniques based on 3D moving windows along space and time, which considerably reduce data scattering thanks to data redundancy. In conclusion, the simulator allowed us to improve our different algorithms and to understand how instrumental error affects the final results. It also helped improve the scan acquisition methodology, finding the best compromise between point density, positioning and acquisition time while keeping the best possible accuracy for characterizing topographic change.

  9. Motor learning characterizes habilitation of children with hemiplegic cerebral palsy.

    PubMed

    Krebs, Hermano I; Fasoli, Susan E; Dipietro, Laura; Fragala-Pinkham, Maria; Hughes, Richard; Stein, Joel; Hogan, Neville

    2012-09-01

    This study tested whether motor habilitation in children with cerebral palsy (CP) resembles motor learning. Twelve children with hemiplegic CP, ages 5 to 12 years, with moderate to severe motor impairments underwent a 16-session robot-mediated planar therapy program to improve upper limb reach, with a focus on shoulder and elbow movements. Participants were trained to execute point-to-point movements (with robot assistance) with the affected arm and were evaluated (without robot assistance) in trained (point-to-point) and untrained (circle-drawing) conditions. Outcomes were measured at baseline, midpoint, immediately after the program, and 1 month postcompletion. Outcome measures were the Fugl-Meyer (FM), Quality of Upper Extremity Skills Test (QUEST), and Modified Ashworth Scale (MAS) scores; a parent questionnaire; and robot-based kinematic metrics. To assess whether learning best characterizes motor habilitation in CP, the authors quantified (a) improvement on trained tasks at completion of training (acquisition) and 1 month following completion (retention) and (b) generalization of improvement to untrained tasks. After robotic intervention, the authors found significant gains in the FM, QUEST, and parent questionnaire. Robot-based evaluations demonstrated significant improvement in trained movements that was sustained at follow-up. Furthermore, children improved their performance in untrained movements, indicating generalization. Motor habilitation in CP exhibits some traits of motor learning. Optimal treatment may not require an extensive repertoire of tasks but rather a select set to promote generalization.

  10. On the strong-CP problem and its axion solution in torsionful theories

    NASA Astrophysics Data System (ADS)

    Karananas, Georgios K.

    2018-06-01

    Gravitational effects may interfere with the axion solution to the strong-CP problem. We point out that gravity can potentially provide a protection mechanism against itself, in the form of an additional axion-like field associated with torsion.

  11. Mapping Urban Tree Canopy Cover Using Fused Airborne LIDAR and Satellite Imagery Data

    NASA Astrophysics Data System (ADS)

    Parmehr, Ebadat G.; Amati, Marco; Fraser, Clive S.

    2016-06-01

    Urban green spaces, particularly urban trees, play a key role in enhancing the liveability of cities. The availability of accurate and up-to-date maps of tree canopy cover is important for sustainable development of urban green spaces. LiDAR point clouds are widely used for the mapping of buildings and trees, and several LiDAR point cloud classification techniques have been proposed for automatic mapping. However, the effectiveness of point cloud classification techniques for automated tree extraction from LiDAR data can be impacted to the point of failure by the complexity of tree canopy shapes in urban areas. Multispectral imagery, which provides complementary information to LiDAR data, can improve point cloud classification quality. This paper proposes a reliable method for the extraction of tree canopy cover from fused LiDAR point cloud and multispectral satellite imagery data. The proposed method initially associates each LiDAR point with spectral information from the co-registered satellite imagery data. It calculates the normalised difference vegetation index (NDVI) value for each LiDAR point and corrects tree points which have been misclassified as buildings. Then, region growing of tree points, taking the NDVI value into account, is applied. Finally, the LiDAR points classified as tree points are utilised to generate a canopy cover map. The performance of the proposed tree canopy cover mapping method is experimentally evaluated on a data set of airborne LiDAR and WorldView 2 imagery covering a suburb in Melbourne, Australia.
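The NDVI-based correction step described in this record can be sketched as follows; this is a minimal illustration assuming per-point red and near-infrared values have already been sampled from the co-registered imagery (the function name, labels, and threshold are hypothetical, not the authors' implementation):

```python
import numpy as np

def refine_tree_points(class_labels, red, nir, ndvi_threshold=0.3):
    """Reclassify building-labelled points with high NDVI as trees.

    class_labels: per-point 'building' / 'tree' / 'ground' strings (hypothetical
    labels from an initial point cloud classification).
    red, nir: per-point reflectance sampled from co-registered imagery.
    """
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    # NDVI = (NIR - Red) / (NIR + Red); small epsilon avoids divide-by-zero.
    ndvi = (nir - red) / (nir + red + 1e-9)
    labels = np.asarray(class_labels).copy()
    # Vegetation responds strongly in NIR: high-NDVI "building" points become "tree".
    labels[(labels == "building") & (ndvi > ndvi_threshold)] = "tree"
    return labels, ndvi
```

The corrected tree points would then seed the NDVI-aware region growing described in the abstract.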

  12. Expression of SRY-related HMG Box Transcription Factors (Sox) 2 and 9 in Craniopharyngioma Subtypes and Surrounding Brain Tissue.

    PubMed

    Thimsen, Vivian; John, Nora; Buchfelder, Michael; Flitsch, Jörg; Fahlbusch, Rudolf; Stefanits, Harald; Knosp, Engelbert; Losa, Marco; Buslei, Rolf; Hölsken, Annett

    2017-11-20

    Stem cells have been identified as key players in the genesis of different neoplasms, including craniopharyngioma (CP), a rare tumour entity in the sellar region. Sox2 and Sox9 are well-known stem cell markers involved in pituitary development. In this study we analysed the expression of both transcription factors using immunohistochemistry in a large cohort of 64 adamantinomatous (aCP) and 9 papillary CP (pCP), and quantitative PCR in 26 aCP and 7 pCP. Whereas immunohistochemically Sox2+ cells were verifiable in only five aCP (7.8%) and in 39.1% of the respective surrounding cerebral tissue, pCP specimens were always negative. In contrast, Sox9 was detectable in all tumours, with a significantly higher expression in aCP compared to pCP (protein, p < 0.0001; mRNA, p = 0.0484). This was also true for the respective tumour-adjacent CNS, where 63 aCP (98.4%) and six pCP (66.7%) showed Sox9+ cells. We further confirmed the absence of Sox9 expression in nuclear β-catenin-accumulating cells of aCP. Our results point to the conclusion that Sox2 and Sox9 seem to play essential roles not only in the specific formation of aCP, but also in processes involving the cerebral tumour environment, which need to be illuminated in the future.

  13. Registration of Vehicle-Borne Point Clouds and Panoramic Images Based on Sensor Constellations.

    PubMed

    Yao, Lianbi; Wu, Hangbin; Li, Yayun; Meng, Bin; Qian, Jinfei; Liu, Chun; Fan, Hongchao

    2017-04-11

    A mobile mapping system (MMS) is usually utilized to collect environmental data on and around urban roads. Laser scanners and panoramic cameras are the main sensors of an MMS. This paper presents a new method for the registration of point clouds and panoramic images based on the sensor constellation. After the sensor constellation was analyzed, a feature point, the intersection of the line connecting the global positioning system (GPS) antenna and the panoramic camera with a horizontal plane, was utilized to separate the point clouds into blocks. The blocks for the central and sideward laser scanners were extracted with the segmentation feature points. Then, the point clouds located in the blocks were separated from the original point clouds. Each point in the blocks was used to find the corresponding pixel in the relevant panoramic images via the collinearity condition and the position and orientation relationships amongst the different sensors. A search strategy is proposed for the correspondence of laser scanners and lenses of panoramic cameras to reduce calculation complexity and improve efficiency. Four cases of different urban road types were selected to verify the efficiency and accuracy of the proposed method. Results indicate that most of the point clouds (on average 99.7%) were successfully registered with the panoramic images with great efficiency. Geometric evaluation indicates that horizontal accuracy was approximately 0.10-0.20 m, and vertical accuracy was approximately 0.01-0.02 m, for all cases. Finally, the main factors that affect registration accuracy, including time synchronization amongst the different sensors, system positioning and vehicle speed, are discussed.
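The core point-to-pixel step can be illustrated with an equirectangular panorama model. This is a hedged sketch of the general idea only, not the authors' exact collinearity formulation or their sensor-constellation bookkeeping; the function name and frame conventions are assumptions:

```python
import numpy as np

def point_to_panorama_pixel(point, cam_pos, R, width, height):
    """Project a 3D point onto an equirectangular panoramic image.

    point, cam_pos: 3-vectors in the mapping frame; R: 3x3 rotation from the
    mapping frame into the camera frame (hypothetical inputs for this sketch).
    Returns (col, row) pixel coordinates.
    """
    v = R @ (np.asarray(point, float) - np.asarray(cam_pos, float))
    x, y, z = v
    lon = np.arctan2(y, x)               # azimuth in (-pi, pi]
    lat = np.arctan2(z, np.hypot(x, y))  # elevation in (-pi/2, pi/2)
    col = (lon / (2 * np.pi) + 0.5) * width
    row = (0.5 - lat / np.pi) * height
    return col, row
```

In a real MMS pipeline, R and cam_pos for each frame come from the GNSS/INS trajectory and the calibrated lever arms between sensors.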

  14. A polysaccharide fraction of adlay seed (Coix lachryma-jobi L.) induces apoptosis in human non-small cell lung cancer A549 cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Xiangyi; Liu, Wei; Wu, Junhua

    Highlights: • The molecular mass, optical rotation and sugar composition of a polysaccharide from adlay seed were determined. • We demonstrated that a polysaccharide from adlay can induce apoptosis in cancer cells. • The polysaccharide inhibited the metabolism and proliferation of NSCLC A549 cells. • The polysaccharide may trigger apoptosis via the mitochondria-dependent pathway. -- Abstract: Different seed extracts from Coix lachryma-jobi (adlay seed) have been used for the treatment of various cancers in China, and clinical data support the use of these extracts for cancer therapy; however, their underlying molecular mechanisms have not been well defined. A polysaccharide fraction, designated as CP-1, was extracted from C. lachryma-jobi L. var. using the ethanol subsiding method. CP-1 induced apoptosis in A549 cells in a dose-dependent manner, as determined by MTT assay. Apoptotic bodies were observed in the cells by scanning electron microscopy. Apoptosis and DNA accumulation during S-phase of the cell cycle were determined by annexin V-FITC and PI staining, respectively, and measured by flow cytometry. CP-1 also extended the comet tail length on single cell gel electrophoresis, and disrupted the mitochondrial membrane potential. Further analysis by western blotting showed that the expression of caspase-3 and caspase-9 proteins was increased. Taken together, our results demonstrate that CP-1 is capable of inhibiting A549 cell proliferation and inducing apoptosis via a mechanism primarily involving the activation of the intrinsic mitochondrial pathway. The assay data suggest that in addition to its nutritional properties, CP-1 is a very promising candidate polysaccharide for the development of anti-cancer medicines.

  15. Hypothermia and cerebral protection strategies in aortic arch surgery: a comparative effectiveness analysis from the STS Adult Cardiac Surgery Database.

    PubMed

    Englum, Brian R; He, Xia; Gulack, Brian C; Ganapathi, Asvin M; Mathew, Joseph P; Brennan, J Matthew; Reece, T Brett; Keeling, W Brent; Leshnower, Bradley G; Chen, Edward P; Jacobs, Jeffrey P; Thourani, Vinod H; Hughes, G Chad

    2017-09-01

    Hypothermic circulatory arrest is essential to aortic arch surgery, although consensus regarding optimal cerebral protection strategy remains lacking. We evaluated the current use and comparative effectiveness of hypothermia/cerebral perfusion (CP) strategies in aortic arch surgery. Using the Society of Thoracic Surgeons Database, cases of aortic arch surgery with hypothermic circulatory arrest from 2011 to 2014 were categorized by hypothermia strategy-deep/profound (D/P; ≤20°C), low-moderate (L-M; 20.1-24°C), and high-moderate (H-M; 24.1-28°C)-and CP strategy-no CP, antegrade (ACP), retrograde (RCP) or both ACP/RCP. After adjusting for potential confounders, strategies were compared by composite end-point (operative mortality or neurologic complication). Of the 12 521 aortic arch repairs with hypothermic circulatory arrest, the most common combined strategies were straight D/P without CP (25%), D/P + RCP (16%) and D/P + ACP (14%). Overall rates of the primary end-point, operative mortality and stroke were 23%, 12% and 8%, respectively. Among the 7 most common strategies, the 2 not utilizing CP (straight D/P and straight L-M) appeared inferior, associated with significantly higher risk of the composite end-point (odds ratio: 1.6; P < 0.01); there was no significant difference in composite outcome between the remaining strategies (D/P + ACP, D/P + RCP, L-M + ACP, L-M + RCP and H-M + ACP). In a comparative effectiveness study of cerebral protection strategies for aortic arch repair, strategies without adjunctive CP, including the most commonly utilized strategy of straight D/P hypothermia, appeared inferior to those utilizing CP. There was no clearly superior strategy among remaining techniques, and randomized trials are needed to define best practice. © The Author 2017. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.

  16. Advanced Visualization and Interactive Display Rapid Innovation and Discovery Evaluation Research (VISRIDER) Program Task 6: Point Cloud Visualization Techniques for Desktop and Web Platforms

    DTIC Science & Technology

    2017-04-01

    Report period: October 2013 - September 2014. This task surveyed various point cloud visualization techniques for viewing large-scale LiDAR datasets and evaluated their potential use for thick-client desktop platforms.

  17. Inventory of File WAFS_blended_2014102006f06.grib2

    Science.gov Websites

    Excerpt from the GRIB2 inventory (In-Cloud Turbulence fields):
    004 700 mb CTP 6 hour fcst In-Cloud Turbulence [%] spatial ave, code table 4.15=3, #points=1
    005 700 mb CTP 6 hour fcst In-Cloud Turbulence [%] spatial max, code table 4.15=3, #points=1
    006 600 mb CTP 6 hour fcst In-Cloud Turbulence [%] spatial ave, code table 4.15=3, #points=1
    007 600 mb CTP 6 hour fcst In-Cloud Turbulence [%] ...

  18. Observations of the boundary layer, cloud, and aerosol variability in the southeast Pacific near-coastal marine stratocumulus during VOCALS-REx

    NASA Astrophysics Data System (ADS)

    Zheng, X.; Albrecht, B.; Jonsson, H. H.; Khelif, D.; Feingold, G.; Minnis, P.; Ayers, K.; Chuang, P.; Donaher, S.; Rossiter, D.; Ghate, V.; Ruiz-Plancarte, J.; Sun-Mack, S.

    2011-09-01

    Aircraft observations made off the coast of northern Chile in the Southeastern Pacific (20° S, 72° W; named Point Alpha) from 16 October to 13 November 2008 during the VAMOS Ocean-Cloud-Atmosphere-Land Study-Regional Experiment (VOCALS-REx), combined with meteorological reanalysis, satellite measurements, and radiosonde data, are used to investigate the boundary layer (BL) and aerosol-cloud-drizzle variations in this region. On days without predominant synoptic and mesoscale influences, the BL at Point Alpha was typical of a non-drizzling stratocumulus-topped BL. Entrainment rates calculated from the near-cloud-top fluxes and turbulence in the BL at Point Alpha appeared to be weaker than those in the BL over the open ocean west of Point Alpha and in the BL near the coast of the northeast Pacific. The cloud liquid water path (LWP) varied between 15 g m-2 and 160 g m-2. On days without predominant synoptic and mesoscale influences, the BL had a depth of 1140 ± 120 m, was generally well-mixed, and was capped by a sharp inversion. The wind direction generally switched from southerly within the BL to northerly above the inversion. On days when a synoptic system and related mesoscale coastal circulations affected conditions at Point Alpha (29 October-4 November), a moist layer above the inversion moved over Point Alpha, and the total-water mixing ratio above the inversion was larger than that within the BL. The accumulation-mode aerosol varied from 250 to 700 cm-3 within the BL, and CCN at 0.2% supersaturation within the BL ranged between 150 and 550 cm-3. The main aerosol source at Point Alpha was horizontal advection within the BL from the south. The average cloud droplet number concentration ranged between 80 and 400 cm-3. While the mean LWP retrieved from GOES was in good agreement with the in situ measurements, the GOES-derived cloud droplet effective radius tended to be larger than that from the aircraft in situ observations near cloud top. 
The aerosol and cloud LWP relationship reveals that during the typical well-mixed BL days the cloud LWP increased with the CCN concentrations. On the other hand, meteorological factors and the decoupling processes have large influences on the cloud LWP variation as well.

  19. Impact of survey workflow on precision and accuracy of terrestrial LiDAR datasets

    NASA Astrophysics Data System (ADS)

    Gold, P. O.; Cowgill, E.; Kreylos, O.

    2009-12-01

    Ground-based LiDAR (Light Detection and Ranging) survey techniques are enabling remote visualization and quantitative analysis of geologic features at unprecedented levels of detail. For example, digital terrain models computed from LiDAR data have been used to measure displaced landforms along active faults and to quantify fault-surface roughness. But how accurately do terrestrial LiDAR data represent the true ground surface, and in particular, how internally consistent and precise are the mosaiced LiDAR datasets from which surface models are constructed? Addressing this question is essential for designing survey workflows that capture the necessary level of accuracy for a given project while minimizing survey time and equipment, which is essential for effective surveying of remote sites. To address this problem, we seek to define a metric that quantifies how scan registration error changes as a function of survey workflow. Specifically, we are using a Trimble GX3D laser scanner to conduct a series of experimental surveys to quantify how common variables in field workflows impact the precision of scan registration. Primary variables we are testing include 1) use of an independently measured network of control points to locate scanner and target positions, 2) the number of known-point locations used to place the scanner and point clouds in 3-D space, 3) the type of target used to measure distances between the scanner and the known points, and 4) setting up the scanner over a known point as opposed to resectioning of known points. Precision of the registered point cloud is quantified using Trimble Realworks software by automatic calculation of registration errors (errors between locations of the same known points in different scans). Accuracy of the registered cloud (i.e., its ground-truth) will be measured in subsequent experiments. 
To obtain an independent measure of scan-registration errors and to better visualize the effects of these errors on a registered point cloud, we scan from multiple locations an object of known geometry (a cylinder mounted above a square box). Preliminary results show that even in a controlled experimental scan of an object of known dimensions, there is significant variability in the precision of the registered point cloud. For example, when 3 scans of the central object are registered using 4 known points (maximum time, maximum equipment), the point clouds align to within ~1 cm (normal to the object surface). However, when the same point clouds are registered with only 1 known point (minimum time, minimum equipment), misalignment of the point clouds can range from 2.5 to 5 cm, depending on target type. The greater misalignment of the 3 point clouds when registered with fewer known points stems from the field method employed in acquiring the dataset and demonstrates the impact of field workflow on LiDAR dataset precision. By quantifying the degree of scan mismatch in results such as this, we can provide users with the information needed to maximize efficiency in remote field surveys.
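Registering two scans on common control points, and measuring the residual misalignment that this record reports, can be illustrated with a standard best-fit rigid transform (the Kabsch algorithm). This is a generic sketch, not the Trimble RealWorks registration procedure; the function name is hypothetical:

```python
import numpy as np

def register_rigid(src, dst):
    """Best-fit rigid transform (Kabsch) mapping src control points onto dst.

    Returns the rotation R, translation t, and the RMS of the per-point
    residuals left after registration (the 'registration error').
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centred point sets.
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    # Sign correction keeps R a proper rotation (no reflection).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    residuals = np.linalg.norm((src @ R.T + t) - dst, axis=1)
    return R, t, float(np.sqrt(np.mean(residuals ** 2)))
```

With fewer control points the solution is constrained by fewer observations, which is consistent with the larger misalignments the authors observe when registering with a single known point.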

  20. Quality of life in Chronic Pancreatitis is determined by constant pain, disability/unemployment, current smoking and associated co-morbidities

    PubMed Central

    Machicado, Jorge D.; Amann, Stephen T; Anderson, Michelle A.; Abberbock, Judah; Sherman, Stuart; Conwell, Darwin; Cote, Gregory A.; Singh, Vikesh K.; Lewis, Michele; Alkaade, Samer; Sandhu, Bimaljit S.; Guda, Nalini M.; Muniraj, Thiruvengadam; Tang, Gong; Baillie, John; Brand, Randall; Gardner, Timothy B.; Gelrud, Andres; Forsmark, Christopher E.; Banks, Peter A.; Slivka, Adam; Wilcox, C. Mel; Whitcomb, David C.; Yadav, Dhiraj

    2018-01-01

    Background Chronic pancreatitis (CP) has a profound independent effect on quality of life (QOL). Our aim was to identify factors that impact the QOL in CP patients. Methods We used data on 1,024 CP patients enrolled in the three NAPS2 studies. Information on demographics, risk factors, co-morbidities, disease phenotype and treatments was obtained from responses to structured questionnaires. Physical (PCS) and mental (MCS) component summary scores generated using responses to the Short Form-12 (SF-12) survey were used to assess QOL at enrollment. Multivariable linear regression models determined independent predictors of QOL. Results Mean PCS and MCS scores were 36.7±11.7 and 42.4±12.2, respectively. Significant (p<0.05) negative impact on PCS scores in multivariable analyses was noted due to constant mild-moderate pain with episodes of severe pain or constant severe pain (10 points), constant mild-moderate pain (5.2 points), pain-related disability/unemployment (5.1 points), current smoking (2.9 points) and medical co-morbidities. Significant (p<0.05) negative impact on MCS scores was related to constant pain irrespective of severity (6.8-6.9 points), current smoking (3.9 points) and pain-related disability/unemployment (2.4 points). In women, disability/unemployment resulted in an additional 3.7 point reduction in MCS score. Final multivariable models explained 27% and 18% of the variance in PCS and MCS scores, respectively. Etiology, disease duration, pancreatic morphology, diabetes, exocrine insufficiency and prior endotherapy/pancreatic surgery had no significant independent effect on QOL. Conclusion Constant pain, pain-related disability/unemployment, current smoking, and concurrent co-morbidities significantly affect the QOL in CP. Further research is needed to identify factors impacting QOL not explained by our analyses. PMID:28244497

  1. Quality of Life in Chronic Pancreatitis is Determined by Constant Pain, Disability/Unemployment, Current Smoking, and Associated Co-Morbidities.

    PubMed

    Machicado, Jorge D; Amann, Stephen T; Anderson, Michelle A; Abberbock, Judah; Sherman, Stuart; Conwell, Darwin L; Cote, Gregory A; Singh, Vikesh K; Lewis, Michele D; Alkaade, Samer; Sandhu, Bimaljit S; Guda, Nalini M; Muniraj, Thiruvengadam; Tang, Gong; Baillie, John; Brand, Randall E; Gardner, Timothy B; Gelrud, Andres; Forsmark, Christopher E; Banks, Peter A; Slivka, Adam; Wilcox, C Mel; Whitcomb, David C; Yadav, Dhiraj

    2017-04-01

    Chronic pancreatitis (CP) has a profound independent effect on quality of life (QOL). Our aim was to identify factors that impact the QOL in CP patients. We used data on 1,024 CP patients enrolled in the three NAPS2 studies. Information on demographics, risk factors, co-morbidities, disease phenotype, and treatments was obtained from responses to structured questionnaires. Physical and mental component summary (PCS and MCS, respectively) scores generated using responses to the Short Form-12 (SF-12) survey were used to assess QOL at enrollment. Multivariable linear regression models determined independent predictors of QOL. Mean PCS and MCS scores were 36.7±11.7 and 42.4±12.2, respectively. Significant (P<0.05) negative impact on PCS scores in multivariable analyses was noted owing to constant mild-moderate pain with episodes of severe pain or constant severe pain (10 points), constant mild-moderate pain (5.2), pain-related disability/unemployment (5.1), current smoking (2.9 points), and medical co-morbidities. Significant (P<0.05) negative impact on MCS scores was related to constant pain irrespective of severity (6.8-6.9 points), current smoking (3.9 points), and pain-related disability/unemployment (2.4 points). In women, disability/unemployment resulted in an additional 3.7 point reduction in MCS score. Final multivariable models explained 27% and 18% of the variance in PCS and MCS scores, respectively. Etiology, disease duration, pancreatic morphology, diabetes, exocrine insufficiency, and prior endotherapy/pancreatic surgery had no significant independent effect on QOL. Constant pain, pain-related disability/unemployment, current smoking, and concurrent co-morbidities significantly affect the QOL in CP. Further research is needed to identify factors impacting QOL not explained by our analyses.

  2. Comparison of 3D point clouds produced by LIDAR and UAV photoscan in the Rochefort cave (Belgium)

    NASA Astrophysics Data System (ADS)

    Watlet, Arnaud; Triantafyllou, Antoine; Kaufmann, Olivier; Le Mouelic, Stéphane

    2016-04-01

    Amongst today's techniques able to produce 3D point clouds, LIDAR and UAV (Unmanned Aerial Vehicle) photogrammetry are probably the most commonly used. Both methods have their own advantages and limitations. LIDAR scans create high-resolution and high-precision 3D point clouds, but are generally costly, especially for sporadic surveys. Compared to LIDAR, UAVs (e.g. drones) are cheap and flexible to use in different kinds of environments. Moreover, the photogrammetric processing of digital images taken with UAVs has become easier with the rise of many affordable software packages (e.g. Agisoft, PhotoModeler3D, VisualSFM). We present here a challenging study made at the Rochefort Cave Laboratory (South Belgium) comprising surface and underground surveys. The site is located in the Belgian Variscan fold-and-thrust belt, a region that shows many karstic networks within Devonian limestone units. A LIDAR scan was acquired in the main chamber of the cave (~15000 m³) to produce a 3D point cloud of its inner walls and to infer geological beds and structures. Even though deploying the LIDAR instrument was not really comfortable in such a caving environment, the collected data showed remarkable precision when checked against a few control points. We also performed another challenging survey of the same cave chamber by building a 3D point cloud photogrammetrically from a set of DSLR camera pictures taken from the ground together with UAV pictures. The aim was to compare both techniques in terms of (i) implementation of data acquisition and processing, (ii) quality of the resulting 3D point clouds (point density, field vs. cloud recovery and point precision), and (iii) their application for geological purposes. Through the Rochefort case study, the main conclusions are that the LIDAR technique provides higher-density point clouds with slightly higher precision than the photogrammetry method. 
However, the 3D data modeled by photogrammetry provide visible-light spectral information for each modeled voxel and interpolated vertex, which can be a useful attribute for clustering during data treatment. We thus illustrate such applications to the Rochefort cave by using both sources of 3D information to quantify the orientation of inaccessible geological structures (e.g. faults, tectonic and gravitational joints, and sediment bedding), cluster these structures using colour information gathered from the UAV's 3D point cloud, and compare these data to structural data surveyed in the field. An additional drone photoscan was also conducted in the surface sinkhole giving access to the surveyed underground cavity, to seek connections between geological bodies.
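A common way to quantify the comparison between two such point clouds is the cloud-to-cloud nearest-neighbour distance. The record does not specify its comparison code, so the following is only a brute-force sketch suitable for small clouds; real multimillion-point surveys would use a KD-tree instead:

```python
import numpy as np

def cloud_to_cloud_distance(cloud_a, cloud_b):
    """For each point in cloud_a, distance to its nearest neighbour in cloud_b.

    Brute-force O(N*M) comparison, fine for a small illustrative cloud; a
    KD-tree (e.g. scipy.spatial.cKDTree) would be used for real LIDAR data.
    """
    a = np.asarray(cloud_a, float)
    b = np.asarray(cloud_b, float)
    # Pairwise squared distances via broadcasting, then min over cloud_b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=2)
    return np.sqrt(d2.min(axis=1))
```

Summary statistics of these distances (mean, RMS, percentiles) give the kind of precision comparison between the LIDAR and photogrammetric clouds discussed above.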

  3. Study into Point Cloud Geometric Rigidity and Accuracy of TLS-Based Identification of Geometric Bodies

    NASA Astrophysics Data System (ADS)

    Klapa, Przemyslaw; Mitka, Bartosz; Zygmunt, Mariusz

    2017-12-01

    The capability of obtaining a multimillion-point cloud in a very short time has made Terrestrial Laser Scanning (TLS) a widely used tool in many fields of science and technology. TLS accuracy matches that of traditional land surveying devices (tacheometry, GNSS-RTK), but like any measurement it is burdened with error, which affects the precise identification of objects based on their image in the form of a point cloud. A point's coordinates are determined indirectly, by measuring angles and calculating the time of travel of the electromagnetic wave. Each such component has a measurement error which is translated into the final result. The XYZ coordinates of a measured point are therefore determined with some uncertainty, and the accuracy of these coordinates decreases as the distance to the instrument increases. The paper presents the results of an examination of the geometrical stability of a point cloud obtained by means of a terrestrial laser scanner, and an accuracy evaluation of solids determined using the cloud. A Leica P40 scanner and two different arrangements of measuring points were used in the tests. The first concept involved placing a few balls in the field and then scanning them from various sides at similar distances. The second part of the measurement involved placing balls and scanning them a few times from one side, but at varying distances from the instrument to the object. Each measurement encompassed a scan of the object with automatic determination of its position and geometry. The desk studies involved a semiautomatic fitting of solids, measurement of their geometrical elements, and comparison of the parameters that determine their geometry and location in space. The differences in the measures of the geometrical elements of the balls and in the translation vectors of the solids' centres indicate geometrical changes of the point cloud depending on the scanning distance and parameters. 
The results indicate changes in the geometry of scanned objects depending on the point cloud quality and the distance from the measuring instrument. Varying geometrical dimensions of the same element also suggest that the point cloud does not maintain a stable geometry of the measured objects.
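The semiautomatic fitting of balls to the point cloud can be illustrated with an algebraic least-squares sphere fit. This is a generic sketch of the technique, not the software actually used in the study; it solves the linearized sphere equation |p|² = 2 p·c + (r² - |c|²) for the centre c and radius r:

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit: returns (center, radius).

    Rearranging |p - c|^2 = r^2 gives the linear system
    |p|^2 = 2 p.c + (r^2 - |c|^2), solved here with lstsq.
    """
    p = np.asarray(points, float)
    A = np.column_stack([2 * p, np.ones(len(p))])  # unknowns: cx, cy, cz, k
    b = (p ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)     # k = r^2 - |c|^2
    return center, radius
```

Fitting the same ball scanned at several distances and comparing the recovered centres and radii gives exactly the kind of stability measure the paper reports.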

  4. The Impacts of Bias in Cloud-Radiation-Dynamics Interactions on Central Pacific Seasonal and El Niño Simulations in Contemporary GCMs

    NASA Astrophysics Data System (ADS)

    Li, J.-L. F.; Suhas, E.; Richardson, Mark; Lee, Wei-Liang; Wang, Yi-Hui; Yu, Jia-Yuh; Lee, Tong; Fetzer, Eric; Stephens, Graeme; Shen, Min-Hua

    2018-02-01

    Most of the global climate models (GCMs) in the Coupled Model Intercomparison Project, phase 5 do not include precipitating ice (aka falling snow) in their radiation calculations. We examine the importance of the radiative effects of precipitating ice on simulated surface wind stress and sea surface temperatures (SSTs) in terms of seasonal variation and in the evolution of central Pacific El Niño (CP-El Niño) events. Using controlled simulations with the CESM1 model, we show that the exclusion of precipitating ice radiative effects generates a persistent excessive upper-level radiative cooling and an increasingly unstable atmosphere over convective regions such as the western Pacific and tropical convergence zones. The invigorated convection leads to persistent anomalous low-level outflows which weaken the easterly trade winds, reducing upper-ocean mixing and leading to a positive SST bias in the model mean state. In CP-El Niño events, this means that outflow from the modeled convection in the central Pacific reduces winds to the east, allowing unrealistic eastward propagation of warm SST anomalies following the peak in CP-El Niño activity. Including the radiative effects of precipitating ice reduces these model biases and improves the simulated life cycle of the CP-El Niño. Improved simulations of present-day tropical seasonal variations and CP-El Niño events would increase the confidence in simulating their future behavior.

  5. Low-melting point inorganic nitrate salt heat transfer fluid

    DOEpatents

    Bradshaw, Robert W [Livermore, CA; Brosseau, Douglas A [Albuquerque, NM

    2009-09-15

    A low-melting-point heat transfer fluid made of a mixture of four inorganic nitrate salts: 9-18 wt% NaNO3, 40-52 wt% KNO3, 13-21 wt% LiNO3, and 20-27 wt% Ca(NO3)2. These compositions can have liquidus temperatures less than 100 °C, thermal stability limits greater than 500 °C, and viscosities in the range of 5-6 cP at 300 °C and 2-3 cP at 400 °C.

  6. Structure-from-Motion for Calibration of a Vehicle Camera System with Non-Overlapping Fields-of-View in an Urban Environment

    NASA Astrophysics Data System (ADS)

    Hanel, A.; Stilla, U.

    2017-05-01

    Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field-of-view between environment and interior cameras. Often no marked reference points are available in environments large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. A-priori images of an urban calibration environment taken with an external camera are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, also taken with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other with images from both image sets showing the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras in a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, as well as a point cloud of the interior of a Volkswagen test car. Videos of two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras. 
Relative distances between the vehicle cameras deviate between one and ten centimeters from tachymeter reference measurements.

  7. Comparison of 3D point clouds obtained by photogrammetric UAVs and TLS to determine the attitude of dolerite outcrops discontinuities.

    NASA Astrophysics Data System (ADS)

    Duarte, João; Gonçalves, Gil; Duarte, Diogo; Figueiredo, Fernando; Mira, Maria

    2015-04-01

Photogrammetric Unmanned Aerial Vehicles (UAVs) and Terrestrial Laser Scanners (TLS) are two emerging technologies that allow the production of dense 3D point clouds of the sensed topographic surfaces. Although image-based stereo-photogrammetric point clouds cannot, in general, compete in geometric quality with TLS point clouds, fully automated mapping solutions based on ultra-light UAVs (or drones) have recently become commercially available at very reasonable accuracy and cost for engineering and geological applications. The purpose of this paper is to compare the point clouds generated by these two technologies, in order to automate the manual tasks commonly used to detect and represent the attitude of discontinuities (stereographic projection: Schmidt net, equal area). To avoid the difficulties of access and to guarantee safe data survey conditions, this fundamental step in all geological/geotechnical studies, applied to the extractive industry and engineering works, has to be replaced by a more expeditious and reliable methodology. This methodology will allow, in a clearer way, answering the needs of rock mass evaluation by mapping the structures present, which will considerably reduce the associated risks (investment, structure dimensioning, security, etc.). A case study of a dolerite outcrop located in the center of Portugal (the dolerite outcrop is situated in the volcanic complex of Serra de Todo-o-Mundo, Casais Gaiola, intruded in Jurassic sandstones) is used to assess this methodology. The results obtained show that the 3D point cloud produced by the photogrammetric UAV platform has the appropriate geometric quality for extracting the parameters that define the discontinuities of the dolerite outcrops. Although these parameters are comparable to the manually extracted ones, their quality is inferior to that of parameters extracted from the TLS point cloud.

  8. Knee medial and lateral contact forces in a musculoskeletal model with subject-specific contact point trajectories.

    PubMed

    Zeighami, A; Aissaoui, R; Dumas, R

    2018-03-01

    Contact point (CP) trajectory is a crucial parameter in estimating medial/lateral tibio-femoral contact forces from the musculoskeletal (MSK) models. The objective of the present study was to develop a method to incorporate the subject-specific CP trajectories into the MSK model. Ten healthy subjects performed 45 s treadmill gait trials. The subject-specific CP trajectories were constructed on the tibia and femur as a function of extension-flexion using low-dose bi-plane X-ray images during a quasi-static squat. At each extension-flexion position, the tibia and femur CPs were superimposed in the three directions on the medial side, and in the anterior-posterior and proximal-distal directions on the lateral side to form the five kinematic constraints of the knee joint. The Lagrange multipliers associated to these constraints directly yielded the medial/lateral contact forces. The results from the personalized CP trajectory model were compared against the linear CP trajectory and sphere-on-plane CP trajectory models which were adapted from the commonly used MSK models. Changing the CP trajectory had a remarkable impact on the knee kinematics and changed the medial and lateral contact forces by 1.03 BW and 0.65 BW respectively, in certain subjects. The direction and magnitude of the medial/lateral contact force were highly variable among the subjects and the medial-lateral shift of the CPs alone could not determine the increase/decrease pattern of the contact forces. The suggested kinematic constraints are adaptable to the CP trajectories derived from a variety of joint models and those experimentally measured from the 3D imaging techniques. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. Transdermal permeation of WIN 55,212-2 and CP 55,940 in human skin in vitro.

    PubMed

    Valiveti, Satyanarayana; Kiptoo, Paul K; Hammell, Dana C; Stinchcomb, Audra L

    2004-06-18

Synthetic cannabinoids have a promising future as treatments for nausea, appetite modulation, pain, and many neurological disorders. Transdermal delivery is a convenient and desirable dosage form for these drugs and health conditions. The aim of the present study was to investigate the in vitro transdermal permeation of two synthetic cannabinoids, WIN 55,212-2 and CP 55,940. Transdermal flux, drug content in the skin, and lag times were measured in split-thickness human abdominal skin in flow-through diffusion cells with receiver solutions of 4% bovine serum albumin (BSA) or 0.5% Brij 98. Differential scanning calorimetry (DSC) was performed in order to determine heats of fusion, melting points, and relative thermodynamic activities. The in vitro diffusion studies in 0.5% Brij 98 indicated that WIN 55,212-2 diffuses across human skin faster than CP 55,940. The WIN 55,212-2 skin disposition concentration levels were also significantly higher than those of CP 55,940. Correspondingly, CP 55,940 was significantly metabolized in the skin. WIN 55,212-2 flux and skin disposition were significantly lower into 4% BSA than into 0.5% Brij 98 receiver solutions. There was no significant difference in the flux, lag time, and drug content in the skin of CP 55,940 in 4% BSA versus 0.5% Brij 98 receiver solutions. The DSC studies showed that CP 55,940 had a significantly lower melting point, smaller heat of fusion, and corresponding higher calculated thermodynamic activity than the more crystalline WIN 55,212-2 mesylate salt. The permeation results indicated that WIN 55,212-2 mesylate, CP 55,940, and other potent synthetic cannabinoids with these physicochemical properties could be ideal candidates for the development of a transdermal therapeutic system. Copyright 2004 Elsevier B.V.

  10. Cloud-point detection using a portable thickness shear mode crystal resonator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mansure, A.J.; Spates, J.J.; Germer, J.W.

    1997-08-01

The Thickness Shear Mode (TSM) crystal resonator monitors the crude oil by propagating a shear wave into the oil. The coupling of the shear wave and the crystal vibrations is a function of the viscosity of the oil. By driving the crystal with circuitry that incorporates feedback, it is possible to determine the change from Newtonian to non-Newtonian viscosity at the cloud point. A portable prototype TSM Cloud Point Detector (CPD) has performed flawlessly during field and lab tests, proving the technique is less subjective and operator dependent than the ASTM standard. The TSM CPD, in contrast to standard viscosity techniques, makes the measurement in a closed container capable of maintaining up to 100 psi. The closed container minimizes losses of low molecular weight volatiles, allowing samples (25 ml) to be retested with the addition of chemicals. By cycling/thermal soaking the sample, the effects of thermal history can be investigated and eliminated as a source of confusion. The CPD is portable, suitable for shipping to field offices for use by personnel without special training or experience in cloud point measurements. As such, it can make cloud point data available without the delays and inconvenience of sending samples to special labs. The crystal resonator technology can be adapted to in-line monitoring of cloud point and deposition detection.

  11. Geomorphological activity at a rock glacier front detected with a 3D density-based clustering algorithm

    NASA Astrophysics Data System (ADS)

    Micheletti, Natan; Tonini, Marj; Lane, Stuart N.

    2017-02-01

Acquisition of high density point clouds using terrestrial laser scanners (TLSs) has become commonplace in geomorphic science. The derived point clouds are often interpolated onto regular grids and the grids compared to detect change (i.e. erosion and deposition/advancement movements). This procedure is necessary for some applications (e.g. digital terrain analysis), but it inevitably leads to a certain loss of potentially valuable information contained within the point clouds. In the present study, an alternative methodology for geomorphological analysis and feature detection from point clouds is proposed. It rests on the use of Density-Based Spatial Clustering of Applications with Noise (DBSCAN), applied to TLS data for a rock glacier front slope in the Swiss Alps. The proposed method allowed the detection and isolation of movements directly from point clouds, which yields accuracies in the subsequent computation of volumes that depend only on the actual registered distance between points. We demonstrated that these values are more conservative than volumes computed with the traditional DEM comparison. The results are illustrated for the summer of 2015, a season of enhanced geomorphic activity associated with exceptionally high temperatures.
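The clustering approach described above can be sketched with a minimal, stdlib-only DBSCAN for 3D points. This is a toy stand-in for the library implementations one would use on real TLS data; `eps` and `min_pts` values are illustrative assumptions, not those of the study.

```python
import math

def dbscan3d(points, eps=0.5, min_pts=4):
    """Minimal DBSCAN for 3D points. Returns one label per point; -1 = noise."""
    n = len(points)
    labels = [None] * n

    def neighbors(i):
        # Brute-force range query (real implementations use a k-d tree).
        return [j for j in range(n) if math.dist(points[i], points[j]) <= eps]

    cluster = -1
    for i in range(n):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1          # provisionally noise
            continue
        cluster += 1                # i is a core point: start a new cluster
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point reached from a core point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb = neighbors(j)
            if len(nb) >= min_pts:   # j is itself a core point: keep expanding
                queue.extend(nb)
    return labels
```

Points in dense groups receive a shared cluster id, while isolated points keep the noise label -1, which is how moving features can be isolated directly in the cloud.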

  12. Metric Scale Calculation for Visual Mapping Algorithms

    NASA Astrophysics Data System (ADS)

    Hanel, A.; Mitschke, A.; Boerner, R.; Van Opdenbosch, D.; Hoegner, L.; Brodie, D.; Stilla, U.

    2018-05-01

    Visual SLAM algorithms allow localizing the camera by mapping its environment by a point cloud based on visual cues. To obtain the camera locations in a metric coordinate system, the metric scale of the point cloud has to be known. This contribution describes a method to calculate the metric scale for a point cloud of an indoor environment, like a parking garage, by fusing multiple individual scale values. The individual scale values are calculated from structures and objects with a-priori known metric extension, which can be identified in the unscaled point cloud. Extensions of building structures, like the driving lane or the room height, are derived from density peaks in the point distribution. The extension of objects, like traffic signs with a known metric size, are derived using projections of their detections in images onto the point cloud. The method is tested with synthetic image sequences of a drive with a front-looking mono camera through a virtual 3D model of a parking garage. It has been shown, that each individual scale value improves either the robustness of the fused scale value or reduces its error. The error of the fused scale is comparable to other recent works.
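A minimal sketch of fusing individual metric-scale values, assuming each estimate comes with an uncertainty and that inverse-variance weighting stands in for the paper's (unspecified) fusion scheme:

```python
def fuse_scales(estimates):
    """Fuse individual metric-scale estimates given as (value, sigma) pairs
    by inverse-variance weighting; returns the fused scale and its sigma."""
    weights = [1.0 / s ** 2 for _, s in estimates]
    fused = sum(w * v for (v, _), w in zip(estimates, weights)) / sum(weights)
    sigma = (1.0 / sum(weights)) ** 0.5
    return fused, sigma
```

With this weighting, each additional scale value (e.g. from lane width, room height, or a detected traffic sign) either pulls the fused value toward the more certain estimates or tightens its uncertainty, matching the qualitative behaviour reported above.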

  13. Biodegradability of Chlorophenols in Surface Waters from the Urban Area of Buenos Aires.

    PubMed

    Gallego, A; Laurino Soulé, J; Napolitano, H; Rossi, S L; Vescina, C; Korol, S E

    2018-04-01

Biodegradability of 2-Chlorophenol (2-CP), 3-Chlorophenol (3-CP), 4-Chlorophenol (4-CP), 2,4-Dichlorophenol (2,4-DCP) and 2,4,6-Trichlorophenol (2,4,6-TCP) has been tested in surface waters in the urban area of Buenos Aires. Samples were taken from the La Plata River and from the Reconquista and Matanza-Riachuelo basins, with a total of 18 sampling points. Water quality was established by measuring chemical oxygen demand (COD), biochemical oxygen demand (BOD5), and both Escherichia coli and Enterococcus counts. Biodegradability assays were carried out by a respirometric method, using a chlorophenol concentration of 20 mg L-1 and the surface water as inoculum. Chlorophenol concentrations in the same water samples were simultaneously measured by a solid-phase microextraction (SPME) procedure followed by gas chromatography-mass spectrometry (GC-MS). 2,4-DCP was the most degradable compound, followed by 2,4,6-TCP, 4-CP, 3-CP and 2-CP. Biodegradability showed no correlation with compound concentration. At most sampling points the concentration was below the detection limit for all congeners. Biodegradability did not correlate with COD, BOD5, or fecal contamination either. Biodegradability assays highlighted information about bacterial exposure to contaminants that parameters routinely used for watercourse characterization do not reveal. For this reason, they might be a helpful tool to complete the characterization of a site.

  14. GPU-Based Point Cloud Superpositioning for Structural Comparisons of Protein Binding Sites.

    PubMed

    Leinweber, Matthias; Fober, Thomas; Freisleben, Bernd

    2018-01-01

    In this paper, we present a novel approach to solve the labeled point cloud superpositioning problem for performing structural comparisons of protein binding sites. The solution is based on a parallel evolution strategy that operates on large populations and runs on GPU hardware. The proposed evolution strategy reduces the likelihood of getting stuck in a local optimum of the multimodal real-valued optimization problem represented by labeled point cloud superpositioning. The performance of the GPU-based parallel evolution strategy is compared to a previously proposed CPU-based sequential approach for labeled point cloud superpositioning, indicating that the GPU-based parallel evolution strategy leads to qualitatively better results and significantly shorter runtimes, with speed improvements of up to a factor of 1,500 for large populations. Binary classification tests based on the ATP, NADH, and FAD protein subsets of CavBase, a database containing putative binding sites, show average classification rate improvements from about 92 percent (CPU) to 96 percent (GPU). Further experiments indicate that the proposed GPU-based labeled point cloud superpositioning approach can be superior to traditional protein comparison approaches based on sequence alignments.
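As a rough illustration of the underlying idea, a small (1+λ) evolution strategy can superpose two pre-matched 3D point sets by searching over a 6-D pose (Rodrigues rotation vector plus translation). This is a simplified, unlabeled, CPU sketch, not the paper's GPU-parallel large-population algorithm, and all parameter values are assumptions:

```python
import numpy as np

def rmsd(P, Q):
    """Root-mean-square deviation between matched point sets (rows)."""
    return np.sqrt(((P - Q) ** 2).sum(axis=1).mean())

def rot(v):
    """Rotation matrix from a Rodrigues vector v (axis * angle)."""
    theta = np.linalg.norm(v)
    if theta < 1e-12:
        return np.eye(3)
    k = v / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def es_superpose(P, Q, lam=20, gens=400, sigma=0.3, seed=0):
    """(1+lambda) evolution strategy over a 6-D pose minimising the RMSD of
    Q transformed onto P (points assumed pre-matched)."""
    rng = np.random.default_rng(seed)
    best = np.zeros(6)
    best_f = rmsd(P, Q @ rot(best[:3]).T + best[3:])
    for _ in range(gens):
        cand = best + rng.normal(0, sigma, size=(lam, 6))
        f = [rmsd(P, Q @ rot(c[:3]).T + c[3:]) for c in cand]
        i = int(np.argmin(f))
        if f[i] < best_f:
            best, best_f = cand[i], f[i]
        else:
            sigma *= 0.97   # simple step-size decay when no offspring improves
    return best, best_f
```

Population-based sampling of the pose space is what reduces the likelihood of getting stuck in a local optimum; the GPU version in the paper evaluates much larger populations of labeled point clouds in parallel.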

  15. Real object-based 360-degree integral-floating display using multiple depth camera

    NASA Astrophysics Data System (ADS)

    Erdenebat, Munkh-Uchral; Dashdavaa, Erkhembaatar; Kwon, Ki-Chul; Wu, Hui-Ying; Yoo, Kwan-Hee; Kim, Young-Seok; Kim, Nam

    2015-03-01

A novel 360-degree integral-floating display based on a real object is proposed. The general procedure of the display system is similar to that of conventional 360-degree integral-floating displays. Unlike previously presented 360-degree displays, the proposed system displays a 3D image generated from a real object in a 360-degree viewing zone. In order to display the real object in the 360-degree viewing zone, multiple depth cameras have been utilized to acquire the depth information around the object. Then, the 3D point cloud representations of the real object are reconstructed according to the acquired depth information. By using a special point cloud registration method, the multiple virtual 3D point cloud representations captured by each depth camera are combined into a single synthetic 3D point cloud model, and the elemental image arrays are generated for the newly synthesized 3D point cloud model from the given anamorphic optic system's angular step. The theory has been verified experimentally, and it shows that the proposed 360-degree integral-floating display can be an excellent way to display a real object in the 360-degree viewing zone.

  16. The Feasibility of 3d Point Cloud Generation from Smartphones

    NASA Astrophysics Data System (ADS)

    Alsubaie, N.; El-Sheimy, N.

    2016-06-01

    This paper proposes a new technique for increasing the accuracy of direct geo-referenced image-based 3D point cloud generated from low-cost sensors in smartphones. The smartphone's motion sensors are used to directly acquire the Exterior Orientation Parameters (EOPs) of the captured images. These EOPs, along with the Interior Orientation Parameters (IOPs) of the camera/ phone, are used to reconstruct the image-based 3D point cloud. However, because smartphone motion sensors suffer from poor GPS accuracy, accumulated drift and high signal noise, inaccurate 3D mapping solutions often result. Therefore, horizontal and vertical linear features, visible in each image, are extracted and used as constraints in the bundle adjustment procedure. These constraints correct the relative position and orientation of the 3D mapping solution. Once the enhanced EOPs are estimated, the semi-global matching algorithm (SGM) is used to generate the image-based dense 3D point cloud. Statistical analysis and assessment are implemented herein, in order to demonstrate the feasibility of 3D point cloud generation from the consumer-grade sensors in smartphones.

  17. Registration of Vehicle-Borne Point Clouds and Panoramic Images Based on Sensor Constellations

    PubMed Central

    Yao, Lianbi; Wu, Hangbin; Li, Yayun; Meng, Bin; Qian, Jinfei; Liu, Chun; Fan, Hongchao

    2017-01-01

    A mobile mapping system (MMS) is usually utilized to collect environmental data on and around urban roads. Laser scanners and panoramic cameras are the main sensors of an MMS. This paper presents a new method for the registration of the point clouds and panoramic images based on sensor constellation. After the sensor constellation was analyzed, a feature point, the intersection of the connecting line between the global positioning system (GPS) antenna and the panoramic camera with a horizontal plane, was utilized to separate the point clouds into blocks. The blocks for the central and sideward laser scanners were extracted with the segmentation feature points. Then, the point clouds located in the blocks were separated from the original point clouds. Each point in the blocks was used to find the accurate corresponding pixel in the relative panoramic images via a collinear function, and the position and orientation relationship amongst different sensors. A search strategy is proposed for the correspondence of laser scanners and lenses of panoramic cameras to reduce calculation complexity and improve efficiency. Four cases of different urban road types were selected to verify the efficiency and accuracy of the proposed method. Results indicate that most of the point clouds (with an average of 99.7%) were successfully registered with the panoramic images with great efficiency. Geometric evaluation results indicate that horizontal accuracy was approximately 0.10–0.20 m, and vertical accuracy was approximately 0.01–0.02 m for all cases. Finally, the main factors that affect registration accuracy, including time synchronization amongst different sensors, system positioning and vehicle speed, are discussed. PMID:28398256
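The per-point correspondence between a 3D point and a panoramic pixel can be illustrated under the simplifying assumption of an ideal equirectangular panorama whose frame is axis-aligned with the world frame. The actual method additionally models the sensor constellation, time synchronization, and lens geometry; the function below is a generic sketch:

```python
import math

def point_to_panorama_pixel(p, cam_pos, width, height):
    """Project a 3D point p onto an equirectangular panorama of size
    width x height taken at cam_pos. Returns fractional (column, row).
    Assumes the panorama frame is axis-aligned with the world frame."""
    dx, dy, dz = (p[i] - cam_pos[i] for i in range(3))
    azimuth = math.atan2(dy, dx)                    # -pi .. pi around the up axis
    elevation = math.atan2(dz, math.hypot(dx, dy))  # -pi/2 .. pi/2
    col = (azimuth + math.pi) / (2 * math.pi) * width
    row = (math.pi / 2 - elevation) / math.pi * height
    return col, row
```

A point straight ahead of the camera maps to the image center, and a point directly above maps to the top row; in the real system, each laser point in a block is mapped this way (after applying the inter-sensor pose) to find its corresponding panoramic pixel.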

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, Shawn

This code consists of Matlab routines which enable the user to perform non-manifold surface reconstruction via triangulation from high dimensional point cloud data. The code was based on an algorithm originally developed in [Freedman (2007), An Incremental Algorithm for Reconstruction of Surfaces of Arbitrary Codimension, Computational Geometry: Theory and Applications, 36(2):106-116]. This algorithm has been modified to accommodate non-manifold surfaces according to the work described in [S. Martin and J.-P. Watson (2009), Non-Manifold Surface Reconstruction from High Dimensional Point Cloud Data, SAND #5272610]. The motivation for developing the code was a point cloud describing the molecular conformation space of cyclooctane (C8H16). Cyclooctane conformation space was represented using points in 72 dimensions (3 coordinates for each atom). The code was used to triangulate the point cloud and thereby study the geometry and topology of cyclooctane. Future applications are envisioned for peptides and proteins.

  19. Classification of Mobile Laser Scanning Point Clouds from Height Features

    NASA Astrophysics Data System (ADS)

    Zheng, M.; Lemmens, M.; van Oosterom, P.

    2017-09-01

The demand for 3D maps of cities and road networks is steadily growing, and mobile laser scanning (MLS) systems are often the preferred geo-data acquisition method for capturing such scenes. Because MLS systems are mounted on cars or vans, they can acquire billions of points of road scenes within a few hours of survey. Manual processing of point clouds is labour intensive and thus time consuming and expensive. Hence, the need for rapid and automated methods for 3D mapping of dense point clouds is growing exponentially. Over the last five years, research on automated 3D mapping of MLS data has intensified tremendously. In this paper, we present our work on automated classification of MLS point clouds. In the present stage of the research we exploited three features, two height components and one reflectance value, and achieved an overall accuracy of 73%, which is encouraging for further refining our approach.
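A toy illustration of classifying an MLS point from two height features and one reflectance value. The thresholds and class names below are invented for illustration only; they are not the classifier or the values used in the paper, which does not disclose them at this level of detail:

```python
def classify_point(h_above_ground, h_range_in_cell, reflectance):
    """Rule-based per-point classification from two height features and one
    reflectance value. All thresholds are illustrative assumptions."""
    if h_above_ground < 0.2:
        # Near-ground points: bright, smooth returns suggest asphalt/markings.
        return "road" if reflectance > 0.5 else "ground"
    if h_range_in_cell > 3.0:
        # Tall vertical extent in the cell: facade vs. vegetation by reflectance.
        return "building" if reflectance < 0.3 else "tree"
    return "street_furniture"
```

A learned classifier (e.g. a decision tree trained on labeled points) would induce rules of exactly this shape from the same three features.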

  20. Outdoor Illegal Construction Identification Algorithm Based on 3D Point Cloud Segmentation

    NASA Astrophysics Data System (ADS)

    An, Lu; Guo, Baolong

    2018-03-01

Recently, various illegal constructions have appeared frequently in urban surroundings, seriously restricting the orderly development of urban modernization. 3D point cloud data technology can be used to identify illegal buildings and address this problem effectively. This paper proposes an outdoor illegal construction identification algorithm based on 3D point cloud segmentation. Initially, in order to save memory space and reduce processing time, a lossless point cloud compression method based on a minimum spanning tree is proposed. Then, a ground point removal method based on multi-scale filtering is introduced to increase accuracy. Finally, building clusters on the ground are obtained using a region growing method, so that illegal constructions can be marked. The effectiveness of the proposed algorithm is verified using a public data set collected by the International Society for Photogrammetry and Remote Sensing (ISPRS).
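The region growing step can be sketched as connected-component growth over occupied voxels of the ground-removed cloud. This is a generic illustration, not the authors' implementation, and the cell size is an assumption:

```python
from collections import defaultdict, deque

def grow_regions(points, cell=1.0):
    """Group points into candidate building clusters by region growing over
    occupied voxels (26-connectivity). Returns lists of point indices."""
    voxels = defaultdict(list)
    for idx, (x, y, z) in enumerate(points):
        voxels[(int(x // cell), int(y // cell), int(z // cell))].append(idx)
    seen, regions = set(), []
    for start in voxels:
        if start in seen:
            continue
        seen.add(start)
        queue, region = deque([start]), []
        while queue:
            v = queue.popleft()
            region.extend(voxels[v])
            # Visit all 26 neighbouring voxels of v.
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for dz in (-1, 0, 1):
                        nb = (v[0] + dx, v[1] + dy, v[2] + dz)
                        if nb in voxels and nb not in seen:
                            seen.add(nb)
                            queue.append(nb)
        regions.append(sorted(region))
    return regions
```

Each returned region is a spatially connected cluster; in the pipeline above, clusters would then be compared against cadastral records to mark illegal constructions.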

  1. Effect of electromagnetic field on Kordylewski clouds formation

    NASA Astrophysics Data System (ADS)

    Salnikova, Tatiana; Stepanov, Sergey

    2018-05-01

In previous papers the authors suggested an explanation of the phenomenon of appearance and disappearance of the Kordylewski clouds, accumulations of cosmic dust in the vicinity of the triangular libration points of the Earth-Moon system. Under the gravitational and light-pressure perturbations of the Sun, the triangular libration points are not points of relative equilibrium. However, there exist stable periodic motions of particles surrounding each of the triangular libration points. Due to this fact we can consider a probabilistic model of dust cloud formation: these clouds move along periodic orbits in a small vicinity of the libration points. To continue this research, we suggest a mathematical model to investigate also the electromagnetic influences arising when charged dust particles are considered in the vicinity of the triangular libration points of the Earth-Moon system. In this model we take into consideration the self-induced force field within the set of charged particles; the probability distribution density evolves according to the Vlasov equation.

  2. Point Cloud Based Change Detection - an Automated Approach for Cloud-based Services

    NASA Astrophysics Data System (ADS)

    Collins, Patrick; Bahr, Thomas

    2016-04-01

The fusion of stereo photogrammetric point clouds with LiDAR data or terrain information derived from SAR interferometry has a significant potential for 3D topographic change detection. The present case study uses latest point cloud generation and analysis capabilities to examine a landslide that occurred in the village of Malin in Maharashtra, India, on 30 July 2014, and affected an area of ca. 44,000 m². It focuses on Pléiades high resolution satellite imagery and the Airbus DS WorldDEM™ as a product of the TanDEM-X mission. This case study was performed using the COTS software package ENVI 5.3. Integration of custom processes and automation is supported by IDL (Interactive Data Language). Thus, ENVI analytics runs via the object-oriented and IDL-based ENVITask API. The pre-event topography is represented by the WorldDEM™ product, delivered with a raster of 12 m x 12 m and based on the EGM2008 geoid (called pre-DEM). For the post-event situation a Pléiades 1B stereo image pair of the affected AOI was obtained. The ENVITask "GeneratePointCloudsByDenseImageMatching" was implemented to extract passive point clouds in LAS format from the panchromatic stereo datasets: • A dense image-matching algorithm is used to identify corresponding points in the two images. • A block adjustment is applied to refine the 3D coordinates that describe the scene geometry. • Additionally, the WorldDEM™ was input to constrain the range of heights in the matching area, and subsequently the length of the epipolar line. The "PointCloudFeatureExtraction" task was executed to generate the post-event digital surface model from the photogrammetric point clouds (called post-DEM). Post-processing consisted of the following steps: • Adding the geoid component (EGM2008) to the post-DEM. • Pre-DEM reprojection to the UTM Zone 43N (WGS-84) coordinate system and resizing. • Subtraction of the pre-DEM from the post-DEM.
• Filtering and threshold based classification of the DEM difference to analyze the surface changes in 3D. The automated point cloud generation and analysis introduced here can be embedded in virtually any existing geospatial workflow for operational applications. Three integration options were implemented in this case study: • Integration within any ArcGIS environment whether deployed on the desktop, in the cloud, or online. Execution uses a customized ArcGIS script tool. A Python script file retrieves the parameters from the user interface and runs the precompiled IDL code. That IDL code is used to interface between the Python script and the relevant ENVITasks. • Publishing the point cloud processing tasks as services via the ENVI Services Engine (ESE). ESE is a cloud-based image analysis solution to publish and deploy advanced ENVI image and data analytics to existing enterprise infrastructures. For this purpose the entire IDL code can be capsuled in a single ENVITask. • Integration in an existing geospatial workflow using the Python-to-IDL Bridge. This mechanism allows calling IDL code within Python on a user-defined platform. The results of this case study allow a 3D estimation of the topographic changes within the tectonically active and anthropogenically invaded Malin area after the landslide event. Accordingly, the point cloud analysis was correlated successfully with modelled displacement contours of the slope. Based on optical satellite imagery, such point clouds of high precision and density distribution can be obtained in a few minutes to support the operational monitoring of landslide processes.
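The DEM differencing and threshold-based classification steps can be sketched in a few lines. This is a generic numpy illustration, not the ENVITask/IDL code used in the study, and the 2 m change threshold is an assumption:

```python
import numpy as np

def dem_change(pre_dem, post_dem, threshold=2.0):
    """Classify per-cell elevation change between two co-registered DEMs
    (same grid, same vertical datum). Returns a class raster with
    -1 (material loss), 0 (no significant change), +1 (material gain),
    plus the raw difference raster."""
    diff = post_dem - pre_dem
    cls = np.zeros_like(diff, dtype=int)
    cls[diff > threshold] = 1
    cls[diff < -threshold] = -1
    return cls, diff
```

Summing `diff` over the non-zero classes and multiplying by the cell area then gives a first-order estimate of displaced volume, which is the 3D surface-change information the workflow above derives for the landslide.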

  3. Observations of the boundary layer, cloud, and aerosol variability in the southeast Pacific coastal marine stratocumulus during VOCALS-REx

    NASA Astrophysics Data System (ADS)

    Zheng, X.; Albrecht, B.; Jonsson, H. H.; Khelif, D.; Feingold, G.; Minnis, P.; Ayers, K.; Chuang, P.; Donaher, S.; Rossiter, D.; Ghate, V.; Ruiz-Plancarte, J.; Sun-Mack, S.

    2011-05-01

Aircraft observations made off the coast of northern Chile in the Southeastern Pacific (20° S, 72° W; named Point Alpha) from 16 October to 13 November 2008 during the VAMOS Ocean-Cloud-Atmosphere-Land Study-Regional Experiment (VOCALS-REx), combined with meteorological reanalysis, satellite measurements, and radiosonde data, are used to investigate the boundary layer (BL) and aerosol-cloud-drizzle variations in this region. The BL at Point Alpha was typical of a non-drizzling stratocumulus-topped BL on days without predominant synoptic and mesoscale influences. The BL had a depth of 1140 ± 120 m, was well-mixed and capped by a sharp inversion. The wind direction generally switched from southerly within the BL to northerly above the inversion. The cloud liquid water path (LWP) varied between 15 g m-2 and 160 g m-2. From 29 October to 4 November, when a synoptic system affected conditions at Point Alpha, the cloud LWP was higher than on the other days by around 40 g m-2. On 1 and 2 November, a moist layer above the inversion moved over Point Alpha. The total-water specific humidity above the inversion was larger than that within the BL during these days. Entrainment rates (average of 1.5 ± 0.6 mm s-1) calculated from the near cloud-top fluxes and turbulence (vertical velocity variance) in the BL at Point Alpha appeared to be weaker than those in the BL over the open ocean west of Point Alpha and the BL near the coast of the northeast Pacific. The accumulation mode aerosol varied from 250 to 700 cm-3 within the BL, and CCN at 0.2 % supersaturation within the BL ranged between 150 and 550 cm-3. The main aerosol source at Point Alpha was horizontal advection within the BL from the south. The average cloud droplet number concentration ranged between 80 and 400 cm-3, which was consistent with the satellite-derived values. The relationship of cloud droplet number concentration and CCN at 0.2 % supersaturation from 18 flights is Nd = 4.6 × CCN^0.71.
While the mean LWP retrieved from GOES was in good agreement with the in situ measurements, the GOES-derived cloud droplet effective radius tended to be larger than that from the aircraft in situ observations near cloud top. The aerosol and cloud LWP relationship reveals that during the typical well-mixed BL days the cloud LWP increased with the CCN concentrations. On the other hand, meteorological factors and the decoupling processes have large influences on the cloud LWP variation as well.
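The fitted relation between droplet number concentration and CCN reported above can be written directly (both in cm^-3, CCN at 0.2 % supersaturation, as stated in the abstract):

```python
def droplet_number(ccn_02):
    """Flight-derived fit Nd = 4.6 * CCN^0.71 from the 18 VOCALS-REx flights
    (Nd and CCN at 0.2% supersaturation, both in cm^-3)."""
    return 4.6 * ccn_02 ** 0.71
```

Over the observed CCN range of 150-550 cm^-3 this fit yields roughly 160-400 droplets cm^-3, consistent with the 80-400 cm^-3 range reported for the measured droplet concentrations.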

  4. Higgs CP violation from vectorlike quarks

    DOE PAGES

    Chen, Chien-Yi; Dawson, S.; Zhang, Yue

    2015-10-20

We explore CP violating aspects in the Higgs sector of models where new vectorlike quarks carry Yukawa couplings mainly to the third generation quarks of the Standard Model. We point out that in the simplest model, Higgs CP violating interactions only exist in the hWW channel. At low energy, we find that rare B decays can place similarly strong constraints as those from electric dipole moments on the source of CP violation. These observations offer a new handle to discriminate from other Higgs CP violating scenarios such as scalar sector extensions of the Standard Model, and imply an interesting future interplay among limits from different experiments.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warner-Schmid, D.; Hoshi, Suwaru; Armstrong, D.W.

Aqueous solutions of nonionic surfactants are known to undergo phase separation at elevated temperatures. This phenomenon is known as 'clouding,' and the temperature at which it occurs is referred to as the cloud point. Permethylhydroxypropyl-[beta]-cyclodextrin (PMHP-[beta]-CD) was synthesized and aqueous solutions containing it were found to undergo similar cloud-point behavior. Factors that affect the phase separation of PMHP-[beta]-CD were investigated. Subsequently, the cloud-point extractions of several aromatic compounds (i.e., acetanilide, aniline, 2,2[prime]-dihydroxybiphenyl, N-methylaniline, 2-naphthol, o-nitroaniline, m-nitroaniline, p-nitroaniline, nitrobenzene, o-nitrophenol, m-nitrophenol, p-nitrophenol, 4-phenazophenol, 3-phenylphenol, and 2-phenylbenzimidazole) from dilute aqueous solution were evaluated. Although the extraction efficiency of the compounds varied, most can be quantitatively extracted if sufficient PMHP-[beta]-CD is used. For those few compounds that are not extracted (e.g., o-nitroacetanilide), the cloud-point procedure may be an effective one-step isolation or purification method. 18 refs., 2 figs., 3 tabs.

  6. Registration of ‘CP 09-1430’ Sugarcane

    USDA-ARS?s Scientific Manuscript database

    ‘CP 09-1430’ (Reg. No. ; PI 686940) sugarcane (a complex hybrid of Saccharum spp.) was developed and released (6 Jun. 2016) through cooperative research conducted by the USDA-ARS Sugarcane Field Station, Canal Point, the University of Florida, and the Florida Sugar Cane League, Inc. for use on ...

  7. Motor Learning Characterizes Habilitation of Children With Hemiplegic Cerebral Palsy

    PubMed Central

    Krebs, Hermano I.; Fasoli, Susan E.; Dipietro, Laura; Fragala-Pinkham, Maria; Hughes, Richard; Stein, Joel; Hogan, Neville

    2015-01-01

    Background This study tested whether motor habilitation in children with cerebral palsy (CP) resembles motor learning. Methods Twelve children with hemiplegic CP ages 5 to 12 years with moderate to severe motor impairments underwent a 16-session robot-mediated planar therapy program to improve upper limb reach, with a focus on shoulder and elbow movements. Participants were trained to execute point-to-point movements (with robot assistance) with the affected arm and were evaluated (without robot assistance) in trained (point-to-point) and untrained (circle-drawing) conditions. Outcomes were measured at baseline, midpoint, immediately after the program, and 1 month postcompletion. Outcome measures were the Fugl-Meyer (FM), Quality of Upper Extremity Skills Test (QUEST), and Modified Ashworth Scale (MAS) scores; parent questionnaire; and robot-based kinematic metrics. To assess whether learning best characterizes motor habilitation in CP, the authors quantified (a) improvement on trained tasks at completion of training (acquisition) and 1 month following completion (retention) and (b) generalization of improvement to untrained tasks. Results After robotic intervention, the authors found significant gains in the FM, QUEST, and parent questionnaire. Robot-based evaluations demonstrated significant improvement in trained movements, and that improvement was sustained at follow-up. Furthermore, children improved their performance in untrained movements, indicating generalization. Conclusions Motor habilitation in CP exhibits some traits of motor learning. Optimal treatment may not require an extensive repertoire of tasks but rather a select set to promote generalization. PMID:22331211

  8. Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method

    PubMed Central

    Shen, Yueqian; Lindenbergh, Roderik; Wang, Jinhu

    2016-01-01

    A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system, with the disadvantage that this registration step introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. Either targets or virtual points corresponding to some reconstructable feature in the scene are used as feature points. The new method is applied to two scans sampling a masonry laboratory building before and after seismic testing, which resulted in damage on the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud-to-cloud change analysis, demonstrating the potential of the new method for structural analysis. PMID:28029121
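The baseline idea above can be sketched in a few lines: pairwise distances between feature points are invariant under rigid motion, so comparing the baselines of corresponding points across two epochs reveals deformation without any registration step. A minimal pure-Python illustration (function names and the toy coordinates are hypothetical, not from the paper):

```python
import math

def baseline_lengths(points):
    """Pairwise distances ("baselines") between feature points within one scan."""
    ids = sorted(points)
    out = {}
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            out[(a, b)] = math.dist(points[a], points[b])
    return out

def baseline_changes(scan1, scan2):
    """Compare baselines of corresponding feature points across two epochs.

    No registration is needed: baselines are invariant under rigid motion,
    so any change in a baseline reflects deformation, not alignment error.
    """
    b1, b2 = baseline_lengths(scan1), baseline_lengths(scan2)
    return {k: b2[k] - b1[k] for k in b1 if k in b2}

# Feature points (e.g. brick centres) before and after seismic testing.
before = {"A": (0.0, 0.0, 0.0), "B": (2.0, 0.0, 0.0), "C": (0.0, 1.0, 0.0)}
after  = {"A": (0.0, 0.0, 0.0), "B": (2.05, 0.0, 0.0), "C": (0.0, 1.0, 0.0)}
changes = baseline_changes(before, after)
# Baseline A-B lengthened by ~5 cm; baseline A-C is unchanged.
```

In practice the correspondences would come from automatically extracted targets or brick centres, and the changes would be analyzed in a structural coordinate system.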

  9. Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method.

    PubMed

    Shen, Yueqian; Lindenbergh, Roderik; Wang, Jinhu

    2016-12-24

    A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system, with the disadvantage that this registration step introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. Either targets or virtual points corresponding to some reconstructable feature in the scene are used as feature points. The new method is applied to two scans sampling a masonry laboratory building before and after seismic testing, which resulted in damage on the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud-to-cloud change analysis, demonstrating the potential of the new method for structural analysis.

  10. Optimization of hepatobiliary phase delay time of Gd-EOB-DTPA-enhanced magnetic resonance imaging for identification of hepatocellular carcinoma in patients with cirrhosis of different degrees of severity.

    PubMed

    Wu, Jian-Wei; Yu, Yue-Cheng; Qu, Xian-Li; Zhang, Yan; Gao, Hong

    2018-01-21

    To optimize the hepatobiliary phase delay time (HBP-DT) of Gd-EOB-DTPA-enhanced magnetic resonance imaging (GED-MRI) for more efficient identification of hepatocellular carcinoma (HCC) occurring in different degrees of cirrhosis assessed by Child-Pugh (CP) score. The liver parenchyma signal intensity (LPSI), the liver parenchyma (LP)/HCC signal ratios, and the visibility of HCC at HBP-DT of 5, 10, 15, 20, and 25 min (i.e., DT-5, DT-10, DT-15, DT-20, and DT-25) after injection of Gd-EOB-DTPA were collected and analyzed in 73 patients with cirrhosis of different degrees of severity (including 42 patients suffering from HCC) and 18 healthy adult controls. The LPSI increased with HBP-DT more significantly in the healthy group than in the cirrhosis group (F = 17.361, P < 0.001). The LP/HCC signal ratios had a significant difference (F = 12.453, P < 0.001) among various HBP-DT points, as well as between CP-A and CP-B/C subgroups (F = 9.761, P < 0.001). The constituent ratios of HCC foci identified as obvious hypointensity (+++), moderate hypointensity (++), and mild hypointensity or isointensity (+/-) kept stable from DT-10 to DT-25: 90.6%, 9.4%, and 0.0% in the CP-A subgroup; 50.0%, 50.0%, and 0.0% in the CP-B subgroup; and 0.0%, 0.0%, and 100.0% in the CP-C subgroup, respectively. The severity of liver cirrhosis has a significant negative influence on HCC visualization by GED-MRI. DT-10 is more efficient and practical than other HBP-DT points to identify most HCC foci emerging in CP-A cirrhosis, as well as in CP-B cirrhosis; but an HBP-DT of 15 min or longer seems more appropriate than DT-10 for visualization of HCC in patients with CP-C cirrhosis.

  11. Terrestrial laser scanning point clouds time series for the monitoring of slope movements: displacement measurement using image correlation and 3D feature tracking

    NASA Astrophysics Data System (ADS)

    Bornemann, Pierrick; Jean-Philippe, Malet; André, Stumpf; Anne, Puissant; Julien, Travelletti

    2016-04-01

    Dense multi-temporal point clouds acquired with terrestrial laser scanning (TLS) have proved useful for the study of the structure and kinematics of slope movements. Most existing deformation analysis methods rely on the use of interpolated data. Approaches that use multiscale image correlation provide a precise and robust estimation of the observed movements; however, for non-rigid motion patterns, these methods tend to underestimate all the components of the movement. Further, for rugged surface topography, interpolated data introduce a bias and a loss of information in local places where the point cloud is not sufficiently dense. These limits can be overcome by deformation analysis that directly exploits the original 3D point clouds, under some hypotheses on the deformation (e.g., the classic ICP algorithm requires an initial guess by the user of the expected displacement patterns). The objective of this work is therefore to propose a deformation analysis method applied to a series of 20 3D point clouds covering the period October 2007 - October 2015 at the Super-Sauze landslide (South East French Alps). The dense point clouds were acquired with a terrestrial long-range Optech ILRIS-3D laser scanning device from the same base station. The time series are analyzed using two approaches: 1) a method of correlation of gradient images, and 2) a method of feature tracking in the raw 3D point clouds. The estimated surface displacements are then compared with GNSS surveys on reference targets. Preliminary results tend to show that the image correlation method provides a good first-order estimation of the displacement fields, but has limitations such as the inability to track some deformation patterns, and the use of a perspective projection that does not preserve the original angles and distances in the correlated images. Results obtained with 3D point cloud comparison algorithms (C2C, ICP, M3C2) bring additional information on the displacement fields. Displacement fields derived from both approaches are then combined and provide a better understanding of the landslide kinematics.

  12. Image-Based Airborne LiDAR Point Cloud Encoding for 3d Building Model Retrieval

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Chen; Lin, Chao-Hung

    2016-06-01

    With the development of Web 2.0 and cyber city modeling, an increasing number of 3D models have become available on web-based model-sharing platforms, with many applications such as navigation, urban planning, and virtual reality. Based on the concept of data reuse, a 3D model retrieval system is proposed to retrieve building models similar to a user-specified query. The basic idea behind this system is to reuse existing 3D building models instead of reconstructing them from point clouds. To retrieve models efficiently, the models in databases are generally encoded compactly using a shape descriptor. However, most of the geometric descriptors in related works are applied to polygonal models. In this study, the input query of the model retrieval system is a point cloud acquired by Light Detection and Ranging (LiDAR) systems, because of their efficient scene scanning and spatial information collection. Using point clouds with sparse, noisy, and incomplete sampling as input queries is more difficult than using 3D models. Because the building roof is more informative than other parts of an airborne LiDAR point cloud, an image-based approach is proposed to encode both point clouds from input queries and 3D models in databases. The main goal of the data encoding is that the models in the database and the input point clouds can be consistently encoded. Firstly, top-view depth images of buildings are generated to represent the geometric surface of a building roof. Secondly, geometric features are extracted from the depth images based on the heights, edges, and planes of buildings. Finally, descriptors are extracted using spatial histograms and used in the 3D model retrieval system. For data retrieval, models are retrieved by matching the encoding coefficients of point clouds and building models. In experiments, a database of about 900,000 3D models collected from the Internet is used for the evaluation of data retrieval. The results of the proposed method show a clear superiority over related methods.

  13. a Threshold-Free Filtering Algorithm for Airborne LIDAR Point Clouds Based on Expectation-Maximization

    NASA Astrophysics Data System (ADS)

    Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.

    2018-04-01

    Filtering is a key step for most applications of airborne LiDAR point clouds. Although many filtering algorithms have been put forward in recent years, most of them suffer from parameter setting or threshold adjusting, which is time-consuming and reduces the degree of automation of the algorithm. To overcome this problem, this paper proposes a threshold-free filtering algorithm based on expectation-maximization. The proposed algorithm is developed on the assumption that the point cloud can be seen as a mixture of Gaussian models, so the separation of ground points and non-ground points can be recast as the separation of a Gaussian mixture. Expectation-maximization (EM) is applied to realize this separation: EM computes maximum likelihood estimates of the mixture parameters, and using the estimated parameters, the likelihood of each point belonging to ground or object can be computed. After several iterations, each point is labelled as belonging to the component with the larger likelihood. Furthermore, intensity information is utilized to refine the filtering results obtained with the EM method. The proposed algorithm was tested on two different datasets used in practice. Experimental results showed that the proposed method can filter non-ground points effectively. To quantitatively evaluate the proposed method, this paper adopted the dataset provided by the ISPRS for the test. The proposed algorithm obtains a 4.48% total error, which is much lower than most of the eight classical filtering algorithms reported by the ISPRS.
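The EM separation described above can be illustrated with a minimal 1D sketch: treat point heights as a two-component Gaussian mixture, estimate the mixture parameters with EM, and label each point by its more likely component. This is a simplified pure-Python illustration of the idea, not the authors' implementation (the function name and the synthetic data are invented):

```python
import math, random

def em_two_gaussians(x, iters=50):
    """EM for a two-component 1D Gaussian mixture (e.g. ground vs. object heights)."""
    mu = [min(x), max(x)]        # initialise means at the extremes
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        r = []
        for xi in x:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(xi - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = sum(p)
            r.append([pk / s for pk in p])
        # M-step: re-estimate weights, means, and variances
        for k in range(2):
            nk = sum(ri[k] for ri in r)
            pi[k] = nk / len(x)
            mu[k] = sum(ri[k] * xi for ri, xi in zip(r, x)) / nk
            var[k] = max(sum(ri[k] * (xi - mu[k]) ** 2
                             for ri, xi in zip(r, x)) / nk, 1e-6)
    # label each point with the more likely component (0 = lower mean, "ground")
    return [0 if ri[0] >= ri[1] else 1 for ri in r]

# Synthetic heights: 200 near-ground points and 100 elevated object points.
random.seed(0)
heights = [random.gauss(0.2, 0.05) for _ in range(200)] + \
          [random.gauss(5.0, 0.5) for _ in range(100)]
labels = em_two_gaussians(heights)
```

With well-separated components, nearly all ground points are assigned label 0 and object points label 1; real terrain is messier, which is why the paper also folds in intensity information to refine the result.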

  14. Street curb recognition in 3d point cloud data using morphological operations

    NASA Astrophysics Data System (ADS)

    Rodríguez-Cuenca, Borja; Concepción Alonso-Rodríguez, María; García-Cortés, Silverio; Ordóñez, Celestino

    2015-04-01

    Accurate and automatic detection of cartographic entities saves a great deal of time and money when creating and updating cartographic databases. The current trend in remote sensing feature extraction is to develop methods that are as automatic as possible, i.e., algorithms that can obtain accurate results with the least possible human intervention. Non-manual curb detection is an important issue in road maintenance, 3D urban modeling, and autonomous navigation. This paper focuses on the semi-automatic recognition of curbs and street boundaries using a 3D point cloud registered by a mobile laser scanner (MLS) system. The work is divided into four steps. First, a coordinate system transformation is carried out, moving from a global coordinate system to a local one. After that, and in order to simplify the calculations involved in the procedure, a rasterization based on the projection of the measured point cloud onto the XY plane is carried out, passing from the 3D original data to a 2D image. To determine the location of curbs in the image, image processing techniques such as thresholding and morphological operations are applied. Finally, the upper and lower edges of curbs are detected by an unsupervised classification algorithm on the curvature and roughness of the points that represent curbs. The proposed method is valid in both straight and curved road sections and is applicable both to laser scanner and stereo vision 3D data because it is independent of the scanning geometry. The method has been successfully tested with two datasets measured by different sensors. The first dataset corresponds to a point cloud measured by a TOPCON sensor in the Spanish town of Cudillero; it comprises more than 6,000,000 points and covers a 400-meter street. The second dataset corresponds to a point cloud measured by a RIEGL sensor in the Austrian town of Horn; it comprises 8,000,000 points and represents a 160-meter street. The proposed method provides success rates in curb recognition of over 85% in both datasets.
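The thresholding-plus-morphology step above can be sketched in miniature: after rasterizing heights onto a 2D grid and thresholding height jumps into a binary mask, a morphological gradient (dilation minus erosion) outlines candidate curb lines. A toy pure-Python sketch (the mask and structuring-element size are illustrative, not from the paper):

```python
def erode(grid, size=3):
    """Binary erosion with a size x size square structuring element."""
    h, w, r = len(grid), len(grid[0]), size // 2
    return [[int(all(0 <= i + di < h and 0 <= j + dj < w and grid[i + di][j + dj]
                     for di in range(-r, r + 1) for dj in range(-r, r + 1)))
             for j in range(w)] for i in range(h)]

def dilate(grid, size=3):
    """Binary dilation with a size x size square structuring element."""
    h, w, r = len(grid), len(grid[0]), size // 2
    return [[int(any(0 <= i + di < h and 0 <= j + dj < w and grid[i + di][j + dj]
                     for di in range(-r, r + 1) for dj in range(-r, r + 1)))
             for j in range(w)] for i in range(h)]

# Thresholded height-jump mask: columns j >= 4 are "raised" (curb side).
mask = [[1 if j >= 4 else 0 for j in range(8)] for _ in range(5)]
# Morphological gradient = dilation - erosion: marks the boundary cells.
d, e = dilate(mask), erode(mask)
grad = [[d[i][j] - e[i][j] for j in range(8)] for i in range(5)]
```

On the interior rows, `grad` is 1 exactly along the step between the two height regions, which is the kind of candidate-curb line the unsupervised classifier would then refine.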

  15. Construction of noninterpenetrating and interpenetrating Co(ii) networks with halogenated carboxylate modulated by auxiliary N-donor co-ligands: structural diversity, electrochemical and photocatalytic properties.

    PubMed

    Hao, Shao Yun; Hou, Suo Xia; Van Hecke, Kristof; Cui, Guang Hua

    2017-02-14

    Six Co(ii)-based coordination polymers (CPs) with characteristic frameworks and topologies - namely, [Co(L1)(DCTP)]n (1), [Co(L2)(DCTP)]n (2), [Co(L3)(DCTP)]n (3), {[Co3(L4)3(DCTP)3·H2O]·H2O}n (4), [Co(L5)1.5(DCTP)]n (5) and [Co(L6)(DCTP)]n (6) - were successfully hydrothermally synthesized by employing the halogenated linear ligand 2,5-dichloroterephthalic acid (H2DCTP). The interpenetrated structures could be rationally modulated by auxiliary N-donor co-ligands containing 1,1'-(1,4-butanediyl)bis-1H-benzimidazole (L1), 1,4-bis(5,6-dimethylbenzimidazol-1-yl)-2-butylene (L2), 1,2-bis(2-methylbenzimidazol-1-ylmethyl)benzene (L3), 1,4-bis(2-methylbenzimidazol-1-ylmethyl)benzene (L4), 1,2-bis(5,6-dimethylbenzimidazol-1-ylmethyl)benzene (L5) and 1,4-bis(5,6-dimethylbenzimidazol-1-ylmethyl)benzene (L6). These diaphanous crystals were characterized by elemental analysis, infrared (IR) spectra and X-ray powder diffraction (XRPD), as well as single-crystal X-ray diffraction analysis. With the aid of the flexible N-donor co-ligands, CP 1 occupies a non-interpenetrated 2D sheet with a {4^4·6^2} sql net topology, CP 2 possesses a 3D hexagon-shaped network with a {6^6} three-fold interpenetrated sqc6 topology, CP 3 exhibits a 2D layer with a {4^4·6^2} sql net topology, CP 4 reveals an unusual 3D framework with a {4^2·6^3·8} three-fold interpenetrated sra topology, CP 5 has a 3D hexagon-shaped network with a {6^6} two-fold interpenetrated sqc6 topology, while CP 6 displays a 3D hexagon-shaped network with a {6^6} three-fold interpenetrated sqc6 topology. The diverse structures of CPs 1-6 illustrate that the substituent group and the position of the methyl group of the bis(benzimidazole) derivatives play a significant role in the assembly of such interpenetrated frameworks. Moreover, the luminescence properties and thermal behavior, as well as the electrochemical and photocatalytic properties of CPs 1-6 in the degradation of methylene blue, are also presented.

  16. 3D reconstruction from non-uniform point clouds via local hierarchical clustering

    NASA Astrophysics Data System (ADS)

    Yang, Jiaqi; Li, Ruibo; Xiao, Yang; Cao, Zhiguo

    2017-07-01

    Raw scanned 3D point clouds are usually irregularly distributed due to the essential shortcomings of laser sensors, which poses a great challenge for high-quality 3D surface reconstruction. This paper tackles this problem by proposing a local hierarchical clustering (LHC) method to improve the consistency of the point distribution. Specifically, LHC consists of two steps: 1) adaptive octree-based decomposition of 3D space, and 2) hierarchical clustering. The former aims at reducing the computational complexity, and the latter transforms the non-uniform point set into a uniform one. Experimental results on real-world scanned point clouds validate the effectiveness of our method from both qualitative and quantitative aspects.

  17. Applications of low altitude photogrammetry for morphometry, displacements, and landform modeling

    NASA Astrophysics Data System (ADS)

    Gomez, F. G.; Polun, S. G.; Hickcox, K.; Miles, C.; Delisle, C.; Beem, J. R.

    2016-12-01

    Low-altitude aerial surveying is emerging as a tool that greatly improves the ease and efficiency of measuring landforms for quantitative geomorphic analyses. High-resolution, close-range photogrammetry produces dense, 3-dimensional point clouds that facilitate the construction of digital surface models, as well as a potential means of classifying ground targets using spatial structure. This study presents results from recent applications of UAS-based photogrammetry, including high resolution surface morphometry of a lava flow, repeat-pass applications to mass movements, and fault scarp degradation modeling. Depending upon the desired photographic resolution and the platform/payload flown, aerial photos are typically acquired at altitudes of 40 - 100 meters above the ground surface. In all cases, high-precision ground control points are key for accurate (and repeatable) orientation - relying on low-precision GPS coordinates (whether on the ground or geotags in the aerial photos) typically results in substantial rotations (tilt) of the reference frame. Using common ground control points between repeat surveys results in matching point clouds with RMS residuals better than 10 cm. In arid regions, the point cloud is used to assess lava flow surface roughness using multi-scale measurements of point cloud dimensionality. For the landslide study, the point cloud provides a basis for assessing possible displacements. In addition, the high resolution orthophotos facilitate mapping of fractures and their growth. For neotectonic applications, we compare fault scarp modeling results from UAV-derived point clouds versus field-based surveys (kinematic GPS and electronic distance measurements). In summary, there is a wide ranging toolbox of low-altitude aerial platforms becoming available for field geoscientists. In many instances, these tools will present convenience and reduced cost compared with the effort and expense to contract acquisitions of aerial imagery.

  18. SEMANTIC3D.NET: a New Large-Scale Point Cloud Classification Benchmark

    NASA Astrophysics Data System (ADS)

    Hackel, T.; Savinov, N.; Ladicky, L.; Wegner, J. D.; Schindler, K.; Pollefeys, M.

    2017-05-01

    This paper presents a new 3D point cloud classification benchmark data set with over four billion manually labelled points, meant as input for data-hungry (deep) learning methods. We also discuss first submissions to the benchmark that use deep convolutional neural networks (CNNs) as a workhorse, which already show remarkable performance improvements over the state of the art. CNNs have become the de facto standard for many tasks in computer vision and machine learning, like semantic segmentation or object detection in images, but have not yet led to a true breakthrough for 3D point cloud labelling tasks due to a lack of training data. With the massive data set presented in this paper, we aim at closing this data gap to help unleash the full potential of deep learning methods for 3D labelling tasks. Our semantic3D.net data set consists of dense point clouds acquired with static terrestrial laser scanners. It contains 8 semantic classes and covers a wide range of urban outdoor scenes: churches, streets, railroad tracks, squares, villages, soccer fields and castles. We describe our labelling interface and show that our data set provides more dense and complete point clouds, with a much higher overall number of labelled points, compared to those already available to the research community. We further provide baseline method descriptions and a comparison between methods submitted to our online system. We hope semantic3D.net will pave the way for deep learning methods in 3D point cloud labelling to learn richer, more general 3D representations, and first submissions after only a few months indicate that this might indeed be the case.

  19. Analysis of Structures, Functions, and Epitopes of Cysteine Protease from Spirometra erinaceieuropaei Spargana

    PubMed Central

    Liu, Li Na; Cui, Jing; Zhang, Xi; Wei, Tong; Jiang, Peng; Wang, Zhong Quan

    2013-01-01

    Spirometra erinaceieuropaei cysteine protease (SeCP) in sparganum ES proteins recognized by early infection sera was identified by MALDI-TOF/TOF-MS. The aim of this study was to predict the structures and functions of the SeCP protein by using the full-length cDNA sequence of the SeCP gene with online sites and software programs. The SeCP gene sequence was 1,053 bp in length, with a largest ORF of 1,011 bp encoding a 336-amino-acid protein with a complete cathepsin propeptide inhibitor domain and a peptidase C1A conserved domain. The predicted molecular weight and isoelectric point of SeCP were 37.87 kDa and 6.47, respectively. SeCP has a signal peptide site and no transmembrane domain, and is located outside the membrane. The secondary structure of SeCP contained 8 α-helixes, 7 β-strands, and 20 coils. SeCP had 15 potential antigenic epitopes and 19 HLA-I restricted epitopes. Based on the phylogenetic analysis of SeCP, S. erinaceieuropaei has the closest evolutionary status with S. mansonoides. SeCP is a proteolytic enzyme with a variety of biological functions, and its antigenic epitopes could provide important insights into diagnostic antigens and target molecules for antisparganum drugs. PMID:24392448

  20. Balloon borne Antarctic frost point measurements and their impact on polar stratospheric cloud theories

    NASA Technical Reports Server (NTRS)

    Rosen, James M.; Hofmann, D. J.; Carpenter, J. R.; Harder, J. W.; Oltmans, S. J.

    1988-01-01

    The first balloon-borne frost point measurements over Antarctica were made during September and October, 1987 as part of the NOZE 2 effort at McMurdo. The results indicate water vapor mixing ratios on the order of 2 ppmv in the 15 to 20 km region, which is significantly smaller than the typical values currently being used in polar stratospheric cloud (PSC) theories. The observed water vapor mixing ratio would correspond to saturated conditions for what is thought to be the lowest stratospheric temperatures encountered over the Antarctic. Through the use of available lidar observations, there appears to be significant evidence that some PSCs form at temperatures higher than the local frost point (with respect to water) in the 10 to 20 km region, thus supporting the nitric acid theory of PSC composition. Clouds near 15 km and below appear to form in regions saturated with respect to water and thus are probably mostly ice water clouds, although they could contain relatively small amounts of other constituents. Photographic evidence suggests that the clouds forming above the frost point probably have an appearance quite different from the lower altitude iridescent, colored nacreous clouds.

  1. An approach of point cloud denoising based on improved bilateral filtering

    NASA Astrophysics Data System (ADS)

    Zheng, Zeling; Jia, Songmin; Zhang, Guoliang; Li, Xiuzhi; Zhang, Xiangyin

    2018-04-01

    An omnidirectional mobile platform is designed for building point clouds based on an improved filtering algorithm that is employed to handle the depth image. First, the mobile platform can move flexibly and its control interface is convenient to use. Then, because the traditional bilateral filtering algorithm is time-consuming and inefficient, a novel method called local bilateral filtering (LBF) is proposed. LBF is applied to process depth images obtained by the Kinect sensor. The results show that the noise-removal effect is improved compared with standard bilateral filtering. Offline, the color images and processed depth images are used to build point clouds. Finally, experimental results demonstrate that our method improves both the processing speed of depth images and the quality of the point clouds that are built.
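The standard bilateral filter that LBF improves upon weights each neighboring pixel by both spatial distance and depth similarity, so sensor noise is smoothed while depth discontinuities (object edges) stay sharp. A minimal pure-Python sketch of that baseline on a small depth image (parameter values are illustrative; the paper's LBF variant is not reproduced here):

```python
import math

def bilateral(depth, radius=1, sigma_s=1.0, sigma_r=0.1):
    """Bilateral filter on a depth image: each output pixel is a weighted
    average of its neighbors, weighted by spatial closeness (sigma_s) and
    by depth similarity (sigma_r), which preserves depth edges."""
    h, w = len(depth), len(depth[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            acc = norm = 0.0
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        ws = math.exp(-(di * di + dj * dj) / (2 * sigma_s ** 2))
                        wr = math.exp(-((depth[ni][nj] - depth[i][j]) ** 2)
                                      / (2 * sigma_r ** 2))
                        acc += ws * wr * depth[ni][nj]
                        norm += ws * wr
            out[i][j] = acc / norm
    return out

# A step edge between depths 1.0 m and 2.0 m survives filtering almost
# untouched, because cross-edge neighbors get a near-zero range weight.
img = [[1.0, 1.0, 2.0, 2.0] for _ in range(4)]
flt = bilateral(img)
```

The inefficiency the authors target is visible here: every pixel loops over its full neighborhood, which is what a "local" variant would restrict or approximate.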

  2. Point cloud modeling using the homogeneous transformation for non-cooperative pose estimation

    NASA Astrophysics Data System (ADS)

    Lim, Tae W.

    2015-06-01

    A modeling process to simulate point cloud range data that a lidar (light detection and ranging) sensor produces is presented in this paper in order to support the development of non-cooperative pose (relative attitude and position) estimation approaches which will help improve proximity operation capabilities between two adjacent vehicles. The algorithms in the modeling process were based on the homogeneous transformation, which has been employed extensively in robotics and computer graphics, as well as in recently developed pose estimation algorithms. Using a flash lidar in a laboratory testing environment, point cloud data of a test article was simulated and compared against the measured point cloud data. The simulated and measured data sets match closely, validating the modeling process. The modeling capability enables close examination of the characteristics of point cloud images of an object as it undergoes various translational and rotational motions. Relevant characteristics that will be crucial in non-cooperative pose estimation were identified such as shift, shadowing, perspective projection, jagged edges, and differential point cloud density. These characteristics will have to be considered in developing effective non-cooperative pose estimation algorithms. The modeling capability will allow extensive non-cooperative pose estimation performance simulations prior to field testing, saving development cost and providing performance metrics of the pose estimation concepts and algorithms under evaluation. The modeling process also provides "truth" pose of the test objects with respect to the sensor frame so that the pose estimation error can be quantified.
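The homogeneous-transformation core of such a modeling process can be sketched directly: a 4x4 matrix combines a rotation and a translation, and applying it to every point of a model simulates a pose change of the target as seen from the sensor frame. A minimal pure-Python illustration (function names and the example pose are hypothetical, not the paper's implementation):

```python
import math

def homogeneous(yaw_deg, t):
    """4x4 homogeneous transform: rotation about z by yaw_deg, then translation t."""
    c, s = math.cos(math.radians(yaw_deg)), math.sin(math.radians(yaw_deg))
    return [[c,  -s,  0.0, t[0]],
            [s,   c,  0.0, t[1]],
            [0.0, 0.0, 1.0, t[2]],
            [0.0, 0.0, 0.0, 1.0]]

def transform(T, pts):
    """Apply T to a list of 3D points, simulating the target's pose change."""
    out = []
    for x, y, z in pts:
        v = (x, y, z, 1.0)   # homogeneous coordinates
        out.append(tuple(sum(T[r][k] * v[k] for k in range(4)) for r in range(3)))
    return out

# Rotate a toy point cloud 90 degrees about z and shift it 1 m along x.
cloud = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
moved = transform(homogeneous(90.0, (1.0, 0.0, 0.0)), cloud)
```

Because the "truth" pose is exactly the matrix used to generate the simulated cloud, pose estimation error can be quantified by comparing an estimator's output against it, which is the evaluation strategy the abstract describes.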

  3. Quality Assessment and Comparison of Smartphone and Leica C10 Laser Scanner Based Point Clouds

    NASA Astrophysics Data System (ADS)

    Sirmacek, Beril; Lindenbergh, Roderik; Wang, Jinhu

    2016-06-01

    3D urban models are valuable for urban map generation, environment monitoring, safety planning and educational purposes. For 3D measurement of urban structures, generally airborne laser scanning sensors or multi-view satellite images are used as a data source. However, close-range sensors (such as terrestrial laser scanners) and low-cost cameras (which can generate point clouds based on photogrammetry) can provide denser sampling of 3D surface geometry. Unfortunately, terrestrial laser scanning sensors are expensive, and trained persons are needed to use them for point cloud acquisition. A potentially effective alternative is 3D modelling based on a low-cost smartphone sensor. Herein, we show examples of using smartphone camera images to generate 3D models of urban structures. We compare a smartphone-based 3D model of an example structure with a terrestrial laser scanning point cloud of the structure. This comparison gives us the opportunity to discuss the differences in terms of geometric correctness, as well as the advantages, disadvantages and limitations in data acquisition and processing. We also discuss how smartphone-based point clouds can help to solve further problems with 3D urban model generation in a practical way. We show that terrestrial laser scanning point clouds which do not have color information can be colored using smartphones. The experiments, discussions and scientific findings may be insightful for future studies in the field of fast, easy and low-cost 3D urban model generation.

  4. Knowledge-Based Object Detection in Laser Scanning Point Clouds

    NASA Astrophysics Data System (ADS)

    Boochs, F.; Karmacharya, A.; Marbs, A.

    2012-07-01

    Object identification and object processing in 3D point clouds have always posed challenges in terms of effectiveness and efficiency. In practice, this process is highly dependent on human interpretation of the scene represented by the point cloud data, as well as the set of modeling tools available for use. Such modeling algorithms are data-driven and concentrate on specific features of the objects, being accessible to numerical models. We present an approach that brings the human expert knowledge about the scene, the objects inside, and their representation by the data and the behavior of algorithms to the machine. This "understanding" enables the machine to assist human interpretation of the scene inside the point cloud. Furthermore, it allows the machine to understand possibilities and limitations of algorithms and to take this into account within the processing chain. This not only assists the researchers in defining optimal processing steps, but also provides suggestions when certain changes or new details emerge from the point cloud. Our approach benefits from the advancement in knowledge technologies within the Semantic Web framework. This advancement has provided a strong base for applications based on knowledge management. In the article we will present and describe the knowledge technologies used for our approach such as Web Ontology Language (OWL), used for formulating the knowledge base and the Semantic Web Rule Language (SWRL) with 3D processing and topologic built-ins, aiming to combine geometrical analysis of 3D point clouds, and specialists' knowledge of the scene and algorithmic processing.

  5. Roughness Estimation from Point Clouds - A Comparison of Terrestrial Laser Scanning and Image Matching by Unmanned Aerial Vehicle Acquisitions

    NASA Astrophysics Data System (ADS)

    Rutzinger, Martin; Bremer, Magnus; Ragg, Hansjörg

    2013-04-01

    Recently, terrestrial laser scanning (TLS) and matching of images acquired by unmanned aerial vehicles (UAV) have become operational for 3D geodata acquisition in Geoscience applications. However, the two systems cover different application domains in terms of acquisition conditions and data properties, i.e. accuracy and line of sight. In this study we investigate the major differences between the two platforms for terrain roughness estimation. Terrain roughness is an important input for various applications such as morphometry studies, geomorphologic mapping, and natural process modeling (e.g. rockfall, avalanche, and hydraulic modeling). Data were collected simultaneously by TLS using an Optech ILRIS3D and by a rotary UAV (an octocopter from twins.nrn) for a 900 m² test site located in a riverbed in Tyrol, Austria (Judenbach, Mieming). The TLS point cloud was acquired from three scan positions, which were registered using the iterative closest point (ICP) algorithm and a target-based referencing approach. For registration, geometric targets (spheres) with a diameter of 20 cm were used; these targets were measured with dGPS for absolute georeferencing. The TLS point cloud has an average point density of 19,000 pts/m², which represents a point spacing of about 5 mm. 15 images were acquired by UAV at a height of 20 m using a calibrated camera with a focal length of 18.3 mm. A 3D point cloud containing RGB attributes was derived using the APERO/MICMAC software, by a direct georeferencing approach based on the aircraft IMU data. The point cloud was finally co-registered with the TLS data to guarantee optimal preparation for the analysis. The UAV point cloud has an average point density of 17,500 pts/m², which represents a point spacing of 7.5 mm.
    After registration and georeferencing, the level of detail of the roughness representation in both point clouds has been compared considering elevation differences, roughness and the representation of different grain sizes. UAV closes the gap between aerial and terrestrial surveys in terms of resolution and acquisition flexibility; this is also true for the data accuracy. Considering these data collection and data quality properties, each system has its own merit in terms of scale, data quality, data collection speed and application.
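
    Density figures like those above can be converted to an approximate point spacing under the idealized assumption of uniform grid sampling, spacing ≈ 1/√density; a quick sketch:

```python
import math

def spacing_from_density(points_per_m2):
    """Approximate inter-point spacing (m) for a uniform grid with the
    given areal point density (pts/m^2)."""
    return 1.0 / math.sqrt(points_per_m2)

# The UAV cloud: 17,500 pts/m^2 -> roughly 7.6 mm spacing.
print(round(spacing_from_density(17500) * 1000, 1))
```

    Real scan patterns are not uniform, so reported spacings can deviate from this estimate.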

  6. Point Cloud Based Relative Pose Estimation of a Satellite in Close Range

    PubMed Central

    Liu, Lujiang; Zhao, Gaopeng; Bo, Yuming

    2016-01-01

    Determination of the relative pose of satellites is essential in space rendezvous operations and on-orbit servicing missions. The key problems are the selection of a suitable sensor on board the chaser and efficient techniques for pose estimation. This paper aims to estimate the pose of a target satellite in close range, on the basis of its known model, by using point cloud data generated by a flash LIDAR sensor. A novel model-based pose estimation method is proposed; it includes a fast and reliable initial pose acquisition method based on global optimal searching, which processes the dense point cloud data directly, and a pose tracking method based on the Iterative Closest Point (ICP) algorithm. A simulation system is also presented in order to evaluate the performance of the sensor and generate simulated sensor point cloud data; it provides the ground-truth pose of the test target so that the pose estimation error can be quantified. To investigate the effectiveness of the proposed approach and the achievable pose accuracy, numerical simulation experiments were performed; the results demonstrate the algorithm's capability of operating directly on point clouds and of handling large pose variations. A field testing experiment was also conducted, and the results show that the proposed method is effective. PMID:27271633
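
    The pose tracking step above relies on the Iterative Closest Point algorithm. A minimal point-to-point ICP sketch (illustrative only; the paper's implementation and its global initial-acquisition stage are not reproduced here) pairs each observed point with its nearest model point and solves for the rigid transform by the SVD-based Kabsch method:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (known correspondences), via the SVD-based Kabsch solution."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Point-to-point ICP: pair each source point with its nearest target
    point, solve for the rigid transform, apply it, repeat."""
    cur = src.copy()
    for _ in range(iters):
        idx = np.argmin(((cur[:, None] - dst[None]) ** 2).sum(-1), axis=1)
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
    return cur

# Toy target "model" cloud and a slightly rotated/translated observation.
rng = np.random.default_rng(0)
target = rng.uniform(-1.0, 1.0, (60, 3))
a = 0.1  # rotation about z, radians
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0,        0.0,       1.0]])
observed = target @ Rz.T + np.array([0.05, -0.03, 0.02])

aligned = icp(observed, target)
print(round(float(np.abs(aligned - target).max()), 6))  # residual misalignment
```

    With a good initial guess, as provided here by the small perturbation, the residual drops to numerical noise; in practice the global initial acquisition supplies that starting pose.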

  7. Automatic registration of fused lidar/digital imagery (texel images) for three-dimensional image creation

    NASA Astrophysics Data System (ADS)

    Budge, Scott E.; Badamikar, Neeraj S.; Xie, Xuan

    2015-03-01

    Several photogrammetry-based methods have been proposed that derive three-dimensional (3-D) information from digital images taken from different perspectives, and lidar-based methods have been proposed that merge lidar point clouds and texture the merged point clouds with digital imagery. Image registration alone has difficulty with smooth regions of low contrast, whereas point cloud merging alone has difficulty with outliers and a lack of proper convergence in the merging process. This paper presents a method to create 3-D images that uses the unique properties of texel images (pixel-fused lidar and digital imagery) to improve the quality and robustness of fused 3-D images. The proposed method uses both image processing and point-cloud merging to combine texel images in an iterative technique. Since the digital image pixels and the lidar 3-D points are fused at the sensor level, more accurate 3-D images are generated because registration of the image data automatically improves the merging of the point clouds, and vice versa. Examples illustrate the value of this method over other methods. The proposed method also includes modifications for the situation where an estimate of the position and attitude of the sensor is known, as obtained from low-cost global positioning system and inertial measurement unit sensors.

  8. Comparison of the filtering models for airborne LiDAR data by three classifiers with exploration on model transfer

    NASA Astrophysics Data System (ADS)

    Ma, Hongchao; Cai, Zhan; Zhang, Liang

    2018-01-01

    This paper discusses airborne light detection and ranging (LiDAR) point cloud filtering (a binary classification problem) from the machine learning point of view. We compared three supervised classifiers for point cloud filtering, namely, Adaptive Boosting, support vector machine, and random forest (RF). Nineteen features were generated from the raw LiDAR point cloud based on height and other geometric information within a given neighborhood. The test datasets issued by the International Society for Photogrammetry and Remote Sensing (ISPRS) were used to evaluate the performance of the three filtering algorithms; RF showed the best results, with an average total error of 5.50%. The paper also makes a tentative exploration of applying transfer learning to point cloud filtering, which, to the authors' knowledge, has not previously been introduced into the LiDAR field. We performed filtering of three datasets from real projects carried out in China with RF models constructed by learning from the 15 ISPRS datasets and then transferred with little to no change of the parameters. Reliable results were achieved, especially in rural areas (overall accuracy of 95.64%), indicating the feasibility of model transfer in the context of point cloud filtering for both easy automation and acceptable accuracy.
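
    The kind of height-based neighborhood feature such classifiers consume can be illustrated with a toy example. The sketch below is a hypothetical simplification (not one of the paper's nineteen features): each point's height above the lowest point within a horizontal search radius, thresholded in place of a trained classifier:

```python
import numpy as np

def height_above_local_min(points, radius=2.0):
    """A typical filtering feature: each point's height above the lowest
    point within a horizontal search radius (large values suggest non-ground)."""
    feats = np.empty(len(points))
    for i, p in enumerate(points):
        d2 = ((points[:, :2] - p[:2]) ** 2).sum(axis=1)
        feats[i] = p[2] - points[d2 <= radius ** 2, 2].min()
    return feats

# Toy scene: flat ground at z = 0 plus one 5 m "building" point on top.
ground = np.array([[x, y, 0.0] for x in range(10) for y in range(10)])
building = np.array([[4.5, 4.5, 5.0]])
cloud = np.vstack([ground, building])

f = height_above_local_min(cloud)
labels = f > 1.0           # simple threshold in place of a trained RF
print(int(labels.sum()))   # 1 -> only the building point is flagged
```

    A real filter would combine many such features in a supervised classifier rather than a single threshold.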

  9. Large Scale Ice Water Path and 3-D Ice Water Content

    DOE Data Explorer

    Liu, Guosheng

    2008-01-15

    Cloud ice water concentration is one of the most important, yet poorly observed, cloud properties. Developing physical parameterizations used in general circulation models through single-column modeling is one of the key foci of the ARM program. In addition to the vertical profiles of temperature, water vapor and condensed water at the model grids, large-scale horizontal advective tendencies of these variables are also required as forcing terms in the single-column models. Observed horizontal advection of condensed water has not been available because the radar/lidar/radiometer observations at the ARM site are single-point measurements and therefore do not provide the horizontal distribution of condensed water. The intention of this product is to provide the large-scale distribution of cloud ice water by merging available surface and satellite measurements. The satellite cloud ice water algorithm uses ARM ground-based measurements as a baseline and produces datasets for 3-D cloud ice water distributions in a 10 deg x 10 deg area near the ARM site. The approach of the study is to expand a (surface) point measurement to a (satellite) areal measurement. That is, this study takes advantage of the high quality cloud measurements at the ARM site. We use the cloud characteristics derived from the point measurement to guide and constrain the satellite retrieval, then use the satellite algorithm to derive the cloud ice water distributions within a 10 deg x 10 deg area centered at the ARM site.

  10. Influence of pedal cadence on the respiratory compensation point and its relation to critical power.

    PubMed

    Broxterman, R M; Ade, C J; Barker, T; Barstow, T J

    2015-03-01

    It is not known whether the respiratory compensation point (RCP) represents a distinct work rate (in watts, W) or a distinct metabolic rate (V̇O2), nor whether the RCP is mechanistically related to critical power (CP). To examine these relationships, 10 male collegiate athletes performed cycling incremental and constant-power tests at 60 and 100 rpm to determine RCP and CP. The RCP work rate was significantly (p≤0.05) lower for 100 than for 60 rpm (197±24 W vs. 222±24 W), while the RCP V̇O2 was not significantly different (3.00±0.33 l min(-1) vs. 3.12±0.41 l min(-1)). CP at 60 rpm (214±51 W; V̇O2: 3.01±0.69 l min(-1)) and 100 rpm (196±46 W; V̇O2: 2.95±0.54 l min(-1)) was not significantly different from RCP. However, RCP and CP were not significantly correlated. These findings demonstrate that the RCP represents a distinct metabolic rate, which can be achieved at different power outputs, but that RCP and CP are not equivalent parameters and should not, therefore, be used synonymously. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Coarse Point Cloud Registration by Egi Matching of Voxel Clusters

    NASA Astrophysics Data System (ADS)

    Wang, Jinhu; Lindenbergh, Roderik; Shen, Yueqian; Menenti, Massimo

    2016-06-01

    Laser scanning samples the surface geometry of objects efficiently and records versatile information as point clouds. However, often several scans are required to fully cover a scene, so a registration step is required that transforms the different scans into a common coordinate system. The registration of point clouds is usually conducted in two steps, i.e. coarse registration followed by fine registration. In this study an automatic marker-free coarse registration method for pair-wise scans is presented. First the two input point clouds are re-sampled as voxels and dimensionality features of the voxels are determined by principal component analysis (PCA). Then voxel cells with the same dimensionality are clustered. Next, the Extended Gaussian Image (EGI) descriptors of those voxel clusters are constructed using the significant eigenvectors of each voxel in the cluster. Correspondences between clusters in the source and target data are obtained according to the similarity between their EGI descriptors. The random sample consensus (RANSAC) algorithm is employed to remove outlying correspondences until a coarse alignment is obtained. If necessary, a fine registration is performed in a final step. The new method is illustrated on scan data sampling two indoor scenarios. The results of the tests are evaluated by computing the point-to-point distance between the two input point clouds. The two tests resulted in mean distances of 7.6 mm and 9.5 mm respectively, which is adequate for fine registration.
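
    The PCA-based dimensionality features used in the first step can be sketched as follows; linearity, planarity and scattering are the standard eigenvalue ratios of the local covariance matrix (an illustrative reading of the method, not the authors' code):

```python
import numpy as np

def dimensionality_features(points):
    """Eigenvalue-based dimensionality of a voxel or neighbourhood:
    (linearity, planarity, scattering) from the covariance spectrum."""
    cov = np.cov(points.T)
    w = np.sort(np.linalg.eigvalsh(cov))[::-1]  # lam1 >= lam2 >= lam3 >= 0
    l1, l2, l3 = w
    return (l1 - l2) / l1, (l2 - l3) / l1, l3 / l1

# A deterministic planar patch: spread in x/y, exactly zero extent in z.
xs, ys = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
plane = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])

lin, pla, sca = dimensionality_features(plane)
print(round(lin, 6), round(pla, 6))  # 0.0 1.0 -> a planar voxel
```

    Voxels with matching dominant ratios (linear, planar, scattered) are what the clustering step then groups together.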

  12. CP4 miracle: shaping Yukawa sector with CP symmetry of order four

    NASA Astrophysics Data System (ADS)

    Ferreira, P. M.; Ivanov, Igor P.; Jiménez, Enrique; Pasechnik, Roman; Serôdio, Hugo

    2018-01-01

    We explore the phenomenology of a unique three-Higgs-doublet model based on the single CP symmetry of order 4 (CP4) without any accidental symmetries. The CP4 symmetry is imposed on the scalar potential and Yukawa interactions, strongly shaping both sectors of the model and leading to a very characteristic phenomenology. The scalar sector is analyzed in detail, and in the Yukawa sector we list all possible CP4-symmetric structures which do not run into immediate conflict with experiment, namely, do not lead to massless or mass-degenerate quarks nor to insufficient mixing or CP-violation in the CKM matrix. We show that the parameter space of the model, although very constrained by CP4, is large enough to comply with the electroweak precision data and the LHC results for the 125 GeV Higgs boson phenomenology, as well as to perfectly reproduce all fermion masses, mixing, and CP violation. Despite the presence of flavor-changing neutral currents mediated by heavy Higgs scalars, we find through a parameter space scan many points which accurately reproduce the kaon CP-violating parameter ɛK as well as oscillation parameters in K and B(s) mesons. Thus, CP4 offers a novel minimalistic framework for building models with very few assumptions, sufficient predictive power, and rich phenomenology yet to be explored.

  13. New Perspectives of Point Clouds Color Management - the Development of Tool in Matlab for Applications in Cultural Heritage

    NASA Astrophysics Data System (ADS)

    Pepe, M.; Ackermann, S.; Fregonese, L.; Achille, C.

    2017-02-01

    The paper describes a method for point cloud color management and integration for data obtained from Terrestrial Laser Scanner (TLS) and Image Based (IB) survey techniques. Especially in the Cultural Heritage (CH) environment, methods and techniques to improve the color quality of point clouds play a key role, because a homogeneous texture leads to a more accurate reconstruction of the investigated object and to a more pleasant perception of the object's color as well. A color management method for point clouds can be useful in the case of a single dataset acquired by the TLS or IB technique, as well as in the case of chromatic heterogeneity resulting from merging different datasets. The latter condition can occur when the scans are acquired at different moments of the same day, or when scans of the same object are performed over a period of weeks or months and consequently under different environment/lighting conditions. In this paper, a procedure to balance the point cloud color in order to make the different datasets uniform, to improve the chromatic quality and to highlight further details is presented and discussed.

  14. Classification of Aerial Photogrammetric 3d Point Clouds

    NASA Astrophysics Data System (ADS)

    Becker, C.; Häni, N.; Rosinskaya, E.; d'Angelo, E.; Strecha, C.

    2017-05-01

    We present a powerful method to extract per-point semantic class labels from aerial photogrammetry data. Labelling this kind of data is important for tasks such as environmental modelling, object classification and scene understanding. Unlike previous point cloud classification methods that rely exclusively on geometric features, we show that incorporating color information yields a significant increase in accuracy in detecting semantic classes. We test our classification method on three real-world photogrammetry datasets that were generated with Pix4Dmapper Pro, and with varying point densities. We show that off-the-shelf machine learning techniques coupled with our new features allow us to train highly accurate classifiers that generalize well to unseen data, processing point clouds containing 10 million points in less than 3 minutes on a desktop computer.

  15. Geometries and properties of bimetallic phosphido-bridged complexes Cp(CO)2W(μ-PPh2)W(CO)5 and Cp(CO)3W(μ-PPh2)W(CO)5

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Yang, Hongmei; Yang, Zuoyin; Zhang, Jingchang; Cao, Weiliang

    2007-01-01

    Complete geometry optimizations were carried out by HF and DFT methods to study the molecular structure of the binuclear transition-metal compounds Cp(CO)3W(μ-PPh2)W(CO)5 (I) and Cp(CO)2W(μ-PPh2)W(CO)5 (II). A comparison of the experimental data and calculated structural parameters demonstrates that the most accurate geometry parameters are predicted by MPW1PW91/LANL2DZ among the three DFT methods. Topological properties of the molecular charge distributions were analyzed with the theory of atoms in molecules. (3, -1) critical points, namely bond critical points, were found between the two tungsten atoms, and between W1 and C10 in complex II, which confirms the existence of the metal-metal bond and a semi-bridging CO between the two tungsten atoms. The results provide theoretical guidance for the detailed study of binuclear phosphido-bridged complexes containing transition metal-metal bonds, which could be useful in further studies of heterobimetallic phosphido-bridged complexes.

  16. Microphysical Processes Affecting the Pinatubo Volcanic Plume

    NASA Technical Reports Server (NTRS)

    Hamill, Patrick; Houben, Howard; Young, Richard; Turco, Richard; Zhao, Jingxia

    1996-01-01

    In this paper we consider microphysical processes which affect the formation of sulfate particles and their size distribution in a dispersing cloud. A model for the dispersion of the Mt. Pinatubo volcanic cloud is described. We then consider a single point in the dispersing cloud and study the effects of nucleation, condensation and coagulation on the time evolution of the particle size distribution at that point.

  17. 3D local feature BKD to extract road information from mobile laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Liu, Yuan; Dong, Zhen; Liang, Fuxun; Li, Bijun; Peng, Xiangyang

    2017-08-01

    Extracting road information from point clouds obtained through mobile laser scanning (MLS) is essential for autonomous vehicle navigation, and has hence garnered a growing amount of research interest in recent years. However, the performance of such systems is seriously affected by varying point density and noise. This paper proposes a novel three-dimensional (3D) local feature called the binary kernel descriptor (BKD) to extract road information from MLS point clouds. The BKD consists of Gaussian kernel density estimation and binarization components to encode the shape and intensity information of the 3D point clouds, which are fed to a random forest classifier to extract curbs and markings on the road. These are then used to derive road information, such as the number of lanes, the lane width, and intersections. In experiments, the precision and recall of the proposed feature for the detection of curbs and road markings on an urban dataset and a highway dataset were as high as 90%, showing that the BKD is accurate and robust against varying point density and noise.

  18. Hierarchical Regularization of Polygons for Photogrammetric Point Clouds of Oblique Images

    NASA Astrophysics Data System (ADS)

    Xie, L.; Hu, H.; Zhu, Q.; Wu, B.; Zhang, Y.

    2017-05-01

    Despite the success of multi-view stereo (MVS) reconstruction from massive oblique images at city scale, only point clouds and triangulated meshes are available from existing MVS pipelines; these are laden with topological defects, free of semantic information, and hard to edit and manipulate interactively in further applications. On the other hand, 2D polygons and polygonal models are still the industrial standard. However, extraction of 2D polygons from MVS point clouds is a non-trivial task, given that the boundaries of the detected planes are zigzagged and regularities, such as parallelism and orthogonality, are not preserved. Aiming to solve these issues, this paper proposes a hierarchical polygon regularization method for photogrammetric point clouds from existing MVS pipelines, which comprises local and global levels. After boundary point extraction, e.g. using alpha shapes, the local level is used to consolidate the original points by refining the orientation and position of the points using linear priors. The points are then grouped into local segments by forward searching. In the global level, regularities are enforced through a labeling process that encourages segments to share the same label, where a shared label indicates that the segments are parallel or orthogonal. This is formulated as a Markov Random Field and solved efficiently. Preliminary results are obtained with point clouds from aerial oblique images and compared with two classical regularization methods, revealing that the proposed method is more powerful in abstracting a single building and is promising for further 3D polygonal model reconstruction and GIS applications.
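
    The global regularity enforcement can be illustrated in miniature: given the orientations of zigzagged boundary segments, estimate one dominant direction and snap each segment to it or to its normal. This sketch replaces the paper's MRF labeling with a much simpler circular-mean heuristic:

```python
import math

def snap_to_orthogonal(angles_deg):
    """Enforce parallel/orthogonal regularity on boundary segments: find the
    dominant direction modulo 90 degrees (circular mean of 4*theta) and snap
    every segment to the nearest multiple of 90 degrees from it."""
    s = sum(math.sin(math.radians(4 * a)) for a in angles_deg)
    c = sum(math.cos(math.radians(4 * a)) for a in angles_deg)
    dominant = math.degrees(math.atan2(s, c)) / 4 % 90
    return [dominant + 90 * round((a - dominant) / 90) for a in angles_deg]

# Zigzagged boundary segments of a roughly rectangular footprint:
raw = [1.5, 88.7, -2.0, 91.0, 179.2]
snapped = snap_to_orthogonal(raw)
# After snapping, all segment directions differ by exact multiples of 90.
print([round(a, 1) for a in snapped])
```

    The MRF formulation generalizes this to several competing direction labels with a smoothness cost, rather than a single dominant direction.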

  19. Variability and genetic structure of the population of watermelon mosaic virus infecting melon in Spain.

    PubMed

    Moreno, I M; Malpica, J M; Díaz-Pendón, J A; Moriones, E; Fraile, A; García-Arenal, F

    2004-01-05

    The genetic structure of the population of Watermelon mosaic virus (WMV) in Spain was analysed by the biological and molecular characterisation of isolates sampled from its main host plant, melon. The population was highly homogeneous, built of a single pathotype, and comprising isolates closely related genetically. There was indication of temporal replacement of genotypes, but not of spatial structure of the population. Analyses of nucleotide sequences in three genomic regions, that is, in the cistrons for the P1, cylindrical inclusion (CI) and capsid (CP) proteins, showed higher values of nucleotide diversity for the P1 than for the CI or CP cistrons. The CI protein and the CP were under tighter evolutionary constraints than the P1 protein. Also, for the CI and CP cistrons, but not for the P1 cistron, two groups of sequences, defining two genetic strains, were apparent. Thus, different genomic regions of WMV show different evolutionary dynamics. Interestingly, for the CI and CP cistrons, sequences were clustered into two regions of the sequence space, defining the two strains above, and no intermediary sequences were identified. Recombinant isolates were found, accounting for at least 7% of the population. These recombinants presented two interesting features: (i) crossover points were detected between the analysed regions in the CI and CP cistrons, but not between those in the P1 and CI cistrons; (ii) crossover points were not observed within the analysed coding regions for the P1, CI or CP proteins. This indicates strong selection against isolates with recombinant proteins, even when originated from closely related strains. Hence, the data indicate that genotypes of WMV, generated by mutation or recombination, outside of acceptable, discrete regions in the evolutionary space are eliminated from the virus population by negative selection.

  20. Amyloid protein-mediated differential DNA methylation status regulates gene expression in Alzheimer's disease model cell line

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sung, Hye Youn; Choi, Eun Nam; Ahn Jo, Sangmee

    2011-11-04

    Highlights: • Genome-wide DNA methylation pattern in an Alzheimer's disease model cell line. • Integrated analysis of CpG methylation and mRNA expression profiles. • Identification of three Swedish mutant target genes: CTIF, NXT2 and DDR2. • The effect of the Swedish mutation on alteration of DNA methylation and gene expression. -- Abstract: The Swedish mutation of amyloid precursor protein (APP-sw) has been reported to dramatically increase beta amyloid production through aberrant cleavage at the beta secretase site, causing early-onset Alzheimer's disease (AD). DNA methylation has been reported to be associated with AD pathogenesis, but the underlying molecular mechanism of APP-sw-mediated epigenetic alterations in AD pathogenesis remains largely unknown. We analyzed the genome-wide interplay between promoter CpG DNA methylation and gene expression in an APP-sw-expressing AD model cell line. To identify genes whose expression is regulated by DNA methylation status, we performed an integrated analysis of CpG methylation and mRNA expression profiles, and identified three target genes of the APP-sw mutant: hypomethylated CTIF (CBP80/CBP20-dependent translation initiation factor) and NXT2 (nuclear exporting factor 2), and hypermethylated DDR2 (discoidin domain receptor 2). Treatment with the demethylating agent 5-aza-2′-deoxycytidine restored mRNA expression of these three genes, implying methylation-dependent transcriptional regulation. The most profound alterations in methylation status were detected at the -435, -295, and -271 CpG sites of CTIF, and at the -505 to -341 region in the promoter of DDR2. In the promoter region of NXT2, only one CpG site, located at -432, was differentially unmethylated in APP-sw cells. Thus, we demonstrated the effect of the APP-sw mutation on the alteration of DNA methylation and subsequent gene expression. This epigenetic regulatory mechanism may contribute to the pathogenesis of AD.

  1. Multiview 3D sensing and analysis for high quality point cloud reconstruction

    NASA Astrophysics Data System (ADS)

    Satnik, Andrej; Izquierdo, Ebroul; Orjesek, Richard

    2018-04-01

    Multiview 3D reconstruction techniques enable digital reconstruction of 3D objects from the real world by fusing different viewpoints of the same object into a single 3D representation. This process is by no means trivial and the acquisition of high quality point cloud representations of dynamic 3D objects is still an open problem. In this paper, an approach for high fidelity 3D point cloud generation using low cost 3D sensing hardware is presented. The proposed approach runs in an efficient low-cost hardware setting based on several Kinect v2 scanners connected to a single PC. It performs autocalibration and runs in real-time exploiting an efficient composition of several filtering methods including Radius Outlier Removal (ROR), Weighted Median filter (WM) and Weighted Inter-Frame Average filtering (WIFA). The performance of the proposed method has been demonstrated through efficient acquisition of dense 3D point clouds of moving objects.

  2. Performance testing of 3D point cloud software

    NASA Astrophysics Data System (ADS)

    Varela-González, M.; González-Jorge, H.; Riveiro, B.; Arias, P.

    2013-10-01

    LiDAR systems have been used widely in recent years for many applications in the engineering field: civil engineering, cultural heritage, mining, industry and environmental engineering. One of the most important limitations of this technology is the large computational requirement involved in data processing, especially for large mobile LiDAR datasets. Several software solutions for data management are available in the market, including open source suites; however, users often lack methodologies to verify their performance properly. In this work a methodology for LiDAR software performance testing is presented and four different suites are studied: QT Modeler, VR Mesh, AutoCAD 3D Civil and the Point Cloud Library running in software developed at the University of Vigo (SITEGI). The software based on the Point Cloud Library shows better results in point cloud loading time and CPU usage. However, it is not as strong as the commercial suites in the working set and commit size tests.

  3. Triangulation Error Analysis for the Barium Ion Cloud Experiment. M.S. Thesis - North Carolina State Univ.

    NASA Technical Reports Server (NTRS)

    Long, S. A. T.

    1973-01-01

    The triangulation method developed specifically for the Barium Ion Cloud Project is discussed. Expressions for the four displacement errors, the three slope errors, and the curvature error in the triangulation solution due to a probable error in the lines-of-sight from the observation stations to points on the cloud are derived. The triangulation method is then used to determine the effect of the following on these different errors in the solution: the number and location of the stations, the observation duration, east-west cloud drift, the number of input data points, and the addition of extra cameras to one of the stations. The pointing displacement errors and the pointing slope errors are compared. The displacement errors in the solution due to a probable error in the position of a moving station, plus the weighting factors for the data from the moving station, are also determined.

  4. 3D reconstruction of wooden member of ancient architecture from point clouds

    NASA Astrophysics Data System (ADS)

    Zhang, Ruiju; Wang, Yanmin; Li, Deren; Zhao, Jun; Song, Daixue

    2006-10-01

    This paper presents a 3D reconstruction method to model wooden members of ancient architecture from point clouds, based on an improved deformable model. Three steps are taken to recover the shape of a wooden member. First, the Hessian matrix is adopted to compute the axis of the wooden member. Second, an initial model of the wooden member is made from contours orthogonal to its axis. Third, an accurate model is obtained through the coupling between the initial model and the point clouds of the wooden member according to the theory of the improved deformable model. Each step and algorithm is studied and described in the paper. Using point clouds captured from the Forbidden City of China, a shaft member and a beam member are taken as examples to test the proposed method. Results show the efficiency and robustness of the method in modeling the wooden members of ancient architecture.

  5. 77 FR 34380 - CenterPoint Energy Gas Transmission Company, LLC; Notice of Request Under Blanket Authorization

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-11

    ... injection and since CenterPoint can no longer purchase replacement parts for the existing compressor unit... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. CP12-467-000] CenterPoint... May 22, 2012, CenterPoint Energy Gas Transmission Company, LLC (CenterPoint), 1111 Louisiana Street...

  6. HIV-1 gp120 neurotoxicity proximally and at a distance from the point of exposure: protection by rSV40 delivery of antioxidant enzymes.

    PubMed

    Louboutin, Jean-Pierre; Agrawal, Lokesh; Reyes, Beverly A S; Van Bockstaele, Elisabeth J; Strayer, David S

    2009-06-01

    Toxicity of the HIV-1 envelope glycoprotein (gp120) for substantia nigra (SN) neurons may contribute to the Parkinsonian manifestations often seen in HIV-1-associated dementia (HAD). We studied the neurotoxicity of gp120 for dopaminergic neurons and potential neuroprotection by antioxidant gene delivery. Rats were injected stereotaxically into their caudate-putamen (CP); CP and SN neuron loss was quantified. The area of neuron loss extended several millimeters from the injection site, covering approximately 35% of the CP area. SN neurons, outside of this area of direct neurotoxicity, were also severely affected. Dopaminergic SN neurons (expressing tyrosine hydroxylase, TH, in the SN and the dopamine transporter, DAT, in the CP) were the most affected: intra-CP gp120 caused approximately 50% DAT+ SN neuron loss. Prior intra-CP gene delivery of Cu/Zn superoxide dismutase (SOD1) or glutathione peroxidase (GPx1) protected SN neurons from intra-CP gp120. Thus, SN dopaminergic neurons are highly sensitive to HIV-1 gp120-induced neurotoxicity, and antioxidant gene delivery, even at a distance, is protective.

  7. Comparison of photogrammetric point clouds with BIM building elements for construction progress monitoring

    NASA Astrophysics Data System (ADS)

    Tuttas, S.; Braun, A.; Borrmann, A.; Stilla, U.

    2014-08-01

    For construction progress monitoring, a planned state of the construction at a certain time (as-planned) has to be compared to the actual state (as-built). The as-planned state is derived from a building information model (BIM), which contains the geometry of the building and the construction schedule. In this paper we introduce an approach for the generation of an as-built point cloud by photogrammetry. Since images on a construction site cannot be taken from every position that might seem necessary, we use a combination of a structure-from-motion process and control points to create a scaled point cloud in a consistent coordinate system. Subsequently this point cloud is used for an as-built vs. as-planned comparison. For that, voxels of an octree are marked as occupied, free or unknown by raycasting based on the triangulated points and the camera positions. This makes it possible to identify building parts that do not yet exist. To verify the existence of building parts, a second test based on the points in front of and behind the as-planned model planes is performed. The proposed procedure is tested on an inner-city construction site under real conditions.
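
    The occupancy raycasting step can be sketched as follows: for every triangulated point, voxels along the camera ray are marked free and the voxel containing the point is marked occupied. This is an illustrative simplification of the octree-based method, using a dense grid and fixed-step sampling instead of octree traversal:

```python
import numpy as np

def carve_occupancy(grid_shape, voxel_size, camera, points):
    """Mark voxels as occupied (2), free (1) or unknown (0) by casting a ray
    from the camera to each triangulated point and sampling along it."""
    state = np.zeros(grid_shape, dtype=np.int8)    # 0 = unknown
    for p in points:
        direction = p - camera
        length = np.linalg.norm(direction)
        n = int(length / (voxel_size * 0.5)) + 1   # half-voxel sampling steps
        for s in np.linspace(0.0, 1.0, n, endpoint=False):
            v = tuple(((camera + s * direction) / voxel_size).astype(int))
            state[v] = 1                           # space before the hit: free
        state[tuple((p / voxel_size).astype(int))] = 2  # hit voxel: occupied
    return state

# Camera near the origin observing a "wall" of triangulated points at x = 0.95 m,
# in a 1 m^3 scene discretized into 10 x 10 x 10 voxels of 0.1 m.
cam = np.array([0.05, 0.05, 0.05])
pts = np.array([[0.95, 0.05, 0.05], [0.95, 0.15, 0.05]])
occ = carve_occupancy((10, 10, 10), 0.1, cam, pts)
print(occ[9, 0, 0], occ[5, 0, 0], occ[0, 9, 9])  # 2 1 0: occupied, free, unknown
```

    As-planned building parts falling in "free" space can then be flagged as not yet built, while "unknown" space remains inconclusive.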

  8. Localization of Pathology on Complex Architecture Building Surfaces

    NASA Astrophysics Data System (ADS)

    Sidiropoulos, A. A.; Lakakis, K. N.; Mouza, V. K.

    2017-02-01

    The technology of 3D laser scanning is considered one of the most common methods for heritage documentation. The point clouds being produced provide information of high detail, both geometric and thematic, and various studies examine techniques for the best exploitation of this information. In this study, an algorithm for localizing pathology, such as cracks and fissures, on complex building surfaces is tested. The algorithm makes use of the points' positions in the point cloud and attempts to separate them into two pattern groups: pathology and non-pathology. The geometric information used for recognizing the pattern of the points is extracted via Principal Component Analysis (PCA) in user-specified neighborhoods of the whole point cloud. The implementation of PCA leads to the definition of the normal vector at each point of the cloud. Two tests that operate separately examine both local and global geometric criteria among the points and conclude which of them should be categorized as pathology. The proposed algorithm was tested on parts of the masonry of the Gazi Evrenos Baths, located in the city of Giannitsa in Northern Greece.
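
    The PCA step described above, deriving a normal vector from a local point neighborhood, can be sketched as below. This is the generic PCA normal estimate; the helper name and the synthetic plane data are ours, not the authors'.

```python
import numpy as np

def pca_normal(neighborhood):
    """Normal vector of a point neighborhood: the eigenvector of the
    covariance matrix with the smallest eigenvalue (standard PCA estimate)."""
    centered = neighborhood - neighborhood.mean(axis=0)
    cov = centered.T @ centered / len(neighborhood)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return eigvecs[:, 0]                     # direction of least variance

# noisy points on the plane z = 0; the normal should be close to (0, 0, +-1)
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, 200),
                       rng.uniform(-1, 1, 200),
                       rng.normal(0, 1e-3, 200)])
n = pca_normal(pts)
```

    Points whose normals deviate strongly from the local consensus are then candidates for the pathology class.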

  9. Error reduction in three-dimensional metrology combining optical and touch probe data

    NASA Astrophysics Data System (ADS)

    Gerde, Janice R.; Christens-Barry, William A.

    2010-08-01

    Analysis of footwear under the Harmonized Tariff Schedule of the United States (HTSUS) is partly based on identifying the boundary ("parting line") between the "external surface area upper" (ESAU) and the sample's sole. Often, that boundary is obscured. We establish the parting line as the curved intersection between the sample's outer surface and its insole surface. The outer surface is determined by discrete point cloud coordinates obtained using a laser scanner. The insole surface is defined by point cloud data obtained using a touch probe device, a coordinate measuring machine (CMM). Because these point cloud data sets do not overlap spatially, a polynomial surface is fitted to the insole data and extended to intersect a mesh fitted to the outer surface point cloud. This line of intersection defines the ESAU boundary, permitting further fractional area calculations to proceed. The defined parting line location is sensitive to the polynomial used to fit the experimental data, and extrapolation to the intersection with the ESAU can heighten this sensitivity. We discuss a methodology for transforming these data into a common reference frame. Three error sources are considered: measurement error in the point cloud coordinates; error from fitting a polynomial surface to a point cloud and then extrapolating beyond the data set; and error from the reference frame transformation. These error sources can influence calculated surface areas. We describe experiments to assess error magnitude, the sensitivity of calculated results to these errors, and ways of minimizing error impact on calculated quantities. Ultimately, we must ensure that statistical error from these procedures is minimized and within acceptance criteria.
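
    The surface-fitting-and-extrapolation step can be illustrated with a simple least-squares quadric. The polynomial degree and the synthetic "insole" data here are assumptions for illustration, not the authors' actual model.

```python
import numpy as np

def fit_quadric(points):
    """Least-squares fit of z = c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2."""
    x, y, z = points.T
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

def eval_quadric(coeffs, x, y):
    """Evaluate the fitted polynomial at (x, y), possibly beyond the data."""
    return coeffs @ np.array([1.0, x, y, x**2, x * y, y**2])

# synthetic "insole" patch z = 0.1*x^2 + 0.05*y sampled on [0, 1] x [0, 1]
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 100)
y = rng.uniform(0, 1, 100)
z = 0.1 * x**2 + 0.05 * y
coeffs = fit_quadric(np.column_stack([x, y, z]))
z_extrap = eval_quadric(coeffs, 2.0, 0.0)   # evaluated beyond the sampled patch
```

    With noise-free data the extrapolated value is exact; with measurement noise the same evaluation far from the data magnifies coefficient errors, which is the sensitivity the abstract discusses.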

  10. Photogrammetric DSM denoising

    NASA Astrophysics Data System (ADS)

    Nex, F.; Gerke, M.

    2014-08-01

    Image matching techniques can nowadays provide very dense point clouds and are often considered a valid alternative to LiDAR point clouds. However, photogrammetric point clouds are often characterized by a higher level of random noise compared to LiDAR data and by the presence of large outliers. These problems limit the practical use of photogrammetric data for many applications, and an effective way to enhance the generated point clouds has yet to be found. In this paper we concentrate on the restoration of Digital Surface Models (DSM) computed from dense image matching point clouds. A photogrammetric DSM, i.e. a 2.5D representation of the surface, is still one of the major products derived from point clouds. Four different algorithms devoted to DSM denoising are presented: a standard median filter approach, a bilateral filter, a variational approach (TGV: Total Generalized Variation), as well as a newly developed algorithm, which is embedded into a Markov Random Field (MRF) framework and optimized through graph-cuts. The ability of each algorithm to recover the original DSM has been quantitatively evaluated. To do so, a synthetic DSM was generated and different typologies of noise were added to mimic the typical errors of photogrammetric DSMs. The evaluation reveals that standard filters like the median and edge-preserving smoothing through a bilateral filter cannot sufficiently remove the typical errors occurring in a photogrammetric DSM. The TGV-based approach removes random noise much better, but large areas with outliers still remain. Our own method, which explicitly models the degradation properties of those DSMs, outperforms the others in all aspects.
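
    The baseline median-filter approach can be sketched in a few lines. The synthetic ramp DSM and the outlier magnitude are our own toy example; the paper's evaluation uses a more elaborate synthetic surface and noise model.

```python
import numpy as np
from scipy.ndimage import median_filter

# synthetic 2.5D DSM: a height ramp with one large spike,
# mimicking a gross matching error in a photogrammetric DSM
dsm = np.tile(np.arange(9, dtype=float), (9, 1))
dsm[4, 4] += 100.0                     # isolated outlier

smoothed = median_filter(dsm, size=3)  # 3x3 median removes the single spike
```

    A single isolated outlier is removed entirely, but, as the evaluation above notes, clusters of outliers larger than the filter window survive, which motivates the MRF-based method.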

  11. Scan-To-BIM Output Validation: Towards a Standardized Geometric Quality Assessment of Building Information Models Based on Point Clouds

    NASA Astrophysics Data System (ADS)

    Bonduel, M.; Bassier, M.; Vergauwen, M.; Pauwels, P.; Klein, R.

    2017-11-01

    The use of Building Information Modeling (BIM) for existing buildings based on point clouds is increasing. Standardized geometric quality assessment of the BIMs is needed to make them more reliable and thus reusable for future users. First, available literature on the subject is studied. Next, an initial proposal for a standardized geometric quality assessment is presented. Finally, this method is tested and evaluated with a case study. The number of specifications on BIM relating to existing buildings is limited. The Levels of Accuracy (LOA) specification of the USIBD provides definitions and suggestions regarding geometric model accuracy, but lacks a standardized assessment method. A deviation analysis is found to be dependent on (1) the used mathematical model, (2) the density of the point clouds and (3) the order of comparison. Results of the analysis can be graphical and numerical. An analysis on macro (building) and micro (BIM object) scale is necessary. On macro scale, the complete model is compared to the original point cloud and vice versa to get an overview of the general model quality. The graphical results show occluded zones and non-modeled objects respectively. Colored point clouds are derived from this analysis and integrated in the BIM. On micro scale, the relevant surface parts are extracted per BIM object and compared to the complete point cloud. Occluded zones are extracted based on a maximum deviation. What remains is classified according to the LOA specification. The numerical results are integrated in the BIM with the use of object parameters.
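
    The core quantity of such a deviation analysis, a signed point-to-model distance, can be sketched for the simplest case of a planar BIM face. The 5 mm band used here is an illustrative threshold, not a value from the LOA specification.

```python
import numpy as np

def plane_deviations(points, plane_point, plane_normal):
    """Signed point-to-plane distances: the basic quantity behind a
    cloud-to-model deviation analysis."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return (points - plane_point) @ n

# wall modeled as the plane x = 0; scanned points sit up to 12 mm off it
pts = np.array([[0.002, 1.0, 1.0],
                [-0.004, 2.0, 0.5],
                [0.012, 0.3, 2.0]])
d = plane_deviations(pts, np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))
within_5mm = np.abs(d) <= 0.005        # an example accuracy band to classify against
```

    Per-point distances like these can be colour-coded back onto the cloud (the graphical result) or aggregated per BIM object and compared against an accuracy band (the numerical result).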

  12. Utilizing the Iterative Closest Point (ICP) algorithm for enhanced registration of high resolution surface models - more than a simple black-box application

    NASA Astrophysics Data System (ADS)

    Stöcker, Claudia; Eltner, Anette

    2016-04-01

    Advances in computer vision and digital photogrammetry (i.e. structure from motion) allow for fast and flexible high resolution data supply. Within geoscience applications, and especially for small-scale surface topography, high resolution digital terrain models and dense 3D point clouds are valuable data sources for capturing actual states as well as for multi-temporal studies. However, there are still limitations regarding robust registration and accuracy demands (e.g. systematic positional errors) which impede the comparison and/or combination of multi-sensor data products. Therefore, post-processing of 3D point clouds can greatly enhance data quality. Here the Iterative Closest Point (ICP) algorithm represents an alignment tool which iteratively minimizes the distances between corresponding points in two datasets. Even though the tool is widely used, it is often applied as a black-box application within 3D data post-processing for surface reconstruction. Aiming for a precise and accurate combination of multi-sensor data sets, this study looks closely at different variants of the ICP algorithm, including the sub-steps of point selection, point matching, weighting, rejection, error metric and minimization. An agriculturally utilized field was investigated simultaneously by terrestrial laser scanning (TLS) and unmanned aerial vehicle (UAV) sensors at two times (once covered with sparse vegetation and once as bare soil). Due to the different perspectives, the two data sets differ in terms of shadowed areas and thus gaps, so that data merging would provide a more consistent surface reconstruction. Although photogrammetric processing already included sub-cm accurate ground control surveys, the UAV point cloud exhibits an offset with respect to the TLS point cloud. In order to derive the transformation matrix for fine registration of the UAV point clouds, different ICP variants were tested. Statistical analyses of the results show that the final success of registration, and therefore data quality, depends particularly on the parameterization and choice of error metric, especially for erroneous data sets as in the case of sparse vegetation cover. Here, the point-to-point metric is more sensitive to data "noise" than the point-to-plane metric, which results in considerably higher cloud-to-cloud distances. In conclusion, given the accuracy demands of high resolution surface reconstruction and the fact that ground control surveys can reach their limits in both time expenditure and terrain accessibility, the ICP algorithm represents a great tool to refine a rough initial alignment. The different variants of the registration modules allow the method to be tailored to the quality of the input data.
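
    A minimal point-to-point ICP variant, one of the families the study compares, can be sketched as follows. Brute-force matching and the synthetic misalignment are our simplifications; real pipelines add the selection, weighting and rejection stages discussed above and use a k-d tree for matching.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (Kabsch/SVD solution for the point-to-point error metric)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def icp_point_to_point(src, dst, iters=20):
    """Minimal ICP: match each source point to its nearest target point,
    solve for the rigid transform, apply it, and repeat."""
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]          # nearest-neighbour matching
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur

# target cloud and a slightly rotated + shifted copy of it
rng = np.random.default_rng(2)
dst = rng.uniform(0, 1, (60, 3))
theta = 0.03
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
src = dst @ Rz.T + np.array([0.01, -0.01, 0.02])
aligned = icp_point_to_point(src, dst)
rms_before = np.linalg.norm(src - dst, axis=1).mean()
rms_after = np.linalg.norm(aligned - dst, axis=1).mean()
```

    Swapping the error metric in `best_rigid_transform` for a point-to-plane formulation is exactly the kind of variant whose noise sensitivity the study evaluates.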

  13. Chicken pox after pediatric liver transplantation.

    PubMed

    Levitsky, Josh; Kalil, Andre C; Meza, Jane L; Hurst, Glenn E; Freifeld, Alison

    2005-12-01

    Previous case series have reported serious complications of chicken pox (CP) after pediatric liver transplantation (PLT), mainly due to visceral dissemination. The goal of our study was to determine the incidence, risk factors, and outcomes of CP after PLT. A case-control study of all CP infections in pediatric transplant recipients followed at our center from September 1993 to April 2004 was performed. Data were collected before and after infection and at the same time points in age-, gender-, and transplant year-matched controls. Potential risk factors prior to CP and adverse outcomes after infection were compared between cases and controls. Twenty (6.2%) developed CP at a median of 1.8 yr (0.6-4.8) after PLT. All CP infections were cutaneous, with no evidence of organ involvement. Twelve were hospitalized: 9 only to receive intravenous acyclovir and 3 stayed ≥2 weeks for other complications. Risk factors were not statistically different among cases and controls. Of the outcomes analyzed, cases were significantly more likely to develop non-CP infections within one year of CP than controls (Hazard Ratio = 12.6, 95% confidence interval = 3.1-51.7; P < 0.001). These infections were often bacterial and occurred long after CP infection. In conclusion, CP is uncommon after PLT and has a low likelihood of organ dissemination. No risk factors were identified. Some cases required prolonged hospitalizations. Close monitoring for the development of late bacterial infections is warranted.

  14. Attention-deficit/hyperactivity disorder and callous-unemotional traits as moderators of conduct problems when examining impairment in emerging adults.

    PubMed

    Babinski, Dara E; Neely, Kristina A; Kunselman, Allen; Waschbusch, Daniel A

    2017-12-01

    This study examines attention-deficit/hyperactivity disorder (ADHD) and callous-unemotional (CU) traits as moderators of the association between conduct problems (CP) and young adult functioning. Young adults (n = 283; M age = 20.82 years; 53.4% female), oversampled for attention and behavior problems, provided self-ratings of ADHD, CP, and CU, and adaptive functioning and psychopathology. ADHD and CU simultaneously moderated relationships between CP and family functioning, tobacco use, and internalizing symptoms. In addition, ADHD moderated the relation between CP and job functioning, and main effects of ADHD in the expected direction were found for educational performance and drug use. CU was associated with poorer educational outcomes. Interestingly, no ADHD, CU, or CP effects were observed for reported alcohol use. Our results highlight the importance of considering ADHD and CU in understanding the impact of CP on young adult functioning and psychopathology, and point to the importance of continued work on this topic.

  15. New observables for $CP$ violation in Higgs decays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yi; Falkowski, Adam; Low, Ian

    Current experimental data on the 125 GeV Higgs boson still allow room for large $CP$ violation. The observables usually considered in this context are triple product asymmetries, which require an input of four visible particles after imposing momentum conservation. Here, we point out a new class of $CP$-violating observables in Higgs physics which require only three reconstructed momenta. They may arise if the process involves an interference of amplitudes with different intermediate particles, which provide distinct “strong phases” in the form of the Breit-Wigner widths, in addition to possible “weak phases” that arise from $CP$-violating couplings of the Higgs in the Lagrangian. As an example, we propose a forward-backward asymmetry of the charged lepton in the three-body Higgs decay, h → ℓ⁻ℓ⁺γ, as a probe for $CP$-violating Higgs couplings to Zγ and γγ pairs. In conclusion, we also discuss other processes exhibiting this type of $CP$ violation.

  16. New observables for $CP$ violation in Higgs decays

    DOE PAGES

    Chen, Yi; Falkowski, Adam; Low, Ian; ...

    2014-12-09

    Current experimental data on the 125 GeV Higgs boson still allow room for large $CP$ violation. The observables usually considered in this context are triple product asymmetries, which require an input of four visible particles after imposing momentum conservation. Here, we point out a new class of $CP$-violating observables in Higgs physics which require only three reconstructed momenta. They may arise if the process involves an interference of amplitudes with different intermediate particles, which provide distinct “strong phases” in the form of the Breit-Wigner widths, in addition to possible “weak phases” that arise from $CP$-violating couplings of the Higgs in the Lagrangian. As an example, we propose a forward-backward asymmetry of the charged lepton in the three-body Higgs decay, h → ℓ⁻ℓ⁺γ, as a probe for $CP$-violating Higgs couplings to Zγ and γγ pairs. In conclusion, we also discuss other processes exhibiting this type of $CP$ violation.

  17. Chance Encounter with a Stratospheric Kerosene Rocket Plume From Russia Over California

    NASA Technical Reports Server (NTRS)

    Newman, P. A.; Wilson, J. C.; Ross, M. N.; Brock, C. A.; Sheridan, P. J.; Schoeberl, M. R.; Lait, L. R.; Bui, T. P.; Loewenstein, M.; Podolske, J. R.; hide

    2000-01-01

    A high-altitude aircraft flight on April 18, 1997 detected an enormous aerosol cloud at 20 km altitude near California (37 N). Not visually observed, the cloud had high concentrations of soot and sulfate aerosol and was over 180 km in horizontal extent. The cloud was probably produced by a large hydrocarbon-fueled vehicle, most likely by rocket motors burning liquid oxygen and kerosene. One of two Russian Soyuz rockets could have produced the cloud: a launch from the Baikonur Cosmodrome, Kazakhstan, on April 6, or one from Plesetsk, Russia, on April 9. Parcel trajectories and long-lived trace gas concentrations suggest the Baikonur launch as the cloud source. Cloud trajectories do not trace the Soyuz plume from Asia to North America, illustrating the uncertainties of point-to-point trajectories. This cloud encounter is the only stratospheric measurement of a plume from a hydrocarbon fuel powered rocket.

  18. 75 FR 353 - AES Sparrows Point LNG, LLC and Mid-Atlantic Express, LLC; Notice of Availability of the Final...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-05

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket Nos. CP07-62-000; CP07-63-000... December 29, 2009. The staff of the Federal Energy Regulatory Commission (FERC or Commission) has prepared... impacts associated with the construction and operation of a liquefied natural gas (LNG) import terminal...

  19. Chopped or Long Roughage: What Do Calves Prefer? Using Cross Point Analysis of Double Demand Functions

    PubMed Central

    Webb, Laura E.; Bak Jensen, Margit; Engel, Bas; van Reenen, Cornelis G.; Gerrits, Walter J. J.; de Boer, Imke J. M.; Bokkers, Eddie A. M.

    2014-01-01

    The present study aimed to quantify calves' (Bos taurus) preference for long versus chopped hay and straw, and hay versus straw, using cross point analysis of double demand functions, in a context where energy intake was not a limiting factor. Nine calves, fed milk replacer and concentrate, were trained to work for roughage rewards from two simultaneously available panels. The cost (number of muzzle presses) required on the panels varied in each session (left panel/right panel): 7/35, 14/28, 21/21, 28/14, 35/7. Demand functions were estimated from the proportion of rewards achieved on one panel relative to the total number of rewards achieved in one session. Cross points (cp) were calculated as the cost at which an equal number of rewards was achieved from both panels. The deviation of the cp from the midpoint (here 21) indicates the strength of the preference. Calves showed a preference for long versus chopped hay (cp = 14.5; P = 0.004), and for hay versus straw (cp = 38.9; P = 0.004), both of which improve rumen function. Long hay may stimulate chewing more than chopped hay, and the preference for hay versus straw could be related to hedonic characteristics. No preference was found for chopped versus long straw (cp = 20.8; P = 0.910). These results could be used to improve the welfare of calves in production systems; for example, in systems where calves are fed hay along with high energy concentrate, providing long hay instead of chopped could promote roughage intake, rumen development, and rumination. PMID:24558426

  20. A Method for the Registration of Hemispherical Photographs and TLS Intensity Images

    NASA Astrophysics Data System (ADS)

    Schmidt, A.; Schilling, A.; Maas, H.-G.

    2012-07-01

    Terrestrial laser scanners generate dense and accurate 3D point clouds with minimal effort, which represent the geometry of real objects, while image data contains texture information of object surfaces. Based on the complementary characteristics of both data sets, a combination is very appealing for many applications, including forest-related tasks. In the scope of our research project, independent data sets of a plain birch stand have been taken by a full-spherical laser scanner and a hemispherical digital camera. Previously, both kinds of data sets have been considered separately: Individual trees were successfully extracted from large 3D point clouds, and so-called forest inventory parameters could be determined. Additionally, a simplified tree topology representation was retrieved. From hemispherical images, leaf area index (LAI) values, as a very relevant parameter for describing a stand, have been computed. The objective of our approach is to merge a 3D point cloud with image data in a way that RGB values are assigned to each 3D point. So far, segmentation and classification of TLS point clouds in forestry applications was mainly based on geometrical aspects of the data set. However, a 3D point cloud with colour information provides valuable cues exceeding simple statistical evaluation of geometrical object features and thus may facilitate the analysis of the scan data significantly.
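
    The final step described above, assigning an RGB value to each 3D point, can be sketched with a simple pinhole projection. The cited work registers a hemispherical camera model, so the pinhole intrinsics and toy image here are our own simplification.

```python
import numpy as np

def colorize(points, image, K):
    """Assign an RGB value to each 3D point (in camera coordinates) by
    projecting it into an image with pinhole intrinsics K."""
    uvw = points @ K.T                       # homogeneous image coordinates
    uv = (uvw[:, :2] / uvw[:, 2:3]).round().astype(int)
    h, w = image.shape[:2]
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    colors = np.zeros((len(points), 3), dtype=image.dtype)
    colors[valid] = image[uv[valid, 1], uv[valid, 0]]   # sample at (row, col)
    return colors, valid

K = np.array([[100.0, 0, 50], [0, 100.0, 50], [0, 0, 1]])  # toy intrinsics
image = np.zeros((100, 100, 3), dtype=np.uint8)
image[50, 50] = (0, 255, 0)                  # green pixel at the principal point
pts = np.array([[0.0, 0.0, 2.0]])            # one point on the optical axis
colors, valid = colorize(pts, image, K)
```

    Points projecting outside the image (or occluded ones, which this sketch ignores) keep no colour, which is why the registration between scanner and camera must be solved first.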

  1. Segmentation of Large Unstructured Point Clouds Using Octree-Based Region Growing and Conditional Random Fields

    NASA Astrophysics Data System (ADS)

    Bassier, M.; Bonduel, M.; Van Genechten, B.; Vergauwen, M.

    2017-11-01

    Point cloud segmentation is a crucial step in scene understanding and interpretation. The goal is to decompose the initial data into sets of workable clusters with similar properties. Additionally, it is a key aspect in the automated procedure from point cloud data to BIM. Current approaches typically segment only a single type of primitive such as planes or cylinders. Also, current algorithms suffer from oversegmenting the data and are often sensor or scene dependent. In this work, a method is presented to automatically segment large unstructured point clouds of buildings. More specifically, the segmentation is formulated as a graph optimisation problem. First, the data is oversegmented with a greedy octree-based region growing method. The growing is conditioned on the segmentation of planes as well as smooth surfaces. Next, the candidate clusters are represented by a Conditional Random Field, after which the most likely configuration of candidate clusters is computed given a set of local and contextual features. The experiments show that the proposed method is a fast and reliable framework for unstructured point cloud segmentation. Processing speeds of up to 40,000 points per second are recorded for the region growing. Additionally, the recall and precision of the graph clustering are approximately 80%. Overall, nearly 22% of the oversegmentation is reduced by clustering the data. These clusters will be classified and used as a basis for the reconstruction of BIM models.

  2. SigVox - A 3D feature matching algorithm for automatic street object recognition in mobile laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Wang, Jinhu; Lindenbergh, Roderik; Menenti, Massimo

    2017-06-01

    Urban road environments contain a variety of objects, including different types of lamp poles and traffic signs. Their monitoring is traditionally conducted by visual inspection, which is time consuming and expensive. Mobile laser scanning (MLS) systems sample the road environment efficiently by acquiring large and accurate point clouds. This work proposes a methodology for urban road object recognition from MLS point clouds. The proposed method uses, for the first time, shape descriptors of complete objects to match repetitive objects in large point clouds. To do so, a novel 3D multi-scale shape descriptor is introduced that is embedded in a workflow which efficiently and automatically identifies different types of lamp poles and traffic signs. The workflow starts by tiling the raw point clouds along the scanning trajectory and identifying non-ground points. After voxelization of the non-ground points, connected voxels are clustered to form candidate objects. For automatic recognition of lamp poles and street signs, a 3D significant eigenvector based shape descriptor using voxels (SigVox) is introduced. The 3D SigVox descriptor is constructed by first subdividing the points with an octree into several levels. Next, significant eigenvectors of the points in each voxel are determined by principal component analysis (PCA) and mapped onto the appropriate triangle of a sphere-approximating icosahedron. This step is repeated for different scales. By determining the similarity of 3D SigVox descriptors between candidate point clusters and training objects, street furniture is automatically identified. The feasibility and quality of the proposed method are verified on two point clouds obtained in opposite directions along a 4 km stretch of road. Six types of lamp pole and four types of road sign were selected as objects of interest. Ground truth validation showed that the overall accuracy of the ∼170 automatically recognized objects is approximately 95%. The results demonstrate that the proposed method is able to recognize street furniture in a practical scenario. Remaining difficult cases are touching objects, such as a lamp pole close to a tree.
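
    The voxelization-and-clustering step, where connected voxels form candidate objects, can be sketched as follows. SciPy's 6-connected component labelling stands in for the paper's own clustering, and the two point blobs are synthetic.

```python
import numpy as np
from scipy.ndimage import label

def cluster_voxels(points, voxel_size):
    """Voxelize a point cloud and cluster 6-connected occupied voxels,
    returning one integer cluster label per input point."""
    idx = np.floor(points / voxel_size).astype(int)
    idx -= idx.min(axis=0)                      # shift indices into non-negative range
    grid = np.zeros(idx.max(axis=0) + 1, dtype=bool)
    grid[tuple(idx.T)] = True                   # mark occupied voxels
    labels, n = label(grid)                     # connected-component labelling
    return labels[tuple(idx.T)], n

# two well separated blobs of points -> two candidate objects
a = np.random.default_rng(3).uniform(0, 1, (200, 3))
b = a + np.array([5.0, 0.0, 0.0])
pt_labels, n_clusters = cluster_voxels(np.vstack([a, b]), 0.5)
```

    Each resulting cluster would then be described (here, by the SigVox descriptor) and matched against the training objects.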

  3. Genomic cloud computing: legal and ethical points to consider

    PubMed Central

    Dove, Edward S; Joly, Yann; Tassé, Anne-Marie; Burton, Paul; Chisholm, Rex; Fortier, Isabel; Goodwin, Pat; Harris, Jennifer; Hveem, Kristian; Kaye, Jane; Kent, Alistair; Knoppers, Bartha Maria; Lindpaintner, Klaus; Little, Julian; Riegman, Peter; Ripatti, Samuli; Stolk, Ronald; Bobrow, Martin; Cambon-Thomsen, Anne; Dressler, Lynn; Joly, Yann; Kato, Kazuto; Knoppers, Bartha Maria; Rodriguez, Laura Lyman; McPherson, Treasa; Nicolás, Pilar; Ouellette, Francis; Romeo-Casabona, Carlos; Sarin, Rajiv; Wallace, Susan; Wiesner, Georgia; Wilson, Julia; Zeps, Nikolajs; Simkevitz, Howard; De Rienzo, Assunta; Knoppers, Bartha M

    2015-01-01

    The biggest challenge in twenty-first century data-intensive genomic science is developing vast computer infrastructure and advanced software tools to perform comprehensive analyses of genomic data sets for biomedical research and clinical practice. Researchers are increasingly turning to cloud computing both as a solution to integrate data from genomics, systems biology and biomedical data mining and as an approach to analyze data to solve biomedical problems. Although cloud computing provides several benefits such as lower costs and greater efficiency, it also raises legal and ethical issues. In this article, we discuss three key ‘points to consider’ (data control; data security, confidentiality and transfer; and accountability) based on a preliminary review of several publicly available cloud service providers' Terms of Service. These ‘points to consider’ should be borne in mind by genomic research organizations when negotiating legal arrangements to store genomic data on a large commercial cloud service provider's servers. Diligent genomic cloud computing means leveraging security standards and evaluation processes as a means to protect data and entails many of the same good practices that researchers should always consider in securing their local infrastructure. PMID:25248396

  4. Genomic cloud computing: legal and ethical points to consider.

    PubMed

    Dove, Edward S; Joly, Yann; Tassé, Anne-Marie; Knoppers, Bartha M

    2015-10-01

    The biggest challenge in twenty-first century data-intensive genomic science is developing vast computer infrastructure and advanced software tools to perform comprehensive analyses of genomic data sets for biomedical research and clinical practice. Researchers are increasingly turning to cloud computing both as a solution to integrate data from genomics, systems biology and biomedical data mining and as an approach to analyze data to solve biomedical problems. Although cloud computing provides several benefits such as lower costs and greater efficiency, it also raises legal and ethical issues. In this article, we discuss three key 'points to consider' (data control; data security, confidentiality and transfer; and accountability) based on a preliminary review of several publicly available cloud service providers' Terms of Service. These 'points to consider' should be borne in mind by genomic research organizations when negotiating legal arrangements to store genomic data on a large commercial cloud service provider's servers. Diligent genomic cloud computing means leveraging security standards and evaluation processes as a means to protect data and entails many of the same good practices that researchers should always consider in securing their local infrastructure.

  5. Critical behavior of the contact process on small-world networks

    NASA Astrophysics Data System (ADS)

    Ferreira, Ronan S.; Ferreira, Silvio C.

    2013-11-01

    We investigate the role of clustering in the critical behavior of the contact process (CP) on small-world networks, using the Watts-Strogatz (WS) network model with an edge rewiring probability p. The critical point is well predicted by a homogeneous cluster approximation in the limit of vanishing clustering (p → 1). The critical exponents and dimensionless moment ratios of the CP are in agreement with those predicted by the mean-field theory for any p > 0. This independence from the network clustering shows that the small-world property is a sufficient condition for the mean-field theory to correctly predict the universality of the model. Moreover, we compare the CP dynamics on WS networks with rewiring probability p = 1 and on random regular networks, and show that the weak heterogeneity of the WS network slightly changes the critical point but does not alter other critical quantities of the model.
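
    A toy version of this setup, a WS network plus an asynchronous contact process, might look like the following. The network size, rates and update scheme are illustrative assumptions, not the authors' simulation code.

```python
import random

def watts_strogatz(n, k, p, rng):
    """Ring lattice with k neighbours per node; each edge's target is
    rewired to a random node with probability p (Watts-Strogatz model)."""
    edges = set()
    for i in range(n):
        for j in range(1, k // 2 + 1):
            a, b = i, (i + j) % n
            if rng.random() < p:               # rewire this edge
                b = rng.randrange(n)
                while b == a or (a, b) in edges or (b, a) in edges:
                    b = rng.randrange(n)
            edges.add((a, b))
    adj = {i: [] for i in range(n)}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    return adj

def contact_process(adj, lam, steps, rng):
    """Asynchronous contact process: at each event a random infected node
    recovers (rate 1) or tries to infect a random neighbour (rate lam)."""
    infected = set(adj)                        # start from a fully infected network
    for _ in range(steps):
        if not infected:
            break                              # absorbing (all-healthy) state reached
        node = rng.choice(sorted(infected))
        if rng.random() < 1.0 / (1.0 + lam):
            infected.discard(node)             # recovery event
        else:
            infected.add(rng.choice(adj[node]))   # infection attempt
    return len(infected)

rng = random.Random(0)
adj = watts_strogatz(200, 4, 0.5, rng)
survivors_low = contact_process(adj, 0.0, 5000, rng)   # lam = 0: must die out
survivors_high = contact_process(adj, 4.0, 5000, rng)  # far above threshold: survives
```

    Sweeping lam between these extremes and locating where long-run survival sets in is, in spirit, how the critical point studied above is estimated numerically.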

  6. Comparison of roadway roughness derived from LIDAR and SFM 3D point clouds.

    DOT National Transportation Integrated Search

    2015-10-01

    This report describes a short-term study undertaken to investigate the potential for using dense three-dimensional (3D) point clouds generated from light detection and ranging (LIDAR) and photogrammetry to assess roadway roughness. Spatially cont...

  7. 3D Modeling of Components of a Garden by Using Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Kumazaki, R.; Kunii, Y.

    2016-06-01

    Laser measurement is currently applied to several tasks such as plumbing management, road investigation through mobile mapping systems, and elevation model utilization through airborne LiDAR. Effective laser measurement methods have been well-documented in civil engineering, but few attempts have been made to establish equally effective methods in landscape engineering. By using point cloud data acquired through laser measurement, the aesthetic landscaping of Japanese gardens can be enhanced. This study focuses on simple landscape simulations for pruning and rearranging trees as well as rearranging rocks, lanterns, and other garden features by using point cloud data. However, such simulations lack concreteness. Therefore, this study considers the construction of a library of garden features extracted from point cloud data. The library would serve as a resource for creating new gardens and simulating gardens prior to conducting repairs. Extracted garden features are imported as 3ds Max objects, and realistic 3D models are generated by using a material editor system. As further work toward the publication of a 3D model library, file formats for tree crowns and trunks should be adjusted. Moreover, reducing the size of created models is necessary. Models created using point cloud data are informative because simply shaped garden features such as trees are often seen in the 3D industry.

  8. Extractive biodegradation and bioavailability assessment of phenanthrene in the cloud point system by Sphingomonas polyaromaticivorans.

    PubMed

    Pan, Tao; Deng, Tao; Zeng, Xinying; Dong, Wei; Yu, Shuijing

    2016-01-01

    The biological treatment of polycyclic aromatic hydrocarbons is an important issue. Most microbes have limited practical applications because of the poor bioavailability of polycyclic aromatic hydrocarbons. In this study, the extractive biodegradation of phenanthrene by Sphingomonas polyaromaticivorans was conducted by introducing the cloud point system. The cloud point system is composed of a mixture of (40 g/L) Brij 30 and Tergitol TMN-3, which are nonionic surfactants, in equal proportions. After phenanthrene degradation, a higher wet cell weight and lower phenanthrene residue were obtained in the cloud point system than that in the control system. According to the results of high-performance liquid chromatography, the residual phenanthrene preferred to partition from the dilute phase into the coacervate phase. The concentration of residual phenanthrene in the dilute phase (below 0.001 mg/L) is lower than its solubility in water (1.18 mg/L) after extractive biodegradation. Therefore, dilute phase detoxification was achieved, thus indicating that the dilute phase could be discharged without causing phenanthrene pollution. Bioavailability was assessed by introducing the apparent logP in the cloud point system. Apparent logP decreased significantly, thus indicating that the bioavailability of phenanthrene increased remarkably in the system. This study provides a potential application of biological treatment in water and soil contaminated by phenanthrene.

  9. Quantitative evaluation for small surface damage based on iterative difference and triangulation of 3D point cloud

    NASA Astrophysics Data System (ADS)

    Zhang, Yuyan; Guo, Quanli; Wang, Zhenchun; Yang, Degong

    2018-03-01

    This paper proposes a non-contact, non-destructive evaluation method for the surface damage of high-speed sliding electrical contact rails. The proposed method establishes a model of damage identification and calculation. A laser scanning system is built to obtain the 3D point cloud data of the rail surface. In order to extract the damage region of the rail surface, the 3D point cloud data are processed using iterative difference, nearest-neighbour search and a data registration algorithm. The curvature of the point cloud data in the damage region is mapped to RGB color information, which directly reflects the trend of the curvature in the damage region. The extracted damage region is divided into triangular prism elements by triangulation. The volume and mass of a single element are calculated by geometric segmentation. Finally, the total volume and mass of the damage region are obtained by the principle of superposition. The proposed method is applied to several typical damage cases and the results are discussed. The experimental results show that the algorithm can identify damage shapes and calculate damage mass with milligram precision, which is useful for evaluating the damage in a further research stage.
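
    The element-wise volume computation described above (triangular base area times mean damage depth, summed by superposition) can be sketched as follows. The function names and the sample geometry are illustrative, not taken from the paper:

```python
import numpy as np

def prism_volume(tri_xy, depths):
    """Volume of one triangular prism element: base area times mean depth.

    tri_xy : (3, 2) array of the triangle's XY vertices on the rail surface.
    depths : (3,) array of damage depths at each vertex.
    """
    (x1, y1), (x2, y2), (x3, y3) = tri_xy
    base_area = 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
    return base_area * float(np.mean(depths))

def damage_mass(triangles, depth_sets, density):
    """Total damage mass by superposing the per-element volumes."""
    total_volume = sum(prism_volume(t, d) for t, d in zip(triangles, depth_sets))
    return total_volume * density

# Two right triangles tiling a 1 mm x 1 mm patch, uniform 0.2 mm damage depth.
tris = [np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]),
        np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])]
deps = [np.full(3, 0.2), np.full(3, 0.2)]
vol = sum(prism_volume(t, d) for t, d in zip(tris, deps))
print(vol)  # 0.2 (mm^3)
```

    Multiplying by the rail material's density then gives the element mass, matching the superposition step of the abstract.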

  10. Effects of intrathecal baclofen therapy on motor and cognitive functions in a rat model of cerebral palsy.

    PubMed

    Nomura, Sadahiro; Kagawa, Yoshiteru; Kida, Hiroyuki; Maruta, Yuichi; Imoto, Hirochika; Fujii, Masami; Suzuki, Michiyasu

    2012-02-01

    Cerebral palsy (CP) arises in the early stages of brain development and manifests as spastic paresis that is often associated with cognitive dysfunction. Available CP treatments are aimed at the management of spasticity and include botulinum toxin administration, selective dorsal rhizotomy, and intrathecal baclofen (ITB). In this study, the authors investigated whether the management of spasticity with ITB therapy affected motor function and whether the release of spasticity was associated with an improvement in intellectual function. Newborn Sprague-Dawley rats were divided into the following groups: control, CP model, and CP model with ITB therapy. For the CP model, postnatal Day 7 (P7) rats were exposed to hypoxic conditions (8% O2) for 150 minutes after ligation of the right common carotid artery. In the groups receiving ITB therapy, a spinal catheter was connected to an osmotic pump filled with baclofen and placed in the spinal subarachnoid space on P21 in the early group and on P35 in the late group. A daily dose of 12 μg of baclofen was continuously administered until P49, resulting in 28 days of therapy in the early group and 14 days in the late group. Changes in spasticity in the CP and CP with ITB treatment groups were confirmed by assessing the motor evoked potential in the plantar muscle. In the CP group, the time required to complete a beam-walking test on P49 was significantly longer than that in the control and ITB treatment groups (4.15 ± 0.60 vs 2.10 ± 0.18 and 2.22 ± 0.22 seconds, respectively). Results of the beam-walking test are expressed as the mean ± SD. Radial arm maze performance on P49 indicated that spatial reference memory had significantly deteriorated in the CP group compared with controls (2.33 ± 0.87 vs 0.86 ± 0.90 points); moreover, working memory was also negatively affected by CP (0.78 ± 1.09 vs 0.14 ± 0.38 points). Results of the memory tests are expressed as the mean ± SE.
These memory functions did not recover after ITB treatment. Management of spasticity with ITB therapy improved the walking ability in the rat CP model. Intrathecal baclofen therapy-which reduces harmful sensory and motor stimulations caused by spasticity to more optimal levels-contributed to motor function recovery; however, it had no effect on intellectual recovery as assessed by memory performance in the rat CP model.

  11. Comparison of DSMs acquired by terrestrial laser scanning, UAV-based aerial images and ground-based optical images at the Super-Sauze landslide

    NASA Astrophysics Data System (ADS)

    Rothmund, Sabrina; Niethammer, Uwe; Walter, Marco; Joswig, Manfred

    2013-04-01

    In recent years, the high-resolution and multi-temporal 3D mapping of the Earth's surface using terrestrial laser scanning (TLS), ground-based optical images and especially low-cost aerial images from unmanned aerial vehicles (UAVs) has grown in importance. This development has resulted from the progressive technical improvement of imaging systems and from freely available multi-view stereo (MVS) software packages. These different methods of data acquisition for the generation of accurate, high-resolution digital surface models (DSMs) were applied as part of an eight-week field campaign at the Super-Sauze landslide (South French Alps). An area of approximately 10,000 m² with long-term average displacement rates greater than 0.01 m/day was investigated. The TLS-based point clouds were acquired at different viewpoints, with an average point spacing between 10 and 40 mm, and at different dates. On these days, more than 50 optical images were taken with a low-cost digital compact camera at points along a predefined line on the side of the landslide. Additionally, aerial images were taken by a radio-controlled mini quad-rotor UAV equipped with another low-cost digital compact camera. The flight altitude ranged between 20 m and 250 m, producing a corresponding ground resolution between 0.6 cm and 7 cm. DGPS measurements were carried out as well in order to geo-reference and validate the point cloud data. To generate unscaled photogrammetric 3D point clouds from a disordered and tilted image set, we use the widespread open-source software packages Bundler and PMVS2 (University of Washington). These multi-temporal DSMs are required, on the one hand, to determine the three-dimensional surface deformations and, on the other hand, for the differential correction needed in orthophoto production. Drawing on the example of the data acquired at the Super-Sauze landslide, we demonstrate the potential but also the limitations of the photogrammetric point clouds.
    To determine the quality of the photogrammetric point clouds, they are compared with the TLS-based DSMs. The comparison shows that photogrammetric point accuracies are in the cm-to-dm range and therefore do not reach the quality of the high-resolution TLS-based DSMs. Further, the validation of the photogrammetric point clouds reveals that some of them exhibit internal curvature effects. The advantages of photogrammetric 3D data acquisition are the use of low-cost equipment and less time-consuming data collection in the field. While the accuracy of the photogrammetric point clouds is not as high as that of the TLS-based DSMs, the advantages of the former method emerge in applications where dm-range accuracy is sufficient.
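
    The point cloud comparison underlying such quality assessments is typically a nearest-neighbour cloud-to-cloud distance. A minimal numpy sketch, brute force for clarity and with hypothetical sample points:

```python
import numpy as np

def cloud_to_cloud_distance(eval_cloud, ref_cloud):
    """Distance from each evaluated point to its nearest reference point.
    Brute force for clarity; real comparisons use a k-d tree or tools
    such as CloudCompare."""
    # Pairwise differences: (n_eval, n_ref, 3), then the minimum norm per row.
    diff = eval_cloud[:, None, :] - ref_cloud[None, :, :]
    return np.min(np.linalg.norm(diff, axis=2), axis=1)

# Hypothetical check of a photogrammetric patch against a TLS reference (m).
tls = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
photo = np.array([[0.0, 0.0, 0.05], [1.02, 0.0, 0.0]])
d = cloud_to_cloud_distance(photo, tls)
print(np.round(d, 2))  # [0.05 0.02]
```

    Summary statistics of these distances (mean, RMS, percentiles) are what a cm-to-dm accuracy statement like the one above refers to.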

  12. 50 years of CP violation — What have we learned?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKellar, Bruce H. J.

    Early after the discovery of CP violation, the explanation of how the Standard Model of particle physics could allow CP violation was quickly given, but it took many years for the original observation to be unequivocally explained on that basis. It was also proposed that this observation opened up the possibility that we could now explain the fact that the universe is made of matter. Remarkably, 50 years later we have no evidence in particle physics of any CP violation except that of the Kobayashi-Maskawa mechanism of the Standard Model, yet we fail completely to explain the baryon asymmetry of the Universe through that mechanism. After reviewing the main points in this history, I describe the present experimental attempts to find CP violation beyond the Standard Model, and the theoretical attempts to explain the matter in the Universe.

  13. Automated Coarse Registration of Point Clouds in 3d Urban Scenes Using Voxel Based Plane Constraint

    NASA Astrophysics Data System (ADS)

    Xu, Y.; Boerner, R.; Yao, W.; Hoegner, L.; Stilla, U.

    2017-09-01

    For obtaining full coverage of 3D scans in a large-scale urban area, registration between point clouds acquired via terrestrial laser scanning (TLS) is normally mandatory. However, due to the complex urban environment, the automatic registration of different scans is still a challenging problem. In this work, we propose an automatic, marker-free method for fast, coarse registration between point clouds using the geometric constraints of planar patches under a voxel structure. Our proposed method consists of four major steps: the voxelization of the point cloud, the approximation of planar patches, the matching of corresponding patches, and the estimation of transformation parameters. In the voxelization step, the point cloud of each scan is organized with a 3D voxel structure, by which the entire point cloud is partitioned into small individual patches. In the following step, we represent the points of each voxel with an approximated plane function and select those patches resembling planar surfaces. Afterwards, a RANSAC-based strategy is applied to match corresponding patches. Among all the planar patches of a scan, we randomly select a set of three planar patches and build a coordinate frame from their normal vectors and intersection point. The transformation parameters between scans are calculated from these two coordinate frames. The candidate set whose transformation parameters yield the largest number of coplanar patches is identified as the optimal one for estimating the correct transformation parameters. Experimental results using TLS datasets of different scenes reveal that our proposed method is both effective and efficient for the coarse registration task. In particular, for fast orientation between scans, our proposed method achieves a registration error of less than about 2 degrees on the test datasets and is much more efficient than the classical baseline methods.
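
    The frame-construction step can be illustrated as follows: given three matched planar-patch normals per scan, build an orthonormal frame from each triplet and read off the rotation. This is a simplified sketch (Gram-Schmidt in place of the paper's full procedure, which also uses the patches' intersection point to recover translation); all names and the demo rotation are illustrative:

```python
import numpy as np

def frame_from_normals(n1, n2, n3):
    """Right-handed orthonormal frame from three non-parallel plane normals.
    In this sketch the cross product already fixes the third axis, so n3 is
    kept only for symmetry with the three-patch candidate set."""
    e1 = np.asarray(n1, float)
    e1 = e1 / np.linalg.norm(e1)
    e2 = np.asarray(n2, float) - np.dot(n2, e1) * e1   # Gram-Schmidt step
    e2 = e2 / np.linalg.norm(e2)
    e3 = np.cross(e1, e2)                              # completes the frame
    return np.column_stack([e1, e2, e3])

def rotation_between(normals_a, normals_b):
    """Rotation taking the directions of scan A into those of scan B."""
    Fa = frame_from_normals(*normals_a)
    Fb = frame_from_normals(*normals_b)
    return Fb @ Fa.T   # both frames are orthonormal, so Fa^-1 = Fa^T

# Demo: rotate three known normals by 30 degrees about z and recover R.
th = np.radians(30.0)
R = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0,         0.0,        1.0]])
na = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0])]
nb = [R @ n for n in na]
R_est = rotation_between(na, nb)
print(np.allclose(R_est, R))  # True
```

    In the RANSAC loop of the abstract, each random triplet yields one such candidate rotation, scored by how many patches it brings into coplanarity.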

  14. Cloud Modeling

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Moncrieff, Mitchell; Einaud, Franco (Technical Monitor)

    2001-01-01

    Numerical cloud models have been developed and applied extensively to study cloud-scale and mesoscale processes during the past four decades. The distinctive aspect of these cloud models is their ability to treat explicitly (or resolve) cloud-scale dynamics. This requires the cloud models to be formulated from the non-hydrostatic equations of motion, which explicitly include the vertical acceleration terms, since the vertical and horizontal scales of convection are similar. Such models are also necessary to allow gravity waves, such as those triggered by clouds, to be resolved explicitly. In contrast, the hydrostatic approximation, usually applied in global or regional models, does not allow such gravity waves to be resolved. In addition, the availability of exponentially increasing computer capabilities has resulted in time integrations increasing from hours to days, domain grids increasing from fewer than 2,000 to more than 2,500,000 grid points with 500 to 1000 m resolution, and 3D models becoming increasingly prevalent. The cloud-resolving model is now at a stage where it can provide reasonably accurate statistical information about the sub-grid, cloud-resolving processes that are poorly parameterized in climate models and numerical prediction models.

  15. Development of clinical pharmacy key performance indicators for hospital pharmacists using a modified Delphi approach.

    PubMed

    Fernandes, Olavo; Gorman, Sean K; Slavik, Richard S; Semchuk, William M; Shalansky, Steve; Bussières, Jean-François; Doucette, Douglas; Bannerman, Heather; Lo, Jennifer; Shukla, Simone; Chan, Winnie W Y; Benninger, Natalie; MacKinnon, Neil J; Bell, Chaim M; Slobodan, Jeremy; Lyder, Catherine; Zed, Peter J; Toombs, Kent

    2015-06-01

    Key performance indicators (KPIs) are quantifiable measures of quality. There are no published, systematically derived clinical pharmacy KPIs (cpKPIs). A group of hospital pharmacists aimed to develop national cpKPIs to advance clinical pharmacy practice and improve patient care. A cpKPI working group established a cpKPI definition, 8 evidence-derived cpKPI critical activity areas, 26 candidate cpKPIs, and 11 cpKPI ideal attributes in addition to 1 overall consensus criterion. Twenty-six clinical pharmacists and hospital pharmacy leaders participated in an internet-based 3-round modified Delphi survey. Panelists rated 26 candidate cpKPIs using 11 cpKPI ideal attributes and 1 overall consensus criterion on a 9-point Likert scale. A meeting was facilitated between rounds 2 and 3 to debate the merits and wording of candidate cpKPIs. Consensus was reached if 75% or more of panelists assigned a score of 7 to 9 on the consensus criterion during the third Delphi round. All panelists completed the 3 Delphi rounds, and 25/26 (96%) attended the meeting. Eight candidate cpKPIs met the consensus definition: (1) performing admission medication reconciliation (including best-possible medication history), (2) participating in interprofessional patient care rounds, (3) completing pharmaceutical care plans, (4) resolving drug therapy problems, (5) providing in-person disease and medication education to patients, (6) providing discharge patient medication education, (7) performing discharge medication reconciliation, and (8) providing bundled, proactive direct patient care activities. A Delphi panel of hospital pharmacists was successful in determining 8 consensus cpKPIs. Measurement and assessment of these cpKPIs will serve to advance clinical pharmacy practice and improve patient care. © The Author(s) 2015.
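
    The consensus rule can be expressed compactly. This sketch assumes only the criterion stated in the abstract (at least 75% of panelists scoring 7-9 on the 9-point scale); the ratings are hypothetical:

```python
def consensus_reached(ratings, cutoff=7, threshold=0.75):
    """Modified-Delphi consensus rule: consensus if at least 75% of
    panelists score the criterion 7-9 on the 9-point Likert scale."""
    high = sum(1 for r in ratings if r >= cutoff)
    return high / len(ratings) >= threshold

# Hypothetical third-round scores: 20 of 26 panelists (77%) rate 7 or higher.
print(consensus_reached([7] * 20 + [5] * 6))  # True
print(consensus_reached([7] * 19 + [5] * 7))  # False (19/26 = 73%)
```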

  16. Rapid Topographic Mapping Using TLS and UAV in a Beach-dune-wetland Environment: Case Study in Freeport, Texas, USA

    NASA Astrophysics Data System (ADS)

    Ding, J.; Wang, G.; Xiong, L.; Zhou, X.; England, E.

    2017-12-01

    Coastal regions are naturally vulnerable to impacts from long-term coastal erosion and from episodic coastal hazards caused by extreme weather events. Major geomorphic changes can occur within a few hours during storms. Prediction of storm impact, coastal planning, and resilience assessment after natural events all require accurate and up-to-date topographic maps of coastal morphology. Thus, the ability to conduct rapid, high-resolution, high-accuracy topographic mapping is of critical importance for long-term coastal management and rapid response after natural hazard events. Terrestrial laser scanning (TLS) techniques have been frequently applied to beach and dune erosion studies and post-hazard responses. However, TLS surveying is relatively slow and costly for rapid surveying. Furthermore, TLS surveying unavoidably leaves gray areas that cannot be reached by laser pulses, particularly in wetland areas, which in most cases lack direct access. Aerial mapping using photogrammetry from images taken by unmanned aerial vehicles (UAVs) has become a new technique for rapid topographic mapping. UAV photogrammetry mapping techniques provide the ability to map coastal features quickly, safely, inexpensively, on short notice, and with minimal impact. The primary products from photogrammetry are point clouds similar to LiDAR point clouds. However, a large number of ground control points (ground truth) are essential for obtaining high-accuracy UAV maps. The ground control points are often obtained by GPS survey simultaneously with the TLS survey in the field, and the GPS survey can be a slow and arduous process. This study aims to develop methods for acquiring a large number of ground control points from the TLS survey and for validating point clouds obtained from photogrammetry against the TLS point clouds. A Riegl VZ-2000 TLS scanner was used for developing laser point clouds and a DJI Phantom 4 Pro UAV was used for acquiring images. The aerial images were processed with the photogrammetry software Agisoft PhotoScan. A workflow for conducting a rapid TLS and UAV survey in the field and integrating the point clouds obtained from TLS and UAV surveying is introduced. Key words: UAV photogrammetry, ground control points, TLS, coastal morphology, topographic mapping

  17. Enzyme complex containing carbohydrases and phytase improves growth performance and bone mineralization of broilers fed reduced nutrient corn-soybean-based diets.

    PubMed

    Francesch, M; Geraert, P A

    2009-09-01

    One experiment was conducted to investigate the benefits of a multi-enzyme complex, containing carbohydrase (from Penicillium funiculosum) and phytase (bacterial 6-phytase) activities, on the performance and bone mineralization of broiler chickens fed corn-soybean meal diets. A total of 2,268 male broilers were allocated to 9 treatments, replicated 6 times, in a randomized complete block design from 1 to 43 d. A positive control (PC) diet formulated to be adequate in nutrients and 4 reduced-nutrient diets (NC1 to NC4), with gradual decreases in AME, CP and digestible amino acids (CP-dAA), and in available P (avP) and Ca contents, with or without enzyme supplementation, were tested. The nutrient reductions applied were NC1 (-65 kcal/kg, -1.5% CP-dAA) and NC2 (-85 kcal/kg, -3.0% CP-dAA), both with -0.15 percentage points avP and -0.12 percentage points Ca, and NC3 (-65 kcal/kg, -1.5% CP-dAA) and NC4 (-85 kcal/kg, -3.0% CP-dAA), both with -0.20 percentage points avP and -0.16 percentage points Ca. Supplementation of the NC diets with the enzyme complex increased ADFI (P<0.001) and ADG (P<0.001) and reduced feed:gain (P<0.01). The magnitude of the enzyme effect in increasing feed intake and weight gain was greater for the diets with the greatest reductions in avP and Ca. Enzyme supplementation increased (P<0.001) the feed intake of birds fed the NC diets to close to the level of feed consumption of the PC. Enzyme supplementation of the NC diets resulted in all cases in a lower (P<0.05) feed:gain than the PC. Enzyme supplementation of the NC1 and NC3 diets restored bone mineralization to that of the PC, whereas ash and Ca with the NC2 and NC4 diets and P with the NC4 diet remained lower (P<0.05). These results suggest that dietary supplementation with a multi-enzyme complex containing nonstarch polysaccharide enzymes and phytase is efficient in reducing the P, energy, protein, and amino acid specifications of corn-soybean meal diets.

  18. Sparse Unorganized Point Cloud Based Relative Pose Estimation for Uncooperative Space Target.

    PubMed

    Yin, Fang; Chou, Wusheng; Wu, Yun; Yang, Guang; Xu, Song

    2018-03-28

    This paper proposes an autonomous algorithm to determine the relative pose between a chaser spacecraft and an uncooperative space target, which is essential in advanced space applications, e.g., on-orbit servicing missions. The proposed method, named the Congruent Tetrahedron Align (CTA) algorithm, uses the very sparse unorganized 3D point cloud acquired by a LIDAR sensor and does not require any prior pose information. The core of the method is to determine the relative pose by looking for congruent tetrahedra in the scanning point cloud and the model point cloud, on the basis of the target's known model. A two-level index hash table is built to speed up the search. In addition, the Iterative Closest Point (ICP) algorithm is used for pose tracking after CTA. In order to evaluate the method under arbitrary initial attitudes, a simulation system is presented. Specifically, the performance of the proposed method in providing the initial pose needed for the tracking algorithm is demonstrated, as well as its robustness against noise. Finally, a field experiment was conducted, and the results demonstrate the effectiveness of the proposed method.
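
    The congruence search can be illustrated with a sorted edge-length signature, which is invariant under rigid motion and therefore usable as a hash key for looking up candidate tetrahedra in the model cloud. This is a sketch of the idea only, not the paper's two-level index:

```python
import numpy as np
from itertools import combinations

def tetra_signature(pts, decimals=3):
    """Sorted tuple of the six pairwise edge lengths of a 4-point set.
    Congruent tetrahedra share this signature (up to rounding), so it can
    serve as a hash-table key when searching the model point cloud."""
    d = [round(float(np.linalg.norm(pts[i] - pts[j])), decimals)
         for i, j in combinations(range(4), 2)]
    return tuple(sorted(d))

# A tetrahedron and a rigidly moved copy hash to the same key.
tet = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
th = np.radians(40.0)
R = np.array([[np.cos(th), -np.sin(th), 0.],
              [np.sin(th),  np.cos(th), 0.],
              [0., 0., 1.]])
moved = tet @ R.T + np.array([2., -1., 0.5])
print(tetra_signature(tet) == tetra_signature(moved))  # True
```

    Once a congruent pair is found, the rigid transform aligning the two tetrahedra gives the coarse pose that ICP then refines.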

  19. Interactive Classification of Construction Materials: Feedback Driven Framework for Annotation and Analysis of 3d Point Clouds

    NASA Astrophysics Data System (ADS)

    Hess, M. R.; Petrovic, V.; Kuester, F.

    2017-08-01

    Digital documentation of cultural heritage structures is increasingly common through the application of different imaging techniques. Many works have focused on the application of laser scanning and photogrammetry techniques for the acquisition of three-dimensional (3D) geometry detailing cultural heritage sites and structures. With an abundance of these 3D data assets, there must be a digital environment where these data can be visualized and analyzed. Presented here is a feedback-driven visualization framework that seamlessly enables interactive exploration and manipulation of massive point cloud data. The focus of this work is on the classification of different building materials, with the goal of building more accurate as-built information models of historical structures. User-defined functions have been tested within the interactive point cloud visualization framework to evaluate automated and semi-automated classification of 3D point data. These functions include decisions based on observed color, laser intensity, normal vector, or local surface geometry. Multiple case studies are presented to demonstrate the flexibility and utility of the presented point cloud visualization framework in achieving classification objectives.

  20. Automated Point Cloud Correspondence Detection for Underwater Mapping Using AUVs

    NASA Technical Reports Server (NTRS)

    Hammond, Marcus; Clark, Ashley; Mahajan, Aditya; Sharma, Sumant; Rock, Stephen

    2015-01-01

    An algorithm for automating correspondence detection between point clouds composed of multibeam sonar data is presented. This allows accurate initialization for point cloud alignment techniques even in cases where accurate inertial navigation is not available, such as iceberg profiling or vehicles with low-grade inertial navigation systems. Techniques from computer vision literature are used to extract, label, and match keypoints between "pseudo-images" generated from these point clouds. Image matches are refined using RANSAC and information about the vehicle trajectory. The resulting correspondences can be used to initialize an iterative closest point (ICP) registration algorithm to estimate accumulated navigation error and aid in the creation of accurate, self-consistent maps. The results presented use multibeam sonar data obtained from multiple overlapping passes of an underwater canyon in Monterey Bay, California. Using strict matching criteria, the method detects 23 between-swath correspondence events in a set of 155 pseudo-images with zero false positives. Using less conservative matching criteria doubles the number of matches but introduces several false positive matches as well. Heuristics based on known vehicle trajectory information are used to eliminate these.
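
    The "pseudo-image" generation step can be sketched as a simple rasterization: bin the sonar points into a 2D grid and store a per-cell depth statistic, after which standard image keypoint matching applies. The binning scheme and sample points here are illustrative, not the paper's implementation:

```python
import numpy as np

def pseudo_image(points, cell=1.0, shape=(8, 8)):
    """Rasterize a 3D sonar point cloud into a 2D 'pseudo-image':
    each cell stores the mean depth (z) of the points falling in it.
    Points outside the grid are simply dropped."""
    img = np.zeros(shape)
    cnt = np.zeros(shape)
    for x, y, z in points:
        i, j = int(y // cell), int(x // cell)
        if 0 <= i < shape[0] and 0 <= j < shape[1]:
            img[i, j] += z
            cnt[i, j] += 1
    # Mean depth where cells are occupied; empty cells stay at zero.
    return np.divide(img, cnt, out=np.zeros(shape), where=cnt > 0)

pts = np.array([[0.5, 0.5, -10.0], [0.6, 0.4, -12.0], [3.2, 1.1, -20.0]])
im = pseudo_image(pts)
print(im[0, 0], im[1, 3])  # -11.0 -20.0
```

    Keypoints extracted from such images can then be matched across swaths and screened with RANSAC, as the abstract describes.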

  1. Feasibility of Smartphone Based Photogrammetric Point Clouds for the Generation of Accessibility Maps

    NASA Astrophysics Data System (ADS)

    Angelats, E.; Parés, M. E.; Kumar, P.

    2018-05-01

    Accessible cities with accessible services are an old claim of people with reduced mobility, but this demand is still far from becoming a reality, as much work remains to be done. The first step toward accessible cities is to know the real situation of a city and its pavement infrastructure. Detailed maps or databases on street slopes, access to sidewalks, mobility in public parks and gardens, etc., are required. In this paper, we propose to use smartphone-based photogrammetric point clouds as a starting point to create accessibility maps or databases. This paper analyses the performance of these point clouds and the complexity of the image acquisition procedure required to obtain them. The paper proves, through two test cases, that smartphone technology is an economical and feasible solution for getting the required information, which is quite often sought by city planners to generate accessibility maps. The proposed approach paves the way to generating, in the near term, accessibility maps through the use of point clouds derived from crowdsourced smartphone imagery.

  2. Sparse Unorganized Point Cloud Based Relative Pose Estimation for Uncooperative Space Target

    PubMed Central

    Chou, Wusheng; Wu, Yun; Yang, Guang; Xu, Song

    2018-01-01

    This paper proposes an autonomous algorithm to determine the relative pose between a chaser spacecraft and an uncooperative space target, which is essential in advanced space applications, e.g., on-orbit servicing missions. The proposed method, named the Congruent Tetrahedron Align (CTA) algorithm, uses the very sparse unorganized 3D point cloud acquired by a LIDAR sensor and does not require any prior pose information. The core of the method is to determine the relative pose by looking for congruent tetrahedra in the scanning point cloud and the model point cloud, on the basis of the target's known model. A two-level index hash table is built to speed up the search. In addition, the Iterative Closest Point (ICP) algorithm is used for pose tracking after CTA. In order to evaluate the method under arbitrary initial attitudes, a simulation system is presented. Specifically, the performance of the proposed method in providing the initial pose needed for the tracking algorithm is demonstrated, as well as its robustness against noise. Finally, a field experiment was conducted, and the results demonstrate the effectiveness of the proposed method. PMID:29597323

  3. 3D granulometry: grain-scale shape and size distribution from point cloud dataset of river environments

    NASA Astrophysics Data System (ADS)

    Steer, Philippe; Lague, Dimitri; Gourdon, Aurélie; Croissant, Thomas; Crave, Alain

    2016-04-01

    The grain-scale morphology of river sediments and their size distribution are important factors controlling the efficiency of fluvial erosion and transport. In turn, constraining the spatial evolution of these two metrics offers deep insight into the dynamics of river erosion and sediment transport from hillslopes to the sea. However, the size distribution of river sediments is generally assessed using statistically biased field measurements, and determining the grain-scale shape of river sediments remains a real challenge in geomorphology. Here we determine, with new methodological approaches based on the segmentation and geomorphological fitting of 3D point cloud datasets, the size distribution and grain-scale shape of sediments located in river environments. Point cloud segmentation is performed using either machine-learning algorithms or geometrical criteria, such as local plane fitting or curvature analysis. Once the grains are individualized into sub-clouds, each grain-scale morphology is determined using a 3D geometrical fitting algorithm applied to the sub-cloud. Although different geometrical models can be conceived and tested, only ellipsoidal models were used in this study. A results-checking phase is then performed to remove grains whose best-fitting model has a low level of confidence. The main benefits of this automatic method are that it provides 1) an unbiased estimate of grain-size distribution over a large range of scales, from centimeters to tens of meters; 2) access to a very large number of data, limited only by the number of grains in the point cloud dataset; and 3) access to the 3D morphology of grains, in turn allowing new metrics characterizing the size and shape of grains to be developed. The main limit of this method is that it can only detect grains with a characteristic size greater than the resolution of the point cloud.
    This new 3D granulometric method is then applied to river terraces in the Poerua catchment in New Zealand and along the Laonong river in Taiwan, whose point clouds were obtained using both terrestrial lidar scanning and structure-from-motion photogrammetry.
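
    A lightweight stand-in for the ellipsoidal fitting step is to estimate a grain's three semi-axes from the principal components of its sub-cloud: for a uniformly filled ellipsoid, the variance along a semi-axis a is a²/5. This sketch uses synthetic data, not the Poerua or Laonong point clouds:

```python
import numpy as np

def grain_axes(sub_cloud):
    """Approximate the three ellipsoid semi-axes (a >= b >= c) of a grain
    sub-cloud from the eigenvalues of its covariance matrix; a simple
    stand-in for the full 3D geometrical fitting described above."""
    centered = sub_cloud - sub_cloud.mean(axis=0)
    cov = centered.T @ centered / len(sub_cloud)
    evals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    return np.sqrt(5.0 * evals)   # variance a^2/5 -> semi-axis a

# Synthetic grain: 20,000 points uniform in an ellipsoid with axes 3, 2, 1.
rng = np.random.default_rng(0)
g = rng.normal(size=(20000, 3))
g /= np.linalg.norm(g, axis=1, keepdims=True)          # directions on sphere
r = rng.random(20000) ** (1.0 / 3.0)                   # radii, uniform in ball
grain = (g * r[:, None]) * np.array([3.0, 2.0, 1.0])   # true semi-axes 3, 2, 1
print(np.round(grain_axes(grain), 1))
```

    The recovered axes give the long, intermediate, and short grain dimensions used in granulometric size and shape metrics.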

  4. Surface Fitting Filtering of LIDAR Point Cloud with Waveform Information

    NASA Astrophysics Data System (ADS)

    Xing, S.; Li, P.; Xu, Q.; Wang, D.; Li, P.

    2017-09-01

    Full-waveform LiDAR is an active technology in photogrammetry and remote sensing. It provides more detailed information about objects along the path of a laser pulse than discrete-return topographic LiDAR. High-quality point cloud and waveform information can be obtained by waveform decomposition, which can contribute to accurate filtering. A surface fitting filtering method using waveform information is proposed to exploit this advantage. Firstly, the discrete point cloud and waveform parameters are resolved by globally convergent Levenberg-Marquardt decomposition. Secondly, the ground seed points are selected, and abnormal ones are detected by waveform parameters and robust estimation. Thirdly, the terrain surface is fitted and the height difference threshold is determined in consideration of the window size and mean square error. Finally, the points are classified gradually as the window size rises; the filtering process finishes when the window size exceeds its threshold. Waveform data in urban, farmland and mountain areas from "WATER (Watershed Allied Telemetry Experimental Research)" are selected for the experiments. Results prove that, compared with the traditional method, the accuracy of point cloud filtering is further improved and the proposed method has high practical value.
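
    One iteration of the surface-fitting-and-threshold idea can be sketched as a least-squares plane fit to the seed points followed by a height-difference test. The waveform-based seed checking and the growing window are omitted, and all data are illustrative:

```python
import numpy as np

def classify_ground(points, seeds, threshold=0.3):
    """Fit a plane z = ax + by + c to ground seed points by least squares,
    then label points whose height above the fitted surface is below the
    threshold as ground."""
    A = np.column_stack([seeds[:, 0], seeds[:, 1], np.ones(len(seeds))])
    coef, *_ = np.linalg.lstsq(A, seeds[:, 2], rcond=None)
    fitted = points[:, 0] * coef[0] + points[:, 1] * coef[1] + coef[2]
    return (points[:, 2] - fitted) < threshold

# Flat seed surface; one low point (ground) vs. one elevated point (building).
seeds = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0],
                  [0.0, 10.0, 0.0], [10.0, 10.0, 0.0]])
pts = np.array([[5.0, 5.0, 0.1], [5.0, 5.0, 4.0]])
print(classify_ground(pts, seeds))  # [ True False]
```

    In the full method this test is repeated with a rising window size, with the threshold adapted to the window and the fit's mean square error.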

  5. Achievable Rate Estimation of IEEE 802.11ad Visual Big-Data Uplink Access in Cloud-Enabled Surveillance Applications.

    PubMed

    Kim, Joongheon; Kim, Jong-Kook

    2016-01-01

    This paper addresses computation procedures for estimating the impact of interference in 60 GHz IEEE 802.11ad uplink access, in order to construct a visual big-data database from randomly deployed surveillance camera sensing devices. The large-scale visual information acquired from the surveillance camera devices will be used to organize the big-data database, i.e., this estimation is essential for constructing a centralized cloud-enabled surveillance database. This performance estimation study captures the interference impacts on the target cloud access points from multiple interference components generated by 60 GHz wireless transmissions from nearby surveillance camera devices to their associated cloud access points. In this uplink interference scenario, the interference impacts on the main wireless transmission, from a target surveillance camera device to its associated target cloud access point, are measured and estimated for a number of settings, taking into consideration 60 GHz radiation characteristics and antenna radiation pattern models.
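
    The core of such an interference estimate reduces to summing interferer powers in the linear domain before comparing them with the target link. A minimal sketch with hypothetical power levels; the paper's antenna pattern and 60 GHz propagation models are not reproduced:

```python
import numpy as np

def sinr_db(signal_dbm, interferers_dbm, noise_dbm=-90.0):
    """Uplink SINR at a cloud access point: the target camera's received
    power against noise plus the summed interfering transmissions, with
    all dBm values converted to linear milliwatts before summing."""
    lin = lambda dbm: 10 ** (np.asarray(dbm) / 10.0)
    interference = np.sum(lin(interferers_dbm))
    return 10.0 * np.log10(lin(signal_dbm) / (lin(noise_dbm) + interference))

# One -60 dBm target link facing two -75 dBm interferers (hypothetical).
print(round(sinr_db(-60.0, [-75.0, -75.0]), 1))  # 11.9
```

    The key point the sketch makes is that powers must never be added directly in dB; the conversion to linear units is what the computation procedure hinges on.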

  6. Cloud Point and Liquid-Liquid Equilibrium Behavior of Thermosensitive Polymer L61 and Salt Aqueous Two-Phase System.

    PubMed

    Rao, Wenwei; Wang, Yun; Han, Juan; Wang, Lei; Chen, Tong; Liu, Yan; Ni, Liang

    2015-06-25

    The cloud point of the thermosensitive triblock polymer L61, poly(ethylene oxide)-poly(propylene oxide)-poly(ethylene oxide) (PEO-PPO-PEO), was determined in the presence of various electrolytes (K2HPO4, (NH4)3C6H5O7, and K3C6H5O7). The cloud point of L61 was lowered by the addition of electrolytes and decreased linearly with increasing electrolyte concentration. The efficacy of the electrolytes in reducing the cloud point followed the order: K3C6H5O7 > (NH4)3C6H5O7 > K2HPO4. With the increase in salt concentration, the aqueous two-phase systems exhibited a phase inversion. In addition, increasing the temperature reduced the salt concentration needed to promote phase inversion. The phase diagrams and liquid-liquid equilibrium data of the L61-K2HPO4/(NH4)3C6H5O7/K3C6H5O7 aqueous two-phase systems (both before and after the phase inversion) were determined at T = (25, 30, and 35) °C. The phase diagrams of the aqueous two-phase systems were fitted to a four-parameter empirical nonlinear expression. Moreover, the slopes of the tie-lines and the area of the two-phase region in the diagram tend to rise with increasing temperature. The capacity of the different salts to induce aqueous two-phase system formation followed the same order as their ability to reduce the cloud point.
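
    The reported linear dependence of cloud point on electrolyte concentration is the kind of relation a least-squares line quantifies; the slope measures how strongly a given salt depresses the cloud point. The numbers below are hypothetical, not the paper's measurements:

```python
import numpy as np

# Illustrative (not measured) cloud-point data for a polymer-salt system.
conc = np.array([0.0, 0.1, 0.2, 0.3, 0.4])      # salt concentration, mol/L
cp = np.array([24.0, 21.9, 20.1, 18.0, 16.0])   # cloud point, deg C

# Fit CP = slope * conc + intercept; a more negative slope means a
# more effective cloud-point depressant.
slope, intercept = np.polyfit(conc, cp, 1)
print(round(slope, 1), round(intercept, 1))  # -19.9 24.0
```

    Comparing such fitted slopes across salts reproduces rankings like the K3C6H5O7 > (NH4)3C6H5O7 > K2HPO4 order given above.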

  7. Beliefs and Ideologies Linked with Approval of Corporal Punishment: A Content Analysis of Online Comments

    ERIC Educational Resources Information Center

    Taylor, C. A.; Al-Hiyari, R.; Lee, S. J.; Priebe, A.; Guerrero, L. W.; Bales, A.

    2016-01-01

    This study employs a novel strategy for identifying points of resistance to education efforts aimed at reducing rates of child physical abuse and use of corporal punishment (CP). We analyzed online comments (n = 581) generated in response to media coverage of a study linking CP with increased child aggression. Most comments (71%) reflected…

  8. Intensity-corrected Herschel Observations of Nearby Isolated Low-mass Clouds

    NASA Astrophysics Data System (ADS)

    Sadavoy, Sarah I.; Keto, Eric; Bourke, Tyler L.; Dunham, Michael M.; Myers, Philip C.; Stephens, Ian W.; Di Francesco, James; Webb, Kristi; Stutz, Amelia M.; Launhardt, Ralf; Tobin, John J.

    2018-01-01

    We present intensity-corrected Herschel maps at 100, 160, 250, 350, and 500 μm for 56 isolated low-mass clouds. We determine the zero-point corrections for Herschel Photodetector Array Camera and Spectrometer (PACS) and Spectral Photometric Imaging Receiver (SPIRE) maps from the Herschel Science Archive (HSA) using Planck data. Since these HSA maps are small, we cannot correct them using typical methods. Here we introduce a technique to measure the zero-point corrections for small Herschel maps. We use radial profiles to identify offsets between the observed HSA intensities and the expected intensities from Planck. Most clouds have reliable offset measurements with this technique. In addition, we find that roughly half of the clouds have underestimated HSA-SPIRE intensities in their outer envelopes relative to Planck, even though the HSA-SPIRE maps were previously zero-point corrected. Using our technique, we produce corrected Herschel intensity maps for all 56 clouds and determine their line-of-sight average dust temperatures and optical depths from modified blackbody fits. The clouds have typical temperatures of ∼14–20 K and optical depths of ∼10^-5–10^-3. Across the whole sample, we find an anticorrelation between temperature and optical depth. We also find lower temperatures than what was measured in previous Herschel studies, which subtracted out a background level from their intensity maps to circumvent the zero-point correction. Accurate Herschel observations of clouds are key to obtaining accurate density and temperature profiles. To make such future analyses possible, intensity-corrected maps for all 56 clouds are publicly available in the electronic version. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
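The radial-profile offset technique can be illustrated as follows; the binning scheme and the use of a plain median over bins are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

def radial_profile(img, center, nbins=20):
    """Azimuthally averaged radial intensity profile of a 2D map."""
    y, x = np.indices(img.shape)
    r = np.hypot(x - center[0], y - center[1])
    bins = np.linspace(0.0, r.max() + 1e-9, nbins + 1)
    idx = np.clip(np.digitize(r.ravel(), bins) - 1, 0, nbins - 1)
    counts = np.bincount(idx, minlength=nbins)
    sums = np.bincount(idx, weights=img.ravel(), minlength=nbins)
    return sums / np.maximum(counts, 1)

def zero_point_offset(obs_img, ref_img, center, nbins=20):
    """Median offset between a reference (e.g. Planck-predicted) profile
    and the observed profile; adding it to the observed map corrects
    the zero point."""
    diff = (radial_profile(ref_img, center, nbins)
            - radial_profile(obs_img, center, nbins))
    return float(np.median(diff))
```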

  9. Comparing and characterizing three-dimensional point clouds derived by structure from motion photogrammetry

    NASA Astrophysics Data System (ADS)

    Schwind, Michael

    Structure from Motion (SfM) is a photogrammetric technique whereby three-dimensional (3D) structures are estimated from overlapping two-dimensional (2D) image sequences. It is studied in the field of computer vision and utilized in fields such as archeology, engineering, and the geosciences. Currently, many SfM software packages exist that allow for the generation of 3D point clouds. Little work has been done to show how topographic data generated by these software packages differ over varying terrain types and why they might produce different results. This work aims to compare and characterize the differences between point clouds generated by three different SfM software packages: two well-known proprietary solutions (Pix4D, Agisoft PhotoScan) and one open source solution (OpenDroneMap). Five terrain types were imaged using a DJI Phantom 3 Professional small unmanned aircraft system (sUAS): a marsh environment, a gently sloped sandy beach and jetties, a forested peninsula, a house, and a flat parking lot. Each set of imagery was processed with each software package and the results were directly compared. Before processing, the software settings were analyzed and chosen to be as similar as possible across the three packages, in an attempt to minimize point cloud differences caused by dissimilar settings. The characteristics of the resultant point clouds were then compared with each other. Furthermore, a terrestrial light detection and ranging (LiDAR) survey was conducted over the flat parking lot using a Riegl VZ-400 scanner; this data served as ground truth for an accuracy assessment of the sUAS-SfM point clouds. Differences were found between the results, apparent not only in the characteristics of the clouds but also in their accuracy. This study allows users of SfM photogrammetry to better understand how different processing software packages compare and the inherent sensitivity of SfM automation in 3D reconstruction. Because this study used mostly default settings within the software, it would be beneficial for further research to investigate the effects that changing parameters have on the fidelity of point cloud datasets generated by different SfM software packages.

  10. Sloped terrain segmentation for autonomous drive using sparse 3D point cloud.

    PubMed

    Cho, Seoungjae; Kim, Jonghyun; Ikram, Warda; Cho, Kyungeun; Jeong, Young-Sik; Um, Kyhyun; Sim, Sungdae

    2014-01-01

    A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous drive. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D) point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR) sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels) and overlapping data is eliminated. We reduce nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We execute ground segmentation in real time by proposing an approach to minimize the comparison between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed at about 19.31 ms per frame.
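A much-simplified sketch of the voxel quantization and lowermost-heightmap idea follows; the real pipeline also groups voxels and minimizes comparisons between neighboring voxels, which is omitted here, and the cell size and height tolerance are made-up parameters.

```python
import numpy as np

def segment_ground(points, cell=0.5, dz=0.3):
    """Label a point as ground if it lies within dz of the lowest point
    in its (x, y) grid cell -- a 'lowermost heightmap' over quantized cells."""
    pts = np.asarray(points, dtype=float)
    ij = np.floor(pts[:, :2] / cell).astype(int)
    lowest = {}                          # per-cell minimum height
    for k, key in enumerate(map(tuple, ij)):
        z = pts[k, 2]
        if key not in lowest or z < lowest[key]:
            lowest[key] = z
    return np.array([pts[k, 2] - lowest[tuple(ij[k])] <= dz
                     for k in range(len(pts))])
```

A point 2 m above the local cell minimum (e.g. on a vehicle) is rejected, while points near the cell floor are kept as ground.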

  11. Person detection and tracking with a 360° lidar system

    NASA Astrophysics Data System (ADS)

    Hammer, Marcus; Hebel, Marcus; Arens, Michael

    2017-10-01

    Today it is easy to generate dense point clouds of the sensor environment using 360° LiDAR (Light Detection and Ranging) sensors, which have been available for a number of years. The interpretation of these data is much more challenging. For automated data evaluation, the detection and classification of objects is a fundamental task. Especially in urban scenarios, moving objects like persons or vehicles are of particular interest, for instance in automatic collision avoidance, for mobile sensor platforms, or in surveillance tasks. In the literature there are several approaches for automated person detection in point clouds. While most techniques show acceptable detection results, the computation time is often crucial: the runtime can be problematic, especially due to the amount of data in panoramic 360° point clouds, yet most applications need object detection and classification in real time. The paper presents a proposal for a fast, real-time capable algorithm for person detection, classification and tracking in panoramic point clouds.

  12. Linking Advanced Visualization and MATLAB for the Analysis of 3D Gene Expression Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruebel, Oliver; Keranen, Soile V.E.; Biggin, Mark

    Three-dimensional gene expression PointCloud data generated by the Berkeley Drosophila Transcription Network Project (BDTNP) provides quantitative information about the spatial and temporal expression of genes in early Drosophila embryos at cellular resolution. The BDTNP team visualizes and analyzes PointCloud data using the software application PointCloudXplore (PCX). To maximize the impact of novel, complex data sets, such as PointClouds, the data needs to be accessible to biologists and comprehensible to developers of analysis functions. We address this challenge by linking PCX and Matlab via a dedicated interface, thereby providing biologists seamless access to advanced data analysis functions and giving bioinformatics researchers the opportunity to integrate their analysis directly into the visualization application. To demonstrate the usefulness of this approach, we computationally model parts of the expression pattern of the gene even skipped using a genetic algorithm implemented in Matlab and integrated into PCX via our Matlab interface.

  13. End-group-functionalized poly(N,N-diethylacrylamide) via free-radical chain transfer polymerization: Influence of sulfur oxidation and cyclodextrin on self-organization and cloud points in water

    PubMed Central

    Reinelt, Sebastian; Steinke, Daniel

    2014-01-01

    In this work we report the synthesis of thermo-, oxidation- and cyclodextrin- (CD) responsive end-group-functionalized polymers, based on N,N-diethylacrylamide (DEAAm). In a classical free-radical chain transfer polymerization, using thiol-functionalized 4-alkylphenols, namely 3-(4-(1,1-dimethylethan-1-yl)phenoxy)propane-1-thiol and 3-(4-(2,4,4-trimethylpentan-2-yl)phenoxy)propane-1-thiol, poly(N,N-diethylacrylamide) (PDEAAm) with well-defined hydrophobic end-groups is obtained. These end-group-functionalized polymers show different cloud point values, depending on the degree of polymerization and the presence of randomly methylated β-cyclodextrin (RAMEB-CD). Additionally, the influence of the oxidation of the incorporated thioether linkages on the cloud point is investigated. The resulting hydrophilic sulfoxides show higher cloud point values for the lower critical solution temperature (LCST). A high degree of functionalization is supported by 1H NMR-, SEC-, FTIR- and MALDI–TOF measurements. PMID:24778720

  14. Interstellar and Solar System Organic Matter Preserved in Interplanetary Dust

    NASA Technical Reports Server (NTRS)

    Messenger, Scott; Nakamura-Messenger, Keiko

    2015-01-01

    Interplanetary dust particles (IDPs) collected in the Earth's stratosphere derive from collisions among asteroids and by the disruption and outgassing of short-period comets. Chondritic porous (CP) IDPs are among the most primitive Solar System materials. CP-IDPs have been linked to cometary parent bodies by their mineralogy, textures, C-content, and dynamical histories. CP-IDPs are fragile, fine-grained (sub-μm) assemblages of anhydrous amorphous and crystalline silicates, oxides and sulfides bound together by abundant carbonaceous material. Ancient silicate, oxide, and SiC stardust grains exhibiting highly anomalous isotopic compositions are abundant in CP-IDPs, constituting 0.01 - 1 % of the mass of the particles. The organic matter in CP-IDPs is isotopically anomalous, with enrichments in D/H reaching 50x the terrestrial SMOW value and 15N/14N ratios up to 3x terrestrial standard compositions. These anomalies are indicative of low T (10-100 K) mass fractionation in cold molecular clouds or the outermost reaches of the protosolar disk. The organic matter shows distinct morphologies, including sub-μm globules, bubbly textures, featureless, and with mineral inclusions. Infrared spectroscopy and mass spectrometry studies of organic matter in IDPs reveal diverse species including aliphatic and aromatic compounds. The organic matter with the highest isotopic anomalies appears to be richer in aliphatic compounds. These materials also bear similarities and differences with primitive, isotopically anomalous organic matter in carbonaceous chondrite meteorites. The diversity of the organic chemistry, morphology, and isotopic properties in IDPs and meteorites reflects variable preservation of interstellar/primordial components and Solar System processing. One unifying feature is the presence of sub-μm isotopically anomalous organic globules among all primitive materials, including IDPs, meteorites, and comet Wild-2 samples returned by the Stardust mission.

  15. The ventilatory anaerobic threshold is related to, but is lower than, the critical power, but does not explain exercise tolerance at this workrate.

    PubMed

    Okudan, N; Gökbel, H

    2006-03-01

    The aim of the present study was to investigate the relationships between critical power (CP), maximal aerobic power and the anaerobic threshold, and whether exercise time to exhaustion and work at the CP can be used as an index in the determination of endurance. An incremental maximal cycle exercise test was performed on 30 untrained males aged 18-22 years. Lactate analysis was carried out on capillary blood samples every 2 minutes. From gas exchange parameters, heart rate and lactate values, the ventilatory anaerobic thresholds, heart rate deflection point and the onset of blood lactate accumulation were calculated. CP was determined with the linear work-time method using 3 loads. The subjects exercised until they could no longer maintain a cadence above 24 rpm at their CP, and exercise time to exhaustion was determined. CP was lower than the power output corresponding to VO2max and higher than the power outputs corresponding to the anaerobic threshold. CP was correlated with VO2max and the anaerobic threshold. Exercise time to exhaustion and work at CP were not correlated with VO2max or the anaerobic threshold. Because CP correlated with VO2max and the anaerobic threshold, while exercise time to exhaustion and work at the CP did not, we conclude that exercise time to exhaustion and work at the CP cannot be used as an index in the determination of endurance.
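The linear work-time method mentioned above fits total work W = P·t against time to exhaustion t over several constant-power trials, so the slope of the regression line is CP and the intercept is the finite anaerobic work capacity W′. A minimal sketch:

```python
import numpy as np

def critical_power(times_s, powers_w):
    """Linear work-time model: W = CP * t + W'.
    Fit total work (power * time) against time to exhaustion;
    the slope is CP (watts), the intercept is W' (joules)."""
    t = np.asarray(times_s, dtype=float)
    work = np.asarray(powers_w, dtype=float) * t
    cp, w_prime = np.polyfit(t, work, 1)
    return cp, w_prime
```

For example, three trials at 250, 300 and 350 W lasting 300, 150 and 100 s are consistent with CP = 200 W and W′ = 15 kJ.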

  16. Free Form Low Cost Fabrication Using Titanium

    DTIC Science & Technology

    2007-06-29

    * …Compaction Metals)
    * CP Ti (International Titanium Powders, LLC)
    * Gas Atomized Ti-6Al-4V (Carpenter Powder Products, Bridgeville, PA)
    * Gas Atomized CP…
    [flattened table: analytical data (Al, C, Fe, H, Mo, N2, O2, V, Ti) for the titanium alloys represented in this report, CP-Ti Grade II and Ti-6Al-4V] Ti-6Al-4V is titanium alloyed with 6% Aluminum and 4% Vanadium. This alloy has a melting point range of 1604-1660°C, which is not suitable for

  17. Comparative Analysis of Data Structures for Storing Massive Tins in a Dbms

    NASA Astrophysics Data System (ADS)

    Kumar, K.; Ledoux, H.; Stoter, J.

    2016-06-01

    Point cloud data are an important source for 3D geoinformation. Modern 3D data acquisition and processing techniques such as airborne laser scanning and multi-beam echosounding generate billions of 3D points for an area of just a few square kilometers. With the size of the point clouds exceeding the billion mark even for a small area, there is a need for their efficient storage and management; these point clouds are sometimes associated with attributes and constraints as well. Storing billions of 3D points is currently possible, as confirmed by the initial implementations in Oracle Spatial SDO_PC and the PostgreSQL Point Cloud extension. But to be able to analyse and extract useful information from point clouds, we need more than just points, i.e., we require the surface defined by these points in space. There are different ways to represent surfaces in GIS, including grids, TINs, boundary representations, etc. In this study, we investigate database solutions for the storage and management of massive TINs. The classical (face- and edge-based) and compact (star-based) data structures are discussed at length with reference to their structure, advantages and limitations in handling massive triangulations, and are compared with the current solution of PostGIS Simple Feature. The main test dataset is the TIN generated from the third national elevation model of the Netherlands (AHN3), with a point density of over 10 points/m2. The PostgreSQL/PostGIS DBMS is used for storing the generated TIN. The data structures are tested with the generated TIN models to account for their geometry, topology, storage, indexing, and loading time in a database. Our study is useful in identifying the limitations of the existing data structures for storing massive TINs and what is required to optimise these structures for managing massive triangulations in a database.
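As an illustration of the compact, star-based representation discussed above, the sketch below builds a vertex star (the set of adjacent vertices) from a plain triangle list; real star-based structures additionally order each star counterclockwise around its vertex, which is omitted here.

```python
from collections import defaultdict

def build_stars(triangles):
    """Star-based TIN structure: map each vertex id to the set of its
    adjacent vertices. Triangles are triples of vertex ids; the whole
    triangulation is recoverable from the stars."""
    stars = defaultdict(set)
    for a, b, c in triangles:
        stars[a].update((b, c))
        stars[b].update((a, c))
        stars[c].update((a, b))
    return dict(stars)
```

Two triangles sharing edge (0, 2) give vertex 0 the star {1, 2, 3}, while the non-shared vertex 1 only sees {0, 2}.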

  18. International consensus statements on early chronic Pancreatitis. Recommendations from the working group for the international consensus guidelines for chronic pancreatitis in collaboration with The International Association of Pancreatology, American Pancreatic Association, Japan Pancreas Society, PancreasFest Working Group and European Pancreatic Club.

    PubMed

    Whitcomb, David C; Shimosegawa, Tooru; Chari, Suresh T; Forsmark, Christopher E; Frulloni, Luca; Garg, Pramod; Hegyi, Peter; Hirooka, Yoshiki; Irisawa, Atsushi; Ishikawa, Takuya; Isaji, Shuiji; Lerch, Markus M; Levy, Philippe; Masamune, Atsushi; Wilcox, Charles M; Windsor, John; Yadav, Dhiraj; Sheel, Andrea; Neoptolemos, John P

    2018-05-21

    Chronic pancreatitis (CP) is a progressive inflammatory disorder currently diagnosed by morphologic features. In contrast, an accurate diagnosis of Early CP is not possible using imaging criteria alone. If this were possible and early treatment instituted, the later, irreversible features and complications of CP could possibly be prevented. An international working group supported by four major pancreas societies (IAP, APA, JPS, and EPC) and a PancreasFest working group sought to develop a consensus definition and diagnostic criteria for Early CP. Ten statements (S1-10) concerning Early CP were used to gauge consensus on the Early CP concept using anonymous voting with a 9 point Likert scale. Consensus required an alpha ≥0.80. No consensus statement could be developed for a definition of Early-CP or diagnostic criteria. There was consensus on 5 statements: (S2) The word "Early" in early chronic pancreatitis is used to describe disease state, not disease duration. (S4) Early CP defines a stage of CP with preserved pancreatic function and potentially reversible features. (S8) Genetic variants are important risk factors for Early CP and can add specificity to the likely etiology, but they are neither necessary nor sufficient to make a diagnosis. (S9) Environmental risk factors can provide evidence to support the diagnosis of Early CP, but are neither necessary nor sufficient to make a diagnosis. (S10) The differential diagnosis for Early CP includes other disorders with morphological and functional features that overlap with CP. Morphology based diagnosis of Early CP is not possible without additional information. New approaches to the accurate diagnosis of Early CP will require a mechanistic definition that considers risk factors, biomarkers, clinical context and new models of disease. Such a definition will require prospective validation. Copyright © 2018. Published by Elsevier B.V.

  19. Fully Convolutional Networks for Ground Classification from LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Rizaldy, A.; Persello, C.; Gevaert, C. M.; Oude Elberink, S. J.

    2018-05-01

    Deep Learning has been massively used for image classification in recent years. The use of deep learning for ground classification from LIDAR point clouds has also been recently studied. However, point clouds need to be converted into an image in order to use Convolutional Neural Networks (CNNs). In state-of-the-art techniques, this conversion is slow because each point is converted into a separate image. This approach leads to highly redundant computation during conversion and classification. The goal of this study is to design a more efficient data conversion and ground classification. This goal is achieved by first converting the whole point cloud into a single image. The classification is then performed by a Fully Convolutional Network (FCN), a modified version of CNN designed for pixel-wise image classification. The proposed method is significantly faster than state-of-the-art techniques. On the ISPRS Filter Test dataset, it is 78 times faster for conversion and 16 times faster for classification. Our experimental analysis on the same dataset shows that the proposed method results in 5.22 % of total error, 4.10 % of type I error, and 15.07 % of type II error. Compared to the previous CNN-based technique and LAStools software, the proposed method reduces the total error and type I error (while type II error is slightly higher). The method was also tested on a very high point density LIDAR point cloud, resulting in 4.02 % of total error, 2.15 % of type I error and 6.14 % of type II error.
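The key step of converting the whole point cloud into a single image can be sketched as below; the choice of a minimum-height channel and the cell size are assumptions for illustration, not necessarily the channels the paper's FCN consumes.

```python
import numpy as np

def rasterize_min_height(points, cell=1.0, nodata=np.nan):
    """Convert a point cloud into one single-channel image holding the
    minimum height per grid cell, so a pixel-wise classifier (FCN-style)
    can process the whole cloud in one pass."""
    pts = np.asarray(points, dtype=float)
    ij = np.floor((pts[:, :2] - pts[:, :2].min(axis=0)) / cell).astype(int)
    h, w = ij[:, 1].max() + 1, ij[:, 0].max() + 1
    img = np.full((h, w), np.inf)
    for (i, j), z in zip(ij, pts[:, 2]):
        img[j, i] = min(img[j, i], z)   # keep the lowest point per cell
    img[np.isinf(img)] = nodata          # cells with no points
    return img
```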

  20. Towards semi-automatic rock mass discontinuity orientation and set analysis from 3D point clouds

    NASA Astrophysics Data System (ADS)

    Guo, Jiateng; Liu, Shanjun; Zhang, Peina; Wu, Lixin; Zhou, Wenhui; Yu, Yinan

    2017-06-01

    Obtaining accurate information on rock mass discontinuities for deformation analysis and the evaluation of rock mass stability is important. Obtaining measurements for high and steep zones with the traditional compass method is difficult. Photogrammetry, three-dimensional (3D) laser scanning and other remote sensing methods have gradually become mainstream methods. In this study, a method that is based on a 3D point cloud is proposed to semi-automatically extract rock mass structural plane information. The original data are pre-treated prior to segmentation by removing outlier points. The next step is to segment the point cloud into different point subsets. Various parameters, such as the normal, dip/direction and dip, can be calculated for each point subset after obtaining the equation of the best fit plane for the relevant point subset. A cluster analysis (a point subset that satisfies some conditions and thus forms a cluster) is performed based on the normal vectors by introducing the firefly algorithm (FA) and the fuzzy c-means (FCM) algorithm. Finally, clusters that belong to the same discontinuity sets are merged and coloured for visualization purposes. A prototype system is developed based on this method to extract the points of the rock discontinuity from a 3D point cloud. A comparison with existing software shows that this method is feasible. This method can provide a reference for rock mechanics, 3D geological modelling and other related fields.
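The per-subset plane fitting and normal-vector clustering described above can be sketched as follows; for brevity this uses a plain fuzzy c-means (FCM) with random initialization instead of the firefly-algorithm initialization used in the paper.

```python
import numpy as np

def plane_normal(pts):
    """Unit normal of the best-fit plane through a point subset (via SVD:
    the right singular vector of the smallest singular value)."""
    q = pts - pts.mean(axis=0)
    return np.linalg.svd(q)[2][-1]

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means on feature vectors X (e.g. plane normals).
    Returns cluster centers and the membership matrix U (rows sum to 1)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))   # u_ik proportional to d_ik^(-2/(m-1))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U
```

Discontinuity sets then correspond to the hard labels `U.argmax(axis=1)` over the normals of all fitted planes.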

  1. Automatic Monitoring of Tunnel Deformation Based on High Density Point Clouds Data

    NASA Astrophysics Data System (ADS)

    Du, L.; Zhong, R.; Sun, H.; Wu, Q.

    2017-09-01

    An automated method for tunnel deformation monitoring using high density point cloud data is presented. Firstly, the 3D point cloud data are converted to a two-dimensional surface by projection onto the XOY plane; the projection point set of the central axis on the XOY plane, named Uxoy, is calculated by combining the Alpha Shape algorithm with the RANSAC (Random Sampling Consistency) algorithm, and the projection point set of the central axis on the YOZ plane, named Uyoz, is then obtained from the highest and lowest points, which are extracted by intersecting straight lines that pass through each point of Uxoy perpendicular to the two-dimensional surface with the tunnel point cloud; Uxoy and Uyoz together form the 3D central axis. Secondly, the buffer of each cross section is calculated by the K-nearest neighbor algorithm, and the initial cross-sectional point set is quickly constructed by the projection method. Finally, the cross sections are denoised and the section lines are fitted using iterative ellipse fitting. In order to improve the accuracy of the cross section, a fine adjustment method is proposed that rotates the initial sectional plane around the intercept point in the horizontal and vertical directions within the buffer. The proposed method is applied to a Shanghai subway tunnel, and the deformation of each section in the direction of 0 to 360 degrees is calculated. The results show that the cross sections have deformed from regular circles into flattened circles due to the great pressure at the top of the tunnel.
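As a simplified stand-in for the paper's iterative ellipse fitting, the sketch below fits a circle to a cross section with the algebraic (Kasa) least-squares method and reports per-point radial deviations as the deformation signal; a real monitoring pipeline would fit an ellipse to capture the flattening.

```python
import numpy as np

def fit_circle(xy):
    """Algebraic (Kasa) least-squares circle fit: solve
    x^2 + y^2 + D*x + E*y + F = 0 linearly, then recover center and radius."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2, -E / 2
    r = np.sqrt(cx**2 + cy**2 - F)
    return (cx, cy), r

def radial_deformation(xy, center, r):
    """Signed deviation of each section point from the fitted radius."""
    return np.hypot(xy[:, 0] - center[0], xy[:, 1] - center[1]) - r
```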

  2. Remote Sensing of Life using Circular Polarization

    NASA Astrophysics Data System (ADS)

    Nagdimunov, L.; Kolokolova, L.; Sparks, W. B.

    2012-12-01

    An emerging interest in circular polarization (CP) has developed over the last fifteen years in astronomy, stimulated by the discovery of high CP in the Orion nebula, and its possible connection to prebiotic chemistry. Traditionally, CP was thought to be rarely present in astronomy, and has been technically difficult to measure. Nevertheless, CP has now been reliably measured in planets, interstellar dust, molecular clouds, stars, protoplanetary disks, and comets. Several effects can produce CP in such objects: multiple scattering in asymmetric media, scattering by aligned particles, and scattering by intrinsically asymmetric particles; the latter effect is of particular interest to this study. One of the most widespread and intriguing intrinsic asymmetries is homochirality, which is the dominance of one handedness of chiral organic molecules that exist in two mirror-symmetric forms. Homochirality is a property shared by all terrestrial life, and the presence of this microscopic asymmetry has the potential to have macroscopic consequences by introducing CP in scattered light. Recently this effect has been studied in the lab by Sparks et al [2009, PNAS, 7816], who found that light scattered by photosynthesizing organisms (such as macroscopic vegetation or microscopic bacteria) has a significant degree of CP with a peculiar and possibly unique spectral pattern. Non-homochiral aggregates do not display any detectable CP. To further investigate CP induced by homochirality, we modeled light scattering by biological objects, representing them as aggregates of spheres since aggregated structure is typical for many biological objects, e.g. chlorophyll in leaves and colonies of bacteria. Our computations were based on the T-matrix code recently updated to treat chiral materials [Mackowski et al, 2011, JQSRT 112, 1726]. Results of our computations replicated the lab measurements. 
They showed that inside the absorption band, CP experienced a dramatic change in slope, which resulted in a change of its sign. For non-biological materials, CP was zero even within absorption bands. Our modeling not only adds weight to the plausibility of using CP to detect life but also allows us to provide some recommendations to observers. Due to the steep change in CP occurring in the absorption bands typical for photosynthetic pigments, one needs to observe at the wavelengths where photosynthesis, which is plausibly going to be a wide spread phenomenon on Earth-like planets, is most efficient. We also found that the maximum CP tended to occur around phase angle of 90 deg.; this is fortuitous since this is the angle most suitable for observations of exoplanets. Finally, the values of CP and its spectral and angular behavior appeared to be strongly affected by the characteristics of aggregates. This may allow using CP to study structural characteristics of biological objects. We believe that a pathway towards a search for life using CP has been developed. The next step would be a development of a Stokes polarimeter for ground-based and space research (see http://arxiv.org/abs/1206.7106) and a systematic search for biological and prebiological organics in the solar system (e.g., on Europa, Titan, Mars, Enceladus, and in comets). As technology advances permit, this approach may even have application to a search for photosynthetic processes on exoplanets. This work was supported by the NASA Astrobiology Program.

  3. 3D Land Cover Classification Based on Multispectral LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Zou, Xiaoliang; Zhao, Guihua; Li, Jonathan; Yang, Yuanxi; Fang, Yong

    2016-06-01

    A multispectral Lidar system can emit simultaneous laser pulses at different wavelengths. The reflected multispectral energy is captured through a receiver of the sensor, and the return signal, together with the position and orientation information of the sensor, is recorded. These recorded data are combined with GNSS/IMU data in further post-processing, forming high density multispectral 3D point clouds. As the first commercial multispectral airborne Lidar sensor, the Optech Titan system is capable of collecting point cloud data from all three channels: at 532 nm (visible, green), at 1064 nm (near infrared, NIR) and at 1550 nm (intermediate infrared, IR). It has become a new source of data for 3D land cover classification. The paper presents an Object Based Image Analysis (OBIA) approach that uses only multispectral Lidar point cloud datasets for 3D land cover classification. The approach consists of three steps. Firstly, multispectral intensity images are segmented into image objects on the basis of multi-resolution segmentation integrating different scale parameters. Secondly, intensity objects are classified into nine categories by using customized classification-index features and a combination of the multispectral reflectance with the vertical distribution of object features. Finally, accuracy assessment is conducted by comparing random reference sample points from Google imagery tiles with the classification results. The classification results show higher overall accuracy for most of the land cover types; an overall accuracy of over 90% is achieved using multispectral Lidar point clouds for 3D land cover classification.

  4. Accuracy Assessment of a Canal-Tunnel 3d Model by Comparing Photogrammetry and Laserscanning Recording Techniques

    NASA Astrophysics Data System (ADS)

    Charbonnier, P.; Chavant, P.; Foucher, P.; Muzet, V.; Prybyla, D.; Perrin, T.; Grussenmeyer, P.; Guillemin, S.

    2013-07-01

    With recent developments in the field of technology and computer science, conventional methods are being supplanted by laser scanning and digital photogrammetry. These two different surveying techniques generate 3D models of real-world objects or structures. In this paper, we consider the application of terrestrial laser scanning (TLS) and photogrammetry to the surveying of canal tunnels. The inspection of such structures requires time, safe access, specific processing and professional operators. Therefore, a French partnership proposes to develop dedicated equipment based on image processing for the visual inspection of canal tunnels. A 3D model of the vault and side walls of the tunnel is constructed from images recorded onboard a boat moving inside the tunnel. To assess the accuracy of this photogrammetric model (PM), a reference model is built using static TLS. Here we address the problem of comparing the resulting point clouds. Difficulties arise because of the highly differentiated acquisition processes, which result in very different point densities. We propose a new tool designed to compare differences between pairs of point clouds or surfaces (triangulated meshes). Moreover, dealing with huge datasets requires the implementation of appropriate structures and algorithms. Several techniques are presented: point-to-point, cloud-to-cloud and cloud-to-mesh. In addition, farthest-point resampling, an octree structure and the Hausdorff distance are adopted and described. Experimental results are shown for a 475 m long canal tunnel located in France.
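The point-to-point and Hausdorff comparisons mentioned above can be sketched as below; this is a brute-force version suitable only for small clouds, whereas the paper relies on octrees and resampling for large datasets.

```python
import numpy as np

def nn_distances(A, B):
    """For each point of cloud A, the distance to its nearest neighbour
    in cloud B (brute-force pairwise distances)."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return d.min(axis=1)

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point clouds: the largest
    nearest-neighbour distance in either direction."""
    return max(nn_distances(A, B).max(), nn_distances(B, A).max())
```

Note the asymmetry of the one-sided distances: a cloud can lie entirely on another while the reverse direction still reports a large deviation, which is why both directions are taken.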

  5. Terrestrial laser scanning in monitoring of anthropogenic objects

    NASA Astrophysics Data System (ADS)

    Zaczek-Peplinska, Janina; Kowalska, Maria

    2017-12-01

    The registered xyz coordinates in the form of a point cloud captured by a terrestrial laser scanner, and the intensity values (I) assigned to them, make it possible to perform geometric and spectral analyses. Comparison of point clouds registered in different time periods requires conversion of the data to a common coordinate system, and proper data selection is necessary. Factors like the point distribution, which depends on the distance between the scanner and the surveyed surface, the angle of incidence, the tasked scan density and the intensity value have to be taken into consideration. A prerequisite for a correct analysis of point clouds registered during periodic measurements with a laser scanner is the ability to determine the quality and accuracy of the analysed data. The article presents a concept of spectral data adjustment based on geometric analysis of a surface, as well as examples of analyses integrating geometric and physical data in one point cloud: point coordinates, recorded intensity values, and thermal images of an object. The experiments described here show the many possible uses of terrestrial laser scanning data and demonstrate the necessity of multi-aspect and multi-source analyses in anthropogenic object monitoring. The article presents examples of multisource data analyses with regard to intensity value correction for the beam's incidence angle. The measurements were performed using a Leica Nova MS50 scanning total station, a Z+F Imager 5010 scanner and the integrated Z+F T-Cam thermal camera.
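
    A common first-order form of the incidence-angle correction mentioned above divides the recorded intensity by the cosine of the incidence angle (a Lambertian surface assumption; the article's actual correction model may differ):

```python
import numpy as np

def correct_intensity(intensity, normals, beam_dirs):
    """Normalize recorded intensity by the cosine of the incidence angle,
    assuming Lambertian scattering (a common first-order model)."""
    # Incidence angle = angle between the surface normal and the laser beam.
    cos_inc = np.abs(np.einsum("ij,ij->i", normals, beam_dirs))
    cos_inc = np.clip(cos_inc, 1e-3, 1.0)   # avoid blow-up at grazing angles
    return intensity / cos_inc

normals = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
beams = np.array([[0.0, 0.0, -1.0],                            # head-on: 0 deg
                  [0.0, np.sin(np.pi / 3), -np.cos(np.pi / 3)]])  # 60 deg
corrected = correct_intensity(np.array([100.0, 50.0]), normals, beams)
# head-on return unchanged; the 60-degree return is doubled (cos 60 = 0.5)
```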

  6. 3D Point Cloud Model Colorization by Dense Registration of Digital Images

    NASA Astrophysics Data System (ADS)

    Crombez, N.; Caron, G.; Mouaddib, E.

    2015-02-01

    Architectural heritage is a historic and artistic property which has to be protected, preserved, restored and shown to the public. Modern tools like 3D laser scanners are increasingly used in heritage documentation. Most of the time, the 3D laser scanner is complemented by a digital camera, which is used to enrich the accurate geometric information with the scanned objects' colors. However, the photometric quality of the acquired point clouds is generally rather low because of several problems discussed below. We propose an accurate method for registering digital images acquired from any viewpoint on point clouds, which is a crucial step for good colorization by color projection. We express this image-to-geometry registration as a pose estimation problem. The camera pose is computed using the entire image intensities under a photometric virtual visual servoing (VVS) framework. The camera's extrinsic and intrinsic parameters are automatically estimated. Because we estimate the intrinsic parameters, we do not need any information about the camera which took the digital image. Finally, when the point cloud model and the digital image are correctly registered, we project the 3D model into the digital image frame and assign new colors to the visible points. The performance of the approach is proven in simulation and in real experiments on indoor and outdoor datasets of the cathedral of Amiens, which highlight the success of our method, leading to point clouds with better photometric quality and resolution.
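
    The final projection of the registered 3D model into the digital image frame follows the standard pinhole model; a minimal sketch with illustrative intrinsics (the VVS pose estimation itself is beyond this snippet):

```python
import numpy as np

def project_points(points_world, K, R, t):
    """Project 3D points into a pinhole camera; returns pixel coords and depth."""
    cam = (R @ points_world.T + t[:, None]).T   # world -> camera frame
    depth = cam[:, 2]
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                 # perspective division
    return uv, depth

# Toy intrinsics (fx = fy = 1000 px, principal point at 640, 480).
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 480.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)

pts = np.array([[0.0, 0.0, 2.0], [0.2, 0.1, 2.0]])
uv, depth = project_points(pts, K, R, t)
# A point on the optical axis lands on the principal point (640, 480);
# visible points (depth > 0, uv inside the image) then receive that pixel's color.
```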

  7. Analysis of Uncertainty in a Middle-Cost Device for 3D Measurements in BIM Perspective

    PubMed Central

    Sánchez, Alonso; Naranjo, José-Manuel; Jiménez, Antonio; González, Alfonso

    2016-01-01

    Medium-cost devices equipped with sensors are being developed to obtain 3D measurements. Some allow for generating geometric models and point clouds. Nevertheless, the accuracy of these measurements should be evaluated, taking into account the requirements of the Building Information Model (BIM). This paper analyzes the uncertainty in outdoor/indoor three-dimensional coordinate measurements and point clouds (using Spherical Accuracy Standard (SAS) methods) for Eyes Map, a medium-cost tablet manufactured by e-Capture Research & Development Company, Mérida, Spain. To achieve this, in outdoor tests, the coordinates of targets were measured with this device from 1 to 6 m and point clouds were obtained. Subsequently, these were compared to the coordinates of the same targets measured by a Total Station. The Euclidean average distance error was 0.005–0.027 m for measurements by Photogrammetry and 0.013–0.021 m for the point clouds. All of them satisfy the tolerance for point cloud acquisition (0.051 m) according to the BIM Guide for 3D Imaging (General Services Administration); similar results are obtained in the indoor tests, with values of 0.022 m. In this paper, we establish the optimal distances for observations in both Photogrammetry and 3D Photomodeling modes (outdoor) and point out some working conditions to avoid in indoor environments. Finally, the authors discuss some recommendations for improving the performance and working methods of the device. PMID:27669245
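
    The reported Euclidean average distance error is a simple computation against the Total Station reference; a sketch with hypothetical target coordinates (in metres, not the paper's data):

```python
import numpy as np

# Hypothetical target coordinates: device measurements vs. Total Station reference.
measured = np.array([[1.003, 0.498, 0.012],
                     [2.011, 1.497, 0.009],
                     [2.996, 2.505, 0.015]])
reference = np.array([[1.000, 0.500, 0.000],
                      [2.000, 1.500, 0.000],
                      [3.000, 2.500, 0.000]])

# Per-target Euclidean error, then the average reported in such studies.
errors = np.linalg.norm(measured - reference, axis=1)
mean_error = errors.mean()

# GSA BIM Guide for 3D Imaging tolerance for point cloud acquisition: 0.051 m.
within_bim_tolerance = bool(mean_error <= 0.051)
```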

  8. Point-cloud-to-point-cloud technique on tool calibration for dental implant surgical path tracking

    NASA Astrophysics Data System (ADS)

    Lorsakul, Auranuch; Suthakorn, Jackrit; Sinthanayothin, Chanjira

    2008-03-01

    Dental implant is one of the most popular methods of tooth root replacement used in prosthetic dentistry. A computerized navigation system based on a pre-surgical plan is offered to minimize the potential risk of damage to critical anatomic structures of patients. Dental tool tip calibration is an important intraoperative procedure to determine the relation between the hand-piece tool tip and the hand-piece's markers. In transferring coordinates from preoperative CT data to reality, this parameter is one component of the typical registration problem. It is part of a navigation system which will be developed for further integration. High accuracy is required, and this relation is obtained by point-cloud-to-point-cloud rigid transformations, using singular value decomposition (SVD) to minimize rigid registration errors. In earlier studies, commercial surgical navigation systems, such as those from BrainLAB and Materialize, have flexibility problems with tool tip calibration: they either require a special tool tip calibration device or cannot accommodate a different tool. The proposed procedure uses the pointing device or hand-piece to touch a pivot point; the transformation matrix is calculated every time the hand-piece moves to a new position while the tool tip stays at the same point. The experiment relied on information from the tracking device, image acquisition and image processing algorithms. The key result is that the point-cloud-to-point-cloud approach requires only three images of the tool to converge to a minimum error of 0.77%, and the obtained result is correct when using the tool holder to track the path simulation line displayed in the graphic animation.
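
    The SVD-based rigid point-cloud-to-point-cloud registration referred to above is the standard least-squares (Kabsch) solution; a minimal sketch, verified on synthetic correspondences rather than tracking data:

```python
import numpy as np

def rigid_transform_svd(P, Q):
    """Least-squares rotation R and translation t mapping point set P onto Q,
    the SVD-based solution used in point-cloud-to-point-cloud registration."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                     # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # fix reflection
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Check on synthetic data: rotate + translate a point set, then recover the motion.
rng = np.random.default_rng(1)
P = rng.normal(size=(10, 3))
angle = np.pi / 6
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
Q = P @ R_true.T + t_true

R_est, t_est = rigid_transform_svd(P, Q)
rmse = np.sqrt(((P @ R_est.T + t_est - Q) ** 2).sum(axis=1).mean())
```

The determinant check guards against the reflection case, which matters when the point sets are nearly planar, as marker constellations often are.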

  9. Restoration of prostaglandin E2-producing splenic macrophages in 89Sr-treated mice with bone marrow from Corynebacterium parvum primed donors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shibata, Y.

    1989-05-01

    Administration of Corynebacterium parvum (CP), 56 mg/kg ip, to CBA/J mice effected the induction of prostaglandin E2 (PGE2)-producing macrophages (M phi) in the bone marrow and the spleen. Maximal release of PGE2 from M phi cultured in vitro with calcium ionophore A23187 for 2 h was reached by marrow M phi removed 5 days after CP (450 ng/mg cell protein), and by splenic M phi 9 days after CP (400 ng/mg). Neither M phi population, however, yielded more than 6.0 ng/mg leukotriene C4. To assess ontogenic relationships, mice were depleted of bone marrow and blood monocytes by iv injection of the bone-seeking isotope 89Sr. CP was given at several points before or after bone marrow cell depletion. PGE2 production by splenic M phi harvested on day 9 after CP was profoundly impaired when CP was administered either concurrently with or 3 days after 89Sr. When CP was administered 1, 3, 5, and 7 days before 89Sr, however, the induction of PGE2-producing M phi in the spleen was unaffected. To determine whether bone marrow cells from CP-injected donors can restore PGE2-producing splenic M phi (PGSM) in 89Sr-mice, recipient mice which had and had not received CP 3 days after 89Sr were transfused with 5 x 10^6 syngeneic bone marrow cells from donor mice prepared at varying intervals after CP administration. The results clearly indicate the capacity of bone marrow cells harvested on either day 1 or 2 following CP to restore PGSM in CP-primed, but not unprimed, recipients.

  10. Physical properties of aqueous solutions of a thermo-responsive neutral copolymer and an anionic surfactant: turbidity and small-angle neutron scattering studies.

    PubMed

    Galant, Céline; Kjøniksen, Anna-Lena; Knudsen, Kenneth D; Helgesen, Geir; Lund, Reidar; Laukkanen, Antti; Tenhu, Heikki; Nyström, Bo

    2005-08-16

    Aqueous mixtures of the anionic sodium dodecyl sulfate (SDS) surfactant and thermo-responsive poly(N-vinylcaprolactam) chains grafted with omega-methoxy poly(ethylene oxide) undecyl alpha-methacrylate (PVCL-g-C11EO42) have been characterized using turbidimetry and small-angle neutron scattering (SANS). Turbidity measurements show that the addition of SDS to a dilute aqueous copolymer solution (1.0 wt %) induces an increase of the cloud point (CP) value and a decrease of the turbidity at high temperatures. In parallel, SANS results show a decrease of both the average distance between chains and the global size of the objects in solution at high temperatures as the SDS concentration is increased. Combination of these findings reveals that the presence of SDS in the PVCL-g-C11EO42 solutions (1.0 wt %) promotes the formation of smaller aggregates and, consequently, leads to a more homogeneous distribution of the chains in solution upon heating of the mixtures. Moreover, the SANS data show that the internal structure of the formed aggregates becomes more swollen as the SDS concentration increases. On the other hand, the addition of moderate amounts of SDS (up to 4 mM) to a semidilute copolymer solution (5.0 wt %) gives rise to a more pronounced aggregation as the temperature rises; turbidity and SANS studies reveal in this case a decrease of the CP value and an increase of the scattered intensity at low q. The overall picture that emerges from this study is that the degree of aggregation can be accurately tuned by varying parameters such as the temperature, the level of surfactant addition, and the polymer concentration.

  11. Layer stacking: A novel algorithm for individual forest tree segmentation from LiDAR point clouds

    Treesearch

    Elias Ayrey; Shawn Fraver; John A. Kershaw; Laura S. Kenefic; Daniel Hayes; Aaron R. Weiskittel; Brian E. Roth

    2017-01-01

    As light detection and ranging (LiDAR) technology advances, it has become common for datasets to be acquired at a point density high enough to capture structural information from individual trees. To process these data, an automatic method of isolating individual trees from a LiDAR point cloud is required. Traditional methods for segmenting trees attempt to isolate...

  12. A case study of microphysical structures and hydrometeor phase in convection using radar Doppler spectra at Darwin, Australia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riihimaki, Laura D.; Comstock, Jennifer M.; Luke, Edward

    To understand the microphysical processes that impact diabatic heating and cloud lifetimes in convection, we need to characterize the spatial distribution of supercooled liquid water. To address this observational challenge, vertically pointing active sensors at the Darwin Atmospheric Radiation Measurement (ARM) site are used to classify cloud phase within a deep convective cloud in a shallow-to-deep convection transitional case. The cloud cannot be fully observed by a lidar due to signal attenuation. Thus we develop an objective method for identifying hydrometeor classes, including mixed-phase conditions, using k-means clustering on parameters that describe the shape of the Doppler spectra from vertically pointing Ka-band cloud radar. This approach shows that multiple, overlapping mixed-phase layers exist within the cloud, rather than a single region of supercooled liquid, indicating complexity in how ice growth and diabatic heating occur in the vertical structure of the cloud.
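
    The clustering step — k-means on Doppler-spectrum shape parameters — can be sketched with a minimal Lloyd's-iteration implementation on synthetic two-dimensional shape features. The feature values, separation, and cluster count here are illustrative, not the study's:

```python
import numpy as np

def kmeans(X, centers, iters=50):
    """Minimal Lloyd's k-means with explicit initial centers; a stand-in for
    clustering Doppler spectrum shape parameters (width, skewness, ...)."""
    centers = centers.copy()
    for _ in range(iters):
        # Assign each sample to its nearest center, then move centers to the mean.
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0)
                            for j in range(len(centers))])
    return labels, centers

# Toy "spectra": two well-separated groups in (width, skewness) space,
# mimicking e.g. ice-only vs. mixed-phase Doppler spectra.
rng = np.random.default_rng(2)
ice = rng.normal([0.3, 0.0], 0.05, size=(50, 2))
mixed = rng.normal([1.0, 0.8], 0.05, size=(50, 2))
X = np.vstack([ice, mixed])

labels, centers = kmeans(X, centers=X[[0, 50]])  # one seed point per group
```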

  13. A case study of microphysical structures and hydrometeor phase in convection using radar Doppler spectra at Darwin, Australia

    NASA Astrophysics Data System (ADS)

    Riihimaki, L. D.; Comstock, J. M.; Luke, E.; Thorsen, T. J.; Fu, Q.

    2017-07-01

    To understand the microphysical processes that impact diabatic heating and cloud lifetimes in convection, we need to characterize the spatial distribution of supercooled liquid water. To address this observational challenge, ground-based vertically pointing active sensors at the Darwin Atmospheric Radiation Measurement site are used to classify cloud phase within a deep convective cloud. The cloud cannot be fully observed by a lidar due to signal attenuation. Therefore, we developed an objective method for identifying hydrometeor classes, including mixed-phase conditions, using k-means clustering on parameters that describe the shape of the Doppler spectra from vertically pointing Ka-band cloud radar. This approach shows that multiple, overlapping mixed-phase layers exist within the cloud, rather than a single region of supercooled liquid. Diffusional growth calculations show that the conditions for the Wegener-Bergeron-Findeisen process exist within one of these mixed-phase microstructures.

  14. Automatic registration of Iphone images to LASER point clouds of the urban structures using shape features

    NASA Astrophysics Data System (ADS)

    Sirmacek, B.; Lindenbergh, R. C.; Menenti, M.

    2013-10-01

    Fusion of 3D airborne laser (LIDAR) data and terrestrial optical imagery can be applied in 3D urban modeling and model updating. The most challenging aspect of the fusion procedure is registering the terrestrial optical images on the LIDAR point clouds. In this article, we propose an approach for registering these two kinds of data from different sensor sources. We use iPhone camera images, taken in front of the urban structure of interest by the application user, and high resolution LIDAR point clouds acquired by an airborne laser sensor. After finding the photo capturing position and orientation from the iPhone photograph metafile, we automatically select the area of interest in the point cloud and transform it into a range image which has only grayscale intensity levels according to the distance from the image acquisition position. We benefit from local features for registering the iPhone image to the generated range image. In this article, we have applied the registration process based on local feature extraction and graph matching. Finally, the registration result is used for facade texture mapping on the 3D building surface mesh which is generated from the LIDAR point cloud. Our experimental results indicate possible usage of the proposed algorithm framework for 3D urban map updating and enhancing purposes.

  15. Assessment of different models for computing the probability of a clear line of sight

    NASA Astrophysics Data System (ADS)

    Bojin, Sorin; Paulescu, Marius; Badescu, Viorel

    2017-12-01

    This paper is focused on modeling the morphological properties of cloud fields in terms of the probability of a clear line of sight (PCLOS). PCLOS is defined as the probability that a line of sight between the observer and a given point of the celestial vault passes freely without intersecting a cloud. A variety of PCLOS models assuming hemispherical, semi-ellipsoidal and ellipsoidal cloud shapes are tested. The effective parameters (cloud aspect ratio and absolute cloud fraction) are extracted from high-resolution series of sunshine number measurements. The performance of the PCLOS models is evaluated from the perspective of their ability to retrieve the point cloudiness. The advantages and disadvantages of the tested models are discussed, aiming at a simplified parameterization of PCLOS models.

  16. Augmented reality system using lidar point cloud data for displaying dimensional information of objects on mobile phones

    NASA Astrophysics Data System (ADS)

    Gupta, S.; Lohani, B.

    2014-05-01

    The mobile augmented reality system is a next-generation technology for visualising the 3D real world intelligently. The technology is expanding at a fast pace, upgrading the status of a smart phone to an intelligent device. The research problem identified and presented in the current work is to view the actual dimensions of various objects that are captured by a smart phone in real time. The proposed methodology first establishes correspondence between the LiDAR point cloud, which is stored on a server, and the image that is captured by the mobile. This correspondence is established using the exterior and interior orientation parameters of the mobile camera and the coordinates of the LiDAR data points which lie in the viewshed of the mobile camera. A pseudo-intensity image is generated using the LiDAR points and their intensity. The mobile image and pseudo-intensity image are then registered using the SIFT image registration method, thereby generating a pipeline for locating the point in the point cloud corresponding to a point (pixel) on the mobile image. The second part of the method uses the point cloud data to compute dimensional information corresponding to the pairs of points selected on the mobile image and overlays the dimensions on top of the image. This paper describes all steps of the proposed method. The paper uses an experimental setup to mimic the mobile phone and server system and presents some initial but encouraging results.
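
    The pseudo-intensity image generation step can be sketched as rasterizing LiDAR points through a camera model, keeping the nearest point per pixel. The intrinsics and point values are illustrative, not the authors' code:

```python
import numpy as np

def pseudo_intensity_image(points_cam, intensity, K, shape):
    """Rasterize LiDAR points (already in the camera frame) into a grayscale
    image using their intensity; the nearest point wins where several project
    to the same pixel (a crude z-buffer)."""
    h, w = shape
    img = np.zeros((h, w))
    zbuf = np.full((h, w), np.inf)
    uvw = (K @ points_cam.T).T
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    for ui, vi, zi, ii in zip(u, v, points_cam[:, 2], intensity):
        if 0 <= ui < w and 0 <= vi < h and 0 < zi < zbuf[vi, ui]:
            zbuf[vi, ui] = zi
            img[vi, ui] = ii
    return img

# Toy camera and points: two points share a pixel, the nearer one wins.
K = np.array([[100.0, 0.0, 32.0],
              [0.0, 100.0, 24.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 2.0],    # on the optical axis, near
                [0.0, 0.0, 4.0],    # same pixel, farther: fails the z-test
                [0.2, 0.1, 2.0]])
inten = np.array([0.7, 0.3, 0.9])
img = pseudo_intensity_image(pts, inten, K, shape=(48, 64))
```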

  17. Formation of massive, dense cores by cloud-cloud collisions

    NASA Astrophysics Data System (ADS)

    Takahira, Ken; Shima, Kazuhiro; Habe, Asao; Tasker, Elizabeth J.

    2018-03-01

    We performed sub-parsec (~0.014 pc) scale simulations of cloud-cloud collisions of two idealized turbulent molecular clouds (MCs) with different masses in the range of (0.76-2.67) × 10^4 M_⊙ and with collision speeds of 5-30 km s^-1. Those parameters are larger than in Takahira, Tasker, and Habe (2014, ApJ, 792, 63), in which study the colliding system showed a partial gaseous arc morphology that supports the NANTEN observations of objects indicated to be colliding MCs using numerical simulations. Gas clumps with density greater than 10^-20 g cm^-3 were identified as pre-stellar cores and tracked through the simulation to investigate the effects of the mass of colliding clouds and the collision speeds on the resulting core population. Our results demonstrate that the smaller cloud's properties are more important to the outcome of cloud-cloud collisions. The mass function of formed cores can be approximated by a power-law relation with an index γ = -1.6 in slower cloud-cloud collisions (v ~ 5 km s^-1), and is in good agreement with observations of MCs. A faster relative speed increases the number of cores formed in the early stage of collisions and shortens the gas accretion phase of cores in the shocked region, leading to the suppression of core growth. A bending point appears in the high-mass part of the core mass function, and the bending-point mass decreases with increasing collision speed for the same combination of colliding clouds. The part of the core mass function above the bending-point mass can be approximated by a power law with γ = -2 to -3, similar to the power index of the massive part of the observed stellar initial mass function. We discuss implications of our results for massive-star formation in our Galaxy.
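
    The power-law description of the core mass function can be illustrated by drawing synthetic core masses from dN/dM ∝ M^γ with γ = -1.6 and recovering the index with a log-log least-squares fit on a binned mass function (a sketch, not the simulation analysis pipeline; mass range and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)

# Draw core masses from dN/dM ∝ M^gamma with gamma = -1.6 (the slow-collision
# value quoted above) between 1 and 100 solar masses, by inverse-CDF sampling.
gamma = -1.6
m_min, m_max = 1.0, 100.0
g1 = gamma + 1.0
u = rng.uniform(size=20000)
masses = (m_min**g1 + u * (m_max**g1 - m_min**g1)) ** (1.0 / g1)

# Bin logarithmically and fit log10(dN/dM) vs log10(M) by least squares.
edges = np.logspace(0, 2, 25)
counts, _ = np.histogram(masses, bins=edges)
widths = np.diff(edges)
centers = np.sqrt(edges[:-1] * edges[1:])      # geometric bin centers
mask = counts > 0
slope, intercept = np.polyfit(np.log10(centers[mask]),
                              np.log10(counts[mask] / widths[mask]), 1)
# slope should recover gamma ~ -1.6 up to sampling noise
```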

  18. Formation of massive, dense cores by cloud-cloud collisions

    NASA Astrophysics Data System (ADS)

    Takahira, Ken; Shima, Kazuhiro; Habe, Asao; Tasker, Elizabeth J.

    2018-05-01

    We performed sub-parsec (~0.014 pc) scale simulations of cloud-cloud collisions of two idealized turbulent molecular clouds (MCs) with different masses in the range of (0.76-2.67) × 10^4 M_⊙ and with collision speeds of 5-30 km s^-1. Those parameters are larger than in Takahira, Tasker, and Habe (2014, ApJ, 792, 63), in which study the colliding system showed a partial gaseous arc morphology that supports the NANTEN observations of objects indicated to be colliding MCs using numerical simulations. Gas clumps with density greater than 10^-20 g cm^-3 were identified as pre-stellar cores and tracked through the simulation to investigate the effects of the mass of colliding clouds and the collision speeds on the resulting core population. Our results demonstrate that the smaller cloud's properties are more important to the outcome of cloud-cloud collisions. The mass function of formed cores can be approximated by a power-law relation with an index γ = -1.6 in slower cloud-cloud collisions (v ~ 5 km s^-1), and is in good agreement with observations of MCs. A faster relative speed increases the number of cores formed in the early stage of collisions and shortens the gas accretion phase of cores in the shocked region, leading to the suppression of core growth. A bending point appears in the high-mass part of the core mass function, and the bending-point mass decreases with increasing collision speed for the same combination of colliding clouds. The part of the core mass function above the bending-point mass can be approximated by a power law with γ = -2 to -3, similar to the power index of the massive part of the observed stellar initial mass function. We discuss implications of our results for massive-star formation in our Galaxy.

  19. Influence of polymethyl acrylate additive on the formation of particulate matter and NOX emission of a biodiesel-diesel-fueled engine.

    PubMed

    Monirul, Islam Mohammad; Masjuki, Haji Hassan; Kalam, Mohammad Abdul; Zulkifli, Nurin Wahidah Mohd; Shancita, Islam

    2017-08-01

    The aim of this study is to investigate the effect of the polymethyl acrylate (PMA) additive on the formation of particulate matter (PM) and nitrogen oxide (NOx) emission from an engine fueled with coconut and/or Calophyllum inophyllum biodiesel-diesel blends. The physicochemical properties of a 20% coconut and/or C. inophyllum biodiesel-diesel blend (B20), of B20 with 0.03 wt% PMA (B20P), and of diesel fuel were measured and compared to the ASTM D6751, D7467, and EN 14214 standards. The test results showed that the addition of the PMA additive to B20 significantly improves the cold-flow properties such as pour point (PP), cloud point (CP), and cold filter plugging point (CFPP). The addition of PMA additives reduced the engine's brake-specific energy consumption for all tested fuels. Engine emission results showed that the additive-added fuel reduces PM concentration relative to B20 and diesel, whereas PM size and NOx emission both increase relative to B20 and baseline diesel fuel. Also, adding PMA to B20 reduced carbon (C), aluminum (Al), potassium (K), and volatile materials in the soot, whereas it increased oxygen (O), fluorine (F), zinc (Zn), barium (Ba), chlorine (Cl), sodium (Na), and fixed carbon. The scanning electron microscope (SEM) results for B20P showed lower agglomeration than B20 and diesel fuel. Therefore, B20P fuel can be used as an alternative to diesel fuel in diesel engines to lower the harmful emissions without compromising the fuel quality.

  20. Clouds off the Aleutian Islands

    NASA Image and Video Library

    2017-12-08

    March 23, 2010 - Clouds off the Aleutian Islands Interesting cloud patterns were visible over the Aleutian Islands in this image, captured by the MODIS on the Aqua satellite on March 14, 2010. Turbulence, caused by the wind passing over the highest points of the islands, is producing the pronounced eddies that swirl the clouds into a pattern called a vortex "street". In this image, the clouds have also aligned in parallel rows or streets. Cloud streets form when low-level winds move between and over obstacles causing the clouds to line up into rows (much like streets) that match the direction of the winds. At the point where the clouds first form streets, they're very narrow and well-defined. But as they age, they lose their definition, and begin to spread out and rejoin each other into a larger cloud mass. The Aleutians are a chain of islands that extend from Alaska toward the Kamchatka Peninsula in Russia. For more information related to this image go to: modis.gsfc.nasa.gov/gallery/individual.php?db_date=2010-0... For more information about Goddard Space Flight Center go here: www.nasa.gov/centers/goddard/home/index.html

  1. Galvanizability of Advanced High-Strength Steels 1180TRIP and 1180CP

    NASA Astrophysics Data System (ADS)

    Kim, M. S.; Kwak, J. H.; Kim, J. S.; Liu, Y. H.; Gao, N.; Tang, N.-Y.

    2009-08-01

    In general, Si-bearing advanced high-strength steels (AHSS) possess excellent mechanical properties but poor galvanizability. The galvanizability of a transformation-induced plasticity (TRIP) steel 1180TRIP containing 2.2 pct Mn and 1.7 pct Si and a complex phase steel 1180CP containing 2.7 pct Mn and 0.2 pct Si was extensively studied using a galvanizing simulator. The steel coupons were annealed at fixed dew points in the simulator. The surface features of the as-annealed steel coupons, together with galvanized and galvannealed coatings, were carefully examined using a variety of advanced analysis techniques. It was found that various oxides formed on the surface of these steels, depending on the steel composition and on the dew point control. Coating quality was good at 0 °C dew point but deteriorated as the dew point decreased to -35 °C and -65 °C. Based on the findings, guidance was provided for improving galvanizability by adjusting the Mn:Si ratio in steel compositions according to the dew point.

  2. Study protocol for a randomised, double-blinded, placebo-controlled, clinical trial of S-ketamine for pain treatment in patients with chronic pancreatitis (RESET trial)

    PubMed Central

    Juel, Jacob; Olesen, Søren Schou; Olesen, Anne Estrup; Poulsen, Jakob Lykke; Dahan, Albert; Wilder-Smith, Oliver; Madzak, Adnan; Frøkjær, Jens Brøndum; Drewes, Asbjørn Mohr

    2015-01-01

    Introduction Chronic pancreatitis (CP) is an inflammatory disease that causes irreversible damage to pancreatic tissue. Pain is its most prominent symptom. In the absence of pathology suitable for endoscopic or surgical interventions, pain treatment usually includes opioids. However, opioids often have limited efficacy. Moreover, side effects are common and bothersome. Hence, novel approaches to control pain associated with CP are highly desirable. Sensitisation of the central nervous system is reported to play a key role in pain generation and chronification. Fundamental to the process of central sensitisation is abnormal activation of the N-methyl-d-aspartate receptor, which can be antagonised by S-ketamine. The RESET trial is investigating the analgesic and antihyperalgesic effect of S-ketamine in patients with CP. Methods and analysis 40 patients with CP will be enrolled. Patients are randomised to receive 8 h of intravenous S-ketamine followed by oral S-ketamine, or matching placebo, for 4 weeks. To improve blinding, 1 mg of midazolam will be added to active and placebo treatment. The primary end point is clinical pain relief as assessed by a daily pain diary. Secondary end points include changes in patient-reported outcome measures, opioid consumption and rates of side effects. The end points are registered through the 4-week medication period and for an additional follow-up period of 8 weeks to investigate long-term effects. In addition, experimental pain measures also serve as secondary end points, and neurophysiological imaging parameters are collected. Furthermore, experimental baseline recordings are compared to recordings from a group of healthy controls to evaluate general aspects of pain processing in CP. Ethics and dissemination The protocol is approved by the North Denmark Region Committee on Health Research Ethics (N-20130040) and the Danish Health and Medicines Authorities (EudraCT number: 2013-003357-17).
The results will be disseminated in peer-reviewed journals and at scientific conferences. Trial registration number The study is registered at http://www.clinicaltrialsregister.eu (EudraCT number 2013-003357-17). PMID:25757947

  3. Automated extraction and analysis of rock discontinuity characteristics from 3D point clouds

    NASA Astrophysics Data System (ADS)

    Bianchetti, Matteo; Villa, Alberto; Agliardi, Federico; Crosta, Giovanni B.

    2016-04-01

    A reliable characterization of fractured rock masses requires an exhaustive geometrical description of discontinuities, including orientation, spacing, and size. These are required to describe discontinuum rock mass structure, perform Discrete Fracture Network and DEM modelling, or provide input for rock mass classification or equivalent continuum estimates of rock mass properties. Although several advanced methodologies have been developed in the last decades, a complete characterization of discontinuity geometry in practice is still challenging, due to scale-dependent variability of fracture patterns and difficult accessibility to large outcrops. Recent advances in remote survey techniques, such as terrestrial laser scanning and digital photogrammetry, allow a fast and accurate acquisition of dense 3D point clouds, which promoted the development of several semi-automatic approaches to extract discontinuity features. Nevertheless, these often need user supervision on algorithm parameters which can be difficult to assess. To overcome this problem, we developed an original Matlab tool, allowing fast, fully automatic extraction and analysis of discontinuity features with no requirements on point cloud accuracy, density and homogeneity. The tool consists of a set of algorithms which: (i) process raw 3D point clouds, (ii) automatically characterize discontinuity sets, (iii) identify individual discontinuity surfaces, and (iv) analyse their spacing and persistence. The tool operates in either a supervised or unsupervised mode, starting from an automatic preliminary exploratory data analysis. The identification and geometrical characterization of discontinuity features is divided into steps. First, coplanar surfaces are identified in the whole point cloud using K-Nearest Neighbor and Principal Component Analysis algorithms optimized on point cloud accuracy and specified typical facet size.
Then, discontinuity set orientation is calculated using Kernel Density Estimation and principal vector similarity criteria. Poles to points are assigned to individual discontinuity objects using easy custom vector clustering and Jaccard distance approaches, and each object is segmented into planar clusters using an improved version of the DBSCAN algorithm. Modal set orientations are then recomputed by cluster-based orientation statistics to avoid the effects of biases related to cluster size and density heterogeneity of the point cloud. Finally, spacing values are measured between individual discontinuity clusters along scanlines parallel to modal pole vectors, whereas individual feature size (persistence) is measured using 3D convex hull bounding boxes. Spacing and size are provided both as raw population data and as summary statistics. The tool is optimized for parallel computing on 64bit systems, and a Graphic User Interface (GUI) has been developed to manage data processing, provide several outputs, including reclassified point clouds, tables, plots, derived fracture intensity parameters, and export to modelling software tools. We present test applications performed both on synthetic 3D data (simple 3D solids) and real case studies, validating the results with existing geomechanical datasets.
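
    The first step above — identifying coplanar neighbourhoods via Principal Component Analysis — can be sketched as a plane fit over a point neighbourhood (synthetic patch; the noise level and planarity measure are illustrative):

```python
import numpy as np

def fit_plane_pca(points):
    """Fit a plane to a point neighbourhood with PCA: the eigenvector of the
    covariance with the smallest eigenvalue is the plane normal, and the ratio
    of that eigenvalue to the total measures how planar the neighbourhood is."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
    normal = eigvecs[:, 0]
    planarity_residual = eigvals[0] / eigvals.sum()  # ~0 for a perfect plane
    return normal, planarity_residual

# Noisy patch of the plane z = 0.5 x (true normal proportional to (-0.5, 0, 1)).
rng = np.random.default_rng(4)
xy = rng.uniform(-1, 1, size=(300, 2))
z = 0.5 * xy[:, 0] + rng.normal(0, 0.01, size=300)
patch = np.column_stack([xy, z])

normal, residual = fit_plane_pca(patch)
```

Thresholding the residual is one way such tools decide whether a neighbourhood belongs to a planar discontinuity facet or to an edge region.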

  4. Applications of 3D-EDGE Detection for ALS Point Cloud

    NASA Astrophysics Data System (ADS)

    Ni, H.; Lin, X. G.; Zhang, J. X.

    2017-09-01

    Edge detection has been one of the major issues in the field of remote sensing and photogrammetry. With the fast development of laser scanning sensor technology, dense point clouds have become increasingly common. Precise 3D-edges can be detected from these point clouds, and a great many edge or feature-line extraction methods have been proposed. Among these methods is an easy-to-use 3D-edge detection method, AGPN (Analyzing Geometric Properties of Neighborhoods). The AGPN method detects edges based on an analysis of the geometric properties of a query point's neighbourhood. It detects two kinds of 3D-edges, boundary elements and fold edges, and it has many applications. This paper presents three applications of AGPN: 3D line segment extraction, ground point filtering, and ground breakline extraction. Experiments show that the AGPN method gives a straightforward solution to these applications.
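    AGPN itself is not reproduced here, but boundary-element detectors of this family typically inspect the angular distribution of a query point's neighbours: a large empty angular sector around the point indicates a boundary. A simplified 2D illustration of that criterion (threshold chosen arbitrarily; the published method analyses richer geometric properties):

```python
import math

def is_boundary_point(p, neighbors, gap_threshold=3 * math.pi / 4):
    """Flag p as a boundary element if its neighbours leave a large
    empty angular sector around it (2D angular-gap criterion)."""
    angles = sorted(math.atan2(qy - p[1], qx - p[0]) for qx, qy in neighbors)
    gaps = [b - a for a, b in zip(angles, angles[1:])]
    gaps.append(2 * math.pi - (angles[-1] - angles[0]))  # wrap-around gap
    return max(gaps) > gap_threshold
```

    An interior point is surrounded on all sides, so its largest gap stays small; a point on a surface boundary sees all neighbours within a half-plane and the wrap-around gap becomes large.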

  5. Building Facade Modeling Under Line Feature Constraint Based on Close-Range Images

    NASA Astrophysics Data System (ADS)

    Liang, Y.; Sheng, Y. H.

    2018-04-01

    To solve existing problems in modeling building facades merely with point features from close-range images, a new method for modeling building facades under a line feature constraint is proposed in this paper. First, camera parameters and a sparse spatial point cloud were recovered using SfM, and dense 3D point clouds were generated with MVS. Second, line features were detected based on the gradient direction, fitted considering directions and lengths, then matched under multiple types of constraints and extracted from the multi-image sequence. Finally, the facade mesh of the building was triangulated from the point cloud and the line features. The experiment shows that this method can effectively reconstruct the geometric facade of buildings by combining the point and line features of a close-range image sequence, and is especially effective in restoring the contour information of building facades.

  6. [Impact of disabling chronic pain: results of a cross-sectional population study with face-to-face interview].

    PubMed

    Cabrera-Leon, Andrés; Cantero-Braojos, Miguel Ángel

    2017-11-16

    To assess the impact of disabling chronic pain (DCP) on quality of life, work, consumption of medication, and use of health services. Cross-sectional population study with face-to-face interviews, based on the Andalusian Health Survey (2011 edition): 6,507 people over the age of 16 (p=q=0.5; confidence level=95%; sampling error=1.49; design effect=1.52). Not applicable. Dependent variable: DCP, i.e. the population limited in their activity by any of the chronic pain (CP) conditions specified in the survey; other variables: quality of life, absence from work, consumption of medication, and use of health services. Compared to a population without CP, the impact of DCP is 6 points lower on mental quality of life and 12 points lower on physical quality of life; medication consumption is triple, health service use is almost double, and long absence from work is triple. On the other hand, a population with non-disabling chronic pain (nDCP) presents results similar to those of a population without CP. We consider DCP a distinct CP category because of its large impact on the study variables, as shown in our study. By contrast, the population with nDCP does not show significant differences in impact when compared to the population without CP. Therefore, we believe that Primary Care and Public Health should lead distinct prevention strategies for DCP, as well as for the identification of the nDCP population, to decrease its possible deterioration towards DCP. Copyright © 2017 Elsevier España, S.L.U. All rights reserved.

  7. High-frequency electroacupuncture versus carprofen in an incisional pain model in rats.

    PubMed

    Teixeira, F M; Castro, L L; Ferreira, R T; Pires, P A; Vanderlinde, F A; Medeiros, M A

    2012-12-01

    The objective of the present study was to compare the effects of electroacupuncture (EA) and carprofen (CP) on postoperative incisional pain using the plantar incision (PI) model in rats. A 1-cm longitudinal incision was made through the skin, fascia and muscles of a hind paw of male Wistar rats, and the development of mechanical and thermal hypersensitivity was determined over 4 days using the von Frey and Hargreaves methods, respectively. Based on the experimental treatments received on the third postoperative day, the animals were divided into the following groups: PI+CP (CP, 2 mg/kg, po); PI+EAST36 (100-Hz EA applied bilaterally at the Zusanli point (ST36)); PI+EANP (EA applied to a non-acupoint region); PI+IMMO (immobilization only); PI (vehicle). In the von Frey test, the PI+EAST36 group had higher withdrawal force thresholds in response to mechanical stimuli than the PI, PI+IMMO and PI+EANP groups at several of the time points studied. Furthermore, the PI+EAST36 group showed paw withdrawal thresholds in response to mechanical stimuli that were similar to those of the PI+CP group. In the Hargreaves test, all groups had latencies higher than those observed with PI. The PI+EAST36 group was similar to the PI+IMMO, PI+EANP and PI+CP groups. We conclude that 100-Hz EA at the ST36 point, but not at non-acupoints, can reduce mechanical nociception in the rat model of incisional pain, and that its effectiveness is comparable to that of carprofen.

  8. Real-time terrain storage generation from multiple sensors towards mobile robot operation interface.

    PubMed

    Song, Wei; Cho, Seoungjae; Xi, Yulong; Cho, Kyungeun; Um, Kyhyun

    2014-01-01

    A mobile robot mounted with multiple sensors is used to rapidly collect 3D point clouds and video images so as to allow accurate terrain modeling. In this study, we develop a real-time terrain storage generation and representation system including a nonground point database (PDB), ground mesh database (MDB), and texture database (TDB). A voxel-based flag map is proposed for incrementally registering large-scale point clouds in a terrain model in real time. We quantize the 3D point clouds into 3D grids of the flag map as a comparative table in order to remove the redundant points. We integrate the large-scale 3D point clouds into a nonground PDB and a node-based terrain mesh using the CPU. Subsequently, we program a graphics processing unit (GPU) to generate the TDB by mapping the triangles in the terrain mesh onto the captured video images. Finally, we produce a nonground voxel map and a ground textured mesh as a terrain reconstruction result. Our proposed methods were tested in an outdoor environment. Our results show that the proposed system was able to rapidly generate terrain storage and provide high resolution terrain representation for mobile mapping services and a graphical user interface between remote operators and mobile robots.
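    The voxel-based flag map used above to drop redundant points can be sketched in a few lines; this is an illustrative simplification of the idea (the paper's CPU/GPU pipeline is more involved), with the voxel size chosen arbitrarily:

```python
def deduplicate(points, voxel_size=0.05):
    """Keep one point per occupied voxel, emulating the flag-map test.

    A voxel index is computed for each incoming point; points falling
    in an already-flagged cell are treated as redundant and discarded.
    """
    flags = set()  # the "flag map": set of occupied voxel indices
    kept = []
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        if key not in flags:
            flags.add(key)
            kept.append((x, y, z))
    return kept
```

    With a 5 cm voxel, two points closer together than the cell size collapse to one, so incremental registration of overlapping scans does not accumulate duplicate points.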

  9. Sideloading - Ingestion of Large Point Clouds Into the Apache Spark Big Data Engine

    NASA Astrophysics Data System (ADS)

    Boehm, J.; Liu, K.; Alis, C.

    2016-06-01

    In the geospatial domain we have now reached the point where the data volumes we handle have clearly grown beyond the capacity of most desktop computers. This is particularly true in the area of point cloud processing. It is therefore natural to explore established big data frameworks for big geospatial data. The very first hurdle is the import of geospatial data into big data frameworks, commonly referred to as data ingestion. Geospatial data is typically encoded in specialised binary file formats, which are not natively supported by existing big data frameworks. Instead, such file formats are supported by software libraries that are restricted to single-CPU execution. We present an approach that allows the use of existing point cloud file format libraries on the Apache Spark big data framework. We demonstrate the ingestion of large volumes of point cloud data into a compute cluster. The approach uses a map function to distribute the data ingestion across the nodes of a cluster. We test the capabilities of the proposed method to load billions of points into a commodity hardware compute cluster, and we discuss the implications for scalability and performance. The performance is benchmarked against an existing native Apache Spark data import implementation.
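    The map-based ingestion pattern described above can be mimicked locally; the sketch below uses a Python thread pool as a stand-in for Spark's map over a list of file paths, and a toy `read_file` parser in place of a real single-CPU point cloud format library (both are illustrative assumptions, not the authors' code):

```python
from concurrent.futures import ThreadPoolExecutor

def read_file(path):
    # Stand-in for a single-CPU reader library (e.g. a LAS/LAZ parser);
    # here each line of a text file holds "x y z".
    with open(path) as f:
        return [tuple(float(v) for v in line.split()) for line in f if line.strip()]

def ingest(paths, workers=4):
    # Spark-style ingestion: map a local format reader over the list of
    # input files, then collect the resulting partitions.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partitions = pool.map(read_file, paths)
        return [pt for part in partitions for pt in part]
```

    On a real cluster the path list would be an RDD and `read_file` the function passed to `map`, so each node parses its share of the files with the legacy library.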

  10. Forest understory trees can be segmented accurately within sufficiently dense airborne laser scanning point clouds.

    PubMed

    Hamraz, Hamid; Contreras, Marco A; Zhang, Jun

    2017-07-28

    Airborne laser scanning (LiDAR) point clouds over large forested areas can be processed to segment individual trees and subsequently extract tree-level information. Existing segmentation procedures typically detect more than 90% of overstory trees, yet they barely detect 60% of understory trees because of the occlusion effect of higher canopy layers. Although understory trees provide limited financial value, they are an essential component of ecosystem functioning by offering habitat for numerous wildlife species and influencing stand development. Here we model the occlusion effect in terms of point density. We estimate the fractions of points representing different canopy layers (one overstory and multiple understory) and also pinpoint the required density for reasonable tree segmentation (where accuracy plateaus). We show that at a density of ~170 pt/m² understory trees can likely be segmented as accurately as overstory trees. Given the advancements of LiDAR sensor technology, point clouds will affordably reach this required density. Using modern computational approaches for big data, the denser point clouds can efficiently be processed to ultimately allow accurate remote quantification of forest resources. The methodology can also be adopted for other similar remote sensing or advanced imaging applications such as geological subsurface modelling or biomedical tissue analysis.

  11. Real-Time Terrain Storage Generation from Multiple Sensors towards Mobile Robot Operation Interface

    PubMed Central

    Cho, Seoungjae; Xi, Yulong; Cho, Kyungeun

    2014-01-01

    A mobile robot mounted with multiple sensors is used to rapidly collect 3D point clouds and video images so as to allow accurate terrain modeling. In this study, we develop a real-time terrain storage generation and representation system including a nonground point database (PDB), ground mesh database (MDB), and texture database (TDB). A voxel-based flag map is proposed for incrementally registering large-scale point clouds in a terrain model in real time. We quantize the 3D point clouds into 3D grids of the flag map as a comparative table in order to remove the redundant points. We integrate the large-scale 3D point clouds into a nonground PDB and a node-based terrain mesh using the CPU. Subsequently, we program a graphics processing unit (GPU) to generate the TDB by mapping the triangles in the terrain mesh onto the captured video images. Finally, we produce a nonground voxel map and a ground textured mesh as a terrain reconstruction result. Our proposed methods were tested in an outdoor environment. Our results show that the proposed system was able to rapidly generate terrain storage and provide high resolution terrain representation for mobile mapping services and a graphical user interface between remote operators and mobile robots. PMID:25101321

  12. The Iqmulus Urban Showcase: Automatic Tree Classification and Identification in Huge Mobile Mapping Point Clouds

    NASA Astrophysics Data System (ADS)

    Böhm, J.; Bredif, M.; Gierlinger, T.; Krämer, M.; Lindenberg, R.; Liu, K.; Michel, F.; Sirmacek, B.

    2016-06-01

    Current 3D data capture, as implemented on airborne or mobile laser scanning systems for example, can efficiently sample the surface of a city with billions of unselective points during one working day. What is still difficult is to extract and visualize the meaningful information hidden in these point clouds with the same efficiency. This is where the FP7 IQmulus project enters the scene. IQmulus is an interactive facility for processing and visualizing big spatial data. In this study the potential of IQmulus is demonstrated on a laser mobile mapping point cloud of 1 billion points sampling ~10 km of street environment in Toulouse, France. After the data is uploaded to the IQmulus Hadoop Distributed File System, a workflow is defined by the user, consisting of retiling the data followed by a PCA-driven local dimensionality analysis, which runs efficiently on the IQmulus cloud facility using a Spark implementation. Points scattering in all 3 directions are clustered into the tree class and are next separated into individual trees. Five hours of processing on the 12-node computing cluster results in the automatic identification of 4000+ urban trees. Visualization of the results in the IQmulus fat client helps users to appreciate the results, and developers to identify remaining flaws in the processing workflow.
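    The PCA-driven local dimensionality analysis mentioned above is commonly expressed through the sorted eigenvalues of a neighbourhood's covariance matrix; a minimal sketch (assuming NumPy; the feature names and any thresholds are illustrative, not IQmulus internals):

```python
import numpy as np

def dimensionality(points):
    """Linear/planar/scatter features from covariance eigenvalues.

    With eigenvalues l1 >= l2 >= l3, points on a line give a high
    "linear" score, points on a surface a high "planar" score, and
    points scattering in all 3 directions (e.g. foliage) a high
    "scatter" score.
    """
    pts = np.asarray(points, dtype=float)
    l1, l2, l3 = sorted(np.linalg.eigvalsh(np.cov(pts.T)), reverse=True)
    return {"linear": (l1 - l2) / l1,
            "planar": (l2 - l3) / l1,
            "scatter": l3 / l1}
```

    Neighbourhoods with a dominant "scatter" score are the candidates for the tree class, which are then split into individual trees.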

  13. Automatic Modelling of Rubble Mound Breakwaters from LIDAR Data

    NASA Astrophysics Data System (ADS)

    Bueno, M.; Díaz-Vilariño, L.; González-Jorge, H.; Martínez-Sánchez, J.; Arias, P.

    2015-08-01

    Rubble mound breakwater maintenance is critical to the protection of beaches and ports. LiDAR systems provide accurate point clouds of the emerged part of the structure that can be modelled to make the data more useful and easier to handle. This work introduces a methodology for the automatic modelling of breakwaters with cube-shaped armour units. The algorithm is divided into three main steps: normal vector computation, plane segmentation, and cube reconstruction. Plane segmentation uses the normal orientation of the points and the edge length of the cube. Cube reconstruction uses the intersection of three perpendicular planes and the edge length. Three point clouds cropped from the main point cloud of the structure are used for the tests. The fraction of physical cubes detected is around 56% for two of the point clouds and 32% for the third. Accuracy assessment is done by comparison with manually drawn cubes, calculating the differences between the vertices; it ranges between 6.4 cm and 15 cm. Computing time ranges between 578.5 s and 8018.2 s, increasing with the number of cubes and the requirements of collision detection.
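    The cube-reconstruction step intersects three (near-)perpendicular segmented planes to recover a cube corner; writing each plane as n·x = d, the corner is the solution of a 3×3 linear system. A minimal sketch (assuming NumPy; not the authors' implementation):

```python
import numpy as np

def corner_from_planes(planes):
    """Intersect three planes given as (normal, d) with n . x = d.

    For three mutually non-parallel planes the stacked normal matrix is
    invertible, and the unique intersection point is the cube corner.
    """
    normals = np.array([n for n, _ in planes], dtype=float)
    offsets = np.array([d for _, d in planes], dtype=float)
    return np.linalg.solve(normals, offsets)
```

    Repeating this for every triple of mutually perpendicular segmented faces, and enforcing the known edge length between corners, yields the reconstructed cubes.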

  14. Application of Template Matching for Improving Classification of Urban Railroad Point Clouds

    PubMed Central

    Arastounia, Mostafa; Oude Elberink, Sander

    2016-01-01

    This study develops an integrated data-driven and model-driven approach (template matching) that clusters the urban railroad point clouds into three classes of rail track, contact cable, and catenary cable. The employed dataset covers 630 m of the Dutch urban railroad corridors in which there are four rail tracks, two contact cables, and two catenary cables. The dataset includes only geometrical information (three dimensional (3D) coordinates of the points) with no intensity data and no RGB data. The obtained results indicate that all objects of interest are successfully classified at the object level with no false positives and no false negatives. The results also show that an average 97.3% precision and an average 97.7% accuracy at the point cloud level are achieved. The high precision and high accuracy of the rail track classification (both greater than 96%) at the point cloud level stems from the great impact of the employed template matching method on excluding the false positives. The cables also achieve quite high average precision (96.8%) and accuracy (98.4%) due to their high sampling and isolated position in the railroad corridor. PMID:27973452

  15. The Use of Uas for Rapid 3d Mapping in Geomatics Education

    NASA Astrophysics Data System (ADS)

    Teo, Tee-Ann; Tian-Yuan Shih, Peter; Yu, Sz-Cheng; Tsai, Fuan

    2016-06-01

    With the development of the technology, UAS has become an advanced means of supporting rapid mapping for disaster response. The aim of this study is to develop educational modules for UAS data processing in rapid 3D mapping. The modules designed for this study focus on UAV data processing with available freeware or trial software for educational purposes. The key modules include orientation modelling, 3D point cloud generation, image georeferencing and visualization. The orientation modelling module adopts VisualSFM to determine the projection matrix for each image station; in addition, approximate ground control points are measured from OpenStreetMap for absolute orientation. The second module uses SURE and the orientation files from the previous module for 3D point cloud generation. Then, ground point selection and digital terrain model generation can be achieved with LAStools. The third module stitches individual rectified images into a mosaic image using Microsoft ICE (Image Composite Editor). The last module visualizes and measures the generated dense point clouds in CloudCompare. These comprehensive UAS processing modules allow students to gain the skills to process and deliver UAS photogrammetric products for rapid 3D mapping. Moreover, they can also apply the photogrammetric products for analysis in practice.

  16. Vertical stratification of forest canopy for segmentation of understory trees within small-footprint airborne LiDAR point clouds

    NASA Astrophysics Data System (ADS)

    Hamraz, Hamid; Contreras, Marco A.; Zhang, Jun

    2017-08-01

    An airborne LiDAR point cloud representing a forest contains 3D data from which the vertical stand structure, even of understory layers, can be derived. This paper presents a tree segmentation approach for multi-story stands that stratifies the point cloud into canopy layers and segments individual tree crowns within each layer using a digital surface model based tree segmentation method. The novelty of the approach is the stratification procedure, which separates the point cloud into an overstory and multiple understory tree canopy layers by analyzing vertical distributions of LiDAR points within overlapping locales. The procedure makes no a priori assumptions about the shape and size of the tree crowns and can, independent of the tree segmentation method, be utilized to vertically stratify tree crowns of forest canopies. We applied the proposed approach to the University of Kentucky Robinson Forest, a natural deciduous forest with complex and highly variable terrain and vegetation structure. The segmentation results showed that using the stratification procedure strongly improved detection of understory trees (from 46% to 68%) at the cost of introducing a fair number of over-segmented understory trees (increased from 1% to 16%), while barely affecting the overall segmentation quality of overstory trees. Results of the vertical stratification showed that the point density of understory canopy layers was suboptimal for performing a reasonable tree segmentation, suggesting that acquiring denser LiDAR point clouds would allow further improvements in segmenting understory trees. As shown by inspecting correlations of the results with forest structure, the segmentation approach is applicable to a variety of forest types.
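    The stratification idea, separating canopy layers from the vertical distribution of points within a locale, can be illustrated by splitting sorted point heights at large vertical gaps. This simplified sketch (plain Python, with an arbitrary gap threshold) omits the overlapping-locale analysis of the paper:

```python
def stratify(heights, gap=3.0):
    """Split a locale's LiDAR point heights into canopy layers.

    A new layer starts wherever two consecutive sorted heights are
    separated by a vertical gap larger than `gap` metres.
    """
    hs = sorted(heights)
    layers = [[hs[0]]]
    for prev, h in zip(hs, hs[1:]):
        if h - prev > gap:
            layers.append([])
        layers[-1].append(h)
    return layers
```

    Each resulting layer can then be passed independently to a crown segmentation routine, which is what lets understory crowns be detected beneath the overstory.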

  17. A Preliminary Investigation of The Structure of Southern Yucca Flat, Massachusetts Mountain, and CP Basin, Nevada Test Site, Nevada, Based on Geophysical Modeling

    USGS Publications Warehouse

    Phelps, Geoffrey A.; Justet, Leigh; Moring, Barry C.; Roberts, Carter W.

    2006-01-01

    New gravity and magnetic data collected in the vicinity of Massachusetts Mountain and CP basin (Nevada Test Site, NV) provide a more complex view of the structural relationships present in the vicinity of CP basin than previous geologic models, help define the position and extent of structures in southern Yucca Flat and CP basin, and better constrain the configuration of the basement structure separating CP basin and Frenchman Flat. The density and gravity modeling indicates that CP basin is a shallow, oval-shaped basin which trends north-northeast and contains ~800 m of basin-filling rocks and sediment at its deepest point in the northeast. CP basin is separated from the deeper Frenchman Flat basin by a subsurface ridge that may represent a Tertiary erosion surface at the top of the Paleozoic strata. The magnetic modeling indicates that the Cane Spring fault appears to merge with faults in northwest Massachusetts Mountain, rather than cut through to Yucca Flat basin, and that the basin is down-dropped relative to Massachusetts Mountain. The magnetic modeling also indicates that volcanic units within Yucca Flat basin are down-dropped on the west, supporting the interpretations of Phelps and McKee (1999). The magnetic data indicate that the only faults that appear to be through-going from Yucca Flat into either Frenchman Flat or CP basin are the faults that bound the CP hogback. In general, the north-trending faults present along the length of Yucca Flat bend, merge, and disappear before reaching CP hogback and Massachusetts Mountain or French Peak.

  18. Profile of refractive errors in cerebral palsy: impact of severity of motor impairment (GMFCS) and CP subtype on refractive outcome.

    PubMed

    Saunders, Kathryn J; Little, Julie-Anne; McClelland, Julie F; Jackson, A Jonathan

    2010-06-01

    To describe refractive status in children and young adults with cerebral palsy (CP) and relate refractive error to standardized measures of type and severity of CP impairment and to ocular dimensions. A population-based sample of 118 participants aged 4 to 23 years with CP (mean 11.64 +/- 4.06) and an age-appropriate control group (n = 128; age, 4-16 years; mean, 9.33 +/- 3.52) were recruited. Motor impairment was described with the Gross Motor Function Classification Scale (GMFCS), and subtype was allocated with the Surveillance of Cerebral Palsy in Europe (SCPE). Measures of refractive error were obtained from all participants, and ocular biometry from a subgroup with CP. A significantly higher prevalence and magnitude of refractive error was found in the CP group compared to the control group. Axial length and spherical refractive error were strongly related; this relation did not improve with inclusion of corneal data. There was no relation between the presence or magnitude of spherical refractive errors in CP and the level of motor impairment, intellectual impairment, or the presence of communication difficulties. Higher spherical refractive errors were significantly associated with the nonspastic CP subtype. The presence and magnitude of astigmatism were greater when intellectual impairment was more severe, and astigmatic errors were explained by corneal dimensions. Conclusions: high refractive errors are common in CP, pointing to impairment of the emmetropization process; biometric data support this. In contrast to other functional vision measures, spherical refractive error is unrelated to CP severity, but those with nonspastic CP tend to demonstrate the most extreme refractive errors.

  19. Solid and liquid heat capacities of n-alkyl para-aminobenzoates near the melting point.

    PubMed

    Neau, S H; Flynn, G L

    1990-11-01

    The expression that relates the ideal mole fraction solubility of a crystalline compound to the physicochemical properties of the compound includes a term involving the difference in the heat capacities of the solid and liquid forms of the solute, delta Cp. Two alternate conventions are employed to eliminate this term. The first assumes that the term involving delta Cp, or delta Cp itself, is zero. The alternate assumption assigns the value of the entropy of fusion to the differential heat capacity. The relative validity of these two assumptions was evaluated using the straight-chain alkyl para-aminobenzoates as test compounds. The heat capacities of the solid and liquid forms of each of the para-aminobenzoates, near the respective melting points, were determined by differential scanning calorimetry. The data lead one to conclude that the differential heat capacity is not usually negligible and is better approximated by the entropy of fusion.
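    For reference, the expression referred to above is commonly written as follows (a standard form from solution thermodynamics, not reproduced in the abstract; here ΔH_f is the enthalpy of fusion and T_m the melting point):

```latex
\ln x_2 \;=\; -\frac{\Delta H_f}{R}\left(\frac{1}{T}-\frac{1}{T_m}\right)
\;+\; \frac{\Delta C_p}{R}\left(\frac{T_m - T}{T} \;-\; \ln\frac{T_m}{T}\right)
```

    The first convention (ΔC_p = 0) drops the second term entirely, while the alternate convention substitutes ΔC_p = ΔS_f = ΔH_f/T_m, replacing it with terms in the entropy of fusion.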

  20. A classifying method analysis on the number of returns for given pulse of post-earthquake airborne LiDAR data

    NASA Astrophysics Data System (ADS)

    Wang, Jinxia; Dou, Aixia; Wang, Xiaoqing; Huang, Shusong; Yuan, Xiaoxiang

    2016-11-01

    Compared to remote sensing imagery, post-earthquake airborne Light Detection And Ranging (LiDAR) point cloud data contain high-precision three-dimensional information on earthquake damage, which can improve the accuracy of identifying destroyed buildings. After an earthquake, however, damaged buildings show so many different characteristics that the most commonly used pre-processing methods currently cannot distinguish between tree points and damaged-building points. In this study, we analyse the number of returns per pulse for tree and damaged-building point clouds and explore methods to distinguish between them. We propose a new method that searches a neighbourhood of a certain size and calculates the ratio (R) of neighbourhood points whose number of returns per pulse is greater than 1, in order to separate trees from buildings. We select point clouds of typical undamaged buildings, collapsed buildings and trees as samples, by human-computer interaction, from airborne LiDAR data acquired after the 2010 MW 7.0 Haiti earthquake; we then test to obtain the R-value that distinguishes between trees and buildings and apply this R-value to test areas. The experimental results show that the proposed method can effectively distinguish building (undamaged and damaged) points from tree points, but is limited in areas where buildings are varied, damage is complex and trees are dense, so the method will necessarily need improvement.
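    The R-ratio described above can be computed directly from the number-of-returns attribute of the points in a neighbourhood; a minimal sketch (the threshold value below is illustrative, whereas the paper derives it from training samples):

```python
def return_ratio(num_returns):
    """Ratio R of neighbourhood points with more than one return.

    Vegetation tends to produce multi-return pulses, while intact roof
    and wall surfaces mostly yield single returns.
    """
    return sum(1 for n in num_returns if n > 1) / len(num_returns)

def classify(num_returns, r_threshold=0.5):
    # Label a neighbourhood as tree or building by comparing R against
    # a threshold learned from the sample areas.
    return "tree" if return_ratio(num_returns) > r_threshold else "building"
```

    In practice the neighbourhood is a fixed-size spatial search around each point, and the threshold is tuned on the manually selected samples before being applied to the test areas.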

  1. Diffuse cloud chemistry [in interstellar matter]

    NASA Technical Reports Server (NTRS)

    Van Dishoeck, Ewine F.; Black, John H.

    1988-01-01

    The current status of models of diffuse interstellar clouds is reviewed. A detailed comparison of recent gas-phase steady-state models shows that both the physical conditions and the molecular abundances in diffuse clouds are still not fully understood. Alternative mechanisms are discussed and observational tests which may discriminate between the various models are suggested. Recent developments regarding the velocity structure of diffuse clouds are mentioned. Similarities and differences between the chemistries in diffuse clouds and those in translucent and high latitude clouds are pointed out.

  2. The pointing errors of geosynchronous satellites

    NASA Technical Reports Server (NTRS)

    Sikdar, D. N.; Das, A.

    1971-01-01

    A study of the correlation between cloud motion and wind field was initiated. Cloud heights and displacements were being obtained from a ceilometer and movie pictures, while winds were measured from pilot balloon observations on a near-simultaneous basis. Cloud motion vectors were obtained from time-lapse cloud pictures, using the WINDCO program, for 27, 28 July, 1969, in the Atlantic. The relationship between observed features of cloud clusters and the ambient wind field derived from cloud trajectories on a wide range of space and time scales is discussed.

  3. Comparison of computation time and image quality between full-parallax 4G-pixels CGHs calculated by the point cloud and polygon-based method

    NASA Astrophysics Data System (ADS)

    Nakatsuji, Noriaki; Matsushima, Kyoji

    2017-03-01

    Full-parallax high-definition CGHs composed of more than a billion pixels have so far been created only by the polygon-based method, because of its high performance. Recently, however, GPUs allow us to generate CGHs much faster by the point-cloud method. In this paper, we measure the computation time of object fields for full-parallax high-definition CGHs, composed of 4 billion pixels and reconstructing the same scene, using the point-cloud method on a GPU and the polygon-based method on a CPU. In addition, we compare the optical and simulated reconstructions of the CGHs created by these techniques to verify the image quality.
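    In the point-cloud method, the object field on the hologram plane is the superposition of spherical waves emitted by the object points; a minimal NumPy sketch of that summation (naïve per-pixel evaluation only, without the GPU optimizations or the polygon-based comparison discussed in the paper):

```python
import numpy as np

def object_field(points, amplitudes, xs, ys, wavelength=633e-9):
    """Sum spherical waves from object points on the z = 0 plane.

    Each object point (x, y, z) with amplitude a contributes
    a * exp(i k r) / r at every hologram sample, where r is the
    distance from the point to the sample.
    """
    k = 2.0 * np.pi / wavelength
    X, Y = np.meshgrid(xs, ys)
    field = np.zeros(X.shape, dtype=complex)
    for (x, y, z), a in zip(points, amplitudes):
        r = np.sqrt((X - x) ** 2 + (Y - y) ** 2 + z ** 2)
        field += a * np.exp(1j * k * r) / r
    return field
```

    The cost scales with points × pixels, which is why GPU evaluation of this sum is what makes billion-pixel point-cloud CGHs practical.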

  4. Development of Three-Dimensional Dental Scanning Apparatus Using Structured Illumination

    PubMed Central

    Park, Anjin; Lee, Byeong Ha; Eom, Joo Beom

    2017-01-01

    We demonstrated a three-dimensional (3D) dental scanning apparatus based on structured illumination. A liquid lens was used for tuning focus and a piezomotor stage was used for the shift of structured light. A simple algorithm, which detects intensity modulation, was used to perform optical sectioning with structured illumination. We reconstructed a 3D point cloud, which represents the 3D coordinates of the digitized surface of a dental gypsum cast by piling up sectioned images. We performed 3D registration of an individual 3D point cloud, which includes alignment and merging the 3D point clouds to exhibit a 3D model of the dental cast. PMID:28714897
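    The intensity-modulation detection used for optical sectioning is often implemented with the classic three-phase formula, in which the illumination pattern is shifted by 0, 2π/3 and 4π/3 between exposures; a minimal sketch of that formula (the apparatus in the paper may use a different algorithm or phase step):

```python
import math

def sectioned_intensity(i1, i2, i3):
    """Recover the modulation depth from three phase-shifted images.

    For i_n = a + m * cos(phi + 2*pi*n/3) the expression below returns
    m exactly, rejecting the out-of-focus background a: only in-focus
    structure carries the projected pattern's modulation.
    """
    return (math.sqrt(2.0) / 3.0) * math.sqrt(
        (i1 - i2) ** 2 + (i1 - i3) ** 2 + (i2 - i3) ** 2)
```

    Applying this per pixel gives one optically sectioned image per focus position; stacking the sections then yields the 3D point cloud of the cast surface.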

  5. Automatic Building Abstraction from Aerial Photogrammetry

    NASA Astrophysics Data System (ADS)

    Ley, A.; Hänsch, R.; Hellwich, O.

    2017-09-01

    Multi-view stereo has been shown to be a viable tool for the creation of realistic 3D city models. Nevertheless, it still poses significant challenges, since it results in dense but noisy and incomplete point clouds when applied to aerial images. 3D city modelling usually requires a different representation of the 3D scene than these point clouds. This paper applies a fully automatic pipeline to generate a simplified mesh from a given dense point cloud. The mesh provides a certain level of abstraction, as it consists only of relatively large planar and textured surfaces. Thus, it is possible to remove noise, outliers, and clutter, while maintaining a high level of accuracy.

  6. Automatic Detection and Classification of Pole-Like Objects for Urban Cartography Using Mobile Laser Scanning Data

    PubMed Central

    Ordóñez, Celestino; Cabo, Carlos; Sanz-Ablanedo, Enoc

    2017-01-01

    Mobile laser scanning (MLS) is a modern and powerful technology capable of obtaining massive point clouds of objects in a short period of time. Although this technology is nowadays widely applied in urban cartography and 3D city modelling, it has some drawbacks that need to be addressed in order to strengthen it. One of the most important shortcomings of MLS is that it provides an unstructured dataset whose processing is very time-consuming. Consequently, there is growing interest in developing algorithms for the automatic extraction of useful information from MLS point clouds. This work focuses on establishing a methodology and developing an algorithm to detect pole-like objects and classify them into several categories using MLS datasets. The developed procedure starts with the discretization of the point cloud by means of a voxelization, in order to simplify and reduce the processing time of the segmentation process. In turn, a heuristic segmentation algorithm was developed to detect pole-like objects in the MLS point cloud. Finally, two supervised classification algorithms, linear discriminant analysis and support vector machines, were used to distinguish between the different types of poles in the point cloud. The predictors are the principal component eigenvalues obtained from the Cartesian coordinates of the laser points, the range of the Z coordinate, and some shape-related indexes. The performance of the method was tested in an urban area with 123 poles of different categories. Very encouraging results were obtained, since the accuracy rate was over 90%. PMID:28640189

  7. Estimating Aircraft Heading Based on Laserscanner Derived Point Clouds

    NASA Astrophysics Data System (ADS)

    Koppanyi, Z.; Toth, C., K.

    2015-03-01

    Using LiDAR sensors for tracking and monitoring an operating aircraft is a new application. In this paper, we present data processing methods to estimate the heading of a taxiing aircraft using laser point clouds. During the data acquisition, a Velodyne HDL-32E laser scanner tracked a moving Cessna 172 airplane. The point clouds captured at different times were used for heading estimation. After addressing the problem and specifying the equation of motion to reconstruct the aircraft point cloud from the consecutive scans, three methods are investigated here. The first requires a reference model to estimate the relative angle from the captured data by fitting different cross-sections (horizontal profiles). In the second approach, the iterative closest point (ICP) method is used between the consecutive point clouds to determine the horizontal translation of the captured aircraft body. Regarding the ICP, three different versions were compared, namely, the ordinary 3D, 3-DoF 3D and 2-DoF 3D ICP. It was found that 2-DoF 3D ICP provides the best performance. Finally, the last algorithm searches for the unknown heading and velocity parameters by minimizing the volume of the reconstructed airplane. The three methods were compared using three test datasets which are distinguished by object-sensor distance, heading and velocity. We found that the ICP algorithm fails at long distances and when the aircraft's motion direction is perpendicular to the scan plane, but the first and the third methods give robust and accurate results at 40 m object distance and at ~12 knots for a small Cessna airplane.
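    The 2-DoF 3D ICP variant that the abstract found most robust restricts the estimated transformation to a horizontal (x, y) translation between consecutive point clouds. A minimal sketch, assuming nearest-neighbour correspondences recomputed each iteration (the function name and convergence tolerance are illustrative, not the authors' implementation):

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_2dof(source, target, iters=20):
    """Estimate only the horizontal (x, y) translation aligning `source`
    to `target`, both (N, 3) arrays; no rotation is solved for."""
    t = np.zeros(2)
    tree = cKDTree(target)
    moved = source.astype(float).copy()
    for _ in range(iters):
        _, idx = tree.query(moved)                # nearest-neighbour matches
        delta = (target[idx, :2] - moved[:, :2]).mean(axis=0)
        t += delta
        moved[:, :2] += delta
        if np.linalg.norm(delta) < 1e-9:          # converged
            break
    return t
```

    With the translation known between two epochs, the heading follows directly from the direction of the accumulated (x, y) displacement.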

  8. Coat protein expression strategy of oat blue dwarf virus.

    PubMed

    Edwards, Michael C; Weiland, John J

    2014-02-01

    Oat blue dwarf virus (OBDV) is a member of the genus Marafivirus whose genome encodes a 227 kDa polyprotein (p227) ostensibly processed post-translationally into its functional components. Encoded near the 3' terminus and coterminal with the p227 ORF are ORFs specifying major and minor capsid proteins (CP). Since the CP expression strategy of marafiviruses has not been thoroughly investigated, we produced a series of point mutants in the OBDV CP encoding gene and examined expression in protoplasts. Results support a model in which the 21 kDa major CP is the product of direct translation of a sgRNA, while the 24 kDa minor CP is a cleavage product derived from both the polyprotein and a larger ~26 kDa precursor translated directly from the sgRNA. Cleavage occurs at an LXG[G/A] motif conserved in many viruses that use papain-like proteases for polyprotein processing and protection against degradation via the ubiquitin-proteasome system. Published by Elsevier Inc.

  9. Datum Feature Extraction and Deformation Analysis Method Based on Normal Vector of Point Cloud

    NASA Astrophysics Data System (ADS)

    Sun, W.; Wang, J.; Jin, F.; Liang, Z.; Yang, Y.

    2018-04-01

    In order to address the lack of an applicable analysis method when applying three-dimensional laser scanning technology to the field of deformation monitoring, an efficient method for extracting datum features and analysing deformation based on the normal vector of the point cloud is proposed. Firstly, a kd-tree is used to establish the topological relation. Datum points are detected by tracking the normal vector of the point cloud, determined from the normal vector of the local plane. Then, cubic B-spline curve fitting is performed on the datum points. Finally, the datum elevation and the inclination angle of the radial point are calculated according to the fitted curve, and the deformation information is then analyzed. The proposed approach was verified on a real large-scale tank dataset captured with a terrestrial laser scanner in a chemical plant. The results show that the method can obtain the entire information of the monitored object quickly and comprehensively, and accurately reflect the datum feature deformation.
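    The first two steps above (kd-tree neighbourhood search, then a local planar fit to obtain per-point normal vectors) can be sketched as follows. This is a generic PCA-based normal estimator under our own parameter choices, not the paper's exact procedure:

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=10):
    """Per-point normals from a local planar fit: a kd-tree gives each
    point's k nearest neighbours, and the eigenvector of the neighbourhood
    covariance with the smallest eigenvalue is the local plane normal."""
    tree = cKDTree(points)                    # topological relation
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points, dtype=float)
    for i, nbrs in enumerate(idx):
        nb = points[nbrs]
        nb = nb - nb.mean(axis=0)
        _, vecs = np.linalg.eigh(nb.T @ nb)
        normals[i] = vecs[:, 0]               # smallest-eigenvalue eigenvector
    return normals
```

    Datum points would then be detected by tracking these normals, before the cubic B-spline fitting stage.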

  10. Temporal Analysis and Automatic Calibration of the Velodyne HDL-32E LiDAR System

    NASA Astrophysics Data System (ADS)

    Chan, T. O.; Lichti, D. D.; Belton, D.

    2013-10-01

    At the end of the first quarter of 2012, more than 600 Velodyne LiDAR systems had been sold worldwide for various robotic and high-accuracy survey applications. The ultra-compact Velodyne HDL-32E LiDAR has become a predominant sensor for many applications that require lower sensor size/weight and cost. For high accuracy applications, cost-effective calibration methods with minimal manual intervention are always desired by users. However, the calibrations are complicated by the Velodyne LiDAR's narrow vertical field of view and the highly time-variant nature of its measurements. In this paper, the temporal stability of the HDL-32E is first analysed as the motivation for developing a new, automated calibration method. This is followed by a detailed description of the calibration method, which is driven by a novel segmentation method for extracting vertical cylindrical features from the Velodyne point clouds. The proposed segmentation method utilizes the Velodyne point cloud's slice-like nature and first decomposes the point clouds into 2D layers. The layers are then treated as 2D images and processed with the Generalized Hough Transform, which extracts the points distributed in circular patterns from the point cloud layers. Subsequently, the vertical cylindrical features can be readily extracted from the whole point clouds based on the previously extracted points. The points are passed to the calibration, which estimates the cylinder parameters and the LiDAR's additional parameters simultaneously by constraining the segmented points to fit the cylindrical geometric model such that the weighted sum of the adjustment residuals is minimized. The proposed calibration is highly automatic, allowing end users to obtain the time-variant additional parameters instantly and frequently whenever vertical cylindrical features are present in the scene. The methods were verified with two different real datasets, and the results suggest that up to 78.43% accuracy improvement for the HDL-32E can be achieved using the proposed calibration method.
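    The layer-wise circle extraction step can be illustrated with a known-radius circular Hough transform on one 2D slice: every point votes for the candidate centres lying at distance r from it, and accumulator peaks mark cylinder axes. A sketch of this one segmentation step only (grid cell size and angular sampling are our own assumptions), not the full calibration pipeline:

```python
import numpy as np

def circle_centers(xy, r, cell=0.05, n_angles=90):
    """Known-radius circular Hough transform on a 2D point layer `xy`
    (N, 2): accumulate centre votes on a grid and return the peak."""
    theta = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    offsets = r * np.stack([np.cos(theta), np.sin(theta)], axis=-1)
    cand = (xy[:, None, :] + offsets).reshape(-1, 2)   # all centre votes
    keys = np.round(cand / cell).astype(int)           # quantise to grid
    uniq, counts = np.unique(keys, axis=0, return_counts=True)
    return uniq[counts.argmax()] * cell                # peak of accumulator
```

    Stacking the per-layer peaks over height then yields the vertical cylindrical features used by the calibration adjustment.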

  11. CP violation at one loop in the polarization-independent chargino production in e+e- collisions

    NASA Astrophysics Data System (ADS)

    Rolbiecki, K.; Kalinowski, J.

    2007-12-01

    Recently Osland and Vereshagin noticed, based on sample calculations of some box diagrams, that in unpolarized e+e- collisions CP-odd effects in the nondiagonal chargino-pair production process are generated at one loop. Here we perform a full one-loop analysis of these effects and point out that in some cases the neglected vertex and self-energy contributions may play a dominant role. We also show that CP asymmetries in chargino production are sensitive not only to the phase of the μ parameter in the chargino sector but also to the phase of the stop trilinear coupling At.

  12. Fine-grained Database Field Search Using Attribute-Based Encryption for E-Healthcare Clouds.

    PubMed

    Guo, Cheng; Zhuang, Ruhan; Jie, Yingmo; Ren, Yizhi; Wu, Ting; Choo, Kim-Kwang Raymond

    2016-11-01

    An effectively designed e-healthcare system can significantly enhance the quality of access and experience of healthcare users, including facilitating medical and healthcare providers in ensuring a smooth delivery of services. Ensuring the security of patients' electronic health records (EHRs) in the e-healthcare system is an active research area. EHRs may be outsourced to a third-party, such as a community healthcare cloud service provider, for storage due to cost-saving measures. Generally, encrypting the EHRs when they are stored in the system (i.e. data-at-rest) or prior to outsourcing the data is used to ensure data confidentiality. Searchable encryption (SE) is a promising technique that can ensure the protection of private information without compromising on performance. In this paper, we propose a novel framework for controlling access to EHRs stored in semi-trusted cloud servers (e.g. a private cloud or a community cloud). To achieve fine-grained access control for EHRs, we leverage the ciphertext-policy attribute-based encryption (CP-ABE) technique to encrypt tables published by hospitals, including patients' EHRs, and the table is stored in the database with the primary key being the patient's unique identity. Our framework can enable different users with different privileges to search on different database fields. Differing from previous attempts to secure outsourcing of data, we emphasize the control of the searches of the fields within the database. We demonstrate the utility of the scheme by evaluating it using datasets from the University of California, Irvine.

  13. Preparation and properties of Ge4Se96 glass

    NASA Astrophysics Data System (ADS)

    Xu, Junfeng; Sun, Guozhen; Liu, Zhenting; Wang, Yaling; Jian, Zengyun

    2018-03-01

    The chalcogenide glass Ge4Se96 was prepared by the melt-quench method. The characteristic temperatures (the strain point, annealing point, glass transition point, yielding point and softening point) were determined by thermal analysis. The result shows that the glass transition point is 52 °C; the average thermal expansion follows ΔL/L0 = (0.0557 × T − 1.7576)/1000. The specific heat of Ge4Se96 glass was measured by the stepwise method. The difference in specific heat between the amorphous structure and the undercooled liquid was determined as ΔCp = 0.1812 − 5.20 × 10⁻⁴ T − 4.92 × 10⁻⁷ T² J K⁻¹ g⁻¹. Based on the result for ΔCp, the Gibbs energy and entropy curves were calculated, and the Kauzmann temperature was determined as Tk = 233.5 K, which lies between that for pure Se glass (216 K) and that for Ge7.4Se92.6 glass (250 K). This indicates that for Ge-Se glasses with low Ge content, the Kauzmann temperature increases with increasing Ge content.

  14. Robust point cloud classification based on multi-level semantic relationships for urban scenes

    NASA Astrophysics Data System (ADS)

    Zhu, Qing; Li, Yuan; Hu, Han; Wu, Bo

    2017-07-01

    The semantic classification of point clouds is a fundamental part of three-dimensional urban reconstruction. For datasets with high spatial resolution but significantly more noise, a general trend is to exploit more contextual information to offset the decreased discriminative power of features for classification. However, previous works adopting contextual information are either too restrictive or operate only in a small region. In this paper, we propose a point cloud classification method based on multi-level semantic relationships, including point-homogeneity, supervoxel-adjacency and class-knowledge constraints, which is more versatile and incrementally propagates the classification cues from individual points to the object level, formulating them as a graphical model. The point-homogeneity constraint clusters points with similar geometric and radiometric properties into regular-shaped supervoxels that correspond to the vertices in the graphical model. The supervoxel-adjacency constraint contributes to the pairwise interactions by providing explicit adjacent relationships between supervoxels. The class-knowledge constraint operates at the object level based on semantic rules, guaranteeing the classification correctness of supervoxel clusters at that level. International Society for Photogrammetry and Remote Sensing (ISPRS) benchmark tests have shown that the proposed method achieves state-of-the-art performance with an average per-area completeness and correctness of 93.88% and 95.78%, respectively. The evaluation on classification of photogrammetric point clouds and DSMs generated from aerial imagery confirms the method's reliability in several challenging urban scenes.

  15. Terrain Extraction by Integrating Terrestrial Laser Scanner Data and Spectral Information

    NASA Astrophysics Data System (ADS)

    Lau, C. L.; Halim, S.; Zulkepli, M.; Azwan, A. M.; Tang, W. L.; Chong, A. K.

    2015-10-01

    The extraction of true terrain points from unstructured laser point cloud data is an important process for producing an accurate digital terrain model (DTM). However, most spatial filtering methods utilize only the geometric data to discriminate terrain points from non-terrain points. Point cloud filtering can also be improved by using the spectral information available with some scanners. Therefore, the objective of this study is to investigate the effectiveness of using the three channels (red, green and blue) of the colour image captured by the built-in digital camera available in some Terrestrial Laser Scanners (TLS) for terrain extraction. In this study, the data acquisition was conducted at a mini replica landscape at Universiti Teknologi Malaysia (UTM), Skudai campus, using a Leica ScanStation C10. The spectral information of the coloured point clouds from selected sample classes is extracted for spectral analysis. The coloured points that fall within the corresponding preset spectral thresholds are identified as belonging to that specific feature class. This terrain extraction process is implemented in Matlab code developed for the study. The results demonstrate that a passive image of higher spectral resolution is required in order to improve the output, because the low quality of the colour images captured by the sensor contributes to low separability in spectral reflectance. In conclusion, this study shows that spectral information can be used as a parameter for terrain extraction.
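    The core of the spectral selection step above is a per-channel threshold window applied to the colour attributes of the points. A minimal sketch (the study used Matlab; this Python version and its threshold values are illustrative only):

```python
import numpy as np

def filter_by_spectral_threshold(points, rgb, lo, hi):
    """Keep coloured points whose (R, G, B) values fall inside a preset
    per-channel window. `points` is (N, 3) coordinates, `rgb` is (N, 3)
    colours; `lo`/`hi` are the 3-channel window bounds."""
    lo, hi = np.asarray(lo), np.asarray(hi)
    mask = np.all((rgb >= lo) & (rgb <= hi), axis=1)
    return points[mask], mask
```

    The window bounds would come from the spectral analysis of sample classes (e.g. grass, soil) described in the abstract, one window per feature class.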

  16. Triton X-114 based cloud point extraction: a thermoreversible approach for separation/concentration and dispersion of nanomaterials in the aqueous phase.

    PubMed

    Liu, Jing-fu; Liu, Rui; Yin, Yong-guang; Jiang, Gui-bin

    2009-03-28

    Capable of preserving the sizes and shapes of nanomaterials during the phase transferring, Triton X-114 based cloud point extraction provides a general, simple, and cost-effective route for reversible concentration/separation or dispersion of various nanomaterials in the aqueous phase.

  17. Automatic Extraction of Road Markings from Mobile Laser-Point Cloud Using Intensity Data

    NASA Astrophysics Data System (ADS)

    Yao, L.; Chen, Q.; Qin, C.; Wu, H.; Zhang, S.

    2018-04-01

    With the development of intelligent transportation, high-precision road information has been widely applied in many fields. This paper proposes a concise and practical way to extract road marking information from point cloud data collected by a mobile mapping system (MMS). The method contains three steps. Firstly, the road surface is segmented through edge detection from scan lines. Then the intensity image is generated by inverse distance weighted (IDW) interpolation, and the road markings are extracted using adaptive threshold segmentation based on an integral image, without intensity calibration. Moreover, the noise is reduced by removing small plaques of pixels from the binary image. Finally, the point cloud mapped from the binary image is clustered into marking objects according to Euclidean distance, and a series of algorithms including template matching and feature attribute filtering is used for the classification of linear markings, arrow markings and guidelines. Processing point cloud data collected by a RIEGL VUX-1 in the case area shows that the F-score of marking extraction is 0.83, and the average classification rate is 0.9.
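    The integral-image adaptive thresholding step can be sketched as a Bradley-style binarisation of the IDW intensity image: a pixel is marked as marking if it exceeds its local window mean by a relative bias. Window size and bias below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def adaptive_threshold(img, win=15, bias=0.02):
    """Binarise an intensity image with a local mean computed in O(1)
    per pixel via an integral image; `win` must be odd."""
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    ii = padded.cumsum(0).cumsum(1)
    ii = np.pad(ii, ((1, 0), (1, 0)))          # zero row/col for clean sums
    h, w = img.shape
    r0, c0 = np.arange(h)[:, None], np.arange(w)[None, :]
    s = (ii[r0 + win, c0 + win] - ii[r0, c0 + win]
         - ii[r0 + win, c0] + ii[r0, c0])      # window sums for all pixels
    mean = s / (win * win)
    return img > mean * (1 + bias)
```

    Because the threshold is local, bright markings stand out even when the uncalibrated intensity drifts across the road surface.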

  18. Point Cloud Based Approach to Stem Width Extraction of Sorghum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Jihui; Zakhor, Avideh

    A revolution in the field of genomics has produced vast amounts of data and furthered our understanding of the genotype-phenotype map, but is currently constrained by manually intensive or limited phenotype data collection. We propose an algorithm to estimate stem width, a key characteristic used for biomass potential evaluation, from 3D point cloud data collected by a robot equipped with a depth sensor in a single pass in a standard field. The algorithm applies a two-step alignment to register point clouds in different frames, a Frangi filter to identify stem-like objects in the point cloud, and an orientation-based filter to segment out and refine individual stems for width estimation. Detected stems which are split due to occlusions are merged and then registered with stems found in previous camera frames in order to track them temporally. We then refine the estimates to produce an accurate histogram of width estimates per plot. Since the plants in each plot are genetically identical, distributions of the stem width per plot can be useful in identifying genetically superior sorghum for biofuels.

  19. Airborne LIDAR point cloud tower inclination judgment

    NASA Astrophysics Data System (ADS)

    liang, Chen; zhengjun, Liu; jianguo, Qian

    2016-11-01

    Inclined transmission line towers pose a great threat to the safe operation of the line, so the ability to judge tower inclination effectively, quickly and accurately plays a key role in the safety and security of a power supply company. In recent years, with the development of unmanned aerial vehicles, UAVs equipped with a laser scanner, GPS and inertial navigation have become an increasingly common high-precision 3D remote sensing system in the electricity sector. Point clouds from airborne laser scanning visually show the whole picture of the three-dimensional spatial information of power line corridors, such as the line facilities and equipment, terrain and trees. Currently, research in the LIDAR point cloud field has not yet produced an algorithm to determine tower inclination. In this paper, tower bases are extracted from an existing power line corridor and, based on an analysis of tower shape characteristics, two different methods, vertical stratification combined with a convex hull algorithm, are applied to the dense and sparse point cloud cases to judge tower inclination; the results show high reliability.

  20. Fast grasping of unknown objects using cylinder searching on a single point cloud

    NASA Astrophysics Data System (ADS)

    Lei, Qujiang; Wisse, Martijn

    2017-03-01

    Grasping of unknown objects, with neither appearance data nor object models given in advance, is very important for robots that work in an unfamiliar environment. The goal of this paper is to quickly synthesize an executable grasp for one unknown object by using cylinder searching on a single point cloud. Specifically, a 3D camera is first used to obtain a partial point cloud of the target unknown object. An original method is then employed to post-process the partial point cloud to minimize the uncertainty which may lead to grasp failure. In order to accelerate the grasp searching, the surface normals of the target object are then used to constrain the synthesis of the cylinder grasp candidates. Operability analysis is then used to select all executable grasp candidates, followed by force balance optimization to choose the most reliable grasp as the final grasp execution. In order to verify the effectiveness of our algorithm, simulations on a Universal Robots UR5 arm and an under-actuated Lacquey Fetch gripper are used to examine its performance, and successful results are obtained.

  1. Sloped Terrain Segmentation for Autonomous Drive Using Sparse 3D Point Cloud

    PubMed Central

    Cho, Seoungjae; Kim, Jonghyun; Ikram, Warda; Cho, Kyungeun; Sim, Sungdae

    2014-01-01

    A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous drive. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D) point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR) sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels) and overlapping data is eliminated. We reduce nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We execute ground segmentation in real time by proposing an approach to minimize the comparison between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed at about 19.31 ms per frame. PMID:25093204
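    The voxel quantisation and lowermost-heightmap reduction described above can be sketched compactly: points are binned into voxels, overlapping data collapses automatically, and only the lowest occupied voxel per (x, y) column is kept. Voxel size and the dict-based return are our own illustrative choices:

```python
import numpy as np

def lowermost_heightmap(points, voxel=0.5):
    """Quantise an (N, 3) point cloud into voxels and keep, per (x, y)
    column, only the lowest occupied voxel height. Returns a dict
    mapping (ix, iy) -> lowest z voxel index."""
    idx = np.floor(points / voxel).astype(int)   # voxel indices per point
    hm = {}
    for ix, iy, iz in idx:
        key = (ix, iy)
        if key not in hm or iz < hm[key]:        # keep lowermost voxel only
            hm[key] = iz
    return hm
```

    Ground segmentation then operates on this 2D structure, which is far smaller than the raw cloud and supports the real-time rates reported in the abstract.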

  2. Drawing and Landscape Simulation for Japanese Garden by Using Terrestrial Laser Scanner

    NASA Astrophysics Data System (ADS)

    Kumazaki, R.; Kunii, Y.

    2015-05-01

    Recently, laser scanners have been applied in various measurement fields. This paper investigates the usefulness of the terrestrial laser scanner in the field of landscape architecture and examines its usage in a Japanese garden. To date, the use of 3D point cloud data of Japanese gardens has been mostly visual, such as for animations. Therefore, some further applications of the 3D point cloud data were investigated, as follows. Firstly, an ortho image of the Japanese garden could be output from the 3D point cloud data. Secondly, contour lines of the Japanese garden could also be extracted, making drawing possible. Consequently, drawing of the Japanese garden was realized more efficiently thanks to this laborsaving. Moreover, the measurement and drawing operations can be performed without technical skills, so any observer can operate them. Furthermore, the 3D point cloud data could be edited, and some landscape simulations, such as the extraction and placement of trees or other objects, became possible. As a result, it can be said that the terrestrial laser scanner will be applied more widely in the landscape architecture field.

  3. plas.io: Open Source, Browser-based WebGL Point Cloud Visualization

    NASA Astrophysics Data System (ADS)

    Butler, H.; Finnegan, D. C.; Gadomski, P. J.; Verma, U. K.

    2014-12-01

    Point cloud data, in the form of Light Detection and Ranging (LiDAR), RADAR, or semi-global matching (SGM) image processing, are rapidly becoming a foundational data type to quantify and characterize geospatial processes. Visualization of these data, due to overall volume and irregular arrangement, is often difficult. Technological advancements in web browsers, in the form of WebGL and HTML5, have made interactivity and visualization capabilities ubiquitously available which once existed only in desktop software. plas.io is an open source JavaScript application that provides point cloud visualization, exploitation, and compression features in a web-browser platform, reducing the reliance on client-based desktop applications. The wide reach of WebGL and browser-based technologies means plas.io's capabilities can be delivered to a diverse list of devices -- from phones and tablets to high-end workstations -- with very little custom software development. These properties make plas.io an ideal open platform for researchers and software developers to communicate visualizations of complex and rich point cloud data to devices to which everyone has easy access.

  4. Sensor- and Scene-Guided Integration of TLS and Photogrammetric Point Clouds for Landslide Monitoring

    NASA Astrophysics Data System (ADS)

    Zieher, T.; Toschi, I.; Remondino, F.; Rutzinger, M.; Kofler, Ch.; Mejia-Aguilar, A.; Schlögel, R.

    2018-05-01

    Terrestrial and airborne 3D imaging sensors are well-suited data acquisition systems for the area-wide monitoring of landslide activity. State-of-the-art surveying techniques, such as terrestrial laser scanning (TLS) and photogrammetry based on unmanned aerial vehicle (UAV) imagery or terrestrial acquisitions have advantages and limitations associated with their individual measurement principles. In this study we present an integration approach for 3D point clouds derived from these techniques, aiming at improving the topographic representation of landslide features while enabling a more accurate assessment of landslide-induced changes. Four expert-based rules involving local morphometric features computed from eigenvectors, elevation and the agreement of the individual point clouds, are used to choose within voxels of selectable size which sensor's data to keep. Based on the integrated point clouds, digital surface models and shaded reliefs are computed. Using an image correlation technique, displacement vectors are finally derived from the multi-temporal shaded reliefs. All results show comparable patterns of landslide movement rates and directions. However, depending on the applied integration rule, differences in spatial coverage and correlation strength emerge.

  5. Cloud point phenomena for POE-type nonionic surfactants in a model room temperature ionic liquid.

    PubMed

    Inoue, Tohru; Misono, Takeshi

    2008-10-15

    The cloud point phenomenon has been investigated for solutions of polyoxyethylene (POE)-type nonionic surfactants (C(12)E(5), C(12)E(6), C(12)E(7), C(10)E(6), and C(14)E(6)) in 1-butyl-3-methylimidazolium tetrafluoroborate (bmimBF(4)), a typical room temperature ionic liquid (RTIL). The cloud point, T(c), increases with the elongation of the POE chain, while it decreases with increasing hydrocarbon chain length. This demonstrates that the solvophilicity/solvophobicity of the surfactants in the RTIL comes from the POE chain/hydrocarbon chain. When compared with an aqueous system, the chain length dependence of T(c) is larger for the RTIL system with regard to both POE and hydrocarbon chains; in particular, the hydrocarbon chain length affects T(c) much more strongly in the RTIL system than in equivalent aqueous systems. In a similar fashion to the much-studied aqueous systems, micellar growth is also observed in this RTIL solvent as the temperature approaches T(c). The cloud point curves have been analyzed using a Flory-Huggins-type model based on phase separation in polymer solutions.
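    For orientation, the standard critical point of the Flory-Huggins free energy of mixing, ΔG_mix/RT = (φ/N) ln φ + (1−φ) ln(1−φ) + χ φ(1−φ) for a chain of N segments, is the kind of polymer-solution result such models build on. A textbook formula, not the authors' specific fitting procedure:

```python
import numpy as np

def fh_critical(N):
    """Critical composition phi_c and interaction parameter chi_c of the
    Flory-Huggins lattice model for an N-segment chain in solvent."""
    phi_c = 1.0 / (1.0 + np.sqrt(N))
    chi_c = 0.5 * (1.0 + 1.0 / np.sqrt(N)) ** 2
    return phi_c, chi_c
```

    Fitting a temperature dependence χ(T) through this framework yields the cloud point curve; micellar growth near T(c), as in the abstract, effectively increases N and thus lowers χ_c.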

  6. Road traffic sign detection and classification from mobile LiDAR point clouds

    NASA Astrophysics Data System (ADS)

    Weng, Shengxia; Li, Jonathan; Chen, Yiping; Wang, Cheng

    2016-03-01

    Traffic signs are important roadway assets that provide valuable road information for drivers, enabling safer and easier driving behaviors. Due to the development of mobile mapping systems that can efficiently acquire dense point clouds along the road, automated detection and recognition of road assets has become an important research issue. This paper deals with the detection and classification of traffic signs in outdoor environments using mobile light detection and ranging (LiDAR) and inertial navigation technologies. The proposed method contains two main steps. It starts with an initial detection of traffic signs based on the intensity attributes of point clouds, as traffic signs are always painted with highly reflective materials. Then, the classification of traffic signs is achieved based on the geometric shape and the pairwise 3D shape context. Some results and performance analyses are provided to show the effectiveness and limits of the proposed method. The experimental results demonstrate the feasibility and effectiveness of the proposed method in detecting and classifying traffic signs from mobile LiDAR point clouds.

  7. Digital Investigations of AN Archaeological Smart Point Cloud: a Real Time Web-Based Platform to Manage the Visualisation of Semantical Queries

    NASA Astrophysics Data System (ADS)

    Poux, F.; Neuville, R.; Hallot, P.; Van Wersch, L.; Luczfalvy Jancsó, A.; Billen, R.

    2017-05-01

    While virtual copies of the real world tend to be created faster than ever through point clouds and derivatives, their proficient use by all professionals demands adapted tools to facilitate knowledge dissemination. Digital investigations are changing the way cultural heritage researchers, archaeologists, and curators work and collaborate to progressively aggregate expertise through one common platform. In this paper, we present a web application in a WebGL framework accessible on any HTML5-compatible browser. It allows real-time point cloud exploration of the mosaics in the Oratory of Germigny-des-Prés, and emphasises ease of use as well as performance. Our reasoning engine is constructed over a semantically rich point cloud data structure, where metadata has been injected a priori. We developed a tool that directly allows semantic extraction and visualisation of pertinent information for the end users. It leads to efficient communication between actors by proposing optimal 3D viewpoints as a basis on which interactions can grow.

  8. Point Cloud Based Approach to Stem Width Extraction of Sorghum

    DOE PAGES

    Jin, Jihui; Zakhor, Avideh

    2017-01-29

    A revolution in the field of genomics has produced vast amounts of data and furthered our understanding of the genotype-phenotype map, but is currently constrained by manually intensive or limited phenotype data collection. We propose an algorithm to estimate stem width, a key characteristic used for biomass potential evaluation, from 3D point cloud data collected by a robot equipped with a depth sensor in a single pass in a standard field. The algorithm applies a two-step alignment to register point clouds in different frames, a Frangi filter to identify stem-like objects in the point cloud, and an orientation-based filter to segment out and refine individual stems for width estimation. Detected stems which are split due to occlusions are merged and then registered with stems found in previous camera frames in order to track them temporally. We then refine the estimates to produce an accurate histogram of width estimates per plot. Since the plants in each plot are genetically identical, distributions of the stem width per plot can be useful in identifying genetically superior sorghum for biofuels.

  9. Large-Scale Point-Cloud Visualization through Localized Textured Surface Reconstruction.

    PubMed

    Arikan, Murat; Preiner, Reinhold; Scheiblauer, Claus; Jeschke, Stefan; Wimmer, Michael

    2014-09-01

    In this paper, we introduce a novel scene representation for the visualization of large-scale point clouds accompanied by a set of high-resolution photographs. Many real-world applications deal with very densely sampled point-cloud data, which are augmented with photographs that often reveal lighting variations and inaccuracies in registration. Consequently, the high-quality representation of the captured data, i.e., both point clouds and photographs together, is a challenging and time-consuming task. We propose a two-phase approach, in which the first (preprocessing) phase generates multiple overlapping surface patches and handles the problem of seamless texture generation locally for each patch. The second phase stitches these patches at render-time to produce a high-quality visualization of the data. As a result of the proposed localization of the global texturing problem, our algorithm is more than an order of magnitude faster than equivalent mesh-based texturing techniques. Furthermore, since our preprocessing phase requires only a minor fraction of the whole data set at once, we provide maximum flexibility when dealing with growing data sets.

  10. Deformation analysis of a sinkhole in Thuringia using multi-temporal multi-view stereo 3D reconstruction data

    NASA Astrophysics Data System (ADS)

    Petschko, Helene; Goetz, Jason; Schmidt, Sven

    2017-04-01

    Sinkholes are a serious threat to life, personal property and infrastructure in large parts of Thuringia. Over 9000 sinkholes have been documented by the Geological Survey of Thuringia; they are caused by collapsing hollows that formed due to solution processes within the local bedrock material. However, little is known about surface processes and their dynamics at the flanks of a sinkhole once it has formed. These processes are of high interest as they might lead to dangerous situations at or within the vicinity of the sinkhole. Our objective was the analysis of these deformations over time in 3D by applying terrestrial photogrammetry with a simple DSLR camera. Within this study, we performed an analysis of deformations within a sinkhole close to Bad Frankenhausen (Thuringia) using terrestrial photogrammetry and multi-view stereo 3D reconstruction to obtain a 3D point cloud describing the morphology of the sinkhole. This was performed for multiple data collection campaigns over a 6-month period. The photos of the sinkhole were taken with a Nikon D3000 SLR camera. For the comparison of the point clouds, the Multiscale Model to Model Comparison (M3C2) plugin of the software CloudCompare was used. It allows the application of advanced point cloud difference calculation methods that consider the co-registration error between two point clouds when assessing the significance of the calculated difference (given in meters). Three Styrofoam cuboids of known dimensions (16 cm wide / 29 cm high / 11.5 cm deep) were placed within the sinkhole to test the accuracy of the point cloud difference calculation. The multi-view stereo 3D reconstruction was performed with Agisoft Photoscan. Preliminary analysis indicates that about 26% of the sinkhole showed changes exceeding the co-registration error of the point clouds. The areas of change can mainly be detected on the flanks of the sinkhole and on an earth pillar that formed in the center of the sinkhole.
These changes describe toppling (positive change of a few centimeters at the earth pillar) and a few erosion processes along the flanks (negative change of a few centimeters) compared to the first date of data acquisition. Additionally, the Styrofoam cuboids have successfully been detected with an observed depth change of 10 cm. However, the limitations of this approach related to the co-registration of the point clouds and data acquisition (windy conditions) have to be analyzed in more detail.
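    A plain nearest-neighbour (cloud-to-cloud) distance, which M3C2 refines by measuring along surface normals and accounting for registration error, can be sketched with SciPy (names and the synthetic example are illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_distance(reference, compared):
    """Distance from each point of `compared` to its nearest neighbour in
    `reference` (both (N, 3) arrays). Unlike M3C2 this is unsigned and
    ignores surface normals and co-registration error."""
    dist, _ = cKDTree(reference).query(compared, k=1)
    return dist

# Reference: a flat patch at z = 0; compared: the same patch lifted by 5 cm
g = np.linspace(0.0, 1.0, 20)
xy = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
ref = np.column_stack([xy, np.zeros(len(xy))])
lifted = ref + np.array([0.0, 0.0, 0.05])
d = cloud_to_cloud_distance(ref, lifted)  # every distance should be 0.05
```

    The significance test described above additionally compares each measured difference against the local co-registration uncertainty, which this sketch omits.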

  11. The Use of Compensation Strategies in the Iranian EFL Learners' Speaking and Its Relationship with Their Foreign Language Proficiency

    ERIC Educational Resources Information Center

    Taheri, Ali Akbar; Davoudi, Mohammad

    2016-01-01

    Compensation Strategies (CpSs) are strategies that a language user employs in order to achieve his or her intended meaning when precise linguistic forms are for some reason not available at that point in communication. Different factors may influence the use of CpSs, among which the level of language proficiency is one of the most important. The…

  12. Understanding tungsten divertor sourcing and SOL transport using multiple poloidally-localized sources in DIII-D ELM-y H-mode discharges

    NASA Astrophysics Data System (ADS)

    Unterberg, Ea; Donovan, D.; Barton, J.; Wampler, Wr; Abrams, T.; Thomas, Dm; Petrie, T.; Guo, Hy; Stangeby, Pg; Elder, Jd; Rudakov, D.; Grierson, B.; Victor, B.

    2017-10-01

    Experiments using metal inserts with novel isotopically-enriched tungsten coatings at the outer divertor strike point (OSP) have provided unique insight into the ELM-induced sourcing, main-SOL transport, and core accumulation control mechanisms of W for a range of operating conditions. This experimental approach has used a multi-head, dual-facing collector probe (CP) at the outboard midplane, as well as W-I and core W spectroscopy. Using the CP system, the total amount of W deposited relative to source measurements shows a clear dependence on ELM size, ELM frequency, and strike point location, with large ELMs depositing significantly more W on the CP from the far-SOL source. Additionally, high-spatial-resolution (~1 mm), ELM-resolved spectroscopic measurements of W sourcing indicate shifts in the peak erosion rate. Furthermore, high performance discharges with rapid ELMs show core W concentrations of a few 10^-5, and the CP deposition profile indicates W is predominantly transported to the midplane from the OSP rather than from the far-SOL region. The low central W concentration is shown to be due to flattening of the main plasma density profile, presumably by on-axis electron cyclotron heating. Work supported under USDOE Cooperative Agreement DE-FC02-04ER54698.

  13. Serum ceruloplasmin protein expression and activity increases in iron-deficient rats and is further enhanced by higher dietary copper intake

    PubMed Central

    Ranganathan, Perungavur N.; Lu, Yan; Jiang, Lingli; Kim, Changae

    2011-01-01

    Increases in serum and liver copper content are noted during iron deficiency in mammals, suggesting that copper-dependent processes participate during iron deprivation. One point of intersection between the 2 metals is the liver-derived, multicopper ferroxidase ceruloplasmin (Cp) that is important for iron release from certain tissues. The current study sought to explore Cp expression and activity during physiologic states in which hepatic copper loading occurs (eg, iron deficiency). Weanling rats were fed control or low iron diets containing low, normal, or high copper for ∼ 5 weeks, and parameters of iron homeostasis were measured. Liver copper increased in control and iron-deficient rats fed extra copper. Hepatic Cp mRNA levels did not change; however, serum Cp protein was higher during iron deprivation and with higher copper consumption. In-gel and spectrophotometric ferroxidase and amine oxidase assays demonstrated that Cp activity was enhanced when hepatic copper loading occurred. Interestingly, liver copper levels strongly correlated with Cp protein expression and activity. These observations support the possibility that liver copper loading increases metallation of the Cp protein, leading to increased production of the holo enzyme. Moreover, this phenomenon may play an important role in the compensatory response to maintain iron homeostasis during iron deficiency. PMID:21768302

  14. Effect of vowel context on test-retest nasalance score variability in children with and without cleft palate.

    PubMed

    Ha, Seunghee; Jung, Seungeun; Koh, Kyung S

    2018-06-01

    The purpose of this study was to determine whether test-retest nasalance score variability differs between Korean children with and without cleft palate (CP) and vowel context influences variability in nasalance score. Thirty-four 3-to-5-year-old children with and without CP participated in the study. Three 8-syllable speech stimuli devoid of nasal consonants were used for data collection. Each stimulus was loaded with high, low, or mixed vowels, respectively. All participants were asked to repeat the speech stimuli twice after the examiner, and an immediate test-retest nasalance score was assessed with no headgear change. Children with CP exhibited significantly greater absolute difference in nasalance scores than children without CP. Variability in nasalance scores was significantly different for the vowel context, and the high vowel sentence showed a significantly larger difference in nasalance scores than the low vowel sentence. The cumulative frequencies indicated that, for children with CP in the high vowel sentence, only 8 of 17 (47%) repeated nasalance scores were within 5 points. Test-retest nasalance score variability was greater for children with CP than children without CP, and there was greater variability for the high vowel sentence(s) for both groups. Copyright © 2018 Elsevier B.V. All rights reserved.

  15. Ifcwall Reconstruction from Unstructured Point Clouds

    NASA Astrophysics Data System (ADS)

    Bassier, M.; Klein, R.; Van Genechten, B.; Vergauwen, M.

    2018-05-01

    The automated reconstruction of Building Information Modeling (BIM) objects from point cloud data is still ongoing research. A key aspect is the creation of accurate wall geometry, as it forms the basis for further reconstruction of objects in a BIM. After segmenting and classifying the initial point cloud, the labelled segments are processed and the wall topology is reconstructed. However, the procedure is challenging due to noise, occlusions and the complexity of the input data. In this work, a method is presented to automatically reconstruct consistent wall geometry from point clouds. More specifically, the use of room information is proposed to aid the wall topology creation. First, a set of partial walls is constructed based on classified planar primitives. Next, the rooms are identified using the retrieved wall information along with the floors and ceilings. The wall topology is computed by the intersection of the partial walls conditioned on the room information. The final wall geometry is defined by creating IfcWallStandardCase objects conforming to the IFC4 standard. The result is a set of walls according to the as-built conditions of a building. The experiments prove that the proposed method is a reliable framework for wall reconstruction from unstructured point cloud data. Also, the implementation of room information reduces the rate of false positives for the wall topology. Given the walls, ceilings and floors, 94% of the rooms are correctly identified. A key advantage of the proposed method is that it deals with complex rooms and is not bound to single storeys.

  16. Automatic Generation of Indoor Navigable Space Using a Point Cloud and its Scanner Trajectory

    NASA Astrophysics Data System (ADS)

    Staats, B. R.; Diakité, A. A.; Voûte, R. L.; Zlatanova, S.

    2017-09-01

    Automatic generation of indoor navigable models is mostly based on 2D floor plans. However, in many cases the floor plans are out of date. Buildings are not always built according to their blueprints, interiors might change after a few years because of modified walls and doors, and furniture may be repositioned to the user's preferences. Therefore, new approaches for the quick recording of indoor environments should be investigated. This paper concentrates on laser scanning with a Mobile Laser Scanner (MLS) device. The MLS device stores a point cloud and its trajectory. If the MLS device is operated by a human, the trajectory contains information which can be used to distinguish different surfaces. In this paper a method is presented for the identification of walkable surfaces based on the analysis of the point cloud and the trajectory of the MLS scanner. This method consists of several steps. First, the point cloud is voxelized. Second, the trajectory is analysed and projected to acquire seed voxels. Third, these seed voxels are grown into floor regions by means of a region-growing process. By identifying dynamic objects, doors and furniture, these floor regions can be modified so that each region represents a specific navigable space inside a building as a free navigable voxel space. By combining the point cloud and its corresponding trajectory, the walkable space can be identified for any type of building, even if the interior is scanned during business hours.
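    The voxelization step can be sketched as follows (assuming NumPy; a grid anchored at the origin with a uniform voxel size is a simplifying assumption):

```python
import numpy as np

def voxelize(points, voxel_size):
    """Map an (N, 3) point cloud onto an integer voxel grid.

    Returns the occupied voxel indices and, for each point, the index of
    the occupied voxel it falls into (a label usable for region growing).
    """
    idx = np.floor(points / voxel_size).astype(int)
    occupied, labels = np.unique(idx, axis=0, return_inverse=True)
    return occupied, labels

pts = np.array([[0.1, 0.1, 0.1],
                [0.2, 0.1, 0.1],   # falls in the same 0.5 m voxel as above
                [0.9, 0.1, 0.1]])  # falls in a different voxel
occ, lab = voxelize(pts, 0.5)
```

    Region growing then proceeds over the occupied voxels, starting from the seed voxels derived from the scanner trajectory.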

  17. Rapid, semi-automatic fracture and contact mapping for point clouds, images and geophysical data

    NASA Astrophysics Data System (ADS)

    Thiele, Samuel T.; Grose, Lachlan; Samsu, Anindita; Micklethwaite, Steven; Vollgger, Stefan A.; Cruden, Alexander R.

    2017-12-01

    The advent of large digital datasets from unmanned aerial vehicle (UAV) and satellite platforms now challenges our ability to extract information across multiple scales in a timely manner, often meaning that the full value of the data is not realised. Here we adapt a least-cost-path solver and specially tailored cost functions to rapidly interpolate structural features between manually defined control points in point cloud and raster datasets. We implement the method in the geographic information system QGIS and the point cloud and mesh processing software CloudCompare. Using these implementations, the method can be applied to a variety of three-dimensional (3-D) and two-dimensional (2-D) datasets, including high-resolution aerial imagery, digital outcrop models, digital elevation models (DEMs) and geophysical grids. We demonstrate the algorithm with four diverse applications in which we extract (1) joint and contact patterns in high-resolution orthophotographs, (2) fracture patterns in a dense 3-D point cloud, (3) earthquake surface ruptures of the Greendale Fault associated with the Mw7.1 Darfield earthquake (New Zealand) from high-resolution light detection and ranging (lidar) data, and (4) oceanic fracture zones from bathymetric data of the North Atlantic. The approach improves the consistency of the interpretation process while retaining expert guidance and achieves significant improvements (35-65 %) in digitisation time compared to traditional methods. Furthermore, it opens up new possibilities for data synthesis and can quantify the agreement between datasets and an interpretation.
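    The least-cost-path core of such a tool can be sketched as a generic Dijkstra search over a cost raster (the specially tailored cost functions of the paper are replaced here by the raster value itself; all names are illustrative):

```python
import heapq

def least_cost_path(cost, start, goal):
    """Dijkstra over a 2-D cost grid (4-connected). A step pays the cost
    of the cell being entered; the start cell's own cost is also counted.
    Returns (total cost, list of (row, col) cells from start to goal)."""
    rows, cols = len(cost), len(cost[0])
    best = {start: cost[start[0]][start[1]]}
    prev = {}
    heap = [(best[start], start)]
    while heap:
        c, node = heapq.heappop(heap)
        if node == goal:
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return c, path[::-1]
        if c > best[node]:
            continue  # stale heap entry
        r, col = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r2, c2 = r + dr, col + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                cand = c + cost[r2][c2]
                if cand < best.get((r2, c2), float("inf")):
                    best[(r2, c2)] = cand
                    prev[(r2, c2)] = node
                    heapq.heappush(heap, (cand, (r2, c2)))
    return float("inf"), []

# A high-cost cell in the middle; the cheap path detours around it
grid = [[1, 1, 1],
        [1, 9, 1],
        [1, 1, 1]]
total, path = least_cost_path(grid, (0, 0), (2, 2))  # total cost 5
```

    In the interpretation setting, the cost raster would be derived from image gradients or point-cloud geometry so the path snaps to the structural feature between the user's control points.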

  18. Differential molar heat capacities to test ideal solubility estimations.

    PubMed

    Neau, S H; Bhandarkar, S V; Hellmuth, E W

    1997-05-01

    Calculation of the ideal solubility of a crystalline solute in a liquid solvent requires knowledge of the difference in the molar heat capacity at constant pressure of the solid and the supercooled liquid forms of the solute, delta Cp. Since this parameter is not usually known, two assumptions have been used to simplify the expression. The first is that delta Cp can be considered equal to zero; the alternate assumption is that the molar entropy of fusion, delta Sf, is an estimate of delta Cp. Reports claiming the superiority of one assumption over the other, on the basis of calculations done using experimentally determined parameters, have appeared in the literature. The validity of the assumptions in predicting the ideal solubility of five structurally unrelated compounds of pharmaceutical interest, with melting points in the range 420 to 470 K, was evaluated in this study. Solid and liquid heat capacities of each compound near its melting point were determined using differential scanning calorimetry. Linear equations describing the heat capacities were extrapolated to the melting point to generate the differential molar heat capacity. Linear data were obtained for both crystal and liquid heat capacities of sample and test compounds. For each sample, ideal solubility at 298 K was calculated and compared to the two estimates generated using literature equations based on the differential molar heat capacity assumptions. For the compounds studied, delta Cp was not negligible and was closer to delta Sf than to zero. However, neither of the two assumptions was valid for accurately estimating the ideal solubility as given by the full equation.
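    For reference, the full expression behind the comparison is the ideal solubility equation, written here in a standard textbook form (the symbol x2 for the solute mole fraction is this sketch's notation, not necessarily the paper's):

```latex
\ln x_2 \;=\; -\,\frac{\Delta H_f\,(T_m - T)}{R\,T_m\,T}
\;+\; \frac{\Delta C_p\,(T_m - T)}{R\,T}
\;-\; \frac{\Delta C_p}{R}\,\ln\frac{T_m}{T}
```

    The first assumption (delta Cp = 0) keeps only the first term; the second (delta Cp = delta Sf = delta Hf / Tm) makes the first two terms cancel, leaving ln x2 = -(delta Sf / R) ln(Tm / T).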

  19. Study on the anatomic relationship between the clavicle and the coracoid process using computed tomography scans of the shoulder.

    PubMed

    Sella, Guilherme do Val; Miyazaki, Alberto N; Nico, Marcelo A C; Filho, Guinel H; Silva, Luciana A; Checchia, Sergio L

    2017-10-01

    The current trend in the treatment of acromioclavicular dislocations is to reconstruct the coracoclavicular ligaments by using transosseous tunnels in the coracoid process or in the clavicle, yet there is no definition as to the location of these. To study the anatomic relationship between the coracoid process and the clavicle, we made measurements to find a convergence point (cP) between them that has intraoperative applicability for creating transosseous tunnels. We analyzed 74 computed tomography scans (40 female and 34 male patients). Measurements were taken in the axial and sagittal planes and obtained from a cP, as determined by the intersection of the cortical surface of the clavicle and the coracoid process, with various relationships having been established. On average, the cP was determined to be about 2.9 cm and 2.5 cm distant from the coracoid process apex for male and female patients, respectively, whereas the width at this position was determined to be 2.1 cm and 1.9 cm. In the clavicle, this point is on average 2.9 cm and 2.5 cm distant from the acromioclavicular joint in male and female patients, respectively, and its anteroposterior width at this point is on average 1.9 cm and 1.6 cm. The cP of the clavicle and the coracoid process was determined with the aim of preparing bone tunnels in operations for treating acromioclavicular dislocations. Copyright © 2017 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.

  20. Low cost digital photogrammetry: From the extraction of point clouds by SFM technique to 3D mathematical modeling

    NASA Astrophysics Data System (ADS)

    Michele, Mangiameli; Giuseppe, Mussumeci; Salvatore, Zito

    2017-07-01

    The Structure From Motion (SFM) is a technique applied to a series of photographs of an object that returns a 3D reconstruction made up of points in space (point clouds). This research aims at comparing the results of the SFM approach with the results of a 3D laser scanning in terms of density and accuracy of the model. The experiment was conducted by detecting several architectural elements (walls and portals of historical buildings) both with a 3D laser scanner of the latest generation and an amateur photographic camera. The point clouds acquired by laser scanner and those acquired by the photo camera have been systematically compared. In particular, we present the work carried out on the "Don Diego Pappalardo Palace" site in Pedara (Catania, Sicily).

  1. Identification of a Compound Spinel and Silicate Presolar Grain in a Chondritic Interplanetary Dust Particle

    NASA Technical Reports Server (NTRS)

    Nguyen, A. N.; Nakamura-Messenger, K.; Messenger, S.; Keller, L. P.; Kloeck, W.

    2014-01-01

    Anhydrous chondritic porous interplanetary dust particles (CP IDPs) have undergone minimal parent body alteration and contain an assemblage of highly primitive materials, including molecular cloud material, presolar grains, and material that formed in the early solar nebula [1-3]. The exact parent bodies of individual IDPs are not known, but IDPs that have extremely high abundances of presolar silicates (up to 1.5%) most likely have cometary origins [1, 4]. The presolar grain abundance among these minimally altered CP IDPs varies widely. "Isotopically primitive" IDPs distinguished by anomalous bulk N isotopic compositions, numerous 15N-rich hotspots, and some C isotopic anomalies have higher average abundances of presolar grains (375 ppm) than IDPs with isotopically normal bulk N (<10 ppm) [5]. Some D and N isotopic anomalies have been linked to carbonaceous matter, though this material is only rarely isotopically anomalous in C [1, 5, 6]. Previous studies of the bulk chemistry and, in some samples, the mineralogy of select anhydrous CP IDPs indicate a link between high C abundance and pyroxene-dominated mineralogy [7]. In this study, we conduct coordinated mineralogical and isotopic analyses of samples that were analyzed by [7] to characterize isotopically anomalous materials and to establish possible correlations with C abundance.

  2. Facets : a Cloudcompare Plugin to Extract Geological Planes from Unstructured 3d Point Clouds

    NASA Astrophysics Data System (ADS)

    Dewez, T. J. B.; Girardeau-Montaut, D.; Allanic, C.; Rohmer, J.

    2016-06-01

    Geological planar facets (stratification, fault, joint…) are key features to unravel the tectonic history of a rock outcrop or appreciate the stability of a hazardous rock cliff. Measuring their spatial attitude (dip and strike) is generally performed by hand with a compass/clinometer, which is time consuming, requires some degree of censoring (i.e. refusing to measure some features judged unimportant at the time), is not always possible for fractures higher up on the outcrop and is somewhat hazardous. 3D virtual geological outcrops hold the potential to alleviate these issues. However, a tool for efficiently segmenting massive 3D point clouds into individual planar facets inside a convenient software environment was lacking. FACETS is a dedicated plugin within CloudCompare v2.6.2 (http://cloudcompare.org/) implemented to perform planar facet extraction, calculate their dip and dip direction (i.e. azimuth of steepest descent) and report the extracted data in interactive stereograms. Two algorithms perform the segmentation: Kd-Tree and Fast Marching. Both divide the point cloud into sub-cells, then compute elementary planar objects and aggregate them progressively according to a planarity threshold into polygons. The boundaries of the polygons are adjusted around segmented points with a tension parameter, and the facet polygons can be exported as 3D polygon shapefiles towards third-party GIS software or simply as ASCII comma-separated files. One of the great features of FACETS is the capability to explore planar objects, but also 3D points with normals, with the stereogram tool. Poles can be readily displayed, queried and manually segmented interactively. The plugin blends seamlessly into CloudCompare to leverage all its other 3D point cloud manipulation features. A demonstration of the tool is presented to illustrate these different features. While designed for geological applications, FACETS could be more widely applied to any planar objects.
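    Converting a facet normal into dip and dip direction can be sketched as follows (a generic z-up formula, assuming NumPy; this is not FACETS source code):

```python
import numpy as np

def dip_and_dip_direction(normal):
    """Dip (degrees from horizontal) and dip direction (azimuth of
    steepest descent, degrees clockwise from north = +y) of a plane
    given its normal vector, assuming z is up."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    if n[2] < 0.0:
        n = -n                     # make the normal point upward
    dip = np.degrees(np.arccos(n[2]))
    dip_direction = np.degrees(np.arctan2(n[0], n[1])) % 360.0
    return dip, dip_direction

# A plane dipping 45 degrees toward the east (+x) has normal ~ (1, 0, 1)
dip, ddir = dip_and_dip_direction((1.0, 0.0, 1.0))  # 45.0, 90.0
```

    The horizontal projection of the upward normal points down-dip, which is why the azimuth is taken from (n[0], n[1]).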

  3. Extracting valley-ridge lines from point-cloud-based 3D fingerprint models.

    PubMed

    Pang, Xufang; Song, Zhan; Xie, Wuyuan

    2013-01-01

    3D fingerprinting is an emerging technology with the distinct advantage of touchless operation. More important, 3D fingerprint models contain more biometric information than traditional 2D fingerprint images. However, current approaches to fingerprint feature detection usually must transform the 3D models to a 2D space through unwrapping or other methods, which might introduce distortions. A new approach directly extracts valley-ridge features from point-cloud-based 3D fingerprint models. It first applies the moving least-squares method to fit a local paraboloid surface and represent the local point cloud area. It then computes the local surface's curvatures and curvature tensors to facilitate detection of the potential valley and ridge points. The approach projects those points to the most likely valley-ridge lines, using statistical means such as covariance analysis and cross correlation. To finally extract the valley-ridge lines, it grows the polylines that approximate the projected feature points and removes the perturbations between the sampled points. Experiments with different 3D fingerprint models demonstrate this approach's feasibility and performance.
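    The local surface fit can be sketched with an ordinary least-squares paraboloid (the moving-least-squares weighting used in the approach is omitted for brevity; names are illustrative):

```python
import numpy as np

def fit_paraboloid(points):
    """Fit z = a x^2 + b x y + c y^2 + d x + e y + f to an (N, 3) point
    set by ordinary least squares and return (a, b, c, d, e, f)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

# Sample an exact paraboloid z = x^2 + 2 y^2 and recover its coefficients
rng = np.random.default_rng(1)
xy = rng.uniform(-1.0, 1.0, size=(200, 2))
pts = np.column_stack([xy, xy[:, 0] ** 2 + 2.0 * xy[:, 1] ** 2])
a, b, c, d, e, f = fit_paraboloid(pts)  # ~ (1, 0, 2, 0, 0, 0)
```

    Curvatures at the fitted surface follow from the second-order coefficients, which is what the valley/ridge detection builds on.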

  4. Wax inhibitor based on ethylene vinyl acetate with methyl methacrylate and diethanolamine for crude oil pipeline

    NASA Astrophysics Data System (ADS)

    Anisuzzaman, S. M.; Abang, S.; Bono, A.; Krishnaiah, D.; Karali, R.; Safuan, M. K.

    2017-06-01

    Wax precipitation and deposition are among the most significant flow assurance challenges in crude oil production systems. Wax inhibitors are developed as a preventive strategy to avoid severe wax deposition. Wax inhibitors are polymers also known as pour point depressants, as they impede wax crystal formation, growth, and deposition. In this study three wax inhibitor formulations were prepared, ethylene vinyl acetate (EVA), ethylene vinyl acetate co-methyl methacrylate (EVA co-MMA) and ethylene vinyl acetate co-diethanolamine (EVA co-DEA), and their efficiencies were compared in terms of cloud point, pour point, performance inhibition efficiency (%PIE) and viscosity. The cloud point and pour point for both EVA and EVA co-MMA were similar, 15°C and 10-5°C, respectively, whereas the cloud point and pour point for EVA co-DEA were better, 10°C and 10-5°C respectively. In conclusion, EVA co-DEA showed the best %PIE (28.42%), which indicates the highest percentage reduction of wax deposit as compared to the other two inhibitors.
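    %PIE is conventionally the percentage reduction of wax deposit relative to the uninhibited case; whether the authors used exactly this definition is an assumption, and the deposit masses below are invented for illustration:

```python
def inhibition_efficiency(wax_without, wax_with):
    """Performance inhibition efficiency (%PIE): percentage reduction of
    wax deposit mass relative to the case without inhibitor."""
    return (wax_without - wax_with) / wax_without * 100.0

# e.g. 9.5 g of deposit without inhibitor reduced to 6.8 g with inhibitor
pie = inhibition_efficiency(9.5, 6.8)  # about 28.4 %
```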

  5. Marine Boundary Layer Cloud Properties From AMF Point Reyes Satellite Observations

    NASA Technical Reports Server (NTRS)

    Jensen, Michael; Vogelmann, Andrew M.; Luke, Edward; Minnis, Patrick; Miller, Mark A.; Khaiyer, Mandana; Nguyen, Louis; Palikonda, Rabindra

    2007-01-01

    Cloud Diameter, C(sub D), offers a simple measure of Marine Boundary Layer (MBL) cloud organization. The diurnal cycle of cloud-physical properties and C(sub D) at Pt Reyes are consistent with previous work. The time series of C(sub D) can be used to identify distinct mesoscale organization regimes within the Pt. Reyes observation period.

  6. a New Approach for Subway Tunnel Deformation Monitoring: High-Resolution Terrestrial Laser Scanning

    NASA Astrophysics Data System (ADS)

    Li, J.; Wan, Y.; Gao, X.

    2012-07-01

    With the improvement of the accuracy and efficiency of laser scanning technology, high-resolution terrestrial laser scanning (TLS) can obtain highly precise point clouds and density distributions and can be applied to high-precision deformation monitoring of subway tunnels, high-speed railway bridges and other fields. In this paper, a new approach using a point-cloud segmentation method based on vectors of neighbor points and a surface fitting method based on moving least squares was proposed and applied to subway tunnel deformation monitoring in Tianjin, combined with a new high-resolution terrestrial laser scanner (Riegl VZ-400). There were three main procedures. Firstly, a point cloud consisting of several scans was registered by a linearized iterative least squares approach to improve the accuracy of registration, and several control points were acquired by total stations (TS) and then adjusted. Secondly, the registered point cloud was resampled and segmented based on vectors of neighbor points to select suitable points. Thirdly, the selected points were used to fit the subway tunnel surface with a moving least squares algorithm. Then a series of parallel sections obtained from a temporal series of fitted tunnel surfaces were compared to analyze the deformation. Finally, the results of the approach in the z direction were compared with the fiber optical displacement sensor approach and the results in the x, y directions were compared with TS respectively, and the comparison showed the accuracy errors in the x, y, z directions were about 1.5 mm, 2 mm and 1 mm, respectively. Therefore the new approach using high-resolution TLS can meet the demand of subway tunnel deformation monitoring.

  7. New from the Old - Measuring Coastal Cliff Change with Historical Oblique Aerial Photos

    NASA Astrophysics Data System (ADS)

    Warrick, J. A.; Ritchie, A.

    2016-12-01

    Oblique aerial photographs are commonly collected to document coastal landscapes. Here we show that these historical photographs can be used to develop topographic models with Structure-from-Motion (SfM) photogrammetric techniques if adequate photo-to-photo overlaps exist. Focusing on the 60-m high cliffs of Fort Funston, California, photographs from the California Coastal Records Project were combined with ground control points to develop topographic point clouds of the study area for five years between 2002 and 2010. Uncertainties in the results were assessed by comparing SfM-derived point clouds with airborne lidar data, and the differences between these data were related to the number and spatial distribution of ground control points used in the SfM analyses. With six or more ground control points the root mean squared error between the SfM and lidar data was less than 0.3 m (minimum = 0.18 m) and the mean systematic error was consistently less than 0.10 m. Because of the oblique orientation of the imagery, the SfM-derived point clouds provided coverage on vertical to overhanging portions of the cliff, and point densities from the SfM techniques averaged between 17 and 161 points/m2 on the cliff face. The time-series of topographic point clouds revealed many topographic changes, including landslides, rockfalls and the erosion of landslide talus along the Fort Funston beach. Thus, we concluded that historical oblique photographs, such as those generated by the California Coastal Records Project, can provide useful tools for mapping coastal topography and measuring coastal change.

  8. Min-Cut Based Segmentation of Airborne LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Ural, S.; Shan, J.

    2012-07-01

    Introducing an organization to the unstructured point cloud before extracting information from airborne lidar data is common in many applications. Aggregating the points with similar features into segments in 3-D which comply with the nature of actual objects is affected by the neighborhood, scale, features and noise, among other aspects. In this study, we present a min-cut based method for segmenting the point cloud. We first assess the neighborhood of each point in 3-D by investigating the local geometric and statistical properties of the candidates. Neighborhood selection is essential since point features are calculated within their local neighborhood. Following neighborhood determination, we calculate point features and determine the clusters in the feature space. We adapt a graph representation from image processing, which is especially used in pixel labeling problems, and establish it for unstructured 3-D point clouds. The edges of the graph connecting the points with each other and with nodes representing feature clusters hold the smoothness costs in the spatial domain and the data costs in the feature domain. Smoothness costs ensure spatial coherence, while data costs control the consistency with the representative feature clusters. This graph representation formalizes the segmentation task as an energy minimization problem. It allows the implementation of an approximate solution by min-cuts for a global minimum of this NP-hard minimization problem in low-order polynomial time. We test our method with an airborne lidar point cloud acquired with a maximum planned post spacing of 1.4 m and a vertical accuracy of 10.5 cm RMSE. We present the effects of neighborhood and feature determination on the segmentation results and assess the accuracy and efficiency of the implemented min-cut algorithm as well as its sensitivity to the parameters of the smoothness and data cost functions.
    We find that a smoothness cost that considers only a simple distance parameter does not strongly conform to the natural structure of the points. Including shape information within the energy function by assigning costs based on local properties may help to achieve a better representation for segmentation.
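    The max-flow/min-cut machinery behind this formulation can be illustrated on a toy graph (using SciPy's solver; the node layout and integer capacities below are invented for illustration, with two label nodes acting as source and sink):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_flow

# Nodes: 0 = source label, 4 = sink label, 1-3 = points to be segmented.
# Source/sink edges carry data costs; point-point edges carry smoothness.
capacity = np.zeros((5, 5), dtype=np.int32)
capacity[0, 1], capacity[0, 2], capacity[0, 3] = 8, 6, 1   # data costs (label A)
capacity[1, 4], capacity[2, 4], capacity[3, 4] = 1, 2, 9   # data costs (label B)
capacity[1, 2] = capacity[2, 1] = 3                        # smoothness costs
capacity[2, 3] = capacity[3, 2] = 3

result = maximum_flow(csr_matrix(capacity), 0, 4)
min_cut_value = result.flow_value  # equals the min cut, by max-flow/min-cut
```

    The cut separating {0, 1, 2} from {3, 4} severs edges of total capacity 1 + 2 + 3 + 1 = 7, which is exactly the maximum flow: points 1 and 2 receive label A, point 3 label B.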

  9. Clustering, randomness, and regularity in cloud fields: 2. Cumulus cloud fields

    NASA Astrophysics Data System (ADS)

    Zhu, T.; Lee, J.; Weger, R. C.; Welch, R. M.

    1992-12-01

    During the last decade a major controversy has been brewing concerning the proper characterization of cumulus convection. The prevailing view has been that cumulus clouds form in clusters, in which cloud spacing is closer than that found for the overall cloud field and which maintains its identity over many cloud lifetimes. This "mutual protection hypothesis" of Randall and Huffman (1980) has been challenged by the "inhibition hypothesis" of Ramirez and Bras (1990), which strongly suggests that the spatial distribution of cumuli must tend toward a regular distribution. A dilemma has resulted because observations have been reported to support both hypotheses. The present work reports a detailed analysis of cumulus cloud field spatial distributions based upon Landsat, Advanced Very High Resolution Radiometer, and Skylab data. Both nearest-neighbor and point-to-cloud cumulative distribution function statistics are investigated. The results show unequivocally that when both large and small clouds are included in the cloud field distribution, the cloud field always has a strong clustering signal. The strength of clustering is largest at cloud diameters of about 200-300 m, diminishing with increasing cloud diameter. In many cases, clusters of small clouds are found which are not closely associated with large clouds. As the small clouds are eliminated from consideration, the cloud field typically tends towards regularity. Thus it would appear that the "inhibition hypothesis" of Ramirez and Bras (1990) has been verified for the large clouds. However, these results are based upon the analysis of point processes. A more exact analysis also is made which takes into account the cloud size distributions. Since distinct clouds are by definition nonoverlapping, cloud size effects place a restriction upon the possible locations of clouds in the cloud field. 
The net effect of this analysis is that the large clouds appear to be randomly distributed, with only weak tendencies towards regularity. For clouds less than 1 km in diameter, the average nearest-neighbor distance is equal to 3-7 cloud diameters. For larger clouds, the ratio of cloud nearest-neighbor distance to cloud diameter increases sharply with increasing cloud diameter. This demonstrates that large clouds inhibit the growth of other large clouds in their vicinity. Nevertheless, this leads to random distributions of large clouds, not regularity.
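The nearest-neighbor statistic used above is straightforward to compute. The sketch below is a hypothetical illustration in NumPy (the random cloud centers and the 300 m diameter are invented toy values, not the study's data): it finds each cloud center's nearest-neighbor distance by brute force and expresses it in cloud diameters, the quantity reported as 3-7 diameters for sub-kilometer clouds.

```python
import numpy as np

def nearest_neighbor_distances(centers):
    """Distance from each cloud center to its nearest neighbor (brute force)."""
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self-distance
    return d.min(axis=1)

# Toy field: 100 random cloud centers over a 10 km x 10 km domain.
rng = np.random.default_rng(0)
centers = rng.uniform(0.0, 10_000.0, size=(100, 2))   # meters
nn = nearest_neighbor_distances(centers)

# Express spacing in cloud diameters for a hypothetical 300 m cloud,
# the scale at which the study found the strongest clustering.
spacing_in_diameters = nn / 300.0
```

For a real cloud field the centers would come from a segmented satellite scene, and a clustering versus regularity diagnosis would compare these distances against the expectation for a random (Poisson) field.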

  10. A case study of microphysical structures and hydrometeor phase in convection using radar Doppler spectra at Darwin, Australia

    DOE PAGES

    Riihimaki, Laura D.; Comstock, J. M.; Luke, E.; ...

    2017-07-12

To understand the microphysical processes that impact diabatic heating and cloud lifetimes in convection, we need to characterize the spatial distribution of supercooled liquid water. To address this observational challenge, ground-based vertically pointing active sensors at the Darwin Atmospheric Radiation Measurement site are used to classify cloud phase within a deep convective cloud. The cloud cannot be fully observed by a lidar due to signal attenuation. Therefore, we developed an objective method for identifying hydrometeor classes, including mixed-phase conditions, using k-means clustering on parameters that describe the shape of the Doppler spectra from vertically pointing Ka-band cloud radar. Furthermore, this approach shows that multiple, overlapping mixed-phase layers exist within the cloud, rather than a single region of supercooled liquid. Diffusional growth calculations show that the conditions for the Wegener-Bergeron-Findeisen process exist within one of these mixed-phase microstructures.
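The clustering step the authors describe can be illustrated with a minimal Lloyd's-algorithm k-means over spectral shape parameters. This is a generic sketch, not the authors' implementation; the two-feature toy input (standing in for quantities such as spectral width and skewness) is hypothetical.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal Lloyd's algorithm: assign each sample to the nearest
    centroid, then move each centroid to the mean of its members."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Toy feature vectors (e.g. spectral width, skewness) for two hypothetical
# hydrometeor populations; real inputs would come from Doppler spectra.
features = np.array([[0.2, 0.1], [0.3, 0.0], [1.8, 1.0], [1.9, 1.1]])
labels, _ = kmeans(features, k=2)
```

In practice the cluster labels would then be mapped onto hydrometeor classes (liquid, ice, mixed phase) using physical interpretation of each cluster's mean spectrum.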

  11. A case study of microphysical structures and hydrometeor phase in convection using radar Doppler spectra at Darwin, Australia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riihimaki, Laura D.; Comstock, J. M.; Luke, E.

To understand the microphysical processes that impact diabatic heating and cloud lifetimes in convection, we need to characterize the spatial distribution of supercooled liquid water. To address this observational challenge, ground-based vertically pointing active sensors at the Darwin Atmospheric Radiation Measurement site are used to classify cloud phase within a deep convective cloud. The cloud cannot be fully observed by a lidar due to signal attenuation. Therefore, we developed an objective method for identifying hydrometeor classes, including mixed-phase conditions, using k-means clustering on parameters that describe the shape of the Doppler spectra from vertically pointing Ka-band cloud radar. Furthermore, this approach shows that multiple, overlapping mixed-phase layers exist within the cloud, rather than a single region of supercooled liquid. Diffusional growth calculations show that the conditions for the Wegener-Bergeron-Findeisen process exist within one of these mixed-phase microstructures.

  12. Determination of Large-Scale Cloud Ice Water Concentration by Combining Surface Radar and Satellite Data in Support of ARM SCM Activities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Guosheng

    2013-03-15

Single-column modeling (SCM) is one of the key elements of Atmospheric Radiation Measurement (ARM) research initiatives for the development and testing of various physical parameterizations to be used in general circulation models (GCMs). The data required for use with an SCM include observed vertical profiles of temperature, water vapor, and condensed water, as well as the large-scale vertical motion and tendencies of temperature, water vapor, and condensed water due to horizontal advection. Surface-based measurements operated at ARM sites and upper-air sounding networks supply most of the required variables for model inputs, but do not provide the horizontal advection term of condensed water. Since surface cloud radar and microwave radiometer observations at ARM sites are single-point measurements, they can provide the amount of condensed water at the location of observation sites, but not a horizontal distribution of condensed water contents. Consequently, observational data for the large-scale advection tendencies of condensed water have not been available to the ARM cloud modeling community based on surface observations alone. This lack of water condensate advection data could cause large uncertainties in SCM simulations. Additionally, to evaluate GCMs' cloud physical parameterizations, we need to compare GCM results with observed cloud water amounts over a scale that is large enough to be comparable to what a GCM grid represents. To this end, the point measurements at ARM surface sites are again not adequate. Therefore, cloud water observations over a large area are needed. The main goal of this project is to retrieve ice water contents over an area of 10 x 10 deg. surrounding the ARM sites by combining surface and satellite observations.
Built on the progress made during previous ARM research, we have conducted retrievals of 3-dimensional ice water content by combining surface radar/radiometer and satellite measurements, and have produced 3-D cloud ice water contents in support of cloud modeling activities. The approach of the study is to expand a (surface) point measurement to a (satellite) area measurement. That is, the study takes advantage of the high quality cloud measurements (particularly cloud radar and microwave radiometer measurements) at the point of the ARM sites. We use the cloud ice water characteristics derived from the point measurement to guide/constrain a satellite retrieval algorithm, then use the satellite algorithm to derive the 3-D cloud ice water distributions within a 10° (latitude) x 10° (longitude) area. During the research period, we have developed, validated and improved our cloud ice water retrievals, and have produced and archived on the ARM website, as a PI product, the 3-D cloud ice water contents using combined satellite high-frequency microwave and surface radar observations for the SGP March 2000 IOP and the TWP-ICE 2006 IOP over a 10 deg. x 10 deg. area centered at the ARM SGP central facility and Darwin sites. We have also worked on validation of the 3-D ice water product against CloudSat data, pursued synergy with visible/infrared cloud ice water retrievals for better results at low ice water conditions, and created a long-term (several years) ice water climatology over the 10 x 10 deg. areas of the ARM SGP and TWP sites, which we then compared with GCMs.

  13. Tropical Oceanic Precipitation Processes over Warm Pool: 2D and 3D Cloud Resolving Model Simulations

    NASA Technical Reports Server (NTRS)

    Tao, W.- K.; Johnson, D.

    1998-01-01

Rainfall is a key link in the hydrologic cycle as well as the primary heat source for the atmosphere. The vertical distribution of convective latent-heat release modulates the large-scale circulations of the tropics. Furthermore, changes in the moisture distribution at middle and upper levels of the troposphere can affect cloud distributions and cloud liquid water and ice contents. How the incoming solar and outgoing longwave radiation respond to these changes in clouds is a major factor in assessing climate change. Present large-scale weather and climate models simulate cloud processes only crudely, reducing confidence in their predictions on both global and regional scales. One of the most promising methods to test physical parameterizations used in General Circulation Models (GCMs) and climate models is to use field observations together with Cloud Resolving Models (CRMs). The CRMs use more sophisticated and physically realistic parameterizations of cloud microphysical processes, and allow for their complex interactions with solar and infrared radiative transfer processes. The CRMs can reasonably well resolve the evolution, structure, and life cycles of individual clouds and cloud systems. The major objective of this paper is to investigate the latent heating, moisture and momentum budgets associated with several convective systems developed during the TOGA COARE IFA - westerly wind burst event (late December, 1992). The tool for this study is the Goddard Cumulus Ensemble (GCE) model, which includes a 3-class ice-phase microphysical scheme. The model domain contains 256 x 256 grid points (using 2 km resolution) in the horizontal and 38 grid points (to a depth of 22 km) in the vertical. The 2D domain has 1024 grid points. The simulations are performed over a 7 day time period.
We will examine (1) the precipitation processes (i.e., condensation/evaporation) and their interaction with the warm pool; (2) the heating and moisture budgets in the convective and stratiform regions; (3) the cloud (upward-downward) mass fluxes in convective and stratiform regions; (4) characteristics of clouds (such as cloud size, updraft intensity and cloud lifetime) and the comparison of clouds with radar observations; and (5) differences and similarities in the organization of convection between the simulated 2D and 3D cloud systems. Preliminary results indicate that there are major differences between the 2D and 3D simulated stratiform rainfall amounts and convective updraft and downdraft mass fluxes.

  14. Analysis on the security of cloud computing

    NASA Astrophysics Data System (ADS)

    He, Zhonglin; He, Yuhua

    2011-02-01

Cloud computing is a new technology arising from the fusion of computer technology and Internet development, and it will lead a revolution in IT and the information field. However, in cloud computing, data and application software are stored at large data centers, and the management of data and services is not completely trustworthy. The resulting security problems are the difficult point in improving the quality of cloud service. This paper briefly introduces the concept of cloud computing. Considering the characteristics of cloud computing, it constructs a security architecture for cloud computing. At the same time, with an eye toward the security threats cloud computing faces, several corresponding strategies are provided from the perspectives of cloud computing users and service providers.

  15. 3-D Deformation Field Of The 2010 El Mayor-Cucapah (Mexico) Earthquake From Matching Before To After Aerial Lidar Point Clouds

    NASA Astrophysics Data System (ADS)

    Hinojosa-Corona, A.; Nissen, E.; Arrowsmith, R.; Krishnan, A. K.; Saripalli, S.; Oskin, M. E.; Arregui, S. M.; Limon, J. F.

    2012-12-01

    The Mw 7.2 El Mayor-Cucapah earthquake (EMCE) of 4 April 2010 generated a ~110 km long, NW-SE trending rupture, with normal and right-lateral slip in the order of 2-3m in the Sierra Cucapah, the northern half, where the surface rupture has the most outstanding expression. Vertical and horizontal surface displacements produced by the EMCE have been addressed separately by other authors with a variety of aerial and satellite remote sensing techniques. Slip variation along fault and post-seismic scarp erosion and diffusion have been estimated in other studies using terrestrial LiDAR (TLS) on segments of the rupture. To complement these other studies, we computed the 3D deformation field by comparing pre- to post-event point clouds from aerial LiDAR surveys. The pre-event LiDAR with lower point density (0.013-0.033 pts m-2) required filtering and post-processing before comparing with the denser (9-18 pts m-2) more accurate post event dataset. The 3-dimensional surface displacement field was determined using an adaptation of the Iterative Closest Point (ICP) algorithm, implemented in the open source Point Cloud Library (PCL). The LiDAR datasets are first split into a grid of windows, and for each one, ICP iteratively converges on the rigid body transformation (comprising a translation and a rotation) that best aligns the pre- to post-event points. Testing on synthetic datasets perturbed with displacements of known magnitude showed that windows with dimensions of 100-200m gave the best results for datasets with these densities. Here we present the deformation field with detailed displacements in segments of the surface rupture where its expression was recognized by ICP from the point cloud matching, mainly the scarcely vegetated Sierra Cucapah with the Borrego and Paso Superior fault segments the most outstanding, where we are able to compare our results with values measured in the field and results from TLS reported in other works. 
Figure caption: Simulated EMCE displacement field for a 2 m right-lateral normal (east block down) slip on the pre-event point cloud along the Borrego fault in the Sierra Cucapah; shaded DEM from the post-event point cloud as backdrop.
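The core of each windowed ICP iteration described in this record is the rigid-body transformation that best aligns two sets of paired points. Below is a minimal sketch of that single step (the closed-form Kabsch/SVD solution), not the PCL implementation the study used, and it assumes correspondences are already known:

```python
import numpy as np

def best_fit_transform(A, B):
    """Rigid transform (R, t) minimizing ||R @ a + t - b|| over paired
    points a in A, b in B (Kabsch/SVD solution)."""
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)            # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # avoid a reflection solution
        Vt[-1] *= -1.0
        R = Vt.T @ U.T
    t = cB - R @ cA
    return R, t
```

Full ICP alternates this step with re-estimating nearest-neighbor correspondences between the pre- and post-event windows until the transform converges; the recovered translation per window is the local 3D displacement.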

  16. Scan Line Based Road Marking Extraction from Mobile LiDAR Point Clouds.

    PubMed

    Yan, Li; Liu, Hua; Tan, Junxiang; Li, Zan; Xie, Hong; Chen, Changjun

    2016-06-17

Mobile Mapping Technology (MMT) is one of the most important 3D spatial data acquisition technologies. State-of-the-art mobile mapping systems, equipped with laser scanners and named Mobile LiDAR Scanning (MLS) systems, have been widely used in a variety of areas, especially in road mapping and road inventory. With the commercialization of Advanced Driving Assistance Systems (ADASs) and self-driving technology, there will be a great demand for lane-level detailed 3D maps, and MLS is the most promising technology to generate such lane-level detailed 3D maps. Road markings and road edges are necessary information in creating such lane-level detailed 3D maps. This paper proposes a scan line based method to extract road markings from mobile LiDAR point clouds in three steps: (1) preprocessing; (2) road points extraction; (3) road markings extraction and refinement. In the preprocessing step, isolated LiDAR points in the air are removed from the LiDAR point clouds and the point clouds are organized into scan lines. In the road points extraction step, seed road points are first extracted by the Height Difference (HD) between trajectory data and the road surface, then full road points are extracted from the point clouds by moving least squares line fitting. In the road markings extraction and refinement step, the intensity values of road points in a scan line are first smoothed by a dynamic window median filter to suppress intensity noise, then road markings are extracted by the Edge Detection and Edge Constraint (EDEC) method, and Fake Road Marking Points (FRMPs) are eliminated from the detected road markings by segment- and dimensionality-feature-based refinement. The performance of the proposed method is evaluated on three data samples, and the experimental results indicate that road points are well extracted from MLS data and road markings are well extracted from road points by the applied method.
A quantitative study shows that the proposed method achieves an average completeness, correctness, and F-measure of 0.96, 0.93, and 0.94, respectively. The time complexity analysis shows that the scan line based road markings extraction method proposed in this paper provides a promising alternative for offline road markings extraction from MLS data.
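The intensity-smoothing step can be sketched as a sliding median over one scan line. This simplification uses a fixed window rather than the paper's dynamic window, and the spike value in the demo is invented:

```python
import numpy as np

def median_smooth(intensity, half_window=3):
    """Sliding-window median along one scan line; windows shrink at the ends."""
    n = len(intensity)
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        out[i] = np.median(intensity[lo:hi])
    return out

# A single-sample intensity spike (value 9.0) is suppressed by the filter,
# while the surrounding road-surface level (1.0) is preserved.
line = np.array([1.0, 1.0, 9.0, 1.0, 1.0])
smoothed = median_smooth(line, half_window=1)
```

A median (rather than mean) filter is the natural choice here because it removes single-return intensity spikes without blurring the sharp intensity edges that the subsequent edge-detection step relies on.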

  17. Speciation and Determination of Low Concentration of Iron in Beer Samples by Cloud Point Extraction

    ERIC Educational Resources Information Center

    Khalafi, Lida; Doolittle, Pamela; Wright, John

    2018-01-01

    A laboratory experiment is described in which students determine the concentration and speciation of iron in beer samples using cloud point extraction and absorbance spectroscopy. The basis of determination is the complexation between iron and 2-(5-bromo-2- pyridylazo)-5-diethylaminophenol (5-Br-PADAP) as a colorimetric reagent in an aqueous…

  18. Investigating Freezing Point Depression and Cirrus Cloud Nucleation Mechanisms Using a Differential Scanning Calorimeter

    ERIC Educational Resources Information Center

    Bodzewski, Kentaro Y.; Caylor, Ryan L.; Comstock, Ashley M.; Hadley, Austin T.; Imholt, Felisha M.; Kirwan, Kory D.; Oyama, Kira S.; Wise, Matthew E.

    2016-01-01

    A differential scanning calorimeter was used to study homogeneous nucleation of ice from micron-sized aqueous ammonium sulfate aerosol particles. It is important to understand the conditions at which these particles nucleate ice because of their connection to cirrus cloud formation. Additionally, the concept of freezing point depression, a topic…

  19. Chloroplast Genome Evolution in Early Diverged Leptosporangiate Ferns

    PubMed Central

    Kim, Hyoung Tae; Chung, Myong Gi; Kim, Ki-Joong

    2014-01-01

In this study, the chloroplast (cp) genome sequences from three early diverged leptosporangiate ferns were completed and analyzed in order to understand the evolution of the genome of the fern lineages. The complete cp genome sequence of Osmunda cinnamomea (Osmundales) was 142,812 base pairs (bp). The cp genome structure was similar to that of eusporangiate ferns. The gene/intron losses that frequently occurred in the cp genome of leptosporangiate ferns were not found in the cp genome of O. cinnamomea. In addition, putative RNA editing sites in the cp genome were rare in O. cinnamomea, even though the sites were frequently predicted to be present in leptosporangiate ferns. The complete cp genome sequence of Diplopterygium glaucum (Gleicheniales) was 151,007 bp and has a 9.7 kb inversion between the trnL-CAA and trnV-GCA genes when compared to O. cinnamomea. Several repeated sequences were detected around the inversion break points. The complete cp genome sequence of Lygodium japonicum (Schizaeales) was 157,142 bp and a deletion of the rpoC1 intron was detected. This intron loss was shared by all of the studied species of the genus Lygodium. The GC contents and the effective numbers of codons (ENCs) in ferns varied significantly when compared to seed plants. The ENC values of the early diverged leptosporangiate ferns showed intermediate levels between eusporangiate and core leptosporangiate ferns. However, our phylogenetic tree based on all of the cp gene sequences clearly indicated that the cp genome similarity between O. cinnamomea (Osmundales) and eusporangiate ferns are symplesiomorphies, rather than synapomorphies. Therefore, our data is in agreement with the view that Osmundales is a distinct early diverged lineage in the leptosporangiate ferns. PMID:24823358

  20. Chloroplast genome evolution in early diverged leptosporangiate ferns.

    PubMed

    Kim, Hyoung Tae; Chung, Myong Gi; Kim, Ki-Joong

    2014-05-01

In this study, the chloroplast (cp) genome sequences from three early diverged leptosporangiate ferns were completed and analyzed in order to understand the evolution of the genome of the fern lineages. The complete cp genome sequence of Osmunda cinnamomea (Osmundales) was 142,812 base pairs (bp). The cp genome structure was similar to that of eusporangiate ferns. The gene/intron losses that frequently occurred in the cp genome of leptosporangiate ferns were not found in the cp genome of O. cinnamomea. In addition, putative RNA editing sites in the cp genome were rare in O. cinnamomea, even though the sites were frequently predicted to be present in leptosporangiate ferns. The complete cp genome sequence of Diplopterygium glaucum (Gleicheniales) was 151,007 bp and has a 9.7 kb inversion between the trnL-CAA and trnV-GCA genes when compared to O. cinnamomea. Several repeated sequences were detected around the inversion break points. The complete cp genome sequence of Lygodium japonicum (Schizaeales) was 157,142 bp and a deletion of the rpoC1 intron was detected. This intron loss was shared by all of the studied species of the genus Lygodium. The GC contents and the effective numbers of codons (ENCs) in ferns varied significantly when compared to seed plants. The ENC values of the early diverged leptosporangiate ferns showed intermediate levels between eusporangiate and core leptosporangiate ferns. However, our phylogenetic tree based on all of the cp gene sequences clearly indicated that the cp genome similarity between O. cinnamomea (Osmundales) and eusporangiate ferns are symplesiomorphies, rather than synapomorphies. Therefore, our data is in agreement with the view that Osmundales is a distinct early diverged lineage in the leptosporangiate ferns.
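The GC-content comparison mentioned in both genome records reduces to a simple base count over the sequence. A minimal sketch (the example fragment is hypothetical, not taken from the fern cp genomes):

```python
def gc_content(seq):
    """Fraction of G and C bases in a nucleotide sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

# Hypothetical fragment for illustration only.
fraction = gc_content("ATGGCCATTGTAATGGGCCGC")
```

For a whole cp genome the same calculation is applied per gene or per region; the effective number of codons (ENC) statistic also reported in the study is a separate codon-usage measure computed from codon frequency tables.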

  1. Accurate 3D point cloud comparison and volumetric change analysis of Terrestrial Laser Scan data in a hard rock coastal cliff environment

    NASA Astrophysics Data System (ADS)

    Earlie, C. S.; Masselink, G.; Russell, P.; Shail, R.; Kingston, K.

    2013-12-01

Our understanding of the evolution of hard rock coastlines is limited due to the episodic nature and 'slow' rate at which changes occur. High-resolution surveying techniques, such as Terrestrial Laser Scanning (TLS), have just begun to be adopted as a method of obtaining detailed point cloud data to monitor topographical changes over short periods of time (weeks to months). However, the difficulties involved in comparing consecutive point cloud data sets in a complex three-dimensional plane, such as occlusion due to surface roughness and positioning of the data capture point as a result of a consistently changing environment (a beach profile), mean that comparing data sets can lead to errors in the region of 10 - 20 cm. Meshing techniques are often used for point cloud data analysis for simple surfaces, but in surfaces such as rocky cliff faces, this technique has been found to be ineffective. Recession rates of hard rock coastlines in the UK are typically determined using aerial photography or airborne LiDAR data, yet the detail of the important changes occurring to the cliff face and toe are missed using such techniques. In this study we apply an algorithm (M3C2 - Multiscale Model to Model Cloud Comparison), initially developed for analysing fluvial morphological change, that directly compares point cloud data sets using surface normals that are consistent with surface roughness and measures the change that occurs along the normal direction (Lague et al., 2013). The surface changes are analysed using a set of user-defined scales based on surface roughness and registration error. Once the correct parameters are defined, the volumetric cliff face changes are calculated by integrating the mean distance between the point clouds. The analysis has been undertaken at two hard rock sites identified for their active erosion located on the UK's south-west peninsula at Porthleven in south-west Cornwall and Godrevy in north Cornwall.
Alongside TLS point cloud data, in-situ measurements of the nearshore wave climate, using a pressure transducer, offshore wave climate from a directional wavebuoy, and rainfall records from nearby weather stations were collected. Combining beach elevation information from the georeferenced point clouds with a continuous time series of wave climate provides an indication of the variation in wave energy delivered to the cliff face. The rates of retreat were found to agree with the existing rates that are currently used in shoreline management. The additional geotechnical detail afforded by applying the M3C2 method to a hard rock environment provides not only a means of obtaining volumetric changes with confidence, but also a clear illustration of the locations of failure on the cliff face. Monthly cliff scans help to narrow down the timings of failure under energetic wave conditions or periods of heavy rainfall. Volumetric changes and sensitive regions to failure established using this method allows us to capture episodic changes to the cliff face at a high resolution (1 - 2 cm) that are otherwise missed using lower resolution techniques typically used for shoreline management, and to understand in greater detail the geotechnical behaviour of hard rock cliffs and determine rates of erosion with greater accuracy.
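The M3C2 idea of measuring change along the local surface normal can be sketched in a heavily simplified form: project each epoch's points (within a cylinder around the normal through a core point) onto the normal, then difference the mean offsets. This is a toy reduction of the Lague et al. (2013) algorithm that omits normal estimation, multiscale selection, and the confidence interval from registration error:

```python
import numpy as np

def distance_along_normal(core_point, normal, cloud_before, cloud_after, radius):
    """Signed change at a core point: mean offset of each cloud's points,
    projected on the surface normal, within a cylinder of given radius."""
    n = normal / np.linalg.norm(normal)

    def mean_offset(cloud):
        rel = cloud - core_point
        along = rel @ n                                   # offset along normal
        radial = np.linalg.norm(rel - np.outer(along, n), axis=1)
        inside = radial <= radius                         # cylinder selection
        return along[inside].mean() if inside.any() else np.nan

    return mean_offset(cloud_after) - mean_offset(cloud_before)
```

Averaging within the cylinder is what makes the comparison robust to surface roughness and point-density differences between scans; integrating these signed distances over the cliff face yields the volumetric change.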

  2. 3D Building Reconstruction by Multiview Images and the Integrated Application with Augmented Reality

    NASA Astrophysics Data System (ADS)

    Hwang, Jin-Tsong; Chu, Ting-Chen

    2016-10-01

This study presents an approach wherein photographs with a high degree of overlap are captured using a digital camera and used to generate three-dimensional (3D) point clouds via feature point extraction and matching. To reconstruct a building model, an unmanned aerial vehicle (UAV) is used to capture photographs from vertical shooting angles above the building. Multiview images are taken from the ground to eliminate the shielding effect on UAV images caused by trees. Point clouds from the UAV and multiview images are generated via Pix4Dmapper. By merging the two sets of point clouds via tie points, the complete building model is reconstructed. The 3D models are reconstructed using AutoCAD 2016 to generate vectors from the point clouds; SketchUp Make 2016 is used to rebuild a complete building model with textures. To apply 3D building models in urban planning and design, a modern approach is to rebuild the digital models; however, replacing the landscape design and building distribution in real time is difficult as the frequency of building replacement increases. One potential solution to these problems is augmented reality (AR). Using Unity3D and Vuforia to design and implement a smartphone application service, a markerless AR view of the building model can be built. This study is aimed at providing technical and design skills related to urban planning, urban design, and building information retrieval using AR.

  3. Combining 3d Volume and Mesh Models for Representing Complicated Heritage Buildings

    NASA Astrophysics Data System (ADS)

    Tsai, F.; Chang, H.; Lin, Y.-W.

    2017-08-01

    This study developed a simple but effective strategy to combine 3D volume and mesh models for representing complicated heritage buildings and structures. The idea is to seamlessly integrate 3D parametric or polyhedral models and mesh-based digital surfaces to generate a hybrid 3D model that can take advantages of both modeling methods. The proposed hybrid model generation framework is separated into three phases. Firstly, after acquiring or generating 3D point clouds of the target, these 3D points are partitioned into different groups. Secondly, a parametric or polyhedral model of each group is generated based on plane and surface fitting algorithms to represent the basic structure of that region. A "bare-bones" model of the target can subsequently be constructed by connecting all 3D volume element models. In the third phase, the constructed bare-bones model is used as a mask to remove points enclosed by the bare-bones model from the original point clouds. The remaining points are then connected to form 3D surface mesh patches. The boundary points of each surface patch are identified and these boundary points are projected onto the surfaces of the bare-bones model. Finally, new meshes are created to connect the projected points and original mesh boundaries to integrate the mesh surfaces with the 3D volume model. The proposed method was applied to an open-source point cloud data set and point clouds of a local historical structure. Preliminary results indicated that the reconstructed hybrid models using the proposed method can retain both fundamental 3D volume characteristics and accurate geometric appearance with fine details. The reconstructed hybrid models can also be used to represent targets in different levels of detail according to user and system requirements in different applications.
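The plane-fitting step used to build each volume element can be sketched via SVD of the centered point group: the right singular vector with the smallest singular value is the least-squares plane normal. This is a generic sketch of the standard technique, not the authors' code:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a 3D point group: returns the centroid
    and the unit normal (right singular vector associated with the
    smallest singular value of the centered points)."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid, full_matrices=False)
    return centroid, Vt[-1]
```

Each partitioned point group yields one such plane; intersecting and connecting the fitted planes produces the polyhedral "bare-bones" model, while residual points off the planes remain for the mesh patches.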

  4. Supervised Outlier Detection in Large-Scale Mvs Point Clouds for 3d City Modeling Applications

    NASA Astrophysics Data System (ADS)

    Stucker, C.; Richard, A.; Wegner, J. D.; Schindler, K.

    2018-05-01

We propose to use a discriminative classifier for outlier detection in large-scale point clouds of cities generated via multi-view stereo (MVS) from densely acquired images. What makes outlier removal hard are varying distributions of inliers and outliers across a scene. Heuristic outlier removal using a specific feature that encodes point distribution often delivers unsatisfying results. Although most outliers can be identified correctly (high recall), many inliers are erroneously removed (low precision), too. This aggravates object 3D reconstruction due to missing data. We thus propose to discriminatively learn class-specific distributions directly from the data to achieve high precision. We apply a standard Random Forest classifier that infers a binary label (inlier or outlier) for each 3D point in the raw, unfiltered point cloud and test two approaches for training. In the first, non-semantic approach, features are extracted without considering the semantic interpretation of the 3D points. The trained model approximates the average distribution of inliers and outliers across all semantic classes. Second, semantic interpretation is incorporated into the learning process, i.e. we train separate inlier-outlier classifiers per semantic class (building facades, roof, ground, vegetation, fields, and water). Performance of learned filtering is evaluated on several large SfM point clouds of cities. We find that the results confirm our underlying assumption that discriminatively learning inlier-outlier distributions does improve precision over global heuristics by up to ≈ 12 percentage points. Moreover, semantically informed filtering that models class-specific distributions further improves precision by up to ≈ 10 percentage points, being able to remove very isolated building, roof, and water points while preserving inliers on building facades and vegetation.
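The precision/recall trade-off that motivates the learned filter is computed from the usual confusion-matrix counts. A minimal sketch with invented boolean labels (treating "outlier" as the positive class, as in the record's discussion of high recall versus low precision):

```python
def precision_recall(predicted_outlier, true_outlier):
    """Precision and recall of a binary outlier detector."""
    pairs = list(zip(predicted_outlier, true_outlier))
    tp = sum(1 for p, t in pairs if p and t)          # outliers caught
    fp = sum(1 for p, t in pairs if p and not t)      # inliers wrongly removed
    fn = sum(1 for p, t in pairs if not p and t)      # outliers missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

A heuristic filter with high recall but low precision has large `fp`: it catches most true outliers while also deleting many inliers, which is exactly the missing-data problem the learned per-class classifiers are meant to reduce.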

  5. Why is the simulated climatology of tropical cyclones so sensitive to the choice of cumulus parameterization scheme in the WRF model?

    NASA Astrophysics Data System (ADS)

    Zhang, Chunxi; Wang, Yuqing

    2018-01-01

    The sensitivity of simulated tropical cyclones (TCs) to the choice of cumulus parameterization (CP) scheme in the advanced Weather Research and Forecasting Model (WRF-ARW) version 3.5 is analyzed based on ten seasonal simulations with 20-km horizontal grid spacing over the western North Pacific. Results show that the simulated frequency and intensity of TCs are very sensitive to the choice of the CP scheme. The sensitivity can be explained well by the difference in the low-level circulation in a height and sorted moisture space. By transporting moist static energy from dry to moist region, the low-level circulation is important to convective self-aggregation which is believed to be related to genesis of TC-like vortices (TCLVs) and TCs in idealized settings. The radiative and evaporative cooling associated with low-level clouds and shallow convection in dry regions is found to play a crucial role in driving the moisture-sorted low-level circulation. With shallow convection turned off in a CP scheme, relatively strong precipitation occurs frequently in dry regions. In this case, the diabatic cooling can still drive the low-level circulation but its strength is reduced and thus TCLV/TC genesis is suppressed. The inclusion of the cumulus momentum transport (CMT) in a CP scheme can considerably suppress genesis of TCLVs/TCs, while changes in the moisture-sorted low-level circulation and horizontal distribution of precipitation are trivial, indicating that the CMT modulates the TCLVs/TCs activities in the model by mechanisms other than the horizontal transport of moist static energy.

  6. Lost in Virtual Reality: Pathfinding Algorithms Detect Rock Fractures and Contacts in Point Clouds

    NASA Astrophysics Data System (ADS)

    Thiele, S.; Grose, L.; Micklethwaite, S.

    2016-12-01

    UAV-based photogrammetric and LiDAR techniques provide high resolution 3D point clouds and ortho-rectified photomontages that can capture surface geology in outstanding detail over wide areas. Automated and semi-automated methods are vital to extract full value from these data in practical time periods, though the nuances of geological structures and materials (natural variability in colour and geometry, soft and hard linkage, shadows and multiscale properties) make this a challenging task. We present a novel method for computer assisted trace detection in dense point clouds, using a lowest cost path solver to "follow" fracture traces and lithological contacts between user defined end points. This is achieved by defining a local neighbourhood network where each point in the cloud is linked to its neighbours, and then using a least-cost path algorithm to search this network and estimate the trace of the fracture or contact. A variety of different algorithms can then be applied to calculate the best fit plane, produce a fracture network, or map properties such as roughness, curvature and fracture intensity. Our prototype of this method (Fig. 1) suggests the technique is feasible and remarkably good at following traces under non-optimal conditions such as variable-shadow, partial occlusion and complex fracturing. Furthermore, if a fracture is initially mapped incorrectly, the user can easily provide further guidance by defining intermediate waypoints. Future development will include optimization of the algorithm to perform well on large point clouds and modifications that permit the detection of features such as step-overs. We also plan on implementing this approach in an interactive graphical user environment.
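The least-cost path search at the heart of the trace-following method can be sketched with Dijkstra's algorithm over the point-neighbourhood graph. The graph and edge costs below are hypothetical stand-ins; in the described method the costs would encode how "fracture-like" the step between two neighbouring points is:

```python
import heapq

def least_cost_path(graph, start, goal):
    """Dijkstra search; graph[node] is a list of (neighbour, edge_cost)
    pairs. Assumes the goal is reachable from the start."""
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        d, u = heapq.heappop(queue)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(queue, (nd, v))
    path, node = [goal], goal             # walk predecessors back to start
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

With user-picked end points as `start` and `goal`, the returned node sequence is the estimated fracture trace; intermediate waypoints simply split the search into consecutive start-goal segments.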

  7. Organization of the Tropical Convective Cloud Population by Humidity and the Critical Transition to Heavy Precipitation

    NASA Astrophysics Data System (ADS)

    Igel, M.

    2015-12-01

    The tropical atmosphere exhibits an abrupt statistical switch between non-raining and heavily raining states as column moisture increases, across a wide range of length scales. Deep convection occurs at values of column humidity above the transition point and induces drying of moist columns. Using a 1-km resolution, large-domain cloud-resolving model run in radiative-convective equilibrium (RCE), we make clear for the first time how the entire tropical convective cloud population is affected by, and feeds back on, the pickup in heavy precipitation. Shallow convection can act to dry the low levels through weak precipitation or vertical redistribution of moisture, or to moisten toward a transition to deep convection. It is shown that not only can deep convection dehydrate the entire column, it can also dry just the lower layer through intense rain. In the latter case, deep stratiform cloud then forms to dry the upper layer through rain with anomalously high rates for its value of column humidity, until both the total column moisture falls below the critical transition point and the upper levels are cloud free. Thus, all major tropical cloud types are shown to respond strongly to the same critical phase-transition point. This mutual response represents a potentially strong organizational mechanism for convection, and the frequency of, and logical rules determining, physical evolutions between these convective regimes will be discussed. The precise value of total column moisture at which the transition to heavy precipitation occurs is shown to result from two independent thresholds in lower-layer and upper-layer integrated humidity.
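
    The precipitation "pickup" above a critical column humidity is conventionally diagnosed by binning rain rate on column water vapour. A minimal sketch of that diagnostic (the function name and bin rule are our own assumptions, not the study's analysis code):

```python
import numpy as np

def precip_pickup(cwv, precip, bin_edges):
    """Conditional mean precipitation binned by column water vapour (CWV);
    a sharp rise above a critical CWV marks the transition to heavy rain."""
    cwv = np.asarray(cwv, dtype=float)
    precip = np.asarray(precip, dtype=float)
    idx = np.digitize(cwv, bin_edges)
    means = []
    for b in range(1, len(bin_edges)):
        in_bin = idx == b
        means.append(precip[in_bin].mean() if in_bin.any() else np.nan)
    return np.array(means)
```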

  8. High-frequency electroacupuncture versus carprofen in an incisional pain model in rats

    PubMed Central

    Teixeira, F.M.; Castro, L.L.; Ferreira, R.T.; Pires, P.A.; Vanderlinde, F.A.; Medeiros, M.A.

    2012-01-01

    The objective of the present study was to compare the effect of electroacupuncture (EA) and carprofen (CP) on postoperative incisional pain using the plantar incision (PI) model in rats. A 1-cm longitudinal incision was made through skin, fascia and muscles of a hind paw of male Wistar rats and the development of mechanical and thermal hypersensitivity was determined over 4 days using the von Frey and Hargreaves methods, respectively. Based on the experimental treatments received on the third postoperative day, the animals were divided into the following groups: PI+CP (CP, 2 mg/kg, po); PI+EAST36 (100-Hz EA applied bilaterally at the Zusanli point (ST36)); PI+EANP (EA applied to a non-acupoint region); PI+IMMO (immobilization only); PI (vehicle). In the von Frey test, the PI+EAST36 group had higher withdrawal force thresholds in response to mechanical stimuli than the PI, PI+IMMO and PI+EANP groups at several times studied. Furthermore, the PI+EAST36 group showed paw withdrawal thresholds in response to mechanical stimuli that were similar to those of the PI+CP group. In the Hargreaves test, all groups had latencies higher than those observed with PI. The PI+EAST36 group was similar to the PI+IMMO, PI+EANP and PI+CP groups. We conclude that 100-Hz EA at the ST36 point, but not at non-acupoints, can reduce mechanical nociception in the rat model of incisional pain, and its effectiveness is comparable to that of carprofen. PMID:22911345

  9. Antifreeze glycoproteins from antarctic notothenioid fishes fail to protect the rat cardiac explant during hypothermic and freezing preservation.

    PubMed

    Wang, T; Zhu, Q; Yang, X; Layne, J R; Devries, A L

    1994-04-01

    The antarctic notothenioid fishes avoid freezing through the action of circulating antifreeze glycoproteins (AFGPs). This study investigated whether AFGPs could serve as cryoprotectants for the isolated rat heart under three different storage conditions. (1) Hearts were flushed with 15 mg AFGP/ml cardioplegic solution (CP) and stored for 9 h at 0 degrees C. This AFGP concentration has been reported to protect pig oocytes during hypothermic storage. (2) Hearts were flushed with 10 mg AFGP/ml CP-14 and stored frozen at -1.4 degrees C for 3 h. At this concentration the AFGPs significantly reduce the solution freezing point and also change the crystal morphology from dendritic to spicular. (3) Hearts were flushed with 10 micrograms AFGP/ml CP-15 and stored frozen at -1.4 degrees C for 5 h. At this low concentration the AFGPs have a strong inhibitory effect on ice recrystallization, but have little effect on the freezing point and less apparent effect on the crystal habit. After hypothermic or freezing storage, the functional viability was assessed by determining cardiac output (CO) during working reperfusion. No difference in CO was found between AFGP-treated and untreated hearts after 9 h of 0 degree C storage. CO in hearts frozen in CP-14 without AFGPs recovered to 50% of the freshly perfused control hearts. Hearts frozen in the presence of high concentrations of AFGPs (10 mg/ml CP-14) failed to beat upon thawing and reperfusion, although their tissue ice content was less than that found in hearts without AFGP treatment.(ABSTRACT TRUNCATED AT 250 WORDS)

  10. A Lidar Point Cloud Based Procedure for Vertical Canopy Structure Analysis And 3D Single Tree Modelling in Forest

    PubMed Central

    Wang, Yunsheng; Weinacker, Holger; Koch, Barbara

    2008-01-01

    A procedure for both vertical canopy structure analysis and 3D single tree modelling based on Lidar point clouds is presented in this paper. The whole research area is segmented into small study cells by a raster net. For each cell, a normalized point cloud whose point heights represent the absolute heights of the ground objects is generated from the original Lidar raw point cloud. The main tree canopy layers and their height ranges are detected according to a statistical analysis of the height distribution probability of the normalized raw points. For the 3D modelling, individual trees are detected and delineated not only from the top canopy layer but also from the sub-canopy layer. The normalized points are resampled into a local voxel space. A series of horizontal 2D projection images at different height levels are then generated with respect to the voxel space. Tree crown regions are detected from the projection images. Individual trees are then extracted by means of a pre-order forest traversal through all the tree crown regions at the different height levels. Finally, 3D tree crown models of the extracted individual trees are reconstructed. With further analysis of the 3D models of individual tree crowns, important parameters such as crown height range, crown volume and crown contours at different height levels can be derived.
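
    The layer-detection step, i.e. finding height ranges where the normalized height distribution concentrates, might look like the following sketch. The density-threshold rule here is an illustrative stand-in for the paper's statistical analysis, and all names are hypothetical:

```python
import numpy as np

def canopy_layers(heights, bin_width=1.0, min_frac=0.05):
    """Detect candidate canopy layers as contiguous height ranges whose
    histogram bins each hold at least `min_frac` of the points."""
    h = np.asarray(heights, dtype=float)
    edges = np.arange(0.0, h.max() + bin_width, bin_width)
    counts, edges = np.histogram(h, bins=edges)
    dense = counts / h.size >= min_frac
    layers, start = [], None
    for i, d in enumerate(dense):
        if d and start is None:
            start = i                      # layer begins
        elif not d and start is not None:
            layers.append((edges[start], edges[i]))  # layer ends
            start = None
    if start is not None:
        layers.append((edges[start], edges[-1]))
    return layers
```

    A bimodal height distribution then yields two layers: an understorey and the top canopy, which can be processed separately as the abstract describes.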

  11. Structure Line Detection from LIDAR Point Clouds Using Topological Elevation Analysis

    NASA Astrophysics Data System (ADS)

    Lo, C. Y.; Chen, L. C.

    2012-07-01

    Airborne LIDAR point clouds, which provide considerable numbers of points on object surfaces, are essential to building modeling. In the last two decades, studies have developed different approaches to identify structure lines using two main approaches, data-driven and model-driven. These studies have shown that automatic modeling processes depend on certain considerations, such as the thresholds used, initial values, designed formulas, and predefined cues. Following the development of laser scanning systems, scanning rates have increased and can provide point clouds with higher point density. Therefore, this study proposes using topological elevation analysis (TEA) to detect structure lines instead of threshold-dependent concepts and predefined constraints. This analysis contains two parts: data pre-processing and structure line detection. To preserve the original elevation information, a pseudo-grid for generating digital surface models is produced during the first part. The highest point in each grid cell is set as the elevation value, and its original three-dimensional position is preserved. In the second part, using TEA, the structure lines are identified based on the topology of local elevation changes in two directions. Because structure lines exhibit certain geometric properties, their locations have small relief in the radial direction and steep elevation changes in the circular direction. Following the proposed approach, TEA can be used to determine 3D line information without selecting thresholds. For validation, the TEA results are compared with those of a region growing approach. The results indicate that the proposed method can produce structure lines using dense point clouds.
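
    The pseudo-grid construction described in the pre-processing part, keeping the highest point per cell while preserving its original 3D position, can be sketched as follows (function and variable names are our own):

```python
import numpy as np

def pseudo_grid(points, cell=1.0):
    """Rasterize a point cloud into a pseudo-grid: each cell stores the
    index of its highest point, so the original 3D position survives."""
    pts = np.asarray(points, dtype=float)
    ij = np.floor(pts[:, :2] / cell).astype(int)
    ij -= ij.min(axis=0)                     # shift indices to start at 0
    nrow, ncol = ij.max(axis=0) + 1
    best = -np.ones((nrow, ncol), dtype=int)  # -1 marks an empty cell
    for k, (i, j) in enumerate(ij):
        if best[i, j] < 0 or pts[k, 2] > pts[best[i, j], 2]:
            best[i, j] = k
    return best
```

    Storing point indices rather than interpolated heights is what lets the later TEA step recover exact 3D line information without resampling artifacts.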

  12. CP violation in h → ττ and LFV h → μτ

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayreter, Alper; He, Xiao-Gang; Valencia, German

    The CMS Collaboration has reported a possible lepton flavor violating (LFV) signal in h → μτ. Whereas this does not happen in the standard model (SM), we point out that new physics responsible for this type of decay would, in general, also produce charge-parity (CP) violation in h → ττ. We estimate the size of this effect in a model-independent manner and find that a large asymmetry, of order 25%, is allowed by current constraints.

  13. CP violation in h → ττ and LFV h → μτ

    DOE PAGES

    Hayreter, Alper; He, Xiao-Gang; Valencia, German

    2016-06-30

    The CMS Collaboration has reported a possible lepton flavor violating (LFV) signal in h → μτ. Whereas this does not happen in the standard model (SM), we point out that new physics responsible for this type of decay would, in general, also produce charge-parity (CP) violation in h → ττ. We estimate the size of this effect in a model-independent manner and find that a large asymmetry, of order 25%, is allowed by current constraints.

  14. Contextual Classification of Point Cloud Data by Exploiting Individual 3D Neighbourhoods

    NASA Astrophysics Data System (ADS)

    Weinmann, M.; Schmidt, A.; Mallet, C.; Hinz, S.; Rottensteiner, F.; Jutzi, B.

    2015-03-01

    The fully automated analysis of 3D point clouds is of great importance in photogrammetry, remote sensing and computer vision. For reliably extracting objects such as buildings, road inventory or vegetation, many approaches rely on the results of a point cloud classification, where each 3D point is assigned a respective semantic class label. Such an assignment, in turn, typically involves statistical methods for feature extraction and machine learning. Whereas the different components in the processing workflow have been extensively, but separately, investigated in recent years, their connection by sharing the results of crucial tasks across all components has not yet been addressed. This connection not only encapsulates the interrelated issues of neighborhood selection and feature extraction, but also the issue of how to involve spatial context in the classification step. In this paper, we present a novel and generic approach for 3D scene analysis which relies on (i) individually optimized 3D neighborhoods for (ii) the extraction of distinctive geometric features and (iii) the contextual classification of point cloud data. For a labeled benchmark dataset, we demonstrate the beneficial impact of involving contextual information in the classification process, and that using individual 3D neighborhoods of optimal size significantly increases the quality of the results for both pointwise and contextual classification.
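
    Individually optimized neighborhoods are commonly chosen by minimizing the eigenentropy of the local structure tensor over candidate sizes. The following is a simplified sketch of that idea, not the authors' exact procedure, with hypothetical names:

```python
import numpy as np

def eigenentropy(neighbors):
    """Shannon entropy of the normalized 3D structure-tensor eigenvalues:
    low entropy means the local geometry is well ordered (line or plane)."""
    cov = np.cov(np.asarray(neighbors, dtype=float).T)
    ev = np.sort(np.linalg.eigvalsh(cov))[::-1]
    ev = np.clip(ev, 1e-12, None)            # guard against log(0)
    p = ev / ev.sum()
    return float(-(p * np.log(p)).sum())

def optimal_k(point_idx, points, ks=(10, 20, 50, 100)):
    """Pick the neighborhood size k that minimizes eigenentropy,
    i.e. the scale at which the local structure is most distinct."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts - pts[point_idx], axis=1)
    order = np.argsort(d)                    # nearest neighbours first
    return min(ks, key=lambda k: eigenentropy(pts[order[: k + 1]]))
```

    Features extracted at the per-point optimal scale are then more distinctive than those from a single fixed neighborhood size.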

  15. Feature relevance assessment for the semantic interpretation of 3D point cloud data

    NASA Astrophysics Data System (ADS)

    Weinmann, M.; Jutzi, B.; Mallet, C.

    2013-10-01

    The automatic analysis of large 3D point clouds represents a crucial task in photogrammetry, remote sensing and computer vision. In this paper, we propose a new methodology for the semantic interpretation of such point clouds which involves feature relevance assessment in order to reduce both processing time and memory consumption. Given a standard benchmark dataset with 1.3 million 3D points, we first extract a set of 21 geometric 3D and 2D features. Subsequently, we apply a classifier-independent ranking procedure which involves a general relevance metric in order to derive compact and robust subsets of versatile features which are generally applicable for a large variety of subsequent tasks. This metric is based on 7 different feature selection strategies and thus addresses different intrinsic properties of the given data. For the example of semantically interpreting 3D point cloud data, we demonstrate the great potential of smaller subsets consisting of only the most relevant features with 4 different state-of-the-art classifiers. The results reveal that, instead of including as many features as possible in order to compensate for lack of knowledge, a crucial task such as scene interpretation can be carried out with only few versatile features and even improved accuracy.
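
    A classifier-independent ranking that aggregates several cheap filter strategies can be sketched as follows. This toy version combines only two strategies (Fisher score and absolute point-biserial correlation) by mean rank, rather than the paper's seven, and every name is hypothetical:

```python
import numpy as np

def fisher_score(X, y):
    """Between-class vs within-class variance per feature (two classes)."""
    X0, X1 = X[y == 0], X[y == 1]
    num = (X0.mean(axis=0) - X1.mean(axis=0)) ** 2
    den = X0.var(axis=0) + X1.var(axis=0) + 1e-12
    return num / den

def mean_rank_selection(X, y, n_keep):
    """Aggregate two filter rankings by mean rank and keep the top
    `n_keep` feature indices (rank 0 = most relevant)."""
    scores = [
        fisher_score(X, y),
        np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])]),
    ]
    ranks = np.mean(
        [np.argsort(np.argsort(-np.asarray(s))) for s in scores], axis=0
    )
    return np.argsort(ranks)[:n_keep]
```

    Because the ranking never consults a classifier, the selected subset can be reused with any downstream learner, which is the point of a classifier-independent procedure.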

  16. Visualization of the Construction of Ancient Roman Buildings in Ostia Using Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Hori, Y.; Ogawa, T.

    2017-02-01

    The implementation of laser scanning in the field of archaeology provides us with an entirely new dimension in research and surveying. It allows us to digitally recreate individual objects, or entire cities, using millions of three-dimensional points grouped together in what are referred to as "point clouds". The visualization of the point cloud data, which can be used in the final report by archaeologists and architects, is usually produced as a JPG or TIFF file. Beyond visualization, the re-examination of older data and new surveys of Roman construction, applying remote-sensing technology for precise and detailed measurements, afford new information that may lead to revised drawings of ancient buildings, which had previously been adduced as evidence without any consideration of their degree of accuracy, and can ultimately support new research on ancient buildings. We used laser scanners in the field because of their speed, comprehensive coverage, accuracy and flexibility of data manipulation. We therefore "skipped" much of the post-processing and focused on the images created from the metadata, simply aligned using a tool that extends an automatic feature-matching algorithm and a popular renderer that can produce graphic results.

  17. Satellite Articulation Characterization from an Image Trajectory Matrix Using Optimization

    NASA Astrophysics Data System (ADS)

    Curtis, D. H.; Cobb, R. G.

    Autonomous on-orbit satellite servicing and inspection benefits from an inspector satellite that can autonomously gain as much information as possible about the primary satellite. This includes performance of articulated objects such as solar arrays, antennas, and sensors. This paper presents a method of characterizing the articulation of a satellite using resolved monocular imagery. A simulated point cloud representing a nominal satellite with articulating solar panels and a complex articulating appendage is developed and projected to the image coordinates that would be seen from an inspector following a given inspection route. A method is developed to analyze the resulting image trajectory matrix. The developed method takes advantage of the fact that the route of the inspector satellite is known to assist in the segmentation of the points into different rigid bodies, the creation of the 3D point cloud, and the identification of the articulation parameters. Once the point cloud and the articulation parameters are calculated, they can be compared to the known truth. The error in the calculated point cloud is determined as well as the difference between the true workspace of the satellite and the calculated workspace. These metrics can be used to compare the quality of various inspection routes for characterizing the satellite and its articulation.

  18. Voxel- and Graph-Based Point Cloud Segmentation of 3D Scenes Using Perceptual Grouping Laws

    NASA Astrophysics Data System (ADS)

    Xu, Y.; Hoegner, L.; Tuttas, S.; Stilla, U.

    2017-05-01

    Segmentation is the fundamental step for recognizing and extracting objects from point clouds of 3D scenes. In this paper, we present a strategy for point cloud segmentation using a voxel structure and graph-based clustering with perceptual grouping laws, which allows a learning-free and completely automatic, but parametric, solution for segmenting 3D point clouds. Specifically, two segmentation methods utilizing voxel and supervoxel structures are reported and tested. The voxel-based data structure increases the efficiency and robustness of the segmentation process, suppressing the negative effects of noise, outliers, and uneven point densities. The clustering of voxels and supervoxels is carried out using graph theory on the basis of local contextual information, whereas conventional clustering algorithms commonly use merely pairwise information. By the use of perceptual grouping laws, our method conducts the segmentation in a purely geometric way, avoiding the use of RGB color and intensity information, so that it can be applied to more general scenarios. Experiments using different datasets have demonstrated that our proposed methods achieve good results, especially for complex scenes and nonplanar object surfaces. Quantitative comparisons between our methods and other representative segmentation methods also confirm the effectiveness and efficiency of our proposals.
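
    The voxel data structure underlying both methods amounts to hashing points into occupied cells, after which clustering operates on cells instead of raw points. A minimal sketch with hypothetical names:

```python
import numpy as np

def voxelize(points, size=0.5):
    """Group points into occupied voxels; returns {voxel index: point ids}.
    Working on voxels instead of raw points damps noise, outliers and
    uneven point densities, since each cell is treated as one unit."""
    pts = np.asarray(points, dtype=float)
    keys = np.floor(pts / size).astype(int)
    voxels = {}
    for i, key in enumerate(map(tuple, keys)):
        voxels.setdefault(key, []).append(i)
    return voxels
```

    A graph is then built over occupied voxels (edges between adjacent cells), and grouping laws such as proximity and continuity decide which edges to cut.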

  19. Hierarchical extraction of urban objects from mobile laser scanning data

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Dong, Zhen; Zhao, Gang; Dai, Wenxia

    2015-01-01

    Point clouds collected in urban scenes contain a huge number of points (e.g., billions), numerous objects with significant size variability, complex and incomplete structures, and variable point densities, raising great challenges for the automated extraction of urban objects in the field of photogrammetry, computer vision, and robotics. This paper addresses these challenges by proposing an automated method to extract urban objects robustly and efficiently. The proposed method generates multi-scale supervoxels from 3D point clouds using the point attributes (e.g., colors, intensities) and spatial distances between points, and then segments the supervoxels rather than individual points by combining graph based segmentation with multiple cues (e.g., principal direction, colors) of the supervoxels. The proposed method defines a set of rules for merging segments into meaningful units according to types of urban objects and forms the semantic knowledge of urban objects for the classification of objects. Finally, the proposed method extracts and classifies urban objects in a hierarchical order ranked by the saliency of the segments. Experiments show that the proposed method is efficient and robust for extracting buildings, streetlamps, trees, telegraph poles, traffic signs, cars, and enclosures from mobile laser scanning (MLS) point clouds, with an overall accuracy of 92.3%.

  20. A New Approach for Inspection of Selected Geometric Parameters of a Railway Track Using Image-Based Point Clouds

    PubMed Central

    Sawicki, Piotr

    2018-01-01

    The paper presents the results of testing a proposed image-based point clouds measuring method for geometric parameters determination of a railway track. The study was performed based on a configuration of digital images and reference control network. A DSLR (digital Single-Lens-Reflex) Nikon D5100 camera was used to acquire six digital images of the tested section of railway tracks. The dense point clouds and the 3D mesh model were generated with the use of two software systems, RealityCapture and PhotoScan, which have implemented different matching and 3D object reconstruction techniques: Multi-View Stereo and Semi-Global Matching, respectively. The study found that both applications could generate appropriate 3D models. Final meshes of 3D models were filtered with the MeshLab software. The CloudCompare application was used to determine the track gauge and cant for defined cross-sections, and the results obtained from point clouds by dense image matching techniques were compared with results of direct geodetic measurements. The obtained RMS difference in the horizontal (gauge) and vertical (cant) plane was RMS∆ < 0.45 mm. The achieved accuracy meets the accuracy condition of measurements and inspection of the rail tracks (error m < 1 mm), specified in the Polish branch railway instruction Id-14 (D-75) and the European technical norm EN 13848-4:2011. PMID:29509679

  1. A New Approach for Inspection of Selected Geometric Parameters of a Railway Track Using Image-Based Point Clouds.

    PubMed

    Gabara, Grzegorz; Sawicki, Piotr

    2018-03-06

    The paper presents the results of testing a proposed image-based point clouds measuring method for geometric parameters determination of a railway track. The study was performed based on a configuration of digital images and reference control network. A DSLR (digital Single-Lens-Reflex) Nikon D5100 camera was used to acquire six digital images of the tested section of railway tracks. The dense point clouds and the 3D mesh model were generated with the use of two software systems, RealityCapture and PhotoScan, which have implemented different matching and 3D object reconstruction techniques: Multi-View Stereo and Semi-Global Matching, respectively. The study found that both applications could generate appropriate 3D models. Final meshes of 3D models were filtered with the MeshLab software. The CloudCompare application was used to determine the track gauge and cant for defined cross-sections, and the results obtained from point clouds by dense image matching techniques were compared with results of direct geodetic measurements. The obtained RMS difference in the horizontal (gauge) and vertical (cant) plane was RMS∆ < 0.45 mm. The achieved accuracy meets the accuracy condition of measurements and inspection of the rail tracks (error m < 1 mm), specified in the Polish branch railway instruction Id-14 (D-75) and the European technical norm EN 13848-4:2011.

  2. Object recognition and localization from 3D point clouds by maximum-likelihood estimation

    NASA Astrophysics Data System (ADS)

    Dantanarayana, Harshana G.; Huntley, Jonathan M.

    2017-08-01

    We present an algorithm based on maximum-likelihood analysis for the automated recognition of objects, and estimation of their pose, from 3D point clouds. Surfaces segmented from depth images are used as the features, unlike `interest point'-based algorithms which normally discard such data. Compared to the 6D Hough transform, it has negligible memory requirements, and is computationally efficient compared to iterative closest point algorithms. The same method is applicable to both the initial recognition/pose estimation problem as well as subsequent pose refinement through appropriate choice of the dispersion of the probability density functions. This single unified approach therefore avoids the usual requirement for different algorithms for these two tasks. In addition to the theoretical description, a simple 2 degrees of freedom (d.f.) example is given, followed by a full 6 d.f. analysis of 3D point cloud data from a cluttered scene acquired by a projected fringe-based scanner, which demonstrated an RMS alignment error as low as 0.3 mm.
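
    For known correspondences and isotropic Gaussian noise, the maximum-likelihood rigid pose reduces to the classical Kabsch/least-squares solution. The paper's formulation generalizes well beyond this (surface features, unknown correspondences, tunable dispersions), but the closed-form core can be sketched as:

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q,
    i.e. q_i ≈ R @ p_i + t. This is the Gaussian-noise maximum-likelihood
    estimate when correspondences are known."""
    P = np.asarray(P, dtype=float)
    Q = np.asarray(Q, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```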

  3. Cryopreservation: Evolution of Molecular Based Strategies.

    PubMed

    Baust, John M; Corwin, William; Snyder, Kristi K; Van Buskirk, Robert; Baust, John G

    2016-01-01

    Cryopreservation (CP) is an enabling process providing for on-demand access to biological material (cells and tissues) which serve as a starting, intermediate or even final product. While a critical tool, CP protocols, approaches and technologies have evolved little over the last several decades. A lack of conversion of discoveries from the CP sciences into mainstream utilization has resulted in a bottleneck in technological progression in areas such as stem cell research and cell therapy. While the adoption has been slow, discoveries including molecular control and buffering of cell stress response to CP as well as the development of new devices for improved sample freezing and thawing are providing for improved CP from both the processing and sample quality perspectives. Numerous studies have described the impact, mechanisms and points of control of cryopreservation-induced delayed-onset cell death (CIDOCD). In an effort to limit CIDOCD, efforts have focused on CP agent and freeze media formulation to provide a solution path and have yielded improvements in survival over traditional approaches. Importantly, each of these areas, new technologies and cell stress modulation, both individually and in combination, are now providing a new foundation to accelerate new research, technology and product development for which CP serves as an integral component. This chapter provides an overview of the molecular stress responses of cells to cryopreservation, the impact of the hypothermic and cell death continuums and the targeted modulation of common and/or cell specific responses to CP in providing a path to improving cell quality.

  4. Toroidal magnetized iron neutrino detector for a neutrino factory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bross, A.; Wands, R.; Bayes, R.

    2013-08-01

    A neutrino factory has unparalleled physics reach for the discovery and measurement of CP violation in the neutrino sector. A far detector for a neutrino factory must have good charge identification with excellent background rejection and a large mass. An elegant solution is to construct a magnetized iron neutrino detector (MIND) along the lines of MINOS, where iron plates provide a toroidal magnetic field and scintillator planes provide 3D space points. In this report, the current status of a simulation of a toroidal MIND for a neutrino factory is discussed in light of the recent measurements of large θ13. The response and performance using the 10 GeV neutrino factory configuration are presented. It is shown that this setup has equivalent δCP reach to a MIND with a dipole field and is sensitive to the discovery of CP violation over 85% of the values of δCP.

  5. Neuropsychological profiles of children with cerebral palsy.

    PubMed

    Stadskleiv, Kristine; Jahnsen, Reidun; Andersen, Guro L; von Tetzchner, Stephen

    2018-02-01

    To explore factors contributing to variability in cognitive functioning in children with cerebral palsy (CP). A geographical cohort of 70 children with CP was assessed with tests of language comprehension, visual-spatial reasoning, attention, working memory, memory, and executive functioning. Mean age was 9;9 years (range 5;1-17;7), 54.3% were girls, and 50.0% had hemiplegic, 25.7% diplegic, 12.9% quadriplegic, and 11.4% dyskinetic CP. For the participants with severe motor impairments, assessments were adapted for gaze pointing. A cognitive quotient (CQ) was computed. Mean CQ was 78.5 (range 19-123). Gross motor functioning, epilepsy, and type of brain injury explained 35.5% of the variance in CQ (F = 10.643, p = .000). Twenty-four percent had an intellectual disability, most of whom were children with quadriplegic CP. Verbal comprehension and perceptual reasoning scores differed only for the 21% with an uneven profile, of whom two-thirds had challenges with perceptual reasoning.

  6. On the Nature of CP Pup

    NASA Technical Reports Server (NTRS)

    Mason, E.; Orio, M.; Mukai, K.; Bianchini, A.; deMartino, D.; diMille, F.; Williams, R. E.

    2013-01-01

    We present new X-ray and optical spectra of the old nova CP Pup (Nova Pup 1942) obtained with Chandra and the CTIO 4-m telescope. The X-ray spectrum reveals a multi-temperature optically thin plasma reaching a maximum temperature of 36 (+19/-16) keV, absorbed by local complex neutral material. The time-resolved optical spectroscopy confirms the presence of the 1.47 hr period, with cycle-to-cycle amplitude changes, as well as an additional long-term modulation which is suggestive either of a longer period or of non-Keplerian velocities in the emission line regions. These new observational facts add further support to CP Pup being a magnetic cataclysmic variable (mCV). We compare the mCV and non-mCV scenarios, and while we cannot conclude whether CP Pup is a long-period system, all observational evidence points to an intermediate polar (IP) type CV.

  7. Pericardiectomy as a diagnostic and therapeutic procedure.

    PubMed

    Konik, Ewa; Geske, Jeffrey; Edwards, William; Gersh, Bernard

    2016-11-14

    A 70-year-old man presented with recent onset, predominantly right-sided heart failure. Echocardiogram demonstrated features of hypertensive heart disease and was suggestive of, but non-diagnostic for, constrictive pericarditis (CP). CT demonstrated mild pericardial thickening. Right heart catheterisation showed elevation and equalisation of diastolic pressures in all cardiac chambers with early rapid filling, minimal ventricular interdependence, and no dissociation of intrathoracic and intracardiac pressures. While several features pointed towards CP, the minimal ventricular interdependence and no dissociation of intrathoracic and intracardiac pressures suggested other pathology. Diagnostic pericardiectomy was performed, after which the central venous pressure decreased from 22 to 12 mm Hg. Pathology revealed pericardial fibrosis. The patient experienced sustained resolution of his heart failure. A potential explanation for lack of CP criteria was the presence of hypertensive heart disease. CP needs to be considered when approaching patients with heart failure as diagnostic evaluation can be multifaceted and treatment curative.

  8. Automatic pole-like object modeling via 3D part-based analysis of point cloud

    NASA Astrophysics Data System (ADS)

    He, Liu; Yang, Haoxiang; Huang, Yuchun

    2016-10-01

    Pole-like objects, including trees, lampposts and traffic signs, are an indispensable part of urban infrastructure. With the advance of vehicle-based laser scanning (VLS), massive point clouds of roadside urban areas are increasingly applied in 3D digital city modeling. Based on the property that different pole-like objects have various canopy parts but similar trunk parts, this paper proposes a 3D part-based shape analysis to robustly extract, identify and model pole-like objects. The proposed method includes: 3D clustering and recognition of trunks, voxel growing, and part-based 3D modeling. After preprocessing, each trunk center is identified as a point that has a local density peak and the largest minimum distance to any point of higher density. Starting from the trunk centers, the remaining points are iteratively assigned to the cluster of their nearest higher-density point. To eliminate noisy points, cluster borders are refined by trimming boundary outliers. Candidate trunks are then extracted from the clustering results by shape analysis in three orthogonal planes. Voxel growing recovers the complete pole-like objects regardless of overlap. Finally, the entire trunk, branch and crown parts are analyzed to obtain seven feature parameters, which are used to model the three parts respectively and obtain a single part-assembled 3D model. The proposed method is tested using a VLS-based point cloud of Wuhan University, China. The point cloud includes many kinds of trees, lampposts and other pole-like posts under different occlusions and overlaps. Experimental results show that the proposed method can extract the exact attributes and model the roadside pole-like objects efficiently.
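
    The trunk-center rule, a local density peak combined with a large distance to any denser point, mirrors density-peaks clustering. A small brute-force sketch under that reading (names and the cutoff parameter `dc` are our own, not the authors' code):

```python
import numpy as np

def density_peaks(points, dc, n_centers):
    """Rank points by rho * delta, where rho is the local density (number
    of neighbours within cutoff dc) and delta is the distance to the
    nearest point of strictly higher density; return the top candidates."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    rho = (d < dc).sum(axis=1) - 1           # exclude the point itself
    delta = np.empty(len(pts))
    for i in range(len(pts)):
        higher = np.where(rho > rho[i])[0]
        # Density peaks have no denser neighbour, so they get a large delta.
        delta[i] = d[i, higher].min() if higher.size else d[i].max()
    return np.argsort(-(rho * delta))[:n_centers]
```

    Points that score high on both criteria are exactly the isolated density maxima one expects at trunk cross-sections, one per pole-like object.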

  9. Cutting efficiency of air-turbine burs on cast titanium and dental casting alloys.

    PubMed

    Watanabe, I; Ohkubo, C; Ford, J P; Atsuta, M; Okabe, T

    2000-11-01

    The purpose of this study was to investigate the cutting efficiency of air-turbine burs on cast free-machining titanium alloy (DT2F) and to compare the results with those for cast commercially pure (CP) Ti, Ti-6Al-4V alloy, and dental casting alloys. The cast metal (DT2F, CP Ti, Ti-6Al-4V, Type IV gold alloy and Co-Cr alloy) specimens were cut with air-turbine burs (carbide burs and diamond points) at air pressures of 138 or 207 kPa and a cutting force of 0.784 N. The cutting efficiency of each bur was evaluated as volume loss, calculated from the weight lost during 5 s of cutting and the density of each metal. The bulk microhardness was measured to correlate the machinability with the hardness of each metal. The amounts of DT2F cut with the carbide burs were significantly (p < 0.05) greater than for the other titanium specimens at either 138 or 207 kPa. The diamond points exhibited similar machining efficiency for all metals except Type IV gold alloy. The change in volume loss of Co-Cr alloy (Vitallium) cut with the diamond points was negative (-29%) when air pressure increased from 138 to 207 kPa. There was a negative correlation between the amount of metal removed (volume loss) and the hardness (r2 = 0.689) when the carbide burs were used. The results of this study indicated that the free-machining titanium alloy (DT2F) exhibited better machinability than CP Ti and Ti-6Al-4V alloy when carbide fissure burs were used. When machining cast CP Ti and its alloys, carbide fissure burs possessed greater machining efficiency than the diamond points and are recommended for titanium dental prostheses.

  10. Cloud-Scale Vertical Velocity and Turbulent Dissipation Rate Retrievals

    DOE Data Explorer

    Shupe, Matthew

    2013-05-22

    Time-height fields of retrieved in-cloud vertical wind velocity and turbulent dissipation rate, both retrieved primarily from vertically-pointing, Ka-band cloud radar measurements. Files are available for manually-selected, stratiform, mixed-phase cloud cases observed at the North Slope of Alaska (NSA) site during periods covering the Mixed-Phase Arctic Cloud Experiment (MPACE, late September through early November 2004) and the Indirect and Semi-Direct Aerosol Campaign (ISDAC, April-early May 2008). These time periods will be expanded in a future submission.

  11. A continuous surface reconstruction method on point cloud captured from a 3D surface photogrammetry system.

    PubMed

    Liu, Wenyang; Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J; Sawant, Amit; Ruan, Dan

    2015-11-01

    To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). The authors have developed a level-set based surface reconstruction method for point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. In contrast to explicit/discrete meshing-type schemes, this continuous representation is particularly advantageous for subsequent surface registration and motion tracking, as it eliminates the need to maintain explicit point correspondences as in discrete models. The authors solve the proposed formulation with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiments, the authors generated a series of surfaces, each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched areas had different degrees of noise and missing data, since VisionRT has difficulty detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated the reconstructed surfaces by comparing them against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiments, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground truth, the authors qualitatively validated the reconstructed surfaces by comparing their local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT of the same patient. 
    On phantom point clouds, the method achieved submillimeter reconstruction RMSE under the different configurations, demonstrating quantitatively its fidelity in preserving local structural properties of the underlying surface in the presence of noise and missing measurements, and its robustness to variations of such characteristics. On point clouds from the human subject, the proposed method successfully reconstructed all patient surfaces, filling regions where raw point coordinate readings were missing. Within two comparable regions of interest in the chest area, similar mean curvature distributions were obtained from the reconstructed surface and the CT surface, with mean and standard deviation of (μ_recon = -2.7 × 10^-3 mm^-1, σ_recon = 7.0 × 10^-3 mm^-1) and (μ_CT = -2.5 × 10^-3 mm^-1, σ_CT = 5.3 × 10^-3 mm^-1), respectively. The agreement of local geometry properties between the reconstructed surfaces and the CT surface demonstrates the ability of the proposed method to faithfully represent the underlying patient surface. The authors have developed an accurate level-set based continuous surface reconstruction method for point clouds acquired by a 3D surface photogrammetry system. The proposed method generates a continuous representation of the underlying phantom and patient surfaces with good robustness against noise and missing measurements, and serves as an important first step for further development of motion tracking methods during radiotherapy.

  12. Underwater 3d Modeling: Image Enhancement and Point Cloud Filtering

    NASA Astrophysics Data System (ADS)

    Sarakinou, I.; Papadimitriou, K.; Georgoula, O.; Patias, P.

    2016-06-01

    This paper examines the results of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically, it evaluates the combined effects of manually editing the images' radiometry (captured at shallow depths) and selecting parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck and a small boat's wreck) captured at three different depths (3.5 m, 10 m and 14 m, respectively). Four models have been created from the first dataset (seafloor) in order to evaluate the results of applying image enhancement techniques and point cloud filtering. The main process for this preliminary study included a) the definition of parameters for point cloud filtering and the creation of a reference model, b) the radiometric editing of the images, followed by the creation of three improved models, and c) the assessment of results by comparing the visual and geometric quality of the improved models against the reference one. Finally, the selected technique is tested on two other datasets in order to examine its appropriateness for different depths (10 m and 14 m) and different objects (part of a wreck and a small boat's wreck), in the context of ongoing research in the Laboratory of Photogrammetry and Remote Sensing.

  13. Terrestrial laser scanning to quantify above-ground biomass of structurally complex coastal wetland vegetation

    NASA Astrophysics Data System (ADS)

    Owers, Christopher J.; Rogers, Kerrylee; Woodroffe, Colin D.

    2018-05-01

    Above-ground biomass represents a small yet significant contributor to carbon storage in coastal wetlands. Despite this, above-ground biomass is often poorly quantified, particularly in areas where vegetation structure is complex. Traditional methods for providing accurate estimates involve harvesting vegetation to develop mangrove allometric equations and to quantify saltmarsh biomass in quadrats. However, broad-scale application of these methods may not capture the structural variability of vegetation, resulting in a loss of detail and estimates with considerable uncertainty. Terrestrial laser scanning (TLS) collects high-resolution three-dimensional point clouds capable of providing detailed structural morphology of vegetation. This study demonstrates that TLS is a suitable non-destructive method for estimating biomass of structurally complex coastal wetland vegetation. We compare two volumetric modelling techniques, 3-D surface reconstruction and rasterised volume, with a point cloud elevation histogram modelling technique for estimating biomass. Our results show that current volumetric modelling approaches for estimating TLS-derived biomass are comparable to traditional mangrove allometrics and saltmarsh harvesting. However, volumetric modelling approaches oversimplify vegetation structure by under-utilising the large amount of structural information provided by the point cloud. The point cloud elevation histogram model presented in this study, as an alternative to volumetric modelling, utilises all of the information within the point cloud, as opposed to sub-sampling based on specific criteria. This method is simple but highly effective for both mangrove (r2 = 0.95) and saltmarsh (r2 > 0.92) vegetation. Our results provide evidence that application of TLS in coastal wetlands is an effective non-destructive method to accurately quantify biomass for structurally complex vegetation.

  14. Satellite remote sensing and cloud modeling of St. Anthony, Minnesota storm clouds and dew point depression

    NASA Technical Reports Server (NTRS)

    Hung, R. J.; Tsao, Y. D.

    1988-01-01

    Rawinsonde data and geosynchronous satellite imagery were used to investigate the life cycles of St. Anthony, Minnesota's severe convective storms. It is found that the fully developed storm clouds, with overshooting cloud tops penetrating above the tropopause, collapsed about three minutes before the touchdown of the tornadoes. Results indicate that the probability of an outbreak of tornadoes causing greater damage increases with higher values of potential energy storage per unit area for overshooting cloud tops penetrating the tropopause. It is also found that clouds with a lower moisture content are less likely to grow into storm clouds than clouds with a higher moisture content.

  15. FUNCTION GENERATOR FOR ANALOGUE COMPUTERS

    DOEpatents

    Skramstad, H.K.; Wright, J.H.; Taback, L.

    1961-12-12

    An improved analogue computer is designed which can be used to determine the final ground position of radioactive fallout particles in an atomic cloud. The computer determines the fallout pattern on the basis of known wind velocity and direction at various altitudes, and intensity of radioactivity in the mushroom cloud as a function of particle size and initial height in the cloud. The output is then displayed on a cathode-ray tube so that the average or total luminance of the tube screen at any point represents the intensity of radioactive fallout at the geographical location represented by that point. (AEC)

  16. Designing and Testing a UAV Mapping System for Agricultural Field Surveying

    PubMed Central

    Skovsen, Søren

    2017-01-01

    A Light Detection and Ranging (LiDAR) sensor mounted on an Unmanned Aerial Vehicle (UAV) can map the overflown environment in point clouds. Mapped canopy heights allow for the estimation of crop biomass in agriculture. The work presented in this paper contributes to sensory UAV setup design for mapping and textual analysis of agricultural fields. LiDAR data are combined with data from Global Navigation Satellite System (GNSS) and Inertial Measurement Unit (IMU) sensors to conduct environment mapping for point clouds. The proposed method facilitates LiDAR recordings in an experimental winter wheat field. Crop height estimates ranging from 0.35–0.58 m are correlated to the applied nitrogen treatments of 0–300 kg N ha^-1. The LiDAR point clouds are recorded, mapped, and analysed using the functionalities of the Robot Operating System (ROS) and the Point Cloud Library (PCL). Crop volume estimation is based on a voxel grid with a spatial resolution of 0.04 × 0.04 × 0.001 m. Two different flight patterns are evaluated at an altitude of 6 m to determine the impacts of the mapped LiDAR measurements on crop volume estimations. PMID:29168783

  17. Reconstruction of Consistent 3d CAD Models from Point Cloud Data Using a Priori CAD Models

    NASA Astrophysics Data System (ADS)

    Bey, A.; Chaine, R.; Marc, R.; Thibault, G.; Akkouche, S.

    2011-09-01

    We address the reconstruction of 3D CAD models from point cloud data acquired in industrial environments, using a pre-existing 3D model as an initial estimate of the scene to be processed. Indeed, this prior knowledge can be used to drive the reconstruction so as to generate an accurate 3D model matching the point cloud. We focus our work in particular on the cylindrical parts of the 3D models. We propose to state the problem in a probabilistic framework: we search for the 3D model which maximizes a probability taking several constraints into account, such as the relevancy with respect to the point cloud and the a priori 3D model, and the consistency of the reconstructed model. The resulting optimization problem is then handled using a stochastic exploration of the solution space, based on the random insertion of elements in the configuration under construction, coupled with a greedy management of conflicts which efficiently improves the configuration at each step. We show that this approach provides reliable reconstructed 3D models by presenting results on industrial data sets.

  18. Designing and Testing a UAV Mapping System for Agricultural Field Surveying.

    PubMed

    Christiansen, Martin Peter; Laursen, Morten Stigaard; Jørgensen, Rasmus Nyholm; Skovsen, Søren; Gislum, René

    2017-11-23

    A Light Detection and Ranging (LiDAR) sensor mounted on an Unmanned Aerial Vehicle (UAV) can map the overflown environment in point clouds. Mapped canopy heights allow for the estimation of crop biomass in agriculture. The work presented in this paper contributes to sensory UAV setup design for mapping and textual analysis of agricultural fields. LiDAR data are combined with data from Global Navigation Satellite System (GNSS) and Inertial Measurement Unit (IMU) sensors to conduct environment mapping for point clouds. The proposed method facilitates LiDAR recordings in an experimental winter wheat field. Crop height estimates ranging from 0.35-0.58 m are correlated to the applied nitrogen treatments of 0-300 kg N ha^-1. The LiDAR point clouds are recorded, mapped, and analysed using the functionalities of the Robot Operating System (ROS) and the Point Cloud Library (PCL). Crop volume estimation is based on a voxel grid with a spatial resolution of 0.04 × 0.04 × 0.001 m. Two different flight patterns are evaluated at an altitude of 6 m to determine the impacts of the mapped LiDAR measurements on crop volume estimations.
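    The voxel-grid volume step described above can be sketched as follows. This is a minimal illustration, not the authors' ROS/PCL pipeline; the flat ground level and the occupied-voxel counting are assumptions of this sketch:

```python
import numpy as np

def crop_volume(points, res=(0.04, 0.04, 0.001), ground_z=0.0):
    """Estimate crop volume as (number of occupied voxels) x (voxel volume)
    on a grid with the resolution quoted in the abstract. `points` is an
    (N, 3) array of mapped LiDAR returns; `ground_z` is an assumed flat
    ground level used to discard ground hits."""
    res = np.asarray(res)
    pts = points[points[:, 2] > ground_z]                    # keep returns above ground
    voxels = np.unique(np.floor(pts / res).astype(int), axis=0)  # one row per occupied voxel
    return len(voxels) * float(np.prod(res))
```

    In practice the voxel resolution trades off sensitivity to point density against the sharpness of the estimated canopy volume.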

  19. Pairwise registration of TLS point clouds using covariance descriptors and a non-cooperative game

    NASA Astrophysics Data System (ADS)

    Zai, Dawei; Li, Jonathan; Guo, Yulan; Cheng, Ming; Huang, Pengdi; Cao, Xiaofei; Wang, Cheng

    2017-12-01

    It is challenging to automatically register TLS point clouds with noise, outliers and varying overlap. In this paper, we propose a new method for pairwise registration of TLS point clouds. We first generate covariance matrix descriptors with an adaptive neighborhood size from point clouds to find candidate correspondences, we then construct a non-cooperative game to isolate mutual compatible correspondences, which are considered as true positives. The method was tested on three models acquired by two different TLS systems. Experimental results demonstrate that our proposed adaptive covariance (ACOV) descriptor is invariant to rigid transformation and robust to noise and varying resolutions. The average registration errors achieved on three models are 0.46 cm, 0.32 cm and 1.73 cm, respectively. The computational times cost on these models are about 288 s, 184 s and 903 s, respectively. Besides, our registration framework using ACOV descriptors and a game theoretic method is superior to the state-of-the-art methods in terms of both registration error and computational time. The experiment on a large outdoor scene further demonstrates the feasibility and effectiveness of our proposed pairwise registration framework.
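    The covariance-descriptor idea underlying ACOV can be illustrated with a minimal sketch: the eigenvalues of a neighbourhood's covariance matrix are unchanged under rigid transformation, which is the invariance property such descriptors build on. The paper's adaptive neighbourhood selection is not reproduced here:

```python
import numpy as np

def covariance_descriptor(neighborhood):
    """3x3 covariance matrix of a point's local neighbourhood, an (N, 3)
    array. Translation drops out through centering, and a rigid rotation R
    maps the matrix to R C R^T, leaving its eigenvalues unchanged -- an
    illustrative sketch, not the paper's exact ACOV construction."""
    centered = neighborhood - neighborhood.mean(axis=0)
    return centered.T @ centered / (len(neighborhood) - 1)
```

    Because the eigenvalues survive rigid motion, eigenvalue-based features of two scans can be compared without first aligning them.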

  20. The Control Region of Mitochondrial DNA Shows an Unusual CpG and Non-CpG Methylation Pattern

    PubMed Central

    Bellizzi, Dina; D'Aquila, Patrizia; Scafone, Teresa; Giordano, Marco; Riso, Vincenzo; Riccio, Andrea; Passarino, Giuseppe

    2013-01-01

    DNA methylation is a common epigenetic modification of the mammalian genome. Conflicting data regarding the possible presence of methylated cytosines within mitochondrial DNA (mtDNA) have been reported. To clarify this point, we analysed the methylation status of the mtDNA control region (D-loop) in human and murine DNA samples from blood and cultured cells by bisulphite sequencing and methylated/hydroxymethylated DNA immunoprecipitation assays. We found methylated and hydroxymethylated cytosines in the L-strand of all samples analysed. MtDNA methylation particularly occurs at non-C-phosphate-G (non-CpG) nucleotides, mainly in the promoter region of the heavy strand and in conserved sequence blocks, suggesting its involvement in regulating mtDNA replication and/or transcription. We observed DNA methyltransferases within the mitochondria, but the inactivation of Dnmt1, Dnmt3a, and Dnmt3b in mouse embryonic stem (ES) cells results in a reduction of CpG methylation, while non-CpG methylation appears unaffected. This suggests that D-loop epigenetic modification is only partially established by these enzymes. Our data show that DNA methylation occurs in the mtDNA control region of mammals, not only at symmetrical CpG dinucleotides, typical of the nuclear genome, but also in a peculiar non-CpG pattern previously reported for plants and fungi. The molecular mechanisms responsible for this pattern remain an open question. PMID:23804556

  1. Antiproliferative Activity of Cyanophora paradoxa Pigments in Melanoma, Breast and Lung Cancer Cells

    PubMed Central

    Baudelet, Paul-Hubert; Gagez, Anne-Laure; Bérard, Jean-Baptiste; Juin, Camille; Bridiau, Nicolas; Kaas, Raymond; Thiéry, Valérie; Cadoret, Jean-Paul; Picot, Laurent

    2013-01-01

    The glaucophyte Cyanophora paradoxa (Cp) was chemically investigated to identify pigments efficiently inhibiting the growth of malignant melanoma, mammary carcinoma and lung adenocarcinoma cells. Cp water and ethanol extracts significantly inhibited the growth of the three cancer cell lines in vitro at 100 µg·mL−1. Flash chromatography of the Cp ethanol extract, devoid of c-phycocyanin and allophycocyanin, enabled the collection of eight fractions, four of which strongly inhibited cancer cell growth at 100 µg·mL−1. In particular, two fractions inhibited more than 90% of melanoma cell growth, one inducing apoptosis in the three cancer cell lines. The detailed analysis of Cp pigment composition resulted in the discrimination of 17 molecules, ten of which were unequivocally identified by high resolution mass spectrometry. Pheophorbide a, β-cryptoxanthin and zeaxanthin were the three main pigments or derivatives responsible for the strong cytotoxicity of Cp fractions in cancer cells. These data point to Cyanophora paradoxa as a new microalgal source of potent anticancer pigments, and demonstrate for the first time the strong antiproliferative activity of zeaxanthin and β-cryptoxanthin in melanoma cells. PMID:24189278

  2. Accessing Valuable Ligand Supports for Transition Metals: A Modified, Intermediate Scale Preparation of 1,2,3,4,5-Pentamethylcyclopentadiene.

    PubMed

    Call, Zachary; Suchewski, Meagan; Bradley, Christopher A

    2017-03-20

    A reliable, intermediate scale preparation of 1,2,3,4,5-pentamethylcyclopentadiene (Cp*H) is presented, based on modifications of existing protocols that derive from initial 2-bromo-2-butene lithiation followed by acid mediated dienol cyclization. The revised synthesis and purification of the ligand avoids the use of mechanical stirring while still permitting access to significant quantities (39 g) of Cp*H in good yield (58%). The procedure offers other additional benefits, including a more controlled quench of excess lithium during the production of the intermediate heptadienols and a simplified isolation of Cp*H of sufficient purity for metallation with transition metals. The ligand was subsequently used to synthesize [Cp*MCl2]2 complexes of both iridium and ruthenium to demonstrate the utility of the Cp*H prepared and purified by our method. The procedure outlined herein affords substantial quantities of a ubiquitous ancillary ligand support used in organometallic chemistry while minimizing the need for specialized laboratory equipment, thus providing a simpler and more accessible entry point into the chemistry of 1,2,3,4,5-pentamethylcyclopentadiene.

  3. Normalized vertical ice mass flux profiles from vertically pointing 8-mm-wavelength Doppler radar

    NASA Technical Reports Server (NTRS)

    Orr, Brad W.; Kropfli, Robert A.

    1993-01-01

    During the FIRE 2 (First International Satellite Cloud Climatology Project Regional Experiment) project, NOAA's Wave Propagation Laboratory (WPL) operated its 8-mm wavelength Doppler radar extensively in the vertically pointing mode. This allowed for the calculation of a number of important cirrus cloud parameters, including cloud boundary statistics, cloud particle characteristic sizes and concentrations, and ice mass content (imc). The flux of imc, or, alternatively, ice mass flux (imf), is also an important parameter of a cirrus cloud system. Ice mass flux is important in the vertical redistribution of water substance and thus, in part, determines the cloud evolution. It is important for the development of cloud parameterizations to be able to define the essential physical characteristics of large populations of clouds in the simplest possible way. One method would be to normalize profiles of observed cloud properties, such as those mentioned above, in ways similar to those used in the convective boundary layer. The height then scales from 0.0 at cloud base to 1.0 at cloud top, and the measured cloud parameter scales by its maximum value so that all normalized profiles have 1.0 as their maximum value. The goal is that there will be a 'universal' shape to profiles of the normalized data. This idea was applied to estimates of imf calculated from data obtained by the WPL cloud radar during FIRE II. Other quantities such as median particle diameter, concentration, and ice mass content can also be estimated with this radar, and we expect to also examine normalized profiles of these quantities in time for the 1993 FIRE II meeting.
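    The normalisation described above (height scaled from 0.0 at cloud base to 1.0 at cloud top, the measured quantity scaled by its maximum) can be sketched directly; variable names here are chosen for illustration:

```python
import numpy as np

def normalize_profile(heights, values, cloud_base, cloud_top):
    """Normalize an in-cloud profile: height maps to 0.0 at cloud base and
    1.0 at cloud top, and the measured quantity (e.g. ice mass flux) is
    scaled by its maximum so every normalized profile peaks at 1.0."""
    z = (np.asarray(heights, float) - cloud_base) / (cloud_top - cloud_base)
    v = np.asarray(values, float) / np.max(values)
    return z, v
```

    Profiles from many clouds can then be overlaid on the same axes to look for the 'universal' shape the abstract mentions.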

  4. Study protocol for a randomised, double-blinded, placebo-controlled, clinical trial of S-ketamine for pain treatment in patients with chronic pancreatitis (RESET trial).

    PubMed

    Juel, Jacob; Olesen, Søren Schou; Olesen, Anne Estrup; Poulsen, Jakob Lykke; Dahan, Albert; Wilder-Smith, Oliver; Madzak, Adnan; Frøkjær, Jens Brøndum; Drewes, Asbjørn Mohr

    2015-03-10

    Chronic pancreatitis (CP) is an inflammatory disease that causes irreversible damage to pancreatic tissue. Pain is its most prominent symptom. In the absence of pathology suitable for endoscopic or surgical interventions, pain treatment usually includes opioids. However, opioids often have limited efficacy. Moreover, side effects are common and bothersome. Hence, novel approaches to control pain associated with CP are highly desirable. Sensitisation of the central nervous system is reported to play a key role in pain generation and chronification. Fundamental to the process of central sensitisation is abnormal activation of the N-methyl-D-aspartate receptor, which can be antagonised by S-ketamine. The RESET trial is investigating the analgesic and antihyperalgesic effect of S-ketamine in patients with CP. 40 patients with CP will be enrolled. Patients are randomised to receive 8 h of intravenous S-ketamine followed by oral S-ketamine, or matching placebo, for 4 weeks. To improve blinding, 1 mg of midazolam will be added to active and placebo treatment. The primary end point is clinical pain relief as assessed by a daily pain diary. Secondary end points include changes in patient-reported outcome measures, opioid consumption and rates of side effects. The end points are registered through the 4-week medication period and for an additional follow-up period of 8 weeks to investigate long-term effects. In addition, experimental pain measures also serve as secondary end points, and neurophysiological imaging parameters are collected. Furthermore, experimental baseline recordings are compared to recordings from a group of healthy controls to evaluate general aspects of pain processing in CP. The protocol is approved by the North Denmark Region Committee on Health Research Ethics (N-20130040) and the Danish Health and Medicines Authorities (EudraCT number: 2013-003357-17). The results will be disseminated in peer-reviewed journals and at scientific conferences. 
    The study is registered at http://www.clinicaltrialsregister.eu (EudraCT number 2013-003357-17).

  5. 2.5D multi-view gait recognition based on point cloud registration.

    PubMed

    Tang, Jin; Luo, Jian; Tjahjadi, Tardi; Gao, Yan

    2014-03-28

    This paper presents a method for modeling a 2.5-dimensional (2.5D) human body and extracting gait features for identifying the human subject. To achieve view-invariant gait recognition, a multi-view synthesizing method based on point cloud registration (MVSM) is proposed to generate multi-view training galleries. The concept of a density- and curvature-based Color Gait Curvature Image is introduced to map 2.5D data onto a 2D space, enabling data dimension reduction by discrete cosine transform and 2D principal component analysis. Gait recognition is achieved via a 2.5D view-invariant gait recognition method based on point cloud registration. Experimental results on the in-house database captured by a Microsoft Kinect camera show a significant performance gain when using MVSM.

  6. The potential of cloud point system as a novel two-phase partitioning system for biotransformation.

    PubMed

    Wang, Zhilong

    2007-05-01

    Although extractive biotransformation in two-phase partitioning systems, such as the water-organic solvent two-phase system, the aqueous two-phase system, the reverse micelle system, and room temperature ionic liquids, has been studied extensively, this has not yet resulted in widespread industrial application. After a discussion of the main obstacles, the exploitation of the cloud point system, already applied in the separation field as cloud point extraction, as a novel two-phase partitioning system for biotransformation is reviewed through analysis of some topical examples. At the end of the review, process control and downstream processing in the application of this novel two-phase partitioning system for biotransformation are also briefly discussed.

  7. Motion data classification on the basis of dynamic time warping with a cloud point distance measure

    NASA Astrophysics Data System (ADS)

    Switonski, Adam; Josinski, Henryk; Zghidi, Hafedh; Wojciechowski, Konrad

    2016-06-01

    The paper deals with the problem of classification of model-free motion data. A nearest-neighbour classifier is proposed, based on comparisons performed by a Dynamic Time Warping transform with a cloud point distance measure. The classification utilizes both specific gait features, reflected by the movements of successive skeleton joints, and anthropometric data. To validate the proposed approach, the human gait identification challenge problem is taken into consideration. The motion capture database containing data of 30 different humans, collected in the Human Motion Laboratory of the Polish-Japanese Academy of Information Technology, is used. The achieved results are satisfactory: the obtained accuracy of human recognition exceeds 90%. What is more, the applied cloud point distance measure does not depend on the calibration process of the motion capture system, which results in reliable validation.
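    The classifier described above can be sketched as standard dynamic time warping over sequences of point clouds. The symmetric mean nearest-neighbour frame distance used here is a stand-in; the paper's exact cloud point distance measure is not reproduced:

```python
import numpy as np

def cloud_distance(a, b):
    """Symmetric mean nearest-neighbour distance between two point sets
    (rows of `a` and `b` are points) -- an assumed stand-in for the paper's
    cloud point distance measure."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def dtw_distance(seq_a, seq_b, dist=cloud_distance):
    """Classic dynamic time warping between two sequences whose samples are
    point clouds (e.g. per-frame skeleton joint positions)."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(seq_a[i - 1], seq_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

    A nearest-neighbour classifier then labels a probe sequence with the class of the gallery sequence minimising `dtw_distance`.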

  8. Lidars for smoke and dust cloud diagnostics

    NASA Astrophysics Data System (ADS)

    Fujimura, S. F.; Warren, R. E.; Lutomirski, R. F.

    1980-11-01

    An algorithm that integrates a time-resolved lidar signature for use in estimating transmittance, extinction coefficient, mass concentration, and CL values generated under battlefield conditions is applied to lidar signatures measured during the DIRT-I tests. Estimates are given for the dependence of the inferred transmittance and extinction coefficient on uncertainties in parameters such as the obscurant backscatter-to-extinction ratio. The enhanced reliability in estimating transmittance through use of a target behind the obscurant cloud is discussed. It is found that the inversion algorithm can produce reliable estimates of smoke or dust transmittance and extinction from all points within the cloud for which a resolvable signal can be detected, and that a single point calibration measurement can convert the extinction values to mass concentration for each resolvable signal point.

  9. Point clouds in BIM

    NASA Astrophysics Data System (ADS)

    Antova, Gergana; Kunchev, Ivan; Mickrenska-Cherneva, Christina

    2016-10-01

    The representation of physical buildings in Building Information Models (BIM) has been a subject of research for four decades in the fields of Construction Informatics and GeoInformatics. The early digital representations of buildings mainly appeared as 3D drawings constructed with CAD software; this 3D representation was only geometric, while semantics and topology were out of the modelling focus. On the other hand, less detailed building representations, often focused on 'outside' views, were also found in the form of 2D/2.5D GeoInformation models. Point clouds from 3D laser scanning data give a full and exact representation of the building geometry. The article presents different aspects and benefits of using point clouds in BIM at the different stages of a building's lifecycle.

  10. Helical magnetic fields in molecular clouds?. A new method to determine the line-of-sight magnetic field structure in molecular clouds

    NASA Astrophysics Data System (ADS)

    Tahani, M.; Plume, R.; Brown, J. C.; Kainulainen, J.

    2018-06-01

    Context. Magnetic fields pervade the interstellar medium (ISM) and are believed to be important in the process of star formation, yet probing magnetic fields in star formation regions is challenging. Aims: We propose a new method that uses Faraday rotation measurements in small-scale star-forming regions to find the direction and magnitude of the magnetic field component along the line of sight. We test the proposed method in four relatively nearby regions: Orion A, Orion B, Perseus, and California. Methods: We use rotation measure data from the literature. We adopt a simple approach based on relative measurements to estimate the rotation measure due to the molecular clouds over the Galactic contribution. We then use a chemical evolution code along with extinction maps of each cloud to find the electron column density of the molecular cloud at the position of each rotation measure data point. Combining the rotation measures produced by the molecular clouds with the electron column densities, we calculate the line-of-sight magnetic field strength and direction. Results: In California and Orion A, we find clear evidence that the magnetic fields on one side of these filamentary structures point towards us and away from us on the other side. The magnetic fields in Perseus might seem to suggest the same behavior, but not enough data points are available to draw such conclusions; in Orion B as well, too few data points are available to detect such behavior. This magnetic field reversal is consistent with a helical magnetic field morphology. In the vicinity of available Zeeman measurements in OMC-1, OMC-B, and the dark cloud Barnard 1, we find magnetic field values of -23 ± 38 μG, -129 ± 28 μG, and 32 ± 101 μG, respectively, in agreement with the Zeeman measurements. 
Tables 1 to 7 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/614/A100
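
The field-strength step described in this record can be sketched from the standard Faraday-rotation relation RM = 0.812 ∫ nₑ B∥ dl (RM in rad m⁻², nₑ in cm⁻³, B∥ in μG, path length in pc). The inversion below is an illustration of that textbook formula, not the authors' code, and the numbers are made up:

```python
# Hedged sketch: invert the Faraday-rotation relation for the line-of-sight
# field, given the cloud-induced RM (rad/m^2) and the electron column density
# N_e (cm^-3 pc). A positive result means the field points towards the observer.

def los_field_strength(rm_cloud, electron_column):
    """Line-of-sight magnetic field in microgauss."""
    return rm_cloud / (0.812 * electron_column)

# Illustrative numbers only: RM = -50 rad/m^2 over N_e = 20 cm^-3 pc
b_los = los_field_strength(-50.0, 20.0)  # negative -> field points away from us
```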

  11. Features of Point Clouds Synthesized from Multi-View ALOS/PRISM Data and Comparisons with LiDAR Data in Forested Areas

    NASA Technical Reports Server (NTRS)

    Ni, Wenjian; Ranson, Kenneth Jon; Zhang, Zhiyu; Sun, Guoqing

    2014-01-01

    LiDAR waveform data from airborne LiDAR scanners (ALS), e.g. the Land Vegetation and Ice Sensor (LVIS), have been successfully used for estimation of forest height and biomass at local scales and have become the preferred remote sensing dataset. However, regional and global applications are limited by the cost of airborne LiDAR data acquisition, and there are no available spaceborne LiDAR systems. Some researchers have demonstrated the potential for mapping forest height using aerial or spaceborne stereo imagery with very high spatial resolutions. For stereo images with global coverage but coarse resolution, new analysis methods need to be used. Unlike most research based on digital surface models, this study concentrated on analyzing the features of point cloud data generated from stereo imagery. The synthesizing of point cloud data from multi-view stereo imagery increased the point density of the data. The point cloud data over forested areas were analyzed and compared to small-footprint LiDAR data and large-footprint LiDAR waveform data. The results showed that the synthesized point cloud data from ALOS/PRISM triplets produce vertical distributions similar to LiDAR data and detected the vertical structure of sparse and non-closed forests at 30 m resolution. For dense forest canopies, the canopy could be captured but the ground surface could not be seen, so surface elevations from other sources would be needed to calculate the height of the canopy. A canopy height map with 30 m pixels was produced by subtracting the national elevation dataset (NED) from the averaged elevation of synthesized point clouds, which exhibited spatial features of roads, forest edges and patches. The linear regression showed that the canopy height map had a good correlation with RH50 of LVIS data, with a slope of 1.04 and R² of 0.74, indicating that the canopy height derived from PRISM triplets can be used to estimate forest biomass at 30 m resolution.
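
The canopy-height-map step described in this record (mean point elevation per 30 m pixel minus a ground DEM) can be sketched as below; the grid alignment and toy numbers are assumptions, not the paper's code:

```python
import numpy as np

def canopy_height_map(points, dem, pixel=30.0):
    """Average point elevations per pixel and subtract the ground DEM.
    points: (N, 3) array of x, y, z; dem: 2D ground-elevation grid whose
    cell (0, 0) is assumed to start at the coordinate origin."""
    ny, nx = dem.shape
    total = np.zeros((ny, nx))
    count = np.zeros((ny, nx))
    ix = (points[:, 0] // pixel).astype(int)
    iy = (points[:, 1] // pixel).astype(int)
    np.add.at(total, (iy, ix), points[:, 2])  # sum of elevations per cell
    np.add.at(count, (iy, ix), 1)             # number of points per cell
    mean_z = np.where(count > 0, total / np.maximum(count, 1), np.nan)
    return mean_z - dem
```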

  12. Multibeam 3D Underwater SLAM with Probabilistic Registration.

    PubMed

    Palomer, Albert; Ridao, Pere; Ribas, David

    2016-04-20

    This paper describes a pose-based underwater 3D Simultaneous Localization and Mapping (SLAM) algorithm using a multibeam echosounder to produce highly consistent underwater maps. The proposed algorithm compounds swath profiles of the seafloor with dead reckoning localization to build surface patches (i.e., point clouds). An Iterative Closest Point (ICP) algorithm with a probabilistic implementation is then used to register the point clouds, taking into account their uncertainties. The registration process is divided into two steps: (1) point-to-point association for coarse registration and (2) point-to-plane association for fine registration. The point clouds of the surfaces to be registered are sub-sampled in order to decrease both the computation time and the potential of falling into local minima during the registration. In addition, a heuristic is used to decrease the complexity of the association step of the ICP from O(n²) to O(n). The performance of the SLAM framework is tested using two real-world datasets: first, a 2.5D bathymetric dataset obtained with the usual down-looking multibeam sonar configuration, and second, a full 3D underwater dataset acquired with a multibeam sonar mounted on a pan and tilt unit.
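
The registration core of one ICP iteration can be sketched as follows. This is a plain deterministic point-to-point step (nearest-neighbour association plus the closed-form SVD alignment), without the paper's uncertainty handling, O(n) association heuristic, or point-to-plane refinement:

```python
import numpy as np

def icp_step(src, dst):
    """One point-to-point ICP iteration: nearest-neighbour association
    followed by the closed-form SVD (Kabsch) rigid alignment."""
    # Brute-force nearest neighbours (a spatial index would be used in practice)
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    # Closed-form rigid transform between the matched sets
    mu_s, mu_d = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

In a full ICP loop this step would be iterated, re-associating after each alignment, until the transform converges.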

  13. A Direct Georeferencing Method for Terrestrial Laser Scanning Using GNSS Data and the Vertical Deflection from Global Earth Gravity Models

    PubMed Central

    Borkowski, Andrzej; Owczarek-Wesołowska, Magdalena; Gromczak, Anna

    2017-01-01

    Terrestrial laser scanning is an efficient technique for providing highly accurate point clouds for various geoscience applications. The point clouds have to be transformed to a well-defined reference frame, such as the global Geodetic Reference System 1980. The transformation to the geocentric coordinate frame is based on estimating seven Helmert parameters using several GNSS (Global Navigation Satellite System) referencing points. This paper proposes a method for direct point cloud georeferencing that provides coordinates in the geocentric frame. The proposed method employs the vertical deflection from an external global Earth gravity model and thus demands a minimum number of GNSS measurements. The proposed method can be helpful when the number of georeferencing GNSS points is limited, for instance in city corridors; it needs only two georeferencing points. The validation of the method in a field test reveals that the differences between the classical georeferencing and the proposed method amount to at most 7 mm, with a standard deviation of 8 mm, for all three coordinate components. The proposed method may serve as an alternative for laser scanning data georeferencing, especially when the number of GNSS points is insufficient for classical methods. PMID:28672795

  14. A Direct Georeferencing Method for Terrestrial Laser Scanning Using GNSS Data and the Vertical Deflection from Global Earth Gravity Models.

    PubMed

    Osada, Edward; Sośnica, Krzysztof; Borkowski, Andrzej; Owczarek-Wesołowska, Magdalena; Gromczak, Anna

    2017-06-24

    Terrestrial laser scanning is an efficient technique for providing highly accurate point clouds for various geoscience applications. The point clouds have to be transformed to a well-defined reference frame, such as the global Geodetic Reference System 1980. The transformation to the geocentric coordinate frame is based on estimating seven Helmert parameters using several GNSS (Global Navigation Satellite System) referencing points. This paper proposes a method for direct point cloud georeferencing that provides coordinates in the geocentric frame. The proposed method employs the vertical deflection from an external global Earth gravity model and thus demands a minimum number of GNSS measurements. The proposed method can be helpful when the number of georeferencing GNSS points is limited, for instance in city corridors; it needs only two georeferencing points. The validation of the method in a field test reveals that the differences between the classical georeferencing and the proposed method amount to at most 7 mm, with a standard deviation of 8 mm, for all three coordinate components. The proposed method may serve as an alternative for laser scanning data georeferencing, especially when the number of GNSS points is insufficient for classical methods.

  15. Effects of Stratospheric Lapse Rate on Thunderstorm Cloud-Top Structure in a Three-Dimensional Numerical Simulation. Part I: Some Basic Results of Comparative Experiments.

    NASA Astrophysics Data System (ADS)

    Schlesinger, Robert E.

    1988-05-01

    An anelastic three-dimensional model is used to investigate the effects of stratospheric temperature lapse rate on cloud-top height/temperature structure for strongly sheared, mature, isolated midlatitude thunderstorms. Three comparative experiments are performed, differing only with respect to the stratospheric stability. The assumed stratospheric lapse rate is 0 K km⁻¹ (isothermal) in the first experiment, 3 K km⁻¹ in the second, and −3 K km⁻¹ (inversion) in the third. Kinematic storm structure is very similar in all three cases, especially in the troposphere. A strong quasi-steady updraft evolves, splitting into a dominant cyclonic overshooting right-mover and a weaker anticyclonic left-mover that does not reach the tropopause. The strongest downdrafts occur at low to middle levels between the updrafts, and in the lower stratosphere a few kilometers upshear and downshear of the tapering updraft summit. Each storm shows a cloud-top thermal couplet: relatively cold near and upshear of the summit, with a `close-in' warm region downshear. Both cold and warm regions become warmer, with significant morphological changes and a lowering of the cloud summit, as stratospheric stability is increased, though the temperature spread is not greatly affected. The coldest and highest cloud-top points are nearly colocated in the absence of a stratospheric inversion, but the coldest point is offset well upshear of the summit when an inversion is present. The cold region as a whole in each case shows at least a transient `V' shape, with the arms pointing downshear, although this shape is persistent only with the inversion. In the experiment with a 3 K km⁻¹ stratospheric lapse rate (weakest stability), the warm region is small and separates into two spots with secondary cold spots downshear of them. The warm region becomes larger, and remains single, as stratospheric stability increases. 
In each run, the warm regions are not accompanied by corresponding cloud-top height minima except very briefly. The cold cloud-top points are near or slightly downwind of relative vertical velocity maxima, usually positive, while the warm points are embedded in subsidence downwind of the principal cloud-top downdraft core. The storm-relative cloud-top horizontal wind fields are consistent with the `V' shape of the cold region, showing strong diffluent flow directed downshear along the flanks from an upshear stagnation zone.

  16. Multiseasonal Tree Crown Structure Mapping with Point Clouds from OTS Quadrocopter Systems

    NASA Astrophysics Data System (ADS)

    Hese, S.; Behrendt, F.

    2017-08-01

    OTS (Off The Shelf) quadrocopter systems provide a cost-effective (below 2000 Euro), flexible and mobile platform for high resolution point cloud mapping. Various studies have shown the full potential of these small and flexible platforms. Especially in very tight and complex 3D environments, the automatic obstacle avoidance, low copter weight, long flight times and precise maneuvering are important advantages of these small OTS systems in comparison with larger octocopter systems. This study examines the potential of the DJI Phantom 4 Pro series and the Phantom 3A series for within-stand and forest tree crown 3D point cloud mapping, using both within-stand oblique imaging at different altitude levels and data captured from a nadir perspective. On a test site in Brandenburg/Germany, a beech crown was selected and measured at 3 different altitude levels in Point Of Interest (POI) mode with oblique data capturing, and one nadir mosaic was created with 85/85 % overlap using Drone Deploy automatic mapping software. Three different flight campaigns were performed, one in September 2016 (leaf-on), one in March 2017 (leaf-off) and one in May 2017 (leaf-on), to derive point clouds from different crown structure and phenological situations - covering the leaf-on and leaf-off status of the tree crown. After height correction, the point clouds were used with GPS georeferencing to calculate voxel-based densities on 50 × 10 × 10 cm voxel definitions, using a topological network of chessboard image objects in 0.5 m height steps in an object-based image processing environment. Comparison between leaf-off and leaf-on status was done on volume pixel definitions, comparing the attributed point densities per volume and plotting the resulting values as a function of distance to the crown center. In the leaf-off status, SFM (structure from motion) algorithms clearly identified the central stem and also secondary branch systems. 
While penetration into the crown structure is limited in the leaf-on status (the point cloud is mainly a description of the interpolated crown surface), the visibility of the internal crown structure in the leaf-off status allows mapping of the internal tree structure down to the secondary branch level. When combined, the leaf-on and leaf-off point clouds generate a comprehensive tree crown structure description that allows low-cost and detailed 3D crown structure mapping, and potentially precise biomass mapping and/or internal structural differentiation of deciduous tree species types. Compared to TLS (Terrestrial Laser Scanning) based measurements, the costs are negligible, in the range of 1500-2500 €. This suggests the approach for low-cost but fine-scale in-situ applications and/or projects where TLS measurements cannot be derived, and for less dense forest stands where POI flights can be performed. This study used the in-copter GPS measurements for georeferencing. Better absolute georeferencing results will be obtained with DGPS reference points. The study however clearly demonstrates the potential of OTS very low cost copter systems, and the image-attributed GPS measurements of the copter, for the automatic calculation of complex 3D point clouds in a multi-temporal tree crown mapping context.
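
The voxel-density computation described above can be sketched as a simple binning step; the 50 × 10 × 10 cm voxel size follows the record, while everything else (coordinate origin, output format) is an illustrative assumption:

```python
import numpy as np

def voxel_point_density(points, voxel=(0.5, 0.1, 0.1)):
    """Count points per voxel for an (N, 3) array of x, y, z coordinates.
    Returns a dict mapping integer voxel indices to point counts."""
    idx = np.floor(points / np.asarray(voxel)).astype(int)
    keys, counts = np.unique(idx, axis=0, return_counts=True)
    return {tuple(k): int(c) for k, c in zip(keys, counts)}
```

Densities from leaf-on and leaf-off acquisitions can then be compared voxel by voxel, e.g. as a function of distance to the crown center.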

  17. Automatic Extraction of Road Markings from Mobile Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Ma, H.; Pei, Z.; Wei, Z.; Zhong, R.

    2017-09-01

    Road markings, a critical feature in the high-definition maps required by Advanced Driver Assistance Systems (ADAS) and self-driving technology, have important functions in providing guidance and information to moving cars. A mobile laser scanning (MLS) system is an effective way to obtain the 3D information of the road surface, including road markings, at highway speeds and at less than traditional survey costs. This paper presents a novel method to automatically extract road markings from MLS point clouds. Ground points are first filtered from the raw input point clouds using a neighborhood elevation consistency method. The basic assumption of the method is that the road surface is smooth: points with a small elevation difference to their neighborhood are considered to be ground points. The ground points are then partitioned into a set of profiles according to trajectory data. The intensity histogram of the points in each profile is generated to find intensity jumps, using a threshold that varies inversely with laser distance. The separated points are used as seeds for intensity-based region growing, so as to obtain complete road markings. We use a point cloud template-matching method to refine the road marking candidates by removing noise clusters with a low correlation coefficient. In an experiment with an MLS point set of about 2 kilometres in a city center, our method provides a promising solution to road markings extraction from MLS data.
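
The seed-and-grow extraction described above can be sketched as follows; a fixed seed threshold stands in for the paper's distance-dependent histogram threshold, and a brute-force neighbour search stands in for a proper spatial index:

```python
import numpy as np

def extract_markings(points, intensity, seed_thresh, grow_tol=5.0, radius=0.3):
    """Seed on high-intensity returns, then grow regions to nearby points of
    similar intensity. Returns the indices of selected points."""
    selected = intensity >= seed_thresh          # seed points
    frontier = list(np.flatnonzero(selected))
    while frontier:
        i = frontier.pop()
        d = np.linalg.norm(points - points[i], axis=1)
        near = (d < radius) & ~selected & (np.abs(intensity - intensity[i]) < grow_tol)
        for j in np.flatnonzero(near):
            selected[j] = True
            frontier.append(j)
    return np.flatnonzero(selected)
```

A subsequent template-matching pass, as in the paper, would then discard low-correlation noise clusters.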

  18. HNSciCloud - Overview and technical Challenges

    NASA Astrophysics Data System (ADS)

    Gasthuber, Martin; Meinhard, Helge; Jones, Robert

    2017-10-01

    HEP is only one of many sciences with sharply increasing compute requirements that cannot be met by profiting from Moore’s law alone. Commercial clouds potentially allow for realising larger economies of scale. While some small-scale experience requiring dedicated effort has been collected, public cloud resources have not yet been integrated with the standard workflows of science organisations in their private data centres; in addition, European science has not ramped up to significant scale yet. The HELIX NEBULA Science Cloud project - HNSciCloud, partly funded by the European Commission, addresses these points. Ten organisations under CERN’s leadership, covering particle physics, bioinformatics, photon science and other sciences, have joined to procure public cloud resources as well as dedicated development efforts towards this integration. The HNSciCloud project faces the challenge of accelerating developments performed by the selected commercial providers. In order to guarantee cost-efficient usage of IaaS resources across a wide range of scientific communities, the technical requirements had to be carefully constructed. With respect to current IaaS offerings, data-intensive science is the biggest challenge; other points that need to be addressed concern identity federations, network connectivity, and how to match the business practices of large IaaS providers with those of public research organisations. The first section of this paper gives an overview of the project and explains the findings so far. The last section explains the key points of the technical requirements and presents first results of the procurers’ experience with the services in comparison to their ’on-premise’ infrastructure.

  19. 3D change detection at street level using mobile laser scanning point clouds and terrestrial images

    NASA Astrophysics Data System (ADS)

    Qin, Rongjun; Gruen, Armin

    2014-04-01

    Automatic change detection and geo-database updating in the urban environment are difficult tasks. There has been much research on detecting changes with satellite and aerial images, but studies have rarely been performed at the street level, which is complex in its 3D geometry. Contemporary geo-databases include 3D street-level objects, which demand frequent data updating. Terrestrial images provide rich texture information for change detection, but change detection with terrestrial images from different epochs sometimes faces problems with illumination changes, perspective distortions and unreliable 3D geometry caused by the lack of performance of automatic image matchers, while mobile laser scanning (MLS) data acquired from different epochs provides accurate 3D geometry for change detection, but is very expensive for periodical acquisition. This paper proposes a new method for change detection at street level by using a combination of MLS point clouds and terrestrial images: the accurate but expensive MLS data acquired from an early epoch serves as the reference, and terrestrial images or photogrammetric images captured from an image-based mobile mapping system (MMS) at a later epoch are used to detect the geometrical changes between different epochs. The method will automatically mark the possible changes in each view, which provides a cost-efficient method for frequent data updating. The methodology is divided into several steps. In the first step, the point clouds are recorded by the MLS system and processed, with data cleaned and classified by semi-automatic means. In the second step, terrestrial images or mobile mapping images at a later epoch are taken and registered to the point cloud, and the point clouds are then projected onto each image by a weighted-window-based z-buffering method for view-dependent 2D triangulation. 
In the next step, stereo pairs of the terrestrial images are rectified and re-projected between each other to check the geometrical consistency between point clouds and stereo images. Finally, an over-segmentation based graph cut optimization is carried out, taking into account the color, depth and class information to compute the changed area in the image space. The proposed method is invariant to light changes, robust to small co-registration errors between images and point clouds, and can be applied straightforwardly to 3D polyhedral models. This method can be used for 3D street data updating, city infrastructure management and damage monitoring in complex urban scenes.
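
The projection of a point cloud onto an image with z-buffering can be sketched with a plain pinhole model; this omits the paper's weighted-window variant and view-dependent triangulation, and all parameter values are illustrative:

```python
import numpy as np

def zbuffer_project(points, f, width, height):
    """Project camera-frame points (x right, y down, z forward) with a pinhole
    model of focal length f (pixels); keep only the closest point per pixel.
    Returns the per-pixel depth buffer and the index of the winning point."""
    depth = np.full((height, width), np.inf)
    index = np.full((height, width), -1)
    for i, (x, y, z) in enumerate(points):
        if z <= 0:
            continue  # behind the camera
        u = int(round(f * x / z + width / 2))
        v = int(round(f * y / z + height / 2))
        if 0 <= u < width and 0 <= v < height and z < depth[v, u]:
            depth[v, u] = z
            index[v, u] = i
    return depth, index
```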

  20. Methylated site display (MSD)-AFLP, a sensitive and affordable method for analysis of CpG methylation profiles.

    PubMed

    Aiba, Toshiki; Saito, Toshiyuki; Hayashi, Akiko; Sato, Shinji; Yunokawa, Harunobu; Maruyama, Toru; Fujibuchi, Wataru; Kurita, Hisaka; Tohyama, Chiharu; Ohsako, Seiichiroh

    2017-03-09

    It has been pointed out that environmental factors or chemicals can cause diseases that are developmental in origin. To detect abnormal epigenetic alterations in DNA methylation, convenient and cost-effective methods are required for such research, in which multiple samples are processed simultaneously. We here present methylated site display (MSD), a unique technique for the preparation of DNA libraries. By combining it with amplified fragment length polymorphism (AFLP) analysis, we developed a new method, MSD-AFLP. Methylated site display libraries consist only of DNAs derived from DNA fragments that are CpG methylated at the 5' end in the original genomic DNA sample. To test the effectiveness of this method, CpG methylation levels in liver, kidney, and hippocampal tissues of mice were compared to examine if MSD-AFLP can detect subtle differences in the levels of tissue-specific differentially methylated CpGs. As a result, many CpG sites suspected to be tissue-specific differentially methylated were detected. Nucleotide sequences adjacent to these methyl-CpG sites were identified, and we determined the methylation level by methylation-sensitive restriction endonuclease (MSRE)-PCR analysis to confirm the accuracy of the AFLP analysis. The differences in methylation level among tissues were almost identical between these methods. By MSD-AFLP analysis, we detected many CpGs showing less than 5% statistically significant tissue-specific difference and less than 10% degree of variability. Additionally, MSD-AFLP analysis could be used to identify CpG methylation sites in other organisms including humans. MSD-AFLP analysis can potentially be used to measure slight changes in CpG methylation level. Given the remarkable precision, sensitivity, and throughput of MSD-AFLP analysis, this method will be advantageous in a variety of epigenetics-based research.

  1. A building extraction approach for Airborne Laser Scanner data utilizing the Object Based Image Analysis paradigm

    NASA Astrophysics Data System (ADS)

    Tomljenovic, Ivan; Tiede, Dirk; Blaschke, Thomas

    2016-10-01

    In the past two decades Object-Based Image Analysis (OBIA) established itself as an efficient approach for the classification and extraction of information from remote sensing imagery and, increasingly, from non-image based sources such as Airborne Laser Scanner (ALS) point clouds. ALS data is represented in the form of a point cloud with recorded multiple returns and intensities. In our work, we combined OBIA with ALS point cloud data in order to identify and extract buildings as 2D polygons representing roof outlines in a top-down mapping approach. We performed rasterization of the ALS data into a height raster for the purpose of generating a Digital Surface Model (DSM) and a derived Digital Elevation Model (DEM). Further objects were generated in conjunction with point statistics from the linked point cloud. With the use of class modelling methods, we generated the final target class of objects representing buildings. The approach was developed for a test area in Biberach an der Riß (Germany). In order to point out the possibilities of adaptation-free transferability to another data set, the algorithm has been applied "as is" to the ISPRS Benchmarking data set of Toronto (Canada). The obtained results show high accuracies for the initial study area (thematic accuracies of around 98%, geometric accuracy of above 80%). The very high performance within the ISPRS Benchmark without any modification of the algorithm and without any adaptation of parameters is particularly noteworthy.

  2. Automatic Road Sign Inventory Using Mobile Mapping Systems

    NASA Astrophysics Data System (ADS)

    Soilán, M.; Riveiro, B.; Martínez-Sánchez, J.; Arias, P.

    2016-06-01

    The periodic inspection of certain infrastructure features plays a key role in road network safety and preservation, and in developing optimal maintenance planning that minimizes the life-cycle cost of the inspected features. Mobile Mapping Systems (MMS) use laser scanner technology in order to collect dense and precise three-dimensional point clouds that gather both geometric and radiometric information of the road network. Furthermore, time-stamped RGB imagery that is synchronized with the MMS trajectory is also available. In this paper a methodology for the automatic detection and classification of road signs from point cloud and imagery data provided by a LYNX Mobile Mapper System is presented. First, road signs are detected in the point cloud. Subsequently, the inventory is enriched with geometrical and contextual data such as orientation or distance to the trajectory. Finally, semantic content is given to the detected road signs. As the point cloud resolution is insufficient for this task, RGB imagery is used, projecting the 3D points onto the corresponding images and analysing the RGB data within the bounding box defined by the projected points. The methodology was tested in urban and road environments in Spain, obtaining global recall results greater than 95% and F-scores greater than 90%. In this way, inventory data is obtained in a fast, reliable manner, and it can be applied to improve the maintenance planning of the road network, or to feed a Spatial Information System (SIS), so that road sign information is available for use in a Smart City context.

  3. Comparison of the different approaches to generate holograms from data acquired with a Kinect sensor

    NASA Astrophysics Data System (ADS)

    Kang, Ji-Hoon; Leportier, Thibault; Ju, Byeong-Kwon; Song, Jin Dong; Lee, Kwang-Hoon; Park, Min-Chul

    2017-05-01

    Data of real scenes acquired in real time with a Kinect sensor can be processed with different approaches to generate a hologram. 3D models can be generated from a point cloud or a mesh representation. The advantage of the point cloud approach is that the computation process is well established, since it involves only diffraction and propagation of point sources between parallel planes. On the other hand, the mesh representation makes it possible to reduce the number of elements necessary to represent the object. Then, even though the computation time for the contribution of a single element increases compared to a simple point, the total computation time can be reduced significantly. However, the algorithm is more complex, since propagation of elemental polygons between non-parallel planes has to be implemented. Finally, since a depth map of the scene is acquired at the same time as the intensity image, a depth-layer approach can also be adopted. This technique is appropriate for fast computation, since propagation of an optical wavefront from one plane to another can be handled efficiently with the fast Fourier transform. Fast computation with the depth-layer approach is convenient for real-time applications, but the point cloud method is more appropriate when high resolution is needed. In this study, since the Kinect can be used to obtain both a point cloud and a depth map, we examine the different approaches that can be adopted for hologram computation and compare their performance.
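
The plane-to-plane FFT propagation underlying the depth-layer approach can be sketched with the standard angular-spectrum method; the sampling values in the test are illustrative, and evanescent components are simply suppressed:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field (n x n, sample pitch dx) over distance z
    between parallel planes using the angular-spectrum method (two FFTs)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # evanescent waves dropped
    H = np.exp(1j * kz * z)                            # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Summing the propagated contributions of each depth layer at the hologram plane then yields the final wavefront.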

  4. Ensemble of shape functions and support vector machines for the estimation of discrete arm muscle activation from external biceps 3D point clouds.

    PubMed

    Abraham, Leandro; Bromberg, Facundo; Forradellas, Raymundo

    2018-04-01

    Muscle activation level is currently being captured using impractical and expensive devices which make their use in telemedicine settings extremely difficult. To address this issue, a prototype is presented of a non-invasive, easy-to-install system for the estimation of a discrete level of muscle activation of the biceps muscle from 3D point clouds captured with RGB-D cameras. A methodology is proposed that uses the ensemble of shape functions point cloud descriptor for the geometric characterization of 3D point clouds, together with support vector machines to learn a classifier that, based on this geometric characterization for some points of view of the biceps, provides a model for the estimation of muscle activation for all neighboring points of view. This results in a classifier that is robust to small perturbations in the point of view of the capturing device, greatly simplifying the installation process for end-users. In the discrimination of five levels of effort with values up to the maximum voluntary contraction (MVC) of the biceps muscle (3800 g), the best variant of the proposed methodology achieved mean absolute errors of about 9.21% MVC - an acceptable performance for telemedicine settings where the electric measurement of muscle activation is impractical. The results prove that the correlations between the external geometry of the arm and biceps muscle activation are strong enough to consider computer vision and supervised learning an alternative with great potential for practical applications in tele-physiotherapy. Copyright © 2018 Elsevier Ltd. All rights reserved.

  5. Retrieval of effective cloud field parameters from radiometric data

    NASA Astrophysics Data System (ADS)

    Paulescu, Marius; Badescu, Viorel; Brabec, Marek

    2017-06-01

    Clouds play a key role in establishing the Earth's climate. Real cloud fields are very different and very complex in both morphological and microphysical senses. Consequently, the numerical description of the cloud field is a critical task for accurate climate modeling. This study explores the feasibility of retrieving the effective cloud field parameters (namely the cloud aspect ratio and cloud factor) from systematic radiometric measurements at high frequency (measurement is taken every 15 s). Two different procedures are proposed, evaluated, and discussed with respect to both physical and numerical restrictions. None of the procedures is classified as best; therefore, the specific advantages and weaknesses are discussed. It is shown that the relationship between the cloud shade and point cloudiness computed using the estimated cloud field parameters recovers the typical relationship derived from measurements.

  6. Social Attitudes toward Cerebral Palsy and Potential Uses in Medical Education Based on the Analysis of Motion Pictures.

    PubMed

    Jóźwiak, Marek; Chen, Brian Po-Jung; Musielak, Bartosz; Fabiszak, Jacek; Grzegorzewski, Andrzej

    2015-01-01

    This study presents how motion pictures illustrate a person with cerebral palsy (CP), the social impact from the media, and the possibility of cerebral palsy education by using motion pictures. 937 motion pictures were reviewed in this study. With the criteria of nondocumentary movies, possibility of disability classification, and availability, the total number of motion pictures about CP was reduced to 34. The geographical distribution of the number of movies produced is as follows: North America 12, Europe 11, India 2, East Asia 6, and Australia 3. The CP incidences of different motor types in the real world and in the movies, respectively, are 78-86%, 65% (spastic); 1.5-6%, 9% (dyskinetic); 6.5-9%, 26% (mixed); 3%, 0% (ataxic); 3-4%, 0% (hypotonic). The CP incidences of different Gross Motor Function Classification System (GMFCS) levels in the real world and in the movies, respectively, are 40-51%, 47% (Level I + II); 14-19%, 12% (Level III); 34-41%, 41% (Level IV + V). The incidences in the movies match those in the real world surprisingly well. Motion pictures honestly reflect the general public's point of view of CP patients in the real world. With precise selection and medical professional explanations, motion pictures can play a suitable role in making CP understood more clearly.

  7. Spontaneous CP breaking in QCD and the axion potential: an effective Lagrangian approach

    NASA Astrophysics Data System (ADS)

    Di Vecchia, Paolo; Rossi, Giancarlo; Veneziano, Gabriele; Yankielowicz, Shimon

    2017-12-01

    Using the well-known low-energy effective Lagrangian of QCD — valid for small (non-vanishing) quark masses and a large number of colors — we study in detail the regions of parameter space where CP is spontaneously broken/unbroken for a vacuum angle θ = π. In the CP-broken region there are first order phase transitions as one crosses θ = π, while on the (hyper)surface separating the two regions there are second order phase transitions, signalled by the vanishing of the mass of a pseudo Nambu-Goldstone boson and by a divergent QCD topological susceptibility. The second order point sits at the end of a first order line associated with the spontaneous breaking of CP, in the appropriate complex parameter plane. When the effective Lagrangian is extended by the inclusion of an axion, these features of QCD imply that standard calculations of the axion potential have to be revised if the QCD parameters fall in the above-mentioned CP-broken region, in spite of the fact that the axion solves the strong-CP problem. These last results could be of interest for axionic dark matter calculations if the topological susceptibility of pure Yang-Mills theory falls off sufficiently fast as the temperature is increased towards the QCD deconfining transition.

  8. MLS data segmentation using Point Cloud Library procedures. (Polish Title: Segmentacja danych MLS z użyciem procedur Point Cloud Library)

    NASA Astrophysics Data System (ADS)

    Grochocka, M.

    2013-12-01

    Mobile laser scanning (MLS) is a dynamically developing measurement technology that is becoming increasingly widespread for acquiring three-dimensional spatial information. Continuous technical progress based on new tools and technology development, and thus better use of existing resources, reveals new horizons for the extensive use of MLS technology. Mobile laser scanning systems are usually used for mapping linear objects, in particular for inventories of roads, railways, bridges, shorelines, shafts, tunnels, and even geometrically complex urban spaces. The measurement is made from the perspective of the object's use and does not interfere with movement or ongoing work. This paper presents initial results of the segmentation of data acquired by MLS. The data used in this work were obtained as part of an inventory measurement of railway line infrastructure. The point clouds were measured using profile scanners installed on a railway platform. The open-source Point Cloud Library (PCL) was used to process the data. PCL is an open, independent, large-scale project for 2D/3D image and point cloud processing, provided as a set of C++ template libraries. It is released under the terms of the BSD license (Berkeley Software Distribution License), which makes it free for commercial and research use. The article presents a number of issues related to the use of this software and its capabilities. The segmentation of the data is based on the pcl_segmentation library, which contains algorithms for separating clusters. These algorithms are best suited to processing point clouds consisting of a number of spatially isolated regions. The library performs cluster extraction based on model fitting with the sample consensus method for various parametric models (planes, cylinders, spheres, lines, etc.). Most of the mathematical operations are carried out with the Eigen library, a set of templates for linear algebra.
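    The Euclidean cluster extraction idea behind pcl_segmentation can be sketched independently of PCL as breadth-first region growing over a distance threshold. Below is a minimal Python sketch; PCL itself is C++ and uses a k-d tree for the neighbor queries, and the tolerance and sample points here are illustrative assumptions:

```python
import numpy as np

def euclidean_clusters(points, tolerance=0.5, min_size=2):
    """Group points into spatially isolated clusters: two points share a
    cluster if a chain of neighbors closer than `tolerance` links them.
    Brute-force neighbor search for clarity (PCL uses a k-d tree)."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            idx = frontier.pop()
            rest = list(unvisited)
            if not rest:
                continue
            dists = np.linalg.norm(points[rest] - points[idx], axis=1)
            near = [p for p, d in zip(rest, dists) if d <= tolerance]
            unvisited.difference_update(near)
            cluster.extend(near)
            frontier.extend(near)
        if len(cluster) >= min_size:
            clusters.append(sorted(cluster))
    return clusters

# Two spatially isolated groups: indices 0-2 and 3-4.
pts = np.array([[0.0, 0.0, 0.0], [0.3, 0.0, 0.0], [0.1, 0.2, 0.0],
                [5.0, 5.0, 0.0], [5.2, 5.1, 0.0]])
print(euclidean_clusters(pts))  # two clusters
```

    The RANSAC-style model fitting mentioned in the abstract is a separate pcl_segmentation facility (sample-consensus plane, cylinder, or sphere models) that can be combined with this clustering.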

  9. Looking for Off-Fault Deformation and Measuring Strain Accumulation During the Past 70 years on a Portion of the Locked San Andreas Fault

    NASA Astrophysics Data System (ADS)

    Vadman, M.; Bemis, S. P.

    2017-12-01

    Even at high tectonic rates, detection of possible off-fault plastic/aseismic deformation and variability in far-field strain accumulation requires high spatial resolution data and likely decades of measurements. Due to the influence that variability in interseismic deformation could have on the timing, size, and location of future earthquakes and the calculation of modern geodetic estimates of strain, we attempt to use historical aerial photographs to constrain deformation through time across a locked fault. Modern photo-based 3D reconstruction techniques facilitate the creation of dense point clouds from historical aerial photograph collections. We use these tools to generate a time series of high-resolution point clouds that span 10-20 km across the Carrizo Plain segment of the San Andreas fault. We chose this location due to the high tectonic rates along the San Andreas fault and lack of vegetation, which may obscure tectonic signals. We use ground control points collected with differential GPS to establish scale and georeference the aerial photograph-derived point clouds. With a locked fault assumption, point clouds can be co-registered (to one another and/or the 1.7 km wide B4 airborne lidar dataset) along the fault trace to calculate relative displacements away from the fault. We use CloudCompare to compute 3D surface displacements, which reflect the interseismic strain accumulation that occurred in the time interval between photo collections. As expected, we do not observe clear surface displacements along the primary fault trace in our comparisons of the B4 lidar data against the aerial photograph-derived point clouds. However, there may be small scale variations within the lidar swath area that represent near-fault plastic deformation. 
With large-scale historical photographs available for the Carrizo Plain extending back to at least the 1940s, we can potentially sample nearly half the interseismic period since the last major earthquake on this portion of this fault (1857). Where sufficient aerial photograph coverage is available, this approach has the potential to illuminate complex fault zone processes for this and other major strike-slip faults.

  10. Big Geo Data Services: From More Bytes to More Barrels

    NASA Astrophysics Data System (ADS)

    Misev, Dimitar; Baumann, Peter

    2016-04-01

    The data deluge is affecting the oil and gas industry just as much as many other industries. Aside from sheer volume, however, there is the challenge of data variety: regular and irregular grids, multi-dimensional space/time grids, point clouds, and TINs and other meshes. A uniform conceptualization for modelling and serving them could save substantial effort, such as the proverbial "department of reformatting". The notion of a coverage can actually accomplish this. Its abstract model in ISO 19123, together with the concrete, interoperable OGC Coverage Implementation Schema (CIS), currently under adoption as ISO 19123-2, provides a common platform for representing any n-D grid type, point clouds, and general meshes. This is paired with the OGC Web Coverage Service (WCS) and its datacube analytics language, the OGC Web Coverage Processing Service (WCPS). The OGC WCS Core Reference Implementation, rasdaman, relies on Array Database technology, i.e., a NewSQL/NoSQL approach. It supports the grid part of coverages, with installations of 100+ TB known and single queries parallelized across 1,000+ cloud nodes. Recent research attempts to address the point cloud and mesh parts through a unified query model; the envisioned Holy Grail is that these approaches can eventually be merged into a single service interface. We present both the grid and point cloud/mesh approaches and discuss status, implementation, standardization, and research perspectives, including a live demo.

  11. Stochastic Surface Mesh Reconstruction

    NASA Astrophysics Data System (ADS)

    Ozendi, M.; Akca, D.; Topan, H.

    2018-05-01

    A generic and practical methodology is presented for 3D surface mesh reconstruction from terrestrial laser scanner (TLS) derived point clouds. It has two main steps. The first step develops an anisotropic point error model capable of computing the theoretical precision of the 3D coordinates of each individual point in the point cloud. The magnitude and direction of the errors are represented in the form of error ellipsoids. The second step focuses on the stochastic surface mesh reconstruction. It exploits the previously determined error ellipsoids by computing a point-wise quality measure that takes into account the semi-axis lengths of the error ellipsoid. Only the points with the smallest errors are used in the surface triangulation; the remaining ones are automatically discarded.
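    A point-wise quality measure of this kind can be sketched as follows: each point's 3×3 coordinate covariance is decomposed into eigenvalues, the error ellipsoid's semi-axes follow as square roots of those eigenvalues, and only points whose largest semi-axis is below a threshold enter the triangulation. The covariances and threshold below are illustrative assumptions, not the authors' actual error model:

```python
import numpy as np

def ellipsoid_semi_axes(cov, k=1.0):
    """Semi-axis lengths of a point's error ellipsoid: k * sqrt of the
    eigenvalues of its 3x3 coordinate covariance matrix."""
    return k * np.sqrt(np.clip(np.linalg.eigvalsh(cov), 0.0, None))

def filter_by_precision(points, covariances, max_semi_axis):
    """Keep only points whose largest error-ellipsoid semi-axis is below
    `max_semi_axis`; these are the points admitted to the triangulation."""
    keep = [i for i, c in enumerate(covariances)
            if ellipsoid_semi_axes(c).max() <= max_semi_axis]
    return points[keep]

pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
covs = [np.diag([1e-6, 1e-6, 4e-6]),   # precise point (mm-level errors)
        np.diag([1e-2, 1e-2, 1e-2]),   # noisy point (cm-level errors)
        np.diag([4e-6, 1e-6, 1e-6])]   # precise point
print(filter_by_precision(pts, covs, max_semi_axis=0.01))  # keeps points 0 and 2
```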

  12. Automatic extraction of pavement markings on streets from point cloud data of mobile LiDAR

    NASA Astrophysics Data System (ADS)

    Gao, Yang; Zhong, Ruofei; Tang, Tao; Wang, Liuzhao; Liu, Xianlin

    2017-08-01

    Pavement markings provide an important foundation as they help to keep road users safe. Accurate and comprehensive information about pavement markings assists road regulators and is useful in developing driverless technology. Mobile light detection and ranging (LiDAR) systems offer new opportunities to collect and process accurate pavement-marking information. Mobile LiDAR systems can directly obtain the three-dimensional (3D) coordinates and intensity of objects in a fast and efficient way, and the RGB attribute information of data points can be obtained from the panoramic camera in the system. In this paper, we present a novel method to automatically extract pavement markings using multiple attributes of the laser scanning point cloud from mobile LiDAR data. The method uses the differential grayscale of the RGB color, the laser pulse reflection intensity, and the differential intensity to identify and extract pavement markings. We use point cloud density to remove noise and morphological operations to eliminate errors. In the application, we tested our method on different road sections in Beijing, China, and Buffalo, NY, USA. The results indicated that both correctness (p) and completeness (r) were higher than 90%. The method can be applied to extract pavement markings from the huge point clouds produced by mobile LiDAR.
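    The multi-attribute idea can be sketched as joint thresholding of laser intensity and RGB grayscale, followed by a point-density filter that drops isolated noise hits. The thresholds, radii, and sample data below are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

def extract_markings(xyz, intensity, gray,
                     intensity_thr=0.7, gray_thr=0.8,
                     density_radius=0.3, min_neighbors=3):
    """Candidate pavement-marking points: high laser-return intensity AND
    bright RGB grayscale, followed by a point-density filter that drops
    isolated (noise) hits. Brute-force neighbor counting for clarity."""
    cand = np.where((intensity >= intensity_thr) & (gray >= gray_thr))[0]
    pts = xyz[cand]
    keep = []
    for i, p in enumerate(pts):
        neighbors = np.sum(np.linalg.norm(pts - p, axis=1) <= density_radius) - 1
        if neighbors >= min_neighbors:
            keep.append(cand[i])
    return np.array(keep)

# Five clustered bright returns, one isolated bright return, one dark return.
xyz = np.array([[0.00, 0.00, 0], [0.10, 0.00, 0], [0.00, 0.10, 0],
                [0.10, 0.10, 0], [0.05, 0.05, 0], [5.00, 5.00, 0], [2.0, 2.0, 0]])
intensity = np.array([0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.1])
gray      = np.array([0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.1])
print(extract_markings(xyz, intensity, gray))  # [0 1 2 3 4]
```

    The morphological clean-up step of the paper would follow on a rasterized version of the kept points; it is omitted here.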

  13. A novel point cloud registration using 2D image features

    NASA Astrophysics Data System (ADS)

    Lin, Chien-Chou; Tai, Yen-Chou; Lee, Jhong-Jin; Chen, Yong-Sheng

    2017-01-01

    Since a 3D scanner captures only one scene of a 3D object at a time, 3D registration of multiple scenes is the key issue in 3D modeling. This paper presents a novel and efficient 3D registration method based on 2D local feature matching. The proposed method transforms the point clouds into 2D bearing-angle images and then uses the 2D feature-based matching method SURF to find matching pixel pairs between two images. The corresponding points of the 3D point clouds can be obtained from those pixel pairs. Since the corresponding pairs are sorted by the distance between their matching features, only the top half of the corresponding pairs are used to find the optimal rotation matrix by least-squares approximation. In this paper, the optimal rotation matrix is derived by the orthogonal Procrustes method (an SVD-based approach). The 3D model of an object can therefore be reconstructed by aligning the point clouds with the optimal transformation matrix. Experimental results show that the accuracy of the proposed method is close to that of ICP, but the computation cost is reduced significantly; the performance is six times faster than the generalized-ICP algorithm. Furthermore, while ICP requires high alignment similarity between two scenes, the proposed method is robust to larger differences in viewing angle.
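    The orthogonal Procrustes step, recovering the least-squares rotation from matched 3D point pairs via SVD, can be sketched generically as below. This is the standard Kabsch-style solution, not the authors' code, and the synthetic matched pairs are illustrative:

```python
import numpy as np

def procrustes_rotation(src, dst):
    """Rotation R and translation t minimizing ||R @ src_i + t - dst_i||^2
    over matched point pairs, via SVD of the cross-covariance (Kabsch)."""
    src_c, dst_c = src - src.mean(axis=0), dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    R = (U @ np.diag([1.0, 1.0, d]) @ Vt).T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Recover a known 30-degree rotation about z and a translation.
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
src = np.random.default_rng(0).normal(size=(10, 3))
dst = src @ R_true.T + np.array([1.0, 2.0, 3.0])
R, t = procrustes_rotation(src, dst)
print(np.allclose(R, R_true), np.allclose(t, [1.0, 2.0, 3.0]))  # True True
```

    In the paper's pipeline, `src`/`dst` would be the 3D points recovered from the top half of the SURF pixel-pair matches.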

  14. Drogue tracking using 3D flash lidar for autonomous aerial refueling

    NASA Astrophysics Data System (ADS)

    Chen, Chao-I.; Stettner, Roger

    2011-06-01

    Autonomous aerial refueling (AAR) is an important capability for an unmanned aerial vehicle (UAV) to increase its flying range and endurance without increasing its size. This paper presents a novel tracking method that utilizes both 2D intensity and 3D point-cloud data acquired with a 3D Flash LIDAR sensor to establish the relative position and orientation between the receiver vehicle and the drogue during the aerial refueling process. Unlike classic vision-based sensors, a 3D Flash LIDAR sensor can provide 3D point-cloud data in real time without motion blur, day or night, and is capable of imaging through fog and clouds. The proposed method segments out the drogue through 2D analysis and estimates the center of the drogue from 3D point-cloud data for flight trajectory determination. A level-set front propagation routine is first employed to identify the target of interest and establish its silhouette information. Domain knowledge, such as the size of the drogue and the expected operating distance, is integrated into our approach to quickly eliminate unlikely target candidates. A statistical analysis along with random sample consensus (RANSAC) is performed on the target to reduce noise and estimate the center of the drogue after all 3D points on the drogue are identified. The estimated center and drogue silhouette serve as the seed points to efficiently locate the target in the next frame.
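    A RANSAC-based center estimate of this kind can be sketched with a sphere model as a simplified stand-in for the drogue geometry. The model choice, inlier tolerance, and synthetic data below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def sphere_through(p4):
    """Sphere through 4 points: solve the linearized sphere equation
    x^2 + y^2 + z^2 + D*x + E*y + F*z + G = 0 for (D, E, F, G)."""
    A = np.hstack([p4, np.ones((4, 1))])
    b = -np.sum(p4 ** 2, axis=1)
    D, E, F, G = np.linalg.solve(A, b)
    center = -0.5 * np.array([D, E, F])
    radius = np.sqrt(max(center @ center - G, 0.0))
    return center, radius

def ransac_sphere(points, tol=0.01, n_iter=200, seed=0):
    """RANSAC: repeatedly fit a sphere to 4 random points, keep the model
    with most inliers (|distance-to-center - radius| < tol)."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, -1
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 4, replace=False)]
        try:
            c, r = sphere_through(sample)
        except np.linalg.LinAlgError:   # degenerate (coplanar) sample
            continue
        inliers = np.sum(np.abs(np.linalg.norm(points - c, axis=1) - r) < tol)
        if inliers > best_inliers:
            best, best_inliers = (c, r), inliers
    return best

rng = np.random.default_rng(1)
true_center = np.array([1.0, 2.0, 3.0])
dirs = rng.normal(size=(40, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
points = np.vstack([true_center + 2.0 * dirs,            # on-target returns
                    rng.uniform(-5, 5, size=(10, 3))])   # clutter/outliers
center, radius = ransac_sphere(points)
print(np.allclose(center, true_center, atol=1e-5))  # True
```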

  15. Thermodynamic and cloud parameter retrieval using infrared spectral data

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Smith, William L., Sr.; Liu, Xu; Larar, Allen M.; Huang, Hung-Lung A.; Li, Jun; McGill, Matthew J.; Mango, Stephen A.

    2005-01-01

    High-resolution infrared radiance spectra obtained from near nadir observations provide atmospheric, surface, and cloud property information. A fast radiative transfer model, including cloud effects, is used for atmospheric profile and cloud parameter retrieval. The retrieval algorithm is presented along with its application to recent field experiment data from the NPOESS Airborne Sounding Testbed - Interferometer (NAST-I). The retrieval accuracy dependence on cloud properties is discussed. It is shown that relatively accurate temperature and moisture retrievals can be achieved below optically thin clouds. For optically thick clouds, accurate temperature and moisture profiles down to cloud top level are obtained. For both optically thin and thick cloud situations, the cloud top height can be retrieved with an accuracy of approximately 1.0 km. Preliminary NAST-I retrieval results from the recent Atlantic-THORPEX Regional Campaign (ATReC) are presented and compared with coincident observations obtained from dropsondes and the nadir-pointing Cloud Physics Lidar (CPL).

  16. Modeling the expenditure and reconstitution of work capacity above critical power.

    PubMed

    Skiba, Philip Friere; Chidnok, Weerapong; Vanhatalo, Anni; Jones, Andrew M

    2012-08-01

    The critical power (CP) model includes two constants: the CP and the W' [P = (W'/t) + CP]. The W' is the finite work capacity available above CP. Power output above CP results in depletion of the W'; complete depletion of the W' results in exhaustion. Monitoring the W' may be valuable to athletes during training and competition. Our purpose was to develop a function describing the dynamic state of the W' during intermittent exercise. After determination of VO2max, CP, and W', seven subjects completed four separate exercise tests on a cycle ergometer on different days. Each protocol comprised a set of intervals: 60 s at a severe power output, followed by 30 s of recovery at a lower prescribed power output. The intervals were repeated until exhaustion. These data were entered into a continuous equation predicting the balance of W' remaining, assuming exponential reconstitution of the W'. The time constant was varied by an iterative process until the remaining modeled W' = 0 at the point of exhaustion. The time constants of W' recharge were negatively correlated with the difference between the sub-CP recovery power and CP. The relationship was best fit by an exponential (r = 0.77). The model-predicted W' balance correlated with the temporal course of the rise in VO2 (r = 0.82-0.96). The model accurately predicted exhaustion of the W' in a competitive cyclist during a road race. We have developed a function to track the dynamic state of the W' during intermittent exercise. This may have important implications for the planning and real-time monitoring of athletic performance.
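    The W' bookkeeping can be sketched with a commonly used discrete approximation of the balance model: linear depletion by (P − CP) above CP, and exponential reconstitution toward the full W' below it. The fixed τ below is an illustrative placeholder; in the study, τ is fitted iteratively per trial and depends on the sub-CP recovery power:

```python
import math

def w_prime_balance(power, cp, w_prime, dt=1.0, tau=386.0):
    """Track the balance of W' (joules) during intermittent exercise:
    linear depletion by (P - CP) above CP, exponential reconstitution
    toward the full W' below CP with time constant tau (seconds)."""
    bal, history = w_prime, []
    for p in power:
        if p >= cp:
            bal -= (p - cp) * dt                              # expenditure
        else:
            bal = w_prime - (w_prime - bal) * math.exp(-dt / tau)
        history.append(bal)
    return history

cp, w_prime = 250.0, 20000.0            # watts, joules (illustrative values)
protocol = [400.0] * 60 + [100.0] * 30  # 60 s severe work, 30 s recovery
hist = w_prime_balance(protocol, cp, w_prime)
print(round(hist[59]))   # 11000 J left after the 60 s work interval
print(hist[-1] > hist[59])  # True: W' is partially reconstituted in recovery
```

    Exhaustion is predicted at the first time step where the balance reaches zero.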

  17. Cloud condensation nuclei near marine cumulus

    NASA Technical Reports Server (NTRS)

    Hudson, James G.

    1993-01-01

    Extensive airborne measurements of cloud condensation nucleus (CCN) spectra and condensation nuclei below, in, between, and above the cumulus clouds near Hawaii point to important aerosol-cloud interactions. Consistent particle concentrations of 200/cu cm were found above the marine boundary layer and within the noncloudy marine boundary layer. Lower and more variable CCN concentrations within the cloudy boundary layer, especially very close to the clouds, appear to be a result of cloud scavenging processes. Gravitational coagulation of cloud droplets may be the principal cause of this difference in the vertical distribution of CCN. The results suggest a reservoir of CCN in the free troposphere which can act as a source for the marine boundary layer.

  18. Analysis, Thematic Maps and Data Mining from Point Cloud to Ontology for Software Development

    NASA Astrophysics Data System (ADS)

    Nespeca, R.; De Luca, L.

    2016-06-01

    The primary purpose of the survey for the restoration of Cultural Heritage is the interpretation of the state of preservation of the building. The advantages of remote sensing systems that generate dense point clouds (range-based or image-based) are thus not limited to the acquired data alone. The paper shows that it is possible to extract very useful diagnostic information using spatial annotation, with algorithms already implemented in open-source software. Generally, the drawing of degradation maps is the result of manual work, and thus dependent on the subjectivity of the operator. This paper describes a method for extracting and visualizing information obtained by quantitative, repeatable, and verifiable mathematical procedures. The case study is a part of the east facade of the Eglise collégiale Saint-Maurice, also called Notre Dame des Grâces, in Caromb, in southern France. The work was conducted on the matrix of information contained in the point cloud in ASCII format. The first result is the extraction of new geometric descriptors. First, we create digital maps of the calculated quantities. Subsequently, we move to semi-quantitative analyses that transform the new data into useful information. We have written algorithms for accurate selection, for the segmentation of the point cloud, and for the automatic calculation of real surface area and volume. Furthermore, we have created graphs of the spatial distribution of the descriptors. This work shows that by working during the data processing stage we can transform the point cloud into an enriched database whose use, management, and data mining are easy, fast, and effective for everyone involved in the restoration process.

  19. Combining structure-from-motion derived point clouds from satellites and unmanned aircraft systems images with ground-truth data to create high-resolution digital elevation models

    NASA Astrophysics Data System (ADS)

    Palaseanu, M.; Thatcher, C.; Danielson, J.; Gesch, D. B.; Poppenga, S.; Kottermair, M.; Jalandoni, A.; Carlson, E.

    2016-12-01

    Coastal topographic and bathymetric (topobathymetric) data with high spatial resolution (1-meter or better) and high vertical accuracy are needed to assess the vulnerability of Pacific Islands to climate change impacts, including sea level rise. According to the Intergovernmental Panel on Climate Change reports, low-lying atolls in the Pacific Ocean are extremely vulnerable to king tide events, storm surge, tsunamis, and sea-level rise. The lack of coastal topobathymetric data has been identified as a critical data gap for climate vulnerability and adaptation efforts in the Republic of the Marshall Islands (RMI). For Majuro Atoll, home to the largest city of the RMI, the only elevation dataset currently available is the Shuttle Radar Topography Mission data, which has a 30-meter spatial resolution and 16-meter vertical accuracy (expressed as linear error at 90%). To generate high-resolution digital elevation models (DEMs) in the RMI, elevation information and photographic imagery have been collected from field surveys using GNSS/total station and unmanned aerial vehicles for Structure-from-Motion (SfM) point cloud generation. DigitalGlobe WorldView-2 imagery was processed to create SfM point clouds to fill gaps in the point cloud derived from the higher-resolution UAS photos. The combined point cloud data is filtered, classified to bare earth, and georeferenced using the GNSS data acquired on roads and along survey transects perpendicular to the coast. A total station was used to collect elevation data under tree canopies where heavy vegetation cover blocked the view of GNSS satellites. A subset of the GNSS/total station data was set aside for error assessment of the resulting DEM.

  20. Outcrop-scale fracture trace identification using surface roughness derived from a high-density point cloud

    NASA Astrophysics Data System (ADS)

    Okyay, U.; Glennie, C. L.; Khan, S.

    2017-12-01

    Owing to the advent of terrestrial laser scanners (TLS), high-density point cloud data have become increasingly available to the geoscience research community. Research groups have started producing their own point clouds for various applications, gradually shifting their emphasis from obtaining the data towards extracting more meaningful information from them. Extracting fracture properties from three-dimensional data in a (semi-)automated manner has been an active area of research in the geosciences, and several studies have developed processing algorithms for extracting planar surfaces. In comparison, (semi-)automated identification of fracture traces at the outcrop scale, which could be used for mapping fracture distribution, has been investigated much less frequently. Understanding the spatial distribution and configuration of natural fractures is of particular importance, as they directly influence fluid flow through the host rock. Surface roughness, typically defined as the deviation of a natural surface from a reference datum, has become an important metric in geoscience research, especially with the increasing density and accuracy of point clouds. In the study presented herein, a surface roughness model was employed to identify fracture traces and their distribution on an ophiolite outcrop in Oman. Surface roughness calculations were performed using orthogonal distance regression over various grid intervals. The results demonstrated that surface roughness can identify outcrop-scale fracture traces from which fracture distribution and density maps can be generated. However, considering outcrop conditions and properties and the purpose of the application, the definition of an adequate grid interval for the surface roughness model and the selection of threshold values for the distribution maps are not straightforward and require user intervention and interpretation.
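    Roughness from orthogonal distance regression can be sketched per grid cell as a total-least-squares plane fit (via PCA/SVD, where the plane normal is the smallest-singular-value direction), with roughness taken as the RMS orthogonal residual. The synthetic smooth and fracture-like patches below are illustrative:

```python
import numpy as np

def roughness(points):
    """RMS orthogonal distance of points from their total-least-squares
    best-fit plane (orthogonal distance regression via PCA/SVD)."""
    centered = points - points.mean(axis=0)
    # The right singular vector of the smallest singular value is the
    # ODR plane normal.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    residuals = centered @ Vt[-1]       # signed orthogonal distances
    return np.sqrt(np.mean(residuals ** 2))

rng = np.random.default_rng(1)
flat = np.c_[rng.uniform(0, 1, (200, 2)), np.zeros(200)]              # smooth cell
rough = np.c_[rng.uniform(0, 1, (200, 2)), rng.normal(0, 0.05, 200)]  # fracture-like cell
print(roughness(flat) < roughness(rough))  # True
```

    Evaluating this per grid cell over the outcrop, and thresholding the result, yields the kind of fracture-trace map described above; the grid interval and threshold remain the user-chosen parameters the abstract cautions about.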

  1. A hierarchical methodology for urban facade parsing from TLS point clouds

    NASA Astrophysics Data System (ADS)

    Li, Zhuqiang; Zhang, Liqiang; Mathiopoulos, P. Takis; Liu, Fangyu; Zhang, Liang; Li, Shuaipeng; Liu, Hao

    2017-01-01

    The effective and automated parsing of building facades from terrestrial laser scanning (TLS) point clouds of urban environments is an important research topic in the GIS and remote sensing fields. It is also challenging because of the complexity and great variety of the available 3D building facade layouts, as well as the noise and missing data in the input TLS point clouds. In this paper, we introduce a novel methodology for the accurate and computationally efficient parsing of urban building facades from TLS point clouds. The main novelty of the proposed methodology is that it is a systematic and hierarchical approach that considers, in an adaptive way, the semantic and underlying structures of the urban facades for segmentation and subsequent accurate modeling. Firstly, the available input point cloud is decomposed into depth planes based on a data-driven method; such layer decomposition enables similarity detection in each depth-plane layer. Secondly, the labeling of the facade elements is performed using an SVM classifier in combination with our proposed BieS-ScSPM algorithm. The labeling outcome is then augmented with weak architectural knowledge. Thirdly, least-squares-fitted normalized gray accumulative curves are applied to detect regular structures, and a binarization dilation extraction algorithm is used to partition facade elements. A dynamic line-by-line division is further applied to extract the boundaries of the elements. The 3D geometrical facade models are then reconstructed by optimizing facade elements across depth-plane layers. We have evaluated the performance of the proposed method using several TLS facade datasets. Qualitative and quantitative performance comparisons with several other state-of-the-art methods dealing with the same facade parsing problem have demonstrated its superiority in performance and its effectiveness in improving segmentation accuracy.

  2. Anisotropy Changes of a Fluorescent Probe during the Micellar Growth and Clouding of a Nonionic Detergent.

    PubMed

    Komaromy-Hiller; von Wandruszka R

    1996-01-15

    The effects of temperature and Triton X-114 (TX-114) concentration on the fluorescence anisotropy of perylene were investigated before and after detergent clouding. The measured anisotropy values were used to estimate the microviscosity of the micellar interior. In the lower detergent concentration range, an anisotropy maximum was observed at the critical micelle concentration (CMC), while the values decreased in the range immediately above the CMC. This was ascribed to the increase in micellar volume, which, in the case of TX-114, was not accompanied by a more ordered internal environment. A gradual decrease of anisotropy and microviscosity with increasing temperature below the cloud point was observed; at the cloud point itself, no abrupt changes occurred. Compared to detergents with more flexible hydrophobic moieties, TX-114 micelles have a relatively ordered interior, as indicated by the microviscosity and calculated fusion energy values. In the separated micellar phase formed after clouding, the probe anisotropy increased as water was eliminated at higher temperatures.

  3. A new algorithm combining geostatistics with the surrogate data approach to increase the accuracy of comparisons of point radiation measurements with cloud measurements

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Lindau, R.; Varnai, T.; Simmer, C.

    2009-04-01

    Two main groups of statistical methods used in the Earth sciences are geostatistics and stochastic modelling. Geostatistical methods, such as various kriging algorithms, aim at estimating the mean value at every point as well as possible. In the case of sparse measurements, such fields have less variability at small scales and a narrower distribution than the true field. This can lead to biases if a nonlinear process is simulated on such a kriged field. Stochastic modelling aims at reproducing the structure of the data. One stochastic modelling method, the so-called surrogate data approach, replicates the value distribution and power spectrum of a given data set. However, while stochastic methods reproduce the statistical properties of the data, the location of the measurement is not considered. Because radiative transfer through clouds is a highly nonlinear process, it is essential to model the distribution (e.g. of optical depth, extinction, liquid water content or liquid water path) accurately, as well as the correlations in the cloud field due to horizontal photon transport. This explains the success of surrogate cloud fields in 3D radiative transfer studies. However, up to now we could only achieve good results for the radiative properties averaged over the field, not for a radiation measurement located at a certain position. We have therefore developed a new algorithm that combines the accuracy of stochastic (surrogate) modelling with the positioning capabilities of kriging. In this way, we can automatically profit from the large geostatistical literature and software. The algorithm is tested on cloud fields from large eddy simulations (LES), on which a measurement is simulated. From the pseudo-measurement we estimate the distribution and power spectrum. Furthermore, the pseudo-measurement is kriged to a field the size of the final surrogate cloud. The distribution, the spectrum, and the kriged field are the inputs to the algorithm. This algorithm is similar to the standard iterative amplitude adjusted Fourier transform (IAAFT) algorithm, but has an additional iterative step in which the surrogate field is nudged towards the kriged field; the nudging strength is gradually reduced to zero. We work with four types of pseudo-measurements: one zenith-pointing measurement (which together with the wind produces a line measurement), five zenith-pointing measurements, and a slow and a fast azimuth scan (which together with the wind produce spirals). Because we work with LES clouds and the truth is known, we can validate the algorithm by performing 3D radiative transfer calculations on the original LES clouds and on the new surrogate clouds. For comparison, the radiative properties of the kriged fields and of standard surrogate fields are also computed. Preliminary results already show that these new surrogate clouds reproduce the structure of the original clouds very well and that the minima and maxima are located where the pseudo-measurements see them. The main limitation seems to be the amount of data, which is especially limited in the case of just one zenith-pointing measurement.
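    The standard IAAFT core that the new algorithm extends can be sketched as alternating two projections: impose the target power spectrum (replace Fourier amplitudes, keep phases), then impose the target value distribution (rank-order remapping). The kriging-nudging step is omitted here; this is a generic 1D textbook sketch, not the authors' code:

```python
import numpy as np

def iaaft(target, n_iter=100, seed=0):
    """Surrogate series with exactly the value distribution of `target`
    and approximately its power spectrum (iterative amplitude adjusted
    Fourier transform)."""
    rng = np.random.default_rng(seed)
    sorted_vals = np.sort(target)
    amplitudes = np.abs(np.fft.rfft(target))
    x = rng.permutation(target)                  # start from a shuffled copy
    for _ in range(n_iter):
        # 1) impose the target power spectrum, keeping the current phases
        phases = np.exp(1j * np.angle(np.fft.rfft(x)))
        x = np.fft.irfft(amplitudes * phases, n=len(target))
        # 2) impose the target value distribution by rank-order remapping
        x = sorted_vals[np.argsort(np.argsort(x))]
    return x

y = (np.sin(np.linspace(0, 8 * np.pi, 256))
     + 0.1 * np.random.default_rng(2).normal(size=256))
s = iaaft(y)
print(np.allclose(np.sort(s), np.sort(y)))  # True: identical value distribution
```

    The paper's extension would insert, inside the loop, a nudge of `x` towards the kriged field with a weight that decays to zero over the iterations.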

  4. Some semiclassical structure constants for AdS 4 × CP 3

    NASA Astrophysics Data System (ADS)

    Ahn, Changrim; Bozhilov, Plamen

    2018-02-01

    We compute structure constants in three-point functions of three string states in AdS 4× CP 3 in the framework of the semiclassical approach. We consider HHL correlation functions where two of the states are "heavy" string states of finite-size giant magnons carrying one or two angular momenta and the other one corresponds to such "light" states as dilaton operators with non-zero momentum, primary scalar operators, and singlet scalar operators with higher string levels.

  5. Impact of systematic uncertainties for the CP violation measurement in superbeam experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meloni, Davide

    We present a three-flavour fit to the recent νμ → νe T2K oscillation data with different models for the neutrino-nucleus cross section. We show that, even with limited statistics, the allowed regions and best-fit points in the (θ13, δCP) plane are affected if, instead of using the Fermi gas model to describe the quasielastic cross section, we employ a model including the multinucleon emission channel [1].

  6. A continuous surface reconstruction method on point cloud captured from a 3D surface photogrammetry system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Wenyang; Cheung, Yam; Sabouri, Pouya

    2015-11-15

    Purpose: To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). Methods: The authors have developed a level-set based surface reconstruction method on point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. In contrast to explicit/discrete meshing-type schemes, this continuous representation is particularly advantageous for subsequent surface registration and motion tracking, eliminating the need for maintaining explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiments, the authors generated a series of surfaces, each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched areas had different degrees of noise and missing data, since VisionRT has difficulties in detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated the reconstructed surfaces by comparing them against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiments, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground truth, the authors qualitatively validated the reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient.
Results: On phantom point clouds, their method achieved submillimeter reconstruction RMSE under different configurations, demonstrating quantitatively the fidelity of the proposed method in preserving local structural properties of the underlying surface in the presence of noise and missing measurements, and its robustness toward variations of such characteristics. On point clouds from the human subject, the proposed method successfully reconstructed all patient surfaces, filling regions where raw point coordinate readings were missing. Within two comparable regions of interest in the chest area, similar mean curvature distributions were acquired from both their reconstructed surface and the CT surface, with mean and standard deviation of (μ_recon = −2.7 × 10^−3 mm^−1, σ_recon = 7.0 × 10^−3 mm^−1) and (μ_CT = −2.5 × 10^−3 mm^−1, σ_CT = 5.3 × 10^−3 mm^−1), respectively. The agreement of local geometry properties between the reconstructed surfaces and the CT surface demonstrated the ability of the proposed method to faithfully represent the underlying patient surface. Conclusions: The authors have developed an accurate level-set based continuous surface reconstruction method on point clouds acquired by a 3D surface photogrammetry system. The proposed method generated a continuous representation of the underlying phantom and patient surfaces with good robustness against noise and missing measurements. It serves as an important first step for further development of motion tracking methods during radiotherapy.

  7. A continuous surface reconstruction method on point cloud captured from a 3D surface photogrammetry system

    PubMed Central

    Liu, Wenyang; Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J.; Sawant, Amit; Ruan, Dan

    2015-01-01

    Purpose: To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). Methods: The authors have developed a level-set based surface reconstruction method on point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. In contrast to explicit/discrete meshing-type schemes, their continuous representation is particularly advantageous for subsequent surface registration and motion tracking by eliminating the need for maintaining explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiments, the authors generated a series of surfaces, each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched areas had different degrees of noise and missing data, since VisionRT has difficulty detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated the reconstructed surfaces by comparing against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiments, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground truth, the authors qualitatively validated the reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient.
Results: On phantom point clouds, their method achieved submillimeter reconstruction RMSE under different configurations, demonstrating quantitatively the fidelity of the proposed method in preserving local structural properties of the underlying surface in the presence of noise and missing measurements, and its robustness toward variations of such characteristics. On point clouds from the human subject, the proposed method successfully reconstructed all patient surfaces, filling regions where raw point coordinate readings were missing. Within two comparable regions of interest in the chest area, similar mean curvature distributions were acquired from both their reconstructed surface and the CT surface, with mean and standard deviation of (μ_recon = −2.7 × 10^−3 mm^−1, σ_recon = 7.0 × 10^−3 mm^−1) and (μ_CT = −2.5 × 10^−3 mm^−1, σ_CT = 5.3 × 10^−3 mm^−1), respectively. The agreement of local geometry properties between the reconstructed surfaces and the CT surface demonstrated the ability of the proposed method to faithfully represent the underlying patient surface. Conclusions: The authors have developed an accurate level-set based continuous surface reconstruction method on point clouds acquired by a 3D surface photogrammetry system. The proposed method generated a continuous representation of the underlying phantom and patient surfaces with good robustness against noise and missing measurements. It serves as an important first step for further development of motion tracking methods during radiotherapy. PMID:26520747
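
The validation above compares mean curvature distributions between the reconstructed surface and a CT surface. As a rough illustration of that comparison (not the authors' level-set pipeline), the mean curvature of a surface given as a height map z(x, y) can be computed with finite differences; the function name and the paraboloid example are hypothetical:

```python
import numpy as np

def mean_curvature(z, spacing=1.0):
    """Mean curvature H of a height map z(x, y) sampled on a regular grid,
    via the standard Monge-patch formula and finite differences."""
    zy, zx = np.gradient(z, spacing)        # first derivatives (axis 0 = y, axis 1 = x)
    zxy, zxx = np.gradient(zx, spacing)     # second derivatives of zx
    zyy, _ = np.gradient(zy, spacing)
    num = (1 + zx**2) * zyy - 2 * zx * zy * zxy + (1 + zy**2) * zxx
    den = 2 * (1 + zx**2 + zy**2) ** 1.5
    return num / den
```

For a distribution comparison like the one in the abstract, one would evaluate H over a region of interest on each surface and compare the resulting means and standard deviations.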

  8. Automatic determination of trunk diameter, crown base and height of Scots pine (Pinus sylvestris L.) based on analysis of 3D point clouds gathered from multi-station terrestrial laser scanning. (Polish title: Automatyczne okreslanie srednicy pnia, podstawy korony oraz wysokosci sosny zwyczajnej (Pinus silvestris L.) na podstawie analiz chmur punktow 3D pochodzacych z wielostanowiskowego naziemnego skanowania laserowego)

    NASA Astrophysics Data System (ADS)

    Ratajczak, M.; Wężyk, P.

    2015-12-01

    Rapid development of terrestrial laser scanning (TLS) in recent years has resulted in its recognition and implementation in many industries, including forestry and nature conservation. The use of 3D TLS point clouds in the inventory of trees and stands, as well as in the determination of their biometric features (trunk diameter, tree height, crown base, trunk shape) and lumber size (tree volume), is slowly becoming standard practice. In addition to measurement precision, the primary added value of TLS is the ability to automate the processing of 3D point clouds toward the extraction of selected features of trees and stands. The paper presents original software (GNOM) for the automatic measurement of selected features of trees, based on point clouds obtained with a FARO terrestrial laser scanner. With the developed algorithms (GNOM), the locations of tree trunks on the circular research plot were determined and measurements were performed, covering the DBH (at 1.3 m), further trunk diameters at different heights, the crown base, and the volumes of the trunk (the selection measurement method) and the crown. Research was performed in the Niepolomice Forest in a pure pine stand (Pinus sylvestris L.) on a circular plot with a radius of 18 m, within which there were 16 pine trees (14 of them were cut down). The stand had a two-storey, even-aged structure (147 years old) and was devoid of undergrowth. Terrestrial scanning was performed just before harvesting. The DBH of the 16 pine trees was determined fully automatically with the GNOM algorithm, with an accuracy of 2.1% as compared to the reference measurement by the DBH measurement device.
The mean absolute measurement error in the point cloud, using the semi-automatic "PIXEL" (point-to-point) and "PIPE" (cylinder-fitting) methods in FARO Scene 5.x, was 3.5% and 5.0%, respectively. The reference height was taken as the tape measurement performed on the felled tree. The average error of automatic tree height determination by the GNOM algorithm based on the TLS point clouds amounted to 6.3% and was slightly higher than with the manual method of measurement on profiles in TerraScan (Terrasolid; error of 5.6%). The relatively high error may be mainly related to the small number of TLS points in the upper parts of the crowns. The crown base height measurement showed an error of 9.5%; the reference in this case was the tape measurement performed on the trunks of the felled pines. Processing the point clouds with the GNOM algorithms for the 16 analyzed trees took no longer than 10 min (37 s/tree). The paper demonstrates the innovation of TLS measurement and its high precision in acquiring biometric data in forestry, as well as the continuing need to increase the degree of automation in processing 3D point clouds from terrestrial laser scanning.
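
The GNOM routines themselves are not published in this abstract; a common generic way to obtain DBH is to slice the cloud at 1.3 m and fit a circle to the trunk cross-section. A minimal algebraic (Kåsa) least-squares circle fit, with all names hypothetical:

```python
import numpy as np

def fit_circle(xy):
    """Kasa least-squares circle fit to an (N, 2) array of cross-section points.
    Solves 2*cx*x + 2*cy*y + c = x^2 + y^2 linearly, then recovers the radius."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r
```

DBH then follows as twice the fitted radius; robust variants (e.g. RANSAC around this fit) are usually needed on real, occluded trunk slices.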

  9. Identification of stable areas in unreferenced laser scans for automated geomorphometric monitoring

    NASA Astrophysics Data System (ADS)

    Wujanz, Daniel; Avian, Michael; Krueger, Daniel; Neitzel, Frank

    2018-04-01

    Current research questions in the field of geomorphology focus on the impact of climate change on several processes subsequently causing natural hazards. Geodetic deformation measurements are a suitable tool to document such geomorphic mechanisms, e.g. by capturing a region of interest with terrestrial laser scanners which results in a so-called 3-D point cloud. The main problem in deformation monitoring is the transformation of 3-D point clouds captured at different points in time (epochs) into a stable reference coordinate system. In this contribution, a surface-based registration methodology is applied, termed the iterative closest proximity algorithm (ICProx), that solely uses point cloud data as input, similar to the iterative closest point algorithm (ICP). The aim of this study is to automatically classify deformations that occurred at a rock glacier and an ice glacier, as well as in a rockfall area. For every case study, two epochs were processed, while the datasets notably differ in terms of geometric characteristics, distribution and magnitude of deformation. In summary, the ICProx algorithm's classification accuracy is 70 % on average in comparison to reference data.
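
The ICProx algorithm builds on the classical ICP. As a point of reference, a minimal point-to-point ICP (nearest-neighbor correspondences from a k-d tree plus Kabsch/SVD rigid alignment) can be sketched as follows; this is the ICP baseline only, not the ICProx stability classification:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=30):
    """Minimal point-to-point ICP aligning src (N, 3) onto dst (M, 3).
    Returns accumulated rotation T_R and translation T_t such that
    src @ T_R.T + T_t approximates dst."""
    T_R, T_t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)            # nearest-neighbor correspondences
        match = dst[idx]
        mu_s, mu_d = cur.mean(0), match.mean(0)
        H = (cur - mu_s).T @ (match - mu_d)  # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1, 1, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
        R = Vt.T @ D @ U.T                   # Kabsch rotation
        t = mu_d - R @ mu_s
        cur = cur @ R.T + t
        T_R, T_t = R @ T_R, R @ T_t + t
    return T_R, T_t
```

Convergence here depends on a good initial alignment; ICProx adds the identification of stable areas so that deforming regions do not bias the registration.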

  10. Applications of Panoramic Images: from 720° Panorama to Interior 3d Models of Augmented Reality

    NASA Astrophysics Data System (ADS)

    Lee, I.-C.; Tsai, F.

    2015-05-01

    A series of panoramic images is usually used to generate a 720° panorama image. Although panoramic images are typically used for establishing tour guiding systems, in this research we demonstrate the potential of using panoramic images acquired from multiple sites to create not only 720° panoramas, but also three-dimensional (3D) point clouds and 3D indoor models. Since 3D modeling is one of the goals of this research, the locations of the panoramic sites needed to be carefully planned in order to maintain a robust result for close-range photogrammetry. After the images are acquired, they are processed into 720° panoramas, and these panoramas can be used directly in panorama guiding systems or other applications. In addition to these straightforward applications, interior orientation parameters can also be estimated while generating the 720° panorama. These parameters are the focal length, principal point, and lens radial distortion. The panoramic images can then be processed with close-range photogrammetry procedures to extract the exterior orientation parameters and generate 3D point clouds. In this research, VisualSFM, a structure-from-motion software package, is used to estimate the exterior orientation, and the CMVS toolkit is used to generate 3D point clouds. Next, the 3D point clouds are used as references to create building interior models. In this research, Trimble SketchUp was used to build the model, and the 3D point cloud was used to determine the locations of building objects using a plane-finding procedure. In the texturing process, the panorama images are used as the data source for creating model textures. This 3D indoor model was used as an Augmented Reality model replacing a guide map or a floor plan commonly used in an on-line touring guide system. The 3D indoor model generating procedure has been utilized in two research projects: a cultural heritage site at Kinmen, and the Taipei Main Station pedestrian zone guidance and navigation system.
The results presented in this paper demonstrate the potential of using panoramic images to generate 3D point clouds and 3D models. However, it is currently a manual and labor-intensive process. Research is being carried out to increase the degree of automation of these procedures.
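
A 720° panorama is typically stored as an equirectangular image, in which the pixel column maps to longitude and the row to latitude, so each pixel corresponds to a viewing ray. A small sketch under that assumption (the exact axis convention varies between stitchers, so the orientation here is illustrative):

```python
import numpy as np

def pano_pixel_to_ray(u, v, width, height):
    """Unit viewing ray for pixel (u, v) of an equirectangular panorama.
    Longitude spans [-pi, pi) across the width; latitude runs from +pi/2
    at the top row to -pi/2 at the bottom row."""
    lon = (u / width) * 2 * np.pi - np.pi
    lat = np.pi / 2 - (v / height) * np.pi
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])
```

Rays like these, from panoramas taken at multiple sites, are what the close-range photogrammetry step intersects to triangulate 3D points.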

  11. Automatic Rail Extraction and Clearance Check with a Point Cloud Captured by MLS in a Railway

    NASA Astrophysics Data System (ADS)

    Niina, Y.; Honma, R.; Honma, Y.; Kondo, K.; Tsuji, K.; Hiramatsu, T.; Oketani, E.

    2018-05-01

    Recently, MLS (Mobile Laser Scanning) has been successfully used in road maintenance. In this paper, we present the application of MLS for the inspection of clearance along railway tracks of West Japan Railway Company. Point clouds around the track are captured by MLS mounted on a bogie, and rail positions can be determined by matching the shape of the ideal rail head to the point cloud with the ICP algorithm. A clearance check is executed automatically with a virtual clearance model laid along the extracted rails. As a result of the evaluation, the error of the extracted rail positions is less than 3 mm. With respect to the automatic clearance check, objects inside the clearance and those related to the contact line are successfully detected, as verified by visual confirmation.
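
The virtual clearance model used above is not specified in the abstract; generically, a clearance check reduces to testing whether cross-section points (expressed in rail-aligned coordinates) fall inside a 2D clearance profile. A minimal even-odd ray-casting sketch, with all names hypothetical:

```python
def inside_polygon(pt, poly):
    """Even-odd ray casting: True if 2D point pt lies inside polygon poly
    (a list of (x, y) vertices in order)."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                     # edge crosses the scan line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:                          # crossing to the right of pt
                inside = not inside
    return inside

def clearance_violations(points, profile):
    """Cross-section points that intrude into the clearance profile."""
    return [p for p in points if inside_polygon(p, profile)]
```

In a real pipeline this test would be swept along the extracted rail axis, with the profile re-oriented to the local track frame at each station.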

  12. Cloud point extraction: an alternative to traditional liquid-liquid extraction for lanthanides(III) separation.

    PubMed

    Favre-Réguillon, Alain; Draye, Micheline; Lebuzit, Gérard; Thomas, Sylvie; Foos, Jacques; Cote, Gérard; Guy, Alain

    2004-06-17

    Cloud point extraction (CPE) was used to extract and separate lanthanum(III) and gadolinium(III) nitrate from an aqueous solution. The methodology used is based on the formation of lanthanide(III)-8-hydroxyquinoline (8-HQ) complexes soluble in a micellar phase of non-ionic surfactant. The lanthanide(III) complexes are then extracted into the surfactant-rich phase at a temperature above the cloud point temperature (CPT). The structure of the non-ionic surfactant, and the chelating agent-metal molar ratio are identified as factors determining the extraction efficiency and selectivity. In an aqueous solution containing equimolar concentrations of La(III) and Gd(III), extraction efficiency for Gd(III) can reach 96% with a Gd(III)/La(III) selectivity higher than 30 using Triton X-114. Under those conditions, a Gd(III) decontamination factor of 50 is obtained.
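
The figures quoted above (96 % extraction, selectivity above 30) are linked by the standard solvent-extraction relations between the distribution ratio D, the phase-volume ratio, and the extraction efficiency. A small sketch assuming equal phase volumes (the abstract does not state the actual volume ratio, so the numbers below are purely illustrative):

```python
def extraction_efficiency(D, v_ratio=1.0):
    """Fraction of metal extracted: E = D*v / (1 + D*v), where v is the
    surfactant-rich : aqueous phase-volume ratio."""
    return D * v_ratio / (1.0 + D * v_ratio)

def separation_factor(D_a, D_b):
    """Selectivity of metal a over metal b (ratio of distribution ratios)."""
    return D_a / D_b

# Illustrative only: with equal volumes, D = 24 reproduces the 96 % Gd(III)
# efficiency quoted in the abstract.
E_gd = extraction_efficiency(24.0)
```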

  13. 2.5D Multi-View Gait Recognition Based on Point Cloud Registration

    PubMed Central

    Tang, Jin; Luo, Jian; Tjahjadi, Tardi; Gao, Yan

    2014-01-01

    This paper presents a method for modeling a 2.5-dimensional (2.5D) human body and extracting the gait features for identifying the human subject. To achieve view-invariant gait recognition, a multi-view synthesizing method based on point cloud registration (MVSM) is proposed to generate multi-view training galleries. The concept of a density- and curvature-based Color Gait Curvature Image is introduced to map 2.5D data onto a 2D space, enabling data dimension reduction by discrete cosine transform and 2D principal component analysis. Gait recognition is achieved via a 2.5D view-invariant gait recognition method based on point cloud registration. Experimental results on the in-house database captured by a Microsoft Kinect camera show a significant performance gain when using MVSM. PMID:24686727
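
The dimension-reduction step above combines DCT with 2D PCA. As a sketch of the DCT half only (the 2D PCA stage is omitted, and all names here are hypothetical), one keeps just the low-frequency block of the 2D DCT of the feature image:

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_reduce(img, keep=8):
    """Reduce a 2D image to its top-left keep x keep block of DCT
    coefficients -- the low-frequency content carries most of the energy
    for smooth images such as gait curvature maps."""
    return dctn(img, norm="ortho")[:keep, :keep]

def dct_restore(reduced, shape):
    """Rebuild an image of the given shape from a truncated coefficient block."""
    full = np.zeros(shape)
    k0, k1 = reduced.shape
    full[:k0, :k1] = reduced
    return idctn(full, norm="ortho")
```

In the paper's pipeline, a block of coefficients like this would then be flattened and fed to 2D PCA before classification.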

  14. Examining Effects of Virtual Machine Settings on Voice over Internet Protocol in a Private Cloud Environment

    ERIC Educational Resources Information Center

    Liao, Yuan

    2011-01-01

    The virtualization of computing resources, as represented by the sustained growth of cloud computing, continues to thrive. Information Technology departments are building their private clouds due to the perception of significant cost savings by managing all physical computing resources from a single point and assigning them to applications or…

  15. From Laser Scanning to Finite Element Analysis of Complex Buildings by Using a Semi-Automatic Procedure.

    PubMed

    Castellazzi, Giovanni; D'Altri, Antonio Maria; Bitelli, Gabriele; Selvaggi, Ilenia; Lambertini, Alessandro

    2015-07-28

    In this paper, a new semi-automatic procedure to transform three-dimensional point clouds of complex objects into three-dimensional finite element models is presented and validated. The procedure conceives of the point cloud as a stacking of point sections. The complexity of the clouds is arbitrary, since the procedure is designed for terrestrial laser scanner surveys applied to buildings with irregular geometry, such as historical buildings. The procedure aims at solving the problems connected to the generation of finite element models of these complex structures by constructing a finely discretized geometry in a reduced amount of time, ready to be used for structural analysis. If the starting clouds represent the inner and outer surfaces of the structure, the resulting finite element model will accurately capture the whole three-dimensional structure, producing a complex solid made of voxel elements. A comparison analysis with a CAD-based model is carried out on a historical building damaged by a seismic event. The results indicate that the proposed procedure is effective and obtains comparable models in a shorter time, with an increased level of automation.
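
The voxel solid such a procedure produces can be illustrated by the basic occupancy-grid step: bin each point into a voxel and mark it occupied, so that occupied voxels can become hexahedral finite elements. A minimal sketch (the published procedure works on stacked point sections and is considerably more involved):

```python
import numpy as np

def voxelize(points, voxel_size):
    """Boolean occupancy grid from an (N, 3) point cloud. Each occupied
    voxel is a candidate hexahedral (voxel) element for the FE mesh.
    Returns the grid and the world-space origin of voxel (0, 0, 0)."""
    origin = points.min(axis=0)
    idx = np.floor((points - origin) / voxel_size).astype(int)
    dims = idx.max(axis=0) + 1
    grid = np.zeros(dims, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid, origin
```

A subsequent step would fill the interior between inner and outer surface voxels before exporting elements to the solver.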

  16. Towards Reconstructing a Doric Column in a Virtual Construction Site

    NASA Astrophysics Data System (ADS)

    Bartzis, D.

    2017-02-01

    This paper deals with the 3D reconstruction of ancient Greek architectural members, especially the element of the Doric column. The case study for this project is the Choragic monument of Nicias on the South Slope of the Athenian Acropolis, from which a column drum, two capitals and smaller fragments are preserved. The first goal of this paper is to present some benefits of using 3D reconstruction methods not only in the documentation but also in the understanding of ancient Greek architectural members. The second goal is to take advantage of the produced point clouds. Using the CloudCompare software, comparisons are made between the actual architectural members and an "ideal" point cloud of the whole column in its original form. Searching for probable overlaps between the two point clouds can assist in estimating the original position of each member/fragment on the column. This method is expanded with more comparisons between the reference column model and other members/fragments around the Acropolis, which may not yet have been ascribed to the monument of Nicias.
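
CloudCompare's cloud-to-cloud (C2C) comparison reports, for every point of one cloud, the distance to its nearest neighbor in the other; small distances indicate where a fragment overlaps the ideal column model. A minimal sketch of that C2C distance with a k-d tree:

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud(source, reference):
    """Nearest-neighbor (C2C) distance from every point of source (N, 3)
    to the reference cloud (M, 3), as CloudCompare computes it."""
    distances, _ = cKDTree(reference).query(source)
    return distances
```

Statistics of these distances (mean, RMS, histogram) are what one would inspect when testing a candidate position of a fragment on the column.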

  17. Air Modeling - Observational Meteorological Data

    EPA Pesticide Factsheets

    Observed meteorological data for use in air quality modeling consist of physical parameters that are measured directly by instrumentation, and include temperature, dew point, wind direction, wind speed, cloud cover, cloud layer(s), ceiling height,

  18. Method for cold stable biojet fuel

    DOEpatents

    Seames, Wayne S.; Aulich, Ted

    2015-12-08

    Plant or animal oils are processed to produce a fuel that operates at very cold temperatures and is suitable as an aviation turbine fuel, a diesel fuel, a fuel blendstock, or any fuel having a low cloud point, pour point or freeze point. The process is based on the cracking of plant or animal oils or their associated esters, known as biodiesel, to generate lighter chemical compounds that have substantially lower cloud, pour, and/or freeze points than the original oil or biodiesel. Cracked oil is processed using separation steps together with analysis to collect fractions with desired low temperature properties by removing undesirable compounds that do not possess the desired temperature properties.

  19. Nonuniform multiview color texture mapping of image sequence and three-dimensional model for faded cultural relics with sift feature points

    NASA Astrophysics Data System (ADS)

    Li, Na; Gong, Xingyu; Li, Hongan; Jia, Pengtao

    2018-01-01

    For faded relics, such as the Terracotta Army, the 2D-3D registration between an optical camera and a point cloud model is an important part of color texture reconstruction and further applications. This paper proposes a nonuniform multiview color texture mapping for the image sequence and the three-dimensional (3D) point cloud model collected by Handyscan3D. We first introduce nonuniform multiview calibration, including an explanation of its algorithm principle and an analysis of its advantages. We then establish transformation equations based on SIFT feature points for the multiview image sequence. At the same time, the selection of nonuniform multiview SIFT feature points is introduced in detail. Finally, the solving process of the collinearity equations based on multiview perspective projection is given in three steps with a flowchart. In the experiment, this method is applied to the color reconstruction of the kneeling figurine, the Tangsancai lady, and the general figurine. The results demonstrate that the proposed method provides effective support for the color reconstruction of faded cultural relics and is able to improve the accuracy of 2D-3D registration between the image sequence and the point cloud model.
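
The collinearity equations mentioned above relate a world point, the camera pose, and its image coordinates. A minimal pinhole version (computer-vision sign convention with the camera looking down +z and no lens distortion; the paper's multiview formulation builds SIFT-based correspondences on top of a projection like this):

```python
import numpy as np

def project(points, R, C, f, pp=(0.0, 0.0)):
    """Project world points (N, 3) into an image: rotate into the camera
    frame with R, translate by camera center C, then divide by depth.
    f is the focal length and pp the principal point, both in pixels."""
    d = (points - C) @ R.T                    # camera-frame coordinates
    x = pp[0] + f * d[:, 0] / d[:, 2]
    y = pp[1] + f * d[:, 1] / d[:, 2]
    return np.column_stack([x, y])
```

Photogrammetric software often uses the opposite sign convention (camera looking down −z), so the signs here are an assumption, not the paper's exact form.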

  20. Estimation of cylinder orientation in three-dimensional point cloud using angular distance-based optimization

    NASA Astrophysics Data System (ADS)

    Su, Yun-Ting; Hu, Shuowen; Bethel, James S.

    2017-05-01

    Light detection and ranging (LIDAR) has become a widely used tool in remote sensing for mapping, surveying, modeling, and a host of other applications. The motivation behind this work is the modeling of piping systems in industrial sites, where cylinders are the most common primitive or shape. We focus on cylinder parameter estimation in three-dimensional point clouds, proposing a mathematical formulation based on angular distance to determine the cylinder orientation. We demonstrate the accuracy and robustness of the technique on synthetically generated cylinder point clouds (where the true axis orientation is known) as well as on real LIDAR data of piping systems. The proposed algorithm is compared with a discrete space Hough transform-based approach as well as a continuous space inlier approach, which iteratively discards outlier points to refine the cylinder parameter estimates. Results show that the proposed method is more computationally efficient than the Hough transform approach and is more accurate than both the Hough transform approach and the inlier method.
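
A useful fact behind cylinder-axis estimation is that the surface normals of a cylinder are all perpendicular to its axis. A common closed-form alternative to the paper's angular-distance optimization (not the authors' method) estimates the axis as the eigenvector of the normals' scatter matrix with the smallest eigenvalue:

```python
import numpy as np

def cylinder_axis_from_normals(normals):
    """Estimate a cylinder's axis direction from (N, 3) unit surface normals.
    Since every normal is perpendicular to the axis, the axis is the
    direction of least 'normal energy': the eigenvector of N^T N with the
    smallest eigenvalue (np.linalg.eigh sorts eigenvalues ascending)."""
    _, vecs = np.linalg.eigh(normals.T @ normals)
    return vecs[:, 0]
```

On real LIDAR data the normals would first be estimated from local neighborhoods, and the sign of the returned axis is arbitrary.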

Top