Sample records for scission-point model

  1. New statistical scission-point model to predict fission fragment observables

    NASA Astrophysics Data System (ADS)

    Lemaître, Jean-François; Panebianco, Stefano; Sida, Jean-Luc; Hilaire, Stéphane; Heinrich, Sophie

    2015-09-01

    The development of high-performance computing facilities makes possible the massive production of nuclear data in a fully microscopic framework. Taking advantage of individual potential-energy calculations for more than 7000 nuclei, a new statistical scission-point model, called SPY, has been developed. It gives access to the absolute available energy at the scission point, which allows a parameter-free microcanonical statistical description to be used to calculate the distributions and mean values of all fission observables. SPY exploits the richness of microscopic inputs within a rather simple theoretical framework, with no parameter other than the scission-point definition, so that clear conclusions can be drawn from complete knowledge of the ingredients involved in the model, at very limited computing cost.
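The microcanonical logic summarized above can be sketched numerically: weight each binary split of the compound nucleus by the state density at its available scission energy, then normalize the weights into yields. The energy function, level-density form, and compound nucleus below are invented placeholders for illustration, not the HFB-based inputs the SPY model actually uses.

```python
import math

A_CN = 236  # compound-nucleus mass (e.g. 236U); an assumption for this toy

def state_density(E, A):
    """Fermi-gas state density rho(E) ~ exp(2*sqrt(a*E)), with a = A/10 MeV^-1."""
    if E <= 0:
        return 0.0
    a = A / 10.0
    return math.exp(2.0 * math.sqrt(a * E))

def available_energy(a_light):
    """Placeholder energy balance (MeV), peaking at a mildly asymmetric split."""
    return max(0.0, 30.0 - 0.02 * (a_light - 96) ** 2)

# Enumerate binary splits (light fragment up to symmetry) and weight them.
splits = [(a_l, A_CN - a_l) for a_l in range(70, A_CN // 2 + 1)]
weights = {a_l: state_density(available_energy(a_l), a_l + a_h)
           for a_l, a_h in splits}
total = sum(weights.values())
yields = {a_l: w / total for a_l, w in weights.items()}

most_probable = max(yields, key=yields.get)
print(f"most probable light-fragment mass: {most_probable}")
```

Since the state density grows monotonically with energy, the yield simply peaks wherever the placeholder energy balance does; in the real model the structure of the HFB energies produces the nontrivial peaks.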

  2. SPY: A new scission point model based on microscopic ingredients to predict fission fragments properties

    NASA Astrophysics Data System (ADS)

    Lemaître, J.-F.; Dubray, N.; Hilaire, S.; Panebianco, S.; Sida, J.-L.

    2013-12-01

    Our purpose is to determine fission fragment characteristics within the framework of a scission-point model named SPY, for Scission Point Yields. This approach can be considered a theoretical laboratory for studying the fission mechanism, since it gives access to the correlation between the fragments' properties and their nuclear structure: shell corrections, pairing, collective degrees of freedom, and odd-even effects. Which of these dominate the final state? What is the impact of the compound-nucleus structure? The SPY model consists of a statistical description of the fission process at the scission point, where the fragments are completely formed and well separated, with fixed properties. The defining feature of the model is that the nuclear structure of the fragments is derived from full quantum microscopic calculations. This approach allows the fission final state to be computed for extremely exotic nuclei that are inaccessible to most available fission models.

  3. SPY: a new scission-point model based on microscopic inputs to predict fission fragment properties

    NASA Astrophysics Data System (ADS)

    Panebianco, Stefano; Dubray, Noël; Goriely, Stéphane; Hilaire, Stéphane; Lemaître, Jean-François; Sida, Jean-Luc

    2014-04-01

    Despite the difficulty of describing the full fission dynamics, the main fragment characteristics can be determined in a static approach based on a so-called scission-point model. Within this framework, a new Scission-Point model for the calculation of fission fragment Yields (SPY) has been developed. This model, initially based on the approach developed by Wilkins in the late seventies, consists of performing a static energy balance at scission, where the two fragments are assumed to be completely separated, so that their macroscopic properties (mass and charge) can be considered fixed. Given the state density of the system, averaged quantities such as mass and charge yields and mean kinetic and excitation energies can then be extracted in the framework of a microcanonical statistical description. The main advantage of the SPY model is its use of one of the most up-to-date microscopic descriptions of the nucleus for the individual energy of each fragment and, in the future, for their state densities. These quantities are obtained from HFB calculations using the Gogny nucleon-nucleon interaction, ensuring the overall coherence of the model. Starting from a description of the SPY model and its main features, a comparison between SPY predictions and experimental data will be discussed for some specific cases, from light nuclei around mercury to the major actinides. Moreover, extensive predictions over the whole chart of nuclides will be discussed, with particular attention to their implications for stellar nucleosynthesis. Finally, future developments, mainly concerning the introduction of microscopic state densities, will be briefly discussed.

  4. Molecular weight kinetics and chain scission models for dextran polymers during ultrasonic degradation.

    PubMed

    Pu, Yuanyuan; Zou, Qingsong; Hou, Dianzhi; Zhang, Yiping; Chen, Shan

    2017-01-20

    Ultrasonic degradation of six dextran samples with different initial molecular weights (IMW) was performed to investigate the degradation behavior and chain-scission mechanism of dextrans. The weight-average molecular weight (Mw) and polydispersity index (D value) were monitored by high-performance gel permeation chromatography (HPGPC). Results showed that Mw and the D value decreased with increasing ultrasonic time, yielding a more homogeneous dextran solution with lower molecular weight. Significant degradation occurred in dextrans with higher IMW, particularly at the initial stage of the ultrasonic treatment. The Malhotra model was found to describe the molecular-weight kinetics well for all dextran samples. Experimental data were fitted to two chain-scission models to study the dextran chain-scission mechanism, and model performance was compared. Results indicated that the midpoint scission model agreed well with the experimental results, with a linear regression factor of R² > 0.99. Copyright © 2016 Elsevier Ltd. All rights reserved.
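To illustrate the two mechanisms compared in this abstract, here is a toy Monte Carlo of random versus midpoint scission, starting from monodisperse chains. Midpoint scission keeps the length distribution narrower (lower dispersity D = Mw/Mn), which is the qualitative signature such model comparisons test for. All chain lengths, event counts, and seeds are invented for the sketch.

```python
import random

def degrade(chains, n_events, midpoint, rng):
    """Apply n_events scission events to a list of chain lengths (monomer units)."""
    chains = list(chains)
    for _ in range(n_events):
        i = rng.randrange(len(chains))
        n = chains[i]
        if n < 2:
            continue  # a monomer cannot be cut
        # midpoint scission: always break in the middle;
        # random scission: every backbone bond is equally likely
        cut = n // 2 if midpoint else rng.randrange(1, n)
        chains[i] = cut
        chains.append(n - cut)
    return chains

def dispersity(chains):
    """D = Mw/Mn for chain lengths expressed in monomer units."""
    mn = sum(chains) / len(chains)                 # number-average
    mw = sum(n * n for n in chains) / sum(chains)  # weight-average
    return mw / mn

start = [10_000] * 200
d_random = dispersity(degrade(start, 2000, False, random.Random(0)))
d_mid = dispersity(degrade(start, 2000, True, random.Random(1)))
print(f"random scission:   D = {d_random:.2f}")
print(f"midpoint scission: D = {d_mid:.2f}")
```

In practice such simulated Mw trajectories would be compared against HPGPC data, as the paper does with its two candidate models.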

  5. Statistical prescission point model of fission fragment angular distributions

    NASA Astrophysics Data System (ADS)

    John, Bency; Kataria, S. K.

    1998-03-01

    In light of recent developments in fission studies such as slow saddle to scission motion and spin equilibration near the scission point, the theory of fission fragment angular distribution is examined and a new statistical prescission point model is developed. The conditional equilibrium of the collective angular bearing modes at the prescission point, which is guided mainly by their relaxation times and population probabilities, is taken into account in the present model. The present model gives a consistent description of the fragment angular and spin distributions for a wide variety of heavy and light ion induced fission reactions.

  6. Universal scaling for polymer chain scission in turbulence

    PubMed Central

    Vanapalli, Siva A.; Ceccio, Steven L.; Solomon, Michael J.

    2006-01-01

    We report that previous polymer chain scission experiments in strong flows, long analyzed according to accepted laminar flow scission theories, were in fact affected by turbulence. We reconcile existing anomalies between theory and experiment with the hypothesis that the local stress at the Kolmogorov scale generates the molecular tension leading to polymer covalent bond breakage. The hypothesis yields a universal scaling for polymer scission in turbulent flows. This surprising reassessment of over 40 years of experimental data simplifies the theoretical picture of polymer dynamics leading to scission and allows control of scission in commercial polymers and genomic DNA. PMID:17075043
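The hypothesis above, that the stress relevant for scission is the viscous stress at the Kolmogorov scale, rests on the standard turbulence estimates η = (ν³/ε)^¼ for the Kolmogorov length and (ε/ν)^½ for the strain rate at that scale. The sketch below evaluates them for assumed, water-like example values; the numbers are not data from the study.

```python
# Assumed example values (water-like fluid, vigorous turbulence):
nu = 1.0e-6   # kinematic viscosity, m^2/s
rho = 1.0e3   # density, kg/m^3
eps = 1.0e4   # turbulent kinetic-energy dissipation rate, W/kg

eta = (nu ** 3 / eps) ** 0.25    # Kolmogorov length scale, m
strain_rate = (eps / nu) ** 0.5  # Kolmogorov-scale strain rate, 1/s
stress = rho * nu * strain_rate  # viscous stress at that scale, Pa

print(f"Kolmogorov length: {eta:.2e} m")
print(f"strain rate:       {strain_rate:.2e} 1/s")
print(f"viscous stress:    {stress:.2e} Pa")
```

The scaling argument is that this local stress, not the mean laminar-flow stress, sets the molecular tension that breaks covalent bonds, which is what collapses the historical data onto one curve.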

  7. Nonadiabatic effects in C-Br bond scission in the photodissociation of bromoacetyl chloride

    NASA Astrophysics Data System (ADS)

    Valero, Rosendo; Truhlar, Donald G.

    2006-11-01

    Bromoacetyl chloride photodissociation has been interpreted as a paradigmatic example of a process in which nonadiabatic effects play a major role. In molecular beam experiments by Butler and co-workers [J. Chem. Phys. 95, 3848 (1991); J. Chem. Phys. 97, 355 (1992)], BrCH2C(O)Cl was prepared in its ground electronic state (S0) and excited with a laser at 248 nm to its first excited singlet state (S1). The two main ensuing photoreactions are the ruptures of the C-Cl bond and of the C-Br bond. A nonadiabatic model was proposed in which the C-Br scission is strongly suppressed due to nonadiabatic recrossing at the barrier formed by the avoided crossing between the S1 and S2 states. Recent reduced-dimensional dynamical studies lend support to this model. However, another interpretation that has been given for the experimental results is that the reduced probability of C-Br scission is a consequence of incomplete intramolecular energy redistribution. To provide further insight into this problem, we have studied the energetically lowest six singlet electronic states of bromoacetyl chloride by using an ab initio multiconfigurational perturbative electronic structure method. Stationary points (minima and saddle points) and minimum energy paths have been characterized on the S0 and S1 potential energy surfaces. The fourfold way diabatization method has been applied to transform five adiabatic excited electronic states to a diabatic representation. The diabatic potential energy matrix of the first five excited singlet states has been constructed along several cuts of the potential energy hypersurfaces. The thermochemistry of the photodissociation reactions and a comparison with experimental translational energy distributions strongly suggest that nonadiabatic effects dominate the C-Br scission, but that the reaction proceeds along the energetically allowed diabatic pathway to excited-state products instead of being nonadiabatically suppressed.

  8. An atomic finite element model for biodegradable polymers. Part 2. A model for change in Young's modulus due to polymer chain scission.

    PubMed

    Gleadall, Andrew; Pan, Jingzhe; Kruft, Marc-Anton

    2015-11-01

    Atomic simulations were undertaken to analyse the effect of polymer chain scission on amorphous poly(lactide) during degradation. Many experimental studies have analysed the degradation of mechanical properties, but relatively few computational studies have been conducted. Such studies are valuable for supporting the design of bioresorbable medical devices. Hence, in this paper, an Effective Cavity Theory for the degradation of Young's modulus was developed. Atomic simulations indicated that a volume of reduced-stiffness polymer may exist around chain scissions. In the Effective Cavity Theory, each chain scission is considered to instantiate an effective cavity. Finite Element Analysis simulations were conducted to model the effect of the cavities on Young's modulus. Since polymer crystallinity affects mechanical properties, the effect of increases in crystallinity during degradation on Young's modulus is also considered. To demonstrate the ability of the Effective Cavity Theory, it was fitted to several sets of experimental data for Young's modulus in the literature. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Pre-scission model predictions of fission fragment mass distributions for super-heavy elements

    NASA Astrophysics Data System (ADS)

    Carjan, N.; Ivanyuk, F. A.; Oganessian, Yu. Ts.

    2017-12-01

    The total deformation energy just before the moment of neck rupture is calculated for the heaviest nuclei in which spontaneous fission has been detected (279,281Ds, 281Rg, and 282,284Cn). The Strutinsky prescription is used, and nuclear shapes just before scission are described in terms of Cassinian ovals, defined for a fixed value of the elongation parameter α = 0.98 and generalized by the inclusion of four additional shape parameters: α1, α3, α4, and α6. Supposing that the probability of each point in the deformation space is given by a Boltzmann factor, the distribution of the fission-fragment masses is estimated. The octupole deformation α3 at scission is found to play a decisive role in determining the main feature of the mass distribution, symmetric or asymmetric: only the inclusion of α3 leads to an asymmetric division. Finally, the calculations are extended to an unexplored region of super-heavy nuclei: the even-even Fl (Z = 114), Lv (Z = 116), Og (Z = 118) and (Z = 126) isotopes. For these nuclei, the most probable mass of the light fragment has an almost constant value (≈136), like the most probable mass of the heavy fragment in the actinide region. It is the neutron shell at N = 82 that makes this light fragment so stable. Naturally, for very neutron-deficient isotopes, the mass division becomes symmetric when N = 2 × 82.
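The Boltzmann-factor estimate described above can be caricatured in a few lines: assign every point of a (light-fragment mass, α3) grid the weight exp(-E_def/T) and project onto the mass axis. The energy surface below is an invented placeholder, not a Strutinsky calculation; it only reproduces the qualitative point that freezing the octupole degree of freedom at zero forces a symmetric division.

```python
import math

A_CN, T = 284, 1.0  # e.g. a 284Cn-like compound system; T in MeV (both assumed)

def e_def(a_light, alpha3):
    """Placeholder deformation energy (MeV): a symmetric valley plus an
    octupole-driven well favoring a light fragment near A = 136."""
    sym = 0.004 * (a_light - A_CN / 2) ** 2
    octupole = -6.0 * alpha3 * math.exp(-((a_light - 136) / 5.0) ** 2)
    return sym + octupole

masses = range(100, A_CN // 2 + 1)
alphas = [0.05 * i for i in range(7)]  # alpha3 grid, 0.0 .. 0.3

# Boltzmann weight of each mass, summed over the alpha3 grid.
weights = {a: sum(math.exp(-e_def(a, al) / T) for al in alphas)
           for a in masses}
peak = max(weights, key=weights.get)

# With alpha3 frozen at zero the distribution collapses to symmetry.
peak_no_octupole = max(masses, key=lambda a: math.exp(-e_def(a, 0.0) / T))

print(f"peak light-fragment mass with alpha3:    {peak}")
print(f"peak light-fragment mass without alpha3: {peak_no_octupole}")
```

Even in this caricature, including α3 pulls the peak away from symmetric division, mirroring the paper's conclusion that the octupole deformation decides the symmetric-versus-asymmetric character.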

  10. Branching, Chain Scission, and Solution Stability of Worm-Like Micelles

    NASA Astrophysics Data System (ADS)

    Beaucage, Greg; Vogtt, Karsten; Jiang, Hanqui

    As salt is added to a simple micelle solution such as SDS or SLES, the zero-shear-rate specific viscosity rises rapidly, passes through a maximum, and then decays. The rapid rise in viscosity is associated with the formation of elliptical and extended-chain worm-like micelles (WLMs). Entanglement of these long-chain micelles leads to the viscoelastic behavior we associate with shampoo and body wash. The plateau and drop in viscosity at high salt concentrations are caused by a special type of topological branching in which, according to Cates' theory, the branch points incur no energy penalty for motion along the chain. These branch points have some similarity to catenane crosslinks. Predictive dynamic theories for WLMs rely on structural details: the diameter, persistence length, contour length, branch length, segment length between branch points, and mesh size. Further, since the contour length and other large-scale features are in kinetic equilibrium, with frequent chain breakage and re-formation, the thermodynamics of these long-chain structures are of interest both in terms of chain scission and in terms of the stability of the colloidal solution as a whole. Recent structural studies of WLMs using static neutron scattering, based on new scattering models, will be presented, demonstrating that these input parameters for dynamic models of complex topological systems are quantitatively and directly available. In this context it is important to compare dynamic features, for instance entanglement, with their static analogs, such as chain overlap.

  11. Fission time scale from pre-scission neutron and α multiplicities in the 16O + 194Pt reaction

    NASA Astrophysics Data System (ADS)

    Kapoor, K.; Verma, S.; Sharma, P.; Mahajan, R.; Kaur, N.; Kaur, G.; Behera, B. R.; Singh, K. P.; Kumar, A.; Singh, H.; Dubey, R.; Saneesh, N.; Jhingan, A.; Sugathan, P.; Mohanto, G.; Nayak, B. K.; Saxena, A.; Sharma, H. P.; Chamoli, S. K.; Mukul, I.; Singh, V.

    2017-11-01

    Pre- and post-scission α-particle multiplicities have been measured for the reaction 16O + 194Pt at 98.4 MeV, forming the 210Rn compound nucleus. α particles were measured at various angles in coincidence with the fission fragments. The moving-source technique was used to extract the pre- and post-scission contributions to the particle multiplicity. Studies of the fission mechanism using different probes are helpful in understanding the detailed reaction dynamics. The neutron multiplicities for this reaction have been reported earlier. The multiplicities of neutrons and α particles were reproduced with the standard statistical-model code JOANNE2 by varying the transient (τtr) and saddle-to-scission (τssc) times. This code includes deformation-dependent particle transmission coefficients, binding energies, and level densities. Fission time scales of the order of 50-65 × 10⁻²¹ s are required to reproduce the neutron and α-particle multiplicities.

  12. Controlling the bond scission sequence of oxygenates for energy applications

    NASA Astrophysics Data System (ADS)

    Stottlemyer, Alan L.

    The so-called "Holy Grail" of heterogeneous catalysis is a fundamental understanding of catalyzed chemical transformations spanning multidimensional scales of length and time, enabling rational catalyst design. Such an undertaking is realizable only with an atomic-level understanding of bond formation and destruction with respect to the intrinsic properties of the metal catalyst. In this study, we investigate the bond scission sequence of small oxygenates (methanol, ethanol, ethylene glycol) on bimetallic transition-metal catalysts and transition-metal carbide catalysts. Oxygenates are of interest both as hydrogen carriers for reforming to H2 and CO and as fuels in direct alcohol fuel cells (DAFC). To address the so-called "materials gap" and "pressure gap", this work adopted three parallel research approaches: (1) ultra-high-vacuum (UHV) studies, including temperature-programmed desorption (TPD) and high-resolution electron energy loss spectroscopy (HREELS), on polycrystalline surfaces; (2) DFT studies, including thermodynamic and kinetic calculations; and (3) electrochemical studies, including cyclic voltammetry (CV) and chronoamperometry (CA). Recent studies have suggested that tungsten monocarbide (WC) may behave similarly to Pt for the electrooxidation of oxygenates. TPD was used to quantify the activity and selectivity of oxygenate decomposition on WC and Pt-modified WC (Pt/WC) as compared with Pt. While decomposition activity was generally higher on WC than on Pt, scission of the C-O bond resulted in alkane/alkene formation on WC, an undesired product for DAFC. When Pt was added to WC by physical vapor deposition, C-O bond scission was limited, suggesting that Pt synergistically modifies WC to improve the selectivity toward C-H bond scission, producing H2 and CO. Additionally, TPD confirmed WC and Pt/WC to be more CO-tolerant than Pt. HREELS results verified that the surface intermediates on Pt/WC differ from those on Pt or WC.

  13. Advanced model for the prediction of the neutron-rich fission product yields

    NASA Astrophysics Data System (ADS)

    Rubchenya, V. A.; Gorelov, D.; Jokinen, A.; Penttilä, H.; Äystö, J.

    2013-12-01

    A consistent model for the description of independent fission-product formation cross sections in spontaneous fission and in neutron- and proton-induced fission at energies up to 100 MeV is developed. The model combines a new version of the two-component exciton model with a time-dependent statistical model of the fusion-fission process, including dynamical effects, for accurate calculation of the nucleon composition and excitation energy of the fissioning nucleus at the scission point. For each member of the compound-nucleus ensemble at the scission point, the primary fission fragment characteristics (kinetic and excitation energies and yields) are calculated using the scission-point fission model, with inclusion of nuclear shell and pairing effects and a multimodal approach. The charge distribution of the primary-fragment isobaric chains is treated as the result of frozen quantal fluctuations of the isovector nuclear-matter density at the scission point with a finite neck radius. Model parameters were obtained by comparing the predicted independent fission-product yields with experimental results and with the neutron-rich fission-product data measured with a Penning trap at the Accelerator Laboratory of the University of Jyväskylä (JYFLTRAP).

  14. Ultrasound degradation of xanthan polymer in aqueous solution: Its scission mechanism and the effect of NaCl incorporation.

    PubMed

    Saleh, H M; Annuar, M S M; Simarani, K

    2017-11-01

    Degradation of xanthan polymer in aqueous solution by ultrasonic irradiation was investigated. The effects of selected variables, i.e. sonication intensity, irradiation time, xanthan gum concentration, and molar concentration of NaCl in solution, were studied. A combined approach of full factorial design and conventional one-factor-at-a-time experiments gave optimum degradation at a sonication power intensity of 11.5 W cm⁻², an irradiation time of 120 min, and 0.1 g L⁻¹ xanthan in a salt-free solution. The molecular-weight reduction of xanthan gum under sonication was described by an exponential decay function, with a higher rate constant for polymer degradation in the salt-free solution. The limiting molecular weight, below which fragments no longer undergo scission, was determined from this function. The incorporation of NaCl in the xanthan solution resulted in a lower limiting molecular weight. The ultrasound-mediated degradation of the aqueous xanthan polymer chain agreed with a random scission model. The side chain of the xanthan polymer is proposed to be the primary site of scission. Copyright © 2017 Elsevier B.V. All rights reserved.
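The exponential decay toward a limiting molecular weight described above has the standard form M(t) = M_lim + (M0 - M_lim)·exp(-kt): as t grows, the molecular weight levels off at M_lim, below which no further scission occurs. The constants below are illustrative assumptions, not the paper's fitted values.

```python
import math

# Assumed illustrative constants (not fitted to the xanthan data):
M0 = 2.0e6     # initial weight-average molecular weight, g/mol
M_lim = 5.0e4  # limiting molecular weight, g/mol
k = 0.03       # degradation rate constant, 1/min

def mw(t_min):
    """Weight-average molecular weight after t_min minutes of sonication."""
    return M_lim + (M0 - M_lim) * math.exp(-k * t_min)

for t in (0, 30, 60, 120):
    print(f"t = {t:3d} min  Mw = {mw(t):.3e} g/mol")
```

A lower M_lim with added NaCl, as reported in the abstract, would correspond here simply to a smaller plateau value while the decay shape is unchanged.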

  15. The Amphipathic Helix of Influenza A Virus M2 Protein Is Required for Filamentous Bud Formation and Scission of Filamentous and Spherical Particles

    PubMed Central

    Roberts, Kari L.; Leser, George P.; Ma, Chunlong

    2013-01-01

    Influenza virus assembles and buds at the infected-cell plasma membrane. This involves extrusion of the plasma membrane followed by scission of the bud, resulting in severing the nascent virion from its former host. The influenza virus M2 ion channel protein contains in its cytoplasmic tail a membrane-proximal amphipathic helix that facilitates the scission process and is also required for filamentous particle formation. Mutation of five conserved hydrophobic residues to alanines within the amphipathic helix (M2 five-point mutant, or 5PM) reduced scission and also filament formation, whereas single mutations had no apparent phenotype. Here, we show that any two of these five residues mutated together to alanines result in virus debilitated for growth and filament formation in a manner similar to 5PM. Growth kinetics of the M2 mutants are approximately 2 logs lower than the wild-type level, and plaque diameter was significantly reduced. When the 5PM and a representative double mutant (I51A-Y52A) were introduced into A/WSN/33 M2, a strain that produces spherical particles, similar debilitation in viral growth occurred. Electron microscopy showed that with the 5PM and the I51A-Y52A A/Udorn/72 and WSN viruses, scission failed, and emerging virus particles exhibited a “beads-on-a-string” morphology. The major spike glycoprotein hemagglutinin is localized within lipid rafts in virus-infected cells, whereas M2 is associated at the periphery of rafts. Mutant M2s were more widely dispersed, and their abundance at the raft periphery was reduced, suggesting that the M2 amphipathic helix is required for proper localization in the host membrane and that this has implications for budding and scission. PMID:23843641

  16. Recent advances in nuclear fission theory: pre- and post-scission physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talou, Patrick; Kawano, Toshihiko; Bouland, Olivier

    2010-01-01

    Recent advances in the modeling of the nuclear fission process for data-evaluation purposes are reviewed. In particular, it is stressed that a more comprehensive approach to fission data is needed if predictive capability is to be achieved. The link between pre- and post-scission data is clarified, and a path forward to evaluate those data in a consistent and comprehensive manner is presented. Two examples are given: (i) the modeling of fission cross sections in the R-matrix formalism, for which results for Pu isotopes from 239 to 242 are presented; (ii) the modeling of prompt fission neutrons in the Monte Carlo Hauser-Feshbach framework. Results for neutron-induced fission on 235U are discussed.

  17. Amide Link Scission in the Polyamide Active Layers of Thin-Film Composite Membranes upon Exposure to Free Chlorine: Kinetics and Mechanisms.

    PubMed

    Powell, Joshua; Luh, Jeanne; Coronell, Orlando

    2015-10-20

    The volume-averaged amide link scission in the aromatic polyamide active layer of a reverse osmosis membrane upon exposure to free chlorine was quantified at a variety of free chlorine exposure times, concentrations, and pH and rinsing conditions. The results showed that (i) hydroxyl ions are needed for scission to occur, (ii) hydroxide-induced amide link scission is a strong function of exposure to hypochlorous acid, (iii) the ratio between amide links broken and chlorine atoms taken up increased with the chlorination pH and reached a maximum of ∼25%, (iv) polyamide disintegration occurs when high free chlorine concentrations, alkaline conditions, and high exposure times are combined, (v) amide link scission promotes further chlorine uptake, and (vi) scission at the membrane surface is unrepresentative of volume-averaged scission in the active layer. Our observations are consistent with previously proposed mechanisms describing amide link scission as a result of the hydrolysis of the N-chlorinated amidic N-C bond due to nucleophilic attack by hydroxyl ions. This study increases the understanding of the physicochemical changes that could occur for membranes in treatment plants using chlorine as an upstream disinfectant and the extent and rate at which those changes would occur.

  18. Angular distribution of scission neutrons studied with time-dependent Schrödinger equation

    NASA Astrophysics Data System (ADS)

    Wada, Takahiro; Asano, Tomomasa; Carjan, Nicolae

    2018-03-01

    We investigate the angular distribution of scission neutrons, taking into account the effects of the fission fragments. The time evolution of the scission-neutron wave function is obtained by integrating the time-dependent Schrödinger equation numerically. The effects of the fission fragments are taken into account by means of optical potentials. The angular distribution is strongly modified by the presence of the fragments; in the case of asymmetric fission, the heavy fragment is found to have the stronger effect. The dependence on the initial distribution and on the properties of the fission fragments is discussed. We also discuss the treatment of the boundary to avoid artificial reflections.

  19. Rheological analysis of irradiated crosslinkable and scissionable polymers used for medical devices under different radiation conditions

    NASA Astrophysics Data System (ADS)

    Satti, A. J.; Ressia, J. A.; Cerrada, M. L.; Andreucetti, N. A.; Vallés, E. M.

    2018-03-01

    The effects of distinct types of radiation, gamma rays and electron beam, on different synthetic polymers under different atmospheres are followed through changes in their viscoelastic behavior. Considering the two main radiation-induced reactions, crosslinking and scission of polymeric chains, liquid polydimethylsiloxane was used as an example of a crosslinkable polymer and semicrystalline polypropylene as an example of a scissionable polymer. Propylene/1-hexene copolymers were also evaluated, and the effects of both reactions were clearly noticeable. Accordingly, samples of the aforementioned polymers were irradiated with 60Co gamma rays in air and under vacuum, and also with an electron beam, at similar doses. Sinusoidal dynamic oscillation experiments showed a significant increase in branching and crosslinking reactions when specimens are irradiated under vacuum, while scission reactions were observed for the different polymers when irradiation takes place in air, with either gamma irradiation or electron beam.

  20. Flavonoids with DNA strand-scission activity from Rhus javanica var. roxburghiana.

    PubMed

    Lin, Chun-Nan; Chen, Hui-Ling; Yen, Ming-Hong

    2008-01-01

    The flavonoids isolated from the stems of Rhus javanica var. roxburghiana, namely taxifolin (1), fisetin (2), fustin (3), 3,7,4'-trihydroxyflavanone (4) and 3,7,4'-trihydroxyflavone (5), caused breakage of supercoiled plasmid pBR322 DNA in the presence of Cu(II). Cu(I) was shown to be an essential intermediate by use of the Cu(I)-specific sequestering reagent neocuproine. The Cu(II)-mediated DNA scission induced by 1, 2, 3, and 5 was inhibited by the addition of catalase, while DNA strand breakage persisted upon addition of KI and superoxide dismutase (SOD); the Cu(II)-mediated DNA scission induced by 4, in contrast, was inhibited by the addition of KI, SOD, and catalase. It is concluded that 1, 2, 3, and 5 can induce H2O2 and superoxide anion, while 4 can induce OH* and H2O2, with subsequent oxidative damage of DNA in the presence of Cu(II).

  21. Fabrication of nanobeads from nanocups by controlling scission/crosslinking in organic polymer materials.

    PubMed

    Oyama, Tomoko Gowa; Oshima, Akihiro; Washio, Masakazu; Tagawa, Seiichi

    2012-12-14

    The development of several kinds of micro/nanofabrication techniques has resulted in many innovations in the micro/nanodevices that support today's science and technology. With feature miniaturization, fabrication tools have shifted from light to ionizing radiation. Here, we propose a simple micro/nanofabrication technique for organic materials using a scanning beam (SB) of ionizing radiation, in which the scission/crosslinking of the material is controlled via the three-dimensional energy-deposition distribution of the SB. The technique was demonstrated using a focused ion beam and a chlorinated organic polymer: the polymer underwent main-chain scission upon irradiation but crosslinked after high-dose irradiation, so that appropriate solvents could easily peel off only the crosslinked region from the bulk material. The technique, 'nanobead from nanocup', enabled the production of desired structures such as nanowires and nanomembranes. It can also be applied to the micro/nanofabrication of functional materials.

  22. Polysulfide-Scission Reagents for the Suppression of the Shuttle Effect in Lithium-Sulfur Batteries.

    PubMed

    Hua, Wuxing; Yang, Zhi; Nie, Huagui; Li, Zhongyu; Yang, Jizhang; Guo, Zeqing; Ruan, Chunping; Chen, Xi'an; Huang, Shaoming

    2017-02-28

    Lithium-sulfur batteries have become an appealing candidate for next-generation energy-storage technologies because of their low cost and high energy density. However, one of their major practical problems is the high solubility of long-chain lithium polysulfides and the infamous shuttle effect, which causes low Coulombic efficiency and sulfur loss. Here, we introduce the concept of dithiothreitol (DTT)-assisted scission of polysulfides into the lithium-sulfur system. Our porous carbon nanotube/sulfur cathode, coupled with a lightweight graphene/DTT interlayer (PCNTs-S@Gra/DTT), exhibited ultrahigh cyclability, even at 5 C over 1100 cycles, with a capacity degradation rate of 0.036% per cycle. Additionally, the PCNTs-S@Gra/DTT electrode with a 3.51 mg cm⁻² sulfur mass loading delivered a high initial areal capacity of 5.29 mAh cm⁻² (1509 mAh g⁻¹) at a current density of 0.58 mA cm⁻², and the reversible areal capacity of the cell was maintained at 3.45 mAh cm⁻² (984 mAh g⁻¹) over 200 cycles at a higher current density of 1.17 mA cm⁻². Employing this molecular-scission principle offers a promising avenue toward high-performance lithium-sulfur batteries.

  23. Interchangeable adaptors regulate mitochondrial dynamin assembly for membrane scission

    PubMed Central

    Koirala, Sajjan; Guo, Qian; Kalia, Raghav; Bui, Huyen T.; Eckert, Debra M.; Frost, Adam; Shaw, Janet M.

    2013-01-01

    Mitochondrial fission is mediated by the dynamin-related GTPases Dnm1/Drp1 (yeast/mammals), which form spirals around constricted sites on mitochondria. Additional membrane-associated adaptor proteins (Fis1, Mdv1, Mff, and MiDs) are required to recruit these GTPases from the cytoplasm to the mitochondrial surface. Whether these adaptors participate in both GTPase recruitment and membrane scission is not known. Here we use a yeast strain lacking all fission proteins to identify the minimal combinations of GTPases and adaptors sufficient for mitochondrial fission. Although Fis1 is dispensable for fission, membrane-anchored Mdv1, Mff, or MiDs paired individually with their respective GTPases are sufficient to divide mitochondria. In addition to their role in Drp1 membrane recruitment, MiDs coassemble with Drp1 in vitro. The resulting heteropolymer adopts a dramatically different structure with a narrower diameter than Drp1 homopolymers assembled in isolation. This result demonstrates that an adaptor protein alters the architecture of a mitochondrial dynamin GTPase polymer in a manner that could facilitate membrane constriction and severing activity. PMID:23530241

  4. Biodegradation of bis(1-chloro-2-propyl) ether via initial ether scission and subsequent dehalogenation by Rhodococcus sp. strain DTB.

    PubMed

    Moreno Horn, Marcus; Garbe, Leif-Alexander; Tressl, Roland; Adrian, Lorenz; Görisch, Helmut

    2003-04-01

    Rhodococcus sp. strain DTB (DSM 44534) grows on bis(1-chloro-2-propyl) ether (DDE) as sole source of carbon and energy. The non-chlorinated diisopropyl ether and bis(1-hydroxy-2-propyl) ether, however, did not serve as substrates. In ether degradation experiments with dense cell suspensions, 1-chloro-2-propanol and chloroacetone were formed, which indicated that scission of the ether bond is the first step while dehalogenation of the chlorinated C(3)-compounds occurs at a later stage of the degradation pathway. Inhibition of ether scission by methimazole suggested that the first step in degradation is catalyzed by a flavin-dependent enzyme activity. The non-chlorinated compounds 1,2-propanediol, hydroxyacetone, lactate, pyruvate, 1-propanol, propanal, and propionate also supported growth, which suggested that the intermediates 1,2-propanediol and hydroxyacetone are converted to pyruvate or to propionate, which can be channeled into the citric acid cycle by a number of routes. Total release of chloride and growth-yield experiments with DDE and non-chlorinated C(3)-compounds suggested complete biodegradation of the chlorinated ether.

  5. Deactivation of Ceria Supported Palladium through C–C Scission during Transfer Hydrogenation of Phenol with Alcohols

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, Nicholas C.; Manzano, J. Sebastián; Slowing, Igor I.

    The stability of palladium supported on ceria (Pd/CeO2) was studied during liquid flow transfer hydrogenation using primary and secondary alcohols as hydrogen donors. For primary alcohols, the ceria support was reduced to cerium hydroxy carbonate within 14 h and was a contributing factor toward catalyst deactivation. For secondary alcohols, cerium hydroxy carbonate was not observed during the same time period and the catalyst was stable upon prolonged reaction. Regeneration through oxidation/reduction does not restore initial activity, likely due to irreversible catalyst restructuring. Lastly, a deactivation mechanism involving C–C scission of acyl and carboxylate intermediates is proposed.

  6. Deactivation of Ceria Supported Palladium through C–C Scission during Transfer Hydrogenation of Phenol with Alcohols

    DOE PAGES

    Nelson, Nicholas C.; Manzano, J. Sebastián; Slowing, Igor I.

    2016-11-21

    The stability of palladium supported on ceria (Pd/CeO2) was studied during liquid flow transfer hydrogenation using primary and secondary alcohols as hydrogen donors. For primary alcohols, the ceria support was reduced to cerium hydroxy carbonate within 14 h and was a contributing factor toward catalyst deactivation. For secondary alcohols, cerium hydroxy carbonate was not observed during the same time period and the catalyst was stable upon prolonged reaction. Regeneration through oxidation/reduction does not restore initial activity, likely due to irreversible catalyst restructuring. Lastly, a deactivation mechanism involving C–C scission of acyl and carboxylate intermediates is proposed.

  7. Model Breaking Points Conceptualized

    ERIC Educational Resources Information Center

    Vig, Rozy; Murray, Eileen; Star, Jon R.

    2014-01-01

    Current curriculum initiatives (e.g., National Governors Association Center for Best Practices and Council of Chief State School Officers 2010) advocate that models be used in the mathematics classroom. However, despite their apparent promise, there comes a point when models break: a point in the mathematical problem space where the model cannot…

  8. Dynamin recruitment and membrane scission at the neck of a clathrin-coated pit.

    PubMed

    Cocucci, Emanuele; Gaudin, Raphaël; Kirchhausen, Tom

    2014-11-05

    Dynamin, the GTPase required for clathrin-mediated endocytosis, is recruited to clathrin-coated pits in two sequential phases. The first is associated with coated pit maturation; the second, with fission of the membrane neck of a coated pit. Using gene-edited cells that express dynamin2-EGFP instead of dynamin2 and live-cell TIRF imaging with single-molecule EGFP sensitivity and high temporal resolution, we detected the arrival of dynamin at coated pits and defined dynamin dimers as the preferred assembly unit. We also used live-cell spinning-disk confocal microscopy calibrated by single-molecule EGFP detection to determine the number of dynamins recruited to the coated pits. A large fraction of budding coated pits recruit between 26 and 40 dynamins (between 1 and 1.5 helical turns of a dynamin collar) during the recruitment phase associated with neck fission; 26 are enough for coated vesicle release in cells partially depleted of dynamin by RNA interference. We discuss how these results restrict models for the mechanism of dynamin-mediated membrane scission. © 2014 Cocucci et al. This article is distributed by The American Society for Cell Biology under license from the author(s). Two months after publication it is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  9. Chain scission and anti fungal effect of electron beam on cellulose membrane

    NASA Astrophysics Data System (ADS)

    Wanichapichart, Pikul; Taweepreeda, Wirach; Nawae, Safitree; Choomgan, Pastraporn; Yasenchak, Dan

    2012-08-01

    Two types of bacterial cellulose (BC) membranes were produced under a modified H&S medium using sucrose as a carbon source, with (CCB) and without (SHB) a coconut-juice supplement. Both membranes showed similar crystallinities of 69.24% and 71.55%. After irradiation with electron beams under oxygen-limited and ambient conditions, water contact angle measurements showed that only the CCB membrane increased, from 30 to 40 degrees, with irradiation under ambient oxygen giving the greatest value. Compared with the control membranes, water flux was smaller after electron-beam irradiation, indicating a reduction of membrane pore area. However, molecular weight cut-off (MWCO) results revealed that chain scission was greater for membrane SHB, whose cut-off increased from 28,000 Da to more than 35,000 Da. FTIR analysis revealed changes in membrane functional groups consistent with the above results. These changes gave the cellulose membranes a new property, making them suitable as an anti-fungal food wrap.

  10. DNA strand scission induced by adriamycin and aclacinomycin A.

    PubMed

    Someya, A; Tanaka, N

    1979-08-01

    The binding of adriamycin and aclacinomycin A to PM2 DNA, and the consequent cleavage of DNA, have been demonstrated by agarose gel electrophoresis using an ethidium bromide assay. Adriamycin was observed to induce single-strand scission of DNA in the presence of a reducing agent, whereas aclacinomycin A caused a much lower degree of DNA breakage. The DNA cleavage was enhanced by Cu2+ and Fe2+, but not significantly by Ni2+, Zn2+, Mg2+, or Ca2+, suggesting that reduction and auto-oxidation of the quinone moiety and H2O2 production participate in the DNA-cutting effect. The DNA degradation was dependent on the concentrations of the anthracyclines and CuCl2. The degree of DNA cleavage at 0.04 mM adriamycin was similar to that at 0.4 mM aclacinomycin A in the presence of 1 mM NADPH and 0.4 mM CuCl2. DNA was degraded to small fragments at 0.4 mM adriamycin and 0.2 mM CuCl2. The anthracycline-induced DNA cleavage was stimulated by H2O2 but partially inhibited by potassium iodide, superoxide dismutase, catalase, and a nitrogen atmosphere. The results suggested that both the free radicals of the anthracycline quinones and hydroxyl radicals react directly with DNA strands.

  11. Monte Carlo based toy model for fission process

    NASA Astrophysics Data System (ADS)

    Kurniadi, R.; Waris, A.; Viridi, S.

    2014-09-01

    There are many models and calculation techniques for obtaining a visible image of the fission yield process. In particular, fission yields can be calculated using two approaches, namely a macroscopic approach and a microscopic approach. This work proposes another approach in which the nucleus is treated as a toy model; hence, the fission process does not completely represent the real fission process in nature. The toy model is formed by a Gaussian distribution of random numbers that randomizes distances, such as the distance between a particle and a central point. The scission process is started by smashing the compound-nucleus central point into two parts, namely the left and right central points. These three points have different Gaussian distribution parameters, such as means (μCN, μL, μR) and standard deviations (σCN, σL, σR). By overlaying the three distributions, the numbers of particles (NL, NR) trapped by the central points can be obtained. This process is iterated until (NL, NR) become constant. The smashing process is then repeated with σL and σR changed randomly.
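    The smashing-and-trapping procedure described above can be sketched in a few lines of Python. This is a loose illustration, not the authors' code: the particle number, centre positions, and the distributions from which the fragment widths are re-drawn are all invented for the example.

    ```python
    import math
    import random

    def smash(n_particles, mu_l, mu_r, sigma_l, sigma_r, rng):
        """One smashing event: particles drawn from the compound-nucleus
        Gaussian are trapped by the fragment centre with the larger weight."""
        positions = [rng.gauss(0.0, 6.0) for _ in range(n_particles)]  # sigma_CN = 6 (assumed)

        def weight(x, mu, sigma):
            # Gaussian density (up to a constant factor) of a fragment centre at x
            return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / sigma

        n_left = sum(1 for x in positions
                     if weight(x, mu_l, sigma_l) >= weight(x, mu_r, sigma_r))
        return n_left, n_particles - n_left

    def yield_distribution(n_events=2000, n_particles=236, seed=42):
        """Repeat the smashing step with randomly re-drawn fragment widths
        (sigma_L, sigma_R) and histogram the left-fragment particle numbers."""
        rng = random.Random(seed)
        counts = {}
        for _ in range(n_events):
            sigma_l = abs(rng.gauss(3.0, 0.8)) + 0.1
            sigma_r = abs(rng.gauss(3.0, 0.8)) + 0.1
            n_left, _ = smash(n_particles, -4.0, 4.0, sigma_l, sigma_r, rng)
            counts[n_left] = counts.get(n_left, 0) + 1
        return counts
    ```

    Histogramming `n_left` over many events gives a toy mass-yield curve in the same spirit as the record's repeated smashing with randomized σL, σR.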

  12. Anisotropic pyrochemical microetching of poly(tetrafluoroethylene) initiated by synchrotron radiation-induced scission of molecule bonds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamaguchi, Akinobu, E-mail: yamaguti@lasti.u-hyogo.ac.jp; Kido, Hideki; Utsumi, Yuichi, E-mail: utsumi@lasti.u-hyogo.ac.jp

    2016-02-01

    We developed a process for micromachining polytetrafluoroethylene (PTFE): anisotropic pyrochemical microetching induced by synchrotron X-ray irradiation. X-ray irradiation was performed at room temperature. Upon heating, the irradiated PTFE substrates exhibited high-precision features. Both the X-ray diffraction peak and the Raman signal from the irradiated areas of the substrate decreased with increasing irradiation dose. The etching mechanism is speculated to be as follows: X-ray irradiation caused chain scission, which decreased the number-average degree of polymerization. The melting temperature of irradiated PTFE decreased as the polymer chain length decreased, enabling the treated regions to melt at a lower temperature. The anisotropic pyrochemical etching process enabled the fabrication of PTFE microstructures with higher precision than simultaneously heating and irradiating the sample.

  13. Smooth random change point models.

    PubMed

    van den Hout, Ardo; Muniz-Terrera, Graciela; Matthews, Fiona E

    2011-03-15

    Change point models are used to describe processes over time that show a change in direction. An example of such a process is cognitive ability, where a decline a few years before death is sometimes observed. A broken-stick model consists of two linear parts and a breakpoint where the two lines intersect. Alternatively, models can be formulated that imply a smooth change between the two linear parts. Change point models can be extended by adding random effects to account for variability between subjects. A new smooth change point model is introduced and examples are presented that show how change point models can be estimated using functions in R for mixed-effects models. The Bayesian inference using WinBUGS is also discussed. The methods are illustrated using data from a population-based longitudinal study of ageing, the Cambridge City over 75 Cohort Study. The aim is to identify how many years before death individuals experience a change in the rate of decline of their cognitive ability. Copyright © 2010 John Wiley & Sons, Ltd.
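    As a minimal illustration of the broken-stick idea above (two linear parts meeting at a breakpoint), the following sketch fits the breakpoint by grid search over candidate locations, solving ordinary least squares at each. It is a toy fixed-effects version only: the random effects, smooth transition, and Bayesian estimation discussed in the record are omitted, and all names and values are hypothetical.

    ```python
    def fit_broken_stick(t, y, grid):
        """Fit y = b0 + b1*t + b2*max(0, t - tau) by grid search over the
        breakpoint tau, solving the 3x3 normal equations at each candidate."""
        def solve3(A, b):
            # Gaussian elimination with partial pivoting for a 3x3 system
            M = [row[:] + [bi] for row, bi in zip(A, b)]
            for col in range(3):
                piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
                M[col], M[piv] = M[piv], M[col]
                for r in range(col + 1, 3):
                    f = M[r][col] / M[col][col]
                    for c in range(col, 4):
                        M[r][c] -= f * M[col][c]
            x = [0.0] * 3
            for r in (2, 1, 0):
                x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
            return x

        best = None
        for tau in grid:
            X = [[1.0, ti, max(0.0, ti - tau)] for ti in t]
            XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
            Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(3)]
            beta = solve3(XtX, Xty)
            sse = sum((yi - (beta[0] + beta[1] * ti
                             + beta[2] * max(0.0, ti - tau))) ** 2
                      for ti, yi in zip(t, y))
            if best is None or sse < best[0]:
                best = (sse, tau, beta)
        return best[1], best[2]  # breakpoint estimate, (b0, b1, b2)
    ```

    In practice one would use the R mixed-effects machinery the record refers to; the grid search here only conveys how the breakpoint enters the model.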

  14. DFT investigations of phosphotriesters hydrolysis in aqueous solution: a model for DNA single strand scission induced by N-nitrosoureas.

    PubMed

    Liu, Tingting; Zhao, Lijiao; Zhong, Rugang

    2013-02-01

    DNA phosphotriester adducts are common alkylation products of the DNA phosphodiester moiety induced by N-nitrosoureas. The 2-hydroxyethyl phosphotriester was reported to hydrolyze more rapidly than other alkyl phosphotriesters under both neutral and alkaline conditions, which can cause DNA single-strand scission. In this work, DFT calculations have been employed to map out the four lowest activation free-energy profiles for neutral and alkaline hydrolysis of triethyl phosphate (TEP) and diethyl 2-hydroxyethyl phosphate (DEHEP). All the hydrolysis pathways were shown to be stepwise, involving an acyclic or cyclic phosphorane intermediate for TEP or DEHEP, respectively. The rate-limiting step for all the hydrolysis reactions was found to be the formation of the phosphorane intermediate, with the exception of DEHEP hydrolysis under alkaline conditions, for which the decomposition process turned out to be rate-limiting owing to the extraordinarily low formation barrier of the cyclic phosphorane intermediate catalyzed by hydroxide. The rate-limiting barriers obtained for the four reactions are all consistent with the available experimental information concerning the corresponding hydrolysis reactions of phosphotriesters. Our calculations predict that the lower formation barriers of cyclic phosphorane intermediates compared with their acyclic counterparts should be the dominant factor governing the hydrolysis rate enhancement of DEHEP relative to TEP under both neutral and alkaline conditions.

  15. Critical Role of Water and Oxygen Defects in C-O Scission during CO2 Reduction on Zn2GeO4(010).

    PubMed

    Yang, Jing; Li, Yanlu; Zhao, Xian; Fan, Weiliu

    2018-03-27

    Exploration of catalyst structure and environmental sensitivity for C-O bond scission is essential for improving conversion efficiency because of the inertness of CO2. We performed density functional theory calculations to understand the influence of the properties of adsorbed water, and its interplay with oxygen vacancies, on the CO2 dissociation mechanism on Zn2GeO4(010). When a perfect surface was hydrated, the introduction of H2O was predicted to promote the scission step by two modes depending on its form, with the greatest enhancement from dissociatively adsorbed H2O. The dissociated H2O lowers the barrier and reaction energy of CO2 dissociation through hydrogen bonding, preactivating the C-O bond and assisting scission via a COOH intermediate. The perfect surface with bidentate-binding H2O was energetically more favorable for CO2 dissociation than the surface with monodentate-binding H2O: direct dissociation was energetically favored by the former, whereas monodentate H2O facilitated the H-assisted pathway. The defective surface exhibited a higher reactivity for CO2 decomposition than the perfect surface because the generation of oxygen vacancies could disperse the product location. When the defective surface was hydrated, the interplay between the vacancy and surface H2O in CO2 dissociation depended on the vacancy type. The presence of H2O substantially decreased the reaction energy for the direct dissociation of CO2 on O2c1- and O3c2-defect surfaces, converting the endoergic reaction to an exoergic one. However, the increased decomposition barrier made this step kinetically unfavorable and reduced the reaction rate. When H2O was present on the O2c2-defect surface, both the barrier and reaction energy for direct dissociation were unchanged, indicating that the introduction of H2O had little effect on the kinetics and thermodynamics. Moreover, the H-assisted pathway was suppressed on all defective surfaces.

  16. An Empirical Point Error Model for TLS Derived Point Clouds

    NASA Astrophysics Data System (ADS)

    Ozendi, Mustafa; Akca, Devrim; Topan, Hüseyin

    2016-06-01

    The random error pattern of point clouds has a significant effect on the quality of the final 3D model, so the magnitude and distribution of random errors should be modelled numerically. This work aims at developing such an anisotropic point error model, specifically for terrestrial laser scanner (TLS) acquired 3D point clouds. A priori precisions of the basic TLS observations, which are the range, horizontal angle and vertical angle, are determined by predefined, practical measurement configurations performed in real-world test environments. The a priori precisions of the horizontal (σθ) and vertical (σα) angles are constant for each point of a data set, and can be determined directly through repetitive scanning of the same environment. In our practical tests, the precisions of the horizontal and vertical angles were found to be σθ = ±36.6cc and σα = ±17.8cc, respectively. On the other hand, the a priori precision of the range observation (σρ) is assumed to be a function of range, incidence angle of the incoming laser ray, and reflectivity of the object surface. Hence it is a variable, computed for each point individually by an empirically developed formula, varying as σρ = ±2-12 mm for a FARO Focus X330 laser scanner. This procedure was followed by computation of the error ellipsoid of each point using the law of variance-covariance propagation. The direction and size of the error ellipsoids were computed by the principal components transformation. The usability and feasibility of the model was investigated in real-world scenarios. These investigations validated the suitability and practicality of the proposed method.
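    The variance-covariance propagation step can be sketched generically. Assuming a conventional spherical-to-Cartesian parameterization x = ρ cos α cos θ, y = ρ cos α sin θ, z = ρ sin α (the record does not give its exact convention), the per-point error ellipsoid is the propagated 3×3 covariance C_xyz = J C_obs Jᵀ; angles and their precisions are taken in radians here.

    ```python
    import math

    def tls_point_covariance(rho, theta, alpha, s_rho, s_theta, s_alpha):
        """Propagate a priori precisions of range rho, horizontal angle theta,
        and vertical angle alpha (radians) into the 3x3 Cartesian covariance
        matrix describing the point's error ellipsoid."""
        ca, sa = math.cos(alpha), math.sin(alpha)
        ct, st = math.cos(theta), math.sin(theta)
        # Jacobian of (x, y, z) with respect to (rho, theta, alpha)
        J = [[ca * ct, -rho * ca * st, -rho * sa * ct],
             [ca * st,  rho * ca * ct, -rho * sa * st],
             [sa,       0.0,            rho * ca]]
        # Diagonal covariance of the raw observations
        C = [[s_rho ** 2, 0.0, 0.0],
             [0.0, s_theta ** 2, 0.0],
             [0.0, 0.0, s_alpha ** 2]]
        # C_xyz = J C J^T (law of variance-covariance propagation)
        JC = [[sum(J[i][k] * C[k][j] for k in range(3)) for j in range(3)]
              for i in range(3)]
        return [[sum(JC[i][k] * J[j][k] for k in range(3)) for j in range(3)]
                for i in range(3)]
    ```

    The eigenvectors and eigenvalues of the returned matrix (the principal components transformation mentioned in the record) give the orientation and semi-axes of the error ellipsoid.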

  17. Model for Semantically Rich Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Poux, F.; Neuville, R.; Hallot, P.; Billen, R.

    2017-10-01

    This paper proposes an interoperable model for managing high-dimensional point clouds while integrating semantics. Point clouds from sensors are a direct source of information physically describing a 3D state of the recorded environment. As such, they are an exhaustive representation of the real world at every scale: 3D reality-based spatial data. Their generation is increasingly fast, but processing routines and data models lack the knowledge needed to reason from information extraction rather than interpretation. The smart point cloud model developed here brings intelligence to point clouds via three connected meta-models, linking available knowledge and classification procedures to permit semantic injection. Interoperability drives the model's adaptation to potentially many applications through specialized domain ontologies. A first prototype, implemented in Python with a PostgreSQL database, allows combining semantic and spatial concepts for basic hybrid queries on different point clouds.
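    The paper's prototype runs such hybrid queries in PostgreSQL; as a purely illustrative in-memory stand-in (not the authors' schema or API), combining a semantic predicate with a spatial one might look like:

    ```python
    def hybrid_query(points, label, bbox):
        """Toy hybrid query: select points carrying a given semantic label
        that also fall inside an axis-aligned bounding box
        (xmin, ymin, zmin, xmax, ymax, zmax). Point format: (x, y, z, label)."""
        xmin, ymin, zmin, xmax, ymax, zmax = bbox
        return [p for p in points
                if p[3] == label
                and xmin <= p[0] <= xmax
                and ymin <= p[1] <= ymax
                and zmin <= p[2] <= zmax]
    ```

    In the database-backed version, the semantic filter would resolve through the meta-models and ontology rather than a flat label field.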

  18. Catalytic strategy for carbon−carbon bond scission by the cytochrome P450 OleT

    PubMed Central

    Grant, Job L.; Mitchell, Megan E.; Makris, Thomas Michael

    2016-01-01

    OleT is a cytochrome P450 that catalyzes the hydrogen peroxide-dependent metabolism of Cn chain-length fatty acids to synthesize Cn-1 1-alkenes. The decarboxylation reaction provides a route for the production of drop-in hydrocarbon fuels from a renewable and abundant natural resource. This transformation is highly unusual for a P450, which typically uses an Fe4+−oxo intermediate known as compound I for the insertion of oxygen into organic substrates. OleT, previously shown to form compound I, catalyzes a different reaction. A large substrate kinetic isotope effect (≥8) for OleT compound I decay confirms that, like monooxygenation, alkene formation is initiated by substrate C−H bond abstraction. Rather than finalizing the reaction through rapid oxygen rebound, alkene synthesis proceeds through the formation of a reaction cycle intermediate with kinetics, optical properties, and reactivity indicative of an Fe4+−OH species, compound II. The direct observation of this intermediate, normally fleeting in hydroxylases, provides a rationale for the carbon−carbon scission reaction catalyzed by OleT. PMID:27555591

  19. Inferring Models of Bacterial Dynamics toward Point Sources

    PubMed Central

    Jashnsaz, Hossein; Nguyen, Tyler; Petrache, Horia I.; Pressé, Steve

    2015-01-01

    Experiments have shown that bacteria can be sensitive to small variations in chemoattractant (CA) concentrations. Motivated by these findings, our focus here is on a regime rarely studied in experiments: bacteria tracking point CA sources (such as food patches or even prey). In tracking point sources, the CA detected by bacteria may show very large spatiotemporal fluctuations which vary with distance from the source. We present a general statistical model to describe how bacteria locate point sources of food on the basis of stochastic event detection, rather than CA gradient information. We show how all model parameters can be directly inferred from single cell tracking data even in the limit of high detection noise. Once parameterized, our model recapitulates bacterial behavior around point sources such as the “volcano effect”. In addition, while the search by bacteria for point sources such as prey may appear random, our model identifies key statistical signatures of a targeted search for a point source given any arbitrary source configuration. PMID:26466373

  20. Accuracy limit of rigid 3-point water models

    NASA Astrophysics Data System (ADS)

    Izadi, Saeed; Onufriev, Alexey V.

    2016-08-01

    Classical 3-point rigid water models are most widely used due to their computational efficiency. Recently, we introduced a new approach to constructing classical rigid water models [S. Izadi et al., J. Phys. Chem. Lett. 5, 3863 (2014)], which permits a virtually exhaustive search for globally optimal model parameters in the sub-space that is most relevant to the electrostatic properties of the water molecule in liquid phase. Here we apply the approach to develop a 3-point Optimal Point Charge (OPC3) water model. OPC3 is significantly more accurate than the commonly used water models of same class (TIP3P and SPCE) in reproducing a comprehensive set of liquid bulk properties, over a wide range of temperatures. Beyond bulk properties, we show that OPC3 predicts the intrinsic charge hydration asymmetry (CHA) of water — a characteristic dependence of hydration free energy on the sign of the solute charge — in very close agreement with experiment. Two other recent 3-point rigid water models, TIP3PFB and H2ODC, each developed by its own, completely different optimization method, approach the global accuracy optimum represented by OPC3 in both the parameter space and accuracy of bulk properties. Thus, we argue that an accuracy limit of practical 3-point rigid non-polarizable models has effectively been reached; remaining accuracy issues are discussed.

  1. Quest for consistent modelling of statistical decay of the compound nucleus

    NASA Astrophysics Data System (ADS)

    Banerjee, Tathagata; Nath, S.; Pal, Santanu

    2018-01-01

    A statistical model description of heavy-ion-induced fusion-fission reactions is presented in which shell effects, collective enhancement of the level density, the tilting-away effect of compound nuclear spin, and dissipation are included. It is shown that the inclusion of all these effects provides a consistent picture of fission in which fission hindrance is required to explain the experimental values of both pre-scission neutron multiplicities and evaporation residue cross-sections, in contrast to some earlier works in which fission hindrance was required for pre-scission neutrons but fission enhancement for evaporation residue cross-sections.

  2. Assimilating Flow Data into Complex Multiple-Point Statistical Facies Models Using Pilot Points Method

    NASA Astrophysics Data System (ADS)

    Ma, W.; Jafarpour, B.

    2017-12-01

    We develop a new pilot points method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, calibration against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place pilot points, a score map is generated from three sources of information: (i) the uncertainty in facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and are used, along with available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) and its multiple-data-assimilation variant (ES-MDA) are adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at select locations away from the wells and the latter to ensure consistent facies structure and connectivity away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.
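    The score-map step, combining the three information sources into pilot point locations, can be sketched as follows. The linear weighting and its coefficients are assumptions for illustration; the abstract does not specify how the three maps are combined.

    ```python
    def place_pilot_points(uncertainty, sensitivity, data_mismatch, k):
        """Combine three normalised maps into a score map and pick the k
        highest-scoring cells as pilot point locations. Each input is a dict
        mapping a grid cell (i, j) to a value in [0, 1]."""
        w_unc, w_sens, w_data = 0.4, 0.3, 0.3  # hypothetical weights
        score = {cell: w_unc * uncertainty[cell]
                       + w_sens * sensitivity[cell]
                       + w_data * data_mismatch[cell]
                 for cell in uncertainty}
        # Cells where facies are uncertain, the response is sensitive, and
        # data mismatch is large score highest and receive pilot points.
        return sorted(score, key=score.get, reverse=True)[:k]
    ```

    The facies values at the returned cells would then be estimated with ES/ES-MDA as described above.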

  3. Modeling hard clinical end-point data in economic analyses.

    PubMed

    Kansal, Anuraag R; Zheng, Ying; Palencia, Roberto; Ruffolo, Antonio; Hass, Bastian; Sorensen, Sonja V

    2013-11-01

    The availability of hard clinical end-point data, such as that on cardiovascular (CV) events among patients with type 2 diabetes mellitus, is increasing, and as a result there is growing interest in using hard end-point data of this type in economic analyses. This study investigated published approaches for modeling hard end-points from clinical trials and evaluated their applicability in health economic models with different disease features. A review of cost-effectiveness models of interventions in clinically significant therapeutic areas (CV diseases, cancer, and chronic lower respiratory diseases) was conducted in PubMed and Embase using a defined search strategy. Only studies integrating hard end-point data from randomized clinical trials were considered. For each study included, the clinical input characteristics and modeling approach were summarized and evaluated. A total of 33 articles (23 CV, eight cancer, two respiratory) were accepted for detailed analysis. Decision trees, Markov models, discrete event simulations, and hybrids were used. Event rates were incorporated either as constant rates, time-dependent risks, or risk equations based on patient characteristics. Risks dependent on time and/or patient characteristics were used where major event rates were >1%/year in models with fewer health states (<7). Models of infrequent events or with numerous health states generally preferred constant event rates. The detailed modeling information and terminology varied, sometimes requiring interpretation. Key considerations for cost-effectiveness models incorporating hard end-point data include the frequency and characteristics of the relevant clinical events and how the trial data are reported. When event risk is low, simplification of both the model structure and the event rate modeling is recommended. When event risk is common, such as in high-risk populations, more detailed modeling approaches, including individual simulations or explicitly time-dependent event rates, are recommended.
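    The "constant event rate" option discussed above can be made concrete with a minimal three-state cohort (Markov) model. The states, rates, and cycle length below are invented for illustration; the per-cycle probability follows the standard rate-to-probability conversion p = 1 − exp(−r·t).

    ```python
    import math

    def rate_to_prob(rate_per_year, cycle_years):
        """Convert a constant annual event rate to a per-cycle probability."""
        return 1.0 - math.exp(-rate_per_year * cycle_years)

    def cohort_model(n_cycles, cycle_years, event_rate, death_rate):
        """Three-state cohort model (well -> post-event -> dead) with constant
        rates; returns the state occupancy after each cycle."""
        well, post, dead = 1.0, 0.0, 0.0
        p_event = rate_to_prob(event_rate, cycle_years)
        p_death = rate_to_prob(death_rate, cycle_years)
        history = []
        for _ in range(n_cycles):
            new_dead = dead + (well + post) * p_death
            new_post = post * (1 - p_death) + well * (1 - p_death) * p_event
            new_well = well * (1 - p_death) * (1 - p_event)
            well, post, dead = new_well, new_post, new_dead
            history.append((well, post, dead))
        return history
    ```

    The time-dependent alternative recommended for common events would replace the fixed `p_event` with a probability recomputed each cycle from a risk equation.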

  4. Approximate Model for Turbulent Stagnation Point Flow.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dechant, Lawrence

    2016-01-01

    Here we derive an approximate turbulent self-similar model for a class of favorable-pressure-gradient wedge-like flows, focusing on the stagnation point limit. While the self-similar model provides a useful gross flow-field estimate, it must be combined with a near-wall model to determine the skin friction and, by Reynolds analogy, the heat transfer coefficient. The combined approach is developed in detail for the stagnation point flow problem, where turbulent skin friction and Nusselt number results are obtained. Comparison to the classical Van Driest (1958) result suggests overall reasonable agreement. Though the model is only valid near the stagnation region of cylinders and spheres, it nonetheless provides a reasonable model for overall cylinder and sphere heat transfer. The enhancement effect of free-stream turbulence upon the laminar flow is used to derive a similar expression valid for turbulent flow. Examination of enhanced laminar flow suggests that, rather than an enhancement of laminar flow behavior, free-stream disturbances result in early transition to turbulent stagnation point behavior. Excellent agreement is shown between enhanced laminar flow and turbulent flow behavior for high levels, e.g. 5%, of free-stream turbulence. Finally, the blunt-body turbulent stagnation results are shown to provide realistic heat transfer results for turbulent jet impingement problems.

  5. Shock response of 1,3,5-trinitroperhydro-1,3,5-triazine (RDX): The C-N bond scission studied by molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Yuan, Jiao-Nan; Wei, Yong-Kai; Zhang, Xiu-Qing; Chen, Xiang-Rong; Ji, Guang-Fu; Kotni, Meena Kumari; Wei, Dong-Qing

    2017-10-01

    The shock response has a great influence on the design, synthesis, and application of energetic materials in both industrial and military areas. Therefore, the initial decomposition mechanism of bond scission at the atomistic level in condensed-phase α-RDX under shock loading has been studied using quantum molecular dynamics simulations combined with a multi-scale shock technique. First, based on frontier molecular orbital theory, our calculations show that the N-NO2 bond is the weakest bond in the α-RDX molecule in the ground state, which may be the initial bond for pyrolysis. Second, the changes of bonds under shock loading are investigated through the changes of structures, dynamic bond lengths, and Laplacian bond orders during the simulation. The variation of thermodynamic properties with time in α-RDX shocked at 10 km/s along the lattice vector a over a timescale of up to 3.5 ps is also presented. By analyzing the detailed structural changes of RDX under shock loading, we find that the shocked RDX crystal undergoes a process of compression and rotation, which leads to initial rupture of the C-N bond. The time variation of dynamic bond lengths in the shocked RDX crystal indicates that the C-N bond ruptures more easily than other bonds. The Laplacian bond orders are used to predict molecular reactivity and stability, and their calculated values show that the C-N bonds are more sensitive than other bonds under shock loading. In summary, C-N bond scission has been validated as the initial decomposition step in an RDX crystal shocked at 10 km/s. Finally, a bond-length criterion has been used to identify individual molecules in the simulation. The distance thresholds up to which two particles are considered direct neighbors and assigned to the same cluster have been tested. The species and number densities of the initial decomposition products are collected from the trajectory.
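    The bond-length criterion used to identify individual molecules amounts to single-linkage clustering with a distance threshold. A generic sketch (not the authors' code) using a union-find structure:

    ```python
    import math

    def identify_molecules(atoms, threshold):
        """Cluster atoms into molecules: two atoms closer than the distance
        threshold are direct neighbours and belong to the same cluster.
        atoms: list of (x, y, z) coordinates."""
        n = len(atoms)
        parent = list(range(n))

        def find(i):
            # find the cluster root, with path halving
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i

        def union(i, j):
            ri, rj = find(i), find(j)
            if ri != rj:
                parent[ri] = rj

        for i in range(n):
            for j in range(i + 1, n):
                if math.dist(atoms[i], atoms[j]) < threshold:
                    union(i, j)

        molecules = {}
        for i in range(n):
            molecules.setdefault(find(i), []).append(i)
        return list(molecules.values())
    ```

    A production analysis would also apply periodic boundary conditions and per-element thresholds; both are omitted here for brevity.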

  6. A point particle model of lightly bound skyrmions

    NASA Astrophysics Data System (ADS)

    Gillard, Mike; Harland, Derek; Kirk, Elliot; Maybee, Ben; Speight, Martin

    2017-04-01

    A simple model of the dynamics of lightly bound skyrmions is developed in which skyrmions are replaced by point particles, each carrying an internal orientation. The model accounts well for the static energy minimizers of baryon number 1 ≤ B ≤ 8 obtained by numerical simulation of the full field theory. For 9 ≤ B ≤ 23, a large number of static solutions of the point particle model are found, all closely resembling size B subsets of a face centred cubic lattice, with the particle orientations dictated by a simple colouring rule. Rigid body quantization of these solutions is performed, and the spin and isospin of the corresponding ground states extracted. As part of the quantization scheme, an algorithm to compute the symmetry group of an oriented point cloud, and to determine its corresponding Finkelstein-Rubinstein constraints, is devised.

  7. Earth observing system instrument pointing control modeling for polar orbiting platforms

    NASA Technical Reports Server (NTRS)

    Briggs, H. C.; Kia, T.; Mccabe, S. A.; Bell, C. E.

    1987-01-01

    An approach to instrument pointing control performance assessment for large multi-instrument platforms is described. First, instrument pointing requirements and reference platform control systems for the Eos Polar Platforms are reviewed. Performance modeling tools are then described, including NASTRAN models of two large platforms, a modal selection procedure utilizing a balanced realization method, and reduced-order platform models with core and instrument pointing control loops added. Time history simulations of instrument pointing and stability performance in response to commanded slewing of adjacent instruments demonstrate the limits of tolerable slew activity. Simplified models of rigid body responses are also developed for comparison. Instrument pointing control methods required in addition to the core platform control system to meet instrument pointing requirements are considered.

  8. Development and evaluation of spatial point process models for epidermal nerve fibers.

    PubMed

    Olsbo, Viktor; Myllymäki, Mari; Waller, Lance A; Särkkä, Aila

    2013-06-01

    We propose two spatial point process models for the spatial structure of epidermal nerve fibers (ENFs) across human skin. The models derive from two point processes, Φb and Φe, describing the locations of the base and end points of the fibers. Each point of Φe (the end point process) is connected to a unique point in Φb (the base point process). In the first model, both Φe and Φb are Poisson processes, yielding a null model of uniform coverage of the skin by end points and general baseline results and reference values for moments of key physiologic indicators. The second model provides a mechanistic model to generate end points for each base, and we model the branching structure more directly by defining Φe as a cluster process conditioned on the realization of Φb as its parent points. In both cases, we derive distributional properties for observable quantities of direct interest to neurologists such as the number of fibers per base, and the direction and range of fibers on the skin. We contrast both models by fitting them to data from skin blister biopsy images of ENFs and provide inference regarding physiological properties of ENFs. Copyright © 2013 Elsevier Inc. All rights reserved.
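The second (cluster-process) model lends itself to a short simulation sketch: base points follow a homogeneous Poisson process on a window, and each base independently spawns a Poisson number of end points displaced around it, so every end point has a unique parent base. All rates and spreads below are illustrative placeholders, not values fitted in the paper:

```python
import math
import random

random.seed(2)

def poisson(lam):
    """Draw from Poisson(lam) by Knuth's multiplication method."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def simulate_enf(region=1000.0, base_rate=50e-6, mean_ends=3.0, spread=20.0):
    """Sketch of the cluster model: base points form a Poisson process on a
    `region` x `region` window; each base spawns a Poisson(mean_ends) number
    of end points displaced isotropically with the given spread (all
    parameter values are illustrative, not from the paper)."""
    n_base = poisson(base_rate * region * region)
    bases = [(random.uniform(0, region), random.uniform(0, region))
             for _ in range(n_base)]
    ends = []
    for k, (bx, by) in enumerate(bases):
        for _ in range(poisson(mean_ends)):
            r = random.gauss(0.0, spread)
            theta = random.uniform(0.0, 2.0 * math.pi)
            # each end point remembers its unique parent base (index k)
            ends.append((bx + r * math.cos(theta), by + r * math.sin(theta), k))
    return bases, ends

bases, ends = simulate_enf()
print(len(bases), len(ends))
```

Summaries such as fibers per base or fiber range then follow directly from the simulated parent assignments.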

  9. Tunnel Point Cloud Filtering Method Based on Elliptic Cylindrical Model

    NASA Astrophysics Data System (ADS)

    Zhu, Ningning; Jia, Yonghong; Luo, Lun

    2016-06-01

    The large number of bolts and screws attached to the subway shield ring plates, along with the many metal stents and electrical equipment accessories mounted on the tunnel walls, causes the laser point cloud data to include many non-tunnel-section points (hereinafter referred to as non-points), which affect the accuracy of modeling and deformation monitoring. This paper proposes a filtering method for the point cloud based on an elliptic cylindrical model. The original laser point cloud data are first projected onto a horizontal plane, and a searching algorithm is given to extract the edge points of both sides, which are then used to fit the tunnel central axis. Along the axis the point cloud is segmented regionally and then fitted by iteration as a smooth elliptic cylindrical surface. This processing enables the automatic filtering of the non-points on the inner wall. Experiments on two groups of data showed consistent results: the method based on the elliptic cylindrical model can effectively filter out the non-points and meet the accuracy requirements for subway deformation monitoring. The method provides a new mode for the periodic all-around deformation monitoring of tunnel sections in routine subway operation and maintenance.
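As a rough illustration of the filtering idea, the sketch below fits a centered, axis-aligned ellipse A·x² + B·y² = 1 to one cross-section by linear least squares and rejects points with a large algebraic residual. The paper's actual model (a full elliptic cylinder fitted iteratively along the axis) is more elaborate; the coordinates here are synthetic:

```python
import math

def fit_ellipse_axes(points):
    """Least-squares fit of a centered, axis-aligned ellipse
    A*x^2 + B*y^2 = 1 to 2D cross-section points (a simplified stand-in
    for the full elliptic cylindrical model)."""
    # Normal equations for the 2-parameter linear model [x^2, y^2] . [A, B] = 1
    s_xx = sum(x ** 4 for x, y in points)
    s_xy = sum(x * x * y * y for x, y in points)
    s_yy = sum(y ** 4 for x, y in points)
    b_x = sum(x * x for x, y in points)
    b_y = sum(y * y for x, y in points)
    det = s_xx * s_yy - s_xy * s_xy
    A = (b_x * s_yy - b_y * s_xy) / det
    B = (b_y * s_xx - b_x * s_xy) / det
    return A, B

def filter_non_points(points, A, B, tol=0.1):
    """Keep only points whose algebraic residual from the fitted ellipse
    is within `tol`; the rest are treated as non-tunnel 'non-points'."""
    return [p for p in points if abs(A * p[0] ** 2 + B * p[1] ** 2 - 1.0) <= tol]

# Synthetic tunnel section: ellipse with semi-axes 3 and 2, plus two outliers
wall = [(3.0 * math.cos(t), 2.0 * math.sin(t))
        for t in [i * 2.0 * math.pi / 40 for i in range(40)]]
outliers = [(1.0, 0.5), (2.0, 1.9)]
A, B = fit_ellipse_axes(wall + outliers)
kept = filter_non_points(wall + outliers, A, B)
print(len(kept))  # -> 40 (both outliers rejected)
```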

  10. Lung motion estimation using dynamic point shifting: An innovative model based on a robust point matching algorithm.

    PubMed

    Yi, Jianbing; Yang, Xuan; Chen, Guoliang; Li, Yan-Ran

    2015-10-01

    Image-guided radiotherapy is an advanced 4D radiotherapy technique that has been developed in recent years. However, respiratory motion causes significant uncertainties in image-guided radiotherapy procedures. To address these issues, an innovative lung motion estimation model based on robust point matching is proposed in this paper. An innovative robust point matching algorithm using dynamic point shifting is proposed to estimate patient-specific lung motion during free breathing from 4D computed tomography data. The correspondence of the landmark points is determined from the Euclidean distance between the landmark points and the similarity between the local images that are centered at points at the same time. To ensure that the points in the source image correspond to the points in the target image during other phases, the virtual target points are first created and shifted based on the similarity between the local image centered at the source point and the local image centered at the virtual target point. Second, the target points are shifted by the constrained inverse function mapping the target points to the virtual target points. The source point set and shifted target point set are used to estimate the transformation function between the source image and target image. The performance of the authors' method is evaluated on the two publicly available DIR-lab and POPI-model lung datasets. For computing target registration errors on 750 landmark points in six phases of the DIR-lab dataset and 37 landmark points in ten phases of the POPI-model dataset, the mean and standard deviation by the authors' method are 1.11 and 1.11 mm, but they are 2.33 and 2.32 mm without considering image intensity, and 1.17 and 1.19 mm with sliding conditions. For the two phases of maximum inhalation and maximum exhalation in the DIR-lab dataset with 300 landmark points of each case, the mean and standard deviation of target registration errors on the 3000 landmark points of ten
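The final step, estimating a transformation from the matched source and (shifted) target point sets, can be sketched in a deliberately minimal form: for a translation-only model the least-squares estimate is just the mean of the per-pair differences, and the target registration error is the mean landmark distance after mapping. This is a crude stand-in for the paper's richer transformation function, and the coordinates are made up:

```python
import math

def estimate_translation(source, target):
    """Least-squares translation between matched point sets: the optimal
    shift is the mean of the per-pair differences. A deliberately minimal
    stand-in for a full deformable transformation model."""
    n = len(source)
    dims = len(source[0])
    return tuple(sum(t[d] - s[d] for s, t in zip(source, target)) / n
                 for d in range(dims))

def target_registration_error(points_a, points_b):
    """Mean Euclidean distance between corresponding landmark points."""
    errs = [math.dist(a, b) for a, b in zip(points_a, points_b)]
    return sum(errs) / len(errs)

# Hypothetical landmark points (mm) in two breathing phases
src = [(0.0, 0.0, 0.0), (10.0, 0.0, 5.0), (0.0, 8.0, 2.0)]
tgt = [(1.5, -0.5, 2.0), (11.5, -0.5, 7.0), (1.5, 7.5, 4.0)]
shift = estimate_translation(src, tgt)
moved = [tuple(c + d for c, d in zip(p, shift)) for p in src]
print(shift)                                  # -> (1.5, -0.5, 2.0)
print(target_registration_error(moved, tgt))  # -> 0.0
```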

  11. The application of the pilot points in groundwater numerical inversion model

    NASA Astrophysics Data System (ADS)

    Hu, Bin; Teng, Yanguo; Cheng, Lirong

    2015-04-01

    Numerical inversion modeling has been widely applied in groundwater studies. Compared to traditional forward modeling, an inversion model leaves more room for study. Zonation and cell-by-cell inversion are the conventional methods, and the pilot point method lies between them. The traditional inverse modeling method often uses software to divide the model into several zones, with only a few parameters to be inverted. However, the resulting distribution is usually too simple, and the simulation deviates from reality. Cell-by-cell inversion yields, in theory, the most realistic parameter distribution, but it greatly increases the computational burden and requires a large quantity of survey data for geostatistical simulation of the area. Compared to those methods, the pilot point method distributes a set of points throughout the model domains for parameter estimation. Property values are assigned to model cells by Kriging, which preserves the parameter heterogeneity within geological units. This reduces the geostatistical data requirements for the simulation area and bridges the gap between the above methods. Pilot points not only save computation time and improve the goodness of fit but also reduce the instability of the numerical model caused by large numbers of parameters. In this paper, we use pilot points in a field whose structural formation heterogeneity and hydraulic parameters were unknown, and we compare the inversion results of the zonation and pilot point methods. Through comparative analysis, we explore the characteristics of pilot points in groundwater inversion modeling. First, the modeler generates an initial spatially correlated field from a geostatistical model based on the description of the case site, using the software Groundwater Vistas 6. 
Kriging is defined to obtain the values of the field functions over the model domain on the basis of their values at measurement and pilot point locations (hydraulic conductivity); then pilot points are assigned to the interpolated field, which has been divided into 4
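The interpolation step, assigning property values to model cells from the pilot point values, can be sketched with inverse-distance weighting standing in for the Kriging the paper uses (a plainly simpler substitute; the pilot locations and conductivity values are invented for illustration):

```python
def idw_interpolate(pilot_points, query, power=2.0):
    """Inverse-distance weighting from pilot points to a query cell.
    Used here as a simple stand-in for Kriging interpolation; the
    coordinates and values are illustrative only."""
    num = den = 0.0
    for (x, y, value) in pilot_points:
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d2 == 0.0:
            return value  # query coincides with a pilot point
        w = d2 ** (-power / 2.0)
        num += w * value
        den += w
    return num / den

# Hypothetical pilot points carrying log10 hydraulic conductivity (m/s)
pilots = [(0.0, 0.0, -4.0), (100.0, 0.0, -5.0), (0.0, 100.0, -4.5)]
k_mid = idw_interpolate(pilots, (50.0, 50.0))
print(round(k_mid, 9))  # -> -4.5 (all three pilots are equidistant)
```

Unlike Kriging, IDW carries no variogram model, so it cannot honor the spatial correlation structure; it only conveys the mechanics of spreading pilot values over the grid.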

  12. Lung motion estimation using dynamic point shifting: An innovative model based on a robust point matching algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yi, Jianbing, E-mail: yijianbing8@163.com; Yang, Xuan, E-mail: xyang0520@263.net; Li, Yan-Ran, E-mail: lyran@szu.edu.cn

    2015-10-15

    Purpose: Image-guided radiotherapy is an advanced 4D radiotherapy technique that has been developed in recent years. However, respiratory motion causes significant uncertainties in image-guided radiotherapy procedures. To address these issues, an innovative lung motion estimation model based on robust point matching is proposed in this paper. Methods: An innovative robust point matching algorithm using dynamic point shifting is proposed to estimate patient-specific lung motion during free breathing from 4D computed tomography data. The correspondence of the landmark points is determined from the Euclidean distance between the landmark points and the similarity between the local images that are centered at points at the same time. To ensure that the points in the source image correspond to the points in the target image during other phases, the virtual target points are first created and shifted based on the similarity between the local image centered at the source point and the local image centered at the virtual target point. Second, the target points are shifted by the constrained inverse function mapping the target points to the virtual target points. The source point set and shifted target point set are used to estimate the transformation function between the source image and target image. Results: The performance of the authors’ method is evaluated on the two publicly available DIR-lab and POPI-model lung datasets. For computing target registration errors on 750 landmark points in six phases of the DIR-lab dataset and 37 landmark points in ten phases of the POPI-model dataset, the mean and standard deviation by the authors’ method are 1.11 and 1.11 mm, but they are 2.33 and 2.32 mm without considering image intensity, and 1.17 and 1.19 mm with sliding conditions. For the two phases of maximum inhalation and maximum exhalation in the DIR-lab dataset with 300 landmark points of each case, the mean and standard deviation of target registration errors

  13. HYDROLOGY AND SEDIMENT MODELING USING THE BASINS NON-POINT SOURCE MODEL

    EPA Science Inventory

    The Non-Point Source Model (Hydrologic Simulation Program-Fortran, or HSPF) within the EPA Office of Water's BASINS watershed modeling system was used to simulate streamflow and total suspended solids within Contentnea Creek, North Carolina, which is a tributary of the Neuse Rive...

  14. Point-to-point migration functions and gravity model renormalization: approaches to aggregation in spatial interaction modeling.

    PubMed

    Slater, P B

    1985-08-01

    Two distinct approaches to assessing the effect of geographic scale on spatial interactions are modeled. In the first, the question of whether a distance deterrence function that explains interactions for one system of zones can also succeed on a more aggregate scale is examined. Only the two-parameter function, for which distances between macrozones are found to be weighted averages of distances between component zones, is satisfactory in this regard. Estimation of continuous (point-to-point) functions--in the form of quadrivariate cubic polynomials--for US interstate migration streams is then undertaken. Upon numerical integration, these higher-order surfaces yield predictions of interzonal and intrazonal movements at any scale of interest. Tests of spatial stationarity, isotropy, and symmetry of interstate migration are conducted in this framework.

  15. Using Laser Scanners to Augment the Systematic Error Pointing Model

    NASA Astrophysics Data System (ADS)

    Wernicke, D. R.

    2016-08-01

    The antennas of the Deep Space Network (DSN) rely on precise pointing algorithms to communicate with spacecraft that are billions of miles away. Although the existing systematic error pointing model is effective at reducing blind pointing errors due to static misalignments, several of its terms have a strong dependence on seasonal and even daily thermal variation and are thus not easily modeled. Changes in the thermal state of the structure create a separation from the model and introduce a varying pointing offset. Compensating for this varying offset is possible by augmenting the pointing model with laser scanners. In this approach, laser scanners mounted to the alidade measure structural displacements while a series of transformations generate correction angles. Two sets of experiments were conducted in August 2015 using commercially available laser scanners. When compared with historical monopulse corrections under similar conditions, the computed corrections are within 3 mdeg of the mean. However, although the results show promise, several key challenges relating to the sensitivity of the optical equipment to sunlight render an implementation of this approach impractical. Other measurement devices such as inclinometers may be implementable at a significantly lower cost.

  16. ASYMPTOTICS FOR CHANGE-POINT MODELS UNDER VARYING DEGREES OF MIS-SPECIFICATION

    PubMed Central

    SONG, RUI; BANERJEE, MOULINATH; KOSOROK, MICHAEL R.

    2015-01-01

    Change-point models are widely used by statisticians to model drastic changes in the pattern of observed data. Least squares/maximum likelihood based estimation of change-points leads to curious asymptotic phenomena. When the change-point model is correctly specified, such estimates generally converge at a fast rate (n) and are asymptotically described by minimizers of a jump process. Under complete mis-specification by a smooth curve, i.e. when a change-point model is fitted to data described by a smooth curve, the rate of convergence slows down to n^(1/3) and the limit distribution changes to that of the minimizer of a continuous Gaussian process. In this paper we provide a bridge between these two extreme scenarios by studying the limit behavior of change-point estimates under varying degrees of model mis-specification by smooth curves, which can be viewed as local alternatives. We find that the limiting regime depends on how quickly the alternatives approach a change-point model. We unravel a family of 'intermediate' limits that can transition, at least qualitatively, to the limits in the two extreme scenarios. The theoretical results are illustrated via a set of carefully designed simulations. We also demonstrate how inference for the change-point parameter can be performed in absence of knowledge of the underlying scenario by resorting to subsampling techniques that involve estimation of the convergence rate. PMID:26681814
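For the correctly specified case, the least-squares change-point estimate the abstract refers to can be sketched as a one-dimensional search: try every split, fit a constant mean on each side, and keep the split with the smallest residual sum of squares. The data below are a synthetic noise-free step, not from the paper:

```python
def estimate_change_point(y):
    """Least-squares change-point estimate for a piecewise-constant mean:
    try every split k, fit a constant on each side, keep the k with the
    smallest residual sum of squares."""
    def sse(seg):
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)
    best_k, best = None, float("inf")
    for k in range(1, len(y)):
        total = sse(y[:k]) + sse(y[k:])
        if total < best:
            best_k, best = k, total
    return best_k

# Noise-free step: mean 0 for the first 6 points, mean 2 afterwards
data = [0.0] * 6 + [2.0] * 6
print(estimate_change_point(data))  # -> 6
```

With noisy data this estimator exhibits exactly the fast-rate jump-process asymptotics described above; fitting it to data from a smooth curve produces the slower cube-root regime.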

  17. Point- and line-based transformation models for high resolution satellite image rectification

    NASA Astrophysics Data System (ADS)

    Abd Elrahman, Ahmed Mohamed Shaker

    Rigorous mathematical models with the aid of satellite ephemeris data can present the relationship between the satellite image space and the object space. With government funded satellites, access to calibration and ephemeris data has allowed the development and use of these models. However, for commercial high-resolution satellites, which have been recently launched, these data are withheld from users, and therefore alternative empirical models should be used. In general, the existing empirical models are based on the use of control points and involve linking points in the image space and the corresponding points in the object space. But the lack of control points in some remote areas and the questionable accuracy of the identified discrete conjugate points provide a catalyst for the development of algorithms based on features other than control points. This research, concerned with image rectification and 3D geo-positioning determination using High-Resolution Satellite Imagery (HRSI), has two major objectives. First, the effects of satellite sensor characteristics, number of ground control points (GCPs), and terrain elevation variations on the performance of several point based empirical models are studied. Second, a new mathematical model, using only linear features as control features, or linear features with a minimum number of GCPs, is developed. To meet the first objective, several experiments for different satellites such as Ikonos, QuickBird, and IRS-1D have been conducted using different point based empirical models. Various data sets covering different terrain types are presented and results from representative sets of the experiments are shown and analyzed. The results demonstrate the effectiveness and the superiority of these models under certain conditions. From the results obtained, several alternatives to circumvent the effects of the satellite sensor characteristics, the number of GCPs, and the terrain elevation variations are introduced. 
To meet

  18. Assessment of Response Surface Models using Independent Confirmation Point Analysis

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard

    2010-01-01

    This paper highlights various advantages that confirmation-point residuals have over conventional model design-point residuals in assessing the adequacy of a response surface model fitted by regression techniques to a sample of experimental data. Particular advantages are highlighted for the case of design matrices that may be ill-conditioned for a given sample of data. The impact of both aleatory and epistemological uncertainty in response model adequacy assessments is considered.

  19. A method for automatic feature points extraction of human vertebrae three-dimensional model

    NASA Astrophysics Data System (ADS)

    Wu, Zhen; Wu, Junsheng

    2017-05-01

    A method for automatic extraction of the feature points of the human vertebrae three-dimensional model is presented. Firstly, the statistical model of vertebrae feature points is established based on the results of manual vertebrae feature points extraction. Then anatomical axial analysis of the vertebrae model is performed according to the physiological and morphological characteristics of the vertebrae. Using the axial information obtained from the analysis, a projection relationship between the statistical model and the vertebrae model to be extracted is established. According to the projection relationship, the statistical model is matched with the vertebrae model to get the estimated position of the feature point. Finally, by analyzing the curvature in the spherical neighborhood with the estimated position of feature points, the final position of the feature points is obtained. According to the benchmark result on multiple test models, the mean relative errors of feature point positions are less than 5.98%. At more than half of the positions, the error rate is less than 3% and the minimum mean relative error is 0.19%, which verifies the effectiveness of the method.

  20. Optimization of Regression Models of Experimental Data Using Confirmation Points

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2010-01-01

    A new search metric is discussed that may be used to better assess the predictive capability of different math term combinations during the optimization of a regression model of experimental data. The new search metric can be determined for each tested math term combination if the given experimental data set is split into two subsets. The first subset consists of data points that are only used to determine the coefficients of the regression model. The second subset consists of confirmation points that are exclusively used to test the regression model. The new search metric value is assigned after comparing two values that describe the quality of the fit of each subset. The first value is the standard deviation of the PRESS residuals of the data points. The second value is the standard deviation of the response residuals of the confirmation points. The greater of the two values is used as the new search metric value. This choice guarantees that both standard deviations are always less or equal to the value that is used during the optimization. Experimental data from the calibration of a wind tunnel strain-gage balance is used to illustrate the application of the new search metric. The new search metric ultimately generates an optimized regression model that was already tested at regression model independent confirmation points before it is ever used to predict an unknown response from a set of regressors.
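The search metric can be sketched for the simplest possible regression model, a straight line: the PRESS (leave-one-out) residuals of the fit points follow from the ordinary residuals and leverages, and the metric is the larger of the two standard deviations. The calibration-style numbers below are invented, not balance data:

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y = b0 + b1*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return my - b1 * mx, b1

def std(values):
    m = sum(values) / len(values)
    return (sum((v - m) ** 2 for v in values) / len(values)) ** 0.5

def search_metric(fit_x, fit_y, conf_x, conf_y):
    """The search metric described above, sketched for a straight-line model:
    the larger of (i) the std of the PRESS residuals of the fit points and
    (ii) the std of the response residuals of the confirmation points."""
    b0, b1 = fit_line(fit_x, fit_y)
    n = len(fit_x)
    mx = sum(fit_x) / n
    sxx = sum((x - mx) ** 2 for x in fit_x)
    press = []
    for x, y in zip(fit_x, fit_y):
        e = y - (b0 + b1 * x)              # ordinary residual
        h = 1.0 / n + (x - mx) ** 2 / sxx  # leverage of this fit point
        press.append(e / (1.0 - h))        # leave-one-out (PRESS) residual
    conf_res = [y - (b0 + b1 * x) for x, y in zip(conf_x, conf_y)]
    return max(std(press), std(conf_res))

# Hypothetical data: y is roughly 2x + 1 with small perturbations
fx, fy = [0.0, 1.0, 2.0, 3.0, 4.0], [1.1, 2.9, 5.2, 6.8, 9.1]
cx, cy = [0.5, 2.5, 3.5], [2.0, 6.1, 7.9]
metric = search_metric(fx, fy, cx, cy)
```

Taking the greater of the two standard deviations is what guarantees, as the abstract notes, that both are bounded by the value driving the optimization.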

  1. Next-generation concurrent engineering: developing models to complement point designs

    NASA Technical Reports Server (NTRS)

    Morse, Elizabeth; Leavens, Tracy; Cohanim, Babak; Harmon, Corey; Mahr, Eric; Lewis, Brian

    2006-01-01

    Concurrent Engineering Design teams have made routine the rapid development of point designs for space missions. The Jet Propulsion Laboratory's Team X is now evolving into a next-generation CED: in addition to a point design, the team develops a model of the local trade space. The process is a balance between the power of model-developing tools and the creativity of human experts, enabling the development of a variety of trade models for any space mission.

  2. Neck curve polynomials in neck rupture model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurniadi, Rizal; Perkasa, Yudha S.; Waris, Abdul

    2012-06-06

    The Neck Rupture Model explains the scission process, in which the liquid drop ruptures at the position where its neck radius is smallest. In the older formulation the rupture position is determined randomly, hence the name Random Neck Rupture Model (RNRM). Here, neck curve polynomials have been employed in the Neck Rupture Model to calculate the fission yield of the neutron-induced fission reaction of {sup 280}X{sub 90}, varying the order of the polynomials as well as the temperature. The polynomial approximation of the neck curve has an important effect on the shape of the fission yield curve.
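The role of the neck curve can be illustrated by locating the scission neck, the position of minimum radius, for a given polynomial profile. The polynomial below is a generic example, not one of the paper's fitted neck curves:

```python
def neck_minimum(coeffs, z_min=-1.0, z_max=1.0, samples=2001):
    """Locate the neck (smallest radius) of a polynomial neck-curve
    rho(z) = c0 + c1*z + c2*z^2 + ... by dense sampling over [z_min, z_max].
    (A generic sketch; the actual polynomials are not reproduced here.)"""
    def rho(z):
        return sum(c * z ** k for k, c in enumerate(coeffs))
    best_z, best_r = z_min, rho(z_min)
    for i in range(1, samples):
        z = z_min + (z_max - z_min) * i / (samples - 1)
        r = rho(z)
        if r < best_r:
            best_z, best_r = z, r
    return best_z, best_r

# Symmetric quadratic neck: rho(z) = 0.5 + 0.8*z^2 has its minimum at z = 0
z_star, r_star = neck_minimum([0.5, 0.0, 0.8])
print(round(z_star, 6), round(r_star, 6))  # -> 0.0 0.5
```

Raising the polynomial order allows asymmetric neck shapes, which is what shifts the rupture position and reshapes the fission yield curve.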

  3. The Comparison of Point Data Models for the Output of WRF Hydro Model in the IDV

    NASA Astrophysics Data System (ADS)

    Ho, Y.; Weber, J.

    2017-12-01

    WRF Hydro netCDF output files contain streamflow, flow depth, longitude, latitude, altitude and stream order values for each forecast point. However, the data are not CF compliant. The total number of forecast points for the US CONUS is approximately 2.7 million and it is a big challenge for any visualization and analysis tool. The IDV point cloud display shows point data as a set of points colored by parameter. This display is very efficient compared to a standard point type display for rendering a large number of points. The one problem we have is that the data I/O can be a bottleneck issue when dealing with a large collection of point input files. In this presentation, we will experiment with different point data models and their APIs to access the same WRF Hydro model output. The results will help us construct a CF compliant netCDF point data format for the community.

  4. Equivalence of MAXENT and Poisson point process models for species distribution modeling in ecology.

    PubMed

    Renner, Ian W; Warton, David I

    2013-03-01

    Modeling the spatial distribution of a species is a fundamental problem in ecology. A number of modeling methods have been developed, an extremely popular one being MAXENT, a maximum entropy modeling approach. In this article, we show that MAXENT is equivalent to a Poisson regression model and hence is related to a Poisson point process model, differing only in the intercept term, which is scale-dependent in MAXENT. We illustrate a number of improvements to MAXENT that follow from these relations. In particular, a point process model approach facilitates methods for choosing the appropriate spatial resolution, assessing model adequacy, and choosing the LASSO penalty parameter, all currently unavailable to MAXENT. The equivalence result represents a significant step in the unification of the species distribution modeling literature. Copyright © 2013, The International Biometric Society.
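The Poisson regression that MAXENT is shown to be equivalent to can be sketched with a small Newton iteration on the Poisson log-likelihood. The single-covariate setup and the counts below are synthetic, intended only to show the fitting mechanics:

```python
import math

def poisson_regression(xs, ys, iters=50):
    """Newton's method for the Poisson log-likelihood of log E[y] = b0 + b1*x.
    A minimal sketch of the regression model MAXENT is equivalent to;
    the data used below are synthetic."""
    b0, b1 = math.log(sum(ys) / len(ys)), 0.0  # start at the constant model
    for _ in range(iters):
        mus = [math.exp(b0 + b1 * x) for x in xs]
        g0 = sum(y - m for y, m in zip(ys, mus))               # score, intercept
        g1 = sum((y - m) * x for x, y, m in zip(xs, ys, mus))  # score, slope
        h00 = sum(mus)                                         # information matrix
        h01 = sum(m * x for x, m in zip(xs, mus))
        h11 = sum(m * x * x for x, m in zip(xs, mus))
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det  # Newton step, solved in closed form
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [2, 2, 3, 5, 7, 9, 13]  # counts rising roughly like exp(0.5 + 0.7*x)
b0, b1 = poisson_regression(xs, ys)
```

In the MAXENT correspondence only the intercept b0 differs (it is scale-dependent), while the environmental-covariate coefficients agree with the point process model.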

  5. Bisous model-Detecting filamentary patterns in point processes

    NASA Astrophysics Data System (ADS)

    Tempel, E.; Stoica, R. S.; Kipper, R.; Saar, E.

    2016-07-01

    The cosmic web is a highly complex geometrical pattern, with galaxy clusters at the intersection of filaments and filaments at the intersection of walls. Identifying and describing the filamentary network is not a trivial task due to the overwhelming complexity of the structure, its connectivity and its intrinsic hierarchical nature. To detect and quantify galactic filaments we use the Bisous model, which is a marked point process built to model multi-dimensional patterns. The Bisous filament finder works directly with the galaxy distribution data, and the model intrinsically takes into account the connectivity of the filamentary network. The Bisous model generates the visit map (the probability of finding a filament at a given point) together with the filament orientation field; using these two fields, we can extract filament spines from the data. Together with this paper we publish the computer code for the Bisous model, which is made available on GitHub. The Bisous filament finder has been successfully used in several cosmological applications, and further development of the model will allow the filamentary network to be detected also in photometric redshift surveys, using the full redshift posterior. We also want to encourage the astro-statistical community to use the model and to connect it with all other existing methods for filamentary pattern detection and characterisation.

  6. Next-generation concurrent engineering: developing models to complement point designs

    NASA Technical Reports Server (NTRS)

    Morse, Elizabeth; Leavens, Tracy; Cohanim, Babak; Harmon, Corey; Mahr, Eric; Lewis, Brian

    2006-01-01

    Concurrent Engineering Design (CED) teams have made routine the rapid development of point designs for space missions. The Jet Propulsion Laboratory's Team X is now evolving into a next-generation CED: in addition to a point design, the Team develops a model of the local trade space. The process is a balance between the power of model-developing tools and the creativity of human experts, enabling the development of a variety of trade models for any space mission. This paper reviews the modeling method and its practical implementation in the CED environment. Example results illustrate the benefit of this approach.

  7. Modeling spatially-varying landscape change points in species occurrence thresholds

    USGS Publications Warehouse

    Wagner, Tyler; Midway, Stephen R.

    2014-01-01

    Predicting species distributions at scales of regions to continents is often necessary, as large-scale phenomena influence the distributions of spatially structured populations. Land use and land cover are important large-scale drivers of species distributions, and landscapes are known to create species occurrence thresholds, where small changes in a landscape characteristic result in abrupt changes in occurrence. The value of the landscape characteristic at which this change occurs is referred to as a change point. We present a hierarchical Bayesian threshold model (HBTM) that allows for estimating spatially varying parameters, including change points. Our model also allows for modeling the estimated parameters in an effort to understand large-scale drivers of variability in land use and land cover effects on species occurrence thresholds. We use range-wide detection/nondetection data for the eastern brook trout (Salvelinus fontinalis), a stream-dwelling salmonid, to illustrate our HBTM for estimating and modeling spatially varying threshold parameters in species occurrence. We parameterized the model for investigating thresholds in landscape predictor variables that are measured as proportions, and which are therefore restricted to values between 0 and 1. Our HBTM estimated spatially varying thresholds in brook trout occurrence for both the proportion of agricultural and urban land uses. There was relatively little spatial variation in change point estimates, although there was spatial variability in the overall shape of the threshold response and associated uncertainty. In addition, regional mean stream water temperature was correlated with the change point parameters for the proportion of urban land use, with the change point value increasing with increasing mean stream water temperature. We present a framework for quantifying macrosystem variability in spatially varying threshold model parameters in relation to important large-scale drivers such as land use and land cover.
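A single-site caricature of the occurrence-threshold idea (not the paper's spatially varying hierarchical Bayesian model) fits a step occurrence probability by grid-searching the threshold that maximizes the Bernoulli likelihood. The cover proportions and detections below are synthetic:

```python
import math

def fit_occurrence_threshold(cover, present):
    """Grid-search ML fit of a step ('change point') occurrence model:
    P(present) = p_low below the threshold t and p_high at or above it.
    A one-site sketch of the threshold idea only."""
    def loglik(t):
        lo = [y for c, y in zip(cover, present) if c < t]
        hi = [y for c, y in zip(cover, present) if c >= t]
        ll = 0.0
        for grp in (lo, hi):
            if not grp:
                continue
            # ML estimate of the group's occurrence probability, clipped
            p = min(max(sum(grp) / len(grp), 1e-9), 1 - 1e-9)
            ll += sum(math.log(p) if y else math.log(1 - p) for y in grp)
        return ll
    candidates = sorted(set(cover))[1:]  # thresholds at observed cover values
    return max(candidates, key=loglik)

# Synthetic data: occurrence drops sharply once urban cover reaches 0.4
cover   = [0.05, 0.1, 0.2, 0.3, 0.35, 0.4, 0.5, 0.6, 0.7, 0.8]
present = [1,    1,   1,   1,   1,    0,   0,   0,   0,   0]
print(fit_occurrence_threshold(cover, present))  # -> 0.4
```

The hierarchical model in the paper lets this change point, and the shape of the response around it, vary spatially and be regressed on regional covariates such as stream temperature.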

  8. Dissipative N-point-vortex Models in the Plane

    NASA Astrophysics Data System (ADS)

    Shashikanth, Banavara N.

    2010-02-01

    A method is presented for constructing point vortex models in the plane that dissipate the Hamiltonian function at any prescribed rate and yet conserve the level sets of the invariants of the Hamiltonian model arising from the SE (2) symmetries. The method is purely geometric in that it uses the level sets of the Hamiltonian and the invariants to construct the dissipative field and is based on elementary classical geometry in ℝ3. Extension to higher-dimensional spaces, such as the point vortex phase space, is done using exterior algebra. The method is in fact general enough to apply to any smooth finite-dimensional system with conserved quantities, and, for certain special cases, the dissipative vector field constructed can be associated with an appropriately defined double Nambu-Poisson bracket. The most interesting feature of this method is that it allows for an infinite sequence of such dissipative vector fields to be constructed by repeated application of a symmetric linear operator (matrix) at each point of the intersection of the level sets.

  9. Underwater 3d Modeling: Image Enhancement and Point Cloud Filtering

    NASA Astrophysics Data System (ADS)

    Sarakinou, I.; Papadimitriou, K.; Georgoula, O.; Patias, P.

    2016-06-01

    This paper examines the results of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically, it evaluates the combined effects of manual editing of the images' radiometry (captured at shallow depths) and of the selection of parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck and a small boat's wreck) captured at three different depths (3.5 m, 10 m and 14 m, respectively). Four models have been created from the first dataset (seafloor) in order to evaluate the results of applying image enhancement techniques and point cloud filtering. The main process for this preliminary study included a) the definition of parameters for the point cloud filtering and the creation of a reference model, b) the radiometric editing of images, followed by the creation of three improved models, and c) the assessment of results by comparing the visual and geometric quality of the improved models against the reference one. Finally, the selected technique is tested on two other datasets in order to examine its appropriateness for different depths (10 m and 14 m) and different objects (part of a wreck and a small boat's wreck), in the context of ongoing research in the Laboratory of Photogrammetry and Remote Sensing.

  10. Extracting valley-ridge lines from point-cloud-based 3D fingerprint models.

    PubMed

    Pang, Xufang; Song, Zhan; Xie, Wuyuan

    2013-01-01

    3D fingerprinting is an emerging technology with the distinct advantage of touchless operation. More importantly, 3D fingerprint models contain more biometric information than traditional 2D fingerprint images. However, current approaches to fingerprint feature detection usually must transform the 3D models to a 2D space through unwrapping or other methods, which might introduce distortions. A new approach directly extracts valley-ridge features from point-cloud-based 3D fingerprint models. It first applies the moving least-squares method to fit a local paraboloid surface and represent the local point cloud area. It then computes the local surface's curvatures and curvature tensors to facilitate detection of the potential valley and ridge points. The approach projects those points to the most likely valley-ridge lines, using statistical means such as covariance analysis and cross correlation. To finally extract the valley-ridge lines, it grows the polylines that approximate the projected feature points and removes the perturbations between the sampled points. Experiments with different 3D fingerprint models demonstrate this approach's feasibility and performance.
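
    The curvature step can be sketched numerically. A minimal, hypothetical version (synthetic noise-free points, not the authors' implementation) fits a local paraboloid z = a x^2 + b x y + c y^2 by least squares and reads the principal curvatures off the eigenvalues of its Hessian:

```python
import numpy as np

# Synthetic local neighborhood around a surface point (assumed setup).
rng = np.random.default_rng(0)
xy = rng.uniform(-0.1, 0.1, size=(200, 2))
a_true, b_true, c_true = 2.0, 0.5, -1.0
z = a_true * xy[:, 0]**2 + b_true * xy[:, 0] * xy[:, 1] + c_true * xy[:, 1]**2

# Least-squares paraboloid fit z = a x^2 + b x y + c y^2.
A = np.column_stack([xy[:, 0]**2, xy[:, 0] * xy[:, 1], xy[:, 1]**2])
(a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)

# At the origin the gradient vanishes, so the principal curvatures are the
# eigenvalues of the Hessian [[2a, b], [b, 2c]].
H = np.array([[2 * a, b], [b, 2 * c]])
k1, k2 = np.linalg.eigvalsh(H)  # ascending order
# Ridge/valley candidates: one curvature small, the other large in magnitude;
# the sign of the dominant curvature separates ridges from valleys.
```

    A full moving least-squares fit would additionally weight neighbors by distance; the unweighted fit above keeps the sketch short.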

  11. A New Blind Pointing Model Improves Large Reflector Antennas Precision Pointing at Ka-Band (32 GHz)

    NASA Technical Reports Server (NTRS)

    Rochblatt, David J.

    2009-01-01

    The National Aeronautics and Space Administration (NASA), Jet Propulsion Laboratory (JPL)-Deep Space Network (DSN) subnet of 34-m Beam Waveguide (BWG) Antennas was recently upgraded with Ka-Band (32-GHz) frequency feeds for space research and communication. For normal telemetry tracking a Ka-Band monopulse system is used, which typically yields 1.6-mdeg mean radial error (MRE) pointing accuracy on the 34-m diameter antennas. However, for the monopulse to be able to acquire and lock, for special radio science applications where monopulse cannot be used, or as a back-up for the monopulse, high-precision open-loop blind pointing is required. This paper describes a new 4th order pointing model and calibration technique, which was developed and applied to the DSN 34-m BWG antennas yielding 1.8 to 3.0-mdeg MRE pointing accuracy and amplitude stability of 0.2 dB, at Ka-Band, and successfully used for the CASSINI spacecraft occultation experiment at Saturn and Titan. In addition, the new 4th order pointing model was used during a telemetry experiment at Ka-Band (32 GHz) utilizing the Mars Reconnaissance Orbiter (MRO) spacecraft while at a distance of 0.225 astronomical units (AU) from Earth and communicating with a DSN 34-m BWG antenna at a record high rate of 6-megabits per second (Mb/s).

  12. Infinite-disorder critical points of models with stretched exponential interactions

    NASA Astrophysics Data System (ADS)

    Juhász, Róbert

    2014-09-01

    We show that an interaction decaying as a stretched exponential function of distance, J(l) ~ exp(-c l^a), is able to alter the universality class of short-range systems having an infinite-disorder critical point. To do so, we study the low-energy properties of the random transverse-field Ising chain with the above form of interaction by a strong-disorder renormalization group (SDRG) approach. We find that the critical behavior of the model is controlled by infinite-disorder fixed points different from those of the short-range model if 0 < a < 1/2. In this range, the critical exponents calculated analytically by a simplified SDRG scheme are found to vary with a, while, for a > 1/2, the model belongs to the same universality class as its short-range variant. The entanglement entropy of a block of size L increases logarithmically with L at the critical point but, unlike the short-range model, the prefactor is dependent on disorder in the range 0 < a < 1/2. Numerical results obtained by an improved SDRG scheme are found to be in agreement with the analytical predictions. The same fixed points are expected to describe the critical behavior of, among others, the random contact process with stretched exponentially decaying activation rates.

  13. Two-point spectral model for variable density homogeneous turbulence

    NASA Astrophysics Data System (ADS)

    Pal, Nairita; Kurien, Susan; Clark, Timothy; Aslangil, Denis; Livescu, Daniel

    2017-11-01

    We present a comparison between a two-point spectral closure model for buoyancy-driven variable density homogeneous turbulence and Direct Numerical Simulation (DNS) data of the same system. We wish to understand how well a suitable spectral model might capture variable density effects and the transition to turbulence from an initially quiescent state. Following the BHRZ model developed by Besnard et al. (1990), the spectral model calculation computes the time evolution of two-point correlations of the density fluctuations with the momentum and the specific volume. These spatial correlations are expressed as functions of wavenumber k and denoted by a(k) and b(k), quantifying mass flux and turbulent mixing, respectively. We assess the accuracy of the model, relative to a full DNS of the complete hydrodynamical equations, using a and b as metrics. Work at LANL was performed under the auspices of the U.S. DOE Contract No. DE-AC52-06NA25396.

  14. Set points, settling points and some alternative models: theoretical options to understand how genes and environments combine to regulate body adiposity

    PubMed Central

    Speakman, John R.; Levitsky, David A.; Allison, David B.; Bray, Molly S.; de Castro, John M.; Clegg, Deborah J.; Clapham, John C.; Dulloo, Abdul G.; Gruer, Laurence; Haw, Sally; Hebebrand, Johannes; Hetherington, Marion M.; Higgs, Susanne; Jebb, Susan A.; Loos, Ruth J. F.; Luckman, Simon; Luke, Amy; Mohammed-Ali, Vidya; O’Rahilly, Stephen; Pereira, Mark; Perusse, Louis; Robinson, Tom N.; Rolls, Barbara; Symonds, Michael E.; Westerterp-Plantenga, Margriet S.

    2011-01-01

    The close correspondence between energy intake and expenditure over prolonged time periods, coupled with an apparent protection of the level of body adiposity in the face of perturbations of energy balance, has led to the idea that body fatness is regulated via mechanisms that control intake and energy expenditure. Two models have dominated the discussion of how this regulation might take place. The set point model is rooted in physiology, genetics and molecular biology, and suggests that there is an active feedback mechanism linking adipose tissue (stored energy) to intake and expenditure via a set point, presumably encoded in the brain. This model is consistent with many of the biological aspects of energy balance, but struggles to explain the many significant environmental and social influences on obesity, food intake and physical activity. More importantly, the set point model does not effectively explain the ‘obesity epidemic’ – the large increase in body weight and adiposity of a large proportion of individuals in many countries since the 1980s. An alternative model, called the settling point model, is based on the idea that there is passive feedback between the size of the body stores and aspects of expenditure. This model accommodates many of the social and environmental characteristics of energy balance, but struggles to explain some of the biological and genetic aspects. The shortcomings of these two models reflect their failure to address the gene-by-environment interactions that dominate the regulation of body weight. We discuss two additional models – the general intake model and the dual intervention point model – that address this issue and might offer better ways to understand how body fatness is controlled. PMID:22065844

  15. A Semiparametric Change-Point Regression Model for Longitudinal Observations.

    PubMed

    Xing, Haipeng; Ying, Zhiliang

    2012-12-01

    Many longitudinal studies involve relating an outcome process to a set of possibly time-varying covariates, giving rise to the usual regression models for longitudinal data. When the purpose of the study is to investigate the covariate effects when the experimental environment undergoes abrupt changes, or to locate the periods with different levels of covariate effects, a simple and easy-to-interpret approach is to introduce change points in the regression coefficients. In this connection, we propose a semiparametric change-point regression model, in which the error process (stochastic component) is nonparametric, the baseline mean function (functional part) is completely unspecified, the observation times are allowed to be subject-specific, and the number, locations and magnitudes of the change points are unknown and need to be estimated. We further develop an estimation procedure which combines recent advances in semiparametric analysis based on counting process arguments with multiple change-point inference, and discuss its large sample properties, including consistency and asymptotic normality, under suitable regularity conditions. Simulation results show that the proposed methods work well under a variety of scenarios. An application to a real data set is also given.
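
    The change-point idea can be illustrated with a deliberately simplified sketch: a single change point in one regression coefficient, located by least squares rather than the authors' semiparametric machinery (all values below are hypothetical).

```python
import numpy as np

# Simulate a regression whose coefficient jumps at t = 120.
rng = np.random.default_rng(1)
n = 200
t = np.arange(n)
x = rng.normal(size=n)
beta = np.where(t < 120, 1.0, 3.0)        # coefficient jumps at t = 120
y = beta * x + 0.1 * rng.normal(size=n)

def sse(xs, ys):
    b = xs @ ys / (xs @ xs)               # one-covariate least squares
    r = ys - b * xs
    return r @ r

# Scan candidate change points; the true one minimizes total residual SSE.
best = min(range(20, n - 20),
           key=lambda k: sse(x[:k], y[:k]) + sse(x[k:], y[k:]))
print(best)  # should land near 120
```

    Estimating the number of change points, subject-specific observation times, and the nonparametric error process is what the paper's procedure adds beyond this toy search.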

  16. Modeling Menstrual Cycle Length and Variability at the Approach of Menopause Using Hierarchical Change Point Models

    PubMed Central

    Huang, Xiaobi; Elliott, Michael R.; Harlow, Siobán D.

    2013-01-01

    As women approach menopause, the patterns of their menstrual cycle lengths change. To study these changes, we need to jointly model both the mean and variability of cycle length. Our proposed model incorporates separate mean and variance change points for each woman and a hierarchical model to link them together, along with regression components to include predictors of menopausal onset such as age at menarche and parity. Additional complexity arises from the fact that the calendar data have substantial missingness due to hormone use, surgery, and failure to report. We integrate multiple imputation and time-to-event modeling in a Bayesian estimation framework to deal with different forms of the missingness. Posterior predictive model checks are applied to evaluate the model fit. Our method successfully models patterns of women’s menstrual cycle trajectories throughout their late reproductive life and identifies change points for mean and variability of segment length, providing insight into the menopausal process. More generally, our model points the way toward increasing use of joint mean-variance models to predict health outcomes and better understand disease processes. PMID:24729638

  17. A second generation distributed point polarizable water model.

    PubMed

    Kumar, Revati; Wang, Fang-Fang; Jenness, Glen R; Jordan, Kenneth D

    2010-01-07

    A distributed point polarizable model (DPP2) for water, with explicit terms for charge penetration, induction, and charge transfer, is introduced. The DPP2 model accurately describes the interaction energies in small and large water clusters and also gives an average internal energy per molecule and radial distribution functions of liquid water in good agreement with experiment. A key to the success of the model is its accurate description of the individual terms in the n-body expansion of the interaction energies.

  18. Modeling the contribution of point sources and non-point sources to Thachin River water pollution.

    PubMed

    Schaffner, Monika; Bader, Hans-Peter; Scheidegger, Ruth

    2009-08-15

    Major rivers in developing and emerging countries increasingly suffer severe degradation of water quality. The current study uses a mathematical Material Flow Analysis (MMFA) as a complementary approach to address the degradation of river water quality due to nutrient pollution in the Thachin River Basin in Central Thailand. This paper gives an overview of the origins and flow paths of the various point and non-point pollution sources in the Thachin River Basin (in terms of nitrogen and phosphorus) and quantifies their relative importance within the system. The key parameters influencing the main nutrient flows are determined and possible mitigation measures are discussed. The results show that aquaculture (as a point source) and rice farming (as a non-point source) are the key nutrient sources in the Thachin River Basin. Other point sources such as pig farms, households and industries, which were previously cited as the most relevant pollution sources in terms of organic pollution, play less significant roles in comparison. This order of importance shifts when considering the model results at the provincial level. Crosschecks with secondary data and field studies confirm the plausibility of our simulations. Specific nutrient loads for the pollution sources are derived; these can be used for a first broad quantification of nutrient pollution in comparable river basins. Based on an identification of the sensitive model parameters, possible mitigation scenarios are determined and their potential to reduce the nutrient load evaluated. A comparison of simulated nutrient loads with measured nutrient concentrations shows that nutrient retention in the river system may be significant. Sedimentation in the slow-flowing surface water network as well as nitrogen emission to the air from the warm, oxygen-deficient waters are certainly partly responsible, but wetlands along the river banks could also play an important role as nutrient sinks.

  19. A travel time forecasting model based on change-point detection method

    NASA Astrophysics Data System (ADS)

    LI, Shupeng; GUANG, Xiaoping; QIAN, Yongsheng; ZENG, Junwei

    2017-06-01

    Travel time parameters obtained from road traffic sensor data play an important role in traffic management practice. A travel time forecasting model is proposed for urban road traffic sensor data based on a change-point detection method. A first-order differencing operation is used to preprocess the actual loop data; a change-point detection algorithm is designed to classify the long sequence of travel time data items into several patterns; a travel time forecasting model is then established based on the autoregressive integrated moving average (ARIMA) model. In computer simulations, different control parameters are chosen for the adaptive change-point search over the travel time series, which is divided into several sections of similar state. A linear weight function is then used to fit the travel time sequence and to forecast travel time. The results show that the model has high accuracy in travel time forecasting.
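
    A hedged sketch of the pipeline's first two steps is below; the paper's ARIMA stage is replaced here by a naive last-segment forecast to keep the example dependency-free, and all rates and thresholds are invented.

```python
import numpy as np

# Synthetic travel-time series with two regime changes (assumed values).
rng = np.random.default_rng(2)
series = np.concatenate([rng.normal(60, 1, 50),    # free flow: ~60 s
                         rng.normal(95, 1, 50),    # congestion: ~95 s
                         rng.normal(70, 1, 50)])   # recovery: ~70 s

diff = np.diff(series)                      # first-order differencing
threshold = 5 * np.std(diff[:20])           # tuning parameter (assumed)
change_points = np.where(np.abs(diff) > threshold)[0] + 1

# Split into sections of similar state; forecast from the latest state.
segments = np.split(series, change_points)
forecast = segments[-1].mean()
print(change_points, round(forecast, 1))
```

    Within each detected section, fitting an ARIMA or weighted linear model, as the paper does, would replace the plain segment mean.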

  20. On the simulation and theory of polymer dynamics in sieving media: Friction, molecular pulleys, Brownian ratchets and polymer scission

    NASA Astrophysics Data System (ADS)

    Kenward, Martin

    and predictably reduce the polydispersity (PDI) of polymer solutions. The experimental investigation, carried out by the Barron group, illustrated that a dilute polymer solution, when passed through a narrow constriction at high pressure, can have its PDI systematically reduced. My contribution to this work was to develop a statistical model which calculates polymer molecular weight distributions and which can predict the resulting degraded polymer distribution. Two key findings resulted from this investigation: first, polymers can break multiple times during a single scission event (i.e., one pass through the experimental system); second, we showed that it is possible to predictably reproduce polymer distributions after multiple scission events.
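
    A toy Monte Carlo version of flow-induced scission conveys the mechanism (an illustration only, not the thesis model; the threshold, midpoint-scission rule and chain-length range are assumptions): chains above a critical length break near their midpoint on each pass, which narrows the distribution and lowers the PDI.

```python
import random

def pdi(chains):
    mn = sum(chains) / len(chains)                  # number-average length
    mw = sum(n * n for n in chains) / sum(chains)   # weight-average length
    return mw / mn                                  # polydispersity index

random.seed(3)
chains = [random.randint(100, 10_000) for _ in range(5000)]
before = pdi(chains)

critical = 3000   # assumed scission threshold
for _ in range(3):  # three passes; a chain may break in several passes
    new = []
    for n in chains:
        if n > critical:
            cut = int(random.gauss(n / 2, n / 20))  # near-midpoint scission
            new += [cut, n - cut]
        else:
            new.append(n)
    chains = new
after = pdi(chains)
print(round(before, 2), round(after, 2))  # PDI drops after scission
```

    Long chains broken above the threshold can exceed it again after one cut, so a single pass can produce multiple breaks, mirroring the first finding above.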

  1. An interpretation model of GPR point data in tunnel geological prediction

    NASA Astrophysics Data System (ADS)

    He, Yu-yao; Li, Bao-qi; Guo, Yuan-shu; Wang, Teng-na; Zhu, Ya

    2017-02-01

    GPR (Ground Penetrating Radar) point data plays an essential role in tunnel geological prediction. However, little research has addressed GPR point data, and existing results do not meet the actual requirements of such projects. In this paper, a GPR point data interpretation model based on the WD (Wigner distribution) and a deep CNN (convolutional neural network) is proposed. Firstly, the GPR point data are transformed by the WD to obtain time-frequency joint distribution maps; secondly, the joint distribution maps are classified by the deep CNN, and the approximate location of the geological target is determined by inspecting the time-frequency maps in parallel; finally, the GPR point data are interpreted according to the classification results and the position information from the maps. The simulation results show that the classification accuracy on the test dataset (comprising 1200 GPR point data items) is 91.83% after 200 iterations. Our model has the advantages of high accuracy and fast training speed, and can provide a scientific basis for the development of tunnel construction and excavation plans.
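
    The first step of such a model, turning a 1-D trace into a time-frequency map, can be sketched with a textbook discrete Wigner-Ville distribution (a generic implementation on a synthetic chirp, not the paper's code):

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution: for each time index t, form the
    instantaneous autocorrelation r[tau] = x[t+tau] * conj(x[t-tau]) and
    FFT it over the lag tau."""
    n = len(x)
    xa = np.asarray(x, dtype=complex)
    W = np.zeros((n, n))
    for t in range(n):
        taumax = min(t, n - 1 - t)
        r = np.zeros(n, dtype=complex)
        for tau in range(-taumax, taumax + 1):
            r[tau % n] = xa[t + tau] * np.conj(xa[t - tau])
        W[t] = np.fft.fft(r).real
    return W

t = np.arange(128)
chirp = np.cos(2 * np.pi * (0.05 + 0.001 * t) * t)  # frequency rises with t
W = wigner_ville(chirp)
# Each row W[t] is a frequency slice; its sum equals n * |x[t]|^2, i.e. the
# map preserves the signal's instantaneous energy.
```

    Real GPR traces would also need analytic-signal preprocessing to suppress cross-terms before the maps are fed to a classifier; that step is omitted here.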

  2. Pseudo-critical point in anomalous phase diagrams of simple plasma models

    NASA Astrophysics Data System (ADS)

    Chigvintsev, A. Yu; Iosilevskiy, I. L.; Noginova, L. Yu

    2016-11-01

    Anomalous phase diagrams in a subclass of simplified (“non-associative”) Coulomb models are under discussion. The common feature of this subclass is the absence, by construction, of individual correlations between charges of opposite sign. Examples include a modified OCP of ions on a uniformly compressible background of an ideal Fermi gas of electrons, OCP(∼), or a superposition of two non-ideal OCP(∼) models of ions and electrons, etc. In contrast to the ordinary OCP model on a non-compressible (“rigid”) background, OCP(#), two new phase transitions with upper critical points, boiling and sublimation, appear in the OCP(∼) phase diagram in addition to the well-known Wigner crystallization. The point is that the topology of the OCP(∼) phase diagram becomes anomalous at a high enough value of the ionic charge number Z. Namely, only one unified crystal-fluid phase transition without a critical point exists, as a continuous superposition of melting and sublimation, in OCP(∼) in the interval Z1 < Z < Z2. Most remarkable is the appearance of pseudo-critical points at both boundary values Z = Z1 ≈ 35.5 and Z = Z2 ≈ 40.0. It should be stressed that the critical isotherm is exactly cubic at both of these pseudo-critical points. In this study we have improved our previous calculations and utilized a more sophisticated equation of state for the model components, provided by Chabrier and Potekhin (1998 Phys. Rev. E 58 4941).

  3. Reconstruction of Consistent 3d CAD Models from Point Cloud Data Using a Priori CAD Models

    NASA Astrophysics Data System (ADS)

    Bey, A.; Chaine, R.; Marc, R.; Thibault, G.; Akkouche, S.

    2011-09-01

    We address the reconstruction of 3D CAD models from point cloud data acquired in industrial environments, using a pre-existing 3D model as an initial estimate of the scene to be processed. Indeed, this prior knowledge can be used to drive the reconstruction so as to generate an accurate 3D model matching the point cloud. We focus in particular on the cylindrical parts of the 3D models. We propose to state the problem in a probabilistic framework: we search for the 3D model which maximizes a probability taking several constraints into account, such as its relevancy with respect to the point cloud and the a priori 3D model, and the consistency of the reconstructed model. The resulting optimization problem can then be handled using a stochastic exploration of the solution space, based on the random insertion of elements in the configuration under construction, coupled with a greedy management of conflicts which efficiently improves the configuration at each step. We show that this approach provides reliable reconstructed 3D models by presenting results on industrial data sets.

  4. Point cloud modeling using the homogeneous transformation for non-cooperative pose estimation

    NASA Astrophysics Data System (ADS)

    Lim, Tae W.

    2015-06-01

    A modeling process to simulate point cloud range data that a lidar (light detection and ranging) sensor produces is presented in this paper in order to support the development of non-cooperative pose (relative attitude and position) estimation approaches which will help improve proximity operation capabilities between two adjacent vehicles. The algorithms in the modeling process were based on the homogeneous transformation, which has been employed extensively in robotics and computer graphics, as well as in recently developed pose estimation algorithms. Using a flash lidar in a laboratory testing environment, point cloud data of a test article was simulated and compared against the measured point cloud data. The simulated and measured data sets match closely, validating the modeling process. The modeling capability enables close examination of the characteristics of point cloud images of an object as it undergoes various translational and rotational motions. Relevant characteristics that will be crucial in non-cooperative pose estimation were identified such as shift, shadowing, perspective projection, jagged edges, and differential point cloud density. These characteristics will have to be considered in developing effective non-cooperative pose estimation algorithms. The modeling capability will allow extensive non-cooperative pose estimation performance simulations prior to field testing, saving development cost and providing performance metrics of the pose estimation concepts and algorithms under evaluation. The modeling process also provides "truth" pose of the test objects with respect to the sensor frame so that the pose estimation error can be quantified.
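
    The core operation the paper builds on is compact enough to sketch directly: a 4x4 homogeneous transform applies a rotation R and translation p to a whole point cloud in a single multiply, and chained transforms (sensor to body to inertial) are just matrix products. The values below are illustrative, not from the paper.

```python
import numpy as np

def homogeneous(R, p):
    """Pack rotation R (3x3) and translation p (3,) into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])  # yaw rotation
p = np.array([1.0, -2.0, 0.5])
T = homogeneous(R, p)

# Transform a simulated point cloud via homogeneous coordinates.
cloud = np.random.default_rng(4).uniform(-1, 1, size=(1000, 3))
h = np.hstack([cloud, np.ones((1000, 1))])
transformed = (T @ h.T).T[:, :3]
# T itself is the "truth" pose of the object in the sensor frame, so a pose
# estimator run on `transformed` can be scored against it exactly.
```

    Effects like shadowing and perspective projection come from the sensor model layered on top of this transform; the transform itself is the part that supplies truth poses for error quantification.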

  5. Improved Modeling of Three-Point Estimates for Decision Making: Going Beyond the Triangle

    DTIC Science & Technology

    2016-03-01

    Improved Modeling of Three-Point Estimates for Decision Making: Going Beyond the Triangle. Master's thesis by Daniel W. Mulligan (Civilian), March 2016; Thesis Advisor: Mark Rhoades. Distribution unlimited.
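
    The thesis topic can be illustrated with a hedged sketch: the triangular distribution implied by a low/most-likely/high estimate versus the beta-PERT alternative commonly proposed for going beyond the triangle. The specific numbers and the lambda = 4 PERT weighting are textbook conventions assumed here, not taken from the thesis.

```python
import random

low, mode, high = 10.0, 12.0, 30.0   # a right-skewed three-point cost estimate

tri_mean = (low + mode + high) / 3          # triangular mean
pert_mean = (low + 4 * mode + high) / 6     # beta-PERT mean (lambda = 4)

random.seed(5)
tri_samples = [random.triangular(low, high, mode) for _ in range(100_000)]

# beta-PERT: a Beta(alpha, beta) variate rescaled to [low, high].
alpha = 1 + 4 * (mode - low) / (high - low)
beta = 1 + 4 * (high - mode) / (high - low)
pert_samples = [low + (high - low) * random.betavariate(alpha, beta)
                for _ in range(100_000)]

# PERT weights the mode 4x, pulling the mean toward the most-likely value.
print(round(tri_mean, 2), round(pert_mean, 2))
```

    For skewed estimates like this one, the two distributions give materially different expected values, which is exactly the kind of gap a decision model must account for.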

  6. Marked point process for modelling seismic activity (case study in Sumatra and Java)

    NASA Astrophysics Data System (ADS)

    Pratiwi, Hasih; Sulistya Rini, Lia; Wayan Mangku, I.

    2018-05-01

    Earthquakes are natural phenomena that are random and irregular in space and time. The forecasting of earthquake occurrence at a given location remains difficult, so earthquake forecast methodology is still being developed from both the seismological and the stochastic perspectives. To describe such random natural phenomena, in both space and time, a point process approach can be used. There are two types of point processes: temporal point processes and spatial point processes. A temporal point process relates to events observed over time as a time sequence, whereas a spatial point process describes the locations of objects in two- or three-dimensional space. The points of a point process can be labelled with additional information called marks. A marked point process can be considered as a set of pairs (x, m), where x is the location of a point and m is the mark attached to it. This study models a marked point process indexed by time on earthquake data for Sumatra Island and Java Island. The model can be used to analyse seismic activity through its intensity function, conditioned on the history of the process up to time t. Based on data obtained from the U.S. Geological Survey from 1973 to 2017 with a magnitude threshold of 5, we obtained maximum likelihood estimates for the parameters of the intensity function. The parameter estimates show that seismic activity on Sumatra Island is greater than on Java Island.
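
    A minimal simulation conveys the (time, mark) structure; this is an illustration only, not the paper's fitted model, and the rate, b-value and catalogue length are invented (the 44-year horizon merely mirrors a 1973-2017 window):

```python
import random

random.seed(6)
lam = 12.0       # assumed events per year above the magnitude threshold
b = 1.0          # assumed Gutenberg-Richter b-value
horizon = 44.0   # years of catalogue

# A stationary marked Poisson process: exponential inter-event times, with
# magnitudes drawn from a shifted exponential above threshold 5
# (the continuous form of the Gutenberg-Richter law, rate b * ln 10).
t, events = 0.0, []
while True:
    t += random.expovariate(lam)
    if t > horizon:
        break
    mag = 5.0 + random.expovariate(b * 2.302585)
    events.append((t, mag))    # (time, mark) pairs

print(len(events), round(max(m for _, m in events), 2))
```

    A history-dependent intensity, as used in the paper, would replace the constant rate `lam` with a function of past events, but the (x, m) pair structure is the same.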

  7. Stability of equilibrium points in intraguild predation model with disease with SI model

    NASA Astrophysics Data System (ADS)

    Hassan, Aimi Nuraida binti Ali; Bujang, Noriham binti; Mahdi, Ahmad Faisal Bin

    2017-04-01

    Intraguild predation (IGP) refers to killing and eating among potential competitors. It is a widespread interaction, distinct from pure competition or pure predation. A Lotka-Volterra competition model and an intraguild predation model are analyzed. The models assume that no immigration or emigration is involved. This paper considers only an IGP model with susceptible and infective (SI) classes. The stability analysis of the equilibrium points of the intraguild predation model with disease, using the Routh-Hurwitz criteria, is illustrated with numerical examples.
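
    The Routh-Hurwitz check itself is mechanical and can be sketched for a generic 3-D system (the Jacobian below is a made-up example, not the paper's IGP-SI equilibrium): for a characteristic polynomial s^3 + a1 s^2 + a2 s + a3, local asymptotic stability requires a1 > 0, a3 > 0 and a1 a2 > a3.

```python
import numpy as np

def routh_hurwitz_stable(J):
    """Routh-Hurwitz stability test for a 3x3 Jacobian at an equilibrium."""
    a1, a2, a3 = np.poly(J)[1:]   # coefficients of the monic char. polynomial
    return a1 > 0 and a3 > 0 and a1 * a2 > a3

J = np.array([[-1.0, 0.5, 0.0],
              [0.2, -2.0, 0.3],
              [0.0, 0.4, -0.5]])  # hypothetical Jacobian at an equilibrium
print(routh_hurwitz_stable(J))

# Cross-check: the criteria hold iff every eigenvalue has negative real part.
print(np.all(np.linalg.eigvals(J).real < 0))
```

    In practice the a_i are obtained symbolically from the model's Jacobian at each equilibrium, and the same three inequalities decide stability.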

  8. Image-Based Airborne LiDAR Point Cloud Encoding for 3d Building Model Retrieval

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Chen; Lin, Chao-Hung

    2016-06-01

    With the development of Web 2.0 and cyber city modeling, an increasing number of 3D models have become available on web-based model-sharing platforms for applications such as navigation, urban planning, and virtual reality. Based on the concept of data reuse, a 3D model retrieval system is proposed to retrieve building models similar to a user-specified query. The basic idea behind this system is to reuse existing 3D building models instead of reconstructing them from point clouds. To retrieve models efficiently, the models in a database are generally encoded compactly using a shape descriptor. However, most geometric descriptors in related work apply to polygonal models. In this study, the input query to the model retrieval system is a point cloud acquired by a Light Detection and Ranging (LiDAR) system, chosen for its efficient scene scanning and spatial information collection. Using point clouds, with their sparse, noisy, and incomplete sampling, as input queries is more difficult than using 3D models. Because the building roof is more informative than other parts in an airborne LiDAR point cloud, an image-based approach is proposed to encode both the point clouds from input queries and the 3D models in the database. The main goal of the encoding is that database models and input point clouds be encoded consistently. Firstly, top-view depth images of buildings are generated to represent the geometric surface of a building roof. Secondly, geometric features are extracted from the depth images based on the height, edges and planes of the building. Finally, descriptors are extracted via spatial histograms and used in the 3D model retrieval system. For retrieval, models are matched through the encoding coefficients of the point clouds and building models. In the experiments, a database of about 900,000 3D models collected from the Internet is used to evaluate retrieval, and the results show a clear superiority of the proposed method.
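
    The top-view depth-image step can be sketched as a simple rasterization (synthetic points and grid parameters are assumptions, not the paper's settings): keep the maximum height per grid cell so the roof surface dominates the image.

```python
import numpy as np

def depth_image(points, cell=1.0, shape=(32, 32)):
    """Rasterize an (N, 3) point cloud into a top-view max-height image."""
    img = np.zeros(shape)
    ix = (points[:, 0] / cell).astype(int).clip(0, shape[0] - 1)
    iy = (points[:, 1] / cell).astype(int).clip(0, shape[1] - 1)
    np.maximum.at(img, (ix, iy), points[:, 2])   # max height per cell
    return img

# Synthetic "roof": a sloped plane sampled with noise in height.
rng = np.random.default_rng(7)
pts = rng.uniform([0, 0, 0], [32, 32, 1], size=(20_000, 3))
pts[:, 2] += 0.5 * pts[:, 0]   # plane rising along x
img = depth_image(pts)
# Edge and plane features (e.g. image gradients) are then computed on `img`;
# database models, sampled into points, go through the same rasterization,
# which is what makes the two encodings consistent.
```

    Max-height aggregation is one reasonable choice here; mean or percentile heights would be alternatives with different robustness to outliers.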

  9. A removal model for estimating detection probabilities from point-count surveys

    USGS Publications Warehouse

    Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.

    2002-01-01

    Use of point-count surveys is a popular method for collecting data on abundance and distribution of birds. However, analyses of such data often ignore potential differences in detection probability. We adapted a removal model to directly estimate detection probability during point-count surveys. The model assumes that singing frequency is a major factor influencing probability of detection when birds are surveyed using point counts. This may be appropriate for surveys in which most detections are by sound. The model requires counts to be divided into several time intervals. Point counts are often conducted for 10 min, where the number of birds recorded is divided into those first observed in the first 3 min, the subsequent 2 min, and the last 5 min. We developed a maximum-likelihood estimator for the detectability of birds recorded during counts divided into those intervals. This technique can easily be adapted to point counts divided into intervals of any length. We applied this method to unlimited-radius counts conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. We found differences in detection probability among species. Species that sing frequently such as Winter Wren (Troglodytes troglodytes) and Acadian Flycatcher (Empidonax virescens) had high detection probabilities (∼90%) and species that call infrequently such as Pileated Woodpecker (Dryocopus pileatus) had low detection probability (36%). We also found detection probabilities varied with the time of day for some species (e.g. thrushes) and between observers for other species. We used the same approach to estimate detection probability and density for a subset of the observations with limited-radius point counts.

  10. An infrared sky model based on the IRAS point source data

    NASA Technical Reports Server (NTRS)

    Cohen, Martin; Walker, Russell; Wainscoat, Richard; Volk, Kevin; Walker, Helen; Schwartz, Deborah

    1990-01-01

    A detailed model for the infrared point source sky is presented that comprises geometrically and physically realistic representations of the galactic disk, bulge, spheroid, spiral arms, molecular ring, and absolute magnitudes. The model was guided by a parallel Monte Carlo simulation of the Galaxy. The content of the galactic source table constitutes an excellent match to the 12 micrometer luminosity function in the simulation, as well as the luminosity functions at V and K. Models are given for predicting the density of asteroids to be observed, and the diffuse background radiance of the Zodiacal cloud. The model can be used to predict the character of the point source sky expected for observations from future infrared space experiments.

  11. Statistical properties of several models of fractional random point processes

    NASA Astrophysics Data System (ADS)

    Bendjaballah, C.

    2011-08-01

    Statistical properties of several models of fractional random point processes have been analyzed from the counting and time interval statistics points of view. Based on the criterion of the reduced variance, it is seen that such processes exhibit nonclassical properties. The conditions for these processes to be treated as conditional Poisson processes are examined. Numerical simulations illustrate part of the theoretical calculations.

  12. Stochastic point-source modeling of ground motions in the Cascadia region

    USGS Publications Warehouse

    Atkinson, G.M.; Boore, D.M.

    1997-01-01

    A stochastic model is used to develop preliminary ground motion relations for the Cascadia region for rock sites. The model parameters are derived from empirical analyses of seismographic data from the Cascadia region. The model is based on a Brune point-source characterized by a stress parameter of 50 bars. The model predictions are compared to ground-motion data from the Cascadia region and to data from large earthquakes in other subduction zones. The point-source simulations match the observations from moderate events (M 100 km). The discrepancy at large magnitudes suggests further work on modeling finite-fault effects and regional attenuation is warranted. In the meantime, the preliminary equations are satisfactory for predicting motions from events of M < 7 and provide conservative estimates of motions from larger events at distances less than 100 km.

  13. Bayesian Multiscale Modeling of Closed Curves in Point Clouds

    PubMed Central

    Gu, Kelvin; Pati, Debdeep; Dunson, David B.

    2014-01-01

    Modeling object boundaries based on image or point cloud data is frequently necessary in medical and scientific applications ranging from detecting tumor contours for targeted radiation therapy, to the classification of organisms based on their structural information. In low-contrast images or sparse and noisy point clouds, there is often insufficient data to recover local segments of the boundary in isolation. Thus, it becomes critical to model the entire boundary in the form of a closed curve. To achieve this, we develop a Bayesian hierarchical model that expresses highly diverse 2D objects in the form of closed curves. The model is based on a novel multiscale deformation process. By relating multiple objects through a hierarchical formulation, we can successfully recover missing boundaries by borrowing structural information from similar objects at the appropriate scale. Furthermore, the model’s latent parameters help interpret the population, indicating dimensions of significant structural variability and also specifying a ‘central curve’ that summarizes the collection. Theoretical properties of our prior are studied in specific cases and efficient Markov chain Monte Carlo methods are developed, evaluated through simulation examples and applied to panorex teeth images for modeling teeth contours and also to a brain tumor contour detection problem. PMID:25544786

  14. 3D Modeling of Components of a Garden by Using Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Kumazakia, R.; Kunii, Y.

    2016-06-01

    Laser measurement is currently applied to several tasks such as plumbing management, road investigation through mobile mapping systems, and elevation model utilization through airborne LiDAR. Effective laser measurement methods have been well-documented in civil engineering, but few attempts have been made to establish equally effective methods in landscape engineering. By using point cloud data acquired through laser measurement, the aesthetic landscaping of Japanese gardens can be enhanced. This study focuses on simple landscape simulations for pruning and rearranging trees as well as rearranging rocks, lanterns, and other garden features by using point cloud data. However, such simulations lack concreteness. Therefore, this study considers the construction of a library of garden features extracted from point cloud data. The library would serve as a resource for creating new gardens and simulating gardens prior to conducting repairs. Extracted garden features are imported as 3ds Max objects, and realistic 3D models are generated by using a material editor system. As further work toward the publication of a 3D model library, file formats for tree crowns and trunks should be adjusted. Moreover, reducing the size of created models is necessary. Models created using point cloud data are informative because simply shaped garden features such as trees are often seen in the 3D industry.

  15. The importance of topographically corrected null models for analyzing ecological point processes.

    PubMed

    McDowall, Philip; Lynch, Heather J

    2017-07-01

    Analyses of point process patterns and related techniques (e.g., MaxEnt) make use of the expected number of occurrences per unit area and second-order statistics based on the distance between occurrences. Ecologists working with point process data often assume that points exist on a two-dimensional x-y plane or within a three-dimensional volume, when in fact many observed point patterns are generated on a two-dimensional surface existing within three-dimensional space. For many surfaces, however, such as the topography of landscapes, the projection from the surface to the x-y plane preserves neither area nor distance. As such, when these point patterns are implicitly projected to and analyzed in the x-y plane, our expectations of the point pattern's statistical properties may not be met. When used in hypothesis testing, we find that the failure to account for the topography of the generating surface may bias statistical tests that incorrectly identify clustering and, furthermore, may bias coefficients in inhomogeneous point process models that incorporate slope as a covariate. We demonstrate the circumstances under which this bias is significant, and present simple methods that allow point processes to be simulated with corrections for topography. These point patterns can then be used to generate "topographically corrected" null models against which observed point processes can be compared. © 2017 by the Ecological Society of America.
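
    The geometric core of the argument is the slope-dependent area scale factor: a process that is homogeneous per unit surface area looks inhomogeneous once projected to the x-y plane. A toy sketch under the assumption of a locally planar surface (function names are ours, not the authors'):

```python
import math

def area_scale(dzdx, dzdy):
    """Ratio of true surface area to projected x-y area for a locally planar
    patch with height gradient (dzdx, dzdy): sqrt(1 + |grad h|^2)."""
    return math.sqrt(1.0 + dzdx ** 2 + dzdy ** 2)

def projected_intensity(surface_intensity, dzdx, dzdy):
    """Expected points per unit *projected* x-y area for a process that is
    homogeneous per unit *surface* area. A planar null model using the
    surface intensity would read the inflated count as apparent clustering
    on steep terrain."""
    return surface_intensity * area_scale(dzdx, dzdy)

# A 45-degree slope inflates the projected intensity by sqrt(2), about 41%:
lam_xy = projected_intensity(100.0, 1.0, 0.0)
```

    A "topographically corrected" null model, in this simplified picture, simulates points with the projected intensity rather than the flat-plane one.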

  16. Nonrelativistic approaches derived from point-coupling relativistic models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lourenco, O.; Dutra, M.; Delfino, A.

    2010-03-15

    We construct nonrelativistic versions of relativistic nonlinear hadronic point-coupling models, based on new normalized spinor wave functions after small-component reduction. These expansions give us energy density functionals that can be compared to their relativistic counterparts. We show that the agreement between the nonrelativistic limit approach and the Skyrme parametrizations becomes strongly dependent on the incompressibility of each model. We also show that the particular case A = B = 0 (Walecka model) leads to the same energy density functional as the Skyrme parametrizations SV and ZR2, while the truncation scheme, up to order ρ³, leads to parametrizations for which σ = 1.

  17. Identifying influential data points in hydrological model calibration and their impact on streamflow predictions

    NASA Astrophysics Data System (ADS)

    Wright, David; Thyer, Mark; Westra, Seth

    2015-04-01

    Highly influential data points are those that have a disproportionately large impact on model performance, parameters and predictions. However, in current hydrological modelling practice the relative influence of individual data points on hydrological model calibration is not commonly evaluated. This presentation illustrates and evaluates several influence diagnostics tools that hydrological modellers can use to assess the relative influence of data. The feasibility and importance of including influence detection diagnostics as a standard tool in hydrological model calibration is discussed. Two classes of influence diagnostics are evaluated: (1) computationally demanding numerical "case deletion" diagnostics; and (2) computationally efficient analytical diagnostics, based on Cook's distance. These diagnostics are compared against hydrologically orientated diagnostics that describe changes in the model parameters (measured through the Mahalanobis distance), performance (objective function displacement) and predictions (mean and maximum streamflow). These influence diagnostics are applied to two case studies: a stage/discharge rating curve model, and a conceptual rainfall-runoff model (GR4J). Removing a single data point from the calibration resulted in differences to mean flow predictions of up to 6% for the rating curve model, and differences to mean and maximum flow predictions of up to 10% and 17%, respectively, for the hydrological model. When using the Nash-Sutcliffe efficiency in calibration, the computationally cheaper Cook's distance metrics produce similar results to the case-deletion metrics at a fraction of the computational cost. However, Cook's distance is adapted from linear regression, with inherent assumptions about the data, and is therefore less flexible than case deletion. Influential point detection diagnostics show great potential to improve current hydrological modelling practices by identifying highly influential data points. The findings of this
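
    The contrast between the two diagnostic classes can be sketched for ordinary least squares with synthetic data (a simplified illustration; the rating-curve and GR4J models in the study are far richer, and the data here are invented):

```python
import numpy as np

def cooks_distance(X, y):
    """Analytical influence diagnostic for OLS:
    D_i = e_i^2 * h_ii / (p * s^2 * (1 - h_ii)^2)."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)   # leverages
    s2 = e @ e / (n - p)
    return e ** 2 * h / (p * s2 * (1 - h) ** 2)

def case_deletion_shift(X, y):
    """Numerical 'case deletion' diagnostic: refit without each point and
    record the parameter displacement (the computationally costly route)."""
    beta_full, *_ = np.linalg.lstsq(X, y, rcond=None)
    shifts = []
    for i in range(len(y)):
        keep = np.arange(len(y)) != i
        beta_i, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        shifts.append(np.linalg.norm(beta_full - beta_i))
    return np.array(shifts)

# Synthetic data with one high-leverage, off-trend observation (index 10):
x = np.append(np.arange(10.0), 30.0)
y = 2.0 * x
y[10] += 20.0
X = np.column_stack([np.ones_like(x), x])
```

    Both diagnostics flag index 10 as most influential; Cook's distance needs one fit, case deletion needs n + 1.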

  18. Comprehensive overview of the Point-by-Point model of prompt emission in fission

    NASA Astrophysics Data System (ADS)

    Tudora, A.; Hambsch, F.-J.

    2017-08-01

    The investigation of prompt emission in fission is very important in understanding the fission process and to improve the quality of evaluated nuclear data required for new applications. In the last decade remarkable efforts were made in both the development of prompt emission models and the experimental investigation of the properties of fission fragments and the prompt neutron and γ-ray emission. The accurate experimental data concerning the prompt neutron multiplicity as a function of fragment mass and total kinetic energy for 252Cf(SF) and 235U(n,f) recently measured at JRC-Geel (as well as other various prompt emission data) allow a consistent and very detailed validation of the Point-by-Point (PbP) deterministic model of prompt emission. The PbP model results describe very well a large variety of experimental data, starting from the multi-parametric matrices of prompt neutron multiplicity ν(A,TKE) and γ-ray energy Eγ(A,TKE), which validate the model itself, passing through different average prompt emission quantities as a function of A (e.g., ν(A), Eγ(A), ⟨ε⟩(A), etc.) and as a function of TKE (e.g., ν(TKE), Eγ(TKE)), up to the prompt neutron distribution P(ν) and the total average prompt neutron spectrum. The PbP model does not use free or adjustable parameters. To calculate the multi-parametric matrices it needs only data included in the Reference Input Parameter Library (RIPL) of the IAEA. To provide average prompt emission quantities as a function of A, of TKE and total average quantities, the multi-parametric matrices are averaged over reliable experimental fragment distributions. The PbP results are also in agreement with the results of the Monte Carlo prompt emission codes FIFRELIN, CGMF and FREYA. The good description of a large variety of experimental data proves the capability of the PbP model to be used in nuclear data evaluations and its reliability to predict prompt emission data for fissioning nuclei and incident energies for

  19. Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks.

    PubMed

    Hagen, Espen; Dahmen, David; Stavrinou, Maria L; Lindén, Henrik; Tetzlaff, Tom; van Albada, Sacha J; Grün, Sonja; Diesmann, Markus; Einevoll, Gaute T

    2016-12-01

    With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity, can be used with an arbitrary number of point-neuron network populations, and allow for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network model for a ∼1 mm² patch of primary visual cortex, predict laminar LFPs for different network states, assess the relative LFP contribution from different laminar populations, and investigate effects of input correlations and neuron density on the LFP. The generic nature of the hybrid scheme and its public implementation in hybridLFPy form the basis for LFP predictions from other and larger point-neuron network models, as well as extensions of the current application with additional biological detail. © The Author 2016. Published by Oxford University Press.

  20. A Fast Method for Measuring the Similarity Between 3D Model and 3D Point Cloud

    NASA Astrophysics Data System (ADS)

    Zhang, Zongliang; Li, Jonathan; Li, Xin; Lin, Yangbin; Zhang, Shanxin; Wang, Cheng

    2016-06-01

    This paper proposes a fast method for measuring the partial Similarity between 3D Model and 3D point Cloud (SimMC). It is crucial to measure SimMC for many point-cloud-related applications such as 3D object retrieval and inverse procedural modelling. In our proposed method, the surface area of the model and the Distance from Model to point Cloud (DistMC) are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted average of the distances between points sampled from the model and the point cloud. Similarly, Distance from point Cloud to Model (DistCM) is defined as the average of the distances between points in the point cloud and the model. In order to reduce the huge computational burden brought by the calculation of DistCM in some traditional methods, we define SimMC as the ratio of the weighted surface area of the model to DistMC. Compared to those traditional SimMC measuring methods that are only able to measure global similarity, our method is capable of measuring partial similarity by employing a distance-weighted strategy. Moreover, our method is faster than other partial similarity assessment methods. We demonstrate the superiority of our method on both synthetic data and laser scanning data.
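
    One plausible reading of the distance definitions, sketched with brute-force nearest-neighbour search (the function names and uniform default weighting are our assumptions; the paper's weighting scheme is more elaborate):

```python
import numpy as np

def dist_mc(model_pts, cloud_pts, weights=None):
    """DistMC-style measure: (weighted) mean distance from points sampled
    on the model surface to their nearest neighbours in the point cloud."""
    d = np.linalg.norm(model_pts[:, None, :] - cloud_pts[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return np.average(nearest, weights=weights)

def sim_mc(model_area, model_pts, cloud_pts, weights=None, eps=1e-9):
    """SimMC-style ratio of model surface area to DistMC: larger values
    mean the cloud hugs the model surface more closely."""
    return model_area / (dist_mc(model_pts, cloud_pts, weights) + eps)

# Model samples on a unit square, cloud shifted 0.1 above it:
g = np.linspace(0.0, 1.0, 5)
xx, yy = np.meshgrid(g, g)
model = np.column_stack([xx.ravel(), yy.ravel(), np.zeros(xx.size)])
cloud = model + np.array([0.0, 0.0, 0.1])
```

    Sampling points from the model and querying the cloud, rather than the reverse, is what avoids the expensive cloud-to-model pass the abstract mentions.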

  1. The four fixed points of scale invariant single field cosmological models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xue, BingKan, E-mail: bxue@princeton.edu

    2012-10-01

    We introduce a new set of flow parameters to describe the time dependence of the equation of state and the speed of sound in single field cosmological models. A scale invariant power spectrum is produced if these flow parameters satisfy specific dynamical equations. We analyze the flow of these parameters and find four types of fixed points that encompass all known single field models. Moreover, near each fixed point we uncover new models where the scale invariance of the power spectrum relies on having simultaneously time varying speed of sound and equation of state. We describe several distinctive new models and discuss constraints from strong coupling and superluminality.

  2. General methodology for nonlinear modeling of neural systems with Poisson point-process inputs.

    PubMed

    Marmarelis, V Z; Berger, T W

    2005-07-01

    This paper presents a general methodological framework for the practical modeling of neural systems with point-process inputs (sequences of action potentials or, more broadly, identical events) based on the Volterra and Wiener theories of functional expansions and system identification. The paper clarifies the distinctions between Volterra and Wiener kernels obtained from Poisson point-process inputs. It shows that only the Wiener kernels can be estimated via cross-correlation, but must be defined as zero along the diagonals. The Volterra kernels can be estimated far more accurately (and from shorter data-records) by use of the Laguerre expansion technique adapted to point-process inputs, and they are independent of the mean rate of stimulation (unlike their P-W counterparts that depend on it). The Volterra kernels can also be estimated for broadband point-process inputs that are not Poisson. Useful applications of this modeling approach include cases where we seek to determine (model) the transfer characteristics between one neuronal axon (a point-process 'input') and another axon (a point-process 'output') or some other measure of neuronal activity (a continuous 'output', such as population activity) with which a causal link exists.

  3. Locating the quantum critical point of the Bose-Hubbard model through singularities of simple observables.

    PubMed

    Łącki, Mateusz; Damski, Bogdan; Zakrzewski, Jakub

    2016-12-02

    We show that the critical point of the two-dimensional Bose-Hubbard model can be easily found through studies of either on-site atom number fluctuations or the nearest-neighbor two-point correlation function (the expectation value of the tunnelling operator). Our strategy to locate the critical point is based on the observation that the derivatives of these observables with respect to the parameter that drives the superfluid-Mott insulator transition are singular at the critical point in the thermodynamic limit. Performing the quantum Monte Carlo simulations of the two-dimensional Bose-Hubbard model, we show that this technique leads to the accurate determination of the position of its critical point. Our results can be easily extended to the three-dimensional Bose-Hubbard model and different Hubbard-like models. They provide a simple experimentally-relevant way of locating critical points in various cold atomic lattice systems.

  4. Modeling deep brain stimulation: point source approximation versus realistic representation of the electrode

    NASA Astrophysics Data System (ADS)

    Zhang, Tianhe C.; Grill, Warren M.

    2010-12-01

    Deep brain stimulation (DBS) has emerged as an effective treatment for movement disorders; however, the fundamental mechanisms by which DBS works are not well understood. Computational models of DBS can provide insights into these fundamental mechanisms and typically require two steps: calculation of the electrical potentials generated by DBS and, subsequently, determination of the effects of the extracellular potentials on neurons. The objective of this study was to assess the validity of using a point source electrode to approximate the DBS electrode when calculating the thresholds and spatial distribution of activation of a surrounding population of model neurons in response to monopolar DBS. Extracellular potentials in a homogenous isotropic volume conductor were calculated using either a point current source or a geometrically accurate finite element model of the Medtronic DBS 3389 lead. These extracellular potentials were coupled to populations of model axons, and thresholds and spatial distributions were determined for different electrode geometries and axon orientations. Median threshold differences between DBS and point source electrodes for individual axons varied between -20.5% and 9.5% across all orientations, monopolar polarities and electrode geometries utilizing the DBS 3389 electrode. Differences in the percentage of axons activated at a given amplitude by the point source electrode and the DBS electrode were between -9.0% and 12.6% across all monopolar configurations tested. The differences in activation between the DBS and point source electrodes occurred primarily in regions close to conductor-insulator interfaces and around the insulating tip of the DBS electrode. The robustness of the point source approximation in modeling several special cases—tissue anisotropy, a long active electrode and bipolar stimulation—was also examined. Under the conditions considered, the point source was shown to be a valid approximation for predicting excitation
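
    The first modeling step described here, computing extracellular potentials in a homogeneous isotropic volume conductor, reduces for the point-source case to a one-line formula. A minimal sketch with illustrative values (not those of the study):

```python
import math

def point_source_potential(i_amps, sigma, r_m):
    """V(r) = I / (4*pi*sigma*r) for a point current source in an infinite
    homogeneous isotropic medium with conductivity sigma (S/m)."""
    return i_amps / (4.0 * math.pi * sigma * r_m)

# A -1 mA cathodic current in tissue of conductivity 0.2 S/m, sampled 1 mm away:
v = point_source_potential(-1e-3, 0.2, 1e-3)
```

    The finite-element electrode model in the study replaces this closed form near the conductor-insulator interfaces, which is exactly where the abstract reports the approximation breaking down.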

  5. Photosensitization of DNA damage by a new cationic pyropheophorbide derivative: sequence-specific formation of a frank scission.

    PubMed

    Kanony, Claire; Fabiano-Tixier, Anne-Sylvie; Ravanat, Jean-Luc; Vicendo, Patricia; Paillous, Nicole

    2003-06-01

    Pyropheophorbides are red-absorbing porphyrin-like photosensitizers that may interact with DNA either by intercalation or by external binding with self-stacking according to the value of the nucleotide to chromophore molar ratio (N/C). This article reports on the nature and sequence selectivity of the DNA damage photoinduced by a water-soluble hydrochloride of an aminopyropheophorbide. First, this pyropheophorbide is shown to induce on irradiation the cleavage of phiX174 DNA by both Type-I and Type-II mechanisms, as suggested by scavenger and D2O effects. These conclusions are then refined by sequencing experiments performed on a 20-mer oligodeoxynucleotide (ODN) irradiated at wavelengths >345 nm in the presence of the dye, N/C varying from 2.5 to 0.5. Oxidation of all guanine residues to the same extent is observed after piperidine treatment on both single- and double-stranded ODN. Moreover, unexpectedly, a remarkable sequence-selective cleavage occurring at a 5'-CG-3' site is detected before alkali treatment. This frank break is clearly predominant for a low nucleotide to chromophore molar ratio, corresponding to a self-stacking of the dye along the DNA helix. The electrophoretic properties of the band suggest that this lesion results from a sugar oxidation, which leads via a base release to a ribonolactone residue. The proposal is supported by high-performance liquid chromatography-matrix-assisted laser desorption-ionization mass spectrometry experiments that also reveal other sequence-selective frank scissions of lower intensity at 5'-GC-3' or other 5'-CG-3' sites. This sequence selectivity is discussed with regard to the binding selectivity of cationic porphyrins.

  6. The RTOG Outcomes Model: economic end points and measures.

    PubMed

    Konski, Andre; Watkins-Bruner, Deborah

    2004-03-01

    Recognising the value added by economic evaluations of clinical trials and the interaction of clinical, humanistic and economic end points, the Radiation Therapy Oncology Group (RTOG) has developed an Outcomes Model that guides the comprehensive assessment of this triad of end points. This paper will focus on the economic component of the model. The Economic Impact Committee was founded in 1994 to study the economic impact of clinical trials of cancer care. A steep learning curve ensued with considerable time initially spent understanding the methodology of economic analysis. Since then, economic analyses have been performed on RTOG clinical trials involving treatments for patients with non-small cell lung cancer, locally-advanced head and neck cancer and prostate cancer. As the care of cancer patients evolves with time, so has the economic analyses performed by the Economic Impact Committee. This paper documents the evolution of the cost-effectiveness analyses of RTOG from performing average cost-utility analysis to more technically sophisticated Monte Carlo simulation of Markov models, to incorporating prospective economic analyses as an initial end point. Briefly, results indicated that, accounting for quality-adjusted survival, concurrent chemotherapy and radiation for the treatment of non-small cell lung cancer, more aggressive radiation fractionation schedules for head and neck cancer and the addition of hormone therapy to radiation for prostate cancer are within the range of economically acceptable recommendations. The RTOG economic analyses have provided information that can further inform clinicians and policy makers of the value added of new or improved treatments.

  7. Electromagnetic braking revisited with a magnetic point dipole model

    NASA Astrophysics Data System (ADS)

    Land, Sara; McGuire, Patrick; Bumb, Nikhil; Mann, Brian P.; Yellen, Benjamin B.

    2016-04-01

    A theoretical model is developed to predict the trajectory of magnetized spheres falling through a copper pipe. The derived magnetic point dipole model agrees well with the experimental trajectories for NdFeB spherical magnets of varying diameter, which are embedded inside 3D-printed shells with fixed outer dimensions. This demonstration of electrodynamic phenomena and Lenz's law serves as a good laboratory exercise for physics, electromagnetics, and dynamics classes at the undergraduate level.
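
    With eddy-current drag linear in speed, the magnet's equation of motion m·dv/dt = m·g − k·v integrates in closed form. A sketch in which the drag coefficient k, which the point dipole model would express through the dipole moment, pipe conductivity, wall thickness and radius, is lumped into a single assumed constant:

```python
import math

def speed(t, m, g, k):
    """Closed-form speed of a magnet released from rest in a conductive
    pipe with linear eddy-current drag: v(t) = (m*g/k) * (1 - exp(-k*t/m))."""
    return (m * g / k) * (1.0 - math.exp(-k * t / m))

def terminal_speed(m, g, k):
    """Terminal speed where gravity balances drag: v_t = m*g/k."""
    return m * g / k

# Illustrative numbers: a 10 g magnet with an assumed drag coefficient of 0.5 kg/s
vt = terminal_speed(0.01, 9.81, 0.5)
```

    In a classroom setting, fitting k to a measured trajectory and comparing it with the dipole-model prediction makes the Lenz's-law demonstration quantitative.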

  8. An automated model-based aim point distribution system for solar towers

    NASA Astrophysics Data System (ADS)

    Schwarzbözl, Peter; Rong, Amadeus; Macke, Ansgar; Säck, Jan-Peter; Ulmer, Steffen

    2016-05-01

    Distribution of heliostat aim points is a major task during central receiver operation, as the flux distribution produced by the heliostats varies continuously with time. Known methods for aim point distribution are mostly based on simple aim point patterns and focus on control strategies to meet local temperature and flux limits of the receiver. Lowering the peak flux on the receiver to avoid hot spots and maximizing thermal output are obviously competing targets that call for a comprehensive optimization process. This paper presents a model-based method for online aim point optimization that includes the current heliostat field mirror quality derived through an automated deflectometric measurement process.

  9. Joint Clustering and Component Analysis of Correspondenceless Point Sets: Application to Cardiac Statistical Modeling.

    PubMed

    Gooya, Ali; Lekadir, Karim; Alba, Xenia; Swift, Andrew J; Wild, Jim M; Frangi, Alejandro F

    2015-01-01

    Construction of Statistical Shape Models (SSMs) from arbitrary point sets is a challenging problem due to significant shape variation and lack of explicit point correspondence across the training data set. In medical imaging, point sets can generally represent different shape classes that span healthy and pathological exemplars. In such cases, the constructed SSM may not generalize well, largely because the probability density function (pdf) of the point sets deviates from the underlying assumption of Gaussian statistics. To this end, we propose a generative model for unsupervised learning of the pdf of point sets as a mixture of distinctive classes. A Variational Bayesian (VB) method is proposed for making joint inferences on the labels of point sets, and the principal modes of variations in each cluster. The method provides a flexible framework to handle point sets with no explicit point-to-point correspondences. We also show that by maximizing the marginalized likelihood of the model, the optimal number of clusters of point sets can be determined. We illustrate this work in the context of understanding the anatomical phenotype of the left and right ventricles in heart. To this end, we use a database containing hearts of healthy subjects, patients with Pulmonary Hypertension (PH), and patients with Hypertrophic Cardiomyopathy (HCM). We demonstrate that our method can outperform traditional PCA in both generalization and specificity measures.

  10. Pursuit Eye-Movements in Curve Driving Differentiate between Future Path and Tangent Point Models

    PubMed Central

    Lappi, Otto; Pekkanen, Jami; Itkonen, Teemu H.

    2013-01-01

    For nearly 20 years, looking at the tangent point on the road edge has been prominent in models of visual orientation in curve driving. It is the most common interpretation of the commonly observed pattern of car drivers looking through a bend, or at the apex of the curve. Indeed, in the visual science literature, visual orientation towards the inside of a bend has become known as “tangent point orientation”. Yet, it remains to be empirically established whether it is the tangent point the drivers are looking at, or whether some other reference point on the road surface, or several reference points, are being targeted in addition to, or instead of, the tangent point. Recently discovered optokinetic pursuit eye-movements during curve driving can provide complementary evidence over and above traditional gaze-position measures. This paper presents the first detailed quantitative analysis of pursuit eye movements elicited by curvilinear optic flow in real driving. The data implicates the far zone beyond the tangent point as an important gaze target area during steady-state cornering. This is in line with the future path steering models, but difficult to reconcile with any pure tangent point steering model. We conclude that the tangent point steering models do not provide a general explanation of eye movement and steering during a curve driving sequence and cannot be considered uncritically as the default interpretation when the gaze position distribution is observed to be situated in the region of the curve apex. PMID:23894300

  11. Polarizable six-point water models from computational and empirical optimization.

    PubMed

    Tröster, Philipp; Lorenzen, Konstantin; Tavan, Paul

    2014-02-13

    Tröster et al. (J. Phys. Chem B 2013, 117, 9486-9500) recently suggested a mixed computational and empirical approach to the optimization of polarizable molecular mechanics (PMM) water models. In the empirical part the parameters of Buckingham potentials are optimized by PMM molecular dynamics (MD) simulations. The computational part applies hybrid calculations, which combine the quantum mechanical description of a H2O molecule by density functional theory (DFT) with a PMM model of its liquid phase environment generated by MD. While the static dipole moments and polarizabilities of the PMM water models are fixed at the experimental gas phase values, the DFT/PMM calculations are employed to optimize the remaining electrostatic properties. These properties cover the width of a Gaussian inducible dipole positioned at the oxygen and the locations of massless negative charge points within the molecule (the positive charges are attached to the hydrogens). The authors considered the cases of one and two negative charges rendering the PMM four- and five-point models TL4P and TL5P. Here we extend their approach to three negative charges, thus suggesting the PMM six-point model TL6P. As compared to the predecessors and to other PMM models, which also exhibit partial charges at fixed positions, TL6P turned out to predict all studied properties of liquid water at p0 = 1 bar and T0 = 300 K with a remarkable accuracy. These properties cover, for instance, the diffusion constant, viscosity, isobaric heat capacity, isothermal compressibility, dielectric constant, density, and the isobaric thermal expansion coefficient. This success concurrently provides a microscopic physical explanation of corresponding shortcomings of previous models. It uniquely assigns the failures of previous models to substantial inaccuracies in the description of the higher electrostatic multipole moments of liquid phase water molecules. Resulting favorable properties concerning the transferability to

  12. Recent tests of the equilibrium-point hypothesis (lambda model).

    PubMed

    Feldman, A G; Ostry, D J; Levin, M F; Gribble, P L; Mitnitski, A B

    1998-07-01

    The lambda model of the equilibrium-point hypothesis (Feldman & Levin, 1995) is an approach to motor control which, like physics, is based on a logical system coordinating empirical data. The model has gone through an interesting period. On one hand, several nontrivial predictions of the model have been successfully verified in recent studies. In addition, the explanatory and predictive capacity of the model has been enhanced by its extension to multimuscle and multijoint systems. On the other hand, claims have recently appeared suggesting that the model should be abandoned. The present paper focuses on these claims and concludes that they are unfounded. Much of the experimental data that have been used to reject the model are actually consistent with it.

  13. Automatic pole-like object modeling via 3D part-based analysis of point cloud

    NASA Astrophysics Data System (ADS)

    He, Liu; Yang, Haoxiang; Huang, Yuchun

    2016-10-01

    Pole-like objects, including trees, lampposts and traffic signs, are an indispensable part of urban infrastructure. With the advance of vehicle-based laser scanning (VLS), massive point clouds of roadside urban areas are now applied in 3D digital city modeling. Based on the property that different pole-like objects have various canopy parts and similar trunk parts, this paper proposed 3D part-based shape analysis to robustly extract, identify and model pole-like objects. The proposed method includes: 3D clustering and recognition of trunks, voxel growing and part-based 3D modeling. After preprocessing, the trunk center is identified as the point that has a local density peak and the largest minimum inter-cluster distance. Starting from the trunk centers, the remaining points are iteratively clustered to the same centers as their nearest point with higher density. To eliminate noisy points, cluster borders are refined by trimming boundary outliers. Then, candidate trunks are extracted based on the clustering results in three orthogonal planes by shape analysis. Voxel growing obtains the completed pole-like objects regardless of overlaying. Finally, the entire trunk, branch and crown parts are analyzed to obtain seven feature parameters. These parameters are utilized to model the three parts respectively and obtain a single part-assembled 3D model. The proposed method is tested using the VLS-based point cloud of Wuhan University, China. The point cloud includes many kinds of trees, lampposts and other pole-like posts under different occlusions and overlaying. Experimental results show that the proposed method can extract the exact attributes and model the roadside pole-like objects efficiently.
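
    The trunk-seeding step, a local density peak combined with a largest-minimum-distance criterion and assignment of each point to its nearest higher-density neighbour, reads like density-peak clustering. A simplified 2D sketch with a Gaussian density kernel and synthetic data (our own implementation, not the authors' code; the paper works on 3D point clouds with additional shape analysis):

```python
import numpy as np

def density_peak_labels(pts, dc, n_clusters):
    """Density-peak clustering sketch: rho_i is a Gaussian-kernel local
    density, delta_i the distance to the nearest higher-density point;
    points with high rho*delta become cluster centres (trunk seeds), and
    every other point joins the cluster of its nearest higher-density
    neighbour."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    rho = np.exp(-(d / dc) ** 2).sum(axis=1)
    order = np.argsort(-rho)                 # indices in descending density
    n = len(pts)
    delta = np.empty(n)
    parent = np.arange(n)
    delta[order[0]] = d[order[0]].max()      # global peak: use max distance
    for rank in range(1, n):
        i = order[rank]
        higher = order[:rank]
        j = higher[np.argmin(d[i, higher])]  # nearest higher-density point
        delta[i], parent[i] = d[i, j], j
    centres = np.argsort(-(rho * delta))[:n_clusters]
    labels = np.full(n, -1)
    labels[centres] = np.arange(n_clusters)
    for i in order:                          # parents are processed first
        if labels[i] == -1:
            labels[i] = labels[parent[i]]
    return labels
```

    On two well-separated synthetic blobs, the two density peaks score highest on rho·delta and every point inherits its blob's label.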

  14. Fitting IRT Models to Dichotomous and Polytomous Data: Assessing the Relative Model-Data Fit of Ideal Point and Dominance Models

    ERIC Educational Resources Information Center

    Tay, Louis; Ali, Usama S.; Drasgow, Fritz; Williams, Bruce

    2011-01-01

    This study investigated the relative model-data fit of an ideal point item response theory (IRT) model (the generalized graded unfolding model [GGUM]) and dominance IRT models (e.g., the two-parameter logistic model [2PLM] and Samejima's graded response model [GRM]) to simulated dichotomous and polytomous data generated from each of these models.…

  15. First Prismatic Building Model Reconstruction from Tomosar Point Clouds

    NASA Astrophysics Data System (ADS)

    Sun, Y.; Shahzad, M.; Zhu, X.

    2016-06-01

    This paper demonstrates for the first time the potential of explicitly modelling individual roof surfaces to reconstruct 3-D prismatic building models using spaceborne tomographic synthetic aperture radar (TomoSAR) point clouds. The proposed approach is modular and works as follows: it first extracts the buildings via DSM generation and cutting off the ground terrain. The DSM is smoothed using the BM3D denoising method proposed by Dabov et al. (2007), and a gradient map of the smoothed DSM is generated based on height jumps. Watershed segmentation is then adopted to oversegment the DSM into different regions. Subsequently, height- and polygon-complexity-constrained merging is employed to refine (i.e., to reduce) the retrieved number of roof segments. A coarse outline of each roof segment is then reconstructed and later refined using a quadtree-based regularization plus a zig-zag line simplification scheme. Finally, a height is associated with each refined roof segment to obtain the 3-D prismatic model of the building. The proposed approach is illustrated and validated on a large building (convention center) in the city of Las Vegas using TomoSAR point clouds generated from a stack of 25 images with the Tomo-GENESIS software developed at DLR.

  16. A Gibbs point field model for the spatial pattern of coronary capillaries

    NASA Astrophysics Data System (ADS)

    Karch, R.; Neumann, M.; Neumann, F.; Ullrich, R.; Neumüller, J.; Schreiner, W.

    2006-09-01

    We propose a Gibbs point field model for the pattern of coronary capillaries in transverse histologic sections from human hearts, based on the physiology of oxygen supply from capillaries to tissue. To specify the potential energy function of the Gibbs point field, we draw on an analogy between the equation of steady-state oxygen diffusion from an array of parallel capillaries to the surrounding tissue and Poisson's equation for the electrostatic potential of a two-dimensional distribution of identical point charges. The influence of factors other than diffusion is treated as a thermal disturbance. On this basis, we arrive at the well-known two-dimensional one-component plasma, a system of identical point charges exhibiting a weak (logarithmic) repulsive interaction that is completely characterized by a single dimensionless parameter. By variation of this parameter, the model is able to reproduce many characteristics of real capillary patterns.
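
    A minimal Metropolis simulation of the two-dimensional one-component plasma described above can be sketched as follows; the coupling value, unit box, move size and step count are illustrative assumptions, not the paper's setup:

```python
import math, random

def plasma_energy(pts, gamma):
    """Pairwise logarithmic repulsion of the 2-D one-component plasma,
    with dimensionless coupling gamma (temperature set to 1)."""
    e = 0.0
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            r = math.dist(pts[i], pts[j])
            if r == 0.0:
                return float('inf')          # overlapping charges: infinitely costly
            e -= gamma * math.log(r)
    return e

def metropolis(pts, gamma, steps, delta=0.05, seed=1):
    """Single-particle Metropolis moves, clamped to the unit box."""
    rng = random.Random(seed)
    pts = [list(p) for p in pts]
    energy = plasma_energy(pts, gamma)
    for _ in range(steps):
        i = rng.randrange(len(pts))
        old = pts[i][:]
        pts[i] = [min(1.0, max(0.0, old[0] + rng.uniform(-delta, delta))),
                  min(1.0, max(0.0, old[1] + rng.uniform(-delta, delta)))]
        trial = plasma_energy(pts, gamma)
        if trial <= energy or rng.random() < math.exp(energy - trial):
            energy = trial                    # accept the move
        else:
            pts[i] = old                      # reject and restore
    return pts, energy

# Relax a random start at strong coupling; the pattern becomes more regular.
rng0 = random.Random(0)
start = [[rng0.random(), rng0.random()] for _ in range(20)]
e_start = plasma_energy(start, 50.0)
relaxed, e_end = metropolis(start, 50.0, steps=4000)
```

    Varying gamma interpolates between Poisson-like disorder (weak coupling) and lattice-like regularity (strong coupling), which is the single-parameter behaviour the abstract exploits.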

  17. Analysis of point-to-point lung motion with full inspiration and expiration CT data using non-linear optimization method: optimal geometric assumption model for the effective registration algorithm

    NASA Astrophysics Data System (ADS)

    Kim, Namkug; Seo, Joon Beom; Heo, Jeong Nam; Kang, Suk-Ho

    2007-03-01

    The study was conducted to develop a simple model for more robust lung registration of volumetric CT data, which is essential for various clinical lung analysis applications, including lung nodule matching in follow-up CT studies and semi-quantitative assessment of lung perfusion. The purpose of this study is to find the most effective reference point and geometric model based on lung motion analysis from CT data sets obtained at full inspiration (In.) and expiration (Ex.). Ten pairs of CT data sets from normal subjects obtained at full In. and Ex. were used. Two radiologists were requested to draw 20 points representing the subpleural point of the central axis in each segment. The apex, hilar point, and center of inertia (COI) of each unilateral lung were proposed as reference points. To evaluate the optimal expansion point, non-linear optimization without constraints was employed. The objective function is the sum of distances from the candidate point x to the lines connecting the corresponding points between In. and Ex. Using this non-linear optimization, the optimal points were evaluated and compared between reference points. The average distance between the optimal point and each line segment revealed that the balloon model was more suitable for explaining the lung expansion. This lung motion analysis, based on vector analysis and non-linear optimization, shows that a balloon model centered on the center of inertia of the lung is the most effective geometric model to explain lung expansion during breathing.
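
    The stated objective, the sum of distances from a candidate point x to a set of lines, is convex, so even plain gradient descent finds the optimum. A minimal sketch with coordinate-axis lines as stand-in data (the data and step sizes are illustrative, not the study's):

```python
import math

def dist_and_grad(x, p, d):
    """Distance from point x to the line through p with unit direction d,
    and the gradient of that distance with respect to x."""
    r = [xi - pi for xi, pi in zip(x, p)]
    t = sum(ri * di for ri, di in zip(r, d))
    perp = [ri - t * di for ri, di in zip(r, d)]     # component of r off the line
    nrm = math.sqrt(sum(c * c for c in perp))
    if nrm < 1e-12:
        return 0.0, [0.0, 0.0, 0.0]                  # on the line: zero subgradient
    return nrm, [c / nrm for c in perp]

def optimal_point(lines, x0, step=0.005, iters=5000):
    """Gradient descent on the (convex) sum of point-to-line distances."""
    x = list(x0)
    for _ in range(iters):
        g = [0.0, 0.0, 0.0]
        for p, d in lines:
            _, gi = dist_and_grad(x, p, d)
            g = [a + b for a, b in zip(g, gi)]
        x = [a - step * b for a, b in zip(x, g)]
    return x

# Three coordinate axes all pass through the origin, so the optimum is (0, 0, 0).
axes = [((0.0, 0.0, 0.0), (1.0, 0.0, 0.0)),
        ((0.0, 0.0, 0.0), (0.0, 1.0, 0.0)),
        ((0.0, 0.0, 0.0), (0.0, 0.0, 1.0))]
x_opt = optimal_point(axes, (0.5, 0.4, 0.3))
```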

  18. Modelling of point and diffuse pollution: application of the Moneris model in the Ipojuca river basin, Pernambuco State, Brazil.

    PubMed

    de Lima Barros, Alessandra Maciel; do Carmo Sobral, Maria; Gunkel, Günter

    2013-01-01

    Emissions of pollutants and nutrients cause several problems in aquatic ecosystems; in general, an excess of nutrients, specifically nitrogen and phosphorus, is responsible for the eutrophication of water bodies. In most developed countries, more attention is given to diffuse pollution because problems with point pollution have already been solved. In many developing countries, basic data for point and diffuse pollution are not available. The focus of the present study is to quantify nutrient emissions from point and diffuse sources in the Ipojuca river basin, Pernambuco State, Brazil, using the Moneris model (Modelling Nutrient Emissions in River Systems). This model was developed in Germany and has already been implemented in more than 600 river basins. The model is mainly based on river flow, water quality and geographical information system data. According to the Moneris model results, untreated domestic sewage is the major source of nutrients in the Ipojuca river basin. The Moneris model has shown itself to be a useful tool for identifying and quantifying point and diffuse nutrient sources, thus enabling the adoption of measures to reduce them. The Moneris model, applied for the first time to a tropical river basin with intermittent flow, can be used as a reference for implementation in other watersheds.

  19. PCTO-SIM: Multiple-point geostatistical modeling using parallel conditional texture optimization

    NASA Astrophysics Data System (ADS)

    Pourfard, Mohammadreza; Abdollahifard, Mohammad J.; Faez, Karim; Motamedi, Sayed Ahmad; Hosseinian, Tahmineh

    2017-05-01

    Multiple-point geostatistics is a well-known statistical framework by which complex geological phenomena have been modeled efficiently. Pixel-based and patch-based methods are its two major categories. In this paper, an optimization-based approach is used, which has a dual concept in texture synthesis known as texture optimization. Our extended version of texture optimization uses an energy concept to model geological phenomena. While honoring the hard data, the minimization of our proposed cost function forces simulation grid pixels to be as similar as possible to the training images. Our algorithm has a self-enrichment capability and creates a richer training database from a sparser one by mixing the information of all patches surrounding the simulation nodes. It therefore preserves pattern continuity very well in both continuous and categorical variables. Each of its realizations also shows a fuzzy result similar to the expected result of multiple realizations of other statistical models. While the main core of most previous multiple-point geostatistics methods is sequential, the parallel core of our algorithm enables it to use the GPU efficiently to reduce CPU time. A new validation method for MPS is also proposed in this paper.

  20. Capacity Estimation Model for Signalized Intersections under the Impact of Access Point

    PubMed Central

    Zhao, Jing; Li, Peng; Zhou, Xizhao

    2016-01-01

    Highway Capacity Manual 2010 provides various factors to adjust the base saturation flow rate for the capacity analysis of signalized intersections. No factor, however, accounts for the potential change in signalized intersection capacity caused by an access point close to the intersection. This paper presents a theoretical model to estimate lane group capacity at signalized intersections with consideration of the effects of access points. Two scenarios of access point location, upstream or downstream of the signalized intersection, and the impacts of six types of access traffic flow are taken into account. The proposed capacity model was validated with VISSIM simulation. Results of extensive numerical analysis reveal the substantial impact of an access point on capacity, which is inversely correlated with both the number of major-street lanes and the distance between the intersection and the access point. Moreover, among the six types of access traffic flow, flow 1 (right-turning traffic from the major street), flow 4 (left-turning traffic from the access point), and flow 5 (left-turning traffic from the major street) affect lane group capacity more significantly than the others. Some guidance on mitigating the negative effects is provided for practitioners. PMID:26726998

  1. Two-Point Turbulence Closure Applied to Variable Resolution Modeling

    NASA Technical Reports Server (NTRS)

    Girimaji, Sharath S.; Rubinstein, Robert

    2011-01-01

    Variable resolution methods have become frontline CFD tools, but in order to take full advantage of this promising new technology, more formal theoretical development is desirable. Two general classes of variable resolution methods can be identified: hybrid or zonal methods in which RANS and LES models are solved in different flow regions, and bridging or seamless models which interpolate smoothly between RANS and LES. This paper considers the formulation of bridging methods using methods of two-point closure theory. The fundamental problem is to derive a subgrid two-equation model. We compare and reconcile two different approaches to this goal: the Partially Integrated Transport Model, and the Partially Averaged Navier-Stokes method.

  2. Depinning of the Bragg glass in a point disordered model superconductor.

    PubMed

    Olsson, Peter

    2007-03-02

    We perform simulations of the three-dimensional frustrated anisotropic XY model with point disorder as a model of a type-II superconductor with quenched point pinning in a magnetic field and a weak applied current. Using resistively shunted junction dynamics, we find a critical current I_c that separates a creep region with immeasurably low voltage from a region with a voltage V ∝ (I − I_c), and also identify the mechanism behind this behavior. It also turns out that data at fixed disorder strength may be collapsed by plotting V versus TI, where T is the temperature, though the reason for this behavior is not yet fully understood.

  3. Equilibrium points, stability and numerical solutions of fractional-order predator-prey and rabies models

    NASA Astrophysics Data System (ADS)

    Ahmed, E.; El-Sayed, A. M. A.; El-Saka, H. A. A.

    2007-01-01

    In this paper we are concerned with the fractional-order predator-prey model and the fractional-order rabies model. Existence and uniqueness of solutions are proved. The stability of equilibrium points is studied. Numerical solutions of these models are given. An example is given where the equilibrium point is a centre for the integer-order system but locally asymptotically stable for its fractional-order counterpart.
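
    Numerical solutions of such fractional-order systems are commonly obtained with an explicit Grünwald-Letnikov discretization. The following sketch is illustrative (the scheme variant, equations and parameter values are assumptions, not taken from the paper); setting alpha = 1 recovers the ordinary explicit Euler method:

```python
def gl_solve(f, g, x0, y0, alpha, h, n):
    """Explicit Grünwald-Letnikov scheme for the fractional system
    D^alpha x = f(x, y), D^alpha y = g(x, y).  The memory weights satisfy
    c_0 = 1, c_j = c_{j-1} * (1 - (1 + alpha) / j); for alpha = 1 all
    c_j with j >= 2 vanish and the scheme reduces to explicit Euler."""
    c = [1.0]
    for j in range(1, n + 1):
        c.append(c[-1] * (1.0 - (1.0 + alpha) / j))
    xs, ys = [x0], [y0]
    for k in range(1, n + 1):
        xp, yp = xs[-1], ys[-1]
        mem_x = sum(c[j] * xs[k - j] for j in range(1, k + 1))   # fractional memory
        mem_y = sum(c[j] * ys[k - j] for j in range(1, k + 1))
        xs.append(h ** alpha * f(xp, yp) - mem_x)
        ys.append(h ** alpha * g(xp, yp) - mem_y)
    return xs, ys

# Sanity check: alpha = 1 on x' = -x with h = 0.1 gives x_k = 0.9**k (Euler).
xs, _ = gl_solve(lambda x, y: -x, lambda x, y: 0.0, 1.0, 0.0, 1.0, 0.1, 20)
```

    The long memory sum is what distinguishes the fractional dynamics from the integer-order case and is responsible for the damping that can turn a centre into an asymptotically stable point.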

  4. A removal model for estimating detection probabilities from point-count surveys

    USGS Publications Warehouse

    Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.

    2000-01-01

    We adapted a removal model to estimate detection probability during point count surveys. The model assumes that one factor influencing detection during point counts is the singing frequency of birds. This may be true for surveys recording forest songbirds, where most detections are by sound. The model requires counts to be divided into several time intervals. We used time intervals of 2, 5, and 10 min to develop a maximum-likelihood estimator for the detectability of birds during such surveys. We applied this technique to data from bird surveys conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. The overall detection probability for all birds was 75%. We found differences in detection probability among species. Species that sing frequently, such as Winter Wren and Acadian Flycatcher, had high detection probabilities (about 90%), and species that call infrequently, such as Pileated Woodpecker, had low detection probability (36%). We also found that detection probabilities varied with the time of day for some species (e.g. thrushes) and between observers for other species. This method of estimating detectability during point count surveys offers a promising new approach to using count data to address questions of bird abundance, density, and population trends.
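
    A removal-type estimator of this general shape can be sketched as follows, assuming (as an illustration, not the paper's exact formulation) a constant per-minute miss probability q and first-detection counts in the 0-2, 2-5 and 5-10 minute intervals:

```python
import math

def removal_mle(counts, lengths, grid=2000):
    """Grid-search maximum likelihood for a simple removal model:
    q is the per-minute probability that a bird goes undetected,
    counts[j] is the number of birds first detected in interval j,
    lengths[j] is that interval's length in minutes.  Returns the MLE
    of q and the implied overall detection probability for the count."""
    total = sum(lengths)
    def loglik(q):
        ll, elapsed = 0.0, 0.0
        p_detected = 1.0 - q ** total            # detected at least once
        for c, t in zip(counts, lengths):
            # conditional probability of first detection in this interval
            cell = q ** elapsed * (1.0 - q ** t) / p_detected
            ll += c * math.log(cell)
            elapsed += t
        return ll
    q_hat = max((i / grid for i in range(1, grid)), key=loglik)
    return q_hat, 1.0 - q_hat ** total

# Counts chosen close to the expected proportions for q = 0.8.
q_hat, p_det = removal_mle([4033, 3499, 2468], [2, 3, 5])
```

    In practice the likelihood would be maximized per species or per covariate group, which is where the model selection step in the abstract comes in.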

  5. Application of the nudged elastic band method to the point-to-point radio wave ray tracing in IRI modeled ionosphere

    NASA Astrophysics Data System (ADS)

    Nosikov, I. A.; Klimenko, M. V.; Bessarab, P. F.; Zhbankov, G. A.

    2017-07-01

    Point-to-point ray tracing is an important problem in many fields of science. While direct variational methods where some trajectory is transformed to an optimal one are routinely used in calculations of pathways of seismic waves, chemical reactions, diffusion processes, etc., this approach is not widely known in ionospheric point-to-point ray tracing. We apply the Nudged Elastic Band (NEB) method to a radio wave propagation problem. In the NEB method, a chain of points which gives a discrete representation of the radio wave ray is adjusted iteratively to an optimal configuration satisfying the Fermat's principle, while the endpoints of the trajectory are kept fixed according to the boundary conditions. Transverse displacements define the radio ray trajectory, while springs between the points control their distribution along the ray. The method is applied to a study of point-to-point ionospheric ray tracing, where the propagation medium is obtained with the International Reference Ionosphere model taking into account traveling ionospheric disturbances. A 2-dimensional representation of the optical path functional is developed and used to gain insight into the fundamental difference between high and low rays. We conclude that high and low rays are minima and saddle points of the optical path functional, respectively.
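
    A minimal NEB relaxation can be sketched on a toy two-dimensional landscape; the potential and all parameters below are illustrative assumptions (the paper works with the optical path functional in an IRI-modeled ionosphere), but the structure (fixed endpoints, perpendicular true force, spring term along the tangent) is the one described above:

```python
import math

def neb_relax(grad, path, k=1.0, step=0.02, iters=2000):
    """Minimal nudged elastic band: the endpoints stay fixed while each
    inner image feels the true force perpendicular to the local tangent
    plus a spring term along the tangent that evens out the spacing."""
    for _ in range(iters):
        new = [p[:] for p in path]
        for i in range(1, len(path) - 1):
            tx = path[i + 1][0] - path[i - 1][0]
            ty = path[i + 1][1] - path[i - 1][1]
            tn = math.hypot(tx, ty)
            tx, ty = tx / tn, ty / tn                       # unit tangent
            gx, gy = grad(path[i])
            gt = gx * tx + gy * ty
            fx, fy = -(gx - gt * tx), -(gy - gt * ty)       # perpendicular true force
            spring = k * (math.dist(path[i + 1], path[i]) -
                          math.dist(path[i], path[i - 1]))  # keeps images spread out
            new[i][0] += step * (fx + spring * tx)
            new[i][1] += step * (fy + spring * ty)
        path = new
    return path

# Toy landscape V = (x^2 - 1)^2 + 2 y^2: minima at (-1, 0) and (1, 0),
# saddle at (0, 0) with V = 1.  Start from a band displaced in y.
grad_v = lambda p: (4 * p[0] * (p[0] ** 2 - 1), 4 * p[1])
init = [[-1 + 0.2 * i, 0.3 if 0 < i < 10 else 0.0] for i in range(11)]
band = neb_relax(grad_v, init)
```

    After relaxation the band settles onto the minimum-energy path and the middle image sits near the saddle, mirroring the abstract's observation that low and high rays correspond to minima and saddle points of the optical path functional.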

  6. Degradation mechanisms of bioresorbable polyesters. Part 2. Effects of initial molecular weight and residual monomer.

    PubMed

    Gleadall, Andrew; Pan, Jingzhe; Kruft, Marc-Anton; Kellomäki, Minna

    2014-05-01

    This paper presents an understanding of how initial molecular weight and initial monomer fraction affect the degradation of bioresorbable polymers in terms of the underlying hydrolysis mechanisms. A mathematical model was used to analyse the effects of initial molecular weight for various hydrolysis mechanisms including noncatalytic random scission, autocatalytic random scission, noncatalytic end scission or autocatalytic end scission. Different behaviours were identified to relate initial molecular weight to the molecular weight half-life and to the time until the onset of mass loss. The behaviours were validated by fitting the model to experimental data for molecular weight reduction and mass loss of samples with different initial molecular weights. Several publications that consider initial molecular weight were reviewed. The effect of residual monomer on degradation was also analysed, and shown to accelerate the reduction of molecular weight and mass loss. An inverse square root law relationship was found between molecular weight half-life and initial monomer fraction for autocatalytic hydrolysis. The relationship was tested by fitting the model to experimental data with various residual monomer contents. Copyright © 2014 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  7. Point-based and model-based geolocation analysis of airborne laser scanning data

    NASA Astrophysics Data System (ADS)

    Sefercik, Umut Gunes; Buyuksalih, Gurcan; Jacobsen, Karsten; Alkan, Mehmet

    2017-01-01

    Airborne laser scanning (ALS) is one of the most effective remote sensing technologies providing precise three-dimensional (3-D) dense point clouds. A large-size ALS digital surface model (DSM) covering the whole Istanbul province was analyzed by point-based and model-based comprehensive statistical approaches. Point-based analysis was performed using checkpoints on flat areas. Model-based approaches were implemented in two steps: strip-to-strip comparison of overlapping ALS DSMs individually in three subareas, and comparison of the merged ALS DSMs with terrestrial laser scanning (TLS) DSMs in four other subareas. In the model-based approach, the standard deviation of height and the normalized median absolute deviation were used as accuracy indicators, combined with their dependency on terrain inclination. The results demonstrate that terrain roughness has a strong impact on the vertical accuracy of ALS DSMs. The relative horizontal shifts, determined and partially improved by merging the overlapping strips and by comparison of the ALS and TLS data, were found not to be negligible. The analysis of the ALS DSM in relation to the TLS DSM allowed us to determine the characteristics of the DSM in detail.

  8. 3D Point Cloud Model Colorization by Dense Registration of Digital Images

    NASA Astrophysics Data System (ADS)

    Crombez, N.; Caron, G.; Mouaddib, E.

    2015-02-01

    Architectural heritage is a historic and artistic property which has to be protected, preserved, restored and shown to the public. Modern tools like 3D laser scanners are more and more used in heritage documentation. Most of the time, the 3D laser scanner is complemented by a digital camera which is used to enrich the accurate geometric information with the colors of the scanned objects. However, the photometric quality of the acquired point clouds is generally rather low, for the reasons presented below. We propose an accurate method for registering digital images acquired from any viewpoint onto point clouds, which is a crucial step for good colorization by color projection. We express this image-to-geometry registration as a pose estimation problem. The camera pose is computed using the entire image intensities under a photometric visual and virtual servoing (VVS) framework. The camera extrinsic and intrinsic parameters are automatically estimated. Because we estimate the intrinsic parameters, we do not need any information about the camera that took the digital image. Finally, when the point cloud model and the digital image are correctly registered, we project the 3D model into the digital image frame and assign new colors to the visible points. The performance of the approach is demonstrated in simulation and in real experiments on indoor and outdoor datasets of the cathedral of Amiens, which highlight the success of our method, leading to point clouds with better photometric quality and resolution.

  9. An Electrostatic Charge Partitioning Model for the Dissociation of Protein Complexes in the Gas Phase

    NASA Astrophysics Data System (ADS)

    Sciuto, Stephen V.; Liu, Jiangjiang; Konermann, Lars

    2011-10-01

    Electrosprayed multi-protein complexes can be dissociated by collisional activation in the gas phase. Typically, these processes follow a mechanism whereby a single subunit gets ejected with a disproportionately high amount of charge relative to its mass. This asymmetric behavior suggests that the departing subunit undergoes some degree of unfolding prior to being separated from the residual complex. These structural changes occur concomitantly with charge (proton) transfer towards the subunit that is being unraveled. Charge accumulation takes place up to the point where the subunit loses physical contact with the residual complex. This work develops a simple electrostatic model for studying the relationship between conformational changes and charge enrichment during collisional activation. Folded subunits are described as spheres that carry continuum surface charge. The unfolded chain is envisioned as a random-coil bead string. Simulations are guided by the principle that the system will adopt the charge configuration with the lowest potential energy for any backbone conformation. A finite-difference gradient algorithm is used to determine the charge on each subunit throughout the dissociation process. Both dimeric and tetrameric protein complexes are investigated. The model reproduces the occurrence of asymmetric charge partitioning for dissociation events that are preceded by subunit unfolding. Quantitative comparisons of experimental MS/MS data with model predictions yield estimates of the structural changes that occur during collisional activation. Our findings suggest that subunit separation can occur over a wide range of scission point structures that correspond to different degrees of unfolding.
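
    The energy-minimizing charge split can be illustrated with a drastically simplified two-sphere version of such a model, using a finite-difference gradient step in the spirit of the paper's algorithm (the radii and total charge are arbitrary; Gaussian units, and the sphere-sphere interaction term is neglected for well-separated spheres):

```python
def partition_charge(Q, R1, R2, step=0.05, iters=500, dq=1e-6):
    """Finite-difference gradient descent on the electrostatic self-energy
    E = q1^2 / (2 R1) + (Q - q1)^2 / (2 R2) of two well-separated
    conducting spheres carrying total charge Q (Gaussian units)."""
    def energy(q1):
        q2 = Q - q1
        return q1 * q1 / (2.0 * R1) + q2 * q2 / (2.0 * R2)
    q1 = 0.5 * Q                                  # start from an even split
    for _ in range(iters):
        g = (energy(q1 + dq) - energy(q1 - dq)) / (2.0 * dq)
        q1 -= step * g
    return q1, Q - q1

# Minimum-energy split is q_i proportional to R_i: for R1 = 1, R2 = 3
# and Q = 12, the spheres end up with charges 3 and 9.
q1, q2 = partition_charge(12.0, 1.0, 3.0)
```

    The larger sphere (larger capacitance, loosely analogous to the partially unfolded subunit) takes the larger share of the charge, which is the qualitative origin of the asymmetric partitioning discussed above.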

  10. The environmental zero-point problem in evolutionary reaction norm modeling.

    PubMed

    Ergon, Rolf

    2018-04-01

    There is a potential problem in present quantitative genetics evolutionary modeling based on reaction norms. Such models are state-space models, where the multivariate breeder's equation in some form is used as the state equation that propagates the population state forward in time. These models use the implicit assumption of a constant reference environment, in many cases set to zero. This zero-point is often the environment a population is adapted to, that is, where the expected geometric mean fitness is maximized. Such environmental reference values follow from the state of the population system, and they are thus population properties. The environment the population is adapted to, is, in other words, an internal population property, independent of the external environment. It is only when the external environment coincides with the internal reference environment, or vice versa, that the population is adapted to the current environment. This is formally a result of state-space modeling theory, which is an important theoretical basis for evolutionary modeling. The potential zero-point problem is present in all types of reaction norm models, parametrized as well as function-valued, and the problem does not disappear when the reference environment is set to zero. As the environmental reference values are population characteristics, they ought to be modeled as such. Whether such characteristics are evolvable is an open question, but considering the complexity of evolutionary processes, such evolvability cannot be excluded without good arguments. As a straightforward solution, I propose to model the reference values as evolvable mean traits in their own right, in addition to other reaction norm traits. However, solutions based on an evolvable G matrix are also possible.

  11. Factors influencing superimposition error of 3D cephalometric landmarks by plane orientation method using 4 reference points: 4 point superimposition error regression model.

    PubMed

    Hwang, Jae Joon; Kim, Kee-Deog; Park, Hyok; Park, Chang Seo; Jeong, Ho-Gul

    2014-01-01

    Superimposition has been used as a method to evaluate the changes of orthodontic or orthopedic treatment in the dental field. With the introduction of cone beam CT (CBCT), evaluating three-dimensional changes after treatment became possible by superimposition. Four-point plane orientation is one of the simplest ways to achieve superimposition of three-dimensional images. To find the factors influencing the superimposition error of cephalometric landmarks by the 4-point plane orientation method, and to evaluate the reproducibility of cephalometric landmarks for analyzing superimposition error, 20 patients were analyzed who had normal skeletal and occlusal relationships and underwent CBCT for diagnosis of temporomandibular disorder. The nasion, sella turcica, basion and the midpoint between the left and right most posterior points of the lesser wing of the sphenoid bone were used to define a three-dimensional (3D) anatomical reference coordinate system. Another 15 reference cephalometric points were also determined three times in the same image. The reorientation error of each landmark could be explained substantially (23%) by a linear regression model consisting of 3 factors describing the position of each landmark with respect to the reference axes and the locating error. The 4-point plane orientation system may produce an amount of reorientation error that varies with the perpendicular distance between the landmark and the x-axis; the reorientation error also increases as the locating error and the shift of the reference axes viewed from each landmark increase. Therefore, in order to reduce the reorientation error, accuracy of all landmarks, including the reference points, is important. Construction of the regression model using reference points of greater precision is required for the clinical application of this model.

  12. Boiling points of halogenated ethanes: an explanatory model implicating weak intermolecular hydrogen-halogen bonding.

    PubMed

    Beauchamp, Guy

    2008-10-23

    This study explores, via structural clues, the influence of weak intermolecular hydrogen-halogen bonds on the boiling point of halogenated ethanes. The plot of boiling points of 86 halogenated ethanes versus molar refraction (linked to polarizability) reveals a series of straight lines, each corresponding to one of nine possible arrangements of hydrogen and halogen atoms on the two-carbon skeleton. A multiple linear regression model of the boiling points could be designed with molar refraction and subgroup structure as independent variables (R² = 0.995, standard error of boiling point 4.2 °C). The model is discussed in view of the fact that molar refraction can account for approximately 83.0% of the observed variation in boiling point, while 16.5% could be ascribed to weak C-X···H-C intermolecular interactions. The difference in the observed boiling points of molecules having similar molar refraction values but differing in hydrogen-halogen intermolecular bonds can reach as much as 90 °C.
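
    A regression of this general shape (boiling point against molar refraction plus a subgroup indicator) can be sketched with ordinary least squares via the normal equations. The coefficients and data below are invented for illustration and are not the paper's:

```python
def fit_linear(rows):
    """Ordinary least squares via the normal equations for the model
    Tb = b0 + b1 * molar_refraction + b2 * subgroup_indicator.
    rows: tuples (molar_refraction, subgroup_indicator, boiling_point)."""
    X = [[1.0, mr, g] for mr, g, _ in rows]
    y = [t for _, _, t in rows]
    n = 3
    A = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(n)]
         for i in range(n)]                      # A = X^T X
    b = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(n)]
    for i in range(n):                           # elimination with partial pivoting
        piv = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for col in range(i, n):
                A[r][col] -= f * A[i][col]
            b[r] -= f * b[i]
    beta = [0.0] * n
    for i in range(n - 1, -1, -1):               # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, n))) / A[i][i]
    return beta

# Hypothetical data built exactly from Tb = -100 + 15 * MR - 25 * group.
rows = [(mr, g, -100.0 + 15.0 * mr - 25.0 * g)
        for mr, g in [(10, 0), (15, 0), (20, 0), (25, 0),
                      (12, 1), (18, 1), (22, 1), (28, 1)]]
beta = fit_linear(rows)
```

    With one indicator per subgroup, this is the "series of straight lines" picture: a common slope in molar refraction and a per-subgroup offset.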

  13. Multiplicative point process as a model of trading activity

    NASA Astrophysics Data System (ADS)

    Gontis, V.; Kaulakys, B.

    2004-11-01

    Signals consisting of a sequence of pulses show that an inherent origin of 1/f noise is the Brownian fluctuation of the average interevent time between subsequent pulses. In this paper, we generalize the model of the interevent time to reproduce a variety of self-affine time series exhibiting a power spectral density S(f) scaling as a power of the frequency f. Furthermore, we analyze the relation between the power-law correlations and the origin of the power-law probability distribution of the signal intensity. We introduce a stochastic multiplicative model for the time intervals between point events and analyze the statistical properties of the signal analytically and numerically. Such a model system exhibits a power-law spectral density S(f) ∼ 1/f^β for various values of β, including β = 1/2, 1 and 3/2. Explicit expressions for the power spectra in the low-frequency limit and for the distribution density of the interevent time are obtained. The counting statistics of the events are analyzed analytically and numerically as well. The specific interest of our analysis relates to financial markets, where long-range correlations of price fluctuations largely depend on the number of transactions. We analyze the spectral density and counting statistics of the number of transactions. The model reproduces the spectral properties of real markets and explains the mechanism of the power-law distribution of trading activity. The study provides evidence that the statistical properties of financial markets are encoded in the statistics of the time interval between trades. A multiplicative point process serves as a consistent model generating these statistics.
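
    The multiplicative interevent-time iteration can be sketched as follows. This is a hedged sketch of the Gontis-Kaulakys form of the model; the parameter values, the choice mu = 1/2, and the reflecting bounds are illustrative assumptions:

```python
import random

def interevent_times(n, gamma=0.01, mu=0.5, sigma=0.05,
                     tau_min=0.01, tau_max=1.0, seed=7):
    """Stochastic multiplicative model for interevent times:
    tau_{k+1} = tau_k + gamma * tau_k**(2*mu - 1) + sigma * tau_k**mu * eps_k
    with Gaussian noise eps_k and reflecting bounds keeping tau in range."""
    rng = random.Random(seed)
    tau = [0.5 * (tau_min + tau_max)]
    for _ in range(n - 1):
        t = tau[-1]
        t = t + gamma * t ** (2 * mu - 1) + sigma * t ** mu * rng.gauss(0.0, 1.0)
        if t < tau_min:
            t = 2 * tau_min - t          # reflect at the lower bound
        if t > tau_max:
            t = 2 * tau_max - t          # reflect at the upper bound
        tau.append(t)
    return tau

taus = interevent_times(5000)
```

    The event times are the cumulative sums of these intervals; estimating the power spectrum of the resulting pulse train is how the 1/f^β scaling discussed above would be checked numerically.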

  14. Formation and distribution of fragments in the spontaneous fission of 240Pu

    NASA Astrophysics Data System (ADS)

    Sadhukhan, Jhilam; Zhang, Chunli; Nazarewicz, Witold; Schunck, Nicolas

    2017-12-01

    Background: Fission is a fundamental decay mode of heavy atomic nuclei. The prevalent theoretical approach is based on mean-field theory and its extensions where fission is modeled as a large amplitude motion of a nucleus in a multidimensional collective space. One of the important observables characterizing fission is the charge and mass distribution of fission fragments. Purpose: The goal of this Rapid Communication is to better understand the structure of fission fragment distributions by investigating the competition between the static structure of the collective manifold and the stochastic dynamics. In particular, we study the characteristics of the tails of yield distributions, which correspond to very asymmetric fission into a very heavy and a very light fragment. Methods: We use the stochastic Langevin framework to simulate the nuclear evolution after the system tunnels through the multidimensional potential barrier. For a representative sample of different initial configurations along the outer turning-point line, we define effective fission paths by computing a large number of Langevin trajectories. We extract the relative contribution of each such path to the fragment distribution. We then use nucleon localization functions along effective fission pathways to analyze the characteristics of prefragments at prescission configurations. Results: We find that non-Newtonian Langevin trajectories, strongly impacted by the random force, produce the tails of the fission fragment distribution of 240Pu. The prefragments deduced from nucleon localizations are formed early and change little as the nucleus evolves towards scission. On the other hand, the system contains many nucleons that are not localized in the prefragments even near the scission point. Such nucleons are distributed rapidly at scission to form the final fragments. 
Fission prefragments extracted from direct integration of the density and from the localization functions typically differ by more than

  15. Robust group-wise rigid registration of point sets using t-mixture model

    NASA Astrophysics Data System (ADS)

    Ravikumar, Nishant; Gooya, Ali; Frangi, Alejandro F.; Taylor, Zeike A.

    2016-03-01

    We present a probabilistic framework for robust, group-wise rigid alignment of point sets using a mixture of Student's t-distributions, designed especially for cases where the point sets are of varying lengths, are corrupted by an unknown degree of outliers, or contain missing data. Medical images (in particular magnetic resonance (MR) images), their segmentations and, consequently, the point sets generated from these are highly susceptible to corruption by outliers. This poses a problem for robust correspondence estimation and accurate alignment of shapes, necessary for training statistical shape models (SSMs). To address these issues, this study proposes a t-mixture model (TMM) to approximate the underlying joint probability density of a group of similar shapes and align them to a common reference frame. The heavy-tailed nature of t-distributions provides a more robust registration framework than state-of-the-art algorithms. A significant reduction in alignment errors is achieved in the presence of outliers using the proposed TMM-based group-wise rigid registration method, in comparison to its Gaussian mixture model (GMM) counterparts. The proposed TMM framework is compared with a group-wise variant of the well-known Coherent Point Drift (CPD) algorithm and two other group-wise methods using GMMs, on both synthetic and real data sets. Rigid alignment errors for groups of shapes are quantified using the Hausdorff distance (HD) and quadratic surface distance (QSD) metrics.

  16. An analysis of the least-squares problem for the DSN systematic pointing error model

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.

    1991-01-01

    A systematic pointing error model is used to calibrate antennas in the Deep Space Network. The least-squares problem is described and analyzed, along with the solution methods used to determine the model's parameters. Specifically studied are the rank degeneracy problems resulting from beam-pointing error measurement sets with inadequate sky coverage. A least-squares parameter subset selection method is described, and its applicability to the systematic error modeling process is demonstrated on a Voyager 2 measurement distribution.
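
    The rank-degeneracy issue can be illustrated with a toy pointing model. The two-term model and the measurement geometry below are hypothetical stand-ins, not the DSN model: with no elevation diversity, one column of the design matrix becomes constant and collinear with the offset term, and subset selection (dropping the dependent column) restores a solvable system.

```python
import numpy as np

# Hypothetical pointing model: error = a*sin(az) + b*cos(el) + c.
# All measurements at one elevation -> cos(el) column is constant,
# collinear with the offset column: the rank degeneracy in the report.
az = np.radians([10.0, 40.0, 80.0, 120.0, 200.0])
el = np.radians([45.0, 45.0, 45.0, 45.0, 45.0])   # inadequate sky coverage
A = np.column_stack([np.sin(az), np.cos(el), np.ones_like(az)])
rank = np.linalg.matrix_rank(A)
print(rank, A.shape[1])  # rank 2 < 3 columns -> degenerate

# Subset selection: drop the dependent column and re-solve.
y = 0.5 * np.sin(az) + 0.1                         # synthetic measurements
coef, *_ = np.linalg.lstsq(A[:, [0, 2]], y, rcond=None)
print(coef)
```

    With adequate sky coverage (varying elevation) the full three-column system would be full rank and all parameters identifiable.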

  17. Two-point functions in a holographic Kondo model

    NASA Astrophysics Data System (ADS)

    Erdmenger, Johanna; Hoyos, Carlos; O'Bannon, Andy; Papadimitriou, Ioannis; Probst, Jonas; Wu, Jackson M. S.

    2017-03-01

    We develop the formalism of holographic renormalization to compute two-point functions in a holographic Kondo model. The model describes a (0 + 1)-dimensional impurity spin of a gauged SU(N) interacting with a (1 + 1)-dimensional, large-N, strongly coupled Conformal Field Theory (CFT). We describe the impurity using Abrikosov pseudo-fermions, and define an SU(N)-invariant scalar operator O built from a pseudo-fermion and a CFT fermion. At large N the Kondo interaction is of the form O†O, which is marginally relevant and generates a Renormalization Group (RG) flow at the impurity. A second-order mean-field phase transition occurs in which O condenses below a critical temperature, leading to the Kondo effect, including screening of the impurity. Via holography, the phase transition is dual to holographic superconductivity in (1 + 1)-dimensional Anti-de Sitter space. At all temperatures, spectral functions of O exhibit a Fano resonance, characteristic of a continuum of states interacting with an isolated resonance. In contrast to Fano resonances observed for example in quantum dots, our continuum and resonance arise from a (0 + 1)-dimensional UV fixed point and RG flow, respectively. In the low-temperature phase, the resonance comes from a pole in the Green's function of the form −i⟨O⟩², which is characteristic of a Kondo resonance.

  18. Multiple-point principle with a scalar singlet extension of the standard model

    DOE PAGES

    Haba, Naoyuki; Ishida, Hiroyuki; Okada, Nobuchika; ...

    2017-01-21

    Here, we suggest a scalar singlet extension of the standard model in which the multiple-point principle (MPP) condition of a vanishing Higgs potential at the Planck scale is realized. Although there have been many attempts to realize the MPP at the Planck scale, doing so while maintaining naturalness is quite difficult. This model can easily achieve the MPP at the Planck scale without large Higgs mass corrections. It is worth noting that the electroweak symmetry can be radiatively broken in our model. From the naturalness point of view, the singlet scalar mass should be of O(1 TeV) or less. We also consider a right-handed neutrino extension of the model for neutrino mass generation. The extension does not affect the MPP scenario, and it might preserve naturalness with a new particle mass scale beyond the TeV range, thanks to an accidental cancellation of Higgs mass corrections.

  19. The three-point function as a probe of models for large-scale structure

    NASA Astrophysics Data System (ADS)

    Frieman, Joshua A.; Gaztanaga, Enrique

    1994-04-01

    We analyze the consequences of models of structure formation for higher order (n-point) galaxy correlation functions in the mildly nonlinear regime. Several variations of the standard Omega = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, Rp is approximately 20/h Mpc, e.g., low matter-density (nonzero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. We show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale dependence leads to a dramatic decrease of the hierarchical amplitudes QJ at large scales, r is greater than or approximately Rp. Current observational constraints on the three-point amplitudes Q3 and S3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.

  20. Post-processing of global model output to forecast point rainfall

    NASA Astrophysics Data System (ADS)

    Hewson, Tim; Pillosu, Fatima

    2016-04-01

    ECMWF (the European Centre for Medium range Weather Forecasts) has recently embarked upon a new project to post-process gridbox rainfall forecasts from its ensemble prediction system, to provide probabilistic forecasts of point rainfall. The new post-processing strategy relies on understanding how different rainfall generation mechanisms lead to different degrees of sub-grid variability in rainfall totals. We use a number of simple global model parameters, such as the convective rainfall fraction, to anticipate the sub-grid variability, and then post-process each ensemble forecast into a pdf (probability density function) for a point-rainfall total. The final forecast will comprise the sum of the different pdfs from all ensemble members. The post-processing is essentially a re-calibration exercise, which needs only rainfall totals from standard global reporting stations (and forecasts) to train it. High density observations are not needed. This presentation will describe results from the initial 'proof of concept' study, which has been remarkably successful. Reference will also be made to other useful outcomes of the work, such as gaining insights into systematic model biases in different synoptic settings. The special case of orographic rainfall will also be discussed. Work ongoing this year will also be described. This involves further investigations of which model parameters can provide predictive skill, and will then move on to development of an operational system for predicting point rainfall across the globe. The main practical benefit of this system will be a greatly improved capacity to predict extreme point rainfall, and thereby provide early warnings, for the whole world, of flash flood potential for lead times that extend beyond day 5. This will be incorporated into the suite of products output by GLOFAS (the GLObal Flood Awareness System) which is hosted at ECMWF. As such this work offers a very cost-effective approach to satisfying user needs right

  1. A Multidimensional Ideal Point Item Response Theory Model for Binary Data

    ERIC Educational Resources Information Center

    Maydeu-Olivares, Albert; Hernandez, Adolfo; McDonald, Roderick P.

    2006-01-01

    We introduce a multidimensional item response theory (IRT) model for binary data based on a proximity response mechanism. Under the model, a respondent at the mode of the item response function (IRF) endorses the item with probability one. The mode of the IRF is the ideal point, or in the multidimensional case, an ideal hyperplane. The model…

  2. Material point method modeling in oil and gas reservoirs

    DOEpatents

    Vanderheyden, William Brian; Zhang, Duan

    2016-06-28

    A computer system and method of simulating the behavior of an oil and gas reservoir, including changes in the margins of frangible solids. A system of equations, including state equations such as momentum and conservation laws such as mass conservation and volume fraction continuity, is defined and discretized for at least two phases in a modeled volume, one of which corresponds to frangible material. A material point method technique numerically solves the system of discretized equations to derive the fluid flow at each of a plurality of mesh nodes in the modeled volume and the velocity at each of a plurality of particles representing the frangible material. A time-splitting technique improves the computational efficiency of the simulation while maintaining accuracy on the deformation scale. The method can be applied to derive accurate upscaled model equations for larger volume-scale simulations.

  3. Indoor Navigation from Point Clouds: 3d Modelling and Obstacle Detection

    NASA Astrophysics Data System (ADS)

    Díaz-Vilariño, L.; Boguslawski, P.; Khoshelham, K.; Lorenzo, H.; Mahdjoubi, L.

    2016-06-01

    In recent years, indoor modelling and navigation has become a research topic of interest because many stakeholders require navigation assistance in various application scenarios. Navigational assistance for blind or wheelchair-bound people, building crisis management such as fire protection, augmented reality for gaming, and tourism or the training of emergency assistance units are just some of the direct applications of indoor modelling and navigation. Navigational information is traditionally extracted from 2D drawings or layouts, while the real state of indoor spaces, including the position and geometry of openings such as windows and doors and the presence of obstacles, is commonly ignored. In this work, a real indoor path-planning methodology based on 3D point clouds is developed. The value and originality of the approach lie in considering point clouds not only for reconstructing semantically rich 3D indoor models, but also for detecting potential obstacles during route planning and using these to readapt the routes according to the real state of the indoor space depicted by the laser scanner.

  4. Saturation point model for the formation of metal nitrate in nitrogen tetroxide oxidizer

    NASA Technical Reports Server (NTRS)

    Torrance, Paul R.

    1991-01-01

    A model was developed for the formation of metal nitrate in nitrogen tetroxide (N2O4). The basis of this model is the saturation point of metal nitrate in N2O4. This basis is chosen mainly because of the White Sands Test Facility's metal nitrate in N2O4 experience. Means of reaching the saturation point are examined, and a relationship is made for the reaction/formation rate and diffusion rate of metal nitrate in N2O4.

  5. Defining the end-point of mastication: A conceptual model.

    PubMed

    Gray-Stuart, Eli M; Jones, Jim R; Bronlund, John E

    2017-10-01

    The great risks of swallowing are choking and aspiration of food into the lungs. Both are rare in normal functioning humans, which is remarkable given the diversity of foods and the estimated 10 million swallows performed in a lifetime. Nevertheless, it remains a major challenge to define the food properties that are necessary to ensure a safe swallow. Here, the mouth is viewed as a well-controlled processor where mechanical sensory assessment occurs throughout the occlusion-circulation cycle of mastication. Swallowing is a subsequent action. It is proposed here that, during mastication, temporal maps of interfacial property data are generated, which the central nervous system compares against a series of criteria in order to be sure that the bolus is safe to swallow. To determine these criteria, an engineering hazard analysis tool, alongside an understanding of fluid and particle mechanics, is used to deduce the mechanisms by which food may deposit or become stranded during swallowing. These mechanisms define the food properties that must be avoided. By inverting the thinking, from hazards to ensuring safety, six criteria arise which are necessary for a safe-to-swallow bolus. A new conceptual model is proposed to define when food is safe to swallow during mastication. This significantly advances earlier mouth models. The conceptual model proposed in this work provides a framework of decision-making to define when food is safe to swallow. This will be of interest to designers of dietary foods, foods for dysphagia sufferers and will aid the further development of mastication robots for preparation of artificial boluses for digestion research. It enables food designers to influence the swallow-point properties of their products. For example, a product may be designed to satisfy five of the criteria for a safe-to-swallow bolus, which means the sixth criterion and its attendant food properties define the swallow-point. Alongside other organoleptic factors, these

  6. Asymptotic behaviour of two-point functions in multi-species models

    NASA Astrophysics Data System (ADS)

    Kozlowski, Karol K.; Ragoucy, Eric

    2016-05-01

    We extract the long-distance asymptotic behaviour of two-point correlation functions in massless quantum integrable models containing multi-species excitations. For such a purpose, we extend to these models the method of a large-distance regime re-summation of the form factor expansion of correlation functions. The key feature of our analysis is a technical hypothesis on the large-volume behaviour of the form factors of local operators in such models. We check the validity of this hypothesis on the example of the SU (3)-invariant XXX magnet by means of the determinant representations for the form factors of local operators in this model. Our approach confirms the structure of the critical exponents obtained previously for numerous models solvable by the nested Bethe Ansatz.

  7. A Point-process Response Model for Spike Trains from Single Neurons in Neural Circuits under Optogenetic Stimulation

    PubMed Central

    Luo, X.; Gee, S.; Sohal, V.; Small, D.

    2015-01-01

    Optogenetics is a new tool to study neuronal circuits that have been genetically modified to allow stimulation by flashes of light. We study recordings from single neurons within neural circuits under optogenetic stimulation. The data from these experiments present a statistical challenge of modeling a high frequency point process (neuronal spikes) while the input is another high frequency point process (light flashes). We further develop a generalized linear model approach to model the relationships between two point processes, employing additive point-process response functions. The resulting model, Point-process Responses for Optogenetics (PRO), provides explicit nonlinear transformations to link the input point process with the output one. Such response functions may provide important and interpretable scientific insights into the properties of the biophysical process that governs neural spiking in response to optogenetic stimulation. We validate and compare the PRO model using a real dataset and simulations, and our model yields a superior area-under-the-curve value as high as 93% for predicting every future spike. For our experiment on the recurrent layer V circuit in the prefrontal cortex, the PRO model provides evidence that neurons integrate their inputs in a sophisticated manner. Another use of the model is that it enables understanding how neural circuits are altered under various disease conditions and/or experimental conditions by comparing the PRO parameters. PMID:26411923

  8. The three-point function as a probe of models for large-scale structure

    NASA Technical Reports Server (NTRS)

    Frieman, Joshua A.; Gaztanaga, Enrique

    1993-01-01

    The consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime are analyzed. Several variations of the standard Omega = 1 cold dark matter model with scale-invariant primordial perturbations were recently introduced to obtain more power on large scales, R(sub p) is approximately 20 h(sup -1) Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. It is shown that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes Q(sub J) at large scales, r is approximately greater than R(sub p). Current observational constraints on the three-point amplitudes Q(sub 3) and S(sub 3) can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.

  9. A point cloud modeling method based on geometric constraints mixing the robust least squares method

    NASA Astrophysics Data System (ADS)

    Yue, JIanping; Pan, Yi; Yue, Shun; Liu, Dapeng; Liu, Bin; Huang, Nan

    2016-10-01

    The advent of 3D laser scanning technology has provided a new method for the acquisition of spatial 3D information; it is automatic and highly precise, and has been widely used in surveying and mapping engineering. Processing 3D laser scanning data mainly includes field laser data acquisition, in-office splicing of the laser data, and subsequent 3D modeling and system integration of the data. Researchers at home and abroad have done a great deal of work on point cloud modeling. Surface reconstruction techniques mainly include point-shape methods, the triangle model, the triangular Bezier surface model, the rectangular surface model, and so on; neural networks and Alpha shapes have also been used in curved surface reconstruction. These methods, however, often focus on single-surface fitting, or on automatic or manual block fitting, and ignore the model's integrity. This leads to a serious problem in the stitched model: surfaces fitted separately often fail to satisfy well-known geometric constraints, such as parallelism, perpendicularity, a fixed angle, or a fixed distance. Research on modeling theory for dimension and position constraints is, however, not widely applied. One traditional modeling method that adds geometric constraints combines the penalty function method with the Levenberg-Marquardt algorithm (L-M algorithm); its stability is good, but in the course of this research it was found to be strongly influenced by the initial values. In this paper, we propose an improved point cloud modeling method that takes geometric constraints into account. We first apply robust least squares to improve the accuracy of the initial values, then use the penalty function method to transform the constrained optimization problem into an unconstrained one, and finally solve the problem with the L-M algorithm. The experimental results
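
    The penalty-function idea can be sketched on a linear toy problem: fitting two lines that should be parallel by appending a heavily weighted "equal slopes" row to the least-squares system. The paper's surface fitting is nonlinear and solved with L-M; this linear sketch only illustrates how a geometric constraint becomes a penalty term.

```python
import numpy as np

# Two noisy lines with the same true slope (2.0) but different intercepts.
rng = np.random.default_rng(0)
x1 = np.linspace(0.0, 1.0, 20)
y1 = 2.0 * x1 + 1.0 + 0.01 * rng.standard_normal(20)
x2 = np.linspace(0.0, 1.0, 20)
y2 = 2.0 * x2 - 0.5 + 0.01 * rng.standard_normal(20)

def fit(penalty):
    # Unknowns: [a1, b1, a2, b2]; last row enforces a1 == a2 softly.
    A = np.zeros((41, 4)); y = np.zeros(41)
    A[:20, 0], A[:20, 1], y[:20] = x1, 1.0, y1
    A[20:40, 2], A[20:40, 3], y[20:40] = x2, 1.0, y2
    A[40, 0], A[40, 2] = penalty, -penalty     # penalty row: parallelism
    sol, *_ = np.linalg.lstsq(A, y, rcond=None)
    return sol

a1, b1, a2, b2 = fit(penalty=1e6)
print(a1, a2)  # nearly identical slopes thanks to the penalty term
```

    In the nonlinear setting the same trick appends penalty residuals to the cost function handed to L-M, which is exactly the transformation from a constrained to an unconstrained problem described above.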

  10. Poisson point process modeling for polyphonic music transcription.

    PubMed

    Peeling, Paul; Li, Chung-fai; Godsill, Simon

    2007-04-01

    Peaks detected in the frequency domain spectrum of a musical chord are modeled as realizations of a nonhomogeneous Poisson point process. When several notes are superimposed to make a chord, the processes for individual notes combine to give another Poisson process, whose likelihood is easily computable. This avoids a data association step linking individual harmonics explicitly with detected peaks in the spectrum. The likelihood function is ideal for Bayesian inference about the unknown note frequencies in a chord. Here, maximum likelihood estimation of fundamental frequencies shows very promising performance on real polyphonic piano music recordings.
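
    The key property used above is that superposed Poisson processes add their intensities, so a chord's likelihood needs no peak-to-harmonic data association. A minimal sketch with hypothetical Gaussian-bump harmonic intensities (the paper's spectral model is more detailed):

```python
import math

def note_intensity(f, f0, n_harmonics=5, width=2.0, strength=10.0):
    """Hypothetical intensity with bumps at the harmonics of f0 (Hz)."""
    return sum(strength * math.exp(-((f - k * f0) / width) ** 2)
               for k in range(1, n_harmonics + 1))

def chord_intensity(f, fundamentals):
    # Superposition: the intensities of the per-note processes simply add.
    return sum(note_intensity(f, f0) for f0 in fundamentals)

def log_likelihood(peaks, fundamentals, f_grid):
    """Nonhomogeneous Poisson log-likelihood of observed peak frequencies."""
    df = f_grid[1] - f_grid[0]
    integral = sum(chord_intensity(f, fundamentals) for f in f_grid) * df
    return sum(math.log(chord_intensity(p, fundamentals) + 1e-12)
               for p in peaks) - integral

f_grid = [float(i) for i in range(2000)]
peaks = [220.0, 440.0, 261.6, 523.2]        # harmonics of A3 and C4
good = log_likelihood(peaks, [220.0, 261.6], f_grid)
bad = log_likelihood(peaks, [300.0, 350.0], f_grid)
print(good > bad)  # the correct chord explains the observed peaks better
```

    Maximum likelihood estimation of the fundamentals then amounts to searching over candidate sets of f0 values for the highest such score.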

  11. Civil tiltrotor transport point design: Model 940A

    NASA Technical Reports Server (NTRS)

    Rogers, Charles; Reisdorfer, Dale

    1993-01-01

    The objective of this effort is to produce a vehicle layout for the civil tiltrotor wing and center fuselage in sufficient detail to obtain aerodynamic and inertia loads for determining member sizing. This report addresses the parametric configuration and loads definition for a 40 passenger civil tilt rotor transport. A preliminary (point) design is developed for the tiltrotor wing box and center fuselage. This summary report provides all design details used in the pre-design; provides adequate detail to allow a preliminary design finite element model to be developed; and contains guidelines for dynamic constraints.

  12. TARDEC FIXED HEEL POINT (FHP): DRIVER CAD ACCOMMODATION MODEL VERIFICATION REPORT

    DTIC Science & Technology

    2017-11-09

    Easy-to-use Computer-Aided Design (CAD) tools, known as accommodation models, are needed by ground vehicle designers when developing the interior workspace for the occupant. The TARDEC Fixed Heel Point (FHP): Driver CAD Accommodation Model described in this report is intended to provide the composite boundaries representing the body of the defined target design population, including posture prediction

  13. Normalized Implicit Radial Models for Scattered Point Cloud Data without Normal Vectors

    DTIC Science & Technology

    2009-03-23


  14. Optimal time points sampling in pathway modelling.

    PubMed

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling and the related parameter estimation. However, few studies consider the issue of optimal sampling-time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, approximating model parameters from only a few available sampling points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the selection of time points in an optimal way, so as to minimize the variance of the parameter estimates. We first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the morass of selecting good initial values or from getting stuck in local optima, as conventional numerical optimization techniques usually do. The simulation results indicate the soundness of the new method.
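
    Choosing sampling times to minimize estimator variance can be sketched on a one-parameter decay model, where D-optimality reduces to maximizing the Fisher information. Exhaustive search over a small candidate grid stands in here for the paper's quantum-inspired evolutionary algorithm; the toy model y(t) = exp(-k t) is an assumption for illustration.

```python
import math
from itertools import combinations

def fisher_info(times, k=1.0):
    """Fisher information for k in y(t) = exp(-k*t) with unit-variance
    noise: the sum of squared sensitivities dy/dk = -t*exp(-k*t)."""
    return sum((t * math.exp(-k * t)) ** 2 for t in times)

# Pick the 3 sampling times (out of 6 candidates) that minimize Var(k-hat),
# i.e. maximize the Fisher information.
candidates = [0.1, 0.5, 1.0, 2.0, 4.0, 8.0]
best = max(combinations(candidates, 3), key=fisher_info)
print(best)  # -> (0.5, 1.0, 2.0): times near t = 1/k carry the most information
```

    With many parameters the information becomes a matrix and the search space grows combinatorially, which is where a global heuristic such as the paper's evolutionary algorithm earns its keep.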

  15. Ocean Turbulence I: One-Point Closure Model Momentum and Heat Vertical Diffusivities

    NASA Technical Reports Server (NTRS)

    Canuto, V. M.; Howard, A.; Cheng, Y.; Dubovikov, M. S.

    1999-01-01

    Since the early forties, one-point turbulence closure models have been the canonical tools used to describe turbulent flows in many fields. In geophysics, Mellor and Yamada applied such models using the 1980 state of the art. Since then, no improvements have been introduced to alleviate two major difficulties: 1) closure of the pressure correlations, which affects the correct determination of the critical Richardson number Ri(sub cr) above which turbulent mixing is no longer possible, and 2) the need to express the non-local third-order moments (TOM) in terms of lower order moments rather than via the down-gradient approximation as done thus far, since the latter seriously underestimates the TOMs. Since 1) and 2) are still being dealt with via adjustable parameters, which weaken the credibility of the models, alternative models, not based on turbulence modeling, have been suggested. The aim of this paper is to show that new information, partly derived from the newest 2-point closure model discussed, can be used to solve these shortcomings. The new one-point closure model, which in its simplest form is algebraic and thus simple to implement, is first shown to reproduce a variety of data. Then, it is used in an Ocean General Circulation Model (O-GCM) where it reproduces well a large variety of ocean data. While phenomenological models are specifically tuned to ocean turbulence, the present model is not. It is first tested against laboratory data on stably stratified flows and then used in an O-GCM. It is more general, more predictive and more resilient, e.g., it can incorporate phenomena like wave-breaking at the surface, salinity diffusivity, non-locality, etc. One important feature that naturally comes out of the new model is that the predicted Richardson critical value Ri(sub cr) is Ri (sub cr approx. = 1) in agreement with both Large Eddy Simulations (LES) and empirical evidence while all previous models predicted Ri (sub cr approx. = 0.2) which led to a considerable

  16. Using Model Point Spread Functions to Identify Binary Brown Dwarf Systems

    NASA Astrophysics Data System (ADS)

    Matt, Kyle; Stephens, Denise C.; Lunsford, Leanne T.

    2017-01-01

    A brown dwarf (BD) is a celestial object that is not massive enough to undergo hydrogen fusion in its core. BDs can form in pairs called binaries. Owing to the great distances between Earth and these BDs, they act as point sources of light, and the angular separation between binary BDs can be small enough that, by the Rayleigh criterion, they appear as a single, unresolved object in images. It is not currently possible to resolve some of these objects into separate light sources. Stephens and Noll (2006) developed a method that uses model point spread functions (PSFs) to identify binary trans-Neptunian objects; we use this method to identify binary BD systems in the Hubble Space Telescope archive. The method works by comparing model PSFs of single and binary sources to the observed PSFs. We also compare model spectral data for single and binary fits to determine the best parameter values for each component of the system. We describe these methods, their challenges, and other possible uses in this poster.

  17. LIDAR Point Cloud Data Extraction and Establishment of 3D Modeling of Buildings

    NASA Astrophysics Data System (ADS)

    Zhang, Yujuan; Li, Xiuhai; Wang, Qiang; Liu, Jiang; Liang, Xin; Li, Dan; Ni, Chundi; Liu, Yan

    2018-01-01

    This paper uses Shepard's method to process the original LIDAR point cloud data and generate a regular-grid DSM, filters the ground and non-ground point clouds through a double least squares method, and obtains a regularized DSM. A region growing method is used to segment the regularized DSM and remove non-building point clouds, yielding the building point cloud information. The Canny operator is used to extract the building edges after the required image segmentation, and Hough transform line detection is applied to extract regularized building edges based on smoothing and uniformity. Finally, E3De3 software is used to establish the 3D model of the buildings.
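
    In its simplest global form, Shepard's method is inverse-distance weighting of scattered samples onto query points. The sketch below illustrates that gridding step; a production DSM pipeline would likely use a local, windowed variant, and the sample coordinates are made up for illustration.

```python
import math

def shepard_idw(sample_pts, query_pts, power=2.0):
    """Shepard's inverse-distance weighting of scattered (x, y, z)
    samples onto 2D query points. Coincident points are returned
    exactly; otherwise weights are 1/d^power."""
    out = []
    for gx, gy in query_pts:
        num = den = 0.0
        exact = None
        for x, y, z in sample_pts:
            d2 = (gx - x) ** 2 + (gy - y) ** 2
            if d2 == 0.0:
                exact = z              # query hits a sample exactly
                break
            w = 1.0 / d2 ** (power / 2.0)
            num += w * z
            den += w
        out.append(exact if exact is not None else num / den)
    return out

samples = [(0.0, 0.0, 10.0), (1.0, 0.0, 20.0), (0.0, 1.0, 30.0)]
res = shepard_idw(samples, [(0.0, 0.0), (0.5, 0.0)])
print(res)  # exact hit returns 10.0; the midpoint is pulled toward 10 and 20
```

    Evaluating this over a regular grid of query points yields the gridded DSM that the subsequent filtering and segmentation steps operate on.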

  18. Multiple-Point statistics for stochastic modeling of aquifers, where do we stand?

    NASA Astrophysics Data System (ADS)

    Renard, P.; Julien, S.

    2017-12-01

    In the last 20 years, multiple-point statistics have been a focus of much research, with both successes and disappointments. The aim of this geostatistical approach is to integrate geological information into stochastic models of aquifer heterogeneity, to better represent the connectivity of high- or low-permeability structures in the underground. Many different algorithms (ENESIM, SNESIM, SIMPAT, CCSIM, QUILTING, IMPALA, DEESSE, FILTERSIM, HYPPS, etc.) have been and are still being proposed. They are all based on the concept of a training data set from which spatial statistics are derived and then used to generate conditional realizations. Some of these algorithms evaluate the statistics of the spatial patterns for every pixel; other techniques consider the statistics at the scale of a patch or a tile. While the method has clearly succeeded in enabling modelers to generate realistic models, several issues are still the topic of debate from both a practical and a theoretical point of view, and some, such as training data set availability, often hinder the application of the method in practical situations. The aim of this talk is to present a review of the status of these approaches from both a theoretical and a practical point of view, using several examples at different scales (from pore networks to regional aquifers).

  19. Models for mean bonding length, melting point and lattice thermal expansion of nanoparticle materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Omar, M.S., E-mail: dr_m_s_omar@yahoo.com

    2012-11-15

    Graphical abstract: Three models are derived to explain the nanoparticle-size dependence of mean bonding length, melting temperature and lattice thermal expansion, applied to Sn, Si and Au. Figures shown for Sn nanoparticles indicate the models are highly applicable for nanoparticle radii larger than 3 nm. Highlights: ► A model for a size-dependent mean bonding length is derived. ► The size-dependent melting point of nanoparticles is modified. ► The bulk model for lattice thermal expansion is successfully applied to nanoparticles. -- Abstract: A model, based on the ratio of the number of surface atoms to that of the internal atoms, is derived to calculate the size dependence of the lattice volume of nanoscaled materials. The model is applied to Si, Sn and Au nanoparticles. For Si, the lattice volume increases from 20 Å³ for the bulk to 57 Å³ for 2 nm nanocrystals. A model for calculating the melting point of nanoscaled materials is modified by considering the effect of lattice volume. A good approach to calculating the size-dependent melting point extends from the bulk state down to nanoparticles of about 2 nm diameter. Both the lattice volume and melting point values obtained for nanosized materials are used to calculate the lattice thermal expansion, using a formula applicable to tetrahedral semiconductors. Results for Si change from 3.7 × 10⁻⁶ K⁻¹ for a bulk crystal down to a minimum value of 0.1 × 10⁻⁶ K⁻¹ for a 6 nm diameter nanoparticle.
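
    The structure of such size-dependence models can be sketched with two illustrative formulas: a surface-to-interior atom ratio for a spherical particle, and a Gibbs-Thomson-like melting-point depression. Both functional forms and all constants below are hypothetical stand-ins, not the paper's derived expressions.

```python
def surface_to_inner_ratio(radius_nm, atom_diameter_nm=0.3):
    """Ratio of atoms in a one-atom-thick surface shell to interior atoms
    for a spherical particle, assuming uniform atomic density
    (illustrative geometry only)."""
    r_in = radius_nm - atom_diameter_nm
    return (radius_nm ** 3 - r_in ** 3) / r_in ** 3

def melting_point(radius_nm, t_bulk_k, c_nm=0.6):
    """Gibbs-Thomson-like size-dependent melting point with a
    hypothetical material constant c_nm: T_m(R) = T_bulk * (1 - c/R)."""
    return t_bulk_k * (1.0 - c_nm / radius_nm)

# Sn melts at roughly 505 K in the bulk; the depression grows as R shrinks,
# which is the qualitative trend the paper's models capture.
print(melting_point(50.0, 505.0), melting_point(3.0, 505.0))
print(surface_to_inner_ratio(50.0), surface_to_inner_ratio(3.0))
```

    The growing surface-atom fraction at small radii is precisely what drives both the lattice-volume change and the melting-point depression in models of this family.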

  20. Pairwise-interaction extended point-particle model for particle-laden flows

    NASA Astrophysics Data System (ADS)

    Akiki, G.; Moore, W. C.; Balachandar, S.

    2017-12-01

    In this work we consider the pairwise interaction extended point-particle (PIEP) model for Euler-Lagrange simulations of particle-laden flows. By accounting for the precise location of neighbors the PIEP model goes beyond local particle volume fraction, and distinguishes the influence of upstream, downstream and laterally located neighbors. The two main ingredients of the PIEP model are (i) the undisturbed flow at any particle is evaluated as a superposition of the macroscale flow and a microscale flow that is approximated as a pairwise superposition of perturbation fields induced by each of the neighboring particles, and (ii) the forces and torque on the particle are then calculated from the undisturbed flow using the Faxén form of the force relation. The computational efficiency of the standard Euler-Lagrange approach is retained, since the microscale perturbation fields induced by a neighbor are pre-computed and stored as PIEP maps. Here we extend the PIEP force model of Akiki et al. [3] with a corresponding torque model to systematically include the effect of perturbation fields induced by the neighbors in evaluating the net torque. Also, we use DNS results from a uniform flow over two stationary spheres to further improve the PIEP force and torque models. We then test the PIEP model in three different sedimentation problems and compare the results against corresponding DNS to assess the accuracy of the PIEP model and improvement over the standard point-particle approach. In the case of two sedimenting spheres in a quiescent ambient the PIEP model is shown to capture the drafting-kissing-tumbling process. In cases of 5 and 80 sedimenting spheres a good agreement is obtained between the PIEP simulation and the DNS. For all three simulations, the DEM-PIEP was able to recreate, to a good extent, the results from the DNS, while requiring only a negligible fraction of the numerical resources required by the fully-resolved DNS.

  1. Tight-binding modeling and low-energy behavior of the semi-Dirac point.

    PubMed

    Banerjee, S; Singh, R R P; Pardo, V; Pickett, W E

    2009-07-03

    We develop a tight-binding model description of semi-Dirac electronic spectra, with highly anisotropic dispersion around point Fermi surfaces, recently discovered in electronic structure calculations of VO2-TiO2 nanoheterostructures. We contrast their spectral properties with the well-known Dirac points on the honeycomb lattice relevant to graphene layers and the spectra of bands touching each other in zero-gap semiconductors. We also consider the lowest order dispersion around one of the semi-Dirac points and calculate the resulting electronic energy levels in an external magnetic field. In spite of apparently similar electronic structures, Dirac and semi-Dirac systems support diverse low-energy physics.

  2. Self-Exciting Point Process Modeling of Conversation Event Sequences

    NASA Astrophysics Data System (ADS)

    Masuda, Naoki; Takaguchi, Taro; Sato, Nobuo; Yano, Kazuo

    Self-exciting processes of Hawkes type have been used to model various phenomena including earthquakes, neural activities, and views of online videos. Studies of temporal networks have revealed that sequences of social interevent times for individuals are highly bursty. We examine some basic properties of event sequences generated by the Hawkes self-exciting process to show that it generates bursty interevent times for a wide parameter range. Then, we fit the model to data of conversation sequences recorded in company offices in Japan. In this way, we can estimate the relative magnitude of the self-excitement, its temporal decay, and the base event rate independent of the self-excitation. These parameters vary considerably across individuals. We also point out an important limitation of the Hawkes model: the correlation in the interevent times and the burstiness cannot be independently modulated.
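
    The self-exciting dynamics described above are easy to simulate. Below is a minimal sketch of Ogata's thinning algorithm for a univariate Hawkes process with an exponential memory kernel; the parameter values are illustrative assumptions, not the estimates fitted to the conversation data.

    ```python
    import numpy as np

    def intensity(t, events, mu, alpha, beta):
        """Conditional intensity: each past event at time s adds
        alpha*beta*exp(-beta*(t - s)) on top of the base rate mu."""
        return mu + alpha * beta * sum(np.exp(-beta * (t - s)) for s in events)

    def simulate_hawkes(mu, alpha, beta, t_max, seed=0):
        """Ogata thinning: between events the intensity only decays, so the
        intensity at the current time is a valid upper bound for candidates."""
        rng = np.random.default_rng(seed)
        events, t = [], 0.0
        while t < t_max:
            lam_bar = intensity(t, events, mu, alpha, beta)
            t += rng.exponential(1.0 / lam_bar)       # next candidate time
            if t >= t_max:
                break
            if rng.uniform(0.0, lam_bar) < intensity(t, events, mu, alpha, beta):
                events.append(t)                      # accept the candidate
        return np.array(events)

    # branching ratio alpha < 1 keeps the process stationary
    times = simulate_hawkes(mu=0.5, alpha=0.6, beta=2.0, t_max=200.0)
    ```

    With these parameters the stationary rate is mu/(1 - alpha) = 1.25 events per unit time, and the clustering of events produces the heavy-tailed, bursty interevent times the abstract refers to.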

  3. MCNP-REN - A Monte Carlo Tool for Neutron Detector Design Without Using the Point Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abhold, M.E.; Baker, M.C.

    1999-07-25

    The development of neutron detectors makes extensive use of predictions of detector response through Monte Carlo techniques in conjunction with the point reactor model. Unfortunately, the point reactor model fails to accurately predict detector response in common applications. For this reason, the general Monte Carlo N-Particle code (MCNP) was modified to simulate the pulse streams that would be generated by a neutron detector and normally analyzed by a shift register. This modified code, MCNP - Random Exponentially Distributed Neutron Source (MCNP-REN), along with the Time Analysis Program (TAP), predicts neutron detector response without using the point reactor model, making it unnecessary for the user to decide whether or not the assumptions of the point model are met for their application. MCNP-REN is capable of simulating standard neutron coincidence counting as well as neutron multiplicity counting. Measurements of MOX fresh fuel made using the Underwater Coincidence Counter (UWCC) as well as measurements of HEU reactor fuel using the active neutron Research Reactor Fuel Counter (RRFC) are compared with calculations. The method used in MCNP-REN is demonstrated to be fundamentally sound and shown to eliminate the need to use the point model for detector performance predictions.

  4. QSAR modeling of cumulative environmental end-points for the prioritization of hazardous chemicals.

    PubMed

    Gramatica, Paola; Papa, Ester; Sangion, Alessandro

    2018-01-24

    The hazard of chemicals in the environment is inherently related to the molecular structure and derives simultaneously from various chemical properties/activities/reactivities. Models based on Quantitative Structure Activity Relationships (QSARs) are useful to screen, rank and prioritize chemicals that may have an adverse impact on humans and the environment. This paper reviews a selection of QSAR models (based on theoretical molecular descriptors) developed for cumulative multivariate endpoints, which were derived by mathematical combination of multiple effects and properties. The cumulative end-points provide an integrated holistic point of view to address environmentally relevant properties of chemicals.

  5. Analysis of 3d Building Models Accuracy Based on the Airborne Laser Scanning Point Clouds

    NASA Astrophysics Data System (ADS)

    Ostrowski, W.; Pilarska, M.; Charyton, J.; Bakuła, K.

    2018-05-01

    Creating 3D building models on a large scale is becoming more popular and finds many applications. Nowadays, the broad term "3D building models" covers several types of products: the well-known CityGML solid models (available at several Levels of Detail), which are mainly generated from Airborne Laser Scanning (ALS) data, as well as 3D mesh models that can be created from both nadir and oblique aerial images. City authorities and national mapping agencies are interested in obtaining 3D building models. Apart from the completeness of the models, the accuracy aspect is also important. The final accuracy of a building model depends on various factors (accuracy of the source data, complexity of the roof shapes, etc.). In this paper, a methodology for the inspection of datasets containing 3D models is presented. The proposed approach checks every building in the dataset against ALS point clouds, testing both accuracy and level of detail. Analysis of statistical parameters of normal heights for the reference point cloud and the tested planes, combined with segmentation of the point cloud, provides a tool that can indicate which building, and which roof plane, does not fulfill the requirements of model accuracy and detail correctness. The proposed method was tested on two datasets: a solid model and a mesh model.

  6. A Hybrid Vortex Sheet / Point Vortex Model for Unsteady Separated Flows

    NASA Astrophysics Data System (ADS)

    Darakananda, Darwin; Eldredge, Jeff D.; Colonius, Tim; Williams, David R.

    2015-11-01

    The control of separated flow over an airfoil is essential for obtaining lift enhancement, drag reduction, and the overall ability to perform high-agility maneuvers. In order to develop reliable flight control systems capable of realizing agile maneuvers, we need a low-order aerodynamics model that can accurately predict the force response of an airfoil to arbitrary disturbances and/or actuation. In the present work, we integrate vortex sheets and variable-strength point vortices into a method that is able to capture the formation of coherent vortex structures while remaining computationally tractable for control purposes. The role of the vortex sheet is limited to tracking the dynamics of the shear layer immediately behind the airfoil. When parts of the sheet develop into large-scale structures, those sections are replaced by variable-strength point vortices. We prevent the vortex sheets from growing indefinitely by truncating the tips of the sheets and transferring their circulation into nearby point vortices whenever the length of a sheet exceeds a threshold. We demonstrate the model on a variety of canonical problems, including pitch-up and impulse translation of an airfoil at various angles of attack. Support by the U.S. Air Force Office of Scientific Research (FA9550-14-1-0328) with program manager Dr. Douglas Smith is gratefully acknowledged.

  7. Modeling elephant-mediated cascading effects of water point closure.

    PubMed

    Hilbers, Jelle P; Van Langevelde, Frank; Prins, Herbert H T; Grant, C C; Peel, Mike J S; Coughenour, Michael B; De Knegt, Henrik J; Slotow, Rob; Smit, Izak P J; Kiker, Greg A; De Boer, Willem F

    2015-03-01

    Wildlife management to reduce the impact of wildlife on their habitat can be done in several ways, among which removing animals (by either culling or translocation) is most often used. There are, however, alternative ways to control wildlife densities, such as opening or closing water points, and the effects of these alternatives are poorly studied. In this paper, we focus on manipulating large herbivores through the closure of water points (WPs). Removal of artificial WPs has been suggested in order to change the distribution of African elephants, which occur in high densities in national parks in Southern Africa and are thought to have a destructive effect on the vegetation. Here, we modeled the long-term effects of different scenarios of WP closure on the spatial distribution of elephants, and the consequential effects on the vegetation and other herbivores in Kruger National Park, South Africa. Using a dynamic ecosystem model, SAVANNA, scenarios were evaluated that varied in availability of artificial WPs, levels of natural water, and elephant densities. Our modeling results showed that elephants can indirectly negatively affect the distributions of meso-mixed feeders, meso-browsers, and some meso-grazers under wet conditions. The closure of artificial WPs hardly had any effect during these natural wet conditions. Under dry conditions, the spatial distribution of both elephant bulls and cows changed when the availability of artificial water was severely reduced in the model. These changes in spatial distribution triggered changes in the spatial availability of woody biomass over the simulation period of 80 years, and this led to changes in the rest of the herbivore community, resulting in increased densities of all herbivores, except for giraffe and steenbok, in areas close to rivers. The spatial distributions of elephant bulls and cows were shown to be less affected by the closure of WPs than those of most of the other herbivore species. Our study contributes to ecologically

  8. Multistate Landau-Zener models with all levels crossing at one point

    DOE PAGES

    Li, Fuxiang; Sun, Chen; Chernyak, Vladimir Y.; ...

    2017-08-04

    Within this paper, we discuss common properties and reasons for integrability in the class of multistate Landau-Zener models with all diabatic levels crossing at one point. Exploring the Stokes phenomenon, we show that each previously solved model has a dual one, whose scattering matrix can be also obtained analytically. For applications, we demonstrate how our results can be used to study conversion of molecular into atomic Bose condensates during passage through the Feshbach resonance, and provide purely algebraic solutions of the bowtie and special cases of the driven Tavis-Cummings model.

  9. Creative use of pilot points to address site and regional scale heterogeneity in a variable-density model

    USGS Publications Warehouse

    Dausman, Alyssa M.; Doherty, John; Langevin, Christian D.

    2010-01-01

    Pilot points for parameter estimation were creatively used to address heterogeneity at both the well-field and regional scales in a variable-density groundwater flow and solute transport model designed to test multiple hypotheses for upward migration of fresh effluent injected into a highly transmissive saline carbonate aquifer. Two sets of pilot points were used within multiple model layers: one set of inner pilot points (totaling 158) with high spatial density to represent hydraulic conductivity at the site, and a second set of outer points (totaling 36) with lower spatial density to represent hydraulic conductivity farther from the site. Use of a lower spatial density outside the site allowed (1) the total number of pilot points to be reduced while maintaining flexibility to accommodate heterogeneity at different scales, and (2) development of a model with greater areal extent in order to simulate proper boundary conditions that have a limited effect on the area of interest. The parameters associated with the inner pilot points were log-transformed hydraulic conductivity multipliers of the conductivity field obtained by interpolation from the outer pilot points. The use of this dual inner-outer scale parameterization (with inner parameters constituting multipliers for outer parameters) allowed a smooth transition of hydraulic conductivity from the site scale, where greater spatial variability of hydraulic properties exists, to the regional scale, where less spatial variability was necessary for model calibration. While the model is highly parameterized to accommodate potential aquifer heterogeneity, the total number of pilot points is kept at a minimum to enable reasonable calibration run times.

  10. Virtual and Printed 3D Models for Teaching Crystal Symmetry and Point Groups

    ERIC Educational Resources Information Center

    Casas, Lluís; Estop, Eugènia

    2015-01-01

    Both, virtual and printed 3D crystal models can help students and teachers deal with chemical education topics such as symmetry and point groups. In the present paper, two freely downloadable tools (interactive PDF files and a mobile app) are presented as examples of the application of 3D design to study point-symmetry. The use of 3D printing to…

  11. Hole-ness of point clouds

    NASA Astrophysics Data System (ADS)

    Gronz, Oliver; Seeger, Manuel; Klaes, Björn; Casper, Markus C.; Ries, Johannes B.

    2015-04-01

    Accurate and dense 3D models of soil surfaces can be used in various ways: as initial shapes for erosion models, as benchmark shapes for erosion model outputs, and to derive metrics such as random roughness. One easy and low-cost method to produce these models is structure from motion (SfM). Using this method, two questions arise: Does the soil moisture, which changes the colour, albedo and reflectivity of the soil, influence the model quality? How can the model quality be evaluated? To answer these questions, a suitable data set has been produced: soil has been placed on a tray and areas with different roughness structures have been formed. For different moisture states - dry, medium, saturated - and two different lighting conditions - direct and indirect - sets of high-resolution images at the same camera positions have been taken. From the six image sets, 3D point clouds have been produced using VisualSfM. Visual inspection of the 3D models showed that all models have different areas where holes of different sizes occur. But determining a model's quality by visual inspection is obviously a subjective task. One typical approach to evaluating model quality objectively is to estimate the point density on a regular, two-dimensional grid: the number of 3D points in each grid cell projected onto a plane is calculated. This works well for surfaces that do not show vertical structures. Along vertical structures, however, many points will be projected onto the same grid cell, so the point density depends more on the shape of the surface than on the quality of the model. Another approach uses the points resulting from Poisson Surface Reconstruction. One of this algorithm's properties is the filling of holes: new points are interpolated inside the holes. Using the original 3D point cloud and the interpolated Poisson point set, two analyses have been performed: For all Poisson points, the
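
    The grid-based density check described above is straightforward to reproduce. The sketch below projects a cloud onto a regular XY grid and flags empty cells as candidate holes; the interface and the tiny synthetic cloud are illustrative assumptions, not the authors' code.

    ```python
    import numpy as np

    def grid_point_density(points, bounds, n_cells):
        """Project a 3D cloud onto the XY plane and count points per grid cell.
        bounds = (xmin, xmax, ymin, ymax); returns an (n_cells, n_cells) array."""
        xmin, xmax, ymin, ymax = bounds
        counts, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                                      bins=n_cells,
                                      range=[[xmin, xmax], [ymin, ymax]])
        return counts

    # tiny synthetic cloud: two points share one cell, a third sits elsewhere
    cloud = np.array([[0.05, 0.05, 0.0],
                      [0.06, 0.05, 0.1],
                      [0.45, 0.45, 0.2]])
    counts = grid_point_density(cloud, (0.0, 1.0, 0.0, 1.0), 10)
    holes = counts == 0   # empty cells are candidate holes
    ```

    As the abstract notes, this projection conflates surface shape with model quality near vertical structures, which is what motivates the comparison against the Poisson-reconstructed point set.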

  12. Cubature/ Unscented/ Sigma Point Kalman Filtering with Angular Measurement Models

    DTIC Science & Technology

    2015-07-06

    …measurement and process non-linearities, such as the cubature Kalman filter, can perform extremely poorly in many applications involving angular… Kalman filtering is a realization of the best linear unbiased estimator (BLUE) that evaluates certain integrals for expected values using different forms

  13. Modeling and Visualization Process of the Curve of Pen Point by GeoGebra

    ERIC Educational Resources Information Center

    Aktümen, Muharem; Horzum, Tugba; Ceylan, Tuba

    2013-01-01

    This study describes the mathematical construction of a real-life model by means of parametric equations, as well as the two- and three-dimensional visualization of the model using the software GeoGebra. The model was initially considered as "determining the parametric equation of the curve formed on a plane by the point of a pen, positioned…

  14. Accuracy analysis of point cloud modeling for evaluating concrete specimens

    NASA Astrophysics Data System (ADS)

    D'Amico, Nicolas; Yu, Tzuyang

    2017-04-01

    Photogrammetric methods such as structure from motion (SFM) can acquire accurate information about geometric features, surface cracks, and mechanical properties of specimens and structures in civil engineering. Conventional approaches to verifying the accuracy of photogrammetric models usually require other optical techniques such as LiDAR. In this paper, the geometric accuracy of photogrammetric modeling is investigated by studying the effects of the number of photos, the radius of curvature, and the point cloud density (PCD) on estimated lengths, areas, volumes, and different stress states of concrete cylinders and panels. Four plain concrete cylinders and two plain mortar panels were used for the study. A commercially available mobile phone camera was used to collect all photographs. Agisoft PhotoScan software was applied in the photogrammetric modeling of all concrete specimens. From our results, it was found that increasing the number of photos does not necessarily improve the geometric accuracy of point cloud models (PCM). It was also found that the effect of the radius of curvature is not significant compared with those of the number of photos and PCD. A PCD threshold of 15.7194 pts/cm3 is proposed to construct reliable and accurate PCM for condition assessment. At this PCD threshold, all errors in estimating lengths, areas, and volumes were less than 5%. Finally, from the study of the mechanical behavior of a plain concrete cylinder, we found that an increase of the stress level inside the cylinder can be captured by the increase of radial strain in its PCM.

  15. A model of the 8-25 micron point source infrared sky

    NASA Technical Reports Server (NTRS)

    Wainscoat, Richard J.; Cohen, Martin; Volk, Kevin; Walker, Helen J.; Schwartz, Deborah E.

    1992-01-01

    We present a detailed model for the IR point-source sky that comprises geometrically and physically realistic representations of the Galactic disk, bulge, stellar halo, spiral arms (including the 'local arm'), molecular ring, and the extragalactic sky. We represent each of the distinct Galactic components by up to 87 types of Galactic source, each fully characterized by scale heights, space densities, and absolute magnitudes at BVJHK, 12, and 25 microns. The model is guided by a parallel Monte Carlo simulation of the Galaxy at 12 microns. The content of our Galactic source table constitutes a good match to the 12 micron luminosity function in the simulation, as well as to the luminosity functions at V and K. We are able to produce differential and cumulative IR source counts for any bandpass lying fully within the IRAS Low-Resolution Spectrometer's range (7.7-22.7 microns) as well as for the IRAS 12 and 25 micron bands. These source counts match the IRAS observations well. The model can be used to predict the character of the point source sky expected for observations from IR space experiments.

  16. Exploration of reaction mechanisms of anthocyanin degradation in a roselle extract through kinetic studies on formulated model media.

    PubMed

    Sinela, André Mundombe; Mertz, Christian; Achir, Nawel; Rawat, Nadirah; Vidot, Kevin; Fulcrand, Hélène; Dornier, Manuel

    2017-11-15

    The effect of oxygen, polyphenols and metals on the degradation of delphinidin and cyanidin 3-O-sambubioside from Hibiscus sabdariffa L. was studied. Experiments were conducted on aqueous extracts (degassed or not), an isolated polyphenolic fraction and extract-like model media, allowing the impact of the different constituents to be decoupled. All solutions were stored for 2 months at 37 °C. Anthocyanins and their degradation compounds were regularly analyzed by HPLC-DAD. Oxygen concentration did not affect the anthocyanin degradation rate. The degradation rate of delphinidin 3-O-sambubioside increased 6-fold as iron increased from 1 to 13 mg kg⁻¹, but decreased with chlorogenic and gallic acids. The degradation rate of cyanidin 3-O-sambubioside was not affected by polyphenols but increased 3-fold with increasing iron concentration, with a concomitant decrease in the yield of the scission product, protocatechuic acid. Two degradation pathways of anthocyanins were identified: a major metal-catalyzed oxidation followed by condensation, and a minor scission pathway which accounts for about 10% of degraded anthocyanins.

  17. van der Waals model for the surface tension of liquid 4He near the λ point

    NASA Astrophysics Data System (ADS)

    Tavan, Paul; Widom, B.

    1983-01-01

    We develop a phenomenological model of the 4He liquid-vapor interface. With it we calculate the surface tension of liquid helium near the λ point and compare with the experimental measurements by Magerlein and Sanders. The model is a form of the van der Waals surface-tension theory, extended to apply to a phase equilibrium in which the simultaneous variation of two order parameters (here the superfluid order parameter and the total density) is essential. The properties of the model are derived analytically above the λ point and numerically below it. Just below the λ point the superfluid order parameter is found to approach its bulk-superfluid-phase value very slowly with distance on the liquid side of the interface (the characteristic distance being the superfluid coherence length), and to vanish rapidly with distance on the vapor side, while the total density approaches its bulk-phase values rapidly and nearly symmetrically on the two sides. Below the λ point the surface tension has a |ɛ|^(3/2) singularity (ɛ ~ T - T_λ) arising from the temperature dependence of the spatially varying superfluid order parameter. This is the mean-field form of the more general |ɛ|^μ singularity predicted by Sobyanin and by Hohenberg, in which μ (which is in reality close to 1.35 at the λ point of helium) is the exponent with which the interfacial tension between two critical phases vanishes. Above the λ point the surface tension in this model is analytic in ɛ. A singular term |ɛ|^μ may in reality be present in the surface tension above as well as below the λ point, although there should still be a pronounced asymmetry. The variation with temperature of the model surface tension is overall much like that in experiment.

  18. Estimating animal resource selection from telemetry data using point process models

    USGS Publications Warehouse

    Johnson, Devin S.; Hooten, Mevin B.; Kuhn, Carey E.

    2013-01-01

    To demonstrate the analysis of telemetry data with the point process approach, we analysed a data set of telemetry locations from northern fur seals (Callorhinus ursinus) in the Pribilof Islands, Alaska. Both a space–time and an aggregated space-only model were fitted. At the individual level, the space–time analysis showed little selection relative to the habitat covariates. However, at the study area level, the space-only model showed strong selection relative to the covariates.

  19. Point model equations for neutron correlation counting: Extension of Böhnel's equations to any order

    DOE PAGES

    Favalli, Andrea; Croft, Stephen; Santi, Peter

    2015-06-15

    Various methods of autocorrelation neutron analysis may be used to extract information about a measurement item containing spontaneously fissioning material. The two predominant approaches are the time-correlation-analysis methods (which make use of a coincidence gate) of multiplicity shift register logic and of Feynman sampling. The common feature is that the correlated nature of the pulse train can be described by a vector of reduced factorial multiplet rates. We call these singlets, doublets, triplets, etc. Within the point reactor model the multiplet rates may be related to the properties of the item, the parameters of the detector, and basic nuclear data constants by a series of coupled algebraic equations, the so-called point model equations. Solving, or inverting, the point model equations using experimental calibration model parameters is how assays of unknown items are performed. Currently only the first three multiplets are routinely used. In this work we develop the point model equations to higher-order multiplets using the probability-generating-function approach combined with the general derivative chain rule, the so-called Faà di Bruno formula. Explicit expressions up to 5th order are provided, as well as the general iterative formula to calculate any order. This study represents the first necessary step towards determining whether higher-order multiplets can add value to nondestructive measurement practice for nuclear materials control and accountancy.
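
    The generating-function machinery behind the higher-order multiplets can be illustrated with a toy calculation: the reduced factorial moment of order n is the n-th derivative of a composite probability generating function at x = 1, divided by n!. The two pgfs below are invented single-parameter stand-ins, not the actual multiplicity distributions of the point model.

    ```python
    import sympy as sp

    x = sp.symbols('x')
    p = sp.Rational(1, 4)
    g = (1 - p) + p * x**3   # toy pgf: a "fission" emits 3 neutrons with probability p
    f = x**2                 # toy pgf: the source always emits 2 neutrons
    h = f.subs(x, g)         # composite pgf, the structure the chain rule acts on

    # reduced factorial multiplet rates: singlets, doublets, triplets, ...
    multiplets = [sp.diff(h, x, n).subs(x, 1) / sp.factorial(n) for n in range(1, 6)]
    ```

    sympy applies the chain rule mechanically; the point of Faà di Bruno's formula in the paper is to obtain the same multiplets in closed algebraic form at any order rather than symbolically case by case.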

  20. Fission yield calculation using toy model based on Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Jubaidah, Kurniadi, Rizal

    2015-09-01

    The toy model is a new approximation for predicting the fission yield distribution. The toy model treats the nucleus as an elastic toy consisting of marbles, where the number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate real nuclear properties. In this research, the toy nucleons are only influenced by a central force. A heavy toy nucleus induced by a toy nucleon splits into two fragments; these two fission fragments are called the fission yield. In this research, energy entanglement is neglected. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other. Five Gaussian parameters are used in this research: the scission point of the two curves (Rc), the means of the left and right curves (μL and μR), and the deviations of the left and right curves (σL and σR). The fission yield distribution is analyzed based on Monte Carlo simulation. The result shows that variation in σ or μ can significantly shift the average frequency of asymmetric fission yields. This also varies the range of the fission yield probability distribution. In addition, variation in the iteration coefficient only changes the frequency of fission yields. Monte Carlo simulation for fission yield calculation using the toy model successfully indicates the same tendency as experimental results, where the average light fission yield is in the range of 90
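
    A minimal Monte Carlo sketch of the two-Gaussian picture: each fission event samples a fragment mass from one of two intersecting Gaussians, and the partner fragment is the complement. All parameter values (μL, μR, the widths, and the compound mass A) are illustrative assumptions, not the values used in the paper.

    ```python
    import numpy as np

    def toy_fission_yields(n_events, A=236, mu_L=96.0, mu_R=140.0,
                           sigma_L=5.0, sigma_R=5.0, seed=0):
        """Sample fragment masses from two Gaussians (light or heavy peak chosen
        with equal probability); the partner is the complement A - m
        (no neutron emission in this sketch)."""
        rng = np.random.default_rng(seed)
        heavy = rng.random(n_events) < 0.5
        m = np.where(heavy,
                     rng.normal(mu_R, sigma_R, n_events),
                     rng.normal(mu_L, sigma_L, n_events))
        m = np.round(m).astype(int)
        return m, A - m

    masses, partners = toy_fission_yields(20000)
    ```

    Histogramming `masses` reproduces the familiar double-humped yield curve; widening σ or moving μ shifts how much weight falls in the asymmetric peaks, as the abstract describes.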

  2. Modeling fixation locations using spatial point processes.

    PubMed

    Barthelmé, Simon; Trukenbrod, Hans; Engbert, Ralf; Wichmann, Felix

    2013-10-01

    Whenever eye movements are measured, a central part of the analysis concerns where subjects fixate and why. To a first approximation, a set of fixations can be viewed as a set of points in space; this implies that fixations are spatial data and that the analysis of fixation locations can be beneficially thought of as a spatial statistics problem. We argue that thinking of fixation locations as arising from point processes is a very fruitful framework for eye-movement data, helping turn qualitative questions into quantitative ones. We provide a tutorial introduction to some of the main ideas of the field of spatial statistics, focusing especially on spatial Poisson processes. We show how point processes help relate image properties to fixation locations. In particular we show how point processes naturally express the idea that image features' predictability for fixations may vary from one image to another. We review other methods of analysis used in the literature, show how they relate to point process theory, and argue that thinking in terms of point processes substantially extends the range of analyses that can be performed and clarifies their interpretation.
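
    The spatial Poisson process at the core of this framework is simple to simulate. The sketch below draws an inhomogeneous process on a rectangular "image" by thinning a homogeneous one; the Gaussian intensity function is an arbitrary stand-in for an image-based saliency map, not a model from the paper.

    ```python
    import numpy as np

    def sample_poisson_process(intensity_fn, lam_max, width, height, seed=0):
        """Thinning: draw a homogeneous Poisson process at the envelope rate
        lam_max, then keep each point with probability intensity / lam_max."""
        rng = np.random.default_rng(seed)
        n = rng.poisson(lam_max * width * height)   # homogeneous candidate count
        xs = rng.uniform(0.0, width, n)
        ys = rng.uniform(0.0, height, n)
        keep = rng.uniform(0.0, lam_max, n) < intensity_fn(xs, ys)
        return xs[keep], ys[keep]

    # stand-in "saliency": fixations concentrate toward the image centre
    salience = lambda x, y: 0.5 * np.exp(-((x - 50) ** 2 + (y - 50) ** 2) / 800.0)
    fx, fy = sample_poisson_process(salience, lam_max=0.5,
                                    width=100.0, height=100.0)
    ```

    Fitting runs in the opposite direction: given observed fixations, one estimates how the intensity depends on image covariates, which is exactly the inferential question the tutorial addresses.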

  3. Improving Gastric Cancer Outcome Prediction Using Single Time-Point Artificial Neural Network Models

    PubMed Central

    Nilsaz-Dezfouli, Hamid; Abu-Bakar, Mohd Rizam; Arasan, Jayanthi; Adam, Mohd Bakri; Pourhoseingholi, Mohamad Amin

    2017-01-01

    In cancer studies, the prediction of cancer outcome based on a set of prognostic variables has been a long-standing topic of interest. Current statistical methods for survival analysis offer the possibility of modelling cancer survivability but require unrealistic assumptions about the survival time distribution or proportionality of hazard. Therefore, attention must be paid in developing nonlinear models with less restrictive assumptions. Artificial neural network (ANN) models are primarily useful in prediction when nonlinear approaches are required to sift through the plethora of available information. The applications of ANN models for prognostic and diagnostic classification in medicine have attracted a lot of interest. The applications of ANN models in modelling the survival of patients with gastric cancer have been discussed in some studies without completely considering the censored data. This study proposes an ANN model for predicting gastric cancer survivability, considering the censored data. Five separate single time-point ANN models were developed to predict the outcome of patients after 1, 2, 3, 4, and 5 years. The performance of ANN model in predicting the probabilities of death is consistently high for all time points according to the accuracy and the area under the receiver operating characteristic curve. PMID:28469384

  4. Comparative study of the fragments' mass and energy characteristics in the spontaneous fission of 238Pu, 240Pu and 242Pu and in the thermal-neutron-induced fission of 239Pu

    NASA Astrophysics Data System (ADS)

    Schillebeeckx, P.; Wagemans, C.; Deruytter, A. J.; Barthélémy, R.

    1992-08-01

    The energy and mass distributions and their correlations have been studied for the spontaneous fission of 238, 240, 242Pu and for the thermal-neutron-induced fission of 239Pu. A comparison of 240Pu(s.f.) and 239Pu(nth,f) shows that the increase in excitation energy mainly results in an increase of the intrinsic excitation energy. A comparison of the results for 238Pu, 240Pu and 242Pu(s.f.) demonstrates the occurrence of different fission modes with varying relative probability. These results are discussed in terms of the scission point model as well as in terms of the fission channel model with random neck rupture.

  5. On two-point boundary correlations in the six-vertex model with domain wall boundary conditions

    NASA Astrophysics Data System (ADS)

    Colomo, F.; Pronko, A. G.

    2005-05-01

    The six-vertex model with domain wall boundary conditions on an N × N square lattice is considered. The two-point correlation function describing the probability of having two vertices in a given state at opposite (top and bottom) boundaries of the lattice is calculated. It is shown that this two-point boundary correlator is expressible in a very simple way in terms of the one-point boundary correlators of the model on N × N and (N - 1) × (N - 1) lattices. In alternating sign matrix (ASM) language this result implies that the doubly refined x-enumerations of ASMs are just appropriate combinations of the singly refined ones.

  6. A comparative analysis of speed profile models for wrist pointing movements.

    PubMed

    Vaisman, Lev; Dipietro, Laura; Krebs, Hermano Igo

    2013-09-01

    Following two decades of design and clinical research on robot-mediated therapy for the shoulder and elbow, therapeutic robotic devices for other joints are being proposed: several research groups, including ours, have designed robots for the wrist, either to be used as stand-alone devices or in conjunction with shoulder and elbow devices. However, in contrast with robots for the shoulder and elbow, which were able to take advantage of descriptive kinematic models developed in neuroscience over the past 30 years, the design of wrist robot controllers cannot rely on similar prior art: wrist movement kinematics has been largely unexplored. This study aimed to examine the speed profiles of fast, visually evoked, visually guided, target-directed human wrist pointing movements. One thousand three hundred ninety-eight (1398) trials were recorded from seven unimpaired subjects who performed center-out flexion/extension and abduction/adduction wrist movements, and were fitted with 19 models previously proposed for describing reaching speed profiles. A nonlinear least-squares optimization procedure extracted parameter sets that minimized the error between experimental and reconstructed data. The models' performance was compared based on their ability to reconstruct the experimental data. Results suggest that the support-bounded lognormal is the best model for speed profiles of fast wrist pointing movements. Applications include the design of control algorithms for therapeutic wrist robots and quantitative metrics of motor recovery.
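    The model-comparison step can be sketched in simplified form: fit each candidate speed-profile shape to a trial by least squares and compare residuals (a two-model toy with a minimum-jerk and a Gaussian profile on synthetic data; the actual study fitted 19 models, including the support-bounded lognormal, with full nonlinear optimization):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 201)                    # normalized movement time

def minimum_jerk(t):
    return 30.0 * (t**2 - 2.0 * t**3 + t**4)      # classic bell-shaped profile

def gaussian(t):
    return np.exp(-(t - 0.5)**2 / (2 * 0.15**2))  # an alternative bell shape

rng = np.random.default_rng(1)
speed = 0.4 * minimum_jerk(t) + rng.normal(0.0, 0.01, t.size)  # synthetic trial

def sse(shape):
    f = shape(t)
    a = (f @ speed) / (f @ f)       # closed-form least-squares amplitude
    return float(np.sum((speed - a * f)**2))

scores = {"minimum_jerk": sse(minimum_jerk), "gaussian": sse(gaussian)}
best = min(scores, key=scores.get)  # model with the smallest residual wins
print(best, scores)
```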

  7. Fast maximum likelihood estimation using continuous-time neural point process models.

    PubMed

    Lepage, Kyle Q; MacDonald, Christopher J

    2015-06-01

    A recent report estimates that the number of simultaneously recorded neurons is growing exponentially. A commonly employed statistical paradigm using discrete-time point process models of neural activity involves the computation of a maximum-likelihood estimate. The time to compute this estimate, per neuron, is proportional to the number of bins in a finely spaced discretization of time. By using continuous-time models of neural activity and optimally efficient Gaussian quadrature, memory requirements and computation times are dramatically decreased in the commonly encountered situation where the number of parameters p is much less than the number of time-bins n. In this regime, with q equal to the quadrature order, memory requirements are decreased from O(np) to O(qp), and the number of floating-point operations is decreased from O(np²) to O(qp²). Accuracy of the proposed estimates is assessed based upon physiological considerations, error bounds, and mathematical results describing the relation between numerical integration error and the numerical error affecting both parameter estimates and the observed Fisher information. A check is provided which is used to adapt the order of numerical integration. The procedure is verified in simulation and for hippocampal recordings. It is found that in 95% of hippocampal recordings a q of 60 yields numerical error negligible with respect to parameter estimate standard error. Statistical inference using the proposed methodology is a fast and convenient alternative to statistical inference performed using a discrete-time point process model of neural activity. It enables the employment of the statistical methodology available with discrete-time inference, but is faster, uses less memory, and avoids any error due to discretization.
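    The quadrature idea can be illustrated on the integral term of an inhomogeneous Poisson log-likelihood, log L = Σᵢ log λ(sᵢ) − ∫₀ᵀ λ(t) dt (a sketch using NumPy's Gauss-Legendre nodes; the intensity function and event times are invented for illustration):

```python
import numpy as np

T = 10.0
lam = lambda t: np.exp(0.5 + 0.8 * np.sin(t))   # conditional intensity (made up)
spikes = np.array([0.7, 1.3, 2.1, 4.8, 7.9])    # example event times

# Gauss-Legendre quadrature of order q, mapped from [-1, 1] to [0, T]
q = 20
x, w = np.polynomial.legendre.leggauss(q)
nodes = 0.5 * T * (x + 1.0)
integral_q = 0.5 * T * float(np.sum(w * lam(nodes)))

# Reference: midpoint rule on a very fine grid (the discrete-time analogue,
# needing m >> q intensity evaluations for comparable accuracy)
m = 1_000_000
dt = T / m
mids = (np.arange(m) + 0.5) * dt
integral_fine = float(np.sum(lam(mids)) * dt)

loglik = float(np.sum(np.log(lam(spikes)))) - integral_q
print(integral_q, integral_fine, loglik)
```

For a smooth intensity, twenty quadrature nodes already match the million-bin discretization to well below the statistical error of any realistic fit.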

  8. Optimal Number and Allocation of Data Collection Points for Linear Spline Growth Curve Modeling: A Search for Efficient Designs

    ERIC Educational Resources Information Center

    Wu, Wei; Jia, Fan; Kinai, Richard; Little, Todd D.

    2017-01-01

    Spline growth modelling is a popular tool to model change processes with distinct phases and change points in longitudinal studies. Focusing on linear spline growth models with two phases and a fixed change point (the transition point from one phase to the other), we detail how to find optimal data collection designs that maximize the efficiency…

  9. Model reference adaptive control for the azimuth-pointing system of a balloon-borne stabilized platform

    NASA Technical Reports Server (NTRS)

    Lubin, Philip M.; Tomizuka, Masayoshi; Chingcuanco, Alfredo O.; Meinhold, Peter R.

    1991-01-01

    A balloon-borne stabilized platform has been developed for the remotely operated altitude-azimuth pointing of a millimeter wave telescope system. This paper presents the development and implementation of model reference adaptive control (MRAC) for the azimuth-pointing system of the stabilized platform. The primary goal of the controller is to achieve pointing rms better than 0.1 deg. Simulation results indicate that MRAC can achieve pointing rms better than 0.1 deg. Ground test results show pointing rms better than 0.03 deg. Data from the first flight at the National Scientific Balloon Facility (NSBF), Palestine, Texas show pointing rms better than 0.02 deg.

  10. A Corner-Point-Grid-Based Voxelization Method for Complex Geological Structure Model with Folds

    NASA Astrophysics Data System (ADS)

    Chen, Qiyu; Mariethoz, Gregoire; Liu, Gang

    2017-04-01

    3D voxelization is the foundation of geological property modeling, and is also an effective approach to realize 3D visualization of the heterogeneous attributes in geological structures. The corner-point grid is a representative voxel data model and a structured grid type that is widely applied at present. When subdividing a complex geological structure model with folds, its structural morphology and bedding features must be fully considered so that the generated voxels preserve the original morphology and, on that basis, can depict the detailed bedding features and the spatial heterogeneity of the internal attributes. To overcome the shortcomings of existing technologies, this work puts forward a corner-point-grid-based voxelization method for complex geological structure models with folds. We have realized the fast conversion from a 3D geological structure model to a fine voxel model according to the rule of isoclines in Ramsay's fold classification. In addition, the voxel model conforms to the spatial features of folds, pinch-outs and other complex geological structures, and the voxels of the laminae inside a fold accord with the result of geological sedimentation and tectonic movement. This provides a carrier and model foundation for subsequent attribute assignment as well as quantitative analysis and evaluation based on the spatial voxels. Finally, we use examples, and a contrastive analysis between the examples and Ramsay's description of isoclines, to discuss the effectiveness and advantages of the proposed method when dealing with the voxelization of 3D geological structure models with folds based on corner-point grids.

  11. Symmetric and asymmetric ternary fission of hot nuclei

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siwek-Wilczynska, K.; Wilczynski, J.; Leegte, H.K.W.

    1993-07-01

    Emission of α particles accompanying fusion-fission processes in the 40Ar + 232Th reaction at E(40Ar) = 365 MeV was studied over a wide range of in-fission-plane and out-of-plane angles. The exact determination of the emission angles of both fission fragments, combined with time-of-flight measurements, allowed us to reconstruct the complete kinematics of each ternary event. The coincident energy spectra of α particles were analyzed using predictions of the energy spectra from the statistical code CASCADE. The analysis clearly demonstrates emission from the composite system prior to fission, emission from fully accelerated fragments after fission, and also emission during scission. The analysis is presented for both symmetric and asymmetric fission. The results have been analyzed using a time-dependent statistical decay code and confronted with dynamical calculations based on a classical one-body dissipation model. The observed near-scission emission is consistent with evaporation from a dinuclear system just before scission and evaporation from separated fragments just after scission. The analysis suggests that the time scale of fission of the hot composite systems is long (about 7×10⁻²⁰ s) and that the motion during the descent to scission is almost completely damped.

  12. A Stochastic Point Cloud Sampling Method for Multi-Template Protein Comparative Modeling.

    PubMed

    Li, Jilong; Cheng, Jianlin

    2016-05-10

    Generating tertiary structural models for a target protein from the known structures of its homologous template proteins and their pairwise sequence alignments is a key step in protein comparative modeling. Here, we developed a new stochastic point cloud sampling method, called MTMG, for multi-template protein model generation. The method first superposes the backbones of the template structures, and the Cα atoms of the superposed templates form a point cloud for each position of the target protein, which is represented by a three-dimensional multivariate normal distribution. MTMG stochastically resamples the positions of the Cα atoms of residues whose positions are uncertain from the distribution, and accepts or rejects each new position according to a simulated annealing protocol, which effectively removes the atomic clashes commonly encountered in multi-template comparative modeling. We benchmarked MTMG on 1,033 sequence alignments generated for CASP9, CASP10 and CASP11 targets. Using multiple templates with MTMG improves the GDT-TS score and TM-score of structural models by 2.96-6.37% and 2.42-5.19% on the three datasets over using single templates. MTMG's performance was comparable to Modeller in terms of GDT-TS score, TM-score, and GDT-HA score, while the average RMSD was improved by the new sampling approach. The MTMG software is freely available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/mtmg.html.
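    The accept/reject step can be sketched as a generic simulated-annealing loop over Cα positions drawn from per-position 3-D normal distributions, with an energy that counts steric clashes (a toy chain, not the MTMG implementation; the distributions, clash cutoff, and cooling schedule are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
L = 20                                     # residues in a toy chain
steps = rng.normal(0.0, 2.0, size=(L, 3))
means = np.cumsum(steps, axis=0)           # per-position "point cloud" centers
cov = np.eye(3)                            # per-position covariance (assumed)
CLASH = 2.0                                # clash cutoff distance (assumed)

def energy(pos):
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    iu = np.triu_indices(L, k=2)           # ignore bonded neighbours
    return int(np.sum(d[iu] < CLASH))      # number of steric clashes

pos = np.array([rng.multivariate_normal(m, cov) for m in means])
e = energy(pos)
init_e = e
best, best_e = pos.copy(), e
for step in range(2000):
    T = max(0.01, 0.995**step)             # annealing (cooling) schedule
    i = rng.integers(L)
    trial = pos.copy()
    trial[i] = rng.multivariate_normal(means[i], cov)   # resample one position
    de = energy(trial) - e
    if de <= 0 or rng.random() < np.exp(-de / T):       # Metropolis acceptance
        pos, e = trial, e + de
        if e < best_e:
            best, best_e = pos.copy(), e

print("clashes, initial vs best:", init_e, best_e)
```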

  14. Evaluating Change in Behavioral Preferences: Multidimensional Scaling Single-Ideal Point Model

    ERIC Educational Resources Information Center

    Ding, Cody

    2016-01-01

    The purpose of this article is to propose a multidimensional scaling single-ideal point model as a method to evaluate changes in individuals' preferences under the explicit methodological framework of behavioral preference assessment. One example is used to illustrate the approach and give a clear idea of what it can accomplish.

  15. Model for a Ferromagnetic Quantum Critical Point in a 1D Kondo Lattice

    NASA Astrophysics Data System (ADS)

    Komijani, Yashar; Coleman, Piers

    2018-04-01

    Motivated by recent experiments, we study a quasi-one-dimensional model of a Kondo lattice with ferromagnetic coupling between the spins. Using bosonization and dynamical large-N techniques, we establish the presence of a Fermi liquid and a magnetic phase separated by a local quantum critical point, governed by the Kondo breakdown picture. Thermodynamic properties are studied and a gapless charged mode at the quantum critical point is highlighted.

  16. Higher-order triangular spectral element method with optimized cubature points for seismic wavefield modeling

    NASA Astrophysics Data System (ADS)

    Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José

    2017-05-01

    The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant-Friedrichs-Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational

  17. Hydrodynamic Modeling for Channel and Shoreline Stabilization at Rhodes Point, Smith Island, MD

    DTIC Science & Technology

    2016-11-01

    Both Alternatives included the same revetment structure for protecting the south shoreline. The Coastal Modeling System (CMS) was applied to the hydrodynamic modeling for channel and shoreline stabilization at Rhodes Point (ERDC/CHL TR-16-17, Coastal Inlets Research Program, November 2016).

  18. a Modeling Method of Fluttering Leaves Based on Point Cloud

    NASA Astrophysics Data System (ADS)

    Tang, J.; Wang, Y.; Zhao, Y.; Hao, W.; Ning, X.; Lv, K.; Shi, Z.; Zhao, M.

    2017-09-01

    Leaves falling gently or fluttering are a common phenomenon in nature scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and falling-leaf models have wide applications in the fields of animation and virtual reality. We propose a novel modeling method for fluttering leaves based on point clouds. According to the shape and weight of the leaves and the wind speed, three basic trajectories of falling leaves are defined: rotation falling, roll falling, and screw roll falling. At the same time, a parallel algorithm based on OpenMP is implemented to satisfy real-time requirements in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.
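    A trajectory of this kind can be parameterized in closed form, e.g. the rotation-falling case as a steady descent with lateral sway and spin (the functional form and all parameter values below are illustrative guesses, not the paper's model; the paper's implementation is parallelized with OpenMP rather than vectorized):

```python
import numpy as np

def rotation_falling(t, v_fall=1.0, amp=0.5, omega=4.0, spin=6.0):
    """Toy rotation-falling trajectory: steady descent with lateral sway."""
    x = amp * np.sin(omega * t)     # lateral oscillation from aerodynamic drag
    y = -v_fall * t                 # mean descent of the leaf
    angle = spin * t                # rotation of the leaf about its own axis
    return x, y, angle

t = np.linspace(0.0, 3.0, 300)
x, y, angle = rotation_falling(t)
print(x[-1], y[-1], angle[-1])
```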

  19. Modeling an enhanced ridesharing system with meet points and time windows

    PubMed Central

    Li, Xin; Hu, Sangen; Deng, Kai

    2018-01-01

    With the rise of e-hailing services in urban areas, ride sharing is becoming a common mode of transportation. This paper presents a mathematical model to design an enhanced ridesharing system with meet points and users' preferred time windows. The introduction of meet points allows ridesharing operators to trade off the benefit of saving en-route delays against the cost of additional walking for some passengers to be collectively picked up or dropped off. This extension to the traditional door-to-door ridesharing problem brings more operational flexibility in urban areas (where potential requests may be densely distributed in a neighborhood), and thus can achieve better system performance in terms of reducing total travel time and increasing the number of served passengers. We design and implement a Tabu-based meta-heuristic algorithm to solve the proposed mixed integer linear program (MILP). To evaluate the validity and effectiveness of the proposed model and solution algorithm, several scenarios are designed and also solved to optimality by CPLEX. Results demonstrate that (i) a detailed route plan with passenger assignment to meet points can be obtained with en-route delay savings; (ii) compared to CPLEX, the meta-heuristic algorithm offers higher computational efficiency and produces good-quality solutions within 8%-15% of the global optima; and (iii) introducing meet points to a ridesharing system saves the total travel time by 2.7%-3.8% for small-scale ridesharing systems. More benefits are expected for ridesharing systems with large fleets. This study provides a new tool to efficiently operate ridesharing systems, particularly when ridesharing vehicles are in short supply during peak hours. Traffic congestion mitigation can also be expected. PMID:29715302
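    The core trade-off (en-route delay saved vs. extra walking) can be checked with a one-dimensional toy: two passengers on the same street are either picked up door to door or walk to a shared meet point (all positions, speeds, and the stop cost are invented for illustration):

```python
# Two passengers at x = 0.0 and x = 0.6 km on a street; the vehicle starts at
# x = -1.0 km and drives to the destination at x = 5.0 km.
V_SPEED = 30.0 / 60.0    # vehicle speed, km per minute (30 km/h)
W_SPEED = 5.0 / 60.0     # walking speed, km per minute (5 km/h)
STOP_COST = 1.0          # minutes lost per vehicle stop (boarding, braking)

p1, p2, start, dest = 0.0, 0.6, -1.0, 5.0

# Door-to-door service: the vehicle stops twice.
door = (dest - start) / V_SPEED + 2 * STOP_COST

# Meet-point service: one stop at the midpoint, passengers walk to it.
meet = (p1 + p2) / 2.0
vehicle = (dest - start) / V_SPEED + 1 * STOP_COST
walk = max(abs(meet - p1), abs(meet - p2)) / W_SPEED

print("door-to-door vehicle time (min):", door)
print("meet-point vehicle time (min):", vehicle, "| longest walk (min):", walk)
```

The MILP in the paper makes exactly this kind of trade explicitly, subject to each user's time window.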

  20. Triple point temperature of neon isotopes: Dependence on nitrogen impurity and sealed-cell model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pavese, F.; Steur, P. P. M.; Giraudi, D.

    2013-09-11

    This paper illustrates a study conducted at INRIM to further check how some quantities influence the value of the triple point temperature of the high-purity neon isotopes 20Ne and 22Ne. The influence of nitrogen as a chemical impurity in neon is critical with regard to the present best total uncertainty achieved in the measurement of these triple points, but only one determination is available in the literature. Checks are reported, performed on two different samples of 22Ne known to contain a N2 amount of 157×10⁻⁶, using two different models of sealed cells. The model of the cell can, in principle, have some effect on the shape of the melting plateau or on the triple point temperature observed for the sample sealed in it. This can be due to cell thermal parameters, or because the INRIM cell element mod. c contains many closely packed copper wires, which can, in principle, constrain the interface and induce a premelting-like effect. The reported results on a cell mod. Bter show no evident effect from the cell model and provide a value for the effect of N2 on the Ne liquidus point of 8.6(1.9) μK per ppm of N2, only slightly different from the literature datum.

  1. A Multidimensional Ideal Point Item Response Theory Model for Binary Data.

    PubMed

    Maydeu-Olivares, Albert; Hernández, Adolfo; McDonald, Roderick P

    2006-12-01

    We introduce a multidimensional item response theory (IRT) model for binary data based on a proximity response mechanism. Under the model, a respondent at the mode of the item response function (IRF) endorses the item with probability one. The mode of the IRF is the ideal point, or in the multidimensional case, an ideal hyperplane. The model yields closed form expressions for the cell probabilities. We estimate and test the goodness of fit of the model using only information contained in the univariate and bivariate moments of the data. Also, we pit the new model against the multidimensional normal ogive model estimated using NOHARM in four applications involving (a) attitudes toward censorship, (b) satisfaction with life, (c) attitudes of morality and equality, and (d) political efficacy. The normal PDF model is not invariant to simple operations such as reverse scoring. Thus, when there is no natural category to be modeled, as in many personality applications, it should be fit separately with and without reverse scoring for comparisons.
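    The proximity response mechanism can be written down in one dimension: the IRF is a scaled normal density that equals one exactly at the ideal point b (a sketch with arbitrary parameter values, not the paper's estimates):

```python
import math

def irf(theta, b=0.5, a=1.2):
    """Proximity-based IRF: P(endorse) = exp(-(a * (theta - b))**2 / 2)."""
    return math.exp(-0.5 * (a * (theta - b)) ** 2)

# At the ideal point theta = b the item is endorsed with probability one, and
# the probability falls off symmetrically on both sides of the mode, unlike a
# monotone (dominance) IRF such as the normal ogive.
print(irf(0.5), irf(0.0), irf(1.0))
```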

  2. Universality away from critical points in a thermostatistical model

    NASA Astrophysics Data System (ADS)

    Lapilli, C. M.; Wexler, C.; Pfeifer, P.

    Nature uses phase transitions as powerful regulators of processes ranging from climate to the alteration of the phase behavior of cell membranes that protects cells from cold, building on the fact that the thermodynamic properties of a solid, liquid, or gas are sensitive fingerprints of intermolecular interactions. The only known exceptions to this sensitivity are critical points. At a critical point, two phases become indistinguishable and thermodynamic properties exhibit universal behavior: systems with widely different intermolecular interactions behave identically. Here we report a major counterexample. We show that different members of a family of two-dimensional systems, the discrete p-state clock model, with different Hamiltonians describing different microscopic interactions between molecules or spins, may exhibit identical thermodynamic behavior over a wide range of temperatures. The results generate a comprehensive map of the phase diagram of the model and reveal, by virtue of the discrete rotors behaving like continuous rotors, an emergent symmetry not present in the Hamiltonian. This symmetry, or many-to-one map of intermolecular interactions onto thermodynamic states, demonstrates previously unknown limits for the macroscopic distinguishability of different microscopic interactions.

  3. Comparison and validation of point spread models for imaging in natural waters.

    PubMed

    Hou, Weilin; Gray, Deric J; Weidemann, Alan D; Arnone, Robert A

    2008-06-23

    It is known that scattering by particulates within natural waters is the main cause of blur in underwater images. Underwater images can be better restored or enhanced with knowledge of the point spread function (PSF) of the water. This will extend the performance range as well as the information retrieval from underwater electro-optical systems, which is critical in many civilian and military applications, including target and especially mine detection, search and rescue, and diver visibility. A better understanding of the physical process involved also helps to predict system performance and simulate it accurately on demand. This effort first reviews several PSF models, including the introduction of a semi-analytical PSF given the optical properties of the medium: the scattering albedo, the mean scattering angle, and the optical range. The models under comparison include the empirical model of Duntley, a modified PSF model by Dolin et al., and the numerical integration of the analytical forms of Wells as a benchmark of theoretical results. For experimental results, in addition to those of Duntley, we validate the above models against measured point spread functions by applying field-measured scattering properties in Monte Carlo simulations. Results from these comparisons suggest that the three parameters listed above are both sufficient and necessary to model PSFs. The simplified approach introduced also provides adequate accuracy and flexibility for imaging applications, as shown by examples of restored underwater images.
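    Why knowing the PSF helps restoration can be shown in one dimension: blur a signal with a known PSF in the Fourier domain, then invert it with a regularized (Wiener-style) filter (a generic sketch with a Gaussian stand-in kernel, not the semi-analytical PSF of the paper):

```python
import numpy as np

n = 256
idx = np.arange(n)
signal = np.exp(-((idx - 110.0)**2) / (2 * 2.0**2))  # a smooth target feature
t = idx - n // 2
psf = np.exp(-t**2 / (2.0 * 4.0**2))
psf /= psf.sum()                              # Gaussian stand-in for the water PSF
psf = np.roll(psf, n // 2)                    # center the kernel at index 0

H = np.fft.rfft(psf)
blurred = np.fft.irfft(np.fft.rfft(signal) * H, n)   # image seen through water

eps = 1e-3                                    # regularization (guards noise)
W = np.conj(H) / (np.abs(H)**2 + eps)         # Wiener-style inverse filter
restored = np.fft.irfft(np.fft.rfft(blurred) * W, n)

mse = lambda a: float(np.mean((a - signal)**2))
print(mse(blurred), mse(restored))            # restoration reduces the error
```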

  4. Transition point prediction in a multicomponent lattice Boltzmann model: Forcing scheme dependencies

    NASA Astrophysics Data System (ADS)

    Küllmer, Knut; Krämer, Andreas; Joppich, Wolfgang; Reith, Dirk; Foysi, Holger

    2018-02-01

    Pseudopotential-based lattice Boltzmann models are widely used for numerical simulations of multiphase flows. In the special case of multicomponent systems, the overall dynamics are characterized by the conservation equations for mass and momentum as well as an additional advection diffusion equation for each component. In the present study, we investigate how the latter is affected by the forcing scheme, i.e., by the way the underlying interparticle forces are incorporated into the lattice Boltzmann equation. By comparing two model formulations for pure multicomponent systems, namely the standard model [X. Shan and G. D. Doolen, J. Stat. Phys. 81, 379 (1995), 10.1007/BF02179985] and the explicit forcing model [M. L. Porter et al., Phys. Rev. E 86, 036701 (2012), 10.1103/PhysRevE.86.036701], we reveal that the diffusion characteristics drastically change. We derive a generalized, potential function-dependent expression for the transition point from the miscible to the immiscible regime and demonstrate that it is shifted between the models. The theoretical predictions for both the transition point and the mutual diffusion coefficient are validated in simulations of static droplets and decaying sinusoidal concentration waves, respectively. To show the universality of our analysis, two common and one new potential function are investigated. As the shift in the diffusion characteristics directly affects the interfacial properties, we additionally show that phenomena related to the interfacial tension such as the modeling of contact angles are influenced as well.

  5. Assessing accuracy of point fire intervals across landscapes with simulation modelling

    Treesearch

    Russell A. Parsons; Emily K. Heyerdahl; Robert E. Keane; Brigitte Dorner; Joseph Fall

    2007-01-01

    We assessed accuracy in point fire intervals using a simulation model that sampled four spatially explicit simulated fire histories. These histories varied in fire frequency and size and were simulated on a flat landscape with two forest types (dry versus mesic). We used three sampling designs (random, systematic grids, and stratified). We assessed the sensitivity of...

  6. Lieb-Thirring inequality for a model of particles with point interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frank, Rupert L.; Seiringer, Robert

    2012-09-15

    We consider a model of quantum-mechanical particles interacting via point interactions of infinite scattering length. In the case of fermions we prove a Lieb-Thirring inequality for the energy, i.e., we show that the energy is bounded from below by a constant times the integral of the particle density to the power 5/3.

  7. Sample size and classification error for Bayesian change-point models with unlabelled sub-groups and incomplete follow-up.

    PubMed

    White, Simon R; Muniz-Terrera, Graciela; Matthews, Fiona E

    2018-05-01

    Many medical (and ecological) processes involve a change of shape, whereby one trajectory changes into another at a specific time point. There has been little investigation into the study design needed to investigate these models. We consider the class of fixed effect change-point models with an underlying shape comprising two joined linear segments, also known as broken-stick models. We extend this model to include two sub-groups with different trajectories at the change-point, a change class and a no-change class, and also include a missingness model to account for individuals with incomplete follow-up. Through a simulation study, we consider how sample size relates to the estimates of the underlying shape, the existence of a change-point, and the classification error of sub-group labels. We use a Bayesian framework to account for the missing labels, and the analysis of each simulation is performed using standard Markov chain Monte Carlo techniques. Our simulation study is inspired by cognitive decline as measured by the Mini-Mental State Examination, where our extended model is appropriate due to the commonly observed mixture of individuals within studies who do or do not exhibit accelerated decline. We find that even for studies of modest size (n = 500, with 50 individuals observed past the change-point) in the fixed effect setting, a change-point can be detected and reliably estimated across a range of observation errors.
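    The broken-stick shape and change-point estimation can be sketched by profiling the residual sum of squares over candidate change-points (synthetic data and a deliberately simple grid search, rather than the paper's Bayesian MCMC):

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 200)
true_cp = 4.0
# Flat segment, then decline after the change-point, plus observation error.
y = 1.0 - 1.0 * np.maximum(x - true_cp, 0.0) + rng.normal(0.0, 0.1, x.size)

def fit_sse(c):
    # Design matrix: intercept, pre-change slope, post-change slope increment.
    A = np.column_stack([np.ones_like(x), x, np.maximum(x - c, 0.0)])
    _, res, _, _ = np.linalg.lstsq(A, y, rcond=None)
    return float(res[0])

candidates = np.linspace(1.0, 9.0, 161)
best_cp = candidates[np.argmin([fit_sse(c) for c in candidates])]
print(best_cp)
```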

  8. Material point method of modelling and simulation of reacting flow of oxygen

    NASA Astrophysics Data System (ADS)

    Mason, Matthew; Chen, Kuan; Hu, Patrick G.

    2014-07-01

    Aerospace vehicles are continually being designed to sustain flight at higher speeds and higher altitudes than previously attainable. At hypersonic speeds, gases within a flow begin to chemically react and the fluid's physical properties are modified. It is desirable to model these effects within the Material Point Method (MPM). The MPM is a combined Eulerian-Lagrangian particle-based solver that calculates the physical properties of individual particles and uses a background grid for information storage and exchange. This study introduces chemically reacting flow modelling within the MPM numerical algorithm and illustrates a simple application using the AeroElastic Material Point Method (AEMPM) code. The governing equations of reacting flows are introduced and their direct application within an MPM code is discussed. A flow of 100% oxygen is illustrated and the results are compared with independently developed computational non-equilibrium algorithms. Observed trends agree well with results from an independently developed source.

  9. Once more on the equilibrium-point hypothesis (lambda model) for motor control.

    PubMed

    Feldman, A G

    1986-03-01

    The equilibrium control hypothesis (lambda model) is considered with special reference to the following concepts: (a) the length-force invariant characteristic (IC) of the muscle together with central and reflex systems subserving its activity; (b) the tonic stretch reflex threshold (lambda) as an independent measure of central commands descending to alpha and gamma motoneurons; (c) the equilibrium point, defined in terms of lambda, IC and static load characteristics, which is associated with the notion that posture and movement are controlled by a single mechanism; and (d) the muscle activation area (a reformulation of the "size principle")--the area of kinematic and command variables in which a rank-ordered recruitment of motor units takes place. The model is used for the interpretation of various motor phenomena, particularly electromyographic patterns. The stretch reflex in the lambda model has no mechanism to follow up a certain muscle length prescribed by central commands. Rather, its task is to bring the system to an equilibrium, load-dependent position. Another currently popular version defines the equilibrium point concept in terms of alpha motoneuron activity alone (the alpha model). Although the model imitates (as does the lambda model) spring-like properties of motor performance, it nevertheless is inconsistent with a substantial data base on intact motor control. An analysis of alpha models, including their treatment of motor performance in deafferented animals, reveals that they suffer from grave shortcomings. It is concluded that parameterization of the stretch reflex is a basis for intact motor control. Muscle deafferentation impairs this graceful mechanism though it does not remove the possibility of movement.
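    The equilibrium point of concept (c) can be written down directly: with an invariant characteristic F(x) = k·max(x − λ, 0) and a constant load F_L, the equilibrium position lies where the two characteristics intersect (a static, one-muscle caricature; k, the loads, and the units are arbitrary):

```python
def equilibrium(lmbda, load, k=10.0):
    """Intersection of the IC F = k * max(x - lmbda, 0) with a constant load."""
    return lmbda + load / k

# Shifting lambda (the central command) moves the equilibrium: movement.
x1 = equilibrium(lmbda=0.0, load=5.0)
x2 = equilibrium(lmbda=0.3, load=5.0)

# Same command, different load: the final position is load-dependent,
# i.e., the reflex does not servo a prescribed muscle length.
x3 = equilibrium(lmbda=0.0, load=2.0)
print(x1, x2, x3)
```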

  10. Inflection-point inflation in a hypercharge-oriented U(1)_X model

    DOE PAGES

    Okada, Nobuchika; Okada, Satomi; Raut, Digesh

    2017-03-31

    Inflection-point inflation is an interesting possibility to realize a successful slow-roll inflation when inflation is driven by a single scalar field with its value during inflation below the Planck mass (Φ_I ≲ M_Pl). In order for a renormalization group (RG) improved effective λΦ^4 potential to develop an inflection point, the running quartic coupling λ(Φ) must exhibit a minimum with an almost vanishing value in its RG evolution, namely λ(Φ_I) ≃ 0 and β_λ(Φ_I) ≃ 0, where β_λ is the beta function of the quartic coupling. In this paper, we consider inflection-point inflation in the context of the minimal gauged U(1)_X extended Standard Model (SM), which is a generalization of the minimal U(1)_{B-L} model and is constructed as a linear combination of the SM U(1)_Y and U(1)_{B-L} gauge symmetries. We identify the U(1)_X Higgs field with the inflaton field. For a successful inflection-point inflation to be consistent with the current cosmological observations, the mass ratios among the U(1)_X gauge boson, the right-handed neutrinos and the U(1)_X Higgs boson are fixed. Focusing on the case that the extra U(1)_X gauge symmetry is mostly aligned along the SM U(1)_Y direction, we investigate the consistency between the inflationary predictions and the latest LHC Run-2 results on the search for a narrow resonance with the di-lepton final state.
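
    The two conditions λ(Φ_I) ≃ 0 and β_λ(Φ_I) ≃ 0 can be checked numerically once the running of the coupling is known. The quadratic-in-log running below is a made-up toy, not the actual U(1)_X beta function, chosen only so the minimum sits at a visible field value.

```python
import math

# Toy illustration of the inflection-point conditions: a quartic coupling whose
# RG running has a near-zero minimum at Phi_I. The running law is an assumption.

def lam(phi, phi_min=1.0e16, c=2.0e-5):
    """Running quartic coupling: quadratic in log(phi), vanishing at phi_min."""
    t = math.log(phi / phi_min)
    return c * t * t

def beta(phi, h=1.0e-3):
    """beta_lambda = d lambda / d ln(phi), by central difference in ln(phi)."""
    return (lam(phi * math.exp(h)) - lam(phi * math.exp(-h))) / (2.0 * h)

phi_i = 1.0e16
# At phi_i both the coupling and its beta function vanish, so the RG-improved
# lambda(phi) * phi**4 / 4 potential develops the desired inflection point there.
```
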

  12. Modeling and Assessment of GPS/BDS Combined Precise Point Positioning.

    PubMed

    Chen, Junping; Wang, Jungang; Zhang, Yize; Yang, Sainan; Chen, Qian; Gong, Xiuqiang

    2016-07-22

    The Precise Point Positioning (PPP) technique enables stand-alone receivers to obtain cm-level positioning accuracy. Observations from multiple GNSS systems can provide users with improved positioning accuracy, reliability and availability. In this paper, we present and evaluate GPS/BDS combined PPP models, including the traditional model and a simplified model, in which the inter-system bias (ISB) is treated in different ways. To evaluate the performance of combined GPS/BDS PPP, kinematic and static PPP positions are compared to the IGS daily estimates, using one month of GPS/BDS data from 11 IGS Multi-GNSS Experiment (MGEX) stations. The results indicate an apparent improvement of GPS/BDS combined PPP solutions in both static and kinematic cases, with much smaller standard deviations in the distribution of coordinate RMS statistics. Comparisons between the traditional and simplified combined PPP models show no difference in the coordinate estimates; the inter-system biases between the GPS and BDS systems are instead assimilated into the receiver clock, ambiguities and pseudo-range residuals.
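
    How the ISB enters the combined observation model can be sketched in a stripped-down form: GPS pseudoranges see only the receiver clock, while BDS pseudoranges see the clock plus an ISB. The synthetic observations and the mean-based estimator below are illustrative assumptions, far simpler than an actual PPP filter.

```python
# Sketch of the inter-system bias (ISB) in a combined GPS/BDS model.
# Synthetic one-epoch data; the estimator is a deliberate simplification.

def estimate_clock_and_isb(gps_omc, bds_omc):
    """gps_omc: observed-minus-computed ranges for GPS (receiver clock only);
    bds_omc: the same for BDS (clock + ISB). Returns (clock, isb) in meters."""
    clock = sum(gps_omc) / len(gps_omc)
    isb = sum(bds_omc) / len(bds_omc) - clock
    return clock, isb

# Synthetic epoch: receiver clock error of 36 m (about 1.2e-7 s), ISB of 5 m, no noise.
gps = [36.0, 36.0, 36.0]
bds = [41.0, 41.0]
clk, isb = estimate_clock_and_isb(gps, bds)
```

    In the simplified model described above, no explicit ISB parameter is set up; the 5 m offset would instead be absorbed by the per-system clock, ambiguity and residual terms, which is why the coordinate estimates come out identical.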

  13. Development of an Open Rotor Cycle Model in NPSS Using a Multi-Design Point Approach

    NASA Technical Reports Server (NTRS)

    Hendricks, Eric S.

    2011-01-01

    NASA's Environmentally Responsible Aviation Project and Subsonic Fixed Wing Project are focused on developing concepts and technologies which may enable dramatic reductions to the environmental impact of future generation subsonic aircraft (Refs. 1 and 2). The open rotor concept (also referred to as the Unducted Fan or advanced turboprop) may allow the achievement of this objective by reducing engine emissions and fuel consumption. To evaluate its potential impact, an open rotor cycle modeling capability is needed. This paper presents the initial development of an open rotor cycle model in the Numerical Propulsion System Simulation (NPSS) computer program which can then be used to evaluate the potential benefit of this engine. The development of this open rotor model necessitated addressing two modeling needs within NPSS. First, a method for evaluating the performance of counter-rotating propellers was needed. Therefore, a new counter-rotating propeller NPSS component was created. This component uses propeller performance maps developed from historic counter-rotating propeller experiments to determine the thrust delivered and power required. Second, several methods for modeling a counter-rotating power turbine within NPSS were explored. These techniques used several combinations of turbine components within NPSS to provide the necessary power to the propellers. Ultimately, a single turbine component with a conventional turbine map was selected. Using these modeling enhancements, an open rotor cycle model was developed in NPSS using a multi-design point approach. The multi-design point (MDP) approach improves the engine cycle analysis process by making it easier to properly size the engine to meet a variety of thrust targets throughout the flight envelope. A number of design points are considered including an aerodynamic design point, sea-level static, takeoff and top of climb. The development of this MDP model was also enabled by the selection of a simple power
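
    The core of the multi-design point idea can be sketched very simply: rather than sizing the engine at one condition and checking the others afterwards, choose the engine size that satisfies the thrust target at every design point simultaneously. The per-point thrust lapse values and targets below are illustrative assumptions, not NPSS data.

```python
# Sketch of multi-design point (MDP) engine sizing. All numbers are toy values.

def size_engine(points):
    """points: list of (thrust_target, thrust_per_unit_size) at each design point
    (e.g. aerodynamic design point, sea-level static, takeoff, top of climb).
    Returns the smallest engine size that meets every thrust target."""
    return max(target / per_unit for target, per_unit in points)

design_points = [
    (25.0, 5.0),   # aerodynamic design point
    (90.0, 20.0),  # sea-level static
    (70.0, 16.0),  # takeoff
    (22.0, 4.0),   # top of climb (here the binding, sizing condition)
]
size = size_engine(design_points)
```

    The binding design point is whichever requires the largest engine; a single-point sizing at any other condition would leave that target unmet, which is the shortfall the MDP approach removes.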

  14. Some Applications of the Model of the Partition Points on a One-Dimensional Lattice

    NASA Astrophysics Data System (ADS)

    Mejdani, R.; Huseini, H.

    1996-02-01

    We have shown that by using a model of a gas of partition points on a one-dimensional lattice, we can find some results about the saturation curves for enzyme kinetics or the average domain size, which we had obtained before by using a correlated walks' theory or a probabilistic (combinatoric) approach. We have studied, using the same model and the same technique, the denaturation process, i.e., the breaking of the hydrogen bonds connecting the two strands, under treatment by heat. We have also discussed, without entering into details, the problem related to the spread of an infectious disease and the stochastic model of partition points. We think that this model, being simple and mathematically transparent, can be advantageous for other theoretical investigations in chemistry or modern biology. PACS Nos.: 05.50.+q; 05.70.Ce; 64.10.+h; 87.10.+e; 87.15.Rn

  15. Study of Fission Barrier Heights of Uranium Isotopes by the Macroscopic-Microscopic Method

    NASA Astrophysics Data System (ADS)

    Zhong, Chun-Lai; Fan, Tie-Shuan

    2014-09-01

    Potential energy surfaces of uranium nuclei in the range of mass numbers 229 through 244 are investigated in the framework of the macroscopic-microscopic model, and the heights of static fission barriers are obtained in terms of a double-humped structure. The macroscopic part of the nuclear energy is calculated according to the Lublin-Strasbourg drop (LSD) model. Shell and pairing corrections as the microscopic part are calculated with a folded-Yukawa single-particle potential. The calculation is carried out in a five-dimensional parameter space of generalized Lawrence shapes. In order to extract saddle points on the potential energy surface, a new algorithm is developed which can effectively find an optimal fission path leading from the ground state to the scission point. The comparison of our results with available experimental data and other theoretical results confirms the reliability of our calculations.
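
    Saddle-point extraction of this kind can be sketched as a minimax path search on a discretized potential energy surface: among all paths from ground state to scission, find the one whose highest energy is lowest; that highest energy is the barrier. The 2D toy grid and Dijkstra-style search below are illustrative assumptions; the paper's calculation runs in a five-dimensional shape space.

```python
import heapq

# Minimax-path barrier extraction on a toy discretized potential energy surface.

def barrier_height(grid, start, goal):
    """Lowest achievable maximum energy along any 4-connected path start -> goal."""
    rows, cols = len(grid), len(grid[0])
    best = [[float("inf")] * cols for _ in range(rows)]
    heap = [(grid[start[0]][start[1]], start)]
    while heap:
        emax, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return emax
        if emax > best[r][c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                cand = max(emax, grid[nr][nc])
                if cand < best[nr][nc]:
                    best[nr][nc] = cand
                    heapq.heappush(heap, (cand, (nr, nc)))
    return float("inf")

# Toy surface: ground state at (0,0), scission at (2,2), a high ridge and a lower pass.
surface = [
    [0, 5, 9],
    [1, 2, 5],
    [4, 3, 0],
]
saddle_energy = barrier_height(surface, (0, 0), (2, 2))
```

    On this toy surface the optimal path crosses a pass of height 3 rather than the ridge of height 5, mirroring how a path-finding algorithm picks out the true saddle rather than an arbitrary crossing.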

  16. Modeling a distribution of point defects as misfitting inclusions in stressed solids

    NASA Astrophysics Data System (ADS)

    Cai, W.; Sills, R. B.; Barnett, D. M.; Nix, W. D.

    2014-05-01

    The chemical equilibrium distribution of point defects modeled as non-overlapping, spherical inclusions with purely positive dilatational eigenstrain in an isotropically elastic solid is derived. The compressive self-stress inside existing inclusions must be excluded from the stress dependence of the equilibrium concentration of the point defects, because it does no work when a new inclusion is introduced. On the other hand, a tensile image stress field must be included to satisfy the boundary conditions in a finite solid. Through the image stress, existing inclusions promote the introduction of additional inclusions. This is contrary to the prevailing approach in the literature in which the equilibrium point defect concentration depends on a homogenized stress field that includes the compressive self-stress. The shear stress field generated by the equilibrium distribution of such inclusions is proved to be proportional to the pre-existing stress field in the solid, provided that the magnitude of the latter is small, so that a solid containing an equilibrium concentration of point defects can be described by a set of effective elastic constants in the small-stress limit.
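
    The stress dependence argued for above amounts to a Boltzmann factor in the work done by the pre-existing (external plus image) stress against the defect's relaxation volume, with the inclusion's own compressive self-stress excluded. A minimal sketch, with all numerical values as illustrative assumptions:

```python
import math

# Equilibrium point-defect concentration under stress (sketch). Per the paper's
# argument, sigma EXCLUDES the defect self-stress; values below are toy numbers.

KB = 1.380649e-23  # Boltzmann constant, J/K

def equilibrium_concentration(c0, sigma, dv, temp):
    """c0: stress-free site fraction; sigma: local hydrostatic stress (Pa,
    tension positive) excluding self-stress; dv: relaxation volume (m^3)."""
    return c0 * math.exp(sigma * dv / (KB * temp))

c_free = equilibrium_concentration(1e-6, 0.0, 2.0e-29, 600.0)
c_tens = equilibrium_concentration(1e-6, 1.0e8, 2.0e-29, 600.0)  # 100 MPa tension
```

    Tension raises the equilibrium concentration of dilatational defects; including the compressive self-stress in sigma, as the prevailing approach criticized above does, would wrongly suppress it.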

  17. Higher-order triangular spectral element method with optimized cubature points for seismic wavefield modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Youshan, E-mail: ysliu@mail.iggcas.ac.cn; Teng, Jiwen, E-mail: jwteng@mail.iggcas.ac.cn; Xu, Tao, E-mail: xutao@mail.iggcas.ac.cn

    2017-05-01

    The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates their surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant–Friedrichs–Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element method (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of
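
    Why collocated cubature points yield a diagonal (lumped) mass matrix: with the integration points coinciding with the interpolation nodes, the Lagrange basis satisfies l_i(x_q) = δ_iq, so M_ij = Σ_q w_q l_i(x_q) l_j(x_q) collapses to a diagonal of quadrature weights. A 1D sketch with degree-2 Gauss-Lobatto nodes and weights as an illustrative stand-in for the triangular case:

```python
# Diagonal mass matrix from collocated quadrature/interpolation points (1D sketch).

def lagrange(i, nodes, x):
    """Lagrange basis polynomial l_i evaluated at x."""
    v = 1.0
    for j in range(len(nodes)):
        if j != i:
            v *= (x - nodes[j]) / (nodes[i] - nodes[j])
    return v

def mass_matrix(nodes, weights):
    """M_ij = sum_q w_q * l_i(x_q) * l_j(x_q), quadrature points == nodes."""
    n = len(nodes)
    return [[sum(w * lagrange(i, nodes, xq) * lagrange(j, nodes, xq)
                 for xq, w in zip(nodes, weights))
             for j in range(n)] for i in range(n)]

nodes = [-1.0, 0.0, 1.0]
weights = [1.0 / 3.0, 4.0 / 3.0, 1.0 / 3.0]  # Gauss-Lobatto weights, degree 2
M = mass_matrix(nodes, weights)
```

    The off-diagonal entries vanish identically and each diagonal entry equals the corresponding quadrature weight, so inverting the mass matrix costs only a per-node division; the positivity of the weights, stressed above for the new degree 7 to 9 points, keeps that diagonal invertible and stable.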

  18. In silico modeling and molecular dynamics simulation of claudin-1 point mutations in HCV infection.

    PubMed

    Vipperla, Bhavaniprasad; Dass, J Febin Prabhu; Jayanthi, S

    2014-01-01

    Claudin-1 (CLDN1) in association with envelope glycoprotein (CD81) mediates the fusion of HCV into the cytosol. Recent studies have indicated that point mutations in CLDN1 are important for the entry of hepatitis C virus (HCV). To validate these findings, we employed a computational platform to investigate the structural effect of two point mutations (I32M and E48K). Initially, three-dimensional coordinates for the CLDN1 receptor sequence were generated. Then, three mutant models were built from the native model structure using the point mutations, including a double-mutant (I32M/E48K) model. Finally, all four model structures, including the native and three mutant models, were subjected to molecular dynamics (MD) simulation for a period of 25 ns to appreciate their dynamic behavior. The MD trajectory files were analyzed using cluster and principal component methods. The analysis suggested that either single mutation has a negligible effect on the overall structure of CLDN1 compared to the double-mutant form. However, the double-mutant model of CLDN1 shows a significant negative impact through the impairment of H-bonds and a simultaneous increase in solvent-accessible surface area. Our simulation results are visibly consistent with the experimental report, suggesting that the CLDN1 receptor distortion is prominent due to the double mutation with large surface accessibility. This increase in accessible surface area due to the coexistence of the double mutations may be presumed to be one of the key factors that results in permissive HCV attachment and infection.

  19. Supervised Outlier Detection in Large-Scale Mvs Point Clouds for 3d City Modeling Applications

    NASA Astrophysics Data System (ADS)

    Stucker, C.; Richard, A.; Wegner, J. D.; Schindler, K.

    2018-05-01

    We propose to use a discriminative classifier for outlier detection in large-scale point clouds of cities generated via multi-view stereo (MVS) from densely acquired images. What makes outlier removal hard are the varying distributions of inliers and outliers across a scene. Heuristic outlier removal using a specific feature that encodes point distribution often delivers unsatisfying results. Although most outliers can be identified correctly (high recall), many inliers are erroneously removed too (low precision). This aggravates 3D object reconstruction due to missing data. We thus propose to discriminatively learn class-specific distributions directly from the data to achieve high precision. We apply a standard Random Forest classifier that infers a binary label (inlier or outlier) for each 3D point in the raw, unfiltered point cloud, and test two approaches for training. In the first, non-semantic approach, features are extracted without considering the semantic interpretation of the 3D points. The trained model approximates the average distribution of inliers and outliers across all semantic classes. Second, semantic interpretation is incorporated into the learning process, i.e. we train separate inlier-outlier classifiers per semantic class (building facades, roofs, ground, vegetation, fields, and water). The performance of learned filtering is evaluated on several large SfM point clouds of cities. We find that the results confirm our underlying assumption that discriminatively learning inlier-outlier distributions does improve precision over global heuristics, by up to ≈ 12 percentage points. Moreover, semantically informed filtering that models class-specific distributions further improves precision by up to ≈ 10 percentage points, being able to remove very isolated building, roof, and water points while preserving inliers on building facades and vegetation.
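
    The key observation, that class-specific decision rules beat one global heuristic, can be shown with a deliberately tiny stand-in: a single learned threshold on a 1D "local density" feature per class instead of a Random Forest. The feature, thresholds and synthetic samples are all illustrative assumptions.

```python
# Per-class inlier/outlier thresholds vs. one global heuristic (toy sketch).

def fit_threshold(samples):
    """samples: list of (density, is_inlier). Pick the threshold maximizing
    training accuracy for the rule `inlier if density >= t`."""
    best_t, best_acc = None, -1.0
    for t in sorted({d for d, _ in samples}):
        acc = sum((d >= t) == lab for d, lab in samples) / len(samples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Facade points are sparse but valid; vegetation outliers can be fairly dense,
# so no single density cut separates both classes well.
facade = [(0.2, True), (0.3, True), (0.1, False)]
veg = [(0.8, True), (0.9, True), (0.6, False)]
t_facade = fit_threshold(facade)
t_veg = fit_threshold(veg)
```

    Here each class gets its own perfect cut (0.2 and 0.8), while the best single global threshold would misclassify at least one sample; this is the precision gain that the per-class Random Forests scale up with richer features.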

  20. GPU-accelerated Modeling and Element-free Reverse-time Migration with Gauss Points Partition

    NASA Astrophysics Data System (ADS)

    Zhen, Z.; Jia, X.

    2014-12-01

    The element-free method (EFM) has been applied to seismic modeling and migration. Compared with the finite element method (FEM) and the finite difference method (FDM), it is much cheaper and more flexible, because only the information on the nodes and the boundary of the study area is required in computation. In the EFM, the number of Gauss points should be consistent with the number of model nodes; otherwise the accuracy of the intermediate coefficient matrices would be harmed. Thus, when we increase the nodes of the velocity model in order to obtain higher resolution, we find that the size of the computer's memory becomes a bottleneck. The original EFM can deal with at most 81×81 nodes in the case of 2 GB memory, as tested by Jia and Hu (2006). In order to solve the problem of storage and computation efficiency, we propose a concept of Gauss points partition (GPP) and utilize GPUs to improve the computation efficiency. Considering the characteristics of the Gauss points, the GPP method does not influence the propagation of the seismic wave in the velocity model. To overcome the time-consuming computation of the stiffness matrix (K) and the mass matrix (M), we also use GPUs in our computation program. We employ the compressed sparse row (CSR) format to compress the intermediate sparse matrices and simplify the operations by solving the linear equations with the CULA Sparse Conjugate Gradient (CG) solver instead of the linear sparse solver PARDISO. It is observed that our strategy can significantly reduce the computational time of K and M compared with the algorithm based on the CPU. The model tested is the Marmousi model. The length of the model is 7425 m and the depth is 2990 m. We discretize the model with 595×298 nodes, 300×300 Gauss cells and 3×3 Gauss points in each cell. In contrast to the computational time of the conventional EFM, the GPU-GPP approach can substantially improve the efficiency. The speedup ratio of the time consumption of computing K and M is 120 and the
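
    The storage strategy described above, a sparse matrix in compressed sparse row (CSR) form solved iteratively with conjugate gradients instead of a direct factorization, can be sketched in pure Python. The 2×2 symmetric positive-definite matrix is a toy example, not an actual EFM stiffness matrix.

```python
# CSR matrix-vector product plus a plain conjugate-gradient solve of K x = b.

def csr_matvec(data, indices, indptr, x):
    """y = K @ x with K stored as CSR arrays (data, indices, indptr)."""
    y = [0.0] * (len(indptr) - 1)
    for row in range(len(y)):
        for k in range(indptr[row], indptr[row + 1]):
            y[row] += data[k] * x[indices[k]]
    return y

def cg(data, indices, indptr, b, iters=50, tol=1e-12):
    """Conjugate gradients for symmetric positive-definite K; never forms K densely."""
    x = [0.0] * len(b)
    r = b[:]
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        ap = csr_matvec(data, indices, indptr, p)
        alpha = rs / sum(pi * ai for pi, ai in zip(p, ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# K = [[4, 1], [1, 3]] in CSR form; solve K x = [1, 2].
data, indices, indptr = [4.0, 1.0, 1.0, 3.0], [0, 1, 0, 1], [0, 2, 4]
x = cg(data, indices, indptr, [1.0, 2.0])
```

    CG needs only matrix-vector products, so the CSR arrays are the entire storage cost; this is the memory saving that lets the GPP approach scale past the dense-matrix node limits quoted above.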

  1. Point Cloud and Digital Surface Model Generation from High Resolution Multiple View Stereo Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Gong, K.; Fritsch, D.

    2018-05-01

    Nowadays, multiple-view stereo satellite imagery has become a valuable data source for digital surface model generation and 3D reconstruction. In 2016, a well-organized, publicly available multiple-view stereo benchmark for commercial satellite imagery was released by the Johns Hopkins University Applied Physics Laboratory, USA. This benchmark motivates us to explore methods that can generate accurate digital surface models from a large number of high-resolution satellite images. In this paper, we propose a pipeline for processing the benchmark data into digital surface models. As a pre-processing step, we filter all the possible image pairs according to the incidence angle and capture date. With the selected image pairs, the relative bias-compensated model is applied for relative orientation. After epipolar image pair generation, dense image matching and triangulation, the 3D point clouds and DSMs are acquired. The DSMs are aligned to a quasi-ground plane by the relative bias-compensated model. We apply the median filter to generate the fused point cloud and DSM. By comparing with the reference LiDAR DSM, the accuracy, completeness and robustness are evaluated. The results show that the point cloud reconstructs the surface including small structures, and that the fused DSM generated by our pipeline is accurate and robust.
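
    The median fusion step can be sketched as a per-cell median over the stack of aligned DSMs, which suppresses matching blunders that survive in individual pairwise models. The tiny synthetic DSM stack below is an illustrative assumption.

```python
# Per-cell median fusion of a stack of aligned DSM height grids (sketch).

def median_fuse(dsms):
    """dsms: list of equally sized 2D height grids. Returns the cell-wise median."""
    rows, cols = len(dsms[0]), len(dsms[0][0])
    fused = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vals = sorted(d[r][c] for d in dsms)
            n = len(vals)
            fused[r][c] = vals[n // 2] if n % 2 else 0.5 * (vals[n // 2 - 1] + vals[n // 2])
    return fused

dsms = [
    [[10.0, 12.0], [11.0, 13.0]],
    [[10.2, 11.8], [11.1, 30.0]],   # 30.0: a matching blunder in one DSM
    [[9.9, 12.1], [10.9, 13.2]],
]
fused = median_fuse(dsms)
```

    Unlike a mean, the median leaves the blunder cell at a plausible height, which is why it is the usual choice for this kind of multi-pair DSM fusion.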

  2. Self-Exciting Point Process Models of Civilian Deaths in Iraq

    DTIC Science & Technology

    2010-01-01

    Following earlier work on self-exciting point processes (Mohler, Short, Brantingham, Schoenberg, and Tita, 2010, who analyze burglary and robbery data in Los Angeles; see also Short et al., 2009), we propose that violence in Iraq arises from a combination of exogenous and endogenous effects, with spatial heterogeneity in the background rate.
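
    The self-exciting (Hawkes) point process underlying this line of work makes the endogenous effect concrete: each event temporarily raises the rate of future events on top of an exogenous background rate. The exponential kernel and all parameter values below are illustrative assumptions.

```python
import math

# Conditional intensity of a Hawkes (self-exciting) point process (sketch).

def intensity(t, history, mu=0.5, alpha=0.8, beta=2.0):
    """Background rate mu (exogenous) plus a decaying boost from every past
    event (endogenous, "repeat/near-repeat" effect)."""
    return mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in history if ti < t)

events = [1.0, 1.2]
lam_before = intensity(0.9, events)  # background only: no prior events yet
lam_after = intensity(1.3, events)   # elevated by both recent events
```

    Fitting mu, alpha and beta to incident data is what separates the exogenous background from the self-excited aftershock-like component in studies of this kind.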

  3. Experimental and computational models of neurite extension at a choice point in response to controlled diffusive gradients

    NASA Astrophysics Data System (ADS)

    Catig, G. C.; Figueroa, S.; Moore, M. J.

    2015-08-01

    Objective. Axons are guided toward desired targets through a series of choice points that they navigate by sensing cues in the cellular environment. A better understanding of how microenvironmental factors influence neurite growth during development can inform strategies to address nerve injury. Therefore, there is a need for biomimetic models to systematically investigate the influence of guidance cues at such choice points. Approach. We ran an adapted in silico biased-turning axon growth model under the influence of nerve growth factor (NGF) and compared the results to corresponding in vitro experiments. We examined whether growth simulations were predictive of neurite population behavior at a choice point. We used a biphasic micropatterned hydrogel system consisting of an outer cell-restrictive mold that enclosed a bifurcated cell-permissive region, and placed a well near a bifurcating end to allow proteins to diffuse and form a gradient. Experimental diffusion profiles in these constructs were used to validate a computational diffusion model that utilized experimentally measured diffusion coefficients in hydrogels. The computational diffusion model was then used to establish defined soluble gradients within the permissive region of the hydrogels and maintain the profiles in physiological ranges for an extended period of time. Computational diffusion profiles informed the neurite growth model, which was compared with neurite growth experiments in the bifurcating hydrogel constructs. Main results. Results indicated that when applied to the constrained choice-point geometry, the biased-turning model predicted experimental behavior closely. Results for both simulated and in vitro neurite growth studies showed a significant chemoattractive response toward the bifurcated end containing an NGF gradient compared to the control, though some neurites were found in the end with no NGF gradient. Significance. The integrated model of neurite growth we describe will allow
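
    A biased-turning growth model of this general kind can be sketched as a walker whose heading is nudged toward the chemoattractant source at each step, optionally with random wiggle. The turning gain, step count and geometry below are illustrative assumptions, not the published model's parameters; noise is set to zero here so the illustration is reproducible.

```python
import math
import random

# Biased-turning growth-cone sketch: heading turns toward the NGF source.

def grow(source, bias, steps=200, noise=0.0, seed=1):
    """Unit steps; each step the heading turns by `bias` times the angular error
    toward the source, plus Gaussian wiggle of s.d. `noise` (radians)."""
    rng = random.Random(seed)
    x, y, heading = 0.0, 0.0, 0.0
    for _ in range(steps):
        desired = math.atan2(source[1] - y, source[0] - x)
        # wrap the heading error into (-pi, pi] before turning
        err = (desired - heading + math.pi) % (2 * math.pi) - math.pi
        heading += bias * err + rng.gauss(0.0, noise)
        x += math.cos(heading)
        y += math.sin(heading)
    return math.hypot(source[0] - x, source[1] - y)

src = (100.0, 50.0)
d_biased = grow(src, bias=0.5)     # turns toward the gradient source
d_straight = grow(src, bias=0.0)   # no chemoattractive turning
```

    With nonzero noise the same walker reproduces the population-level behavior described above: most neurites end up in the gradient-bearing arm, but the stochastic wiggle leaves some in the arm with no NGF gradient.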

  4. Reconstruction of forest geometries from terrestrial laser scanning point clouds for canopy radiative transfer modelling

    NASA Astrophysics Data System (ADS)

    Bremer, Magnus; Schmidtner, Korbinian; Rutzinger, Martin

    2015-04-01

    The architecture of forest canopies is a key parameter for forest ecological issues, helping to model the variability of wood biomass and foliage in space and time. In order to understand the nature of subpixel effects of optical space-borne sensors with coarse spatial resolution, hypothetical 3D canopy models are widely used for the simulation of radiative transfer in forests. Thereby, radiation is traced through the atmosphere and canopy geometries until it reaches the optical sensor. For a realistic simulation scene, we decompose terrestrial laser scanning point cloud data of leaf-off larch forest plots in the Austrian Alps and reconstruct detailed model-ready input data for radiative transfer simulations. The point clouds are pre-classified into primitive classes by Principal Component Analysis (PCA) using scale-adapted radius neighbourhoods. Elongated point structures are extracted as tree trunks. The tree trunks are used as seeds for a Dijkstra-growing procedure, in order to obtain single tree segmentation in the interlinked canopies. For the optimized reconstruction of branching architectures as vector models, point cloud skeletonisation is used in combination with an iterative Dijkstra-growing and by applying distance constraints. This allows conducting a hierarchical reconstruction preferring the tree trunk and higher-order branches and avoiding over-skeletonization effects. Based on the reconstructed branching architectures, larch needles are modelled based on the hierarchical level of branches and the geometrical openness of the canopy. For radiative transfer simulations, branch architectures are used as mesh geometries representing branches as cylindrical pipes. Needles are either used as meshes or as voxel-turbids. The presented workflow allows an automatic classification and single tree segmentation in interlinked canopies. The iterative Dijkstra-growing using distance constraints generated realistic reconstruction results.
As the mesh representation
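
    The PCA-based pre-classification rests on the spread of a point's neighbourhood: elongated (trunk- or branch-like) neighbourhoods have one dominant covariance eigenvalue, planar or volumetric ones do not. The sketch below uses per-axis variances as a cheap stand-in for the covariance eigenvalues (exact for the axis-aligned toy data); the threshold idea and data are illustrative assumptions.

```python
# Linearity of a 3D point neighbourhood from its spread along each axis (sketch).

def axis_variances(points):
    """Per-axis variances, sorted descending (stand-in for covariance eigenvalues;
    exact when the point set is axis-aligned, as in the toy data below)."""
    n = len(points)
    means = [sum(p[k] for p in points) / n for k in range(3)]
    var = [sum((p[k] - means[k]) ** 2 for p in points) / n for k in range(3)]
    return sorted(var, reverse=True)

def linearity(points):
    """Close to 1 for elongated (trunk-like) neighbourhoods, near 0 otherwise."""
    l1, l2, _ = axis_variances(points)
    return (l1 - l2) / l1

trunk = [(0.0, 0.0, float(z)) for z in range(10)]                      # a vertical line
canopy = [(0.5 * x, 0.5 * y, 0.0) for x in range(4) for y in range(4)]  # a flat patch
```

    Thresholding such a linearity measure per neighbourhood is what lets the workflow seed the Dijkstra-growing from trunk candidates only.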

  5. Instantaneous nonlinear assessment of complex cardiovascular dynamics by Laguerre-Volterra point process models.

    PubMed

    Valenza, Gaetano; Citi, Luca; Barbieri, Riccardo

    2013-01-01

    We report an exemplary study of instantaneous assessment of cardiovascular dynamics performed using point-process nonlinear models based on the Laguerre expansion of the linear and nonlinear Wiener-Volterra kernels. As quantifiers, instantaneous measures such as high-order spectral features and Lyapunov exponents can be estimated from a quadratic and cubic autoregressive formulation of the model's first-order moment, respectively. Here, these measures are evaluated on heartbeat series from 16 healthy subjects and 14 patients with Congestive Heart Failure (CHF). Data were gathered from the online repository PhysioBank, which has been taken as a landmark for testing nonlinear indices. Results show that the proposed nonlinear Laguerre-Volterra point-process methods are able to track the nonlinear and complex cardiovascular dynamics, distinguishing significantly between CHF and healthy heartbeat series.

  6. Modeling of Aerobrake Ballute Stagnation Point Temperature and Heat Transfer to Inflation Gas

    NASA Technical Reports Server (NTRS)

    Bahrami, Parviz A.

    2012-01-01

    A trailing Ballute drag device concept for spacecraft aerocapture is considered. A thermal model for calculation of the Ballute membrane temperature and the inflation gas temperature is developed. An algorithm capturing the most salient features of the concept is implemented. In conjunction with the thermal model, trajectory calculations for two candidate missions, the Titan Explorer and Neptune Orbiter missions, are used to estimate the stagnation point temperature and the inflation gas temperature. Radiation from both sides of the membrane at the stagnation point and conduction to the inflating gas are included. The results showed that radiation from the membrane and, to a much lesser extent, conduction to the inflating gas are likely to be the controlling heat transfer mechanisms, and that the increase in gas temperature due to aerodynamic heating is of secondary importance.
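
    If radiation indeed controls, a back-of-envelope stagnation temperature follows from equating the aerodynamic heat flux to two-sided re-radiation, q_aero = 2 ε σ T^4. The heat flux and emissivity values below are illustrative assumptions, not mission data.

```python
# Radiative-equilibrium membrane temperature at the stagnation point (sketch).

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def membrane_temperature(q_aero, emissivity):
    """Equilibrium temperature (K) of a thin membrane radiating from both sides,
    neglecting conduction to the inflation gas (shown above to be secondary)."""
    return (q_aero / (2.0 * emissivity * SIGMA)) ** 0.25

t = membrane_temperature(q_aero=5.0e4, emissivity=0.8)  # 50 kW/m^2 assumed
```

    The fourth-root dependence is why Ballutes tolerate substantial heating-rate uncertainty: a factor-of-two change in q_aero shifts the membrane temperature by only about 19 percent.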

  7. Coronary risk assessment by point-based vs. equation-based Framingham models: significant implications for clinical care.

    PubMed

    Gordon, William J; Polansky, Jesse M; Boscardin, W John; Fung, Kathy Z; Steinman, Michael A

    2010-11-01

    US cholesterol guidelines use original and simplified versions of the Framingham model to estimate future coronary risk and thereby classify patients into risk groups with different treatment strategies. We sought to compare risk estimates and risk group classification generated by the original, complex Framingham model and the simplified, point-based version. We assessed 2,543 subjects age 20-79 from the 2001-2006 National Health and Nutrition Examination Surveys (NHANES) for whom Adult Treatment Panel III (ATP-III) guidelines recommend formal risk stratification. For each subject, we calculated the 10-year risk of major coronary events using the original and point-based Framingham models, and then compared differences in these risk estimates and whether these differences would place subjects into different ATP-III risk groups (<10% risk, 10-20% risk, or >20% risk). Using standard procedures, all analyses were adjusted for survey weights, clustering, and stratification to make our results nationally representative. Among 39 million eligible adults, the original Framingham model categorized 71% of subjects as having "moderate" risk (<10% risk of a major coronary event in the next 10 years), 22% as having "moderately high" (10-20%) risk, and 7% as having "high" (>20%) risk. Estimates of coronary risk by the original and point-based models often differed substantially. The point-based system classified 15% of adults (5.7 million) into different risk groups than the original model, with 10% (3.9 million) misclassified into higher risk groups and 5% (1.8 million) into lower risk groups, for a net impact of classifying 2.1 million adults into higher risk groups. These risk group misclassifications would impact guideline-recommended drug treatment strategies for 25-46% of affected subjects. Patterns of misclassifications varied significantly by gender, age, and underlying CHD risk. 
Compared to the original Framingham model, the point-based version misclassifies millions

  8. Extended Fitts' model of pointing time in eye-gaze input system - Incorporating effects of target shape and movement direction into modeling.

    PubMed

    Murata, Atsuo; Fukunaga, Daichi

    2018-04-01

    This study attempted to investigate the effects of target shape and movement direction on pointing time using an eye-gaze input system, and to extend Fitts' model so that these factors are incorporated into the model and its predictive power is enhanced. The target shape, the target size, the movement distance, and the direction of target presentation were set as within-subject experimental variables. The target shapes included a circle and rectangles with aspect ratios of 1:1, 1:2, 1:3, and 1:4. The movement directions included eight directions: upper, lower, left, right, upper left, upper right, lower left, and lower right. On the basis of the data identifying the effects of target shape and movement direction on pointing time, an attempt was made to develop a generalized and extended Fitts' model that took into account the movement direction and the target shape. As a result, the generalized and extended model was found to fit the experimental data better and to be more effective for predicting pointing time for a variety of human-computer interaction (HCI) tasks using an eye-gaze input system.
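
    One simple way to fold shape and direction into Fitts' law is to keep the Shannon form of the index of difficulty, ID = log2(D/W + 1), but take W as the target's extent along the movement direction. The projection rule and the coefficients a, b below are illustrative assumptions, not the fitted model from the paper.

```python
import math

# Fitts' law with a direction-dependent effective target width (sketch).

def effective_width(w, h, angle):
    """Extent of a w-by-h rectangle along movement direction `angle` (radians),
    via a crude projection of each side onto the approach direction."""
    along_w = abs(w / math.cos(angle)) if math.cos(angle) else float("inf")
    along_h = abs(h / math.sin(angle)) if math.sin(angle) else float("inf")
    return min(along_w, along_h)

def pointing_time(d, w, h, angle, a=0.3, b=0.15):
    """Shannon-form Fitts' law with the effective width above; a, b assumed."""
    return a + b * math.log2(d / effective_width(w, h, angle) + 1.0)

t_horiz = pointing_time(200.0, 40.0, 10.0, 0.0)          # approach along the wide side
t_vert = pointing_time(200.0, 40.0, 10.0, math.pi / 2)   # approach along the short side
```

    Approaching the wide 40×10 target along its short side yields a larger ID and a longer predicted pointing time, which is the direction-by-shape interaction the extended model captures.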

  9. Two-point modeling of SOL losses of HHFW power in NSTX

    NASA Astrophysics Data System (ADS)

    Kish, Ayden; Perkins, Rory; Ahn, Joon-Wook; Diallo, Ahmed; Gray, Travis; Hosea, Joel; Jaworski, Michael; Kramer, Gerrit; Leblanc, Benoit; Sabbagh, Steve

    2017-10-01

    High-harmonic fast-wave (HHFW) heating is a heating and current-drive scheme on the National Spherical Torus eXperiment (NSTX) complementary to neutral beam injection. Previous experiments suggest that a significant fraction, up to 50%, of the HHFW power is promptly lost to the scrape-off layer (SOL). Research indicates that the lost power reaches the divertor via wave propagation and is converted to a heat flux at the divertor through RF rectification, rather than heating the SOL plasma at the midplane. This counter-intuitive hypothesis is investigated using a simplified two-point model relating plasma parameters at the divertor to those at the midplane. Taking measurements at the divertor region of NSTX as input, this two-point model is used to predict midplane parameters, using the predicted heat flux as an indicator of power input to the SOL. These predictions are compared to measurements at the midplane to evaluate the extent to which they are consistent with experiment. This work was made possible by funding from the Department of Energy for the Summer Undergraduate Laboratory Internship (SULI) program. This work is supported by US DOE Contract No. DE-AC02-09CH11466.
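
    The basic conduction-limited two-point relation links upstream (midplane) and target (divertor) electron temperatures along a flux tube via T_u^{7/2} = T_t^{7/2} + 7 q_∥ L / (2 κ_0). The values of q_∥, the connection length and the Spitzer conductivity coefficient below are illustrative assumptions, not NSTX measurements.

```python
# Conduction-limited two-point model: upstream T_e from target T_e (sketch).

KAPPA0 = 2000.0  # Spitzer parallel conductivity coefficient, W m^-1 eV^-7/2 (assumed)

def upstream_temperature(t_target, q_par, length):
    """Upstream T_e (eV) given target T_e (eV), parallel heat flux (W/m^2),
    and connection length (m)."""
    return (t_target ** 3.5 + 7.0 * q_par * length / (2.0 * KAPPA0)) ** (2.0 / 7.0)

t_u = upstream_temperature(t_target=5.0, q_par=5.0e7, length=15.0)
```

    Running the relation in this divertor-to-midplane direction, as described above, turns measured target conditions into a predicted upstream temperature that can be checked against midplane diagnostics.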

  10. Augmenting epidemiological models with point-of-care diagnostics data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pullum, Laura L.; Ramanathan, Arvind; Nutaro, James J.

    Although adoption of newer Point-of-Care (POC) diagnostics is increasing, there is a significant challenge using POC diagnostics data to improve epidemiological models. In this work, we propose a method to process zip-code level POC datasets and apply these processed data to calibrate an epidemiological model. We specifically develop a calibration algorithm using simulated annealing and calibrate a parsimonious equation-based model of modified Susceptible-Infected-Recovered (SIR) dynamics. The results show that parsimonious models are remarkably effective in predicting the dynamics observed in the number of infected patients, and our calibration algorithm is capable of predicting peak loads observed in POC diagnostics data while staying within reasonable and empirical parameter ranges reported in the literature. Additionally, we explore the future use of the calibrated values by testing the correlation between peak load and population density from Census data. Our results show that linearity assumptions for the relationships among various factors can be misleading; therefore, further data sources and analysis are needed to identify relationships between additional parameters and existing calibrated ones. As a result, calibration approaches such as ours can determine the values of newly added parameters along with existing ones and enable policy-makers to make better multi-scale decisions.

  11. Augmenting epidemiological models with point-of-care diagnostics data

    DOE PAGES

    Pullum, Laura L.; Ramanathan, Arvind; Nutaro, James J.; ...

    2016-04-20

    Although adoption of newer Point-of-Care (POC) diagnostics is increasing, there is a significant challenge using POC diagnostics data to improve epidemiological models. In this work, we propose a method to process zip-code level POC datasets and apply these processed data to calibrate an epidemiological model. We specifically develop a calibration algorithm using simulated annealing and calibrate a parsimonious equation-based model of modified Susceptible-Infected-Recovered (SIR) dynamics. The results show that parsimonious models are remarkably effective in predicting the dynamics observed in the number of infected patients, and our calibration algorithm is capable of predicting peak loads observed in POC diagnostics data while staying within reasonable and empirical parameter ranges reported in the literature. Additionally, we explore the future use of the calibrated values by testing the correlation between peak load and population density from Census data. Our results show that linearity assumptions for the relationships among various factors can be misleading; therefore, further data sources and analysis are needed to identify relationships between additional parameters and existing calibrated ones. As a result, calibration approaches such as ours can determine the values of newly added parameters along with existing ones and enable policy-makers to make better multi-scale decisions.
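
The calibration pipeline described (simulated annealing over SIR parameters against an observed infected series) can be sketched as follows; the cooling schedule, parameter bounds, and synthetic data are illustrative assumptions, not the paper's settings:

```python
import math, random

def sir_infected(beta, gamma, s0=0.99, i0=0.01, steps=60, dt=1.0):
    """Forward-simulate a simple SIR model; returns the infected
    fraction at each step. A stand-in for the modified SIR dynamics."""
    s, i = s0, i0
    out = []
    for _ in range(steps):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        s, i = s + dt * ds, i + dt * di
        out.append(i)
    return out

def calibrate_sa(observed, steps=2000, seed=1):
    """Simulated-annealing calibration of (beta, gamma) against an
    observed infected-fraction series (e.g. derived from POC counts).
    Bounds and cooling schedule are chosen for illustration only."""
    rng = random.Random(seed)
    def cost(p):
        sim = sir_infected(*p)
        return sum((a - b) ** 2 for a, b in zip(sim, observed))
    cur = (0.3, 0.1)          # initial guess
    cur_cost = cost(cur)
    best, best_cost = cur, cur_cost
    for k in range(steps):
        temp = 0.1 * (1.0 - k / steps) + 1e-6   # linear cooling
        cand = (min(1.0, max(0.01, cur[0] + rng.gauss(0, 0.05))),
                min(1.0, max(0.01, cur[1] + rng.gauss(0, 0.02))))
        c = cost(cand)
        # accept improvements, or worse moves with Boltzmann probability
        if c < cur_cost or rng.random() < math.exp((cur_cost - c) / temp):
            cur, cur_cost = cand, c
            if c < best_cost:
                best, best_cost = cand, c
    return best

# Recover known parameters from synthetic "observations":
truth = sir_infected(0.5, 0.15)
beta, gamma = calibrate_sa(truth)
```
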

  12. Reconstruction of 3d Models from Point Clouds with Hybrid Representation

    NASA Astrophysics Data System (ADS)

    Hu, P.; Dong, Z.; Yuan, P.; Liang, F.; Yang, B.

    2018-05-01

    The three-dimensional (3D) reconstruction of urban buildings from point clouds has long been an active topic in applications related to human activities. However, because building structures differ significantly in complexity, 3D reconstruction remains a challenging task, especially for freeform surfaces. In this paper, we present a new reconstruction algorithm that represents the 3D models of buildings as a combination of regular structures and irregular surfaces, where the regular structures are parameterized plane primitives and the irregular surfaces are expressed as meshes. The extraction of irregular surfaces starts with over-segmentation of the unstructured point data; a region-growing approach based on the adjacency graph of super-voxels is then applied to collapse these super-voxels, and the freeform surfaces can be clustered from the voxels filtered by a thickness threshold. To obtain the regular planar primitives, the remaining voxels with larger flatness are further divided into multiscale super-voxels as basic units, and the final segmented planes are enriched and refined in a mutually reinforcing manner under the framework of a global energy optimization. We implemented the proposed algorithms and tested them mainly on two point clouds that differ in point density and urban characteristics; experimental results on complex building structures illustrate the efficacy of the proposed framework.

  13. Mathematical model for calculation of the heat-hydraulic modes of heating points of heat-supplying systems

    NASA Astrophysics Data System (ADS)

    Shalaginova, Z. I.

    2016-03-01

    The mathematical model and calculation method for the thermal-hydraulic modes of heat points, based on the theory of hydraulic circuits and being developed at the Melentiev Energy Systems Institute, are presented. A redundant circuit of the heat point was developed, in which all possible connecting circuits (CC) of the heat engineering equipment and all places of possible installation of control valves were inserted. It allows simulating the operating modes both at central heat points (CHP) and at individual heat points (IHP). The configuration of the desired circuit is carried out automatically by removing the unnecessary links. The following circuits connecting the heating systems (HS) are considered: the dependent circuit (direct and through a mixing elevator) and the independent one (through a heater). The following connecting circuits of the hot water supply (HWS) load were considered: an open CC (direct water pumping from the pipelines of heat networks) and a closed CC with connection of the HWS heaters in single-level (serial and parallel) and two-level (sequential and combined) circuits. The following connecting circuits of the ventilation systems (VS) were also considered: the dependent circuit and the independent one through a common heat exchanger with the HS load. In the heat points, water temperature regulators for the hot water supply and ventilation, and flow regulators for the heating system as well as for the inlet as a whole, are possible. According to the accepted decomposition, the model of the heat point is an integral part of the overall thermal-hydraulic model of the heat-supplying system having intermediate control stages (CHP and IHP), which makes it possible to consider the operating modes of heat networks of different levels connected with each other through CHP, as well as connected through the IHP of consumers with various connecting circuits of local heat-consumption systems: heating, ventilation and hot water supply. The model is implemented in the Angara data

  14. Bayesian Modeling for Identification and Estimation of the Learning Effects of Pointing Tasks

    NASA Astrophysics Data System (ADS)

    Kyo, Koki

    Recently, in the field of human-computer interaction, a model containing the systematic factor and human factor has been proposed to evaluate the performance of the input devices of a computer. This is called the SH-model. In this paper, in order to extend the range of application of the SH-model, we propose some new models based on the Box-Cox transformation and apply a Bayesian modeling method for identification and estimation of the learning effects of pointing tasks. We consider the parameters describing the learning effect as random variables and introduce smoothness priors for them. Illustrative results show that the newly-proposed models work well.
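
The Box-Cox transformation underlying the proposed extensions has the standard form below; this is a generic sketch, not the paper's specific SH-model formulation:

```python
import math

def box_cox(y, lam):
    """Standard Box-Cox power transformation of a positive response y:
    (y**lam - 1)/lam for lam != 0, and log(y) in the lam -> 0 limit."""
    if lam == 0:
        return math.log(y)
    return (y ** lam - 1.0) / lam
```

In a modeling context such as the one described, `lam` would be treated as an additional parameter estimated alongside the learning-effect parameters.
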

  15. Impact of confinement housing on study end-points in the calf model of cryptosporidiosis.

    PubMed

    Graef, Geneva; Hurst, Natalie J; Kidder, Lance; Sy, Tracy L; Goodman, Laura B; Preston, Whitney D; Arnold, Samuel L M; Zambriski, Jennifer A

    2018-04-01

    Diarrhea is the second leading cause of death in children < 5 years globally, and the parasite genus Cryptosporidium is a leading cause of that diarrhea. The global disease burden attributable to cryptosporidiosis is substantial, and the only approved chemotherapeutic, nitazoxanide, has poor efficacy in HIV-positive children. Chemotherapeutic development is dependent on the calf model of cryptosporidiosis, which is the best approximation of human disease. However, the model is not consistently applied across research studies. Data collection commonly occurs using two different methods: Complete Fecal Collection (CFC), which requires use of confinement housing, and Interval Collection (IC), which permits use of box stalls. CFC mimics human challenge model methodology, but it is unknown whether confinement housing impacts study end-points and whether data gathered via this method are suitable for generalization to human populations. Using a modified crossover study design, we compared CFC and IC and evaluated the impact of housing on study end-points. At birth, calves were randomly assigned to confinement (n = 14) or box stall housing (n = 9), were challenged with 5 × 10^7 C. parvum oocysts, and were followed for 10 days. Study end-points included fecal oocyst shedding, severity of diarrhea, degree of dehydration, and plasma cortisol. Calves in confinement had no significant differences in mean log oocysts enumerated per gram of fecal dry matter between CFC and IC samples (P = 0.6), nor were there diurnal variations in oocyst shedding (P = 0.1). Confinement-housed calves shed significantly more oocysts (P = 0.05), had higher plasma cortisol (P = 0.001), and required more supportive care (P = 0.0009) than calves in box stalls. Housing method confounds study end-points in the calf model of cryptosporidiosis. Due to increased stress, data collected from calves in confinement housing may not accurately estimate the efficacy of chemotherapeutics targeting C. parvum.

  16. Modelling of thermal field and point defect dynamics during silicon single crystal growth using CZ technique

    NASA Astrophysics Data System (ADS)

    Sabanskis, A.; Virbulis, J.

    2018-05-01

    Mathematical modelling is employed to numerically analyse the dynamics of Czochralski (CZ) silicon single crystal growth. The model is axisymmetric; its thermal part describes heat transfer by conduction and thermal radiation, and allows the time-dependent shape of the crystal-melt interface to be predicted. Besides the thermal field, the point defect dynamics are modelled using the finite element method. The considered process consists of cone-growth and cylindrical phases, including a short period of reduced crystal pull rate and a power jump to avoid large diameter changes. The influence of the thermal stresses on the point defects is also investigated.

  17. An application of change-point recursive models to the relationship between litter size and number of stillborns in pigs.

    PubMed

    Ibáñez-Escriche, N; López de Maturana, E; Noguera, J L; Varona, L

    2010-11-01

    We developed and implemented change-point recursive models and compared them with a linear recursive model and a standard mixed model (SMM) in the scope of the relationship between litter size (LS) and number of stillborns (NSB) in pigs. The proposed approach allows us to estimate the point of change in multiple-segment modeling of a nonlinear relationship between phenotypes. We applied the procedure to a data set provided by a commercial Large White selection nucleus. The data file consisted of LS and NSB records of 4,462 parities. The results of the analysis clearly identified the location of the change points between different structural regression coefficients. The magnitude of these coefficients increased with LS, indicating an increasing incidence of LS on the NSB ratio. However, posterior distributions of correlations were similar across subpopulations (defined by the change points on LS), except for those between residuals. The heritability estimates of NSB did not differ between recursive models. Nevertheless, these heritabilities were greater than the one obtained for the SMM (0.05), with a posterior probability of 85%. These results suggest a nonlinear relationship between LS and NSB, which supports the adequacy of a change-point recursive model for its analysis. Furthermore, the results from model comparisons support the use of recursive models. However, the adequacy of the different recursive models depended on the criteria used: the linear recursive model was preferred on account of its smaller deviance value, whereas the nonlinear recursive models provided a better fit and predictive ability based on the cross-validation approach.

  18. CADASTER QSPR Models for Predictions of Melting and Boiling Points of Perfluorinated Chemicals.

    PubMed

    Bhhatarai, Barun; Teetz, Wolfram; Liu, Tao; Öberg, Tomas; Jeliazkova, Nina; Kochev, Nikolay; Pukalov, Ognyan; Tetko, Igor V; Kovarich, Simona; Papa, Ester; Gramatica, Paola

    2011-03-14

    Quantitative structure property relationship (QSPR) studies of per- and polyfluorinated chemicals (PFCs) on melting point (MP) and boiling point (BP) are presented. The training and prediction chemicals used for developing and validating the models were selected from the Syracuse PhysProp database and the literature. The available experimental data sets were split in two different ways: a) random selection on response value, and b) structural similarity verified by self-organizing map (SOM), in order to propose reliable predictive models, developed only on the training sets and externally verified on the prediction sets. Individual models based on linear and non-linear approaches, developed by different CADASTER partners using 0D-2D Dragon descriptors, E-state descriptors and fragment-based descriptors, as well as a consensus model and their predictions, are presented. In addition, the predictive performance of the developed models was verified on a blind external validation set (EV-set) prepared using the PERFORCE database, with 15 MP and 25 BP data, respectively. This database contains only long-chain perfluoro-alkylated chemicals, particularly monitored by regulatory agencies such as US-EPA and EU-REACH. QSPR models with internal and external validation on two different external prediction/validation sets, and a study of the applicability domain highlighting the robustness and high accuracy of the models, are discussed. Finally, MPs for an additional 303 PFCs and BPs for 271 PFCs, for which experimental measurements are unknown, were predicted. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Methanol Oxidation on Pt3Sn(111) for Direct Methanol Fuel Cells: Methanol Decomposition.

    PubMed

    Lu, Xiaoqing; Deng, Zhigang; Guo, Chen; Wang, Weili; Wei, Shuxian; Ng, Siu-Pang; Chen, Xiangfeng; Ding, Ning; Guo, Wenyue; Wu, Chi-Man Lawrence

    2016-05-18

    PtSn alloy, which is a potential material for use in direct methanol fuel cells, can efficiently promote methanol oxidation and alleviate the CO poisoning problem. Herein, methanol decomposition on Pt3Sn(111) was systematically investigated using periodic density functional theory and microkinetic modeling. The geometries and energies of all of the involved species were analyzed, and the decomposition network was mapped out to elaborate the reaction mechanisms. Our results indicated that methanol and formaldehyde were weakly adsorbed, whereas the other derivatives (CHxOHy, x = 1-3, y = 0-1) were strongly adsorbed and preferred decomposition rather than desorption on Pt3Sn(111). The competitive methanol decomposition started with the initial O-H bond scission followed by successive C-H bond scissions (i.e., CH3OH → CH3O → CH2O → CHO → CO). The Brønsted-Evans-Polanyi relations and energy barrier decomposition analyses identified the C-H and O-H bond scissions as being more competitive than the C-O bond scission. Microkinetic modeling confirmed that the vast majority of the intermediates and products from methanol decomposition would escape from the Pt3Sn(111) surface at a relatively low temperature, and that the coverage of the CO residue decreased with an increase in temperature and a decrease in the partial pressure of methanol.

  20. Improved equivalent magnetic network modeling for analyzing working points of PMs in interior permanent magnet machine

    NASA Astrophysics Data System (ADS)

    Guo, Liyan; Xia, Changliang; Wang, Huimin; Wang, Zhiqiang; Shi, Tingna

    2018-05-01

    As is well known, the armature current leads the back electromotive force (back-EMF) under load in the interior permanent magnet (PM) machine. This advanced armature current produces a demagnetizing field, which may easily cause irreversible demagnetization of the PMs. To estimate the working points of the PMs more accurately and take demagnetization into consideration in the early design stage of a machine, an improved equivalent magnetic network model is established in this paper. Each PM under each magnetic pole is segmented, and the networks in the rotor pole shoe are refined, which enables a more precise model of the flux path in the rotor pole shoe. The working point of each PM under each magnetic pole can be calculated accurately by the improved equivalent magnetic network model. The calculated results are compared with those obtained by FEM, and the effects of the d-axis and q-axis components of the armature current, the air-gap length, and the flux barrier size on the working points of the PMs are analyzed using the improved equivalent magnetic network model.

  1. Development of discrete choice model considering internal reference points and their effects in travel mode choice context

    NASA Astrophysics Data System (ADS)

    Sarif; Kurauchi, Shinya; Yoshii, Toshio

    2017-06-01

    In conventional travel behavior models such as logit and probit, decision makers are assumed to conduct absolute evaluations of the attributes of the choice alternatives. On the other hand, many researchers in cognitive psychology and marketing science have suggested that the perception of attributes is characterized by benchmarks called "reference points" and that relative evaluations based on them are often employed in various choice situations. Therefore, this study developed a travel behavior model based on mental accounting theory in which internal reference points are explicitly considered. A questionnaire survey about shopping trips to the CBD in Matsuyama city was conducted, and the roles of reference points in travel mode choice contexts were investigated. The results showed that the goodness-of-fit of the developed model was higher than that of the conventional model, indicating that internal reference points may play a major role in the choice of travel mode. The results also suggested that respondents utilize various reference points: some tend to adopt the lowest fuel price they have experienced, while others employ the fare price they perceive for the travel cost.

  2. Performance Analysis of Several GPS/Galileo Precise Point Positioning Models

    PubMed Central

    Afifi, Akram; El-Rabbany, Ahmed

    2015-01-01

    This paper examines the performance of several precise point positioning (PPP) models, which combine dual-frequency GPS/Galileo observations in the un-differenced and between-satellite single-difference (BSSD) modes. These include the traditional un-differenced model, the decoupled clock model, the semi-decoupled clock model, and the between-satellite single-difference model. We take advantage of the IGS-MGEX network products to correct for the satellite differential code biases and the orbital and satellite clock errors. Natural Resources Canada’s GPSPace PPP software is modified to handle the various GPS/Galileo PPP models. A total of six data sets of GPS and Galileo observations at six IGS stations are processed to examine the performance of the various PPP models. It is shown that the traditional un-differenced GPS/Galileo PPP model, the GPS decoupled clock model, and the semi-decoupled clock GPS/Galileo PPP model improve the convergence time by about 25% in comparison with the un-differenced GPS-only model. In addition, the semi-decoupled GPS/Galileo PPP model improves the solution precision by about 25% compared to the traditional un-differenced GPS/Galileo PPP model. Moreover, the BSSD GPS/Galileo PPP model improves the solution convergence time by about 50%, in comparison with the un-differenced GPS PPP model, regardless of the type of BSSD combination used. As well, the BSSD model improves the precision of the estimated parameters by about 50% and 25% when the loose and the tight combinations are used, respectively, in comparison with the un-differenced GPS-only model. Comparable results are obtained through the tight combination when either a GPS or a Galileo satellite is selected as a reference. PMID:26102495

  3. Performance Analysis of Several GPS/Galileo Precise Point Positioning Models.

    PubMed

    Afifi, Akram; El-Rabbany, Ahmed

    2015-06-19

    This paper examines the performance of several precise point positioning (PPP) models, which combine dual-frequency GPS/Galileo observations in the un-differenced and between-satellite single-difference (BSSD) modes. These include the traditional un-differenced model, the decoupled clock model, the semi-decoupled clock model, and the between-satellite single-difference model. We take advantage of the IGS-MGEX network products to correct for the satellite differential code biases and the orbital and satellite clock errors. Natural Resources Canada's GPSPace PPP software is modified to handle the various GPS/Galileo PPP models. A total of six data sets of GPS and Galileo observations at six IGS stations are processed to examine the performance of the various PPP models. It is shown that the traditional un-differenced GPS/Galileo PPP model, the GPS decoupled clock model, and the semi-decoupled clock GPS/Galileo PPP model improve the convergence time by about 25% in comparison with the un-differenced GPS-only model. In addition, the semi-decoupled GPS/Galileo PPP model improves the solution precision by about 25% compared to the traditional un-differenced GPS/Galileo PPP model. Moreover, the BSSD GPS/Galileo PPP model improves the solution convergence time by about 50%, in comparison with the un-differenced GPS PPP model, regardless of the type of BSSD combination used. As well, the BSSD model improves the precision of the estimated parameters by about 50% and 25% when the loose and the tight combinations are used, respectively, in comparison with the un-differenced GPS-only model. Comparable results are obtained through the tight combination when either a GPS or a Galileo satellite is selected as a reference.
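
The BSSD mode examined above can be illustrated with a minimal sketch: differencing observations between satellites cancels the receiver clock error common to all of them. The observation values below are hypothetical:

```python
def between_satellite_single_difference(obs, ref_sat):
    """BSSD: differencing observations between satellites cancels the
    common receiver clock error. obs maps satellite id -> observation
    (meters); ref_sat is the chosen reference satellite."""
    ref = obs[ref_sat]
    return {sat: o - ref for sat, o in obs.items() if sat != ref_sat}

# Hypothetical pseudoranges for two GPS and one Galileo satellite:
obs = {"G01": 100.0, "G02": 103.0, "E01": 98.5}
bssd = between_satellite_single_difference(obs, "G01")
```

As the abstract notes, comparable results are obtained whether a GPS or a Galileo satellite serves as the reference, since the differencing operation itself is symmetric in the choice of `ref_sat`.
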

  4. PIV study of the wake of a model wind turbine transitioning between operating set points

    NASA Astrophysics Data System (ADS)

    Houck, Dan; Cowen, Edwin (Todd)

    2016-11-01

    Wind turbines are ideally operated at their most efficient tip speed ratio for a given wind speed. There is increasing interest, however, in operating turbines at other set points to increase the overall power production of a wind farm. Specifically, Goit and Meyers (2015) used LES to examine a wind farm optimized by unsteady operation of its turbines. In this study, the wake of a model wind turbine is measured in a water channel using PIV. We measure the wake response to a change in operational set point of the model turbine, e.g., from low to high tip speed ratio or vice versa, to examine how it might influence a downwind turbine. A modified torque transducer after Kang et al. (2010) is used to calibrate in situ voltage measurements, taken across a resistance on the model turbine's generator, against the torque on the generator. Changes in operational set point are made by changing the resistance or the flow speed, which changes the rotation rate measured by an encoder. Single-camera PIV on vertical planes reveals statistics of the wake at various distances downstream as the turbine transitions from one set point to another. From these measurements, we infer how the unsteady operation of a turbine may affect the performance of a downwind turbine through its incoming flow. Funded by the National Science Foundation and the Atkinson Center for a Sustainable Future.

  5. A Feedback Loop between Dynamin and Actin Recruitment during Clathrin-Mediated Endocytosis

    PubMed Central

    Taylor, Marcus J.; Lampe, Marko; Merrifield, Christien J.

    2012-01-01

    Clathrin-mediated endocytosis proceeds by a sequential series of reactions catalyzed by discrete sets of protein machinery. The final reaction in clathrin-mediated endocytosis is membrane scission, which is mediated by the large guanosine triphosphate hydrolase (GTPase) dynamin and which may involve the actin-dependent recruitment of N-BAR (N-terminal amphipathic helix-containing BIN/Amphiphysin/RVS domain) proteins. Optical microscopy has revealed a detailed picture of when and where particular protein types are recruited in the ∼20–30 s preceding scission. Nevertheless, the regulatory mechanisms and functions that underpin protein recruitment are not well understood. Here we used an optical assay to investigate the coordination and interdependencies between the recruitment of dynamin, the actin cytoskeleton, and N-BAR proteins to individual clathrin-mediated endocytic scission events. These measurements revealed that a feedback loop exists between dynamin and actin at sites of membrane scission. The kinetics of dynamin, actin, and N-BAR protein recruitment were modulated by dynamin GTPase activity. Conversely, acute ablation of actin dynamics using latrunculin-B led to a ∼50% decrease in the incidence of scission, an ∼50% decrease in the amplitude of dynamin recruitment, and abolished actin and N-BAR recruitment to scission events. Collectively these data suggest that dynamin, actin, and N-BAR proteins work cooperatively to efficiently catalyze membrane scission. Dynamin controls its own recruitment to scission events by modulating the kinetics of actin and N-BAR recruitment to sites of scission. Conversely actin serves as a dynamic scaffold that concentrates dynamin and N-BAR proteins at sites of scission. PMID:22505844

  6. Modelling the association of dengue fever cases with temperature and relative humidity in Jeddah, Saudi Arabia-A generalised linear model with break-point analysis.

    PubMed

    Alkhaldy, Ibrahim

    2017-04-01

    The aim of this study was to examine the role of environmental factors in the temporal distribution of dengue fever in Jeddah, Saudi Arabia. The relationship between dengue fever cases and climatic factors such as relative humidity and temperature was investigated during 2006-2009 to determine whether there is any relationship between dengue fever cases and climatic parameters in Jeddah City, Saudi Arabia. A generalised linear model (GLM) with a break-point was used to determine how different levels of temperature and relative humidity affect the distribution of the number of dengue fever cases. Break-point analysis was performed to model the effect before and after a break-point (change point) in the explanatory parameters under various scenarios. The Akaike information criterion (AIC) and cross validation (CV) were used to assess the performance of the models. The results showed that maximum temperature and mean relative humidity are probably the best predictors of the number of dengue fever cases in Jeddah. In this study three scenarios were modelled: no time lag, 1-week lag and 2-weeks lag. Among these scenarios, the 1-week lag model using mean relative humidity as an explanatory variable showed better performance. This study showed a clear relationship between the meteorological variables and the number of dengue fever cases in Jeddah. The results also demonstrated that meteorological variables can be successfully used to estimate the number of dengue fever cases for a given period of time. Break-point analysis provides further insight into the association between meteorological parameters and dengue fever cases by dividing the meteorological parameters into certain break-points. Copyright © 2016 Elsevier B.V. All rights reserved.
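
Break-point analysis of the kind described can be sketched as a grid search over candidate change points, fitting a separate regression on each side; this simplified least-squares version is a stand-in for the paper's GLM, and the synthetic data are illustrative:

```python
def fit_breakpoint(xs, ys, candidates):
    """Segmented regression by grid search: for each candidate
    break-point, fit separate least-squares lines to the data below
    and above it, and keep the break minimizing total squared error."""
    def ols(pts):
        # Ordinary least squares for slope b and intercept a.
        n = len(pts)
        sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
        sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
        denom = n * sxx - sx * sx
        if denom == 0:
            return 0.0, sy / n
        b = (n * sxy - sx * sy) / denom
        return b, (sy - b * sx) / n
    best = None
    for c in candidates:
        lo = [(x, y) for x, y in zip(xs, ys) if x <= c]
        hi = [(x, y) for x, y in zip(xs, ys) if x > c]
        if len(lo) < 2 or len(hi) < 2:
            continue
        (b1, a1), (b2, a2) = ols(lo), ols(hi)
        sse = sum((y - (a1 + b1 * x)) ** 2 for x, y in lo)
        sse += sum((y - (a2 + b2 * x)) ** 2 for x, y in hi)
        if best is None or sse < best[0]:
            best = (sse, c)
    return best[1]
```

In the paper's setting, `xs` would be a meteorological variable (e.g. mean relative humidity) and `ys` the case counts, with the GLM link replacing the plain linear fit.
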

  7. Chemical composition and inhibitory effects of water extract of Henna leaves on reactive oxygen species, DNA scission and proliferation of cancer cells

    PubMed Central

    Kumar, Manish; Chandel, Madhu; Kaur, Paramjeet; Pandit, Kritika; Kaur, Varinder; Kaur, Sandeep; Kaur, Satwinderjeet

    2016-01-01

    For centuries, Lawsonia inermis L. (Henna) has been utilized in the traditional health care system as a medicinal and cosmetic agent. The present study was intended to assess the antiradical, DNA protective and antiproliferative activity of a water extract of Lawsonia inermis L. leaves (W-LI). Antioxidant activity was estimated using various in vitro assays, such as DPPH, ABTS, superoxide anion radical scavenging, FRAP, deoxyribose degradation and DNA protection assays. Growth inhibitory effects of W-LI were assessed using the MTT assay against different cancer cell lines, viz. HeLa, MCF-7, A549, C6 and COLO-205. From the results of the antioxidant assays, it was found that W-LI quenched DPPH and ABTS cation radicals with IC50 values of 352.77 µg/ml and 380.87 µg/ml, respectively. It demonstrated hydroxyl radical scavenging potential of 59.75 % at the highest test dose of 1000 µg/ml in the deoxyribose degradation assay. The results of the FRAP assay showed that W-LI also possesses significant reducing activity. The extract inhibited hydroxyl radical induced pBR322 plasmid DNA strand scission, thus conferring DNA protection. Growth inhibition of the various cancer cell lines was achieved to varying extents on treatment with W-LI. Further, it was observed that activity was quite promising against colon cancer COLO-205 cells (GI50 121.03 µg/ml). HPLC profiling of W-LI revealed the presence of different polyphenolic compounds such as ellagic acid, catechin, quercetin, kaempferol etc., which might contribute towards the antioxidant and cytotoxic activity. The present study demonstrated that the polyphenol-rich W-LI extract from leaves of L. inermis possesses the ability to inhibit oxidative radicals and cancer cell proliferation. PMID:28337113

  8. Pilot points method for conditioning multiple-point statistical facies simulation on flow data

    NASA Astrophysics Data System (ADS)

    Ma, Wei; Jafarpour, Behnam

    2018-05-01

    We propose a new pilot points method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, calibration against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place pilot points, a score map is generated based on three sources of information: (i) the uncertainty in facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and then are used, along with available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) is adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at selected locations away from the wells and the latter to ensure consistent facies structure and connectivity away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.

  9. Effect of Finite Particle Size on Convergence of Point Particle Models in Euler-Lagrange Multiphase Dispersed Flow

    NASA Astrophysics Data System (ADS)

    Nili, Samaun; Park, Chanyoung; Haftka, Raphael T.; Kim, Nam H.; Balachandar, S.

    2017-11-01

    Point particle methods are extensively used in simulating Euler-Lagrange multiphase dispersed flow. When particles are much smaller than the Eulerian grid, the point particle model is on firm theoretical ground. However, this standard approach of evaluating the gas-particle coupling at the particle center fails to converge as the Eulerian grid is reduced below the particle size. We present an approach to model the interaction between particles and fluid for finite-size particles that permits convergence. We use the generalized Faxen form to compute the force on a particle and compare the results against the traditional point particle method. We apportion the different force components on the particle to fluid cells based on the fraction of particle volume or surface in each cell. The application is a one-dimensional model of shock propagation through a particle-laden field at moderate volume fraction, where convergence is achieved for a well-formulated force model and back coupling for finite-size particles. Comparison with 3D direct fully resolved numerical simulations will be used to check whether the approach also improves accuracy compared to the point particle model. Work supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, under Contract No. DE-NA0002378.
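
    The apportionment step can be sketched in one dimension: each force component is distributed to the fluid cells in proportion to the fraction of the particle's extent overlapping each cell. The 1-D geometry and function name below are illustrative assumptions, not the authors' code:

```python
def cell_weights(x_center, radius, cell_edges):
    """Fraction of a 1-D particle's extent overlapping each Eulerian cell.

    cell_edges is a sorted list of cell boundary positions; the weights sum
    to 1 when the particle lies entirely inside the grid, and the back-coupled
    force on each cell is the particle force times that cell's weight.
    """
    lo, hi = x_center - radius, x_center + radius
    length = hi - lo
    weights = []
    for a, b in zip(cell_edges[:-1], cell_edges[1:]):
        overlap = max(0.0, min(b, hi) - max(a, lo))
        weights.append(overlap / length)
    return weights
```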

  10. Implicit Shape Models for Object Detection in 3d Point Clouds

    NASA Astrophysics Data System (ADS)

    Velizhev, A.; Shapovalov, R.; Schindler, K.

    2012-07-01

    We present a method for automatic object localization and recognition in 3D point clouds representing outdoor urban scenes. The method is based on the implicit shape models (ISM) framework, which recognizes objects by voting for their center locations. It requires only a few training examples per class, which is an important property for practical use. We also introduce and evaluate an improved version of the spin image descriptor that is more robust to point density variation and uncertainty in normal direction estimation. Our experiments reveal a significant impact of these modifications on the recognition performance. We compare our results against a state-of-the-art method and obtain significant improvements in both precision and recall on the Ohio dataset, consisting of combined aerial and terrestrial LiDAR scans covering 150,000 m² of urban area in total.

  11. Scan-To Output Validation: Towards a Standardized Geometric Quality Assessment of Building Information Models Based on Point Clouds

    NASA Astrophysics Data System (ADS)

    Bonduel, M.; Bassier, M.; Vergauwen, M.; Pauwels, P.; Klein, R.

    2017-11-01

    The use of Building Information Modeling (BIM) for existing buildings based on point clouds is increasing. Standardized geometric quality assessment of the BIMs is needed to make them more reliable and thus reusable for future users. First, available literature on the subject is studied. Next, an initial proposal for a standardized geometric quality assessment is presented. Finally, this method is tested and evaluated with a case study. The number of specifications on BIM relating to existing buildings is limited. The Levels of Accuracy (LOA) specification of the USIBD provides definitions and suggestions regarding geometric model accuracy, but lacks a standardized assessment method. A deviation analysis is found to be dependent on (1) the used mathematical model, (2) the density of the point clouds and (3) the order of comparison. Results of the analysis can be graphical and numerical. An analysis on macro (building) and micro (BIM object) scale is necessary. On macro scale, the complete model is compared to the original point cloud and vice versa to get an overview of the general model quality. The graphical results show occluded zones and non-modeled objects respectively. Colored point clouds are derived from this analysis and integrated in the BIM. On micro scale, the relevant surface parts are extracted per BIM object and compared to the complete point cloud. Occluded zones are extracted based on a maximum deviation. What remains is classified according to the LOA specification. The numerical results are integrated in the BIM with the use of object parameters.

  12. The Φ³₄ and Φ³₆ matricial QFT models have reflection positive two-point function

    NASA Astrophysics Data System (ADS)

    Grosse, Harald; Sako, Akifumi; Wulkenhaar, Raimar

    2018-01-01

    We extend our previous work (on D = 2) to give an exact solution of the Φ³_D large-N matrix model (or renormalised Kontsevich model) in D = 4 and D = 6 dimensions. Induction proofs and the difficult combinatorics are unchanged compared with D = 2, but the renormalisation, performed according to Zimmermann, is much more involved. As the main result we prove that the Schwinger 2-point function resulting from the Φ³_D QFT model on Moyal space satisfies, for real coupling constant, reflection positivity in D = 4 and D = 6 dimensions. The Källén-Lehmann mass spectrum of the associated Wightman 2-point function describes a scattering part |p|² ≥ 2μ² and an isolated broadened mass shell around |p|² = μ².

  13. SPS antenna pointing control

    NASA Technical Reports Server (NTRS)

    Hung, J. C.

    1980-01-01

    The pointing control of a microwave antenna of the Satellite Power System was investigated, emphasizing: (1) the SPS antenna pointing error sensing method; (2) a rigid-body pointing control design; and (3) approaches for modeling the flexible-body characteristics of the solar collector. Accuracy requirements for the antenna pointing control consist of a mechanical pointing control accuracy of three arc-minutes and an electronic phased-array pointing accuracy of three arc-seconds. Results based on the factors considered in the current analysis show that the three arc-minute overall pointing control accuracy can be achieved in practice.

  14. Adding-point strategy for reduced-order hypersonic aerothermodynamics modeling based on fuzzy clustering

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Liu, Li; Zhou, Sida; Yue, Zhenjiang

    2016-09-01

    Reduced-order models (ROMs) based on snapshots from high-fidelity CFD simulations have received great attention recently due to their capability of capturing the features of complex geometries and flow configurations. To improve the efficiency and precision of ROMs, it is indispensable to add extra sampling points to the initial snapshots, since the number of sampling points needed to achieve an adequately accurate ROM is generally unknown a priori, while a large number of initial sampling points reduces the parsimony of the ROMs. A fuzzy-clustering-based adding-point strategy is proposed, in which the fuzzy clustering acts as an indicator of the regions where the precision of the ROM is relatively low. The proposed method is applied to construct ROMs for benchmark mathematical examples and a numerical example of hypersonic aerothermodynamics prediction for a typical control surface. The proposed method achieves a 34.5% improvement in efficiency over the estimated-mean-squared-error prediction algorithm while showing the same level of prediction accuracy.
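
    The fuzzy-clustering indicator rests on the standard fuzzy c-means membership formula, sketched below for scalar samples. The cluster centers are assumed given, and the function names and fuzzifier value are illustrative assumptions, not taken from the paper:

```python
def fcm_memberships(points, centers, m=2.0):
    """Fuzzy c-means membership u[i][j] of each scalar sample to each
    cluster center; each row sums to 1. m > 1 is the fuzzifier."""
    u = []
    for x in points:
        d = [abs(x - c) for c in centers]
        if 0.0 in d:
            # A sample sitting exactly on a center gets a crisp membership.
            row = [1.0 if dj == 0.0 else 0.0 for dj in d]
        else:
            row = [1.0 / sum((dj / dk) ** (2.0 / (m - 1.0)) for dk in d)
                   for dj in d]
        u.append(row)
    return u

def lowest_precision_cluster(memberships, errors):
    """Index of the cluster with the largest membership-weighted mean ROM
    error, i.e. the region where extra sampling points are most needed."""
    ncl = len(memberships[0])
    scores = []
    for j in range(ncl):
        w = [row[j] for row in memberships]
        scores.append(sum(wi * e for wi, e in zip(w, errors)) / sum(w))
    return max(range(ncl), key=lambda j: scores[j])
```

    New snapshots would then be sampled inside the flagged cluster, the ROM rebuilt, and the indicator re-evaluated until the error estimate stops improving.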

  15. Ocean Turbulence. Paper 3; Two-Point Closure Model Momentum, Heat and Salt Vertical Diffusivities in the Presence of Shear

    NASA Technical Reports Server (NTRS)

    Canuto, V. M.; Dubovikov, M. S.; Howard, A.; Cheng, Y.

    1999-01-01

    In papers 1 and 2 we presented the results of the most up-to-date 1-point closure model for the turbulent vertical diffusivities of momentum, heat and salt, K(sub m,h,s). In this paper, we derive analytic expressions for K(sub m,h,s) using a new 2-point closure model that has recently been developed and successfully tested against some 80 turbulence statistics for different flows. The new model has no free parameters. The expressions for K(sub m,h,s) are analytical functions of two stability parameters: the Turner number R(sub rho) (salinity gradient/temperature gradient) and the Richardson number R(sub i) (temperature gradient/shear). The turbulent kinetic energy K and its rate of dissipation may be taken as local or non-local (K-epsilon model). Contrary to all previous models, which describe turbulent mixing below the mixed layer (ML) by adopting three adjustable "background diffusivities" for momentum, heat and salt, we propose a model that avoids such adjustable diffusivities. We assume that below the ML, K(sub m,h,s) have the same functional dependence on R(sub i) and R(sub rho) derived from the turbulence model. However, in order to compute R(sub i) below the ML, we use data of vertical shear due to wave-breaking measured by Gargett et al. (1981). The procedure frees the model from adjustable background diffusivities, and indeed we use the same model throughout the entire vertical extent of the ocean. Using the new K(sub m,h,s), we run an O-GCM and present a variety of results that we compare with Levitus and the KPP model. Since the traditional 1-point closure (used in papers 1 and 2) and the new 2-point closure models used here represent different modeling philosophies and procedures, testing them in an O-GCM is indispensable. The basic motivation is to show that the new 2-point closure model gives results that are overall superior to the 1-point closure, in spite of the fact that the latter relies on several adjustable parameters while the new 2-point

  16. General model for the pointing error analysis of Risley-prism system based on ray direction deviation in light refraction

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Yuan, Yan; Su, Lijuan; Huang, Fengzhen; Bai, Qing

    2016-09-01

    The Risley-prism-based light-beam steering apparatus delivers superior pointing accuracy and is used in imaging LIDAR and imaging microscopes. A general model for the pointing error analysis of Risley prisms, based on ray direction deviation in light refraction, is proposed in this paper. The model captures incident-beam deviation, assembly deflections, and prism rotational error. We first derive the transmission matrices of the model. Then, the independent and cumulative effects of the different errors are analyzed through the model. An accuracy study of the model shows that the prediction deviation of the pointing error for each error source is less than 4.1×10⁻⁵° when the error amplitude is 0.1°. Detailed analyses indicate that the different error sources affect the pointing accuracy to varying degrees, the major error source being incident-beam deviation. Prism tilting has a relatively large effect on the pointing accuracy when the prism tilts in the principal section. The cumulative-effect analyses of multiple errors show that the pointing error can be reduced by tuning the bearing tilting in the same direction. The cumulative effect of rotational error is relatively large when the difference between the two prism rotational angles equals 0 or π, and relatively small when the difference equals π/2. These results can help to uncover the error distribution and aid in measurement calibration of Risley-prism systems.

  17. 78 FR 55629 - Special Conditions: Cirrus Design Corporation, Model SF50; Inflatable Three-Point Restraint...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-11

    ...-0781; Special Conditions No. 23-261-SC] Special Conditions: Cirrus Design Corporation, Model SF50... conditions are issued for the Cirrus Design Corporation (Cirrus), model SF50. This airplane will have novel and unusual design features associated with installation of an inflatable three-point restraint safety...

  18. Spacing distribution functions for the one-dimensional point-island model with irreversible attachment

    NASA Astrophysics Data System (ADS)

    González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.

    2011-07-01

    We study the configurational structure of the point-island model for epitaxial growth in one dimension. In particular, we calculate the island gap and capture zone distributions. Our model is based on an approximate description of nucleation inside the gaps. Nucleation is described by the joint probability density p_n^XY(x,y), which represents the probability density to have nucleation at position x within a gap of size y. Our proposed functional form for p_n^XY(x,y) describes the statistical behavior of the system excellently. We compare our analytical model with extensive numerical simulations. Our model retains the most relevant physical properties of the system.
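
    The abstract does not give the functional form of the nucleation density; purely as an illustration of how such a density can be used in simulation, the sketch below rejection-samples a nucleation site from an assumed density peaked at mid-gap (the form and exponent are hypothetical, not the authors' p_n^XY):

```python
import random

def sample_nucleation_position(gap, exponent=3.0, rng=random):
    """Rejection-sample a nucleation site x in (0, gap) from an assumed
    density p(x) proportional to [x*(gap-x)]**exponent, which is peaked at
    mid-gap: diffusion-mediated nucleation favors sites far from both
    absorbing island edges."""
    peak = (gap * gap / 4.0) ** exponent  # density maximum, at x = gap/2
    while True:
        x = rng.uniform(0.0, gap)
        if rng.random() * peak <= (x * (gap - x)) ** exponent:
            return x
```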

  19. Numerical simulation of a lattice polymer model at its integrable point

    NASA Astrophysics Data System (ADS)

    Bedini, A.; Owczarek, A. L.; Prellberg, T.

    2013-07-01

    We revisit an integrable lattice model of polymer collapse using numerical simulations. This model was first studied by Blöte and Nienhuis (1989 J. Phys. A: Math. Gen. 22 1415); it describes polymers with some attraction, thus providing a model for the polymer collapse transition. At a particular set of Boltzmann weights the model is integrable, and the exponents ν = 12/23 ≈ 0.522 and γ = 53/46 ≈ 1.152 have been computed via identification of the scaling dimensions xt = 1/12 and xh = -5/48. We directly investigate the polymer scaling exponents via Monte Carlo simulations using the pruned-enriched Rosenbluth method (PERM) algorithm. By simulating this polymer model for walks up to length 4096 we find ν = 0.576(6) and γ = 1.045(5), which are clearly different from the predicted values. Our estimate for the exponent ν is compatible with the known θ-point value of 4/7 and in agreement with a very recent numerical evaluation by Foster and Pinettes (2012 J. Phys. A: Math. Theor. 45 505003).

  20. A real-time ionospheric model based on GNSS Precise Point Positioning

    NASA Astrophysics Data System (ADS)

    Tu, Rui; Zhang, Hongping; Ge, Maorong; Huang, Guanwen

    2013-09-01

    This paper proposes a method for real-time monitoring and modeling of the ionospheric Total Electron Content (TEC) by Precise Point Positioning (PPP). Firstly, the ionospheric TEC and the receiver's Differential Code Biases (DCB) are estimated with undifferenced raw observations in real time; then the ionospheric TEC model is established based on the Single Layer Model (SLM) assumption and the recovered ionospheric TEC. In this study, high-precision phase observations are used directly instead of phase-smoothed code observations. In addition, the DCB estimation is separated from the establishment of the ionospheric model, which limits the impact of the SLM assumption. The ionospheric model is established at every epoch for real-time application. The method is validated with three different GNSS networks at local, regional, and global scales. The results show that the method is feasible and effective; the real-time ionosphere and DCB results are highly consistent with the IGS final products, with biases of 1-2 TECU and 0.4 ns, respectively.
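
    The SLM assumption mentioned above maps slant TEC along the receiver-satellite ray to vertical TEC at the ionospheric pierce point. A minimal sketch using the standard single-layer mapping function follows; the 450 km shell height is a common convention, not necessarily the value used in the paper:

```python
import math

R_EARTH_KM = 6371.0

def slant_to_vertical_tec(stec, zenith_deg, shell_height_km=450.0):
    """Convert slant TEC to vertical TEC with the single-layer mapping
    function: VTEC = STEC * cos(z'), where z' is the zenith angle of the
    ray at the thin ionospheric shell (Snell-like geometric projection)."""
    z = math.radians(zenith_deg)
    sin_zp = R_EARTH_KM / (R_EARTH_KM + shell_height_km) * math.sin(z)
    return stec * math.sqrt(1.0 - sin_zp ** 2)
```

    At zenith the two quantities coincide; at low elevations the mapping function shrinks the slant value substantially, which is why low-elevation observations are usually down-weighted when fitting the TEC model.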

  1. A hierarchical model combining distance sampling and time removal to estimate detection probability during avian point counts

    USGS Publications Warehouse

    Amundson, Courtney L.; Royle, J. Andrew; Handel, Colleen M.

    2014-01-01

    Imperfect detection during animal surveys biases estimates of abundance and can lead to improper conclusions regarding distribution and population trends. Farnsworth et al. (2005) developed a combined distance-sampling and time-removal model for point-transect surveys that addresses both availability (the probability that an animal is available for detection; e.g., that a bird sings) and perceptibility (the probability that an observer detects an animal, given that it is available for detection). We developed a hierarchical extension of the combined model that provides an integrated analysis framework for a collection of survey points at which both distance from the observer and time of initial detection are recorded. Implemented in a Bayesian framework, this extension facilitates evaluating covariates on abundance and detection probability, incorporating excess zero counts (i.e. zero-inflation), accounting for spatial autocorrelation, and estimating population density. Species-specific characteristics, such as behavioral displays and territorial dispersion, may lead to different patterns of availability and perceptibility, which may, in turn, influence the performance of such hierarchical models. Therefore, we first test our proposed model using simulated data under different scenarios of availability and perceptibility. We then illustrate its performance with empirical point-transect data for a songbird that consistently produces loud, frequent, primarily auditory signals, the Golden-crowned Sparrow (Zonotrichia atricapilla); and for 2 ptarmigan species (Lagopus spp.) that produce more intermittent, subtle, and primarily visual cues. Data were collected by multiple observers along point transects across a broad landscape in southwest Alaska, so we evaluated point-level covariates on perceptibility (observer and habitat), availability (date within season and time of day), and abundance (habitat, elevation, and slope), and included a nested point
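
    The availability/perceptibility decomposition can be illustrated with standard component models: a constant-rate cue process for availability and a half-normal distance function averaged over a circular plot for perceptibility. These particular forms and function names are illustrative assumptions, not the authors' hierarchical model:

```python
import math

def availability(cue_rate, duration):
    """P(animal gives at least one cue during the count), assuming cues
    arrive as a Poisson process -- the time-removal component."""
    return 1.0 - math.exp(-cue_rate * duration)

def perceptibility(sigma, radius):
    """Half-normal detection probability g(r) = exp(-r^2 / (2 sigma^2)),
    averaged over a circular plot of the given radius (closed form of the
    integral of g(r) * 2r / radius^2) -- the distance-sampling component."""
    s2 = 2.0 * sigma ** 2
    return (s2 / radius ** 2) * (1.0 - math.exp(-radius ** 2 / s2))

def detection_probability(cue_rate, duration, sigma, radius):
    """Overall detection probability = availability * perceptibility."""
    return availability(cue_rate, duration) * perceptibility(sigma, radius)
```

    The contrast in the abstract maps directly onto these components: a loud, frequent singer has high availability but possibly low perceptibility in dense habitat, while an intermittent, visual species is limited mainly by availability.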

  2. The scalar-scalar-tensor inflationary three-point function in the axion monodromy model

    NASA Astrophysics Data System (ADS)

    Chowdhury, Debika; Sreenath, V.; Sriramkumar, L.

    2016-11-01

    The axion monodromy model involves a canonical scalar field that is governed by a linear potential with superimposed modulations. The modulations in the potential are responsible for a resonant behavior which gives rise to persisting oscillations in the scalar and, to a smaller extent, in the tensor power spectra. Interestingly, such spectra have been shown to lead to a better fit to the cosmological data than the more conventional, nearly scale invariant, primordial power spectra. The scalar bispectrum in the model also exhibits continued modulations, and the resonance is known to boost the amplitude of the scalar non-Gaussianity parameter to rather large values. An analytical expression for the scalar bispectrum was arrived at earlier and has in fact been used to compare the model with the cosmic microwave background anisotropies at the level of three-point functions involving scalars. In this work, with future applications in mind, we arrive at a similar analytical template for the scalar-scalar-tensor cross-correlation. We also analytically establish the consistency relation (in the squeezed limit) for this three-point function. We conclude with a summary of the main results obtained.

  3. The OPALS Plan for Operations: Use of ISS Trajectory and Attitude Models in the OPALS Pointing Strategy

    NASA Technical Reports Server (NTRS)

    Abrahamson, Matthew J.; Oaida, Bogdan; Erkmen, Baris

    2013-01-01

    This paper will discuss the OPALS pointing strategy, focusing on incorporation of ISS trajectory and attitude models to build pointing predictions. Methods to extrapolate an ISS prediction based on past data will be discussed and will be compared to periodically published ISS predictions and Two-Line Element (TLE) predictions. The prediction performance will also be measured against GPS states available in telemetry. The performance of the pointing products will be compared to the allocated values in the OPALS pointing budget to assess compliance with requirements.

  4. DNA denaturation through a model of the partition points on a one-dimensional lattice

    NASA Astrophysics Data System (ADS)

    Mejdani, R.; Huseini, H.

    1994-08-01

    We have shown that, by using a model of a partition-points gas on a one-dimensional lattice, we can study not only the saturation curves obtained before for enzyme kinetics, but also the denaturation process, i.e. the breaking of the hydrogen bonds connecting the two strands, when DNA is treated with heat. We think that this model, being very simple and mathematically transparent, can be advantageous for pedagogic goals and other theoretical investigations in chemistry or modern biology.

  5. The modeling and design of the Annular Suspension and Pointing System (ASPS) for Space Shuttle

    NASA Technical Reports Server (NTRS)

    Kuo, B. C.; Lin, W. C. W.

    1979-01-01

    The Annular Suspension and Pointing System (ASPS) is a payload auxiliary pointing device of the Space Shuttle. The ASPS is comprised of two major subassemblies, a vernier and a coarse pointing subsystem. The three functions provided by the ASPS are related to the pointing of the payload, centering the payload in the magnetic actuator assembly, and tracking the payload mounting plate and shuttle motions by the coarse gimbals. The equations of motion of a simplified planar model of the ASPS are derived. Attention is given to a state diagram of the dynamics of the ASPS with position-plus-rate controller, the nonlinear spring characteristic for the wire-cable torque of the ASPS, the design of the analog ASPS through decoupling and pole placement, and the time response of different components of the continuous control system.

  6. Hourly predictive Levenberg-Marquardt ANN and multi linear regression models for predicting of dew point temperature

    NASA Astrophysics Data System (ADS)

    Zounemat-Kermani, Mohammad

    2012-08-01

    In this study, the ability of two models, multi linear regression (MLR) and a Levenberg-Marquardt (LM) feed-forward neural network, to estimate the hourly dew point temperature was examined. Dew point temperature is the temperature at which water vapor in the air condenses into liquid. This temperature can be useful in estimating meteorological variables such as fog, rain, snow, dew, and evapotranspiration and in investigating agronomical issues such as stomatal closure in plants. The availability of hourly records of climatic data (air temperature, relative humidity and pressure) which could be used to predict dew point temperature motivated the modeling exercise. Additionally, the wind vector (wind speed magnitude and direction) and a conceptual input of weather condition were employed as other input variables. Three quantitative standard statistical performance evaluation measures, i.e. the root mean squared error, the mean absolute error, and the absolute logarithmic Nash-Sutcliffe efficiency coefficient (|Log(NS)|), were employed to evaluate the performances of the developed models. The results showed that applying the wind vector and weather condition as input vectors along with meteorological variables could slightly increase the ANN and MLR predictive accuracy. The results also revealed that LM-NN was superior to the MLR model, and the best performance was obtained by considering all potential input variables in terms of the different evaluation criteria.
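
    The three evaluation measures can be sketched as follows; the base-10 logarithm in |Log(NS)| is an assumption, since the abstract does not specify the base:

```python
import math

def rmse(obs, pred):
    """Root mean squared error between observed and predicted series."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mae(obs, pred):
    """Mean absolute error between observed and predicted series."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def abs_log_ns(obs, pred):
    """|log10(NS)| with NS the Nash-Sutcliffe efficiency; smaller is better,
    and 0 corresponds to a perfect NS of 1."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - p) ** 2 for o, p in zip(obs, pred))
    sst = sum((o - mean_obs) ** 2 for o in obs)
    return abs(math.log10(1.0 - sse / sst))
```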

  7. Spacing distribution functions for 1D point island model with irreversible attachment

    NASA Astrophysics Data System (ADS)

    Gonzalez, Diego; Einstein, Theodore; Pimpinelli, Alberto

    2011-03-01

    We study the configurational structure of the point island model for epitaxial growth in one dimension. In particular, we calculate the island gap and capture zone distributions. Our model is based on an approximate description of nucleation inside the gaps. Nucleation is described by the joint probability density p_n^XY(x,y), which represents the probability density to have nucleation at position x within a gap of size y. Our proposed functional form for p_n^XY(x,y) describes the statistical behavior of the system excellently. We compare our analytical model with extensive numerical simulations. Our model retains the most relevant physical properties of the system. This work was supported by the NSF-MRSEC at the University of Maryland, Grant No. DMR 05-20471, with ancillary support from the Center for Nanophysics and Advanced Materials (CNAM).

  8. Estimation of boiling points using density functional theory with polarized continuum model solvent corrections.

    PubMed

    Chan, Poh Yin; Tong, Chi Ming; Durrant, Marcus C

    2011-09-01

    An empirical method for estimation of the boiling points of organic molecules based on density functional theory (DFT) calculations with polarized continuum model (PCM) solvent corrections has been developed. The boiling points are calculated as the sum of three contributions. The first term is calculated directly from the structural formula of the molecule, and is related to its effective surface area. The second is a measure of the electronic interactions between molecules, based on the DFT-PCM solvation energy, and the third is employed only for planar aromatic molecules. The method is applicable to a very diverse range of organic molecules, with normal boiling points in the range of -50 to 500 °C, and includes ten different elements (C, H, Br, Cl, F, N, O, P, S and Si). Plots of observed versus calculated boiling points gave R²=0.980 for a training set of 317 molecules, and R²=0.979 for a test set of 74 molecules. The role of intramolecular hydrogen bonding in lowering the boiling points of certain molecules is quantitatively discussed.

  9. Interlinked population balance and cybernetic models for the simultaneous saccharification and fermentation of natural polymers.

    PubMed

    Ho, Yong Kuen; Doshi, Pankaj; Yeoh, Hak Koon; Ngoh, Gek Cheng

    2015-10-01

    Simultaneous Saccharification and Fermentation (SSF) is a process in which microbes must first excrete extracellular enzymes to break polymeric substrates such as starch or cellulose into edible nutrients, followed by in situ conversion of those nutrients into more valuable metabolites via fermentation. As such, SSF is very attractive as a one-pot synthesis method for biological products. However, due to the co-existence of multiple biochemical steps, modeling SSF faces two major challenges. The first is to capture the successive chain-end and/or random scission of the polymeric substrates over time, which determines the rate of generation of the various fermentable substrates. The second is to incorporate the response of the microbes, including their preferential substrate utilization, to such a complex broth. Each of these challenges has manifested itself in many related areas and has been competently, but separately, attacked with two diametrically different tools: Population Balance Modeling (PBM) and Cybernetic Modeling (CM), respectively. To date, they have yet to be applied in unison to SSF, resulting in generally inadequate or haphazard approaches to examining the dynamics and interactions of depolymerization and fermentation. To overcome this unsatisfactory state of affairs, the general linkage between PBM and CM is established here to model SSF. A notable feature is the flexible linkage, which allows the individual PBM and CM models to be independently modified to the desired level of detail. A more general treatment of the secretion of extracellular enzymes is also proposed in the CM model. Through a case study on the growth of a recombinant Saccharomyces cerevisiae capable of excreting a chain-end scission enzyme (glucoamylase) on starch, the interlinked model, calibrated using data from the literature (Nakamura et al., Biotechnol. Bioeng. 53:21-25, 1997), captured features not attainable by existing approaches. In particular, the effect
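
    The chain-end scission part of such a PBM can be sketched as a discrete population balance in which each event clips one monomer from a chain end. The first-order attack rate and function name below are illustrative assumptions, not the interlinked PBM-CM model of the paper:

```python
def chain_end_scission_step(counts, k, dt):
    """One explicit-Euler step of a chain-end scission population balance.

    counts[i] is the number of chains of length i+1. Each scission event
    turns a chain of length n into a chain of length n-1 plus one released
    monomer (counts[0]); a dimer therefore yields two monomers. The attack
    rate is assumed first order in chain count with rate constant k.
    """
    new = counts[:]
    for i in range(1, len(counts)):   # only chains of length >= 2 are cut
        flux = k * counts[i] * dt
        new[i] -= flux                # chain of length i+1 consumed
        new[i - 1] += flux            # chain of length i produced
        new[0] += flux                # plus one released monomer
    return new
```

    A useful sanity check on any such scheme is conservation of total monomer units, sum over i of (i+1)*counts[i], which this update preserves exactly.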

  10. A Hybrid Physics-Based Data-Driven Approach for Point-Particle Force Modeling

    NASA Astrophysics Data System (ADS)

    Moore, Chandler; Akiki, Georges; Balachandar, S.

    2017-11-01

    This study improves upon the physics-based pairwise interaction extended point-particle (PIEP) model. The PIEP model leverages a physical framework to predict fluid-mediated interactions between solid particles. While the PIEP model is a powerful tool, its pairwise assumption leads to increased error in flows with high particle volume fractions. To reduce this error, a regression algorithm is used to model the differences between the current PIEP model's predictions and the results of direct numerical simulations (DNS) for an array of monodisperse solid particles subjected to various flow conditions. The resulting statistical model and the physical PIEP model are superimposed to construct a hybrid, physics-based data-driven PIEP model. It must be noted that the performance of a pure data-driven approach without the model form provided by the physical PIEP model is substantially inferior. The hybrid model's predictive capabilities are analyzed using further DNS. In every case tested, the hybrid PIEP model's predictions are more accurate than those of the physical PIEP model. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1315138 and the U.S. DOE, NNSA, ASC Program, as a Cooperative Agreement under Contract No. DE-NA0002378.
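
    The superposition of the physical model and a learned correction can be sketched in one dimension, with simple linear least squares standing in for the study's regression algorithm (the function names are hypothetical):

```python
def fit_linear_residual(xs, dns_forces, physics_model):
    """Fit a linear residual r(x) = a*x + b to (DNS force - physics force)
    by ordinary least squares; returns the fitted residual function."""
    r = [f - physics_model(x) for x, f in zip(xs, dns_forces)]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_r = sum(r) / n
    a = (sum((x - mean_x) * (ri - mean_r) for x, ri in zip(xs, r))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_r - a * mean_x
    return lambda x: a * x + b

def hybrid_model(physics_model, residual_model):
    """Superimpose the physical prediction and the learned correction."""
    return lambda x: physics_model(x) + residual_model(x)
```

    The key design choice, as the abstract stresses, is that the regression only has to learn the (smaller, smoother) residual; the physical model supplies the dominant trend, which is why the pure data-driven alternative performs worse.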

  11. A non-ideal model for predicting the effect of dissolved salt on the flash point of solvent mixtures.

    PubMed

    Liaw, Horng-Jang; Wang, Tzu-Ai

    2007-03-06

    Flash point is one of the major quantities used to characterize the fire and explosion hazard of liquids. Liquids with dissolved salts are encountered in salt-distillation processes for separating close-boiling or azeotropic systems, and the addition of salts to a liquid may reduce its fire and explosion hazard. In this study, we have modified a previously proposed model for predicting the flash point of miscible mixtures to extend its application to solvent/salt mixtures. The modified model was verified by comparison with experimental data for organic solvent/salt and aqueous-organic solvent/salt mixtures to confirm its efficacy in predicting the flash points of these mixtures. The experimental results confirm marked increases in the liquid flash point on addition of inorganic salts relative to supplementation with equivalent quantities of water. Based on this evidence, it appears reasonable to suggest potential application of the model in assessing the fire and explosion hazard of solvent/salt mixtures and, further, that the addition of inorganic salts may prove useful for hazard reduction in flammable liquids.

  12. A Lidar Point Cloud Based Procedure for Vertical Canopy Structure Analysis And 3D Single Tree Modelling in Forest

    PubMed Central

    Wang, Yunsheng; Weinacker, Holger; Koch, Barbara

    2008-01-01

    A procedure for both vertical canopy structure analysis and 3D single tree modelling based on Lidar point clouds is presented in this paper. The whole research area is segmented into small study cells by a raster net. For each cell, a normalized point cloud, whose point heights represent the absolute heights of the ground objects, is generated from the original Lidar raw point cloud. The main tree canopy layers and the height ranges of the layers are detected according to a statistical analysis of the height distribution probability of the normalized raw points. For the 3D modelling, individual trees are detected and delineated not only from the top canopy layer but also from the sub-canopy layer. The normalized points are resampled into a local voxel space. A series of horizontal 2D projection images at different height levels is then generated with respect to the voxel space. Tree crown regions are detected from the projection images. Individual trees are then extracted by means of a pre-order forest traversal through all the tree crown regions at the different height levels. Finally, 3D tree crown models of the extracted individual trees are reconstructed. With further analyses of the 3D models of individual tree crowns, important parameters such as crown height range, crown volume and crown contours at different height levels can be derived. PMID:27879916
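
    The canopy-layer detection step can be sketched as a histogram analysis of the normalized point heights, with contiguous runs of well-populated height bins treated as layers. The bin size, density threshold, and function name are illustrative assumptions, not the paper's statistical procedure:

```python
def height_layers(heights, bin_size=1.0, min_fraction=0.05):
    """Detect canopy layers as contiguous runs of height bins whose point
    fraction exceeds min_fraction; returns (low, high) height ranges."""
    if not heights:
        return []
    nbins = int(max(heights) // bin_size) + 1
    counts = [0] * nbins
    for h in heights:
        counts[min(int(h // bin_size), nbins - 1)] += 1
    total = len(heights)
    layers, start = [], None
    for i, c in enumerate(counts + [0]):     # zero sentinel closes a final run
        dense = c / total >= min_fraction
        if dense and start is None:
            start = i
        elif not dense and start is not None:
            layers.append((start * bin_size, i * bin_size))
            start = None
    return layers
```

    On real data the histogram would first be smoothed, since raw Lidar height distributions are noisy enough to fragment a single canopy layer into several runs.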

  13. Backward bifurcations, turning points and rich dynamics in simple disease models.

    PubMed

    Zhang, Wenjing; Wahl, Lindi M; Yu, Pei

    2016-10-01

    In this paper, dynamical systems theory and bifurcation theory are applied to investigate the rich dynamical behaviors observed in three simple disease models. The 2- and 3-dimensional models we investigate have arisen in previous investigations of epidemiology, in-host disease, and autoimmunity. These closely related models display interesting dynamical behaviors, including bistability, recurrence, and regular oscillations, each of which has possible clinical or public health implications. In this contribution we elucidate the key role of backward bifurcations in the parameter regimes leading to the behaviors of interest. We demonstrate that backward bifurcations with varied positions of turning points facilitate the appearance of Hopf bifurcations, and the varied dynamical behaviors are then determined by the properties of the Hopf bifurcation(s), including their location and direction. A Maple program developed earlier is implemented to determine the stability of limit cycles bifurcating from the Hopf bifurcations. Numerical simulations are presented to illustrate phenomena of interest such as bistability, recurrence and oscillation. We also discuss the physical motivations for the models and the clinical implications of the resulting dynamics.
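
    The role of a backward bifurcation in creating bistability below the epidemic threshold can be illustrated with a one-dimensional normal form. This is an illustrative sketch, not one of the paper's 2- or 3-dimensional models; the equation and parameter values are assumptions.

```python
import numpy as np

# Illustrative normal form: equilibria of dI/dt = I * (R0 - 1 + b*I - I**2).
# For b > 0 the bifurcation at R0 = 1 is backward, so two positive (endemic)
# equilibria coexist with the disease-free state I = 0 for a range of R0 < 1.
def endemic_equilibria(R0, b):
    """Positive real roots of R0 - 1 + b*I - I**2 = 0."""
    roots = np.roots([-1.0, b, R0 - 1.0])
    return sorted(r.real for r in roots if abs(r.imag) < 1e-12 and r.real > 0)

# Backward case: R0 is below threshold, yet two endemic states exist
eq = endemic_equilibria(R0=0.9, b=1.0)
```

    The lower equilibrium acts as a threshold between the basins of the disease-free state and the upper endemic state, which is the bistability the abstract refers to; for `b = 0` (forward bifurcation) no endemic state exists below `R0 = 1`.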

  14. Liquid-liquid critical point in a simple analytical model of water.

    PubMed

    Urbic, Tomaz

    2016-10-01

    A statistical model for a simple three-dimensional Mercedes-Benz model of water was used to study phase diagrams. This model describes, at a simple level, the thermal and volumetric properties of waterlike molecules. A molecule is represented as a soft sphere with four directions in which hydrogen bonds can be formed. Two neighboring waters can interact through a van der Waals interaction or an orientation-dependent hydrogen-bonding interaction. For pure water, we explored properties such as molar volume, density, heat capacity, thermal expansion coefficient, and isothermal compressibility and found that the volumetric and thermal properties follow the same trends with temperature as in real water and are in good general agreement with Monte Carlo simulations. The model also exhibits two critical points, one for the liquid-gas transition and one for the transition between low-density and high-density fluid. Coexistence curves and a Widom line for the maximum and minimum in the thermal expansion coefficient divide the phase space of the model into three parts: a gas region, a high-density liquid, and a low-density liquid.

  15. Liquid-liquid critical point in a simple analytical model of water

    NASA Astrophysics Data System (ADS)

    Urbic, Tomaz

    2016-10-01

    A statistical model for a simple three-dimensional Mercedes-Benz model of water was used to study phase diagrams. This model describes, at a simple level, the thermal and volumetric properties of waterlike molecules. A molecule is represented as a soft sphere with four directions in which hydrogen bonds can be formed. Two neighboring waters can interact through a van der Waals interaction or an orientation-dependent hydrogen-bonding interaction. For pure water, we explored properties such as molar volume, density, heat capacity, thermal expansion coefficient, and isothermal compressibility and found that the volumetric and thermal properties follow the same trends with temperature as in real water and are in good general agreement with Monte Carlo simulations. The model also exhibits two critical points, one for the liquid-gas transition and one for the transition between low-density and high-density fluid. Coexistence curves and a Widom line for the maximum and minimum in the thermal expansion coefficient divide the phase space of the model into three parts: a gas region, a high-density liquid, and a low-density liquid.

  16. Adaptive surrogate model based multi-objective transfer trajectory optimization between different libration points

    NASA Astrophysics Data System (ADS)

    Peng, Haijun; Wang, Wei

    2016-10-01

    An adaptive surrogate model-based multi-objective optimization strategy that combines the benefits of invariant manifolds and low-thrust control is proposed in this paper for developing a low-computational-cost transfer trajectory between libration orbits around the L1 and L2 points of the Sun-Earth system. A new structure for the multi-objective transfer trajectory optimization model is established that divides the transfer trajectory into several segments and assigns the dominant role to either invariant manifolds or low-thrust control in each segment. To reduce the computational cost of multi-objective transfer trajectory optimization, an adaptive surrogate model based on a mixed sampling strategy is proposed. Numerical simulations show that the results obtained from the adaptive surrogate-based multi-objective optimization agree with those obtained using direct multi-objective optimization methods, while the computational workload of the surrogate-based approach is only approximately 10% of that of direct optimization. Furthermore, the generating efficiency of Pareto points with the adaptive surrogate-based approach is approximately 8 times that of direct multi-objective optimization. The proposed adaptive surrogate-based multi-objective optimization therefore provides clear advantages over direct multi-objective optimization methods.
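
    The adaptive surrogate idea, spending expensive model evaluations only where a cheap fitted approximation looks promising, can be sketched in one dimension. Everything here (the test objective, the Gaussian RBF surrogate, the stopping rule) is a hypothetical illustration, not the paper's mixed sampling strategy.

```python
import numpy as np

def expensive_objective(x):
    """Stand-in for a costly trajectory evaluation (hypothetical test function)."""
    return (x - 0.3) ** 2 + 0.1 * np.sin(8 * x)

def rbf_surrogate(xs, ys, eps=2.0):
    """Fit a Gaussian radial-basis-function interpolant through the samples."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    K = np.exp(-eps * (xs[:, None] - xs[None, :]) ** 2)
    w = np.linalg.solve(K + 1e-10 * np.eye(len(xs)), ys)
    return lambda x: float(np.exp(-eps * (x - xs) ** 2) @ w)

# Adaptive loop: search the cheap surrogate on a dense grid, and spend a
# true (expensive) evaluation only at the surrogate's current minimizer.
xs = list(np.linspace(0.0, 1.0, 5))
ys = [expensive_objective(x) for x in xs]
grid = np.linspace(0.0, 1.0, 401)
for _ in range(10):
    s = rbf_surrogate(xs, ys)
    x_new = float(grid[np.argmin([s(g) for g in grid])])
    if min(abs(x_new - x) for x in xs) < 1e-9:
        break                      # surrogate minimum already sampled
    xs.append(x_new)
    ys.append(expensive_objective(x_new))
best = xs[int(np.argmin(ys))]
```

    The cost saving comes from the same mechanism the abstract describes: each refit of the surrogate is far cheaper than a direct optimization over true model evaluations.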

  17. Testing the Simple Biosphere model (SiB) using point micrometeorological and biophysical data

    NASA Technical Reports Server (NTRS)

    Sellers, P. J.; Dorman, J. L.

    1987-01-01

    The suitability of the Simple Biosphere (SiB) model of Sellers et al. (1986) for calculation of the surface fluxes for use within general circulation models is assessed. The structure of the SiB model is described, and its performance is evaluated in terms of its ability to realistically and accurately simulate biophysical processes over a number of test sites, including Ruthe (Germany), South Carolina (U.S.), and Central Wales (UK), for which point biophysical and micrometeorological data were available. The model produced simulations of the energy balances of barley, wheat, maize, and Norway Spruce sites over periods ranging from 1 to 40 days. Generally, it was found that the model reproduced time series of latent, sensible, and ground-heat fluxes and surface radiative temperature comparable with the available data.

  18. Milestone-specific, Observed data points for evaluating levels of performance (MODEL) assessment strategy for anesthesiology residency programs.

    PubMed

    Nagy, Christopher J; Fitzgerald, Brian M; Kraus, Gregory P

    2014-01-01

    Anesthesiology residency programs will be expected to have Milestones-based evaluation systems in place by July 2014 as part of the Next Accreditation System. The San Antonio Uniformed Services Health Education Consortium (SAUSHEC) anesthesiology residency program developed and implemented a Milestones-based feedback and evaluation system a year ahead of schedule. It has been named the Milestone-specific, Observed Data points for Evaluating Levels of performance (MODEL) assessment strategy. The "MODEL Menu" and the "MODEL Blueprint" are tools that other anesthesiology residency programs can use in developing their own Milestones-based feedback and evaluation systems prior to ACGME-required implementation. Data from our early experience with the streamlined MODEL blueprint assessment strategy showed substantially improved faculty compliance with reporting requirements. The MODEL assessment strategy provides programs with a workable assessment method for residents, and important Milestones data points to programs for ACGME reporting.

  19. Industrial point source CO2 emission strength estimation with aircraft measurements and dispersion modelling.

    PubMed

    Carotenuto, Federico; Gualtieri, Giovanni; Miglietta, Franco; Riccio, Angelo; Toscano, Piero; Wohlfahrt, Georg; Gioli, Beniamino

    2018-02-22

    CO2 remains the greenhouse gas that contributes most to anthropogenic global warming, and the evaluation of its emissions is of major interest for both research and regulatory purposes. Emission inventories generally provide quite reliable estimates of CO2 emissions; however, because of the intrinsic uncertainties associated with these estimates, it is of great importance to validate emission inventories against independent estimates. This paper describes an integrated approach combining aircraft measurements and a puff dispersion modelling framework, applied to a CO2 industrial point source located in Biganos, France. CO2 density measurements were obtained by applying the mass balance method, while CO2 emission estimates were derived by implementing the CALMET/CALPUFF model chain. For the latter, three meteorological initializations were used: (i) WRF-modelled outputs initialized by ECMWF reanalyses; (ii) WRF-modelled outputs initialized by CFSR reanalyses and (iii) local in situ observations. Governmental inventory data were used as reference for all applications. The strengths and weaknesses of the different approaches, and how they affect emission estimation uncertainty, were investigated. The mass balance based on aircraft measurements was quite successful in capturing the point source emission strength (at worst with a 16% bias), while the accuracy of the dispersion modelling, particularly when using ECMWF initialization through the WRF model, was only slightly lower (estimation with an 18% bias). The analysis will help in highlighting some methodological best practices that can be used as guidelines for future experiments.
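
    The mass balance principle underlying the aircraft estimates can be sketched as follows: the source strength is the integral of the background-subtracted CO2 density times wind speed over a downwind crosswind plane. The grid, plume shape and numbers below are invented for illustration, not the Biganos data.

```python
import numpy as np

# Q = sum_ij (c_ij - c_bg) * u * dy * dz   [kg/s]
def mass_balance_flux(conc, c_bg, wind_speed, dy, dz):
    """Emission strength from a crosswind concentration screen (kg/s)."""
    excess = np.clip(np.asarray(conc) - c_bg, 0.0, None)  # kg/m^3 above background
    return excess.sum() * wind_speed * dy * dz

# Synthetic plume: Gaussian excess density on a 100 m x 100 m crosswind plane
y, z = np.meshgrid(np.linspace(-50, 50, 101), np.linspace(0, 100, 101))
c_bg = 7.0e-4                                          # background CO2, kg/m^3
plume = 1.0e-5 * np.exp(-(y**2 + (z - 50.0)**2) / (2 * 15.0**2))
Q = mass_balance_flux(c_bg + plume, c_bg, wind_speed=5.0, dy=1.0, dz=1.0)
```

    For this synthetic plume the discrete sum approximates the analytic integral 2π σ² × amplitude × wind speed ≈ 0.07 kg/s; in practice the screen is built from the aircraft transects at several altitudes.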

  20. Fixed point and anomaly mediation in partial N = 2 supersymmetric standard models

    NASA Astrophysics Data System (ADS)

    Yin, Wen

    2018-01-01

    Motivated by the simple toroidal compactification of extra-dimensional SUSY theories, we investigate a partial N = 2 supersymmetric (SUSY) extension of the standard model which has an N = 2 SUSY sector and an N = 1 SUSY sector. We point out that below the scale of the partial breaking of N = 2 to N = 1, the ratio of Yukawa to gauge couplings embedded in the original N = 2 gauge interaction in the N = 2 sector grows due to a fixed point. Since at the partial breaking scale the sfermion masses in the N = 2 sector are suppressed due to the N = 2 non-renormalization theorem, the anomaly mediation effect becomes important. If dominant, the anomaly-induced masses for the sfermions in the N = 2 sector are almost UV-insensitive due to the fixed point. Interestingly, these masses are always positive, i.e. there is no tachyonic slepton problem. From an example model, we show interesting phenomena differing from the ordinary MSSM. In particular, the dark matter particle can be a sbino, i.e. the scalar component of the N = 2 vector multiplet of U(1)_Y. To obtain the correct dark matter abundance, the mass of the sbino, as well as of the MSSM sparticles in the N = 2 sector, which have a typical mass pattern of anomaly mediation, is required to be small. Therefore, this scenario can be tested in the LHC and may be further confirmed by measurement of the N = 2 Yukawa couplings in future colliders. This model can explain dark matter, the muon g-2 anomaly, and gauge coupling unification, relaxes some ordinary problems within the MSSM, and is compatible with thermal leptogenesis.

  1. Cosmological model-independent test of ΛCDM with two-point diagnostic by the observational Hubble parameter data

    NASA Astrophysics Data System (ADS)

    Cao, Shu-Lei; Duan, Xiao-Wei; Meng, Xiao-Lei; Zhang, Tong-Jie

    2018-04-01

    Aiming at exploring the nature of dark energy (DE), we use forty-three observational Hubble parameter data (OHD) in the redshift range 0 < z ≤ 2.36 to make a cosmological model-independent test of the ΛCDM model with the two-point Omh²(z2; z1) diagnostic. In the ΛCDM model, with equation of state (EoS) w = -1, the two-point diagnostic relation Omh² ≡ Ωm h² holds, where Ωm is the present matter density parameter and h is the Hubble constant divided by 100 km s⁻¹ Mpc⁻¹. We utilize two methods, the weighted mean and median statistics, to bin the OHD and increase the signal-to-noise ratio of the measurements. The binning methods turn out to be promising and are considered robust. Applying the two-point diagnostic to the binned data, we find that although the best-fit values of Omh² fluctuate as the continuous redshift intervals change, on average they are consistent with a constant within the 1σ confidence interval. Therefore, we conclude that the ΛCDM model cannot be ruled out.
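
    The two-point diagnostic itself is straightforward to compute. The sketch below (function names are assumptions) verifies that in flat ΛCDM with w = -1 the diagnostic reduces to the constant Ωm h² for any redshift pair.

```python
import numpy as np

def omh2(z1, z2, H1, H2):
    """Two-point diagnostic Omh^2(z2; z1) from Hubble rates in km/s/Mpc:
    [H^2(z2) - H^2(z1)] / (100^2 [(1+z2)^3 - (1+z1)^3])."""
    return (H2**2 - H1**2) / (100.0**2 * ((1 + z2)**3 - (1 + z1)**3))

# In flat LCDM with w = -1, H^2(z) = H0^2 * (Om*(1+z)^3 + 1 - Om), so the
# dark-energy term cancels in the difference and the diagnostic is constant.
def H_lcdm(z, H0=70.0, Om=0.3):
    return H0 * np.sqrt(Om * (1 + z)**3 + 1 - Om)

val = omh2(0.1, 2.0, H_lcdm(0.1), H_lcdm(2.0))  # equals Om * h^2 = 0.3 * 0.7^2
```

    Deviations of the measured Omh² from a constant across redshift pairs would therefore signal a departure from ΛCDM, which is the logic of the test in the abstract.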

  2. Teaching Point-Group Symmetry with Three-Dimensional Models

    ERIC Educational Resources Information Center

    Flint, Edward B.

    2011-01-01

    Three tools for teaching symmetry in the context of an upper-level undergraduate or introductory graduate course on the chemical applications of group theory are presented. The first is a collection of objects that have the symmetries of all the low-symmetry and high-symmetry point groups and the point groups with rotational symmetries from 2-fold…

  3. Molecular and Kinetic Models for High-rate Thermal Degradation of Polyethylene

    DOE PAGES

    Lane, J. Matthew; Moore, Nathan W.

    2018-02-01

    Thermal degradation of polyethylene is studied under the extremely high rate temperature ramps expected in laser-driven and X-ray ablation experiments, from 10^10 to 10^14 K/s, in isochoric, condensed phases. The molecular evolution and macroscopic state variables are extracted as a function of density from reactive molecular dynamics simulations using the ReaxFF potential. The enthalpy, dissociation onset temperature, bond evolution, and observed cross-linking are shown to be rate dependent. These results are used to parametrize a kinetic rate model for the decomposition and coalescence of hydrocarbons as a function of temperature, temperature ramp rate, and density. In conclusion, the results are contrasted to first-order random-scission macrokinetic models often assumed for pyrolysis of linear polyethylene under ambient conditions.
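
    The rate dependence of the decomposition onset can be illustrated with a first-order Arrhenius model integrated under a linear temperature ramp T = T0 + βt; the pre-exponential factor A and activation energy Ea below are placeholder values, not parameters fitted to the ReaxFF simulations.

```python
import numpy as np

# First-order decomposition under a linear ramp, integrated in temperature:
#   dX/dT = -(A / beta) * exp(-Ea / (R*T)) * X,  X = remaining fraction.
def onset_temperature(beta, A=1.0e13, Ea=250e3, R=8.314,
                      T0=300.0, Tmax=9000.0, dT=1.0, X_onset=0.5):
    """Temperature at which the remaining fraction drops below X_onset."""
    X = 1.0
    for T in np.arange(T0, Tmax, dT):
        X *= np.exp(-(A / beta) * np.exp(-Ea / (R * T)) * dT)
        if X < X_onset:
            return T
    return Tmax

slow = onset_temperature(beta=1.0e10)   # K/s
fast = onset_temperature(beta=1.0e12)   # K/s
```

    Because a faster ramp leaves less time at each temperature, the apparent dissociation onset shifts to higher temperature, which is the qualitative rate dependence the abstract reports.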

  4. Molecular and Kinetic Models for High-rate Thermal Degradation of Polyethylene

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lane, J. Matthew; Moore, Nathan W.

    Thermal degradation of polyethylene is studied under the extremely high rate temperature ramps expected in laser-driven and X-ray ablation experiments, from 10^10 to 10^14 K/s, in isochoric, condensed phases. The molecular evolution and macroscopic state variables are extracted as a function of density from reactive molecular dynamics simulations using the ReaxFF potential. The enthalpy, dissociation onset temperature, bond evolution, and observed cross-linking are shown to be rate dependent. These results are used to parametrize a kinetic rate model for the decomposition and coalescence of hydrocarbons as a function of temperature, temperature ramp rate, and density. In conclusion, the results are contrasted to first-order random-scission macrokinetic models often assumed for pyrolysis of linear polyethylene under ambient conditions.

  5. Modelling The Effect of Changing Point Systems to Teams’ Competition Standing in A Malaysian Soccer Super League

    NASA Astrophysics Data System (ADS)

    Mat Yusof, Muhammad; Khalid, Ruzelan; Hamid, Mohamad Shukri Abdul; Mansor, Rosnalini; Sulaiman, Tajularipin

    2018-05-01

    In a sports league such as a soccer league, the teams' competition standing is based on a cumulative point system. Typically, the standard point system awarded in every match to the winning, drawing and losing teams is the 3-1-0 system. In this paper, we explore the effect of changing point systems on teams' competition standing by changing the weightage values for winning, drawing and losing teams. Three point systems are explored in our soccer simulation model: firstly the 3-1-0, secondly the 2-1-0 and thirdly the 4-1-0 system. Based on the teams participating in the Malaysian soccer Super League, our simulation results show that there are small changes in teams' competition standing when we compare the actual rank and the simulated rank positions. However, the 4-1-0 point system recorded the highest Pearson correlation value of 0.97, followed by the 2-1-0 point system (0.95) and the 3-1-0 point system (0.94).
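
    The effect of the weightage values can be reproduced with a few lines of standings arithmetic. The mini-league below is hypothetical, chosen so that a two-wins team and a one-win-two-draws team go from a one-point gap under 3-1-0 to a tie under 2-1-0.

```python
from collections import Counter

def standings(results, points):
    """Rank teams by total points.

    results: list of (home, away, home_goals, away_goals) tuples
    points:  (win, draw, loss) tuple, e.g. (3, 1, 0)
    """
    win, draw, loss = points
    table = Counter()
    for home, away, hg, ag in results:
        if hg > ag:
            table[home] += win; table[away] += loss
        elif hg < ag:
            table[home] += loss; table[away] += win
        else:
            table[home] += draw; table[away] += draw
    return sorted(table.items(), key=lambda kv: -kv[1])

# Hypothetical mini-league: A has 2 wins; B has 1 win and 2 draws.
results = [("A", "C", 2, 0), ("A", "D", 1, 0),
           ("B", "C", 1, 1), ("B", "D", 2, 2), ("B", "E", 3, 0)]
```

    Under 3-1-0, A leads B 6 to 5; under 2-1-0 they tie at 4; under 4-1-0 the gap widens to 8 versus 6, which is how re-weighting wins relative to draws can reshuffle a standing.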

  6. Point-point and point-line moving-window correlation spectroscopy and its applications

    NASA Astrophysics Data System (ADS)

    Zhou, Qun; Sun, Suqin; Zhan, Daqi; Yu, Zhiwu

    2008-07-01

    In this paper, we present a new extension of generalized two-dimensional (2D) correlation spectroscopy. Two new algorithms, namely point-point (P-P) correlation and point-line (P-L) correlation, have been introduced for moving-window 2D correlation (MW2D) analysis. The new method has been applied to a spectral model consisting of two different processes. The results indicate that P-P correlation spectroscopy can unveil the details of and reconstruct the entire process, whilst P-L correlation provides the general features of the processes concerned. The phase transition behavior of dimyristoylphosphatidylethanolamine (DMPE) has been studied using MW2D correlation spectroscopy. The newly proposed method confirms that the phase transition temperature is 56 °C, the same as the result obtained from a differential scanning calorimeter. To illustrate the new method further, a lysine and lactose mixture has been studied under thermal perturbation. Using P-P MW2D, the Maillard reaction of the mixture was clearly monitored, which is difficult with conventional display of FTIR spectra.

  7. GPS/GLONASS Combined Precise Point Positioning with Receiver Clock Modeling

    PubMed Central

    Wang, Fuhong; Chen, Xinghan; Guo, Fei

    2015-01-01

    Research has demonstrated that receiver clock modeling can reduce the correlation coefficients among the parameters of receiver clock bias, station height and zenith tropospheric delay. This paper introduces receiver clock modeling to GPS/GLONASS combined precise point positioning (PPP), aiming to better separate the receiver clock bias and station coordinates and thereby improve positioning accuracy. Firstly, the basic mathematical models, including the GPS/GLONASS observation equations, stochastic model, and receiver clock model, are briefly introduced. Then datasets from several IGS stations equipped with high-stability atomic clocks are used for kinematic PPP tests. To investigate the performance of PPP, including positioning accuracy and convergence time, a week (1–7 January 2014) of GPS/GLONASS data retrieved from these IGS stations is processed with different schemes. The results indicate that both positioning accuracy and convergence time benefit from receiver clock modeling; this is particularly pronounced for the vertical component. RMS statistics show that the average improvement in three-dimensional positioning accuracy reaches 30%–40%, and sometimes even exceeds 60% for specific stations. Compared to GPS-only PPP, solutions of the combined GPS/GLONASS PPP are much better whether or not the receiver clock offsets are modeled, indicating that positioning accuracy and reliability are significantly improved by the additional GLONASS satellites in the case of an insufficient number of GPS satellites or poor geometry. In addition to receiver clock modeling, the impacts of different inter-system timing bias (ISB) models are investigated. For the case of a sufficient number of satellites with fairly good geometry, the PPP performance is not seriously affected by the ISB model due to the low correlation between the ISB and the other parameters. However, the refinement of the ISB model weakens the

  8. On the stability and dynamics of stochastic spiking neuron models: Nonlinear Hawkes process and point process GLMs

    PubMed Central

    Truccolo, Wilson

    2017-01-01

    Point process generalized linear models (PP-GLMs) provide an important statistical framework for modeling spiking activity in single-neurons and neuronal networks. Stochastic stability is essential when sampling from these models, as done in computational neuroscience to analyze statistical properties of neuronal dynamics and in neuro-engineering to implement closed-loop applications. Here we show, however, that despite passing common goodness-of-fit tests, PP-GLMs estimated from data are often unstable, leading to divergent firing rates. The inclusion of absolute refractory periods is not a satisfactory solution since the activity then typically settles into unphysiological rates. To address these issues, we derive a framework for determining the existence and stability of fixed points of the expected conditional intensity function (CIF) for general PP-GLMs. Specifically, in nonlinear Hawkes PP-GLMs, the CIF is expressed as a function of the previous spike history and exogenous inputs. We use a mean-field quasi-renewal (QR) approximation that decomposes spike history effects into the contribution of the last spike and an average of the CIF over all spike histories prior to the last spike. Fixed points for stationary rates are derived as self-consistent solutions of integral equations. Bifurcation analysis and the number of fixed points predict that the original models can show stable, divergent, and metastable (fragile) dynamics. For fragile models, fluctuations of the single-neuron dynamics predict expected divergence times after which rates approach unphysiologically high values. This metric can be used to estimate the probability of rates to remain physiological for given time periods, e.g., for simulation purposes. We demonstrate the use of the stability framework using simulated single-neuron examples and neurophysiological recordings. Finally, we show how to adapt PP-GLM estimation procedures to guarantee model stability. Overall, our results provide a
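
    The fixed-point analysis can be illustrated with a simplified self-consistency condition for the stationary expected rate of an exponential-link model. This is a sketch of the general idea only, not the paper's quasi-renewal calculation; the link function and parameter values are assumptions.

```python
import numpy as np

# For an exponential-link model with integrated history kernel w, a
# stationary expected rate must satisfy  lam = exp(b + w * lam).
def rate_fixed_points(b, w, lam_max=50.0, n=200000):
    """Locate fixed points as sign changes of g(lam) = lam - exp(b + w*lam)."""
    lam = np.linspace(0.0, lam_max, n)
    g = lam - np.exp(b + w * lam)
    idx = np.where(np.diff(np.sign(g)) != 0)[0]
    return lam[idx]

fps = rate_fixed_points(b=-1.0, w=0.2)
# Two fixed points: a low stable rate (~0.4) and a high unstable one (~20),
# beyond which the self-excitation wins and the expected rate diverges.
```

    This mirrors the abstract's classification: a model with only the low fixed point is stable, one with no fixed point is divergent, and the two-fixed-point case is the "fragile" regime where a fluctuation past the unstable point triggers runaway rates.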

  9. On the stability and dynamics of stochastic spiking neuron models: Nonlinear Hawkes process and point process GLMs.

    PubMed

    Gerhard, Felipe; Deger, Moritz; Truccolo, Wilson

    2017-02-01

    Point process generalized linear models (PP-GLMs) provide an important statistical framework for modeling spiking activity in single-neurons and neuronal networks. Stochastic stability is essential when sampling from these models, as done in computational neuroscience to analyze statistical properties of neuronal dynamics and in neuro-engineering to implement closed-loop applications. Here we show, however, that despite passing common goodness-of-fit tests, PP-GLMs estimated from data are often unstable, leading to divergent firing rates. The inclusion of absolute refractory periods is not a satisfactory solution since the activity then typically settles into unphysiological rates. To address these issues, we derive a framework for determining the existence and stability of fixed points of the expected conditional intensity function (CIF) for general PP-GLMs. Specifically, in nonlinear Hawkes PP-GLMs, the CIF is expressed as a function of the previous spike history and exogenous inputs. We use a mean-field quasi-renewal (QR) approximation that decomposes spike history effects into the contribution of the last spike and an average of the CIF over all spike histories prior to the last spike. Fixed points for stationary rates are derived as self-consistent solutions of integral equations. Bifurcation analysis and the number of fixed points predict that the original models can show stable, divergent, and metastable (fragile) dynamics. For fragile models, fluctuations of the single-neuron dynamics predict expected divergence times after which rates approach unphysiologically high values. This metric can be used to estimate the probability of rates to remain physiological for given time periods, e.g., for simulation purposes. We demonstrate the use of the stability framework using simulated single-neuron examples and neurophysiological recordings. Finally, we show how to adapt PP-GLM estimation procedures to guarantee model stability. Overall, our results provide a

  10. Modelling the large-scale redshift-space 3-point correlation function of galaxies

    NASA Astrophysics Data System (ADS)

    Slepian, Zachary; Eisenstein, Daniel J.

    2017-08-01

    We present a configuration-space model of the large-scale galaxy 3-point correlation function (3PCF) based on leading-order perturbation theory and including redshift-space distortions (RSD). This model should be useful in extracting distance-scale information from the 3PCF via the baryon acoustic oscillation method. We include the first redshift-space treatment of biasing by the baryon-dark matter relative velocity. Overall, on large scales the effect of RSD is primarily a renormalization of the 3PCF that is roughly independent of both physical scale and triangle opening angle; for our adopted Ωm and bias values, the rescaling is a factor of ˜1.8. We also present an efficient scheme for computing 3PCF predictions from our model, important for allowing fast exploration of the space of cosmological parameters in future analyses.

  11. Birth-death models and coalescent point processes: the shape and probability of reconstructed phylogenies.

    PubMed

    Lambert, Amaury; Stadler, Tanja

    2013-12-01

    Forward-in-time models of diversification (i.e., speciation and extinction) produce phylogenetic trees that grow "vertically" as time goes by. Pruning the extinct lineages out of such trees leads to natural models for reconstructed trees (i.e., phylogenies of extant species). Alternatively, reconstructed trees can be modelled by coalescent point processes (CPPs), where trees grow "horizontally" by the sequential addition of vertical edges. Each new edge starts at some random speciation time and ends at the present time; speciation times are drawn from the same distribution independently. CPPs lead to extremely fast computation of tree likelihoods and simulation of reconstructed trees. Their topology always follows the uniform distribution on ranked tree shapes (URT). We characterize which forward-in-time models lead to URT reconstructed trees and among these, which lead to CPP reconstructed trees. We show that for any "asymmetric" diversification model in which speciation rates only depend on time and extinction rates only depend on time and on a non-heritable trait (e.g., age), the reconstructed tree is CPP, even if extant species are incompletely sampled. If rates additionally depend on the number of species, the reconstructed tree is (only) URT (but not CPP). We characterize the common distribution of speciation times in the CPP description, and discuss incomplete species sampling as well as three special model cases in detail: (1) the extinction rate does not depend on a trait; (2) rates do not depend on time; (3) mass extinctions may happen additionally at certain points in the past. Copyright © 2013 Elsevier Inc. All rights reserved.

  12. Model-based decoding, information estimation, and change-point detection techniques for multineuron spike trains.

    PubMed

    Pillow, Jonathan W; Ahmadian, Yashar; Paninski, Liam

    2011-01-01

    One of the central problems in systems neuroscience is to understand how neural spike trains convey sensory information. Decoding methods, which provide an explicit means for reading out the information contained in neural spike responses, offer a powerful set of tools for studying the neural coding problem. Here we develop several decoding methods based on point-process neural encoding models, or forward models that predict spike responses to stimuli. These models have concave log-likelihood functions, which allow efficient maximum-likelihood model fitting and stimulus decoding. We present several applications of the encoding model framework to the problem of decoding stimulus information from population spike responses: (1) a tractable algorithm for computing the maximum a posteriori (MAP) estimate of the stimulus, the most probable stimulus to have generated an observed single- or multiple-neuron spike train response, given some prior distribution over the stimulus; (2) a gaussian approximation to the posterior stimulus distribution that can be used to quantify the fidelity with which various stimulus features are encoded; (3) an efficient method for estimating the mutual information between the stimulus and the spike trains emitted by a neural population; and (4) a framework for the detection of change-point times (the time at which the stimulus undergoes a change in mean or variance) by marginalizing over the posterior stimulus distribution. We provide several examples illustrating the performance of these estimators with simulated and real neural data.

  13. Modeling shock responses of plastic bonded explosives using material point method

    NASA Astrophysics Data System (ADS)

    Shang, Hailin; Zhao, Feng; Fu, Hua

    2017-01-01

    Shock responses of plastic bonded explosives are modeled using the material point method as implemented in the Uintah Computational Framework. A two-dimensional simulation model was established based on a micrograph of PBX9501. Shock loading of the explosive was performed by a piston moving at a constant velocity. Unreactive simulation results indicate that under shock loading severe plastic strain appears on the boundaries of HMX grains. Simultaneously, the plastic strain energy transforms into thermal energy, causing the temperature to rise rapidly in grain boundary areas. The influence of shock strength on the response of the explosive was also investigated by increasing the piston velocity. The results show that with increasing shock strength the distributions of plastic strain and temperature do not change significantly, but their values increase markedly: the higher the shock strength, the higher the temperature rise.

  14. Exploring the squeezed three-point galaxy correlation function with generalized halo occupation distribution models

    NASA Astrophysics Data System (ADS)

    Yuan, Sihan; Eisenstein, Daniel J.; Garrison, Lehman H.

    2018-04-01

    We present the GeneRalized ANd Differentiable Halo Occupation Distribution (GRAND-HOD) routine that generalizes the standard 5 parameter halo occupation distribution model (HOD) with various halo-scale physics and assembly bias. We describe the methodology of 4 different generalizations: satellite distribution generalization, velocity bias, closest approach distance generalization, and assembly bias. We showcase the signatures of these generalizations in the 2-point correlation function (2PCF) and the squeezed 3-point correlation function (squeezed 3PCF). We identify generalized HOD prescriptions that are nearly degenerate in the projected 2PCF and demonstrate that these degeneracies are broken in the redshift-space anisotropic 2PCF and the squeezed 3PCF. We also discuss the possibility of identifying degeneracies in the anisotropic 2PCF and further demonstrate the extra constraining power of the squeezed 3PCF on galaxy-halo connection models. We find that within our current HOD framework, the anisotropic 2PCF can predict the squeezed 3PCF better than its statistical error. This implies that a discordant squeezed 3PCF measurement could falsify the particular HOD model space. Alternatively, it is possible that further generalizations of the HOD model would open opportunities for the squeezed 3PCF to provide novel parameter measurements. The GRAND-HOD Python package is publicly available at https://github.com/SandyYuan/GRAND-HOD.
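
    The baseline that GRAND-HOD generalizes is the standard 5-parameter HOD. A minimal sketch of its mean occupation functions follows; the parameter values are arbitrary illustrations, not fits from the paper.

```python
import numpy as np
from math import erf

# Standard 5-parameter HOD:
#   <N_cen>(M) = 1/2 * [1 + erf((log10 M - log10 M_min) / sigma_logM)]
#   <N_sat>(M) = <N_cen>(M) * ((M - M0) / M1) ** alpha   for M > M0, else 0
def mean_ncen(M, logMmin=13.0, sigma=0.5):
    """Mean number of central galaxies in a halo of mass M (Msun/h)."""
    return 0.5 * (1.0 + erf((np.log10(M) - logMmin) / sigma))

def mean_nsat(M, logMmin=13.0, sigma=0.5, M0=1e13, M1=1e14, alpha=1.0):
    """Mean number of satellite galaxies, modulated by the central occupation."""
    sat = np.where(M > M0, ((np.maximum(M, M0) - M0) / M1) ** alpha, 0.0)
    return mean_ncen(M, logMmin, sigma) * sat

nc = mean_ncen(1e13)   # 0.5 exactly at M = M_min
ns = mean_nsat(1e15)   # ~ (1e15 - 1e13) / 1e14, times <N_cen> ~ 1
```

    The generalizations described in the abstract (satellite profile, velocity bias, closest approach, assembly bias) add parameters on top of this baseline without changing these mean occupations.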

  15. Freezing point depression in model Lennard-Jones solutions

    NASA Astrophysics Data System (ADS)

    Koschke, Konstantin; Jörg Limbach, Hans; Kremer, Kurt; Donadio, Davide

    2015-09-01

    Crystallisation of liquid solutions is of utmost importance in a wide variety of processes in materials, atmospheric and food science. Depending on the type and concentration of solutes the freezing point shifts, thus allowing control over the thermodynamics of complex fluids. Here we investigate the basic principles of solute-induced freezing point depression by computing the melting temperature of a Lennard-Jones fluid with low concentrations of solutes, by means of equilibrium molecular dynamics simulations. The effect of solvophilic and weakly solvophobic solutes at low concentrations is analysed, scanning systematically their size and concentration. We identify the range of parameters that produces deviations from the linear dependence of the freezing point on the molal concentration of solutes expected for ideal solutions. Our simulations also allow us to link the shifts in coexistence temperature to the microscopic structure of the solutions.
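
    The ideal-solution baseline against which such deviations are measured is the linear freezing-point-depression law ΔT = K_f·b, with the cryoscopic constant fixed by solvent properties. A short sketch, with water used only as an illustrative check:

    ```python
    R = 8.314  # gas constant, J/(mol K)

    def cryoscopic_constant(t_melt, molar_mass, h_fus):
        """K_f = R * T_m^2 * M / dH_fus  (K kg/mol), ideal dilute solution.
        t_melt in K, molar_mass in kg/mol, h_fus in J/mol."""
        return R * t_melt ** 2 * molar_mass / h_fus

    def freezing_point_depression(kf, molality):
        """Ideal linear law: dT = K_f * b, b in mol solute per kg solvent."""
        return kf * molality

    # Water: T_m = 273.15 K, M = 0.018015 kg/mol, dH_fus = 6010 J/mol
    # gives the familiar K_f of about 1.86 K kg/mol.
    ```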

  16. From Particles and Point Clouds to Voxel Models: High Resolution Modeling of Dynamic Landscapes in Open Source GIS

    NASA Astrophysics Data System (ADS)

    Mitasova, H.; Hardin, E. J.; Kratochvilova, A.; Landa, M.

    2012-12-01

    Multitemporal data acquired by modern mapping technologies provide unique insights into processes driving land surface dynamics. These high resolution data also offer an opportunity to improve the theoretical foundations and accuracy of process-based simulations of evolving landforms. We discuss the development of a new generation of visualization and analytics tools for GRASS GIS designed for 3D multitemporal data from repeated lidar surveys and from landscape process simulations. We focus on data and simulation methods that are based on point sampling of continuous fields and lead to representation of evolving surfaces as series of raster map layers or voxel models. For multitemporal lidar data we present workflows that combine open source point cloud processing tools with GRASS GIS and custom Python scripts to model and analyze the dynamics of coastal topography (Figure 1), and we outline the development of a coastal analysis toolbox. The simulations focus on a particle sampling method for solving continuity equations and its application to geospatial modeling of landscape processes. In addition to the water and sediment transport models already implemented in GIS, the new capabilities under development combine OpenFOAM for wind shear stress simulation with a new module for aeolian sand transport and dune evolution simulations. Comparison of observed dynamics with the results of simulations is supported by a new, integrated 2D and 3D visualization interface that provides highly interactive and intuitive access to the redesigned and enhanced visualization tools. Several case studies illustrate the presented methods and tools, demonstrate the power of workflows built with FOSS, and highlight their interoperability. Figure 1. Isosurfaces representing the evolution of the shoreline and the z = 4.5 m contour between the years 1997-2011 at Cape Hatteras, NC, extracted from a voxel model derived from a series of lidar-based DEMs.

  17. Powerful model for the point source sky: Far-ultraviolet and enhanced midinfrared performance

    NASA Technical Reports Server (NTRS)

    Cohen, Martin

    1994-01-01

    I report further developments of the Wainscoat et al. (1992) model originally created for the point source infrared sky. The already detailed and realistic representation of the Galaxy (disk, spiral arms and local spur, molecular ring, bulge, spheroid) has been improved, guided by CO surveys of local molecular clouds and by the inclusion of a component to represent Gould's Belt. The newest version of the model is very well validated by Infrared Astronomy Satellite (IRAS) source counts. A major new aspect is the extension of the same model down to the far ultraviolet. I compare predicted and observed far-ultraviolet source counts from the Apollo 16 'S201' experiment (1400 Å) and the TD1 satellite (for the 1565 Å band).

  18. Quantitative structure-property relationships for prediction of boiling point, vapor pressure, and melting point.

    PubMed

    Dearden, John C

    2003-08-01

    Boiling point, vapor pressure, and melting point are important physicochemical properties in the modeling of the distribution and fate of chemicals in the environment. However, such data often are not available and therefore must be estimated. Over the years, many attempts have been made to calculate boiling points, vapor pressures, and melting points using quantitative structure-property relationships, and this review examines and discusses the work published in this area, concentrating particularly on recent studies. A number of software programs are commercially available for the calculation of boiling point, vapor pressure, and melting point, and these have been tested for their predictive ability with a test set of 100 organic chemicals.

  19. Simple point vortex model for the relaxation of 2D superfluid turbulence in a Bose-Einstein condensate

    NASA Astrophysics Data System (ADS)

    Kim, Joon Hyun; Kwon, Woo Jin; Shin, Yong-Il

    2016-05-01

    In a recent experiment, it was found that the dissipative evolution of a corotating vortex pair in a trapped Bose-Einstein condensate is well described by a point vortex model with longitudinal friction on the vortex motion, and the thermal friction coefficient was determined as a function of sample temperature. In this poster, we present a numerical study of the relaxation of 2D superfluid turbulence based on the dissipative point vortex model. We consider a homogeneous system in a cylindrical trap containing randomly distributed vortices and implement vortex-antivortex pair annihilation by removing a pair when its separation becomes smaller than a certain threshold value. We characterize the relaxation of the turbulent vortex states by the decay time required for the vortex number to be reduced to a quarter of its initial value. We find that the vortex decay time is inversely proportional to the thermal friction coefficient. In particular, the decay times obtained in this work show good quantitative agreement with the experimental results, indicating that, in spite of its simplicity, the point vortex model reasonably captures the physics of the relaxation dynamics of the real system.
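
    A minimal sketch of such a dissipative point vortex model on an unbounded plane, with one common convention for the mutual-friction term; the trap boundary and image vortices of the actual study are omitted, and all parameters are illustrative. A vortex dipole drifts, spirals together under friction, and annihilates below the threshold separation.

    ```python
    import numpy as np

    def induced_velocities(pos, sign):
        """Superfluid velocity at each vortex from all others (kappa/2pi = 1)."""
        v = np.zeros_like(pos)
        for i in range(len(pos)):
            for j in range(len(pos)):
                if i != j:
                    d = pos[i] - pos[j]
                    v[i] += sign[j] * np.array([-d[1], d[0]]) / (d @ d)
        return v

    def evolve(pos, sign, alpha=0.2, dt=5e-4, steps=10000, r_ann=0.05):
        """Euler integration of dr_i/dt = v_i - alpha * s_i (z x v_i);
        an opposite-sign pair closer than r_ann is removed (annihilation)."""
        pos = np.array(pos, dtype=float)
        sign = list(sign)
        for _ in range(steps):
            if len(sign) < 2:
                break
            v = induced_velocities(pos, sign)
            zxv = np.stack([-v[:, 1], v[:, 0]], axis=1)  # z-hat cross v
            pos = pos + dt * (v - alpha * np.array(sign)[:, None] * zxv)
            kill = None
            for i in range(len(sign)):
                for j in range(i + 1, len(sign)):
                    d = pos[i] - pos[j]
                    if sign[i] * sign[j] < 0 and d @ d < r_ann ** 2:
                        kill = (i, j)
            if kill is not None:
                pos = np.delete(pos, list(kill), axis=0)
                sign = [s for k, s in enumerate(sign) if k not in kill]
        return pos, sign
    ```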

  20. A High Precision Survey of the Molecular Dynamics of Mammalian Clathrin-Mediated Endocytosis

    PubMed Central

    Taylor, Marcus J.; Perrais, David; Merrifield, Christien J.

    2011-01-01

    Dual colour total internal reflection fluorescence microscopy is a powerful tool for decoding the molecular dynamics of clathrin-mediated endocytosis (CME). Typically, the recruitment of a fluorescent protein–tagged endocytic protein was referenced to the disappearance of spot-like clathrin-coated structure (CCS), but the precision of spot-like CCS disappearance as a marker for canonical CME remained unknown. Here we have used an imaging assay based on total internal reflection fluorescence microscopy to detect scission events with a resolution of ∼2 s. We found that scission events engulfed comparable amounts of transferrin receptor cargo at CCSs of different sizes and CCS did not always disappear following scission. We measured the recruitment dynamics of 34 types of endocytic protein to scission events: Abp1, ACK1, amphiphysin1, APPL1, Arp3, BIN1, CALM, CIP4, clathrin light chain (Clc), cofilin, coronin1B, cortactin, dynamin1/2, endophilin2, Eps15, Eps8, epsin2, FBP17, FCHo1/2, GAK, Hip1R, lifeAct, mu2 subunit of the AP2 complex, myosin1E, myosin6, NECAP, N-WASP, OCRL1, Rab5, SNX9, synaptojanin2β1, and syndapin2. For each protein we aligned ∼1,000 recruitment profiles to their respective scission events and constructed characteristic “recruitment signatures” that were grouped, as for yeast, to reveal the modular organization of mammalian CME. A detailed analysis revealed the unanticipated recruitment dynamics of SNX9, FBP17, and CIP4 and showed that the same set of proteins was recruited, in the same order, to scission events at CCSs of different sizes and lifetimes. Collectively these data reveal the fine-grained temporal structure of CME and suggest a simplified canonical model of mammalian CME in which the same core mechanism of CME, involving actin, operates at CCSs of diverse sizes and lifetimes. PMID:21445324
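
    Constructing a recruitment signature amounts to aligning many single-event fluorescence traces on their scission times and averaging. A schematic sketch (the array layout and window size are illustrative assumptions, not the authors' analysis code):

    ```python
    def recruitment_signature(traces, event_times, window=10):
        """Average many single-event traces after aligning each to its
        scission event time; returns the mean profile on [-window, +window]."""
        out = [0.0] * (2 * window + 1)
        n = 0
        for trace, t0 in zip(traces, event_times):
            if t0 - window < 0 or t0 + window >= len(trace):
                continue  # event too close to the trace edge to align fully
            for k in range(-window, window + 1):
                out[k + window] += trace[t0 + k]
            n += 1
        return [v / n for v in out]
    ```

    With ~1,000 aligned profiles per protein, as in the study, event-locked features that are invisible in single noisy traces emerge in the average.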

  1. Nonuniform multiview color texture mapping of image sequence and three-dimensional model for faded cultural relics with sift feature points

    NASA Astrophysics Data System (ADS)

    Li, Na; Gong, Xingyu; Li, Hongan; Jia, Pengtao

    2018-01-01

    For faded relics, such as the Terracotta Army, the 2D-3D registration between an optical camera and a point cloud model is an important part of color texture reconstruction and further applications. This paper proposes a nonuniform multiview color texture mapping for the image sequence and the three-dimensional (3D) point cloud model collected by a Handyscan3D scanner. We first introduce nonuniform multiview calibration, including an explanation of its algorithm principle and an analysis of its advantages. We then establish transformation equations based on SIFT feature points for the multiview image sequence. At the same time, the selection of nonuniform multiview SIFT feature points is introduced in detail. Finally, the solving process of the collinear equations based on multiview perspective projection is given in three steps with a flowchart. In the experiment, this method is applied to the color reconstruction of the kneeling figurine, Tangsancai lady, and general figurine. The results demonstrate that the proposed method provides effective support for the color reconstruction of faded cultural relics and is able to improve the accuracy of 2D-3D registration between the image sequence and the point cloud model.

  2. General Description of Fission Observables: GEF Model Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmidt, K.-H.; Jurado, B., E-mail: jurado@cenbg.in2p3.fr; Amouroux, C.

    2016-01-15

    The GEF (“GEneral description of Fission observables”) model code is documented. It describes the observables for spontaneous fission, neutron-induced fission and, more generally, for fission of a compound nucleus from any other entrance channel, with given excitation energy and angular momentum. The GEF model is applicable for a wide range of isotopes from Z = 80 to Z = 112 and beyond, up to excitation energies of about 100 MeV. The results of the GEF model are compared with fission barriers, fission probabilities, fission-fragment mass- and nuclide distributions, isomeric ratios, total kinetic energies, and prompt-neutron and prompt-gamma yields and energy spectra from neutron-induced and spontaneous fission. Derived properties of delayed neutrons and decay heat are also considered. The GEF model is based on a general approach to nuclear fission that explains a great part of the complex appearance of fission observables on the basis of fundamental laws of physics and general properties of microscopic systems and mathematical objects. The topographic theorem is used to estimate the fission-barrier heights from theoretical macroscopic saddle-point and ground-state masses and experimental ground-state masses. Motivated by the theoretically predicted early localisation of nucleonic wave functions in a necked-in shape, the properties of the relevant fragment shells are extracted. These are used to determine the depths and the widths of the fission valleys corresponding to the different fission channels and to describe the fission-fragment distributions and deformations at scission by a statistical approach. A modified composite nuclear-level-density formula is proposed. It respects some features in the superfluid regime that are in accordance with new experimental findings and with theoretical expectations. These are a constant-temperature behaviour that is consistent with a considerably increased heat capacity and an increased pairing condensation energy

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mei, Donghai; Lebarbier, Vanessa M.; Rousseau, Roger

    In a combined experimental and first-principles density functional theory (DFT) study, benzene steam reforming (BSR) over MgAl2O4-supported Rh and Ir catalysts was investigated. Experimentally, it has been found that both highly dispersed Rh and Ir clusters (1-2 nm) on the MgAl2O4 spinel support are stable during BSR in the temperature range of 700-850°C. Compared to the Ir/MgAl2O4 catalyst, the Rh/MgAl2O4 catalyst is more active, with higher benzene turnover frequency and conversion. At typical steam conditions with a steam-to-carbon ratio > 12, the benzene conversion is only a weak function of the H2O concentration in the feed. This suggests that the initial benzene decomposition step, rather than benzene adsorption, is most likely the rate-determining step in BSR over supported Rh and Ir catalysts. In order to understand the differences between the two catalysts, we carried out a comparative DFT study of initial benzene decomposition pathways over two representative model systems for each supported metal catalyst (Rh and Ir). A periodic terrace (111) surface and an amorphous 50-atom metal cluster with a diameter of 1.0 nm were used to represent the supported model catalysts under low and high dispersion conditions. Our DFT results show that decreasing the catalyst particle size enhances benzene decomposition on supported Rh catalysts by lowering the barriers to both C-C and C-H bond scission. The activation barriers of the C-C and the C-H bond scission decrease from 1.60 and 1.61 eV on the Rh(111) surface to 1.34 and 1.26 eV on the Rh50 cluster. For supported Ir catalysts, decreasing the particle size only affects the C-C scission. The activation barrier of the C-C scission of benzene decreases from 1.60 eV on the Ir(111) surface to 1.35 eV on the Ir50 cluster, while the barriers of the C-H scission are practically the same. The experimentally measured higher BSR activity on the supported highly dispersed Rh catalyst can be rationalized by the
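
    Assuming equal prefactors, the reported barrier differences translate into rate enhancements via a simple Arrhenius estimate. This is a rough illustration, not the paper's kinetic analysis:

    ```python
    import math

    K_B = 8.617333e-5  # Boltzmann constant, eV/K

    def arrhenius_ratio(ea_small, ea_large, temperature):
        """k_small / k_large assuming equal prefactors:
        exp(-(Ea_small - Ea_large) / (k_B * T)), barriers in eV, T in K."""
        return math.exp(-(ea_small - ea_large) / (K_B * temperature))

    # C-C scission on Rh: 1.34 eV (Rh50 cluster) vs 1.60 eV (Rh(111)),
    # evaluated at 800 C = 1073.15 K -> roughly an order of magnitude faster.
    speedup = arrhenius_ratio(1.34, 1.60, 1073.15)
    ```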

  4. Intraflagellar transport particle size scales inversely with flagellar length: revisiting the balance-point length control model.

    PubMed

    Engel, Benjamin D; Ludington, William B; Marshall, Wallace F

    2009-10-05

    The assembly and maintenance of eukaryotic flagella are regulated by intraflagellar transport (IFT), the bidirectional traffic of IFT particles (recently renamed IFT trains) within the flagellum. We previously proposed the balance-point length control model, which predicted that the frequency of train transport should decrease as a function of flagellar length, thus modulating the length-dependent flagellar assembly rate. However, this model was challenged by the differential interference contrast microscopy observation that IFT frequency is length independent. Using total internal reflection fluorescence microscopy to quantify protein traffic during the regeneration of Chlamydomonas reinhardtii flagella, we determined that anterograde IFT trains in short flagella are composed of more kinesin-associated protein and IFT27 proteins than trains in long flagella. This length-dependent remodeling of train size is consistent with the kinetics of flagellar regeneration and supports a revised balance-point model of flagellar length control in which the size of anterograde IFT trains tunes the rate of flagellar assembly.
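
    The balance-point idea can be sketched as a one-line ODE: a length-dependent assembly term (here falling off as 1/L, consistent with IFT delivery per unit time declining with length) balanced against constant disassembly. The 1/L form and the constants below are generic illustrative assumptions:

    ```python
    def balance_point_length(a, d, l0=0.5, dt=0.01, steps=20000):
        """Integrate dL/dt = a/L - d with forward Euler; the flagellar
        length settles at the balance point L* = a/d, where the
        length-dependent assembly rate equals the constant disassembly rate."""
        l = l0
        for _ in range(steps):
            l += dt * (a / l - d)
        return l
    ```

    Starting short, the length grows quickly (large a/L), then slows as it approaches L*, mirroring the kinetics of flagellar regeneration described above.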

  5. Refinements in the Los Alamos model of the prompt fission neutron spectrum

    DOE PAGES

    Madland, D. G.; Kahler, A. C.

    2017-01-01

    This paper presents a number of refinements to the original Los Alamos model of the prompt fission neutron spectrum and average prompt neutron multiplicity as derived in 1982. The four refinements are motivated by new measurements of the spectrum and related fission observables, many of which were not available in 1982, and by a number of detailed studies and comparisons of the model with previous and present experimental results, including not only the differential spectrum but also integral cross sections measured in the field of the differential spectrum. The four refinements are (a) separate neutron contributions in binary fission, (b) departure from statistical equilibrium at scission, (c) fission-fragment nuclear level-density models, and (d) center-of-mass anisotropy. With these refinements, for the first time, good agreement has been obtained for both differential and integral measurements using the same Los Alamos model spectrum.

  6. Multicritical points for spin-glass models on hierarchical lattices.

    PubMed

    Ohzeki, Masayuki; Nishimori, Hidetoshi; Berker, A Nihat

    2008-06-01

    The locations of multicritical points on many hierarchical lattices are numerically investigated by renormalization group analysis. The results are compared with an analytical conjecture derived by using duality, gauge symmetry, and the replica method. We find that the conjecture does not give the exact answer but leads to locations slightly away from the numerically reliable data. We propose an improved conjecture that gives more precise predictions of the multicritical points than the conventional one. This improvement is inspired by a different point of view coming from the renormalization group and succeeds in deriving answers that are very consistent with many numerical data.

  7. Boolean Modeling of Neural Systems with Point-Process Inputs and Outputs. Part I: Theory and Simulations

    PubMed Central

    Marmarelis, Vasilis Z.; Zanos, Theodoros P.; Berger, Theodore W.

    2010-01-01

    This paper presents a new modeling approach for neural systems with point-process (spike) inputs and outputs that utilizes Boolean operators (i.e. modulo 2 multiplication and addition that correspond to the logical AND and OR operations respectively, as well as the AND_NOT logical operation representing inhibitory effects). The form of the employed mathematical models is akin to a “Boolean-Volterra” model that contains the product terms of all relevant input lags in a hierarchical order, where terms of order higher than first represent nonlinear interactions among the various lagged values of each input point-process or among lagged values of various inputs (if multiple inputs exist) as they reflect on the output. The coefficients of this Boolean-Volterra model are also binary variables that indicate the presence or absence of the respective term in each specific model/system. Simulations are used to explore the properties of such models and the feasibility of their accurate estimation from short data-records in the presence of noise (i.e. spurious spikes). The results demonstrate the feasibility of obtaining reliable estimates of such models, with excitatory and inhibitory terms, in the presence of considerable noise (spurious spikes) in the outputs and/or the inputs in a computationally efficient manner. A pilot application of this approach to an actual neural system is presented in the companion paper (Part II). PMID:19517238
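
    A toy sketch of the Boolean-Volterra structure described above, with first-order lag terms, second-order (AND) interaction terms, and AND_NOT inhibition. The function is an illustration of the model form, not the paper's estimation procedure:

    ```python
    def boolean_volterra(x, excit_lags=(), inhib_lags=(), pair_lags=()):
        """Predict a binary output spike train from a binary input train:
        y[n] = (OR of excitatory terms) AND NOT (OR of inhibitory terms).
        First-order terms are single lagged inputs x[n-l]; second-order
        terms (pair_lags) are modulo-2 products x[n-l1]*x[n-l2], i.e.
        logical ANDs of two lagged inputs."""
        y = []
        for t in range(len(x)):
            e = any(t >= l and x[t - l] for l in excit_lags)
            e = e or any(t >= max(l1, l2) and x[t - l1] and x[t - l2]
                         for l1, l2 in pair_lags)
            i = any(t >= l and x[t - l] for l in inhib_lags)
            y.append(int(e and not i))
        return y
    ```

    Estimating such a model then reduces to searching for the binary coefficients, i.e. which lag terms are present, that best reproduce the observed output train.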

  8. Anatomy of point-contact Andreev reflection spectroscopy from the experimental point of view

    NASA Astrophysics Data System (ADS)

    Naidyuk, Yu. G.; Gloos, K.

    2018-04-01

    We review applications of point-contact Andreev-reflection spectroscopy to study elemental superconductors, where theoretical conditions for the smallness of the point-contact size with respect to the characteristic lengths in the superconductor can be satisfied. We discuss existing theoretical models and identify new issues that have to be solved, especially when applying this method to investigate more complex superconductors. We will also demonstrate that some aspects of point-contact Andreev-reflection spectroscopy still need to be addressed even when investigating ordinary metals.
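
    The standard theoretical framework behind most point-contact Andreev-reflection analyses is the BTK model; below the gap its normalized conductance has a simple closed form. A minimal sketch (energies in units of the gap are the caller's choice):

    ```python
    def btk_subgap_conductance(energy, gap, z):
        """Normalized BTK conductance at |E| < gap for barrier strength Z:
        G/G_N = 2A with A = gap^2 / (E^2 + (gap^2 - E^2) * (1 + 2 Z^2)^2).
        At Z = 0 this gives the Andreev doubling G/G_N = 2; large Z
        suppresses sub-gap conductance toward the tunneling limit."""
        a = gap ** 2 / (energy ** 2 + (gap ** 2 - energy ** 2) * (1 + 2 * z ** 2) ** 2)
        return 2 * a
    ```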

  9. Mobile capture of remote points of interest using line of sight modelling

    NASA Astrophysics Data System (ADS)

    Meek, Sam; Priestnall, Gary; Sharples, Mike; Goulding, James

    2013-03-01

    Recording points of interest using GPS whilst working in the field is an established technique in geographical fieldwork, where the user's current position is used as the spatial reference to be captured; this is known as geo-tagging. We outline the development and evaluation of a smartphone application called Zapp that enables geo-tagging of any distant point on the visible landscape. The ability of users to log or retrieve information relating to what they can see, rather than where they are standing, allows them to record observations of points in the broader landscape scene, or to access descriptions of landscape features from any viewpoint. The application uses the compass orientation and tilt of the phone to provide data for a line of sight algorithm that intersects with a Digital Surface Model stored on the mobile device. We describe the development process and design decisions for Zapp, present the results of a controlled study of the accuracy of the application, and report on the use of Zapp in a student field exercise. The studies indicate the feasibility of the approach, but also how the appropriate use of such techniques will be constrained by current levels of precision in mobile sensor technology. The broader implications for interactive query of the distant landscape and for remote data logging are discussed.
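
    The line-of-sight step can be sketched as a simple ray march over a gridded DSM: step outward along the compass/tilt direction until the ray height drops below the surface. Grid spacing, nearest-cell sampling, and step size below are simplifying assumptions, not Zapp's actual implementation:

    ```python
    import math

    def sample(dem, x, y):
        """Nearest-cell height from a gridded DSM (1 m cells, dem[row][col])."""
        r, c = int(round(y)), int(round(x))
        if 0 <= r < len(dem) and 0 <= c < len(dem[0]):
            return dem[r][c]
        return None

    def sight_ray(dem, x0, y0, eye_h, az, tilt, step=0.5, max_range=200.0):
        """March a ray from the observer until it meets the surface.
        az: radians clockwise from north (+y); tilt: radians above horizontal."""
        z0 = sample(dem, x0, y0) + eye_h
        dx = math.sin(az) * math.cos(tilt)
        dy = math.cos(az) * math.cos(tilt)
        dz = math.sin(tilt)
        t = step
        while t <= max_range:
            x, y = x0 + t * dx, y0 + t * dy
            h = sample(dem, x, y)
            if h is None:
                return None  # ray left the DSM extent
            if z0 + t * dz <= h:
                return (x, y)  # first terrain intersection: the tagged point
            t += step
        return None
    ```

    The accuracy study mentioned above effectively measures how compass and tilt sensor noise propagates through this intersection into positional error of the returned point.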

  10. General phase transition models for vehicular traffic with point constraints on the flow

    NASA Astrophysics Data System (ADS)

    Dal Santo, E.; Rosini, M. D.; Dymski, N.; Benyahia, M.

    2017-12-01

    We generalize the phase transition model studied in [R. Colombo, "Hyperbolic phase transition in traffic flow", SIAM J. Appl. Math., 63(2):708-721, 2002], which describes the evolution of vehicular traffic along a one-lane road. Two different phases are taken into account, according to whether the traffic is low or heavy. The model is given by a scalar conservation law in the free-flow phase and by a system of two conservation laws in the congested phase. In particular, we study the resulting Riemann problems in the case where a local point constraint on the flux of the solutions is enforced.
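
    In generic form (a simplified sketch, not the authors' exact formulation), the free-flow phase with a point constraint at x = 0 reads:

    ```latex
    % Scalar LWR-type conservation law with a point constraint on the
    % flux at a fixed location x = 0 (generic sketch):
    \partial_t \rho + \partial_x \bigl(\rho\, v(\rho)\bigr) = 0, \qquad
    \rho\, v(\rho)\big|_{x=0^{\pm}} \le q(t),
    % where \rho is the traffic density, v(\rho) the velocity law, and
    % q(t) the maximal flow allowed through the constraint location
    % (e.g. a toll gate or traffic light).
    ```

    The Riemann problems studied above ask how an initial density jump resolves into waves when this flux cap is active at the constraint point.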

  11. An Efficient Method to Create Digital Terrain Models from Point Clouds Collected by Mobile LiDAR Systems

    NASA Astrophysics Data System (ADS)

    Gézero, L.; Antunes, C.

    2017-05-01

    Digital terrain models (DTMs) play an essential role in all types of road maintenance, water supply and sanitation projects. The demand for such information is greater in developing countries, where the lack of infrastructure is higher. In recent years, the use of Mobile LiDAR Systems (MLS) has proved to be a very efficient technique for the acquisition of precise and dense point clouds. These point clouds can provide the data for the production of DTMs in remote areas, due mainly to the safety, precision, speed of acquisition and the detail of the information gathered. However, filtering point clouds and separating "terrain points" from "non-terrain points" quickly and consistently remains a challenge that has caught the interest of researchers. This work presents a method to create DTMs from point clouds collected by MLS. The method is based on two interactive steps. The first step reduces the point cloud to a set of points that represents the terrain's shape, with the distance between points inversely proportional to the terrain variation. The second step is based on the Delaunay triangulation of the points resulting from the first step. The achieved results encourage a wider use of this technology as a solution for large scale DTM production in remote areas.
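
    The first step can be sketched as greedy adaptive thinning, in which the target spacing between kept points shrinks where the surface varies more. The roughness proxy and parameters below are illustrative assumptions, not the authors' algorithm; the kept points would then feed a Delaunay triangulation (step two):

    ```python
    import math

    def adaptive_thin(points, r_max=4.0, r_min=0.5, k=8.0):
        """Greedy thinning of (x, y, z) points: a point is kept only if no
        already-kept point lies within its target spacing, and the spacing
        shrinks from r_max toward r_min as the local slope (estimated
        against nearby kept points) grows."""
        kept = []
        for x, y, z in points:
            rough = 0.0
            for kx, ky, kz in kept:
                d = math.hypot(x - kx, y - ky)
                if 0.0 < d < r_max:
                    rough = max(rough, abs(z - kz) / d)
            spacing = max(r_min, r_max / (1.0 + k * rough))
            if all(math.hypot(x - kx, y - ky) >= spacing for kx, ky, kz in kept):
                kept.append((x, y, z))
        return kept
    ```

    On flat terrain this keeps a sparse lattice; near breaklines the spacing collapses toward r_min, so the terrain's shape is preserved with far fewer points.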

  12. Simulation of agricultural non-point source pollution in Xichuan by using SWAT model

    NASA Astrophysics Data System (ADS)

    Xing, Linan; Zuo, Jiane; Liu, Fenglin; Zhang, Xiaohui; Cao, Qiguang

    2018-02-01

    This paper evaluated the applicability of the SWAT model for assessing agricultural non-point source pollution in the Xichuan area. To build the model, a DEM, soil and land use maps, and climate monitoring data were collected as the basic database. The SWAT model was calibrated and validated using streamflow, suspended solids (SS), total phosphorus (TP) and total nitrogen (TN) records from 2009 to 2011. Relative errors, coefficients of determination and Nash-Sutcliffe coefficients were used to evaluate the applicability. The coefficients of determination were 0.96, 0.66, 0.55 and 0.66 for streamflow, SS, TN, and TP, respectively, and the Nash-Sutcliffe coefficients were 0.93, 0.5, 0.52 and 0.63, respectively. The results all meet the requirements, suggesting that the SWAT model can simulate non-point source pollution in the study area.
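
    The Nash-Sutcliffe coefficient used in the evaluation is straightforward to compute; a minimal sketch:

    ```python
    def nash_sutcliffe(obs, sim):
        """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
        1 is a perfect fit; 0 means the model is no better than
        predicting the observed mean; negative is worse than the mean."""
        m = sum(obs) / len(obs)
        num = sum((o - s) ** 2 for o, s in zip(obs, sim))
        den = sum((o - m) ** 2 for o in obs)
        return 1.0 - num / den
    ```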

  13. Liquidus slopes of impurities in ITS-90 fixed points from the mercury point to the copper point in the low concentration limit

    NASA Astrophysics Data System (ADS)

    Pearce, Jonathan V.; Gisby, John A.; Steur, Peter P. M.

    2016-08-01

    A knowledge of the effect of impurities at the level of parts per million on the freezing temperature of very pure metals is essential for realisation of ITS-90 fixed points. New information has become available for use with the thermodynamic modelling software MTDATA, permitting calculation of liquidus slopes, in the low concentration limit, of a wider range of binary alloy systems than was previously possible. In total, calculated values for 536 binary systems are given. In addition, new experimental determinations of phase diagrams, in the low impurity concentration limit, have recently appeared. All available data have been combined to provide a comprehensive set of liquidus slopes for impurities in ITS-90 metal fixed points. In total, liquidus slopes for 838 systems are tabulated for the fixed points Hg, Ga, In, Sn, Zn, Al, Ag, Au, and Cu. It is shown that the value of the liquidus slope as a function of impurity element atomic number can be approximated using a simple formula, and good qualitative agreement with the existing data is observed for the fixed points Al, Ag, Au and Cu, but curiously the formula is not applicable to the fixed points Hg, Ga, In, Sn, and Zn. Some discussion is made concerning the influence of oxygen on the liquidus slopes, and some calculations using MTDATA are discussed. The BIPM’s consultative committee for thermometry has long recognised that the sum of individual estimates method is the ideal approach for assessing uncertainties due to impurities, but the community has been largely powerless to use the model due to lack of data. Here, not only is data provided, but a simple model is given to enable known thermophysical data to be used directly to estimate impurity effects for a large fraction of the ITS-90 fixed points.
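
    The sum of individual estimates (SIE) method itself is a simple accumulation of per-impurity corrections, each being the impurity amount times its tabulated liquidus slope. A minimal sketch (units and values are illustrative, not from the tabulated data):

    ```python
    def sie_depression(impurities):
        """Sum of Individual Estimates: total freezing-temperature shift is
        the sum over impurities of (mole fraction x_i) * (liquidus slope m_i,
        in K per unit mole fraction). Negative slopes depress the freezing
        point; positive slopes raise it."""
        return sum(x * m for x, m in impurities)

    # e.g. two hypothetical impurities at the ppm level:
    # 1 ppm with slope -200 K and 2 ppm with slope +50 K
    shift = sie_depression([(1e-6, -200.0), (2e-6, 50.0)])
    ```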

  14. Reconstruction of dynamical systems from resampled point processes produced by neuron models

    NASA Astrophysics Data System (ADS)

    Pavlova, Olga N.; Pavlov, Alexey N.

    2018-04-01

    Characterization of dynamical features of chaotic oscillations from point processes is based on embedding theorems for non-uniformly sampled signals such as the sequences of interspike intervals (ISIs). This theoretical background confirms the ability of attractor reconstruction from ISIs generated by chaotically driven neuron models. The quality of such reconstruction depends on the available length of the analyzed dataset. We discuss how data resampling improves the reconstruction for short amount of data and show that this effect is observed for different types of mechanisms for spike generation.
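
    Attractor reconstruction from ISIs rests on delay embedding of the interval sequence; a minimal sketch:

    ```python
    def delay_embed(isis, dim=3, delay=1):
        """Map an interspike-interval sequence to delay vectors
        (isi[i], isi[i + delay], ..., isi[i + (dim - 1) * delay]),
        the standard embedding used to reconstruct attractor geometry."""
        n = len(isis) - (dim - 1) * delay
        return [tuple(isis[i + j * delay] for j in range(dim))
                for i in range(max(n, 0))]
    ```

    Resampling the ISI sequence before embedding, as discussed above, changes the effective sampling of the underlying trajectory and can improve reconstruction quality for short records.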

  15. A point-based prediction model for cardiovascular risk in orthotopic liver transplantation: The CAR-OLT score.

    PubMed

    VanWagner, Lisa B; Ning, Hongyan; Whitsett, Maureen; Levitsky, Josh; Uttal, Sarah; Wilkins, John T; Abecassis, Michael M; Ladner, Daniela P; Skaro, Anton I; Lloyd-Jones, Donald M

    2017-12-01

    Cardiovascular disease (CVD) complications are important causes of morbidity and mortality after orthotopic liver transplantation (OLT). There is currently no preoperative risk-assessment tool that allows physicians to estimate the risk for CVD events following OLT. We sought to develop a point-based prediction model (risk score) for CVD complications after OLT, the Cardiovascular Risk in Orthotopic Liver Transplantation risk score, among a cohort of 1,024 consecutive patients aged 18-75 years who underwent first OLT in a tertiary-care teaching hospital (2002-2011). The main outcome measures were major 1-year CVD complications, defined as death from a CVD cause or hospitalization for a major CVD event (myocardial infarction, revascularization, heart failure, atrial fibrillation, cardiac arrest, pulmonary embolism, and/or stroke). The bootstrap method yielded bias-corrected 95% confidence intervals for the regression coefficients of the final model. Among 1,024 first OLT recipients, major CVD complications occurred in 329 (32.1%). Variables selected for inclusion in the model (using model optimization strategies) included preoperative recipient age, sex, race, employment status, education status, history of hepatocellular carcinoma, diabetes, heart failure, atrial fibrillation, pulmonary or systemic hypertension, and respiratory failure. The discriminative performance of the point-based score (C statistic = 0.78, bias-corrected C statistic = 0.77) was superior to other published risk models for postoperative CVD morbidity and mortality, and it had appropriate calibration (Hosmer-Lemeshow P = 0.33). The point-based risk score can identify patients at risk for CVD complications after OLT surgery (available at www.carolt.us); this score may be useful for identification of candidates for further risk stratification or other management strategies to improve CVD outcomes after OLT. (Hepatology 2017;66:1968-1979). 
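
    Point-based risk scores of this kind are typically mapped to a predicted probability through a logistic function of the point total. The sketch below uses placeholder coefficients, NOT the published CAR-OLT values:

    ```python
    import math

    def risk_from_points(total_points, intercept=-3.0, per_point=0.1):
        """Map a point total to a predicted event probability via the
        logistic function. The intercept and per-point weight here are
        illustrative placeholders, not the CAR-OLT coefficients."""
        lp = intercept + per_point * total_points
        return 1.0 / (1.0 + math.exp(-lp))
    ```

    The monotone mapping is what makes a point score clinically convenient: adding up integer points and reading off a risk band is equivalent to evaluating the underlying logistic model.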

  16. Evaluation of reduced point charge models of proteins through Molecular Dynamics simulations: application to the Vps27 UIM-1-Ubiquitin complex.

    PubMed

    Leherte, Laurence; Vercauteren, Daniel P

    2014-02-01

    Reduced point charge models of amino acids are designed (i) from local extrema positions in charge density distribution functions built from the Poisson equation applied to smoothed molecular electrostatic potential (MEP) functions, and (ii) from local maxima positions in promolecular electron density distribution functions. The corresponding charge values are fitted against all-atom Amber99 MEPs. To easily generate reduced point charge models for protein structures, libraries of amino acid templates are built. The program GROMACS is used to generate stable Molecular Dynamics trajectories of an Ubiquitin-ligand complex (PDB: 1Q0W) under various implementation schemes, solvation, and temperature conditions. Point charges that are not located on atoms are treated as virtual sites with a null mass and radius. The results illustrate how the intra- and inter-molecular H-bond interactions are affected by the degree of reduction of the point charge models and give directions for their implementation; special attention is needed to the atoms selected to locate the virtual sites and to the Coulomb-14 interactions. Results obtained at various temperatures suggest that the use of reduced point charge models makes it possible to probe local potential hyper-surface minima that are similar to the all-atom ones but are characterized by lower energy barriers. It enables various conformations of the protein complex to be generated more rapidly than with the all-atom point charge representation.

  17. Point process models for localization and interdependence of punctate cellular structures.

    PubMed

    Li, Ying; Majarian, Timothy D; Naik, Armaghan W; Johnson, Gregory R; Murphy, Robert F

    2016-07-01

    Accurate representations of cellular organization for multiple eukaryotic cell types are required for creating predictive models of dynamic cellular function. To this end, we have previously developed the CellOrganizer platform, an open source system for generative modeling of cellular components from microscopy images. CellOrganizer models capture the inherent heterogeneity in the spatial distribution, size, and quantity of different components among a cell population. Furthermore, CellOrganizer can generate quantitatively realistic synthetic images that reflect the underlying cell population. A current focus of the project is to model the complex, interdependent nature of organelle localization. We build upon previous work on developing multiple non-parametric models of organelles or structures that show punctate patterns. The previous models described the relationships between the subcellular localization of puncta and the positions of the cell and nuclear membranes and of microtubules. We extend these models to consider the relationship to the endoplasmic reticulum (ER), and the relationship between the positions of different puncta of the same type. Our results suggest that the punctate patterns we examined are not dependent on ER position or on inter- and intra-class proximity. With these results, we built classifiers to update previous assignments of proteins to one of 11 patterns in three distinct cell lines. Our generative models demonstrate the ability to construct statistically accurate representations of puncta localization from simple cellular markers in distinct cell types, capturing the complex phenomena of cellular structure interaction with little human input. This protocol represents a novel approach to vesicular protein annotation, a field that is often neglected in high-throughput microscopy. These results suggest that spatial point process models provide useful insight with respect to the spatial dependence between cellular structures.
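    The spatial-dependence question addressed above can be illustrated with a minimal Monte Carlo test: compare the mean nearest-neighbour distance of an observed puncta pattern against simulations of complete spatial randomness (CSR). This is a generic sketch, not CellOrganizer's method; the window size, point counts, and function names are illustrative.

```python
import math
import random

def nearest_neighbor_mean(points):
    """Mean nearest-neighbour distance within one set of 2-D points."""
    total = 0.0
    for i, p in enumerate(points):
        best = min(math.dist(p, q) for j, q in enumerate(points) if j != i)
        total += best
    return total / len(points)

def csr_test(points, width, height, n_sim=200, seed=0):
    """Fraction of CSR simulations (uniform points in a width x height
    window) whose mean nearest-neighbour distance is smaller than the
    observed one. Values near 0 suggest clustering, near 1 repulsion."""
    rng = random.Random(seed)
    obs = nearest_neighbor_mean(points)
    n = len(points)
    smaller = 0
    for _ in range(n_sim):
        sim = [(rng.uniform(0, width), rng.uniform(0, height)) for _ in range(n)]
        if nearest_neighbor_mean(sim) < obs:
            smaller += 1
    return smaller / n_sim

# A tightly clustered pattern has a much smaller mean nearest-neighbour
# distance than CSR, so essentially no simulation falls below it.
clustered = [(0.50 + 0.01 * i, 0.50) for i in range(10)]
frac = csr_test(clustered, 1.0, 1.0)
```

A small observed fraction, as here, is evidence against spatial independence of the puncta.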

  18. Modeling Initial Stage of Ablation Material Pyrolysis: Graphitic Precursor Formation and Interfacial Effects

    NASA Technical Reports Server (NTRS)

    Desai, Tapan G.; Lawson, John W.; Keblinski, Pawel

    2010-01-01

    Reactive molecular dynamics simulations are used to study the initial stage of pyrolysis of ablation materials and their composites with carbon nanotubes and carbon fibers. The products formed during pyrolysis are characterized, and water is found to be the primary product in all cases. The water formation mechanisms are analyzed and the activation energy for water formation is estimated. A detailed study of graphitic precursor formation reveals the presence of two temperature zones. In the lower temperature zone (less than 2000 K), polymerization occurs, resulting in the formation of large, stable graphitic precursors; in the high temperature zone (greater than 2000 K), polymer scission results in the formation of short polymer chains/molecules. Simulations performed in the high temperature zone on the phenolic resin composites (with carbon nanotubes and carbon fibers) show that the presence of interfaces has no substantial effect on the chain scission rate or on the activation energy for water formation.

  19. Combustion modeling and kinetic rate calculations for a stoichiometric cyclohexane flame. 1. Major reaction pathways.

    PubMed

    Zhang, Hongzhi R; Huynh, Lam K; Kungwan, Nawee; Yang, Zhiwei; Zhang, Shaowen

    2007-05-17

    The Utah Surrogate Mechanism was extended in order to model a stoichiometric premixed cyclohexane flame (P = 30 Torr). Generic rates were assigned to the reaction classes of hydrogen abstraction, beta scission, and isomerization, and the resulting mechanism was found to be adequate in describing the combustion chemistry of cyclohexane. Satisfactory agreement was obtained with the experimental data for oxygen, major products, and important intermediates, which include the major soot precursors among C2-C5 unsaturated species. Measured concentrations of the immediate products of fuel decomposition were also successfully reproduced. For example, the maximum concentrations of benzene and 1,3-butadiene, two major fuel decomposition products formed via competing pathways, were predicted within 10% of the measured values. Ring-opening reactions compete with cascading dehydrogenation for the decomposition of the conjugate cyclohexyl radical. The major ring-opening pathways produce the 1-buten-4-yl radical, molecular ethylene, and 1,3-butadiene. The butadiene species is formed via beta scission after a 1-4 internal hydrogen migration of the 1-hexen-6-yl radical. Cascading dehydrogenation also makes an important contribution to fuel decomposition and provides the exclusive formation pathway of benzene. Benzene formation routes via combination of C2-C4 hydrocarbon fragments were found to be insignificant under the current flame conditions, as inferred from the concentration peak of fulvene occurring later than that of benzene, because the analogous species series for benzene formation via dehydrogenation was found to consist of precursors of the parent species of fulvene.

  20. PHOTOCHEMICAL SIMULATIONS OF POINT SOURCE EMISSIONS WITH THE MODELS-3 CMAQ PLUME-IN-GRID APPROACH

    EPA Science Inventory

    A plume-in-grid (PinG) approach has been designed to provide a realistic treatment for simulating the dynamic and chemical processes impacting pollutant species in major point source plumes during a subgrid scale phase within an Eulerian grid modeling framework. The PinG sci...

  1. SOFIA pointing history

    NASA Astrophysics Data System (ADS)

    Kärcher, Hans J.; Kunz, Nans; Temi, Pasquale; Krabbe, Alfred; Wagner, Jörg; Süß, Martin

    2014-07-01

    The original pointing accuracy requirement of the Stratospheric Observatory for Infrared Astronomy SOFIA was defined at the beginning of the program in the late 1980s as a very challenging 0.2 arcsec rms. The early science flights of the observatory started in December 2010, and the observatory has in the meantime reached nearly 0.7 arcsec rms, which is sufficient for most of the SOFIA science instruments. NASA and DLR, the owners of SOFIA, are now planning a future 4-year program to bring the pointing down to the ultimate 0.2 arcsec rms. This may be the right time to recall the history of the pointing requirement and its verification: from early computer models and wind tunnel tests, through later computer-aided end-to-end simulations, up to the first commissioning flights some years ago. The paper recollects the tools used in the different project phases for verification of the pointing performance, explains the achievements, and may give hints for planning the upcoming final pointing-improvement phase.

  2. Investigating the impact of the properties of pilot points on calibration of groundwater models: case study of a karst catchment in Rote Island, Indonesia

    NASA Astrophysics Data System (ADS)

    Klaas, Dua K. S. Y.; Imteaz, Monzur Alam

    2017-09-01

    A robust configuration of pilot points in the parameterisation step of a model is crucial to accurately obtain a satisfactory model performance. However, the recommendations provided by the majority of recent researchers on pilot-point use are considered somewhat impractical. In this study, a practical approach is proposed for using pilot-point properties (i.e. number, distance and distribution method) in the calibration step of a groundwater model. For the first time, the relative distance-area ratio (d/A) and head-zonation-based (HZB) method are introduced, to assign pilot points into the model domain by incorporating a user-friendly zone ratio. This study provides some insights into the trade-off between maximising and restricting the number of pilot points, and offers a relative basis for selecting the pilot-point properties and distribution method in the development of a physically based groundwater model. The grid-based (GB) method is found to perform comparably better than the HZB method in terms of model performance and computational time. When using the GB method, this study recommends a distance-area ratio (d/A) of 0.05, a distance-x-grid length ratio (d/Xgrid) of 0.10, and a distance-y-grid length ratio (d/Ygrid) of 0.20.

  3. Primordial blackholes and gravitational waves for an inflection-point model of inflation

    NASA Astrophysics Data System (ADS)

    Choudhury, Sayantan; Mazumdar, Anupam

    2014-06-01

    In this article we provide a new closed relationship between the cosmic abundance of primordial gravitational waves and primordial blackholes originating from initial inflationary perturbations for inflection-point models of inflation, where inflation occurs below the Planck scale. From the current Planck constraints on the tensor-to-scalar ratio and the running of the spectral tilt, together with the abundance of dark matter content in the universe, we deduce a strict bound on the current abundance of primordial blackholes within the range 9.99712×10^-3 < Ω_PBH h^2 < 9.99736×10^-3.

  4. Colour computer-generated holography for point clouds utilizing the Phong illumination model.

    PubMed

    Symeonidou, Athanasia; Blinder, David; Schelkens, Peter

    2018-04-16

    A technique integrating the bidirectional reflectance distribution function (BRDF) is proposed to generate realistic high-quality colour computer-generated holograms (CGHs). We build on prior work, namely a fast computer-generated holography method for point clouds that handles occlusions. We extend the method by integrating the Phong illumination model so that the properties of the objects' surfaces are taken into account to achieve natural light phenomena such as reflections and shadows. Our experiments show that rendering holograms with the proposed algorithm provides realistic looking objects without any noteworthy increase to the computational cost.

  5. Detection and localization of change points in temporal networks with the aid of stochastic block models

    NASA Astrophysics Data System (ADS)

    De Ridder, Simon; Vandermarliere, Benjamin; Ryckebusch, Jan

    2016-11-01

    A framework based on generalized hierarchical random graphs (GHRGs) for the detection of change points in the structure of temporal networks has recently been developed by Peel and Clauset (2015 Proc. 29th AAAI Conf. on Artificial Intelligence). We build on this methodology and extend it to also include the versatile stochastic block models (SBMs) as a parametric family for reconstructing the empirical networks. We use five different techniques for change point detection on prototypical temporal networks, both empirical and synthetic. We find that none of the considered methods can consistently outperform the others when it comes to detecting and locating the expected change points in empirical temporal networks. With respect to precision and recall, we find that the method based on a degree-corrected SBM has better recall properties than the other dedicated methods, especially for sparse networks and smaller sliding-time-window widths.
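    The change-point task described above can be caricatured with a much simpler detector: given per-snapshot edge counts of a temporal network, pick the split that minimizes the within-segment sum of squared errors. This is a crude mean-shift sketch, not the GHRG/SBM machinery of the paper; the data are made up.

```python
def best_change_point(series):
    """Return the index t that best splits the series into two segments
    with distinct means (minimum total within-segment SSE)."""
    def sse(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs)
    best_t, best_cost = None, float("inf")
    for t in range(1, len(series)):
        cost = sse(series[:t]) + sse(series[t:])
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

# Edge counts jump from ~10 to ~30 at snapshot 5:
counts = [10, 11, 9, 10, 10, 30, 31, 29, 30, 30]
t = best_change_point(counts)
```

Real network change points involve structural reorganization that such a scalar summary can miss, which is exactly why block-model likelihoods are used instead.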

  6. SIMO optical wireless links with nonzero boresight pointing errors over M modeled turbulence channels

    NASA Astrophysics Data System (ADS)

    Varotsos, G. K.; Nistazakis, H. E.; Petkovic, M. I.; Djordjevic, G. T.; Tombras, G. S.

    2017-11-01

    Over the last years, terrestrial free-space optical (FSO) communication systems have attracted increasing scientific and commercial interest in response to the growing demand for ultra-high-bandwidth, cost-effective and secure wireless data transmission. However, due to the signal propagation through the atmosphere, the performance of such links depends strongly on atmospheric conditions such as weather phenomena and the turbulence effect. Additionally, their operation is affected significantly by the pointing errors effect, which is caused by misalignment of the optical beam between the transmitter and the receiver. In order to address this significant performance degradation, several statistical models have been proposed, while particular attention has also been given to diversity methods. Here, the turbulence-induced fading of the received optical signal irradiance is studied through the Málaga (M) distribution, which is an accurate model suitable for weak to strong turbulence conditions and unifies most of the well-known, previously emerged models. Thus, taking into account the atmospheric turbulence conditions along with the nonzero-boresight pointing errors effect and the modulation technique that is used, we derive mathematical expressions for the estimation of the average bit error rate performance of SIMO FSO links. Finally, proper numerical results are given to verify the derived expressions, and Monte Carlo simulations are also provided to further validate the accuracy of the proposed analysis and the obtained mathematical expressions.

  7. Point clouds in BIM

    NASA Astrophysics Data System (ADS)

    Antova, Gergana; Kunchev, Ivan; Mickrenska-Cherneva, Christina

    2016-10-01

    The representation of physical buildings in Building Information Models (BIM) has been a subject of research for four decades in the fields of Construction Informatics and GeoInformatics. The early digital representations of buildings appeared mainly as 3D drawings constructed with CAD software; the 3D representation of the buildings was only geometric, while semantics and topology were out of the modelling focus. On the other hand, less detailed building representations, often focused on ‘outside’ representations, were also found in the form of 2D/2.5D GeoInformation models. Point clouds from 3D laser scanning data give a full and exact representation of the building geometry. The article presents different aspects and benefits of using point clouds in BIM in the different stages of the lifecycle of a building.

  8. SU-E-T-17: A Mathematical Model for PinPoint Chamber Correction in Measuring Small Fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, T; Zhang, Y; Li, X

    2014-06-01

    Purpose: For small field dosimetry, such as measuring the cone output factor for stereotactic radiosurgery, ion chambers often result in underestimation of the dose, due to both the volume averaging effect and the lack of electron equilibrium. The purpose of this work is to develop a mathematical model, specifically for the PinPoint chamber, to calculate the correction factors corresponding to different types of small fields, including single cone-based circular fields and non-standard composite fields. Methods: A PTW 0.015cc PinPoint chamber was used in the study. Its response in a given field was modeled as the total contribution of many small beamlets, each with a different response factor depending on the relative strength, radial distance to the chamber axis, and beam angle. To obtain these factors, 12 cone-shaped circular fields (5mm, 7.5mm, 10mm, 12.5mm, 15mm, 20mm, 25mm, 30mm, 35mm, 40mm, 50mm, 60mm) were irradiated and measured with the PinPoint chamber. For each field size, hundreds of readings were recorded, one for every 2mm chamber shift in the horizontal plane. These readings were then compared with the theoretical doses obtained with Monte Carlo calculation. A penalized-least-square optimization algorithm was developed to find the beamlet response factors. After the parameter fitting, the established mathematical model was validated against the same MC code for other, non-circular fields. Results: The optimization algorithm used for parameter fitting was stable, and the resulting response factors were smooth in the spatial domain. After correction with the mathematical model, the chamber reading matched the Monte Carlo calculation for all tested fields to within 2%. Conclusion: A novel mathematical model has been developed for the PinPoint chamber for dosimetric measurement of small fields. The current model is applicable only when the beam axis is perpendicular to the chamber axis. It can be applied to non-standard composite fields. Further
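    The fitting step described above, chamber readings modeled as a linear combination of beamlet response factors recovered by penalized least squares, can be sketched in toy form. Everything here (matrix sizes, penalty strength, the smooth "true" profile) is illustrative, not the authors' actual implementation; the penalty is plain ridge (Tikhonov) regularization.

```python
import numpy as np

rng = np.random.default_rng(0)

n_readings, n_beamlets = 200, 20
A = rng.random((n_readings, n_beamlets))          # beamlet weight of each measurement
true_factors = np.linspace(1.0, 0.5, n_beamlets)  # hypothetical smooth response profile
readings = A @ true_factors + rng.normal(0, 0.01, n_readings)  # noisy chamber readings

# Ridge-penalized normal equations: (A^T A + lam I) x = A^T b
lam = 1e-3  # penalty strength; discourages wild, non-smooth solutions
factors = np.linalg.solve(A.T @ A + lam * np.eye(n_beamlets), A.T @ readings)

max_err = np.max(np.abs(factors - true_factors))
```

With many more readings than unknowns and a small penalty, the recovered factors track the true profile closely; the penalty mainly stabilizes the inversion when the design matrix is ill-conditioned.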

  9. Focal Point Theory Models for Dissecting Dynamic Duality Problems of Microbial Infections

    PubMed Central

    Huang, S.-H.; Zhou, W.; Jong, A.

    2008-01-01

    Extending along the dynamic continuum from conflict to cooperation, microbial infections always involve symbiosis (Sym) and pathogenesis (Pat). There exists a dynamic Sym-Pat duality (DSPD) in microbial infection that is the most fundamental problem in infectomics. DSPD is encoded by the genomes of both the microbes and their hosts. Three focal point (FP) theory-based game models (pure cooperative, dilemma, and pure conflict) are proposed for resolving those problems. Our health is associated with the dynamic interactions of three microbial communities (nonpathogenic microbiota (NP) (Cooperation), conditional pathogens (CP) (Dilemma), and unconditional pathogens (UP) (Conflict)) with the hosts at different health statuses. Sym and Pat can be quantitated by measuring symbiotic index (SI), which is quantitative fitness for the symbiotic partnership, and pathogenic index (PI), which is quantitative damage to the symbiotic partnership, respectively. Symbiotic point (SP), which bears analogy to FP, is a function of SI and PI. SP-converting and specific pathogen-targeting strategies can be used for the rational control of microbial infections. PMID:18350122

  10. Critical points of the O(n) loop model on the martini and the 3-12 lattices

    NASA Astrophysics Data System (ADS)

    Ding, Chengxiang; Fu, Zhe; Guo, Wenan

    2012-06-01

    We derive the critical line of the O(n) loop model on the martini lattice as a function of the loop weight n, based on the critical points on the honeycomb lattice conjectured by Nienhuis [Phys. Rev. Lett. 49, 1062 (1982)]. In the limit n→0 we prove the connective constant μ=1.7505645579⋯ of self-avoiding walks on the martini lattice. A finite-size scaling analysis based on transfer matrix calculations is also performed. The numerical results coincide with the theoretical predictions to very high accuracy. Using similar numerical methods, we also study the O(n) loop model on the 3-12 lattice. We obtain similarly precise agreement with the critical points given by Batchelor [J. Stat. Phys. 92, 1203 (1998)].

  11. Linear and quadratic models of point process systems: contributions of patterned input to output.

    PubMed

    Lindsay, K A; Rosenberg, J R

    2012-08-01

    In the 1880s Volterra characterised a nonlinear system using a functional series connecting continuous input and continuous output. Norbert Wiener, in the 1940s, circumvented problems associated with the application of the Volterra series to physical problems by deriving from it a new series of terms that are mutually uncorrelated with respect to Gaussian processes. Subsequently, Brillinger, in the 1970s, introduced a point-process analogue of Volterra's series connecting point-process inputs to the instantaneous rate of a point-process output. We derive here a new series from this analogue in which the terms are mutually uncorrelated with respect to Poisson processes. This new series expresses how patterned input in a spike train, represented by third-order cross-cumulants, is converted into the instantaneous rate of an output point process. Given experimental records of suitable duration, the contribution of arbitrary patterned input to an output process can, in principle, be determined. Solutions for linear and quadratic point-process models with one and two inputs and a single output are investigated. Our theoretical results are applied to isolated muscle spindle data in which the spike trains from the primary and secondary endings of the same muscle spindle are recorded in response to stimulation of one and then two static fusimotor axons, in the absence and presence of a random length change imposed on the parent muscle. For a fixed mean rate of input spikes, the analysis of the experimental data makes explicit which patterns of two input spikes contribute to an output spike.

  12. Point process modeling and estimation: Advances in the analysis of dynamic neural spiking data

    NASA Astrophysics Data System (ADS)

    Deng, Xinyi

    2016-08-01

    A common interest of scientists in many fields is to understand the relationship between the dynamics of a physical system and the occurrences of discrete events within such physical system. Seismologists study the connection between mechanical vibrations of the Earth and the occurrences of earthquakes so that future earthquakes can be better predicted. Astrophysicists study the association between the oscillating energy of celestial regions and the emission of photons to learn the Universe's various objects and their interactions. Neuroscientists study the link between behavior and the millisecond-timescale spike patterns of neurons to understand higher brain functions. Such relationships can often be formulated within the framework of state-space models with point process observations. The basic idea is that the dynamics of the physical systems are driven by the dynamics of some stochastic state variables and the discrete events we observe in an interval are noisy observations with distributions determined by the state variables. This thesis proposes several new methodological developments that advance the framework of state-space models with point process observations at the intersection of statistics and neuroscience. In particular, we develop new methods 1) to characterize the rhythmic spiking activity using history-dependent structure, 2) to model population spike activity using marked point process models, 3) to allow for real-time decision making, and 4) to take into account the need for dimensionality reduction for high-dimensional state and observation processes. We applied these methods to a novel problem of tracking rhythmic dynamics in the spiking of neurons in the subthalamic nucleus of Parkinson's patients with the goal of optimizing placement of deep brain stimulation electrodes. We developed a decoding algorithm that can make decision in real-time (for example, to stimulate the neurons or not) based on various sources of information present in

  13. Application of Bayesian Techniques to Model the Burden of Human Salmonellosis Attributable to U.S. Food Commodities at the Point of Processing: Adaptation of a Danish Model

    PubMed Central

    Guo, Chuanfa; Hoekstra, Robert M.; Schroeder, Carl M.; Pires, Sara Monteiro; Ong, Kanyin Liane; Hartnett, Emma; Naugle, Alecia; Harman, Jane; Bennett, Patricia; Cieslak, Paul; Scallan, Elaine; Rose, Bonnie; Holt, Kristin G.; Kissler, Bonnie; Mbandi, Evelyne; Roodsari, Reza; Angulo, Frederick J.

    2011-01-01

    Mathematical models that estimate the proportion of foodborne illnesses attributable to food commodities at specific points in the food chain may be useful to risk managers and policy makers to formulate public health goals, prioritize interventions, and document the effectiveness of mitigations aimed at reducing illness. Using human surveillance data on laboratory-confirmed Salmonella infections from the Centers for Disease Control and Prevention and Salmonella testing data from U.S. Department of Agriculture Food Safety and Inspection Service's regulatory programs, we developed a point-of-processing foodborne illness attribution model by adapting the Hald Salmonella Bayesian source attribution model. Key model outputs include estimates of the relative proportions of domestically acquired sporadic human Salmonella infections resulting from contamination of raw meat, poultry, and egg products processed in the United States from 1998 through 2003. The current model estimates the relative contribution of chicken (48%), ground beef (28%), turkey (17%), egg products (6%), intact beef (1%), and pork (<1%) across 109 Salmonella serotypes found in food commodities at point of processing. While interpretation of the attribution estimates is constrained by data inputs, the adapted model shows promise and may serve as a basis for a common approach to attribution of human salmonellosis and food safety decision-making in more than one country. PMID:21235394
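    The core idea of source attribution can be sketched in a back-of-envelope form: expected cases from each commodity proportional to (serotype prevalence in the commodity × amount consumed), normalized across sources. This mirrors only the spirit of the Hald model; the numbers are made up and none of the Bayesian machinery (serotype- and source-specific factors, uncertainty propagation) is included.

```python
def attribute(cases, prevalence, consumption):
    """Split a case total across sources in proportion to
    prevalence * consumption. All inputs are hypothetical."""
    weights = {s: prevalence[s] * consumption[s] for s in prevalence}
    total = sum(weights.values())
    return {s: cases * w / total for s, w in weights.items()}

# Hypothetical prevalence (fraction of samples positive) and relative
# consumption weights for three commodities:
shares = attribute(
    1000,
    {"chicken": 0.20, "ground_beef": 0.10, "turkey": 0.08},
    {"chicken": 2.0, "ground_beef": 1.5, "turkey": 1.0},
)
```

The Bayesian model replaces these fixed weights with estimated parameters, which is what lets it produce credible intervals rather than point shares.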

  14. Absolute, SI-traceable lunar irradiance tie-points for the USGS Lunar Model

    NASA Astrophysics Data System (ADS)

    Brown, Steven W.; Eplee, Robert E.; Xiong, Xiaoxiong J.

    2017-10-01

    The United States Geological Survey (USGS) has developed an empirical model, known as the Robotic Lunar Observatory (ROLO) Model, that predicts the reflectance of the Moon for any Sun-sensor-Moon configuration over the spectral range from 350 nm to 2500 nm. The lunar irradiance can be predicted from the modeled lunar reflectance using a spectrum of the incident solar irradiance. While extremely successful as a relative exo-atmospheric calibration target, the ROLO Model is not SI-traceable and has estimated uncertainties too large for the Moon to be used as an absolute celestial calibration target. In this work, two recent absolute, low-uncertainty, SI-traceable top-of-the-atmosphere (TOA) lunar irradiances, measured over the spectral range from 380 nm to 1040 nm at lunar phase angles of 6.6° and 16.9°, are used as tie-points to the output of the ROLO Model. Combined with empirically derived phase and libration corrections to the output of the ROLO Model and uncertainty estimates in those corrections, the measurements enable development of a corrected TOA lunar irradiance model and its uncertainty budget for phase angles between ±80° and libration angles from 7° to 51°. The uncertainties in the empirically corrected output from the ROLO model are approximately 1% from 440 nm to 865 nm and increase to almost 3% at 412 nm. The dominant components in the uncertainty budget are the uncertainty in the absolute TOA lunar irradiance and the uncertainty in the fit to the phase correction from the output of the ROLO model.
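    The tie-point idea can be sketched as multiplying the model's irradiance by a correction factor anchored at the two measured phase angles, interpolated linearly in phase angle between them. The anchor ratios below are invented for illustration; they are not the actual measured/ROLO values, and the real correction also involves libration terms.

```python
def correction_factor(phase, anchors):
    """Linear interpolation/extrapolation between two (phase_deg, ratio)
    anchors, where ratio = measured irradiance / model irradiance."""
    (p1, r1), (p2, r2) = anchors
    return r1 + (r2 - r1) * (phase - p1) / (p2 - p1)

def corrected_irradiance(model_irradiance, phase, anchors):
    return model_irradiance * correction_factor(phase, anchors)

# Hypothetical anchors at the two measured phase angles (6.6 and 16.9 deg):
anchors = [(6.6, 1.030), (16.9, 1.012)]
e = corrected_irradiance(100.0, 6.6, anchors)
```

At an anchor angle the corrected value reproduces the measurement exactly; between anchors the correction is an interpolation whose uncertainty grows with distance from the tie-points.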

  15. A Direct Latent Variable Modeling Based Method for Point and Interval Estimation of Coefficient Alpha

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2015-01-01

    A direct approach to point and interval estimation of Cronbach's coefficient alpha for multiple component measuring instruments is outlined. The procedure is based on a latent variable modeling application with widely circulated software. As a by-product, using sample data the method permits ascertaining whether the population discrepancy…

  16. Impact of selected troposphere models on Precise Point Positioning convergence

    NASA Astrophysics Data System (ADS)

    Kalita, Jakub; Rzepecka, Zofia

    2016-04-01

    The Precise Point Positioning (PPP) absolute method is currently being intensively investigated in order to reach fast convergence times. Among the various sources that influence the convergence of PPP, the tropospheric delay is one of the most important. Numerous models of the tropospheric delay have been developed and applied to PPP processing. However, with rare exceptions, the quality of those models does not allow fixing the zenith path delay tropospheric parameter, leaving the difference between the nominal and final values to the estimation process. Here we present a comparison of several PPP result sets, each of which is based on a different troposphere model. The respective nominal values are adopted from the models VMF1, GPT2w, MOPS and ZERO-WET. The PPP solution admitted as reference is based on the final troposphere product from the International GNSS Service (IGS). The VMF1 mapping function was used for all processing variants in order to allow comparison of the impact of the applied nominal values. The worst case initiates the zenith wet delay with a zero value (ZERO-WET). The impact of all possible models for tropospheric nominal values should fit between the IGS and ZERO-WET border variants. The analysis is based on data from seven IGS stations located in the mid-latitude European region from the year 2014. For each station, several days with the most active troposphere were selected for this study. All the PPP solutions were determined using the gLAB open-source software, with the Kalman filter implemented independently by the authors of this work. The processing was performed on 1-hour slices of observation data. In addition to the analysis of the output processing files, the presented study contains a detailed analysis of the tropospheric conditions for the selected data. The overall results show that for the height component the VMF1 model outperforms GPT2w and MOPS by 35-40% and the ZERO-WET variant by 150%.
In most of the cases all solutions converge to the same values during first
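    The quantity these troposphere models supply to PPP can be illustrated with a minimal slant-delay computation: slant delay = ZHD·mf_h(e) + ZWD·mf_w(e), where ZHD/ZWD are the zenith hydrostatic and wet delays and mf are mapping functions of the elevation angle e. Here both mapping functions use the simple 1/sin(e) form, not the actual VMF1/GPT2w coefficients; the zenith values are typical but hypothetical.

```python
import math

def slant_delay(zhd_m, zwd_m, elevation_deg):
    """Tropospheric slant delay in metres with a flat-atmosphere
    1/sin(e) mapping for both the hydrostatic and wet components."""
    mf = 1.0 / math.sin(math.radians(elevation_deg))
    return zhd_m * mf + zwd_m * mf

# Typical zenith values: ~2.3 m hydrostatic, ~0.2 m wet.
at_zenith = slant_delay(2.3, 0.2, 90.0)
at_15deg = slant_delay(2.3, 0.2, 15.0)
```

The steep growth of the delay at low elevations is why an error in the nominal zenith value propagates strongly into the estimated height component.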

  17. Statistical aspects of point count sampling

    USGS Publications Warehouse

    Barker, R.J.; Sauer, J.R.; Ralph, C.J.; Sauer, J.R.; Droege, S.

    1995-01-01

    The dominant feature of point counts is that they do not census birds, but instead provide incomplete counts of the individuals present within a survey plot. Considering a simple model for point count sampling, we demonstrate that use of these incomplete counts can bias estimators and testing procedures, leading to inappropriate conclusions. A large portion of the variability in point counts is caused by the incomplete counting, and this within-count variation can be confounded with ecologically meaningful variation. We recommend caution in the analysis of estimates obtained from point counts. Using our model, we also consider optimal allocation of sampling effort. The critical step in the optimization process is determining the goals of the study and the methods that will be used to meet these goals. By explicitly defining the constraints on sampling and by estimating the relationship between the precision and bias of estimators and the time spent counting, we can predict the optimal time at a point for each of several monitoring goals. In general, the time spent at a point will differ depending on the goals of the study.
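    A simple model of the kind alluded to above treats each bird present as detected within t minutes with probability p(t) = 1 − exp(−r·t), so the expected count is N·p(t) and the raw count systematically underestimates N. The detection rate r and abundance N below are hypothetical, and this particular functional form is an assumption for illustration.

```python
import math

def detection_prob(r, t):
    """Probability that an individual is detected within t minutes,
    given a constant per-minute detection rate r."""
    return 1.0 - math.exp(-r * t)

def expected_count(n_present, r, t):
    """Expected (incomplete) point count when n_present birds are there."""
    return n_present * detection_prob(r, t)

# With r = 0.3 per minute, a 5-minute count detects ~78% of birds present:
p5 = detection_prob(0.3, 5.0)
c5 = expected_count(20, 0.3, 5.0)
```

Longer counts push p(t) toward 1 but cost visits to other points, which is exactly the precision-versus-coverage trade-off behind the optimal-allocation discussion.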

  18. Chapter 8: Pyrolysis Mechanisms of Lignin Model Compounds Using a Heated Micro-Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robichaud, David J.; Nimlos, Mark R.; Ellison, G. Barney

    2015-10-03

    Lignin is an important component of biomass, and the decomposition of its thermal deconstruction products is important in pyrolysis and gasification. In this chapter, we investigate the unimolecular pyrolysis chemistry through the use of singly and doubly substituted benzene molecules that are model compounds representative of lignin and its primary pyrolysis products. These model compounds are decomposed in a heated micro-reactor, and the products, including radicals and unstable intermediates, are measured using photoionization mass spectrometry and matrix isolation infrared spectroscopy. We show that the unimolecular chemistry can yield insight into the initial decomposition of these species. At pyrolysis and gasification severities, singly substituted benzenes typically undergo bond scission and elimination reactions to form radicals. Some require radical-driven chain reactions. For doubly substituted benzenes, proximity effects of the substituents can change the reaction pathways.

  19. Automatic markerless registration of point clouds with semantic-keypoint-based 4-points congruent sets

    NASA Astrophysics Data System (ADS)

    Ge, Xuming

    2017-08-01

    The coarse registration of point clouds from urban building scenes has become a key topic in applications of terrestrial laser scanning technology. Sampling-based algorithms in the random sample consensus (RANSAC) model have emerged as mainstream solutions to address coarse registration problems. In this paper, we propose a novel combined solution to automatically align two markerless point clouds from building scenes. Firstly, the method segments non-ground points from ground points. Secondly, the proposed method detects feature points from each cross section and then obtains semantic keypoints by connecting feature points with specific rules. Finally, the detected semantic keypoints from two point clouds act as inputs to a modified 4PCS algorithm. Examples are presented and the results compared with those of K-4PCS to demonstrate the main contributions of the proposed method, which are the extension of the original 4PCS to handle heavy datasets and the use of semantic keypoints to improve K-4PCS in relation to registration accuracy and computational efficiency.

  20. Point-by-point model calculation of the prompt neutron multiplicity distribution ν(A) in the incident neutron energy range of multi-chance fission

    NASA Astrophysics Data System (ADS)

    Tudora, Anabella; Hambsch, Franz-Josef; Tobosaru, Viorel

    2017-09-01

    Prompt neutron multiplicity distributions ν(A) are required for the prompt emission correction of double-energy (2E) measurements of fission fragments to determine pre-neutron fragment properties. The lack of experimental ν(A) data, especially at incident neutron energies (En) where multi-chance fission occurs, imposes the use of ν(A) predicted by models. The Point-by-Point (PbP) model of prompt emission is able to provide the individual ν(A) of the compound nuclei of the main and secondary nucleus chains undergoing fission at a given En. The total ν(A) is obtained by averaging these individual ν(A) over the probabilities of fission chances (expressed as total and partial fission cross-section ratios). An indirect validation of the total ν(A) results is proposed. At high En, above 70 MeV, the PbP results for the individual ν(A) of the first few nuclei of the main and secondary nucleus chains exhibit an almost linear increase. This shape is explained by the damping of shell effects entering the super-fluid expression of the level density parameters, which tend to approach their asymptotic values for most of the fragments. This leads to a smooth and almost linear increase of fragment excitation energy with mass number, which is reflected in a smooth and almost linear behaviour of ν(A).
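The chance-averaging step described above, ν_total(A) = Σᵢ Pᵢ·νᵢ(A) with Σᵢ Pᵢ = 1, can be sketched as follows (all numbers are invented for illustration, not PbP outputs):

```python
# Fragment mass numbers A (illustrative grid)
fragment_masses = [90, 100, 110, 130, 140]

# Individual nu(A) for first-, second-, and third-chance fission
# (hypothetical values, one list per fission chance)
nu_by_chance = [
    [1.2, 1.5, 1.9, 1.1, 1.6],
    [1.0, 1.3, 1.7, 0.9, 1.4],
    [0.8, 1.1, 1.5, 0.7, 1.2],
]

# Fission-chance probabilities (partial/total cross-section ratios), summing to 1
probs = [0.6, 0.3, 0.1]

# Total nu(A): probability-weighted average over fission chances
nu_total = [
    sum(p * nu[i] for p, nu in zip(probs, nu_by_chance))
    for i in range(len(fragment_masses))
]
```

Each total value necessarily lies between the smallest and largest individual ν(A) at that mass.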

  1. Reduced Point Charge Models of Proteins: Effect of Protein-Water Interactions in Molecular Dynamics Simulations of Ubiquitin Systems.

    PubMed

    Leherte, Laurence; Vercauteren, Daniel P

    2017-10-26

    We investigate the influence of various solvent models on the structural stability and protein-water interface of three ubiquitin complexes (PDB access codes: 1Q0W, 2MBB, 2G3Q) modeled using the Amber99sb force field (FF) and two different point charge distributions. A previously developed reduced point charge model (RPCM), wherein each amino acid residue is described by a limited number of point charges, is tested and compared to its all-atom (AA) version. The complexes are solvated in TIP4P-Ew or TIP3P water molecules, involving either the scaling of the Lennard-Jones protein-O water interaction parameters or the coarse-grain (CG) SIRAH water description. The best agreement between the RPCM and AA models for structural, protein-water, and ligand-ubiquitin properties was obtained when using the TIP4P-Ew water FF with a scaling factor γ of 0.7. At the RPCM level, a decrease in γ, or the inclusion of SIRAH particles, weakens the protein-water interactions. This results in a slight collapse of the protein structure and a less compact hydration shell and, thus, in a decrease in the number of protein-water and water-water H-bonds. The dynamics of the surface protein atoms and of the water shell molecules are also slightly restrained, which allows the generation of stable RPCM trajectories.

  2. Deriving Points of Departure and Performance Baselines for Predictive Modeling of Systemic Toxicity using ToxRefDB (SOT)

    EPA Science Inventory

    A primary goal of computational toxicology is to generate predictive models of toxicity. An elusive target of alternative test methods and models has been the accurate prediction of systemic toxicity points of departure (PoD). We aim not only to provide a large and valuable resou...

  3. Kinetics of a Collagen-Like Polypeptide Fragmentation after Mid-IR Free-Electron Laser Ablation

    PubMed Central

    Zavalin, Andrey; Hachey, David L.; Sundaramoorthy, Munirathinam; Banerjee, Surajit; Morgan, Steven; Feldman, Leonard; Tolk, Norman; Piston, David W.

    2008-01-01

    Tissue ablation with mid-infrared irradiation tuned to collagen vibrational modes results in minimal collateral damage. The hypothesis for this effect includes selective scission of protein molecules and excitation of surrounding water molecules, with the scission process currently favored. In this article, we describe the postablation infrared spectral decay kinetics in a model collagen-like peptide (Pro-Pro-Gly)10. We find that the decay is exponential with different decay times for other, simpler dipeptides. Furthermore, we find that collagen-like polypeptides, such as (Pro-Pro-Gly)10, show multiple decay times, indicating multiple scission locations and cross-linking to form longer chain molecules. In combination with data from high-resolution mass spectrometry, we interpret these products to result from the generation of reactive intermediates, such as free radicals, cyanate ions, and isocyanic acid, which can form cross-links and protein adducts. Our results lead to a more complete explanation of the reduced collateral damage resulting from infrared laser irradiation through a mechanism involving cross-linking in which collagen-like molecules form a network of cross-linked fibers. PMID:18441025
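The single-exponential decay analysis mentioned above can be sketched with a log-linear least-squares fit (the data below are synthetic, generated with a known decay time; the actual spectral decay times come from the experiments):

```python
import math

def fit_single_exponential(ts, ys):
    """Least-squares fit of y = A * exp(-t / tau) via the
    log-linear form ln(y) = ln(A) - t / tau."""
    n = len(ts)
    ls = [math.log(y) for y in ys]
    t_mean = sum(ts) / n
    l_mean = sum(ls) / n
    slope = (sum((t - t_mean) * (l - l_mean) for t, l in zip(ts, ls))
             / sum((t - t_mean) ** 2 for t in ts))
    amplitude = math.exp(l_mean - slope * t_mean)
    return amplitude, -1.0 / slope  # (A, tau)

# Synthetic noise-free decay with A = 3.0 and tau = 2.5:
ts = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [3.0 * math.exp(-t / 2.5) for t in ts]
A, tau = fit_single_exponential(ts, ys)
```

A multi-exponential signal, as reported for (Pro-Pro-Gly)10, would leave systematic residuals under this single-decay fit, which is one way multiple scission channels reveal themselves.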

  4. Atomic Forces for Geometry-Dependent Point Multipole and Gaussian Multipole Models

    PubMed Central

    Elking, Dennis M.; Perera, Lalith; Duke, Robert; Darden, Thomas; Pedersen, Lee G.

    2010-01-01

    In standard treatments of atomic multipole models, interaction energies, total molecular forces, and total molecular torques are given for multipolar interactions between rigid molecules. However, if the molecules are assumed to be flexible, two additional multipolar atomic forces arise due to 1) the transfer of torque between neighboring atoms, and 2) the dependence of multipole moment on internal geometry (bond lengths, bond angles, etc.) for geometry-dependent multipole models. In the current study, atomic force expressions for geometry-dependent multipoles are presented for use in simulations of flexible molecules. The atomic forces are derived by first proposing a new general expression for Wigner function derivatives ∂D^l_{m′m}/∂Ω. The force equations can be applied to electrostatic models based on atomic point multipoles or Gaussian multipole charge density. Hydrogen bonded dimers are used to test the inter-molecular electrostatic energies and atomic forces calculated by geometry-dependent multipoles fit to the ab initio electrostatic potential (ESP). The electrostatic energies and forces are compared to their reference ab initio values. It is shown that both static and geometry-dependent multipole models are able to reproduce total molecular forces and torques with respect to ab initio, while geometry-dependent multipoles are needed to reproduce ab initio atomic forces. The expressions for atomic force can be used in simulations of flexible molecules with atomic multipoles. In addition, the results presented in this work should lead to further development of next generation force fields composed of geometry-dependent multipole models. PMID:20839297

  5. The Annular Suspension and Pointing System /ASPS/

    NASA Technical Reports Server (NTRS)

    Anderson, W. W.; Woolley, C. T.

    1978-01-01

    The Annular Suspension and Pointing System (ASPS) may be attached to a carrier vehicle for orientation, mechanical isolation, and fine pointing purposes applicable to space experiments. It has subassemblies for both coarse and vernier pointing. A fourteen-degree-of-freedom simulation of the ASPS mounted on a Space Shuttle has yielded initial performance data. The simulation describes: the magnetic actuators, payload sensors, coarse gimbal assemblies, control algorithms, rigid body dynamic models of the payload and Shuttle, and a control system firing model.

  6. Experimental and numerical investigation of feed-point parameters in a 3-D hyperthermia applicator using different FDTD models of feed networks.

    PubMed

    Nadobny, Jacek; Fähling, Horst; Hagmann, Mark J; Turner, Paul F; Wlodarczyk, Waldemar; Gellermann, Johanna M; Deuflhard, Peter; Wust, Peter

    2002-11-01

    Experimental and numerical methods were used to determine the coupling of energy in a multichannel three-dimensional hyperthermia applicator (SIGMA-Eye), consisting of 12 short dipole antenna pairs with stubs for impedance matching. The relationship between the amplitudes and phases of the forward waves from the amplifiers and the resulting amplitudes and phases at the antenna feed-points was determined in terms of interaction matrices. Three measuring methods were used: 1) a differential probe soldered directly at the antenna feed-points; 2) an E-field sensor placed near the feed-points; and 3) measurements at the outputs of the amplifiers. The measured data were compared with finite-difference time-domain (FDTD) calculations made with three different models. The first model assumes that single antennas are fed independently. The second model simulates antenna pairs connected to the transmission lines. The measured data correlate best with the latter FDTD model, resulting in an improvement of more than 20% and 20 degrees (average difference in amplitudes and phases) when compared with the two simpler FDTD models.
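The interaction-matrix description above amounts to a complex linear map from forward-wave phasors to feed-point phasors, b = M·a. A minimal sketch with an invented 2-channel matrix (the real SIGMA-Eye system has 12 channels and experimentally determined coefficients):

```python
import cmath

def phasor(amplitude, phase_deg):
    """Complex phasor from amplitude and phase in degrees."""
    return amplitude * cmath.exp(1j * cmath.pi * phase_deg / 180.0)

# Illustrative 2x2 interaction matrix: strong diagonal (direct feed),
# weak off-diagonal terms (cross-coupling between channels).
M = [[phasor(0.9, -5.0), phasor(0.1, 40.0)],
     [phasor(0.1, 35.0), phasor(0.9, -8.0)]]

# Forward waves from the amplifiers (unit amplitude, 90-degree phase offset)
a = [phasor(1.0, 0.0), phasor(1.0, 90.0)]

# Resulting amplitudes and phases at the antenna feed-points
b = [sum(M[i][j] * a[j] for j in range(2)) for i in range(2)]
```

Measuring M (by any of the three methods listed) lets one predict, or invert for, the feed-point excitation from the amplifier settings.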

  7. Numerical modeling of a point-source image under relative motion of radiation receiver and atmosphere

    NASA Astrophysics Data System (ADS)

    Kucherov, A. N.; Makashev, N. K.; Ustinov, E. V.

    1994-02-01

    A procedure is proposed for numerical modeling of instantaneous and averaged (over various time intervals) distant-point-source images perturbed by a turbulent atmosphere that moves relative to the radiation receiver. Examples of image calculations under conditions of the significant effect of atmospheric turbulence in an approximation of geometrical optics are presented and analyzed.

  8. Digital identification of cartographic control points

    NASA Technical Reports Server (NTRS)

    Gaskell, R. W.

    1988-01-01

    Techniques have been developed for the sub-pixel location of control points in satellite images returned by the Voyager spacecraft. The procedure uses digital imaging data in the neighborhood of the point to form a multipicture model of a piece of the surface. Comparison of this model with the digital image in each picture determines the control point locations to about a tenth of a pixel. At this level of precision, previously insignificant effects must be considered, including chromatic aberration, high level imaging distortions, and systematic errors due to navigation uncertainties. Use of these methods in the study of Jupiter's satellite Io has proven very fruitful.

  9. Process-based coastal erosion modeling for Drew Point (North Slope, Alaska)

    USGS Publications Warehouse

    Ravens, Thomas M.; Jones, Benjamin M.; Zhang, Jinlin; Arp, Christopher D.; Schmutz, Joel A.

    2012-01-01

    A predictive, coastal erosion/shoreline change model has been developed for a small coastal segment near Drew Point, Beaufort Sea, Alaska. This coastal setting has experienced a dramatic increase in erosion since the early 2000s. The bluffs at this site are 3-4 m tall and consist of ice-wedge bounded blocks of fine-grained sediments cemented by ice-rich permafrost and capped with a thin organic layer. The bluffs are typically fronted by a narrow (~5 m wide) beach or none at all. During a storm surge, the sea contacts the base of the bluff and a niche is formed through thermal and mechanical erosion. The niche grows both vertically and laterally and eventually undermines the bluff, leading to block failure or collapse. The fallen block is then eroded both thermally and mechanically by waves and currents, which must occur before a new niche forming episode may begin. The erosion model explicitly accounts for and integrates a number of these processes including: (1) storm surge generation resulting from wind and atmospheric forcing, (2) erosional niche growth resulting from wave-induced turbulent heat transfer and sediment transport (using the Kobayashi niche erosion model), and (3) thermal and mechanical erosion of the fallen block. The model was calibrated with historic shoreline change data for one time period (1979-2002), and validated with a later time period (2002-2007).

  10. Critical Two-Point Function for Long-Range O( n) Models Below the Upper Critical Dimension

    NASA Astrophysics Data System (ADS)

    Lohmann, Martin; Slade, Gordon; Wallace, Benjamin C.

    2017-12-01

    We consider the n-component |φ|^4 lattice spin model (n ≥ 1) and the weakly self-avoiding walk (n = 0) on Z^d, in dimensions d = 1, 2, 3. We study long-range models based on the fractional Laplacian, with spin-spin interactions or walk step probabilities decaying with distance r as r^{-(d+α)} with α ∈ (0, 2). The upper critical dimension is d_c = 2α. For ε > 0 and α = (d+ε)/2, the dimension d = d_c − ε is below the upper critical dimension. For small ε, weak coupling, and all integers n ≥ 0, we prove that the two-point function at the critical point decays with distance as r^{-(d-α)}. This "sticking" of the critical exponent at its mean-field value was first predicted in the physics literature in 1972. Our proof is based on a rigorous renormalisation group method. The treatment of observables differs from that used in recent work on the nearest-neighbour 4-dimensional case, via our use of a cluster expansion.
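The scaling relations quoted in this abstract can be collected compactly as:

```latex
% Long-range interaction decay and its admissible range:
J(x) \sim |x|^{-(d+\alpha)}, \qquad \alpha \in (0,2).
% Upper critical dimension, and the dimension considered (below it):
d_c = 2\alpha, \qquad d = d_c - \epsilon
  \quad \bigl(\epsilon > 0,\ \alpha = \tfrac{1}{2}(d+\epsilon)\bigr).
% Critical two-point function "sticks" at its mean-field decay:
G(x) \sim |x|^{-(d-\alpha)} \quad \text{as } |x| \to \infty.
```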

  11. Insights into mortality patterns and causes of death through a process point of view model

    PubMed Central

    Anderson, James J.; Li, Ting; Sharrow, David J.

    2016-01-01

    Process point of view models of mortality, such as the Strehler-Mildvan and stochastic vitality models, represent death in terms of the loss of survival capacity through challenges and dissipation. Drawing on hallmarks of aging, we link these concepts to candidate biological mechanisms through a framework that defines death as challenges to vitality, where distal factors define the age-evolution of vitality and proximal factors define the probability distribution of challenges. To illustrate the process point of view, we hypothesize that the immune system is a mortality nexus, characterized by two vitality streams: increasing vitality representing immune system development and immunosenescence representing vitality dissipation. Proximal challenges define three mortality partitions: juvenile and adult extrinsic mortalities and intrinsic adult mortality. Model parameters, generated from Swedish mortality data (1751-2010), exhibit biologically meaningful correspondences to economic, health and cause-of-death patterns. The model characterizes the 20th century epidemiological transition mainly as a reduction in extrinsic mortality resulting from a shift from high magnitude disease challenges on individuals at all vitality levels to low magnitude stress challenges on low vitality individuals. Of secondary importance, intrinsic mortality was described by a gradual reduction in the rate of loss of vitality, presumably resulting from a reduction in the rate of immunosenescence. Extensions and limitations of a distal/proximal framework for characterizing more explicit causes of death (e.g., the young-adult mortality hump or cancer in old age) are discussed. PMID:27885527

  12. Modelling the transport of solid contaminants originated from a point source

    NASA Astrophysics Data System (ADS)

    Salgueiro, Dora V.; Conde, Daniel A. S.; Franca, Mário J.; Schleiss, Anton J.; Ferreira, Rui M. L.

    2017-04-01

    The solid phases of natural flows can comprise an important repository for contaminants in aquatic ecosystems and can propagate as turbidity currents generating a stratified environment. Contaminants can be desorbed under specific environmental conditions becoming re-suspended, with a potential impact on the aquatic biota. Forecasting the distribution of the contaminated turbidity current is thus crucial for a complete assessment of environmental exposure. In this work we validate the ability of the model STAV-2D, developed at CERIS (IST), to simulate stratified flows such as those resulting from turbidity currents in complex geometrical environments. The validation involves not only flow phenomena inherent to flows generated by density imbalance but also convective effects brought about by the complex geometry of the water basin where the current propagates. This latter aspect is of paramount importance since, in real applications, currents may propagate in semi-confined geometries in plan view, generating important convective accelerations. Velocity fields and mass distributions obtained from experiments carried out at CERIS (IST) are used as validation data for the model. The experimental set-up comprises a point source in a rectangular basin with a wall placed perpendicularly to the outer walls. This generates a complex 2D flow with an advancing wave front and shocks due to the flow reflection from the walls. STAV-2D is based on the depth- and time-averaged mass and momentum equations for mixtures of water and sediment, understood as continua. It is closed in terms of flow resistance and capacity bedload discharge by a set of classic closure models and a specific high concentration formulation. The two-layer model is derived from layer-averaged Navier-Stokes equations, resulting in a system of layer-specific non-linear shallow-water equations, solved through explicit first or second-order schemes.
According to the experimental data for mass distribution, the

  13. Application of Bayesian techniques to model the burden of human salmonellosis attributable to U.S. food commodities at the point of processing: adaptation of a Danish model.

    PubMed

    Guo, Chuanfa; Hoekstra, Robert M; Schroeder, Carl M; Pires, Sara Monteiro; Ong, Kanyin Liane; Hartnett, Emma; Naugle, Alecia; Harman, Jane; Bennett, Patricia; Cieslak, Paul; Scallan, Elaine; Rose, Bonnie; Holt, Kristin G; Kissler, Bonnie; Mbandi, Evelyne; Roodsari, Reza; Angulo, Frederick J; Cole, Dana

    2011-04-01

    Mathematical models that estimate the proportion of foodborne illnesses attributable to food commodities at specific points in the food chain may be useful to risk managers and policy makers to formulate public health goals, prioritize interventions, and document the effectiveness of mitigations aimed at reducing illness. Using human surveillance data on laboratory-confirmed Salmonella infections from the Centers for Disease Control and Prevention and Salmonella testing data from U.S. Department of Agriculture Food Safety and Inspection Service's regulatory programs, we developed a point-of-processing foodborne illness attribution model by adapting the Hald Salmonella Bayesian source attribution model. Key model outputs include estimates of the relative proportions of domestically acquired sporadic human Salmonella infections resulting from contamination of raw meat, poultry, and egg products processed in the United States from 1998 through 2003. The current model estimates the relative contribution of chicken (48%), ground beef (28%), turkey (17%), egg products (6%), intact beef (1%), and pork (<1%) across 109 Salmonella serotypes found in food commodities at point of processing. While interpretation of the attribution estimates is constrained by data inputs, the adapted model shows promise and may serve as a basis for a common approach to attribution of human salmonellosis and food safety decision-making in more than one country. © Mary Ann Liebert, Inc.

  14. Hydraulic modeling of clay ceramic water filters for point-of-use water treatment.

    PubMed

    Schweitzer, Ryan W; Cunningham, Jeffrey A; Mihelcic, James R

    2013-01-02

    The acceptability of ceramic filters for point-of-use water treatment depends not only on the quality of the filtered water, but also on the quantity of water the filters can produce. This paper presents two mathematical models for the hydraulic performance of ceramic water filters under typical usage. A model is developed for two common filter geometries: paraboloid- and frustum-shaped. Both models are calibrated and evaluated by comparison to experimental data. The hydraulic models are able to predict the following parameters as functions of time: water level in the filter (h), instantaneous volumetric flow rate of filtrate (Q), and cumulative volume of water produced (V). The models' utility is demonstrated by applying them to estimate how the volume of water produced depends on factors such as the filter shape and the frequency of filling. Both models predict that the volume of water produced can be increased by about 45% if users refill the filter three times per day versus only once per day. Also, the models predict that filter geometry affects the volume of water produced: for two filters with equal volume, equal wall thickness, and equal hydraulic conductivity, a filter that is tall and thin will produce as much as 25% more water than one which is shallow and wide. We suggest that the models can be used as tools to help optimize filter performance.
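A minimal sketch in the spirit of the hydraulic models (not the authors' calibrated paraboloid/frustum model): if the filtrate flow obeys a Darcy-type law through the filter base, Q = K·A·h/L, the water level h can be stepped forward with explicit Euler integration. All parameter values below are hypothetical:

```python
K = 1e-6     # hydraulic conductivity, m/s   (hypothetical)
L = 0.01     # wall thickness, m             (hypothetical)
A = 0.05     # base area, m^2                (hypothetical)
dt = 60.0    # Euler time step, s

def simulate(h0, hours):
    """Integrate water level h(t) and cumulative filtrate volume V(t)."""
    h, volume = h0, 0.0
    for _ in range(int(hours * 3600 / dt)):
        q = K * A * h / L             # instantaneous flow rate Q, m^3/s
        volume += q * dt              # cumulative volume V
        h = max(h - q * dt / A, 0.0)  # falling water level h
    return h, volume

h_end, v_day = simulate(h0=0.25, hours=24)
```

Because Q is proportional to h, refilling more often keeps the level (and hence the flow rate) high, consistent with the models' prediction that more frequent refilling increases daily output.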

  15. Point-spread function reconstruction in ground-based astronomy by l1-lp model.

    PubMed

    Chan, Raymond H; Yuan, Xiaoming; Zhang, Wenxing

    2012-11-01

    In ground-based astronomy, images of objects in outer space are acquired via ground-based telescopes. However, the imaging system is generally interfered with by atmospheric turbulence, and hence images so acquired are blurred with unknown point-spread function (PSF). To restore the observed images, the wavefront of light at the telescope's aperture is utilized to derive the PSF. A model with Tikhonov regularization has been proposed to find the high-resolution phase gradients by solving a least-squares system. Here we propose the l1-lp (p = 1, 2) model for reconstructing the phase gradients. This model can provide sharper edges in the gradients while removing noise. The minimization models can easily be solved by the Douglas-Rachford alternating direction method of multipliers, and the convergence rate is readily established. Numerical results are given to illustrate that the model can give better phase gradients and hence a more accurate PSF. As a result, the restored images are much more accurate when compared to the traditional Tikhonov regularization model.

  16. A global reference model of Curie-point depths based on EMAG2

    NASA Astrophysics Data System (ADS)

    Li, Chun-Feng; Lu, Yu; Wang, Jian

    2017-03-01

    In this paper, we use a robust inversion algorithm, which we have tested in many regional studies, to obtain the first global model of Curie-point depth (GCDM) from magnetic anomaly inversion based on fractal magnetization. Statistically, the oceanic Curie depth mean is smaller than the continental one, but continental Curie depths are almost bimodal, showing shallow Curie points in some old cratons. Oceanic Curie depths show modifications by hydrothermal circulation in young oceanic lithosphere and thermal perturbations in old oceanic lithosphere. Oceanic Curie depths also show a strong dependence on the spreading rate along active spreading centers. Curie depths and heat flow are correlated, following optimal theoretical curves of average thermal conductivity K ≈ 2.0 W/(m·°C) for the ocean and K ≈ 2.5 W/(m·°C) for the continent. The calculated heat flow from Curie depths and large-interval gridding of measured heat flow both indicate that the global average heat flow is about 70.0 mW/m², leading to a global heat loss ranging from ~34.6 to 36.6 TW.

  19. Temperature distribution model for the semiconductor dew point detector

    NASA Astrophysics Data System (ADS)

    Weremczuk, Jerzy; Gniazdowski, Z.; Jachowicz, Ryszard; Lysko, Jan M.

    2001-08-01

    The simulation results for the temperature distribution in a new type of silicon dew point detector are presented in this paper. Calculations were performed using the SMACEF simulation program. In addition to the impedance detector used for dew point detection, the fabricated structures contained a resistive four-terminal thermometer and two heaters. Two detector structures, one located on a silicon membrane and the other on bulk material, are compared in this paper.

  20. Pairwise Interaction Extended Point-Particle (PIEP) model for multiphase jets and sedimenting particles

    NASA Astrophysics Data System (ADS)

    Liu, Kai; Balachandar, S.

    2017-11-01

    We perform a series of Euler-Lagrange direct numerical simulations (DNS) for multiphase jets and sedimenting particles. The forces the flow exerts on the particles in these two-way coupled simulations are computed using the Basset-Boussinesq-Oseen (BBO) equations. These forces do not explicitly account for particle-particle interactions, even though such pairwise interactions induced by the perturbations from neighboring particles may be important, especially when the particle volume fraction is high. Such effects have been largely unaddressed in the literature. Here, we implement the Pairwise Interaction Extended Point-Particle (PIEP) model to simulate the effect of neighboring particle pairs. A simple collision model is also applied to avoid unphysical overlapping of solid spherical particles. The simulation results indicate that the PIEP model captures richer, more detailed motion of the dispersed phase (droplets and particles). Supported by the Office of Naval Research (ONR) Multidisciplinary University Research Initiative (MURI) project N00014-16-1-2617.

  1. The Point of Creative Frustration and the Creative Process: A New Look at an Old Model.

    ERIC Educational Resources Information Center

    Sapp, D. David

    1992-01-01

    This paper offers an extension of Graham Wallas' model of the creative process. It identifies periods of problem solving, incubation, and growth with specific points of initial idea inception, creative frustration, and illumination. Responses to creative frustration are described including denial, rationalization, acceptance of stagnation, and new…

  2. An aggregate method to calibrate the reference point of cumulative prospect theory-based route choice model for urban transit network

    NASA Astrophysics Data System (ADS)

    Zhang, Yufeng; Long, Man; Luo, Sida; Bao, Yu; Shen, Hanxia

    2015-12-01

    Transit route choice models are a key technology for public transit systems planning and management. Traditional route choice models are mostly based on expected utility theory, which has an evident shortcoming: it cannot accurately portray travelers' subjective route choice behavior, because their risk preferences are not taken into consideration. Cumulative prospect theory (CPT), a comparatively new theory, can be used to describe travelers' decision-making under uncertain transit supply and the risk preferences of multiple traveler types. The method used to calibrate the reference point, a key parameter of a CPT-based transit route choice model, determines the precision of the model to a great extent. In this paper, a new method is put forward to obtain the value of the reference point, combining theoretical calculation with field investigation results. Compared with the traditional method, the new method improves the quality of the CPT-based model by more accurately simulating travelers' route choice behavior, based on a transit trip investigation from Nanjing City, China. The proposed method is of great significance to sound transit planning and management and, to some extent, remedies the defect that obtaining the reference point has been based solely on qualitative analysis.
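The role of the reference point can be illustrated with the standard CPT value function, using the widely cited Tversky-Kahneman parameters (α = β = 0.88, loss aversion λ = 2.25); the outcomes below are arbitrary units, not calibrated Nanjing data:

```python
ALPHA, BETA, LAMBDA = 0.88, 0.88, 2.25  # standard Tversky-Kahneman values

def cpt_value(outcome, reference):
    """CPT value of an outcome, coded as a gain or loss
    relative to the reference point."""
    x = outcome - reference
    if x >= 0:
        return x ** ALPHA              # concave over gains
    return -LAMBDA * (-x) ** BETA      # convex and steeper over losses

# The same 5-unit deviation is felt asymmetrically around the reference:
gain = cpt_value(35, 30)   # 5 units above the reference point
loss = cpt_value(25, 30)   # 5 units below the reference point
```

Because the function is kinked at the reference point, shifting the calibrated reference value changes which routes are coded as gains versus losses, which is why its calibration drives model precision.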

  3. Point kernel calculations of skyshine exposure rates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roseberry, M.L.; Shultis, J.K.

    1982-02-01

    A simple point kernel model is presented for the calculation of skyshine exposure rates arising from the atmospheric reflection of gamma radiation produced by a vertically collimated or a shielded point source. This model is shown to be in good agreement with benchmark experimental data from a ⁶⁰Co source for distances out to 700 m.
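A generic point-kernel calculation looks like the following sketch (this illustrates the general technique, not the paper's skyshine formulation; all constants are hypothetical):

```python
import math

def point_kernel_flux(S, mu, r, k=1.0):
    """Flux from an isotropic point source of strength S at distance r:
    geometric 1/(4*pi*r^2) spreading, exponential attenuation exp(-mu*r),
    and a simple linear buildup factor B = 1 + k*mu*r for scattered
    radiation (all values hypothetical)."""
    buildup = 1.0 + k * mu * r
    return S * buildup * math.exp(-mu * r) / (4.0 * math.pi * r * r)

# Flux falls off monotonically out to the 700 m range discussed above:
fluxes = [point_kernel_flux(S=1e10, mu=0.006, r=r)
          for r in (100.0, 400.0, 700.0)]
```

A skyshine model additionally folds in the atmospheric reflection geometry, but the attenuation-plus-buildup kernel above is the basic building block.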

  4. Modeling strategies for pharmaceutical blend monitoring and end-point determination by near-infrared spectroscopy.

    PubMed

    Igne, Benoît; de Juan, Anna; Jaumot, Joaquim; Lallemand, Jordane; Preys, Sébastien; Drennen, James K; Anderson, Carl A

    2014-10-01

    The implementation of a blend monitoring and control method based on a process analytical technology such as near-infrared spectroscopy requires the selection and optimization of numerous criteria that affect the monitoring outputs and the expected blend end-point. Using a five-component formulation, the present article contrasts the modeling strategies and end-point determination of a traditional quantitative method, based on predicting the blend parameters with partial least-squares regression, with a qualitative strategy based on principal component analysis, Hotelling's T², and residual distance to the model, called Prototype. The possibility of monitoring and controlling blend homogeneity with multivariate curve resolution was also assessed. The implementation of the above methods was tested with designed experiments (varying the amounts of active ingredient and excipients) and with normal operating condition samples (nominal concentrations of the active ingredient and excipients). The impact of the criteria used to stop the blends (related to precision and/or accuracy) was assessed. Results demonstrated that while all methods showed similarities in their outputs, some approaches were preferred for decision making. The selectivity of regression-based methods was also contrasted with the capacity of qualitative methods to determine the homogeneity of the entire formulation. Copyright © 2014. Published by Elsevier B.V.
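The Hotelling's T² statistic used in the qualitative strategy can be sketched as follows; this minimal version assumes scores from an already-fitted PCA model and is not the Prototype implementation itself:

```python
import numpy as np

def hotelling_t2(score, train_scores):
    """Hotelling's T^2 of one new PCA score vector against calibration scores.

    The Mahalanobis-type distance is taken in the score space of a PCA model
    assumed to be fitted beforehand; an in-control blend stays below a
    threshold derived from the calibration set.
    """
    cov = np.atleast_2d(np.cov(train_scores, rowvar=False))
    centered = score - train_scores.mean(axis=0)
    return float(centered @ np.linalg.inv(cov) @ centered)
```

A score vector near the calibration mean yields T² near zero; a drifting blend pushes T² up, signalling that the end-point criterion is not yet met.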

  5. Estimating the physicochemical properties of polyhalogenated aromatic and aliphatic compounds using UPPER: part 1. Boiling point and melting point.

    PubMed

    Admire, Brittany; Lian, Bo; Yalkowsky, Samuel H

    2015-01-01

    The UPPER (Unified Physicochemical Property Estimation Relationships) model uses enthalpic and entropic parameters to estimate 20 biologically relevant properties of organic compounds. The model has been validated by Lian and Yalkowsky on a data set of 700 hydrocarbons. The aim of this work is to expand the UPPER model to estimate the boiling and melting points of polyhalogenated compounds. In this work, 19 new group descriptors are defined and used to predict the transition temperatures of an additional 1288 compounds. The boiling points of 808 and the melting points of 742 polyhalogenated compounds are predicted with average absolute errors of 13.56 K and 25.85 K, respectively. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Multi-scale quantum point contact model for filamentary conduction in resistive random access memories devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lian, Xiaojuan, E-mail: xjlian2005@gmail.com; Cartoixà, Xavier; Miranda, Enrique

    2014-06-28

    We depart from first-principles simulations of electron transport along paths of oxygen vacancies in HfO₂ to reformulate the Quantum Point Contact (QPC) model in terms of a bundle of such vacancy paths. By doing this, the number of model parameters is reduced and a much clearer link between the microscopic structure of the conductive filament (CF) and its electrical properties can be provided. The new multi-scale QPC model is applied to two different HfO₂-based devices operated in the unipolar and bipolar resistive switching (RS) modes. Extraction of the QPC model parameters from a statistically significant number of CFs allows revealing significant structural differences in the CF of these two types of devices and RS modes.

  7. Automatic Method for Building Indoor Boundary Models from Dense Point Clouds Collected by Laser Scanners

    PubMed Central

    Valero, Enrique; Adán, Antonio; Cerrada, Carlos

    2012-01-01

    In this paper we present a method that automatically yields Boundary Representation Models (B-rep) for indoors after processing dense point clouds collected by laser scanners from key locations through an existing facility. Our objective is particularly focused on providing single models which contain the shape, location and relationship of primitive structural elements of inhabited scenarios such as walls, ceilings and floors. We propose a discretization of the space in order to accurately segment the 3D data and generate complete B-rep models of indoors in which faces, edges and vertices are coherently connected. The approach has been tested in real scenarios with data coming from laser scanners yielding promising results. We have deeply evaluated the results by analyzing how reliably these elements can be detected and how accurately they are modeled. PMID:23443369

  8. Children with Autism Wearing Action Cameras: Changing Parent/Child Interactions Using Point-of-View Video Modeling

    ERIC Educational Resources Information Center

    Stump, Keenan C.

    2017-01-01

    My dissertation research involves the implementation of a parent-provided point-of-view modeling (POVM) intervention created to improve social interaction between parents and their children with autism spectrum disorder (ASD). A series of studies ultimately lead to my dissertation study. The first manuscript entitled "Autism-Related Insurance…

  9. Reconstruction of measurable three-dimensional point cloud model based on large-scene archaeological excavation sites

    NASA Astrophysics Data System (ADS)

    Zhang, Chun-Sen; Zhang, Meng-Meng; Zhang, Wei-Xing

    2017-01-01

    This paper outlines a low-cost, user-friendly photogrammetric technique with nonmetric cameras to obtain digital sequence images of excavation sites, based on photogrammetry and computer vision. Digital camera calibration, automatic aerial triangulation, image feature extraction, image sequence matching, and dense digital differential rectification, combined with a number of global control points at the excavation site, are used to reconstruct high-precision measurable three-dimensional (3-D) models. Using the acrobatic figurines in the Qin Shi Huang mausoleum excavation as an example, our method solves the problems of small base-to-height ratio, high inclination, unstable altitudes, and significant ground elevation changes that affect image matching. Compared to 3-D laser scanning, the 3-D color point cloud obtained by this method maintains the same visual result and has the advantages of low project cost, simple data processing, and high accuracy. Structure-from-motion (SfM) is often used to reconstruct 3-D models of large scenes but yields lower accuracy when reconstructing small scenes at close range. Results indicate that this method quickly achieves 3-D reconstruction of large archaeological sites and produces orthophotos of the heritage site distribution, providing a scientific basis for accurate location of cultural relics, archaeological excavations, investigation, and site protection planning. The proposed method thus has broad application value.

  10. Superposition and alignment of labeled point clouds.

    PubMed

    Fober, Thomas; Glinca, Serghei; Klebe, Gerhard; Hüllermeier, Eyke

    2011-01-01

    Geometric objects are often represented approximately in terms of a finite set of points in three-dimensional Euclidean space. In this paper, we extend this representation to what we call labeled point clouds. A labeled point cloud is a finite set of points, where each point is not only associated with a position in three-dimensional space, but also with a discrete class label that represents a specific property. This type of model is especially suitable for modeling biomolecules such as proteins and protein binding sites, where a label may represent an atom type or a physico-chemical property. Proceeding from this representation, we address the question of how to compare two labeled point clouds in terms of their similarity. Using fuzzy modeling techniques, we develop a suitable similarity measure as well as an efficient evolutionary algorithm to compute it. Moreover, we consider the problem of establishing an alignment of the structures in the sense of a one-to-one correspondence between their basic constituents. From a biological point of view, alignments of this kind are of great interest, since mutually corresponding molecular constituents offer important information about evolution and heredity, and can also serve as a means to explain a degree of similarity. In this paper, we therefore develop a method for computing pairwise or multiple alignments of labeled point clouds. To this end, we proceed from an optimal superposition of the corresponding point clouds and construct an alignment which is as much as possible in agreement with the neighborhood structure established by this superposition. We apply our methods to the structural analysis of protein binding sites.
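A crisp (non-fuzzy) stand-in for such a label-aware similarity can be sketched as follows; the Gaussian kernel width sigma and the nearest-same-label matching rule are illustrative assumptions, not the paper's fuzzy measure:

```python
import numpy as np

def labeled_similarity(cloud_a, cloud_b, sigma=1.0):
    """Toy symmetric similarity between two labeled point clouds.

    Each cloud is a list of (xyz array, label) pairs. Every point is matched
    to the nearest point of the same label in the other cloud and scored with
    a Gaussian of the distance; the mean over both directions gives a score
    in [0, 1]. A crisp stand-in for the paper's fuzzy similarity measure.
    """
    def one_way(src, dst):
        scores = []
        for p, lab in src:
            same = [q for q, l in dst if l == lab]
            if not same:
                scores.append(0.0)  # no same-label partner at all
                continue
            d = min(np.linalg.norm(p - q) for q in same)
            scores.append(np.exp(-(d / sigma) ** 2))
        return np.mean(scores)
    return 0.5 * (one_way(cloud_a, cloud_b) + one_way(cloud_b, cloud_a))
```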

  11. Fixed points, stability, and intermittency in a shell model for advection of passive scalars

    PubMed

    Kockelkoren; Jensen

    2000-08-01

    We investigate the fixed points of a shell model for the turbulent advection of passive scalars introduced in Jensen, Paladin, and Vulpiani [Phys. Rev. A 45, 7214 (1992)]. The passive scalar field is driven by the velocity field of the popular Gledzer-Ohkitani-Yamada (GOY) shell model. The scaling behavior of the static solutions is found to differ significantly from Obukhov-Corrsin scaling θ_n ~ k_n^(-1/3), which is only recovered in the limit where the diffusivity vanishes, D → 0. From the eigenvalue spectrum we show that any perturbation in the scalar will always damp out, i.e., the eigenvalues of the scalar are negative and are decoupled from the eigenvalues of the velocity. We estimate Lyapunov exponents and the intermittency parameters using a definition proposed by Benzi, Paladin, Parisi, and Vulpiani [J. Phys. A 18, 2157 (1985)]. The full model is found to be as chaotic as the GOY model, measured by the maximal Lyapunov exponent, but is more intermittent.

  12. Atmospheric neutral points outside of the principal plane. [points of vanished skylight polarization

    NASA Technical Reports Server (NTRS)

    Fraser, R. S.

    1981-01-01

    It is noted that the positions in the sky where the skylight is unpolarized, that is, the neutral points, are in most cases located in the vertical plane through the sun (the principal plane). Points have been observed outside the principal plane (Soret, 1888) when the plane intersected a lake or sea. Here, the neutral points were located at an azimuth of about 15 deg from the sun and near the almucantar through the sun. In order to investigate the effects of water surface and aerosols in the neutral point positions, the positions are computed for models of the earth-atmosphere system that simulate the observational conditions. The computed and measured positions are found to agree well. While previous observations provided only qualitative information on the degree of polarization, it is noted that the computations provide details concerning the polarization parameters.

  13. Quantum chemical determination of Young's modulus of lignin. Calculations on a beta-O-4' model compound.

    PubMed

    Elder, Thomas

    2007-11-01

    The calculation of Young's modulus of lignin has been examined by subjecting a dimeric model compound to strain, coupled with the determination of energy and stress. The computational results, derived from quantum chemical calculations, are in agreement with available experimental results. Changes in geometry indicate that modifications in dihedral angles occur in response to linear strain. At larger levels of strain, bond rupture is evidenced by abrupt changes in energy, structure, and charge. Based on the current calculations, the bond scission may be occurring through a homolytic reaction between aliphatic carbon atoms. These results may have implications in the reactivity of lignin especially when subjected to processing methods that place large mechanical forces on the structure.

  14. Chemical & Biological Point Detection Decontamination

    DTIC Science & Technology

    2002-04-01

    high priority in biological defense. Research on multivalent assays is also ongoing. Biased libraries, generated from immunized animals, or unbiased ...2003 TBD decontamination and modeling and simulation I I The Chem-Bio Point Detection Roadmap The summary level updated and expanded Bio Point... Molecular Imprinted Polymer Sensor, Dendrimer-based Antibody Assays, Pyrolysis-GC-ion mobility spectrometry, and surface enhanced Raman spectroscopy. Data

  15. Detecting determinism from point processes.

    PubMed

    Andrzejak, Ralph G; Mormann, Florian; Kreuz, Thomas

    2014-12-01

    The detection of a nonrandom structure from experimental data can be crucial for the classification, understanding, and interpretation of the generating process. We here introduce a rank-based nonlinear predictability score to detect determinism from point process data. Thanks to its modular nature, this approach can be adapted to whatever signature in the data one considers indicative of deterministic structure. After validating our approach using point process signals from deterministic and stochastic model dynamics, we show an application to neuronal spike trains recorded in the brain of an epilepsy patient. While we illustrate our approach in the context of temporal point processes, it can be readily applied to spatial point processes as well.
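A simplified stand-in for such a predictability score can be sketched with nearest-neighbour prediction on inter-event intervals; the scoring rule below is illustrative, not the rank-based score of the paper:

```python
import numpy as np

def predictability_score(intervals):
    """Toy nearest-neighbour predictability score for inter-event intervals.

    Each interval's successor is predicted as the successor of its nearest
    neighbour elsewhere in the series; the score is the negative mean absolute
    prediction error (higher = more deterministic). A crisp stand-in for the
    paper's rank-based nonlinear predictability score.
    """
    x = np.asarray(intervals[:-1], float)
    y = np.asarray(intervals[1:], float)
    err = 0.0
    for i in range(len(x)):
        d = np.abs(x - x[i])
        d[i] = np.inf          # exclude self-match
        j = int(np.argmin(d))
        err += abs(y[j] - y[i])
    return -err / len(x)
```

A deterministic interval series (here, iterates of the logistic map) scores markedly higher than a shuffled copy of itself, which is the signature the score is meant to detect.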

  16. Modeling the Global Coronal Field with Simulated Synoptic Magnetograms from Earth and the Lagrange Points L3, L4, and L5

    NASA Astrophysics Data System (ADS)

    Petrie, Gordon; Pevtsov, Alexei; Schwarz, Andrew; DeRosa, Marc

    2018-06-01

    The solar photospheric magnetic flux distribution is key to structuring the global solar corona and heliosphere. Regular full-disk photospheric magnetogram data are therefore essential to our ability to model and forecast heliospheric phenomena such as space weather. However, our spatio-temporal coverage of the photospheric field is currently limited by our single vantage point at/near Earth. In particular, the polar fields play a leading role in structuring the large-scale corona and heliosphere, but each pole is unobservable for > 6 months per year. Here we model the possible effect of full-disk magnetogram data from the Lagrange points L4 and L5, each extending longitude coverage by 60°. Adding data also from the more distant point L3 extends the longitudinal coverage much further. The additional vantage points also improve the visibility of the globally influential polar fields. Using a flux-transport model for the solar photospheric field, we model full-disk observations from Earth/L1, L3, L4, and L5 over a solar cycle, construct synoptic maps using a novel weighting scheme adapted for merging magnetogram data from multiple viewpoints, and compute potential-field models for the global coronal field. Each additional viewpoint brings the maps and models into closer agreement with the reference field from the flux-transport simulation, with particular improvement at polar latitudes, the main source of the fast solar wind.

  17. Glue detection based on teaching points constraint and tracking model of pixel convolution

    NASA Astrophysics Data System (ADS)

    Geng, Lei; Ma, Xiao; Xiao, Zhitao; Wang, Wen

    2018-01-01

    On-line glue detection based on machine vision is significant for rust protection and strengthening in car production. Shadow stripes caused by reflected light and the unevenness of the car's inside front cover reduce the accuracy of glue detection. In this paper, we propose an effective algorithm to distinguish the edges of the glue from the shadow stripes. Teaching points are utilized to calculate the slope between each pair of adjacent points. Then a tracking model based on pixel convolution along the motion direction is designed to segment several local rectangular regions, using a distance that sets the height of each rectangular region. Pixel convolution along the motion direction is applied to extract the edges of the glue in each local rectangular region. A dataset with varying illumination and stripes of differing shape complexity, comprising 500 thousand images captured by the camera of the glue gun machine, is used to evaluate the proposed method. Experimental results demonstrate that the proposed method can detect the edges of the glue accurately. The shadow stripes are distinguished and removed effectively. Our method achieves 99.9% accuracy on the image dataset.

  18. Estimating the melting point, entropy of fusion, and enthalpy of ...

    EPA Pesticide Factsheets

    The entropies of fusion, enthalpies of fusion, and melting points of organic compounds can be estimated through three models developed using the SPARC (SPARC Performs Automated Reasoning in Chemistry) platform. The entropy of fusion is modeled through a combination of interaction terms and physical descriptors. The enthalpy of fusion is modeled as a function of the entropy of fusion, the boiling point, and the flexibility of the molecule. The melting point model is the enthalpy of fusion divided by the entropy of fusion. These models were developed in part to improve SPARC's vapor pressure and solubility models, and have been tested on 904 unique compounds. The entropy model has an RMS error of 12.5 J·mol⁻¹·K⁻¹, the enthalpy model 4.87 kJ·mol⁻¹, and the melting point model 54.4 °C. Published in the journal SAR and QSAR in Environmental Research
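The melting point relation is simple enough to state directly. The sketch below uses approximate literature values for benzene (ΔH_fus ≈ 9870 J/mol, ΔS_fus ≈ 35.4 J/(mol·K)) as an illustration, not SPARC's estimated inputs:

```python
def melting_point_k(enthalpy_fusion, entropy_fusion):
    """Melting point (K) as the enthalpy of fusion (J/mol) divided by the
    entropy of fusion (J/(mol K)), the relation the SPARC model uses."""
    return enthalpy_fusion / entropy_fusion
```

With the benzene inputs above the relation returns about 278.8 K, against a measured melting point of 278.7 K.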

  19. Point-to-point connectivity prediction in porous media using percolation theory

    NASA Astrophysics Data System (ADS)

    Tavagh-Mohammadi, Behnam; Masihi, Mohsen; Ganjeh-Ghazvini, Mostafa

    2016-10-01

    The connectivity between two points in porous media is important for evaluating hydrocarbon recovery in underground reservoirs or toxic migration in waste disposal. For example, the connectivity between a producer and an injector in a hydrocarbon reservoir impacts the fluid dispersion throughout the system. The conventional approach, flow simulation, is computationally very expensive and time consuming. An alternative method employs percolation theory. The classical percolation approach investigates the connectivity between two lines (representing the wells) in 2D cross-sectional models, whereas we look for the connectivity between two points (representing the wells) in 2D areal models. In this study, site percolation is used to determine the fraction of permeable regions connected between two cells at various occupancy probabilities and system sizes. The master curves of mean connectivity and its uncertainty are then generated by finite-size scaling. The results help to predict well-to-well connectivity without the need for any further simulation.
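The point-to-point site-percolation estimate can be sketched as a Monte Carlo over random occupancy grids with a breadth-first search between the two cells; the grid size, occupancy probability, and corner-to-corner well placement below are illustrative choices:

```python
import random
from collections import deque

def connected(grid, a, b):
    """Breadth-first search over occupied sites: is cell a connected to b?"""
    n = len(grid)
    if not (grid[a[0]][a[1]] and grid[b[0]][b[1]]):
        return False
    seen, q = {a}, deque([a])
    while q:
        r, c = q.popleft()
        if (r, c) == b:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n and grid[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                q.append((nr, nc))
    return False

def connection_fraction(n, p, trials=200, seed=0):
    """Monte Carlo estimate of the point-to-point connection probability
    between opposite corners of an n-by-n grid at occupancy probability p."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        grid = [[rng.random() < p for _ in range(n)] for _ in range(n)]
        if connected(grid, (0, 0), (n - 1, n - 1)):
            hits += 1
    return hits / trials
```

Sweeping p and n with this estimator traces out exactly the kind of connectivity curves that the abstract's finite-size scaling collapses onto master curves.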

  20. Arctic climate tipping points.

    PubMed

    Lenton, Timothy M

    2012-02-01

    There is widespread concern that anthropogenic global warming will trigger Arctic climate tipping points. The Arctic has a long history of natural, abrupt climate changes, which together with current observations and model projections, can help us to identify which parts of the Arctic climate system might pass future tipping points. Here the climate tipping points are defined, noting that not all of them involve bifurcations leading to irreversible change. Past abrupt climate changes in the Arctic are briefly reviewed. Then, the current behaviour of a range of Arctic systems is summarised. Looking ahead, a range of potential tipping phenomena are described. This leads to a revised and expanded list of potential Arctic climate tipping elements, whose likelihood is assessed, in terms of how much warming will be required to tip them. Finally, the available responses are considered, especially the prospects for avoiding Arctic climate tipping points.

  1. Point vortex modelling of the wake dynamics behind asymmetric vortex generator arrays

    NASA Astrophysics Data System (ADS)

    Baldacchino, D.; Ferreira, C.; Ragni, D.; van Bussel, G. J. W.

    2016-09-01

    In this work, we present a simple inviscid point vortex model to study the dynamics of asymmetric vortex rows, as might appear behind misaligned vortex generator vanes. Starting from the existing solution of the infinite vortex cascade, a numerical model of four base-vortices is chosen to represent two primary counter-rotating vortex pairs and their mirror plane images, introducing the vortex strength ratio as a free parameter. The resulting system of equations is also defined in terms of the vortex row separation and the qualitative features of the ensuing motion are mapped. A translating and orbiting regime are identified for different cascade separations. The latter occurs for all unequal strength vortex pairs. Thus, the motion is further classified by studying the cyclic behaviour of the orbiting regime and it is shown that for small mismatches in vortex strength, the orbiting length and time scales are sufficiently large as to appear, in the near wake, as translational (non-orbiting). However, for larger mismatches in vortex strength, the orbiting motion approaches the order of the starting height of the vortex. Comparisons between experimental data and the potential flow model show qualitative agreement whilst viscous effects account for the major discrepancies. Despite this, the model captures the orbital mode observed in the measurements and provides an impetus for considering the impact of these complex interactions on vortex generator designs.
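The underlying point-vortex dynamics can be sketched directly from the Biot-Savart kernel; the two-vortex configuration in the check below is a generic equal-strength pair orbiting at fixed separation, not the four-vortex cascade system of the paper:

```python
import numpy as np

def vortex_velocities(pos, gamma):
    """Velocities induced on 2D point vortices by the Biot-Savart kernel."""
    vel = np.zeros_like(pos)
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            dx, dy = pos[i] - pos[j]
            r2 = dx * dx + dy * dy
            # positive circulation induces counterclockwise motion
            vel[i] += gamma[j] / (2.0 * np.pi * r2) * np.array([-dy, dx])
    return vel

def step(pos, gamma, dt):
    """One classical RK4 step of the point-vortex ODEs."""
    k1 = vortex_velocities(pos, gamma)
    k2 = vortex_velocities(pos + 0.5 * dt * k1, gamma)
    k3 = vortex_velocities(pos + 0.5 * dt * k2, gamma)
    k4 = vortex_velocities(pos + dt * k3, gamma)
    return pos + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```

For an equal-strength co-rotating pair the separation and the centre of vorticity are conserved, a useful sanity check before introducing the strength-ratio mismatch that produces the orbiting regime of the paper.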

  2. Critical points of metal vapors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khomkin, A. L., E-mail: alhomkin@mail.ru; Shumikhin, A. S.

    2015-09-15

    A new method is proposed for calculating the parameters of critical points and binodals for the vapor–liquid (insulator–metal) phase transition in vapors of metals with multielectron valence shells. The method is based on a model developed earlier for the vapors of alkali metals, atomic hydrogen, and exciton gas, proceeding from the assumption that the cohesion determining the basic characteristics of metals under normal conditions is also responsible for their properties in the vicinity of the critical point. It is proposed to calculate the cohesion of multielectron atoms using well-known scaling relations for the binding energy, which are constructed for most metals in the periodic table by processing the results of many numerical calculations. The adopted model allows the parameters of critical points and binodals for the vapor–liquid phase transition in metal vapors to be calculated using published data on the properties of metals under normal conditions. The parameters of critical points have been calculated for a large number of metals and show satisfactory agreement with experimental data for alkali metals and with available estimates for all other metals. Binodals of metals have been calculated for the first time.

  3. Network traffic behaviour near phase transition point

    NASA Astrophysics Data System (ADS)

    Lawniczak, A. T.; Tang, X.

    2006-03-01

    We explore packet traffic dynamics in a data network model near phase transition point from free flow to congestion. The model of data network is an abstraction of the Network Layer of the OSI (Open Systems Interconnect) Reference Model of packet switching networks. The Network Layer is responsible for routing packets across the network from their sources to their destinations and for control of congestion in data networks. Using the model we investigate spatio-temporal packets traffic dynamics near the phase transition point for various network connection topologies, and static and adaptive routing algorithms. We present selected simulation results and analyze them.

  4. Applicability of the single equivalent point dipole model to represent a spatially distributed bio-electrical source

    NASA Technical Reports Server (NTRS)

    Armoundas, A. A.; Feldman, A. B.; Sherman, D. A.; Cohen, R. J.

    2001-01-01

    Although the single equivalent point dipole model has been used to represent well-localised bio-electrical sources, in realistic situations the source is distributed. Consequently, position estimates of point dipoles determined by inverse algorithms suffer from systematic error due to the non-exact applicability of the inverse model. In realistic situations, this systematic error cannot be avoided, a limitation that is independent of the complexity of the torso model used. This study quantitatively investigates the intrinsic limitations in the assignment of a location to the equivalent dipole due to the distributed nature of the electrical source. To simulate arrhythmic activity in the heart, a model of a wave of depolarisation spreading from a focal source over the surface of a spherical shell is used. The activity is represented by a sequence of concentric belt sources (obtained by slicing the shell with a sequence of parallel plane pairs), with constant dipole moment per unit length (circumferentially) directed parallel to the propagation direction. The distributed source is represented by N dipoles at equal arc lengths along the belt. The sum of the dipole potentials is calculated at predefined electrode locations. The inverse problem involves finding a single equivalent point dipole that best reproduces the electrode potentials due to the distributed source, and is implemented by minimising the χ² per degree of freedom. It is found that the trajectory traced by the equivalent dipole is sensitive to the location of the spherical shell relative to the fixed electrodes. It is shown that this trajectory does not coincide with the sequence of geometrical centres of the consecutive belt sources. For distributed sources within a bounded spherical medium, displaced from the sphere's centre by 40% of the sphere's radius, it is found that the error in the equivalent dipole location varies from 3 to 20% for sources with size between 5 and 50% of the sphere's radius.
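The inverse step, finding the single equivalent dipole that best reproduces the electrode potentials, can be sketched as follows. Because the potential is linear in the dipole moment, each candidate position needs only a linear least-squares solve; the infinite homogeneous medium with unit conductivity is a simplifying assumption, not the paper's torso model:

```python
import numpy as np

def dipole_potential(r_dip, p, electrodes):
    """Potential of a point dipole at the electrode positions (infinite
    homogeneous medium, unit conductivity - a simplifying assumption)."""
    d = electrodes - r_dip
    r = np.linalg.norm(d, axis=1)
    return d @ p / (4.0 * np.pi * r ** 3)

def fit_dipole(v_meas, electrodes, candidates):
    """Best equivalent point dipole over a list of candidate positions.

    For each position the optimal moment follows from linear least squares;
    the position with the smallest chi^2 wins. A minimal sketch of the
    inverse procedure, not the paper's implementation.
    """
    best_chi2, best_pos, best_p = np.inf, None, None
    for r_dip in candidates:
        d = electrodes - r_dip
        r = np.linalg.norm(d, axis=1)
        A = d / (4.0 * np.pi * r ** 3)[:, None]
        p, *_ = np.linalg.lstsq(A, v_meas, rcond=None)
        chi2 = float(np.sum((A @ p - v_meas) ** 2))
        if chi2 < best_chi2:
            best_chi2, best_pos, best_p = chi2, r_dip, p
    return best_pos, best_p, best_chi2
```

With noiseless data from a true point dipole the fit recovers position and moment exactly; the systematic error the paper studies appears only once the forward source is distributed.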

  5. Degradation Analysis of NBR and Epichlorohydrin Rubber by New Micro Analysis Method

    NASA Astrophysics Data System (ADS)

    Katoh, Hisao; Kamoto, Ritsu; Murata, Jun

    The degradation analysis of NBR and epichlorohydrin rubber was carried out by infrared microspectroscopy (μ-IR) and micro-sampling mass spectrometry (μ-MS), which give information on the scission and crosslinking of rubber molecules. Samples were prepared by three different treatments: heat, ultraviolet (UV) irradiation, and electron beam (EB) irradiation. It was found for NBR vulcanizates that the heat treatment induced the oxidation, scission, and crosslinking of rubber molecules. The UV treatment induced chain scission and crosslinking accompanied by slight oxidation. The EB treatment enhanced crosslinking, while the extent of oxidation was negligible. For epichlorohydrin rubber vulcanizates, the heat treatment accelerated chain scission rather than crosslinking. On the other hand, oxidation and crosslinking were induced by the UV and EB treatments.

  6. A Unique Computational Algorithm to Simulate Probabilistic Multi-Factor Interaction Model Complex Material Point Behavior

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Abumeri, Galib H.

    2010-01-01

    The Multi-Factor Interaction Model (MFIM) is used to evaluate the divot weight (foam weight ejected) from the launch external tanks. The multi-factor model has sufficient degrees of freedom to evaluate a large number of factors that may contribute to the divot ejection. It also accommodates all interactions through its product form. Each factor has an exponent that satisfies only two points--the initial and final points. The exponent describes a monotonic path from the initial condition to the final one, and the exponent values are selected so that the described path makes sense in the absence of experimental data. In the present investigation, the data used were obtained by testing simulated specimens under launch conditions. Results show that the MFIM is an effective method of describing the divot weight ejected under the conditions investigated.
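The product form with per-factor exponents can be sketched generically; the normalization of each factor by its initial and final values below is illustrative, not the divot-weight factor set of the study:

```python
def mfim(factors, exponents):
    """Multi-Factor Interaction Model product form (generic sketch).

    Each factor is a (current, initial, final) triplet; its normalized term
    (final - current) / (final - initial) is raised to the factor's exponent,
    and all terms multiply together, so every factor interacts with every
    other through the product. The factor triplets here are hypothetical.
    """
    result = 1.0
    for (a, a0, af), e in zip(factors, exponents):
        result *= ((af - a) / (af - a0)) ** e
    return result
```

Each exponent pins only the initial and final points, as the abstract notes: the term equals 1 at the initial condition and 0 at the final one, with a monotonic path in between.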

  7. Temporally consistent segmentation of point clouds

    NASA Astrophysics Data System (ADS)

    Owens, Jason L.; Osteen, Philip R.; Daniilidis, Kostas

    2014-06-01

    We consider the problem of generating temporally consistent point cloud segmentations from streaming RGB-D data, where every incoming frame extends existing labels to new points or contributes new labels while maintaining the labels for pre-existing segments. Our approach generates an over-segmentation based on voxel cloud connectivity, where a modified k-means algorithm selects supervoxel seeds and associates similar neighboring voxels to form segments. Given the data stream from a potentially mobile sensor, we solve for the camera transformation between consecutive frames using a joint optimization over point correspondences and image appearance. The aligned point cloud may then be integrated into a consistent model coordinate frame. Previously labeled points are used to mask incoming points from the new frame, while new and previous boundary points extend the existing segmentation. We evaluate the algorithm on newly-generated RGB-D datasets.

  8. Effect of transverse vibrations of fissile nuclei on the angular and spin distributions of low-energy fission fragments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bunakov, V. E.; Kadmensky, S. G., E-mail: kadmensky@phys.vsu.ru; Lyubashevsky, D. E.

    2016-05-15

    It is shown that A. Bohr’s classic theory of angular distributions of fragments originating from low-energy fission should be supplemented with quantum corrections based on the involvement of a superposition of a very large number of angular momenta Lₘ in the description of the relative motion of fragments flying apart along the straight line coincident with the symmetry axis. It is revealed that quantum zero-point wriggling-type vibrations of the fissile system in the vicinity of its scission point are a source of these angular momenta and of the high fragment spins observed experimentally.

  9. Analytical volcano deformation modelling: A new and fast generalized point-source approach with application to the 2015 Calbuco eruption

    NASA Astrophysics Data System (ADS)

    Nikkhoo, M.; Walter, T. R.; Lundgren, P.; Prats-Iraola, P.

    2015-12-01

    Ground deformation at active volcanoes is one of the key precursors of volcanic unrest, monitored by InSAR and GPS techniques at high spatial and temporal resolution, respectively. Modelling of the observed displacements establishes the link between them and the underlying subsurface processes and volume change. The so-called Mogi model and the rectangular dislocation are two commonly applied analytical solutions that allow for quick interpretations based on the location, depth and volume change of pressurized spherical cavities and planar intrusions, respectively. Geological observations worldwide, however, suggest elongated, tabular or other non-equidimensional geometries for the magma chambers. How can these be modelled? Generalized models, such as Davis's point ellipsoidal cavity or the rectangular dislocation solutions, are geometrically limited and can barely improve the interpretation of data. We develop a new analytical artefact-free solution for a rectangular dislocation, which also possesses full rotational degrees of freedom. We construct a kinematic model in terms of three pairwise-perpendicular rectangular dislocations with a prescribed opening only. This model represents a generalized point source in the far field, and also performs as a finite dislocation model for planar intrusions in the near field. We show that, by calculating Eshelby's shape tensor, the far-field displacements and stresses of any arbitrary triaxial ellipsoidal cavity can be reproduced using this model. Regardless of its aspect ratios, the volume change of this model is simply the sum of the volume changes of the individual dislocations. Our model can be integrated in any inversion scheme as simply as the Mogi model, profiting at the same time from the advantages of a generalized point source. After evaluating our model by using a boundary element method code, we apply it to ground displacements of the 2015 Calbuco eruption, Chile, observed by the Sentinel-1

  10. User's Guide for the Agricultural Non-Point Source (AGNPS) Pollution Model Data Generator

    USGS Publications Warehouse

    Finn, Michael P.; Scheidt, Douglas J.; Jaromack, Gregory M.

    2003-01-01

    BACKGROUND: Throughout this user guide, we refer to datasets that we used in conjunction with the development of this software to support cartographic research and to produce the datasets needed to conduct that research. However, the software can be used with these datasets or with more 'generic' versions of data of the appropriate type. For example, throughout the guide we refer to national land cover data (NLCD) and digital elevation model (DEM) data from the U.S. Geological Survey (USGS) at a 30-m resolution, but any digital terrain model or land cover data at an appropriate resolution will produce results. Another key point to keep in mind is to use a consistent data resolution for all the datasets in each model run. The U.S. Department of Agriculture (USDA) developed the Agricultural Nonpoint Source (AGNPS) pollution model of watershed hydrology in response to the complex problem of managing nonpoint sources of pollution. AGNPS simulates the behavior of runoff, sediment, and nutrient transport from watersheds whose prime use is agriculture. The model operates on a cell basis and is a distributed-parameter, event-based model requiring 22 input parameters. Output parameters are grouped primarily into hydrology, sediment, and chemical output (Young and others, 1995). Elevation, land cover, and soil are the base data from which the 22 input parameters required by AGNPS are extracted. For automatic parameter extraction, follow the general process described in this guide: from the geospatial data, through the AGNPS Data Generator, to the input parameters required by the pollution model (Finn and others, 2002).

  11. Merging LIDAR digital terrain model with direct observed elevation points for urban flood numerical simulation

    NASA Astrophysics Data System (ADS)

    Arrighi, Chiara; Campo, Lorenzo

    2017-04-01

    In recent years, concern about the economic losses and casualties caused by urban floods has grown hand in hand with the numerical capability to simulate such events. The large amount of computational power needed to address the problem (simulating a flood in complex terrain such as a medium-to-large city) is only one of the issues. Others include the general lack of exhaustive observations during an event (exact extension, dynamics, water levels reached in different parts of the involved area), which are needed for calibration and validation of the model, the need to consider sewer effects, and the availability of a correct and precise description of the geometry of the problem. In large cities, topographic surveys are generally available as a set of discrete points, but a complete hydraulic simulation needs a detailed description of the terrain over the whole computational domain. LIDAR surveys can achieve this goal, providing a comprehensive description of the terrain, although they often lack precision. In this work an optimal merging of these two sources of geometric information, measured elevation points and a LIDAR survey, is proposed, taking into account the error variance of both. The procedure is applied to a flood-prone city over an area of approximately 35 square km, starting from a LIDAR DTM with a spatial resolution of 1 m and 13,000 measured points. The spatial pattern of the error (LIDAR vs. points) is analysed, and the merging method is tested with a series of jackknife procedures that consider different densities of the available points. A discussion of the results is provided.
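
    Merging two elevation sources by weighting each with its error variance, as the abstract describes, reduces in the simplest per-point case to inverse-variance weighting. A minimal sketch follows; the paper's actual merging scheme and its spatial error model are richer than this, and the example values are invented.

```python
def merge_elevations(z_lidar, var_lidar, z_point, var_point):
    """Inverse-variance weighted merge of two elevation estimates.

    The estimate with the smaller error variance receives the larger
    weight, and the merged variance is smaller than either input
    variance, so adding a source never degrades the estimate.
    """
    w_l = 1.0 / var_lidar
    w_p = 1.0 / var_point
    z = (w_l * z_lidar + w_p * z_point) / (w_l + w_p)
    var = 1.0 / (w_l + w_p)
    return z, var

# LIDAR says 52.30 m (sigma 15 cm); a survey point says 52.10 m (sigma 3 cm)
z, var = merge_elevations(52.30, 0.15 ** 2, 52.10, 0.03 ** 2)
```

    The merged value is pulled strongly toward the precise survey point, while the LIDAR value still contributes where no direct measurement exists.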

  12. Instantaneous and time-averaged dispersion and measurement models for estimation theory applications with elevated point source plumes

    NASA Technical Reports Server (NTRS)

    Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.

    1977-01-01

    Estimation theory, which originated in guidance and control research, is applied to the analysis of air quality measurements and atmospheric dispersion models to provide reliable area-wide air quality estimates. A method for low dimensional modeling (in terms of the estimation state vector) of the instantaneous and time-average pollutant distributions is discussed. In particular, the fluctuating plume model of Gifford (1959) is extended to provide an expression for the instantaneous concentration due to an elevated point source. Individual models are also developed for all parameters in the instantaneous and the time-average plume equations, including the stochastic properties of the instantaneous fluctuating plume.
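
    For reference, the time-averaged concentration downwind of an elevated point source is conventionally written as a Gaussian plume with ground reflection. The sketch below shows only that textbook baseline, not Gifford's fluctuating-plume extension developed in the paper; all numbers in the example are illustrative.

```python
import math

def gaussian_plume(y, z, Q, u, H, sig_y, sig_z):
    """Time-averaged Gaussian plume concentration for an elevated point
    source at effective height H, including ground reflection.

    Q : emission rate (g/s), u : wind speed (m/s),
    sig_y, sig_z : dispersion parameters (m) evaluated at the
    receptor's downwind distance. Returns concentration in g/m^3.
    """
    lateral = math.exp(-y * y / (2.0 * sig_y ** 2))
    vertical = (math.exp(-(z - H) ** 2 / (2.0 * sig_z ** 2))
                + math.exp(-(z + H) ** 2 / (2.0 * sig_z ** 2)))  # image source
    return Q / (2.0 * math.pi * u * sig_y * sig_z) * lateral * vertical

# Ground-level, centerline concentration under a 50 m effective stack height
c = gaussian_plume(y=0.0, z=0.0, Q=10.0, u=5.0, H=50.0, sig_y=80.0, sig_z=40.0)
```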

  13. Evaluation Model for Pavement Surface Distress on 3d Point Clouds from Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Aoki, K.; Yamamoto, K.; Shimamura, H.

    2012-07-01

    This paper proposes a methodology to evaluate pavement surface distress for maintenance planning of road pavement using 3D point clouds from a Mobile Mapping System (MMS). Maintenance planning for road pavement requires scheduled rehabilitation of damaged pavement sections to maintain a high level of service. The importance of performance-based infrastructure asset management founded on actual inspection data is recognized globally. Semi-automatic inspection systems that use instrumented vehicles to measure surface deterioration indexes such as cracking, rutting and IRI have already been introduced and are capable of continuously archiving pavement performance data. However, scheduled inspection with automatic measurement vehicles is costly, depending on the instruments' specification and the inspection interval, so cost-effectiveness makes implementation difficult, especially for local governments. Against this background, we propose methodologies for a simplified evaluation of the pavement surface and an assessment of damaged pavement sections using 3D point cloud data acquired for urban 3D modelling. The simplified evaluation results provide useful information for road administrators to identify pavement sections requiring detailed examination or immediate repair. In particular, the regularity of the 3D point cloud sequence was evaluated with Chow tests and F-tests, extracting the sections where a structural change in the coordinate values was most pronounced. Finally, the validity of the methodology was investigated through a case study using actual inspection data from local roads.
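
    The Chow test used above compares the residual sum of squares of one pooled linear fit against two segment-wise fits. A minimal sketch with simple straight-line regressions follows; the paper's exact regression setup is not given in the abstract, and the data in the usage example are synthetic.

```python
def _ols_rss(xs, ys):
    """Residual sum of squares of a simple least-squares line fit."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

def chow_statistic(x1, y1, x2, y2, k=2):
    """Chow F statistic for a structural break between two segments.

    k is the number of regression parameters (2 for a straight line).
    Large values indicate that separate fits explain the data much
    better than one pooled fit, i.e. a structural change.
    """
    rss_p = _ols_rss(x1 + x2, y1 + y2)   # pooled fit
    rss_1 = _ols_rss(x1, y1)
    rss_2 = _ols_rss(x2, y2)
    n = len(x1) + len(x2)
    return ((rss_p - rss_1 - rss_2) / k) / ((rss_1 + rss_2) / (n - 2 * k))

# A sudden jump between the segments yields a large F statistic:
f = chow_statistic([0, 1, 2, 3, 4], [0.1, 0.9, 2.1, 2.9, 4.1],
                   [5, 6, 7, 8, 9], [10.1, 10.9, 12.1, 12.9, 14.1])
```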

  14. A note on a boundary sine-Gordon model at the free-Fermion point

    NASA Astrophysics Data System (ADS)

    Murgan, Rajan

    2018-02-01

    We investigate the free-fermion point of a boundary sine-Gordon model with nondiagonal boundary interactions for the ground state, using auxiliary functions obtained from T-Q equations of a corresponding inhomogeneous open spin-1/2 XXZ chain with nondiagonal boundary terms. In particular, we obtain the Casimir energy. Our result for the Casimir energy is shown to agree with the result from the TBA approach. The analytical result for the effective central charge in the ultraviolet (UV) limit is also verified from plots of the effective central charge for intermediate values of the volume.

  15. A toxicological study of inhalable particulates in an industrial region of Lanzhou City, northwestern China: Results from plasmid scission assay

    NASA Astrophysics Data System (ADS)

    Xiao, Zhenghui; Shao, Longyi; Zhang, Ning; Wang, Jing; Chuang, Hsiao-Chi; Deng, Zhenzhen; Wang, Zhen; BéruBé, Kelly

    2014-09-01

    The city of Lanzhou in northwestern China experiences serious air pollution episodes in the form of PM10 characterized by high levels of heavy metals. The Xigu District is the industrial core of Lanzhou City and contains the largest petrochemical bases in western China. This study investigates the heavy metal composition and oxidative potential of airborne PM10 (particulate matter with an aerodynamic diameter of 10 μm or less) collected in Xigu District in the summer and winter of 2010. An in vitro plasmid scission assay (PSA) was employed to study the oxidative potential of the PM10, and inductively coupled plasma-mass spectrometry (ICP-MS) was used to examine heavy metal compositions. Transmission electron microscopy coupled with energy-dispersive X-ray spectrometry (TEM/EDX) was used to investigate elemental compositions and mixing states of the PM10. The average mass concentrations of PM10 collected in Xigu District were generally higher than the national standard for daily PM10 (150 μg/m3). Cr, Zn, Pb and Mn were the most abundant metals in the intact whole particles. Zn, Mn and As were the most abundant metals in the water-soluble fraction, while Cr, Pb, and V existed primarily in insoluble forms. TD20 values (i.e., the toxic dose of PM10 causing 20% plasmid DNA damage) varied considerably in both winter and summer (from 19 μg/mL to >1000 μg/mL) but were typically higher in summer, suggesting that the winter PM10 exhibited greater bioreactivity. In addition, the PM10 collected during a dust storm episode had the highest TD20 value and thus caused the least oxidative damage to supercoiled plasmid DNA, while the particles collected on a hazy day had the lowest TD20 value and thus caused the highest oxidative damage. The particles collected on the first day after snowfall and on a day of cold air intrusion exhibited minor oxidative potential (i.e., caused limited DNA damage). The water-soluble Zn, Mn, As, and

  16. Cloud point phenomena for POE-type nonionic surfactants in a model room temperature ionic liquid.

    PubMed

    Inoue, Tohru; Misono, Takeshi

    2008-10-15

    The cloud point phenomenon has been investigated for solutions of polyoxyethylene (POE)-type nonionic surfactants (C(12)E(5), C(12)E(6), C(12)E(7), C(10)E(6), and C(14)E(6)) in 1-butyl-3-methylimidazolium tetrafluoroborate (bmimBF(4)), a typical room temperature ionic liquid (RTIL). The cloud point, T(c), increases with elongation of the POE chain, while it decreases with increasing hydrocarbon chain length. This demonstrates that the solvophilicity and solvophobicity of the surfactants in the RTIL derive from the POE chain and the hydrocarbon chain, respectively. Compared with aqueous systems, the chain-length dependence of T(c) is larger in the RTIL system for both the POE and hydrocarbon chains; in particular, the hydrocarbon chain length affects T(c) much more strongly in the RTIL system than in equivalent aqueous systems. As in the much-studied aqueous systems, micellar growth is also observed in this RTIL solvent as the temperature approaches T(c). The cloud point curves have been analyzed using a Flory-Huggins-type model based on phase separation in polymer solutions.

  17. A complete tank test of a flying-boat hull with a pointed step -N.A.C.A. Model No. 22

    NASA Technical Reports Server (NTRS)

    Shoemaker, James M

    1934-01-01

    The results of a complete tank test of a model of a flying-boat hull of unconventional form, having a deep pointed step, are presented in this note. The advantage of the pointed-step type over the usual forms of flying-boat hulls with respect to resistance at high speeds is pointed out. A take-off example using the data from these tests is worked out, and the results are compared with those of an example in which the test data for a hull of the type in general use in the United States are applied to a flying boat having the same design specifications. A definite saving in take-off run is shown by the pointed-step type.

  18. A Point Rainfall Generator With Internal Storm Structure

    NASA Astrophysics Data System (ADS)

    Marien, J. L.; Vandewiele, G. L.

    1986-04-01

    A point rainfall generator is a probabilistic model for the time series of rainfall as observed at one geographical point. The main purpose of such a model is to generate long synthetic sequences of rainfall for simulation studies. The present generator is a continuous-time model based on 13.5 years of 10-min point rainfalls observed in Belgium and digitized with a resolution of 0.1 mm. It attempts to model, as accurately as possible, all features of the rainfall time series that are important for flood studies. The original aspects of the model are, on the one hand, the way in which storms are defined and, on the other hand, the theoretical model for the internal storm characteristics. The storm definition has the advantage that the important characteristics of successive storms are fully independent and very precisely modelled, even on time bases as small as 10 min. The model of the internal storm characteristics has a strong theoretical structure, which better justifies its extrapolation to severe storms, for which data are very sparse. This can be important when using the model to simulate severe flood events.

  19. PREMOR: a point reactor exposure model computer code for survey analysis of power plant performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vondy, D.R.

    1979-10-01

    The PREMOR computer code was written to exploit a simple two-group point nuclear reactor power plant model for survey analysis. Up to thirteen actinide, fourteen fission product, and one lumped absorber nuclide densities are followed over a reactor history. Successive feed batches are accounted for, with provision for one to twenty resident batches. The effect of exposing each of the batches to the same neutron flux is determined.

  20. The correlation function for density perturbations in an expanding universe. III The three-point and predictions of the four-point and higher order correlation functions

    NASA Technical Reports Server (NTRS)

    Mcclelland, J.; Silk, J.

    1978-01-01

    Higher-order correlation functions for the large-scale distribution of galaxies in space are investigated. It is demonstrated that the three-point correlation function observed by Peebles and Groth (1975) is not consistent with a distribution of perturbations that at present are randomly distributed in space. The two-point correlation function is shown to be independent of how the perturbations are distributed spatially, and a model of clustered perturbations is developed which incorporates a nonuniform perturbation distribution and which explains the three-point correlation function. A model with hierarchical perturbations incorporating the same nonuniform distribution is also constructed; it is found that this model also explains the three-point correlation function, but predicts different results for the four-point and higher-order correlation functions than does the model with clustered perturbations. It is suggested that the model of hierarchical perturbations might be explained by the single assumption of having density fluctuations or discrete objects all of the same mass randomly placed at some initial epoch.

  1. Dew point measurement technique utilizing fiber cut reflection

    NASA Astrophysics Data System (ADS)

    Kostritskii, S. M.; Dikevich, A. A.; Korkishko, Yu. N.; Fedorov, V. A.

    2009-05-01

    A fiber optical dew point hygrometer based on the change of the reflection coefficient at a fiber cut has been developed and examined. We propose and verify a model of the condensation detector's operating principle. Experimental frost point measurements have been performed on air samples with different frost points.

  2. Grey-Markov prediction model based on background value optimization and central-point triangular whitenization weight function

    NASA Astrophysics Data System (ADS)

    Ye, Jing; Dang, Yaoguo; Li, Bingjun

    2018-01-01

    The Grey-Markov forecasting model is a combination of a grey prediction model and a Markov chain, and shows clear advantages for data sequences that are non-stationary and volatile. However, the state division in the traditional Grey-Markov forecasting model is mostly based on subjectively chosen real-valued boundaries, which directly affects the accuracy of the forecast values. To address this, this paper introduces the central-point triangular whitenization weight function into the state division, calculating the possibility of each observed value lying in each state and thereby reflecting the preference degrees among states in an objective way. In addition, background value optimization is applied to the traditional grey model to generate better-fitting data. By these means, an improved Grey-Markov forecasting model is built. Finally, taking grain production in Henan Province as an example, the model's validity is verified by comparison with GM(1,1) based on background value optimization and with the traditional Grey-Markov forecasting model.
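
    The GM(1,1) model that serves as the baseline here can be sketched compactly. The version below uses the standard background value z(k) = 0.5(x1(k) + x1(k-1)); the paper's optimized background value and Markov correction are not reproduced, and the example series is synthetic.

```python
import math

def gm11(x0, steps=1):
    """Classical GM(1,1) grey forecast for a positive data sequence.

    Returns the fitted sequence plus `steps` forecasts beyond it.
    """
    n = len(x0)
    # 1-AGO: first-order accumulated generating operation
    x1 = [x0[0]]
    for v in x0[1:]:
        x1.append(x1[-1] + v)
    # Standard background value (the quantity optimized in the paper)
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]
    # Least-squares estimate of a, b in x0(k) = -a*z(k) + b
    m = n - 1
    sz = sum(z)
    szz = sum(v * v for v in z)
    sy = sum(x0[1:])
    szy = sum(v * y for v, y in zip(z, x0[1:]))
    den = m * szz - sz * sz
    a = (sz * sy - m * szy) / den
    b = (szz * sy - sz * szy) / den

    def x1_hat(k):  # whitened response, 0-based time index
        return (x0[0] - b / a) * math.exp(-a * k) + b / a

    out = [x0[0]]
    for k in range(1, n + steps):
        out.append(x1_hat(k) - x1_hat(k - 1))  # inverse AGO
    return out
```

    On near-exponential data the model is very accurate; the Markov correction in the paper targets the volatile residuals that remain otherwise.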

  3. Four-point bending as a method for quantitatively evaluating spinal arthrodesis in a rat model.

    PubMed

    Robinson, Samuel T; Svet, Mark T; Kanim, Linda A; Metzger, Melodie F

    2015-02-01

    The most common method of evaluating the success (or failure) of rat spinal fusion procedures is manual palpation testing. Whereas manual palpation provides only a subjective binary answer (fused or not fused) regarding the success of a fusion surgery, mechanical testing can provide more quantitative data by assessing variations in strength among treatment groups. We here describe a mechanical testing method to quantitatively assess single-level spinal fusion in a rat model, to improve on the binary and subjective nature of manual palpation as an end point for fusion-related studies. We tested explanted lumbar segments from Sprague-Dawley rat spines after single-level posterolateral fusion procedures at L4-L5. Segments were classified as 'not fused,' 'restricted motion,' or 'fused' by using manual palpation testing. After thorough dissection and potting of the spine, 4-point bending in flexion then was applied to the L4-L5 motion segment, and stiffness was measured as the slope of the moment-displacement curve. Results demonstrated statistically significant differences in stiffness among all groups, which were consistent with preliminary grading according to manual palpation. In addition, the 4-point bending results provided quantitative information regarding the quality of the bony union formed and therefore enabled the comparison of fused specimens. Our results demonstrate that 4-point bending is a simple, reliable, and effective way to describe and compare results among rat spines after fusion surgery.
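
    The stiffness measure described above, the slope of the moment-displacement curve, amounts to a least-squares slope over the linear loading region. A minimal sketch follows; the load geometry and the numbers in the usage example are illustrative, not the study's data.

```python
def bending_stiffness(displacements, moments):
    """Flexural stiffness as the least-squares slope of the
    moment-displacement curve (moment units per unit displacement)."""
    n = len(displacements)
    mx = sum(displacements) / n
    my = sum(moments) / n
    sxx = sum((x - mx) ** 2 for x in displacements)
    sxy = sum((x - mx) * (y - my) for x, y in zip(displacements, moments))
    return sxy / sxx

def moment_from_load(force, a):
    """In 4-point bending the span between the inner loading points
    carries a constant moment M = F*a/2, where a is the distance from
    an outer support to the nearer inner loading point."""
    return force * a / 2.0

# Perfectly linear toy data: 5 N*mm of moment per 0.1 mm of displacement
k = bending_stiffness([0.0, 0.1, 0.2, 0.3], [0.0, 5.0, 10.0, 15.0])
```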

  4. An Improved Statistical Point-source Foreground Model for the Epoch of Reionization

    NASA Astrophysics Data System (ADS)

    Murray, S. G.; Trott, C. M.; Jordan, C. H.

    2017-08-01

    We present a sophisticated statistical point-source foreground model for low-frequency radio Epoch of Reionization (EoR) experiments using the 21 cm neutral hydrogen emission line. Motivated by our understanding of the low-frequency radio sky, we enhance the realism of two model components compared with existing models: the source count distribution as a function of flux density, and the spatial distribution of sources (source clustering), extending current formalisms for the foreground covariance of 2D power-spectral modes in 21 cm EoR experiments. The former we generalize to an arbitrarily broken power law, and the latter to an arbitrary isotropically correlated field. This paper presents expressions for the modified covariance under these extensions and shows that, for a more realistic source spatial distribution, extra covariance arises in the EoR window that was previously unaccounted for. Failure to include this contribution can bias the final power spectrum and underestimate uncertainties, potentially leading to a false detection of signal. The extent of this effect is uncertain, owing to ignorance of physical model parameters, but we show that it depends on the relative abundance of faint sources, such that our extension will become more important for future deep surveys. Finally, we show that under some parameter choices, ignoring source clustering can lead to false detections on large scales, due to both the induced bias and an artificial reduction in the estimated measurement uncertainty.

  5. 3D modeling of building indoor spaces and closed doors from imagery and point clouds.

    PubMed

    Díaz-Vilariño, Lucía; Khoshelham, Kourosh; Martínez-Sánchez, Joaquín; Arias, Pedro

    2015-02-03

    3D models of indoor environments are increasingly gaining importance due to the wide range of applications in which they can be used: from redesign and visualization to monitoring and simulation. These models usually exist only for newly constructed buildings; therefore, the development of automatic approaches for reconstructing 3D indoor models from imagery and/or point clouds can make the process easier, faster and cheaper. Among the constructive elements defining a building interior, doors are very common, and their detection can be very useful for understanding the environment structure, performing efficient navigation, or planning appropriate evacuation routes. The fact that doors are topologically connected to walls by being coplanar, together with the unavoidable presence of clutter and occlusions indoors, increases the inherent complexity of automating the recognition process. In this work, we present a pipeline of techniques for the reconstruction and interpretation of building interiors based on point clouds and images. The methodology analyses the visibility problem of indoor environments and goes into depth on door candidate detection. The presented approach is tested on real data sets, showing its potential with a high door detection rate and applicability to robust and efficient envelope reconstruction.

  6. Saddle point localization of molecular wavefunctions.

    PubMed

    Mellau, Georg Ch; Kyuberis, Alexandra A; Polyansky, Oleg L; Zobov, Nikolai; Field, Robert W

    2016-09-15

    The quantum mechanical description of isomerization is based on bound eigenstates of the molecular potential energy surface. For the near-minimum regions there is a textbook-based relationship between the potential and eigenenergies. Here we show how the saddle point region that connects the two minima is encoded in the eigenstates of the model quartic potential and in the energy levels of the [H, C, N] potential energy surface. We model the spacing of the eigenenergies with the energy dependent classical oscillation frequency decreasing to zero at the saddle point. The eigenstates with the smallest spacing are localized at the saddle point. The analysis of the HCN ↔ HNC isomerization states shows that the eigenstates with small energy spacing relative to the effective (v1, v3, ℓ) bending potentials are highly localized in the bending coordinate at the transition state. These spectroscopically detectable states represent a chemical marker of the transition state in the eigenenergy spectrum. The method developed here provides a basis for modeling characteristic patterns in the eigenenergy spectrum of bound states.

  7. A spatial model to aggregate point-source and nonpoint-source water-quality data for large areas

    USGS Publications Warehouse

    White, D.A.; Smith, R.A.; Price, C.V.; Alexander, R.B.; Robinson, K.W.

    1992-01-01

    More objective and consistent methods are needed to assess water quality for large areas. A spatial model is described that capitalizes on the topologic relationships among spatial entities to aggregate pollution sources over upstream drainage areas, and that can be applied to land surfaces with heterogeneous water-pollution effects. An infrastructure of stream networks and drainage basins, derived from 1:250,000-scale digital elevation models, defines the hydrologic system in this spatial model. The spatial relationships between point and nonpoint pollution sources and measurement locations are referenced to the hydrologic infrastructure with the aid of a geographic information system. A maximum-branching algorithm has been developed to simulate the effects of distance from a pollutant source to an arbitrary downstream location, a function traditionally employed in deterministic water-quality models.
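
    Aggregating upstream sources over a hydrologic network, as described above, can be sketched as a recursive accumulation over the reach topology with an optional first-order distance-decay term. This is an illustrative scheme with invented reach names, not the paper's maximum-branching algorithm.

```python
import math

def accumulate_downstream(loads, downstream, lengths, k=0.0):
    """Accumulate pollutant loads through an acyclic stream network.

    loads      : {reach: locally injected load}
    downstream : {reach: next reach downstream} (absent key = outlet)
    lengths    : {reach: reach length, km} (also the full reach set)
    k          : first-order decay rate per km (k=0 -> pure summation)

    Returns {reach: load leaving that reach}, attenuating each
    contribution by exp(-k * length) as it traverses a reach.
    """
    out = {}

    def load_out(r):
        if r not in out:
            inflow = sum(load_out(u) for u, d in downstream.items() if d == r)
            out[r] = (loads.get(r, 0.0) + inflow) * math.exp(-k * lengths[r])
        return out[r]

    for r in lengths:
        load_out(r)
    return out

# A simple Y-shaped network: reaches A and B both drain into C
flux = accumulate_downstream({"A": 10.0, "B": 5.0, "C": 2.0},
                             {"A": "C", "B": "C"},
                             {"A": 1.0, "B": 1.0, "C": 1.0})
```

    With k = 0 the outlet simply sums all upstream loads; a positive k mimics distance-dependent attenuation between source and measurement location.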

  8. Study of texture stitching in 3D modeling of lidar point cloud based on per-pixel linear interpolation along loop line buffer

    NASA Astrophysics Data System (ADS)

    Xu, Jianxin; Liang, Hong

    2013-07-01

    Terrestrial laser scanning creates a point cloud composed of thousands or millions of 3D points. Through pre-processing, TIN generation and texture mapping, a 3D model of a real object is obtained. When the object is too large, it is separated into parts. This paper focuses on the problem of uneven gray levels where two adjacent textures meet. A new algorithm, per-pixel linear interpolation along a loop-line buffer, is presented. The experimental data derive from a point cloud of the stone lion in front of the west gate of Henan Polytechnic University. The modeling workflow consists of three parts: the large object is separated into two parts, each part is modeled, and the whole 3D model of the stone lion is then assembled from the two part models. When the two part models are combined, an obvious fissure line appears in the overlapping section of the two adjacent textures. Some researchers reduce the brightness of all pixels in the two adjacent textures, but such algorithms are of limited effect and the fissure line remains. The algorithm presented here treats the gray unevenness of the two adjacent textures directly: the fissure line in the overlapping textures is eliminated, and the gray transition across the overlap becomes smooth.
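
    The per-pixel linear interpolation across the overlap can be sketched for a single row of pixels. The loop-line buffer geometry of the actual algorithm is simplified here to a straight overlap strip, and the gray values are invented.

```python
def blend_overlap(row_a, row_b):
    """Per-pixel linear interpolation across an overlap buffer.

    row_a, row_b : gray values of the same pixels as seen in the two
    adjacent textures. The weight ramps from 1 (pure texture A) at the
    A-side edge of the buffer to 0 at the B-side edge, so the two
    textures meet without a visible fissure line.
    """
    n = len(row_a)
    blended = []
    for i, (a, b) in enumerate(zip(row_a, row_b)):
        t = i / (n - 1)                    # 0 at A-side edge, 1 at B-side
        blended.append((1.0 - t) * a + t * b)
    return blended

# Texture A is systematically brighter than B inside the overlap:
row = blend_overlap([200.0] * 5, [180.0] * 5)
```

    The blended row transitions monotonically from the A-side gray level to the B-side one, which is exactly the smooth gray transition the abstract describes.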

  9. A statistical model investigating the prevalence of tuberculosis in New York City using counting processes with two change-points

    PubMed Central

    ACHCAR, J. A.; MARTINEZ, E. Z.; RUFFINO-NETTO, A.; PAULINO, C. D.; SOARES, P.

    2008-01-01

    SUMMARY: We consider a Bayesian analysis of the prevalence of tuberculosis cases in New York City from 1970 to 2000. This counting dataset presented two change-points during the period. We modelled the dataset using non-homogeneous Poisson processes in the presence of the two change-points. A Bayesian analysis of the data is carried out using Markov chain Monte Carlo methods, with simulated Gibbs samples for the parameters of interest obtained using the WinBUGS software. PMID:18346287
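
    A non-homogeneous Poisson process with two change-points and a piecewise-constant intensity has a simple log-likelihood, sketched below with invented event times. The paper's Bayesian model, fitted by MCMC in WinBUGS, places priors on these parameters rather than evaluating this function directly.

```python
import math

def nhpp_loglik(event_times, T, tau1, tau2, lam):
    """Log-likelihood of an NHPP observed on [0, T] with a
    piecewise-constant intensity and two change-points.

    lam = (lam1, lam2, lam3): rates on [0, tau1), [tau1, tau2)
    and [tau2, T]. This is an illustrative parameterization only.
    """
    bounds = [0.0, tau1, tau2, T]
    ll = 0.0
    for j in range(3):
        n_j = sum(bounds[j] <= t < bounds[j + 1] for t in event_times)
        ll += n_j * math.log(lam[j]) - lam[j] * (bounds[j + 1] - bounds[j])
    return ll

# Events cluster between the two change-points at t = 4 and t = 6:
events = [1.2, 4.1, 4.5, 4.9, 5.5, 9.0]
ll = nhpp_loglik(events, 10.0, 4.0, 6.0, (0.25, 2.0, 0.25))
```

    A rate vector with an elevated middle segment fits clustered events better than a single flat rate, which is what makes change-point locations identifiable.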

  10. Comparisons of Satellite Soil Moisture, an Energy Balance Model Driven by LST Data and Point Measurements

    NASA Astrophysics Data System (ADS)

    Laiolo, Paola; Gabellani, Simone; Rudari, Roberto; Boni, Giorgio; Puca, Silvia

    2013-04-01

    Soil moisture plays a fundamental role in the partitioning of mass and energy fluxes between the land surface and the atmosphere, thereby influencing climate and weather, and it is important in determining the rainfall-runoff response of catchments; moreover, in hydrological modelling and flood forecasting, a correct definition of moisture conditions is a key factor for accurate predictions. Different sources of information for the estimation of the soil moisture state are currently available: satellite data, point measurements and model predictions. All are affected by intrinsic uncertainty. Among the satellite sensors that can be used for soil moisture estimation, three major groups can be distinguished: passive microwave sensors (e.g., SSMI), active sensors (e.g., SAR, scatterometers), and optical sensors (e.g., spectroradiometers). The last two families, mainly because of their temporal and spatial resolution, seem the most suitable for hydrological applications. In this work, soil moisture point measurements from 10 sensors on Italian territory are compared with satellite products, both the HSAF project SM-OBS-2 product, derived from the ASCAT scatterometer, and ACHAB, an operational energy balance model that assimilates LST data derived from MSG and provides a daily evaporative-fraction index related to soil moisture content for the whole Italian region. Distributed comparisons of ACHAB and SM-OBS-2 over the whole Italian territory are also performed.

  11. Deterrent effects of demerit points and license sanctions on drivers' traffic law violations using a proportional hazard model.

    PubMed

    Lee, Jaeyeong; Park, Byung-Jung; Lee, Chungwon

    2018-04-01

    Current traffic law enforcement places an emphasis on reducing accident risk from human factors such as drunk driving and speeding. Among the various strategies implemented, demerit point and license sanction systems have been widely used as punitive and educational measures. Limitations, however, exist in previous studies in terms of estimating the interaction effects of demerit points and license sanctions. To overcome these limitations, this work focused on identifying the interaction effects of demerit points and license sanctions on drivers' traffic violation behavior. The interaction deterrent effects were assessed using a Cox proportional hazard model to provide a more accurate and unbiased estimation. For this purpose, five years of driver conviction data were obtained from the Korea National Police Agency (KNPA). The data included personal characteristics, demerit point accumulation and license sanction status. The analysis showed that accumulated demerit points had specific deterrent effects. Additionally, license revocation showed consistent and significant deterrent effects, greater than those of suspension. Male drivers under 30 holding a motorcycle license were identified as the most violation-prone driver group, suggesting that stricter testing for the acquisition of a motorcycle driver's license is needed. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. Hydrolytic degradation and morphologic study of poly-p-dioxanone.

    PubMed

    Lin, H L; Chu, C C; Grubb, D

    1993-02-01

    The in vitro hydrolytic degradation of size 2-0 PDS monofilament suture was studied for the purpose of revealing its morphologic structure and degradation mechanism. The sutures were immersed in phosphate buffer at pH 7.44 for up to 120 days at 37 °C. The hydrolyzed sutures were examined for changes in tensile properties, weight, thermal properties, x-ray diffraction structure, surface morphology, and dye diffusion phenomena. It was found that hydrolysis had significant effects on PDS fiber morphology and properties. Hydrolysis, however, had no significant effect on the overall molecular orientation of the fiber until the very late stage. PDS suture fibers retained their skeleton throughout the earlier periods of hydrolysis, concurrent with mass and tensile strength losses. PDS sutures exhibited an absorption delay of 120 days. Both the heat of fusion and the melting point exhibited a maximum as a function of hydrolysis time. Hydrolysis of PDS suture fibers proceeded through two stages: random scission of chain segments located in the amorphous regions of microfibrils and the intermicrofibrillar space, followed by stepwise scission of chain segments located in the crystalline regions of microfibrils. Dye diffusion data showed that passage along the longitudinal direction of the fiber was easier than along the lateral direction, as evident in the diffusion coefficient, activation energy, and flexibility of chain segments. A Swiss-cheese model of the fiber structure appears to describe the observed dye diffusion phenomena and their dependence on hydrolysis time and dyeing temperature.

  12. Accuracy assessment of building point clouds automatically generated from iPhone images

    NASA Astrophysics Data System (ADS)

    Sirmacek, B.; Lindenbergh, R.

    2014-06-01

    Low-cost sensor generated 3D models can be useful for quick 3D urban model updating, yet the quality of the models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method using multi-view iPhone images or an iPhone video file as input. We register such an automatically generated point cloud to a TLS point cloud of the same object to discuss the accuracy, advantages and limitations of iPhone-generated point clouds. For the chosen example showcase, we have classified 1.23% of the iPhone point cloud points as outliers, and calculated the mean of the point-to-point distances to the TLS point cloud as 0.11 m. Since a TLS point cloud might also include measurement errors and noise, we computed local noise values for the point clouds from both sources. The mean (μ) and standard deviation (σ) of the roughness histograms are (μ1 = 0.44 m, σ1 = 0.071 m) and (μ2 = 0.025 m, σ2 = 0.037 m) for the iPhone and TLS point clouds, respectively. Our experimental results indicate the possible use of the proposed automatic 3D model generation framework for 3D urban map updating, fusion and detail enhancement, and for quick and real-time change detection. However, further insight is needed into the circumstances required to guarantee a successful point cloud generation from smartphone images.
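
    The outlier percentage and mean point-to-point distance reported above can be reproduced in miniature with a brute-force nearest-neighbour comparison (a sketch on synthetic clouds; the 1.0 m outlier threshold and all coordinates are assumptions, not the authors' values):

```python
import numpy as np

rng = np.random.default_rng(0)
tls = rng.uniform(0.0, 10.0, size=(2000, 3))            # reference TLS cloud (toy)
phone = tls[:500] + rng.normal(0.0, 0.1, (500, 3))      # noisy "iPhone" cloud
outliers = rng.uniform(20.0, 30.0, (6, 3))              # a few gross outliers
phone = np.vstack([phone, outliers])

# Brute-force nearest-neighbour distance from each phone point to the TLS cloud.
d = np.linalg.norm(phone[:, None, :] - tls[None, :, :], axis=2).min(axis=1)

threshold = 1.0                                         # assumed outlier cut-off (m)
outlier_pct = 100.0 * (d > threshold).mean()
mean_dist = d[d <= threshold].mean()
print(f"outliers: {outlier_pct:.2f}%, mean point-to-point distance: {mean_dist:.3f} m")
```

    For clouds of realistic size, the pairwise distance matrix would be replaced by a k-d tree nearest-neighbour query.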

  14. Enhancing multiple-point geostatistical modeling: 1. Graph theory and pattern adjustment

    NASA Astrophysics Data System (ADS)

    Tahmasebi, Pejman; Sahimi, Muhammad

    2016-03-01

    In recent years, higher-order geostatistical methods have been used for modeling a wide variety of large-scale porous media, such as groundwater aquifers and oil reservoirs. Their popularity stems from their ability to account for qualitative data and the great flexibility that they offer for conditioning the models to hard (quantitative) data, which endow them with the capability of generating realistic realizations of porous formations with very complex channels, as well as features that act mainly as barriers to fluid flow. One group of such models consists of pattern-based methods that use a set of data points for generating stochastic realizations by which the large-scale structure and highly-connected features are reproduced accurately. The cross correlation-based simulation (CCSIM) algorithm, proposed previously by the authors, is a member of this group that has been shown to be capable of simulating multimillion cell models in a matter of a few CPU seconds. The method is, however, sensitive to the patterns' specifications, such as boundaries and the number of replicates. In this paper the original CCSIM algorithm is reconsidered and two significant improvements are proposed for accurately reproducing large-scale patterns of heterogeneities in porous media. First, an effective boundary-correction method based on graph theory is presented, by which one identifies the optimal cutting path/surface for removing the patchiness and discontinuities in the realization of a porous medium. Next, a new pattern adjustment method is proposed that automatically transfers the features in a pattern to one that seamlessly matches the surrounding patterns. The original CCSIM algorithm is then combined with the two methods and is tested using various complex two- and three-dimensional examples. It should, however, be emphasized that the methods that we propose in this paper are applicable to other pattern-based geostatistical simulation methods.
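
    The graph-theoretic boundary correction is, in spirit, a minimal-cost cutting path through the overlap-error surface between neighbouring patterns. A dynamic-programming sketch of such a cut (a simplified stand-in for the authors' method; the error map is invented):

```python
import numpy as np

def min_error_cut(err):
    """Dynamic-programming minimal-cost vertical path through an error map,
    analogous to an optimal cutting path that hides patch boundaries."""
    h, w = err.shape
    cost = err.copy()
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            cost[i, j] += cost[i - 1, lo:hi].min()
    # Backtrack the cheapest path from the bottom row to the top.
    path = [int(np.argmin(cost[-1]))]
    for i in range(h - 2, -1, -1):
        j = path[-1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        path.append(lo + int(np.argmin(cost[i, lo:hi])))
    return path[::-1]

overlap_error = np.array([[3, 1, 4, 2],
                          [2, 0, 5, 3],
                          [4, 1, 0, 2],
                          [3, 2, 1, 0]], dtype=float)
print(min_error_cut(overlap_error))
```

    The returned column indices trace a connected seam (adjacent rows differ by at most one column) along which two overlapping patterns can be stitched with minimal mismatch.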

  15. Analysis of a New Variational Model to Restore Point-Like and Curve-Like Singularities in Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aubert, Gilles, E-mail: gaubert@unice.fr; Blanc-Feraud, Laure, E-mail: Laure.Blanc-Feraud@inria.fr; Graziani, Daniele, E-mail: Daniele.Graziani@inria.fr

    2013-02-15

    The paper is concerned with the analysis of a new variational model to restore point-like and curve-like singularities in biological images. To this aim, we investigate the variational properties of a suitable energy which governs these pathologies. Finally, in order to carry out numerical experiments, we minimize, in the discrete setting, a regularized version of this functional by a fast gradient descent scheme.

  16. Low cost digital photogrammetry: From the extraction of point clouds by SFM technique to 3D mathematical modeling

    NASA Astrophysics Data System (ADS)

    Michele, Mangiameli; Giuseppe, Mussumeci; Salvatore, Zito

    2017-07-01

    The Structure From Motion (SFM) technique, applied to a series of photographs of an object, returns a 3D reconstruction made up of points in space (a point cloud). This research aims at comparing the results of the SFM approach with the results of 3D laser scanning in terms of density and accuracy of the model. The experiment was conducted by surveying several architectural elements (walls and portals of historical buildings) with both a 3D laser scanner of the latest generation and an amateur photographic camera. The point clouds acquired by the laser scanner and those acquired by the photo camera have been systematically compared. In particular, we present the experience carried out on the "Don Diego Pappalardo Palace" site in Pedara (Catania, Sicily).

  17. Statistical Aspects of Point Count Sampling

    Treesearch

    Richard J. Barker; John R. Sauer

    1995-01-01

    The dominant feature of point counts is that they do not census birds, but instead provide incomplete counts of individuals present within a survey plot. Considering a simple model for point count sampling, we demonstrate that use of these incomplete counts can bias estimators and testing procedures, leading to inappropriate conclusions. A large portion of the...
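
    The bias introduced by incomplete counts can be seen with a few lines of simulation (a toy sketch; the abundance of 50 birds and detection probability of 0.6 are hypothetical):

```python
import random

random.seed(7)
N_true, p_detect = 50, 0.6   # birds present and per-bird detection probability
# 1000 simulated point counts, each an incomplete Binomial(N_true, p_detect) tally.
counts = [sum(random.random() < p_detect for _ in range(N_true))
          for _ in range(1000)]
naive = sum(counts) / len(counts)   # biased: estimates p_detect * N_true
adjusted = naive / p_detect         # unbiased only if p_detect is known
print(f"naive = {naive:.1f}, adjusted = {adjusted:.1f}, truth = {N_true}")
```

    The naive mean count systematically underestimates true abundance by the factor p_detect, which is the kind of estimator bias the authors warn about; in practice p_detect is unknown and must itself be estimated.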

  18. Backward deletion to minimize prediction errors in models from factorial experiments with zero to six center points

    NASA Technical Reports Server (NTRS)

    Holms, A. G.

    1980-01-01

    Population model coefficients were chosen to simulate a saturated 2^4 fixed-effects experiment having an unfavorable distribution of relative values. Using random number studies, deletion strategies were compared that were based on the F-distribution, on an order statistics distribution of Cochran's, and on a combination of the two. The strategies were compared under the criterion of minimizing the maximum prediction error, wherever it occurred, among the two-level factorial points. The strategies were evaluated for each of the conditions of 0, 1, 2, 3, 4, 5, or 6 center points. Three classes of strategies were identified as being appropriate, depending on the extent of the experimenter's prior knowledge. In almost every case the best strategy was found to be unique according to the number of center points. Among the three classes of strategies, a security regret class of strategy was demonstrated as being widely useful in that over a range of coefficients of variation from 4 to 65%, the maximum predictive error was never increased by more than 12% over what it would have been if the best strategy had been used for the particular coefficient of variation. The relative efficiency of the experiment, when using the security regret strategy, was examined as a function of the number of center points, and was found to be best when the design used one center point.

  19. Pulse radiolysis in model studies toward radiation processing

    NASA Astrophysics Data System (ADS)

    Von Sonntag, C.; Bothe, E.; Ulanski, P.; Deeble, D. J.

    1995-02-01

    Using the pulse radiolysis technique, the OH-radical-induced reactions of poly(vinyl alcohol) (PVAL), poly(acrylic acid) (PAA), poly(methacrylic acid) (PMA), and hyaluronic acid have been investigated in dilute aqueous solution. The reactions of the free-radical intermediates were followed by UV spectroscopy and low-angle laser light scattering; the scission of the charged polymers was also monitored by conductometry. For more detailed product studies, model systems such as 2,4-dihydroxypentane (for PVAL) and 2,4-dimethylglutaric acid (for PAA) were also investigated. With PVAL, OH radicals react predominantly by abstraction of an H-atom in the α-position to the hydroxyl group (70%). The observed bimolecular decay rate constant of the PVAL radicals decreases with time. This has been interpreted as being due to an initially fast decay of proximate radicals and a decrease in the probability of such encounters with time. Intramolecular crosslinking (loop formation) predominates at high doses per pulse. In the presence of O2, peroxyl radicals are formed which, in the case of the α-hydroxyperoxyl radicals, can eliminate HO2 radicals in competition with bimolecular decay processes that lead to a fragmentation of the polymer. In PAA, radicals in the α-position (characterized by an absorption near 300 nm) and in the β-position to the carboxylate groups are formed in an approximately 1:2 ratio. The lifetime of the radicals increases with increasing electrolytic dissociation of the polymer. The β-radicals undergo a slow (intra- as well as intermolecular) H-abstraction yielding α-radicals, in competition with crosslinking and scission reactions. In PMA only β-radicals are formed. Their fragmentation has been followed by conductometry. In hyaluronic acid, considerable fragmentation is observed even in the absence of oxygen which, in fact, has some protective effect against this process. Thus free-radical attack on this important biopolymer makes it especially vulnerable with respect

  20. Economic-environmental modeling of point source pollution in Jefferson County, Alabama, USA.

    PubMed

    Kebede, Ellene; Schreiner, Dean F; Huluka, Gobena

    2002-05-01

    This paper uses an integrated economic-environmental model to assess point source pollution from major industries in Jefferson County, Northern Alabama. Industrial expansion generates employment, income, and tax revenue for the public sector; however, it is also often associated with the discharge of chemical pollutants. Jefferson County is one of the largest industrial counties in Alabama and experienced smog warnings and elevated ambient ozone concentrations from 1996 to 1999. Past studies of chemical discharge from industries have used models to assess the pollution impact of individual plants. This study, however, uses an extended Input-Output (I-O) economic model with pollution emission coefficients to assess direct and indirect pollutant emissions for several major industries in Jefferson County. The major findings of the study are: (a) the principal emissions by the selected industries are volatile organic compounds (VOC), and these contribute to the ambient ozone concentration; (b) the combined direct and indirect emissions are significantly higher than the direct emissions alone for some industries, indicating that an isolated analysis will underestimate an industry's emissions; (c) industries with low emission coefficients may appear preferable choices, yet they may emit the most hazardous chemicals. This study is limited by the assumptions made and by data availability; however, it provides a useful analytical tool for direct and cumulative emission estimation and generates insights into the complexity of the choice of industries.
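
    The extended Input-Output accounting described above follows the standard Leontief formulation x = (I - A)^-1 d, with emissions obtained by applying per-output emission coefficients to total rather than final output. A sketch with invented coefficients (not the Jefferson County data):

```python
import numpy as np

# Invented 3-sector technical coefficient matrix A and final demand d ($M).
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.10]])
d = np.array([100.0, 150.0, 80.0])

# Leontief: total output x satisfies x = A x + d, i.e. x = (I - A)^-1 d.
x = np.linalg.solve(np.eye(3) - A, d)

# Invented emission coefficients (tons VOC per $M of sector output).
e = np.array([0.8, 0.3, 1.2])
direct = e * d      # emissions counted against final demand only
total = e * x       # direct plus indirect emissions via inter-industry links
print("total output:", x.round(1))
print("direct vs total VOC:", direct.round(1), "vs", total.round(1))
```

    Because total output always exceeds final demand in a productive economy, the total-emission figures exceed the direct ones, which is exactly why an isolated, plant-level analysis underestimates emissions.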

  1. Robust Brain-Machine Interface Design Using Optimal Feedback Control Modeling and Adaptive Point Process Filtering

    PubMed Central

    Carmena, Jose M.

    2016-01-01

    Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain’s behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user’s motor intention during CLDA—a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to

  2. A Point Kinetics Model for Estimating Neutron Multiplication of Bare Uranium Metal in Tagged Neutron Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.

    An extension of the point kinetics model is developed in this paper to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. Finally, the spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.

  3. A Point Kinetics Model for Estimating Neutron Multiplication of Bare Uranium Metal in Tagged Neutron Measurements

    DOE PAGES

    Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.

    2017-06-13

    An extension of the point kinetics model is developed in this paper to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. Finally, the spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.
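
    For orientation, the basic point-kinetics relation linking a subcritical assembly's effective multiplication factor to its total neutron multiplication is M = 1/(1 - k_eff); the paper's extended model refines this picture. A minimal sketch (the k_eff values below are illustrative only):

```python
# Standard point-kinetics relation for a subcritical assembly:
# total neutron multiplication M = 1 / (1 - k_eff).
def total_multiplication(k_eff):
    if not 0.0 <= k_eff < 1.0:
        raise ValueError("valid only for subcritical assemblies (0 <= k_eff < 1)")
    return 1.0 / (1.0 - k_eff)

# Illustrative k_eff values spanning depleted to highly enriched uranium metal.
for k in (0.3, 0.6, 0.9):
    print(f"k_eff = {k:.1f} -> M = {total_multiplication(k):.2f}")
```

    M diverges as k_eff approaches 1, which is why small systematic biases in the multiplicity constants translate into large errors for high-multiplication objects.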

  4. Structural analysis and modeling reveals new mechanisms governing ESCRT-III spiral filament assembly

    PubMed Central

    Shen, Qing-Tao; Schuh, Amber L.; Zheng, Yuqing; Quinney, Kyle; Wang, Lei; Hanna, Michael; Mitchell, Julie C.; Otegui, Marisa S.; Ahlquist, Paul; Cui, Qiang

    2014-01-01

    The scission of biological membranes is facilitated by a variety of protein complexes that bind and manipulate lipid bilayers. ESCRT-III (endosomal sorting complex required for transport III) filaments mediate membrane scission during the ostensibly disparate processes of multivesicular endosome biogenesis, cytokinesis, and retroviral budding. However, mechanisms by which ESCRT-III subunits assemble into a polymer remain unknown. Using cryogenic electron microscopy (cryo-EM), we found that the full-length ESCRT-III subunit Vps32/CHMP4B spontaneously forms single-stranded spiral filaments. The resolution afforded by two-dimensional cryo-EM combined with molecular dynamics simulations revealed that individual Vps32/CHMP4B monomers within a filament are flexible and able to accommodate a range of bending angles. In contrast, the interface between monomers is stable and refractory to changes in conformation. We additionally found that the carboxyl terminus of Vps32/CHMP4B plays a key role in restricting the lateral association of filaments. Our findings highlight new mechanisms by which ESCRT-III filaments assemble to generate a unique polymer capable of membrane remodeling in multiple cellular contexts. PMID:25202029

  5. Moduli of quantum Riemannian geometries on <=4 points

    NASA Astrophysics Data System (ADS)

    Majid, S.; Raineri, E.

    2004-12-01

    We classify parallelizable noncommutative manifold structures on finite sets of small size in the general formalism of framed quantum manifolds and vielbeins introduced previously [S. Majid, Commun. Math. Phys. 225, 131 (2002)]. The full moduli space is found for ⩽3 points, and a restricted moduli space for 4 points. Generalized Levi-Civita connections and their curvatures are found for a variety of models, including models of a discrete torus. The topological part of the moduli space is found for ⩽9 points based on the known atlas of regular graphs. We also remark on aspects of quantum gravity in this approach.

  6. Leading bureaucracies to the tipping point: An alternative model of multiple stable equilibrium levels of corruption

    PubMed Central

    Caulkins, Jonathan P.; Feichtinger, Gustav; Grass, Dieter; Hartl, Richard F.; Kort, Peter M.; Novak, Andreas J.; Seidl, Andrea

    2013-01-01

    We present a novel model of corruption dynamics in the form of a nonlinear optimal dynamic control problem. It has a tipping point, but one whose origins and character are distinct from that in the classic Schelling (1978) model. The decision maker choosing a level of corruption is the chief or some other kind of authority figure who presides over a bureaucracy whose state of corruption is influenced by the authority figure’s actions, and whose state in turn influences the pay-off for the authority figure. The policy interpretation is somewhat more optimistic than in other tipping models, and there are some surprising implications, notably that reforming the bureaucracy may be of limited value if the bureaucracy takes its cues from a corrupt leader. PMID:23565027

  7. Leading bureaucracies to the tipping point: An alternative model of multiple stable equilibrium levels of corruption.

    PubMed

    Caulkins, Jonathan P; Feichtinger, Gustav; Grass, Dieter; Hartl, Richard F; Kort, Peter M; Novak, Andreas J; Seidl, Andrea

    2013-03-16

    We present a novel model of corruption dynamics in the form of a nonlinear optimal dynamic control problem. It has a tipping point, but one whose origins and character are distinct from that in the classic Schelling (1978) model. The decision maker choosing a level of corruption is the chief or some other kind of authority figure who presides over a bureaucracy whose state of corruption is influenced by the authority figure's actions, and whose state in turn influences the pay-off for the authority figure. The policy interpretation is somewhat more optimistic than in other tipping models, and there are some surprising implications, notably that reforming the bureaucracy may be of limited value if the bureaucracy takes its cues from a corrupt leader.

  8. An equilibrium-point model for fast, single-joint movement: I. Emergence of strategy-dependent EMG patterns.

    PubMed

    Latash, M L; Gottlieb, G L

    1991-09-01

    We describe a model for the regulation of fast, single-joint movements, based on the equilibrium-point hypothesis. Limb movement follows constant rate shifts of independently regulated neuromuscular variables. The independently regulated variables are tentatively identified as thresholds of a length sensitive reflex for each of the participating muscles. We use the model to predict EMG patterns associated with changes in the conditions of movement execution, specifically, changes in movement times, velocities, amplitudes, and moments of limb inertia. The approach provides a theoretical neural framework for the dual-strategy hypothesis, which considers certain movements to be results of one of two basic, speed-sensitive or speed-insensitive strategies. This model is advanced as an alternative to pattern-imposing models based on explicit regulation of timing and amplitudes of signals that are explicitly manifest in the EMG patterns.
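
    The model's core idea, limb motion emerging from a constant-rate shift of an independently regulated threshold, can be sketched as a damped joint driven by a ramping equilibrium point (a toy simulation; stiffness, damping and inertia values are invented, not the authors' parameters):

```python
import numpy as np

# Equilibrium-point sketch: torque arises from the gap between the joint
# angle and a centrally shifted threshold lambda(t), ramped at constant rate.
k, b, I = 5.0, 0.5, 0.1            # invented stiffness (Nm/rad), damping, inertia
dt = 0.001
t = np.arange(0.0, 1.0, dt)
lam = np.clip(t / 0.2, 0.0, 1.0)   # threshold ramps to 1 rad over 200 ms
theta, omega = 0.0, 0.0
angles = []
for lam_t in lam:
    torque = -k * (theta - lam_t) - b * omega
    omega += torque / I * dt       # semi-implicit Euler integration
    theta += omega * dt
    angles.append(theta)
print(f"final angle = {angles[-1]:.3f} rad (equilibrium target 1.0 rad)")
```

    The limb settles at the final threshold value without any explicit trajectory or EMG pattern being programmed, which is the sense in which EMG patterns "emerge" in the equilibrium-point view.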

  9. Fragmentation approach to the point-island model with hindered aggregation: Accessing the barrier energy

    NASA Astrophysics Data System (ADS)

    González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.

    2017-07-01

    We study the effect of hindered aggregation on the island formation process in one-dimensional (1D) and two-dimensional (2D) point-island models for epitaxial growth with arbitrary critical nucleus size i. In our model, the attachment of monomers to preexisting islands is hindered by an additional attachment barrier, characterized by a length la. For la=0 the islands behave as perfect sinks, while for la→∞ they behave as reflecting boundaries. For intermediate values of la, the system exhibits a crossover between two different kinds of processes, diffusion-limited aggregation and attachment-limited aggregation. We calculate the growth exponents of the density of islands and monomers for the low-coverage and aggregation regimes. The capture-zone (CZ) distributions are also calculated for different values of i and la. In order to obtain a good spatial description of the nucleation process, we propose a fragmentation model based on an approximate description of nucleation inside the gaps for 1D and the CZs for 2D. In both cases, the nucleation is described by using two different physically rooted probabilities, which are related to the microscopic parameters of the model (i and la). We test our analytical model with extensive numerical simulations and previously established results. The proposed model describes the statistical behavior of the system excellently for arbitrary values of la and i = 1, 2, and 3.

  10. Estimating Function Approaches for Spatial Point Processes

    NASA Astrophysics Data System (ADS)

    Deng, Chong

    Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization from a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information due to ignoring the correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theories, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives that balance the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation and estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach on fitting

  11. Point spread function modeling and image restoration for cone-beam CT

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Huang, Kui-Dong; Shi, Yi-Kai; Xu, Zhe

    2015-03-01

    X-ray cone-beam computed tomography (CT) has notable features such as high efficiency and precision, and is widely used in the fields of medical imaging and industrial non-destructive testing, but inherent imaging degradation reduces the quality of CT images. Aimed at the problems of projection image degradation and restoration in cone-beam CT, a point spread function (PSF) modeling method is proposed first. The general PSF model of cone-beam CT is established, and based on it, the PSF under arbitrary scanning conditions can be calculated directly for projection image restoration without additional measurement, which greatly improves the application convenience of cone-beam CT. Secondly, a projection image restoration algorithm based on pre-filtering and pre-segmentation is proposed, which makes the edge contours in projection images and slice images clearer after restoration, and keeps the noise at a level equivalent to that of the original images. Finally, experiments verified the feasibility and effectiveness of the proposed methods. Supported by National Science and Technology Major Project of the Ministry of Industry and Information Technology of China (2012ZX04007021), Young Scientists Fund of National Natural Science Foundation of China (51105315), Natural Science Basic Research Program of Shaanxi Province of China (2013JM7003) and Northwestern Polytechnical University Foundation for Fundamental Research (JC20120226, 3102014KYJD022)
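
    Once a PSF is known, projection image restoration is a deconvolution problem. As a generic illustration (a 1-D Richardson-Lucy sketch on synthetic data, not the authors' pre-filtering/pre-segmentation algorithm):

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=50):
    """Classic 1-D Richardson-Lucy restoration (toy illustration)."""
    psf_mirror = psf[::-1]
    est = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        conv = np.convolve(est, psf, mode="same")
        ratio = blurred / np.maximum(conv, 1e-12)
        est = est * np.convolve(ratio, psf_mirror, mode="same")
    return est

# A sharp synthetic "projection" (two edges) blurred by a Gaussian PSF.
x = np.zeros(64)
x[20:40] = 1.0
t = np.arange(-5, 6, dtype=float)
psf = np.exp(-t**2 / 4.0)
psf /= psf.sum()
blurred = np.convolve(x, psf, mode="same")
restored = richardson_lucy(blurred, psf)
print("max edge gradient, blurred vs restored:",
      np.abs(np.diff(blurred)).max().round(3),
      np.abs(np.diff(restored)).max().round(3))
```

    The restored signal recovers much steeper edge gradients than the blurred input, which parallels the sharper edge contours the authors report after restoration.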

  12. Modeling of lipase catalyzed ring-opening polymerization of epsilon-caprolactone.

    PubMed

    Sivalingam, G; Madras, Giridhar

    2004-01-01

    Enzymatic ring-opening polymerization of epsilon-caprolactone by various lipases was investigated in toluene at various temperatures. The determination of molecular weight and structural identification were carried out with gel permeation chromatography and proton NMR, respectively. Among the various lipases employed, an immobilized lipase from Candida antarctica B (Novozym 435) showed the highest catalytic activity. The polymerization of epsilon-caprolactone by Novozym 435 showed an optimal temperature of 65 degrees C and an optimum toluene content of 50/50 v/v of toluene and epsilon-caprolactone. As lipases can also degrade polyesters, a maximum in the molecular weight with time was obtained due to the competition between ring-opening polymerization and degradation by specific chain-end scission. The optimum temperature, toluene content, and the variation of molecular weight with time are consistent with earlier observations. A comprehensive model based on continuous distribution kinetics was developed to describe these phenomena. The model accounts for simultaneous polymerization, degradation and enzyme deactivation, and provides a technique to determine the rate coefficients for these processes. The dependence of these rate coefficients on temperature and monomer concentration is also discussed.
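
    The maximum in molecular weight arising from competing polymerization and chain-end scission can be reproduced with a toy two-rate model (a sketch only; the rate constants are invented and this is far simpler than the continuous-distribution-kinetics model in the paper):

```python
# Toy competition between chain growth and chain scission: conversion of
# monomer raises the number-average MW, scission multiplies the chain count
# and lowers it, producing a maximum in MW over time (invented rates).
kp, ks = 0.30, 0.02          # propagation / scission rate constants (1/h)
dt, steps = 0.1, 200         # Euler time step (h) and number of steps
m0 = 1000.0                  # initial monomer units
m, chains = m0, 10.0
mw_history = []
for _ in range(steps):
    m += -kp * m * dt                      # monomer consumed by propagation
    chains += ks * chains * dt             # scission creates new chains
    mw_history.append((m0 - m) / chains)   # number-average degree of polymerization

peak = max(mw_history)
print(f"MW peaks at step {mw_history.index(peak)} of {steps}")
```

    The molecular weight rises while monomer is abundant and then declines once scission dominates, matching the qualitative maximum reported above.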

  13. Double point source W-phase inversion: Real-time implementation and automated model selection

    USGS Publications Warehouse

    Nealy, Jennifer; Hayes, Gavin

    2015-01-01

    Rapid and accurate characterization of an earthquake source is an extremely important and ever evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion with centroid locations fixed at the single source solution location can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
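
    The model-selection step can be illustrated with the standard Gaussian-residual form of the AIC (a sketch with invented misfits and parameter counts, not the W-phase implementation):

```python
import math

def aic(rss, n, k):
    """Gaussian AIC from residual sum of squares, n samples, k parameters."""
    return n * math.log(rss / n) + 2 * k

n = 120                            # hypothetical number of waveform samples
rss_single, k_single = 40.0, 10    # invented misfit / parameter count, 1 source
rss_double, k_double = 30.0, 20    # invented misfit / parameter count, 2 sources

aic1 = aic(rss_single, n, k_single)
aic2 = aic(rss_double, n, k_double)
best = "double" if aic2 < aic1 else "single"
print(f"AIC single = {aic1:.1f}, AIC double = {aic2:.1f} -> prefer {best}")
```

    The 2k penalty term is what keeps the double-source model from being selected whenever its extra parameters only marginally reduce the misfit.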

  14. QSPR using MOLGEN-QSPR: the challenge of fluoroalkane boiling points.

    PubMed

    Rücker, Christoph; Meringer, Markus; Kerber, Adalbert

    2005-01-01

    By means of the new software MOLGEN-QSPR, a multilinear regression model for the boiling points of lower fluoroalkanes is established. The model is based exclusively on simple descriptors derived directly from molecular structure and nevertheless describes a broader set of data more precisely than previous attempts that used either more demanding (quantum chemical) descriptors or more demanding (nonlinear) statistical methods such as neural networks. The model's internal consistency was confirmed by leave-one-out cross-validation. The model was used to predict all unknown boiling points of fluorobutanes, and the quality of predictions was estimated by means of comparison with boiling point predictions for fluoropentanes.
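
    A multilinear regression with leave-one-out cross-validation of the kind described can be sketched in a few lines (toy descriptors and boiling points, not the MOLGEN-QSPR descriptor set):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy descriptors for 12 hypothetical fluoroalkanes: carbon and fluorine counts.
carbons = np.array([1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 4], dtype=float)
fluorines = np.array([1, 3, 2, 4, 6, 1, 3, 5, 2, 4, 6, 8], dtype=float)
# Invented "boiling points" with a known linear trend plus noise (degrees C).
y = 20.0 * carbons - 5.0 * fluorines + rng.normal(0.0, 2.0, 12)

A = np.column_stack([np.ones(12), carbons, fluorines])   # intercept + descriptors
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Leave-one-out cross-validation: refit without each compound, predict it.
press = 0.0
for i in range(12):
    mask = np.arange(12) != i
    c, *_ = np.linalg.lstsq(A[mask], y[mask], rcond=None)
    press += (y[i] - A[i] @ c) ** 2
q2 = 1.0 - press / ((y - y.mean()) ** 2).sum()
print(f"coefficients: {coef.round(2)}, LOO q2 = {q2:.3f}")
```

    A LOO q2 close to 1 indicates the internal consistency that the authors verify by cross-validation; unlike the training-set R2, it is computed only from predictions of held-out compounds.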

  15. A location-based multiple point statistics method: modelling the reservoir with non-stationary characteristics

    NASA Astrophysics Data System (ADS)

    Yin, Yanshu; Feng, Wenjie

    2017-12-01

    In this paper, a location-based multiple point statistics method is developed to model a non-stationary reservoir. The proposed method characterizes the relationship between the sedimentary pattern and the deposit location using the relative central position distance function, which alleviates the requirement that the training image and the simulated grids have the same dimension. The weights in every direction of the distance function can be changed to characterize the reservoir heterogeneity in various directions. The local integral replacements of data events, structured random path, distance tolerance and multi-grid strategy are applied to reproduce the sedimentary patterns and obtain a more realistic result. This method is compared with the traditional Snesim method using a synthesized 3-D training image of Poyang Lake and a reservoir model of Shengli Oilfield in China. The results indicate that the new method can reproduce the non-stationary characteristics better than the traditional method and is more suitable for simulation of delta-front deposits. These results show that the new method is a powerful tool for modelling a reservoir with non-stationary characteristics.

  16. Neural Modeling of Fuzzy Controllers for Maximum Power Point Tracking in Photovoltaic Energy Systems

    NASA Astrophysics Data System (ADS)

    Lopez-Guede, Jose Manuel; Ramos-Hernanz, Josean; Altın, Necmi; Ozdemir, Saban; Kurt, Erol; Azkune, Gorka

    2018-06-01

    One field in which electronic materials have an important role is energy generation, especially within the scope of photovoltaic energy. This paper deals with one of the most relevant enabling technologies within that scope, i.e., the algorithms for maximum power point tracking implemented in direct current to direct current converters, and their modeling through artificial neural networks (ANNs). More specifically, as a proof of concept, we have addressed the problem of modeling a fuzzy logic controller that has shown its performance in previous works, and more specifically the dimensionless duty cycle signal that controls a quadratic boost converter. We achieved a very accurate model: the obtained mean squared error is 3.47 × 10⁻⁶, the maximum error is 16.32 × 10⁻³ and the regression coefficient R is 0.99992, all for the test dataset. This neural implementation has obvious advantages, such as higher fault tolerance and a simpler implementation, dispensing with all the complex elements needed to run a fuzzy controller (fuzzifier, defuzzifier, inference engine and knowledge base) because, ultimately, ANNs are sums and products.
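    Since, as the abstract notes, an ANN is ultimately sums and products, the modeling task can be sketched end-to-end with a tiny hand-rolled network trained by gradient descent on a hypothetical duty-cycle curve (all shapes and constants invented), reporting the same three figures of merit:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical duty-cycle map of a fuzzy MPPT controller vs. normalised
# panel voltage (shape and constants are illustrative only).
x = np.linspace(-1.0, 1.0, 200)[:, None]
d = 0.4 + 0.2 * np.tanh(2.0 * x[:, 0])

# One hidden tanh layer: literally the "sums and products" of the abstract.
W1 = rng.normal(0.0, 1.0, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.1, (8, 1)); b2 = np.zeros(1)

lr, N = 0.05, len(x)
for _ in range(5000):                         # plain full-batch gradient descent
    h = np.tanh(x @ W1 + b1)                  # hidden activations (200, 8)
    pred = (h @ W2 + b2)[:, 0]
    err = pred - d                            # loss = mean(err**2)
    gW2 = h.T @ err[:, None] * (2 / N)
    gb2 = np.array([2 * err.mean()])
    gh = err[:, None] @ W2.T * (1 - h ** 2) * (2 / N)
    gW1 = x.T @ gh; gb1 = gh.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

pred = (np.tanh(x @ W1 + b1) @ W2 + b2)[:, 0]
mse = np.mean((pred - d) ** 2)                # mean squared error
max_err = np.max(np.abs(pred - d))            # maximum error
r = np.corrcoef(pred, d)[0, 1]                # regression coefficient R
```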

  17. Low-energy electron dose-point kernel simulations using new physics models implemented in Geant4-DNA

    NASA Astrophysics Data System (ADS)

    Bordes, Julien; Incerti, Sébastien; Lampe, Nathanael; Bardiès, Manuel; Bordage, Marie-Claude

    2017-05-01

    When low-energy electrons, such as Auger electrons, interact with liquid water, they induce highly localized ionizing energy depositions over ranges comparable to cell diameters. Monte Carlo track structure (MCTS) codes are suitable tools for performing dosimetry at this level. One of the main MCTS codes, Geant4-DNA, is equipped with only two sets of cross section models for low-energy electron interactions in liquid water ("option 2" and its improved version, "option 4"). To provide Geant4-DNA users with new alternative physics models, a set of cross sections extracted from the CPA100 MCTS code has been added to Geant4-DNA. This new version is hereafter referred to as "Geant4-DNA-CPA100". In this study, "Geant4-DNA-CPA100" was used to calculate low-energy electron dose-point kernels (DPKs) between 1 keV and 200 keV. Such kernels represent the radial energy deposited by an isotropic point source, a parameter that is useful for dosimetry calculations in nuclear medicine. In order to assess the influence of different physics models on DPK calculations, DPKs were calculated using the existing Geant4-DNA models ("option 2" and "option 4"), the newly integrated CPA100 models, and the PENELOPE Monte Carlo code used in step-by-step mode for monoenergetic electrons. Additionally, a comparison was performed of two sets of DPKs simulated with "Geant4-DNA-CPA100" - the first set using Geant4's default settings, and the second using CPA100's original default settings. A maximum difference of 9.4% was found between the Geant4-DNA-CPA100 and PENELOPE DPKs. Between the two existing Geant4-DNA models, slight differences between 1 keV and 10 keV were observed. It was highlighted that the DPKs simulated with the two existing Geant4-DNA models were always broader than those generated with "Geant4-DNA-CPA100". The discrepancies observed between the DPKs generated using Geant4-DNA's existing models and "Geant4-DNA-CPA100" were caused solely by their different cross sections.
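    The kernel-scoring step — binning Monte Carlo energy deposits into spherical shells around a point source — can be sketched with toy data (the deposit positions and energies below are synthetic, not Geant4-DNA output):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy Monte Carlo output: energy-deposition positions (nm) around an
# isotropic point source, with per-step deposited energy (eV).
pos = rng.normal(0.0, 50.0, size=(10000, 3))
edep = rng.exponential(30.0, size=10000)

r = np.linalg.norm(pos, axis=1)               # radial distance of each deposit
edges = np.linspace(0.0, 250.0, 26)           # concentric shell boundaries
hist, _ = np.histogram(r, bins=edges, weights=edep)

# Dose-point kernel as the fraction of emitted energy deposited per shell.
frac = hist / edep.sum()
```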

  18. Modeling non-point source pollutants in the vadose zone: Back to the basics

    NASA Astrophysics Data System (ADS)

    Corwin, Dennis L.; Letey, John, Jr.; Carrillo, Marcia L. K.

    More than ever before in the history of scientific investigation, modeling is viewed as a fundamental component of the scientific method because of the relatively recent development of the computer. No longer must the scientific investigator be confined to artificially isolated studies of individual processes that can lead to oversimplified and sometimes erroneous conceptions of larger phenomena. Computer models now enable scientists to attack problems related to open systems, such as climatic change and the assessment of environmental impacts, where the whole of the interactive processes is greater than the sum of their isolated components. Environmental assessment involves the determination of the change of some constituent over time. This change can be measured in real time or predicted with a model. The advantage of prediction, like preventative medicine, is that it can be used to alter the occurrence of potentially detrimental conditions before they are manifest. The much greater efficiency of preventative, rather than remedial, efforts strongly justifies the need for the ability to accurately model environmental contaminants such as non-point source (NPS) pollutants. However, the environmental modeling advances that have accompanied computer technological development are a mixed blessing. Where once we had a plethora of discordant data without a holistic theory, the pendulum has now swung so that we suffer from a growing stockpile of models, a significant number of which have never been confirmed, or even subjected to attempts at confirmation. Modeling has become an end in itself rather than a means because of limited research funding, the high cost of field studies, limitations in time and patience, difficulty in cooperative research, and pressure to publish papers as quickly as possible. Modeling and experimentation should be ongoing processes that reciprocally enhance one another, with sound, comprehensive experiments serving as the building blocks of models and models

  19. Pivots for pointing: visually-monitored pointing has higher arm elevations than pointing blindfolded.

    PubMed

    Wnuczko, Marta; Kennedy, John M

    2011-10-01

    Observers pointing to a target viewed directly may elevate their fingertip close to the line of sight. However, pointing blindfolded, after viewing the target, they may pivot lower, from the shoulder, aligning the arm with the target as if reaching to the target. Indeed, in Experiment 1 participants elevated their arms more in visually monitored than blindfolded pointing. In Experiment 2, pointing to a visible target they elevated a short pointer more than a long one, raising its tip to the line of sight. In Experiment 3, the Experimenter aligned the participant's arm with the target. Participants judged they were pointing below a visually monitored target. In Experiment 4, participants viewing another person pointing, eyes-open or eyes-closed, judged the target was aligned with the pointing arm. In Experiment 5, participants viewed their arm and the target via a mirror and posed their arm so that it was aligned with the target. Arm elevation was higher in pointing directly.

  20. Conversion of Component-Based Point Definition to VSP Model and Higher Order Meshing

    NASA Technical Reports Server (NTRS)

    Ordaz, Irian

    2011-01-01

    Vehicle Sketch Pad (VSP) has become a powerful conceptual and parametric geometry tool with numerous export capabilities for third-party analysis codes as well as robust surface meshing capabilities for computational fluid dynamics (CFD) analysis. However, a capability gap currently exists for reconstructing a fully parametric VSP model of a geometry generated by third-party software. A computer code called GEO2VSP has been developed to close this gap and to allow the integration of VSP into a closed-loop geometry design process with other third-party design tools. Furthermore, the automated CFD surface meshing capability of VSP is demonstrated for component-based point definition geometries in a conceptual analysis and design framework.

  1. Tricritical points in a Vicsek model of self-propelled particles with bounded confidence

    NASA Astrophysics Data System (ADS)

    Romensky, Maksym; Lobaskin, Vladimir; Ihle, Thomas

    2014-12-01

    We study the orientational ordering in systems of self-propelled particles with selective interactions. To introduce the selectivity we augment the standard Vicsek model with a bounded-confidence collision rule: a given particle only aligns to neighbors whose directions are quite similar to its own. Neighbors whose directions deviate by more than a fixed restriction angle α are ignored. The collective dynamics of this system is studied by agent-based simulations and kinetic mean-field theory. We demonstrate that the reduction of the restriction angle leads to a critical noise amplitude decreasing monotonically with that angle, turning into a power law with exponent 3/2 for small angles. Moreover, for small system sizes we show that upon decreasing the restriction angle, the nature of the transition to polar collective motion changes from continuous to discontinuous. Thus, an apparent tricritical point with different scaling laws is identified and calculated analytically. We investigate the shifting and vanishing of this point due to the formation of density bands as the system size is increased. Agent-based simulations in small systems with large particle velocities show excellent agreement with the kinetic theory predictions. We also find that at very small interaction angles, the polar ordered phase becomes unstable with respect to the apolar phase. We derive analytical expressions for the dependence of the threshold noise on the restriction angle. We show that the mean-field kinetic theory also permits stationary nematic states below a restriction angle of 0.681π. We calculate the critical noise, at which the disordered state bifurcates to a nematic state, and find that it is always smaller than the threshold noise for the transition from disorder to polar order. The disordered-nematic transition features two tricritical points: the transition is discontinuous at low and high restriction angles but continuous at intermediate α. We generalize our results to
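    One update of the bounded-confidence collision rule can be sketched as follows (a minimal agent-based step, with parameter names chosen here for illustration):

```python
import numpy as np

def vicsek_bc_step(pos, theta, v0, R, alpha, eta, rng):
    """One update of a 2-D Vicsek model with a bounded-confidence rule:
    particle i averages only over neighbours (within radius R) whose
    direction differs from its own by less than the restriction angle
    alpha; each particle always counts itself. eta scales the angular
    noise; v0 is the particle speed."""
    N = len(theta)
    new_theta = np.empty(N)
    for i in range(N):
        dist = np.linalg.norm(pos - pos[i], axis=1)
        ddir = np.angle(np.exp(1j * (theta - theta[i])))   # wrapped difference
        mask = (dist < R) & (np.abs(ddir) < alpha)         # selective neighbours
        new_theta[i] = (np.angle(np.mean(np.exp(1j * theta[mask])))
                        + eta * rng.uniform(-np.pi, np.pi))
    new_pos = pos + v0 * np.column_stack([np.cos(new_theta), np.sin(new_theta)])
    return new_pos, new_theta
```

With zero noise, two co-located particles whose directions differ by more than α ignore each other, while a wider restriction angle makes them align on the mean direction.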

  2. On constraining pilot point calibration with regularization in PEST

    USGS Publications Warehouse

    Fienen, M.N.; Muffels, C.T.; Hunt, R.J.

    2009-01-01

    Ground water model calibration has made great advances in recent years with practical tools such as PEST being instrumental for making the latest techniques available to practitioners. As models and calibration tools get more sophisticated, however, the power of these tools can be misapplied, resulting in poor parameter estimates and/or nonoptimally calibrated models that do not suit their intended purpose. Here, we focus on an increasingly common technique for calibrating highly parameterized numerical models - pilot point parameterization with Tikhonov regularization. Pilot points are a popular method for spatially parameterizing complex hydrogeologic systems; however, additional flexibility offered by pilot points can become problematic if not constrained by Tikhonov regularization. The objective of this work is to explain and illustrate the specific roles played by control variables in the PEST software for Tikhonov regularization applied to pilot points. A recent study encountered difficulties implementing this approach, but through examination of that analysis, insight into underlying sources of potential misapplication can be gained and some guidelines for overcoming them developed. © 2009 National Ground Water Association.
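    The Tikhonov-regularized estimation discussed above can be illustrated in miniature: stack weighted regularization rows (preferred-value constraints on the parameters) under the data rows and solve the combined least-squares problem. This is a generic sketch of the scheme, not PEST's implementation:

```python
import numpy as np

def tikhonov_solve(J, d, L, lam):
    """Solve min ||J p - d||^2 + lam^2 ||L p||^2 by stacking lam*L under
    the Jacobian J. J maps parameters to observations d; L encodes the
    regularization operator (identity = preferred value of zero); lam
    trades data fit against the regularization constraint."""
    A = np.vstack([J, lam * L])
    b = np.concatenate([np.asarray(d, float), np.zeros(L.shape[0])])
    return np.linalg.lstsq(A, b, rcond=None)[0]
```

With lam = 0 the data are honored exactly; increasing lam pulls the estimate toward the regularization target, which is how over-flexible pilot-point fields are tamed.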

  3. Critical point of gas-liquid type phase transition and phase equilibrium functions in developed two-component plasma model.

    PubMed

    Butlitsky, M A; Zelener, B B; Zelener, B V

    2014-07-14

    A two-component plasma model, which we called a "shelf Coulomb" model, has been developed in this work. A Monte Carlo study has been undertaken to calculate equations of state, pair distribution functions, internal energies, and other thermodynamic properties. A canonical NVT ensemble with periodic boundary conditions was used. The motivation behind the model is also discussed in this work. The "shelf Coulomb" model can be compared to the classical two-component (electron-proton) model, where charges of zero size interact via the classical Coulomb law, with an important difference in the interaction of opposite charges: electrons and protons interact via the Coulomb law at large interparticle distances, while the interaction potential is cut off at small distances. The cutoff distance is defined by an arbitrary parameter ε, which depends on the system temperature. All thermodynamic properties of the model depend only on the dimensionless parameters ε and γ = βe²n^(1/3) (where β = 1/k_BT, n is the particle density, k_B is the Boltzmann constant, and T is the temperature). In addition, it has been shown that the virial theorem holds in this model. All calculations were carried out over a wide range of the dimensionless parameters ε and γ in order to find the phase transition region, critical point, spinodal, and binodal lines of the model system. The system is observed to undergo a first-order gas-liquid type phase transition with the critical point in the vicinity of ε_crit ≈ 13 (T*_crit ≈ 0.076), γ_crit ≈ 1.8 (v*_crit ≈ 0.17), P*_crit ≈ 0.39, where the specific volume is v* = 1/γ³ and the reduced temperature is T* = ε⁻¹.
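    The opposite-charge pair interaction described above can be sketched in reduced units; the exact shelf shape below (a constant plateau matching the Coulomb value at the cutoff) is an illustrative assumption, with the cutoff radius standing in for the temperature-dependent ε parameter:

```python
import numpy as np

def shelf_coulomb(r, r_cut):
    """Opposite-charge pair potential (reduced units): plain Coulomb
    attraction -1/r beyond r_cut, flattened to a constant "shelf"
    -1/r_cut inside it, so the potential stays finite at contact.
    np.maximum guards the unused Coulomb branch against r = 0."""
    r = np.asarray(r, float)
    return np.where(r > r_cut, -1.0 / np.maximum(r, 1e-300), -1.0 / r_cut)
```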

  4. Critical point of gas-liquid type phase transition and phase equilibrium functions in developed two-component plasma model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butlitsky, M. A.; Zelener, B. V.; Zelener, B. B.

    A two-component plasma model, which we called a "shelf Coulomb" model, has been developed in this work. A Monte Carlo study has been undertaken to calculate equations of state, pair distribution functions, internal energies, and other thermodynamic properties. A canonical NVT ensemble with periodic boundary conditions was used. The motivation behind the model is also discussed in this work. The "shelf Coulomb" model can be compared to the classical two-component (electron-proton) model, where charges of zero size interact via the classical Coulomb law, with an important difference in the interaction of opposite charges: electrons and protons interact via the Coulomb law at large interparticle distances, while the interaction potential is cut off at small distances. The cutoff distance is defined by an arbitrary parameter ε, which depends on the system temperature. All thermodynamic properties of the model depend only on the dimensionless parameters ε and γ = βe²n^(1/3) (where β = 1/k_BT, n is the particle density, k_B is the Boltzmann constant, and T is the temperature). In addition, it has been shown that the virial theorem holds in this model. All calculations were carried out over a wide range of the dimensionless parameters ε and γ in order to find the phase transition region, critical point, spinodal, and binodal lines of the model system. The system is observed to undergo a first-order gas-liquid type phase transition with the critical point in the vicinity of ε_crit ≈ 13 (T*_crit ≈ 0.076), γ_crit ≈ 1.8 (v*_crit ≈ 0.17), P*_crit ≈ 0.39, where the specific volume is v* = 1/γ³ and the reduced temperature is T* = ε⁻¹.

  5. Fermion-induced quantum critical points.

    PubMed

    Li, Zi-Xiang; Jiang, Yi-Fan; Jian, Shao-Kai; Yao, Hong

    2017-08-22

    A unified theory of quantum critical points beyond the conventional Landau-Ginzburg-Wilson paradigm remains unknown. According to the Landau cubic criterion, phase transitions should be first-order when cubic terms of the order parameters are allowed by symmetry in the Landau-Ginzburg free energy. Here, from renormalization group analysis, we show that second-order quantum phase transitions can occur at such putatively first-order transitions in interacting two-dimensional Dirac semimetals. As this type of Landau-forbidden quantum critical point is induced by gapless fermions, we call them fermion-induced quantum critical points. We further introduce a microscopic model of SU(N) fermions on the honeycomb lattice featuring a transition between Dirac semimetals and Kekule valence bond solids. Remarkably, our large-scale sign-problem-free Majorana quantum Monte Carlo simulations show convincing evidence of fermion-induced quantum critical points for N = 2, 3, 4, 5 and 6, consistent with the renormalization group analysis. We finally discuss possible experimental realizations of fermion-induced quantum critical points in graphene and graphene-like materials. Quantum phase transitions are governed by Landau-Ginzburg theory and the exceptions are rare. Here, Li et al. propose a type of Landau-forbidden quantum critical point induced by gapless fermions in two-dimensional Dirac semimetals.

  6. Tank tests of three models of flying-boat hulls of the pointed-step type with different angles of dead rise - NACA model 35 series

    NASA Technical Reports Server (NTRS)

    Dawson, John R

    1936-01-01

    The results of tank tests of three models of flying-boat hulls of the pointed-step type with different angles of dead rise are given in charts and are compared with results from tests of more conventional hulls. Increasing the angle of dead rise from 15 to 25 degrees had little effect on the hump resistance; increased the resistance throughout the planing range; increased the best trim angle; reduced the maximum positive trimming moment required to obtain the best trim angle; and had only a slight effect on the spray characteristics. For approximately the same angles of dead rise, the resistance of the pointed-step hulls was considerably lower at high speeds than that of the more conventional hulls.

  7. Fragmentation approach to the point-island model with hindered aggregation: Accessing the barrier energy.

    PubMed

    González, Diego Luis; Pimpinelli, Alberto; Einstein, T L

    2017-07-01

    We study the effect of hindered aggregation on the island formation process in one- (1D) and two-dimensional (2D) point-island models for epitaxial growth with arbitrary critical nucleus size i. In our model, the attachment of monomers to preexisting islands is hindered by an additional attachment barrier, characterized by a length l_a. For l_a = 0 the islands behave as perfect sinks, while for l_a → ∞ they behave as reflecting boundaries. For intermediate values of l_a, the system exhibits a crossover between two different kinds of processes, diffusion-limited aggregation and attachment-limited aggregation. We calculate the growth exponents of the density of islands and monomers for the low-coverage and aggregation regimes. The capture-zone (CZ) distributions are also calculated for different values of i and l_a. In order to obtain a good spatial description of the nucleation process, we propose a fragmentation model based on an approximate description of nucleation inside the gaps for 1D and the CZs for 2D. In both cases, nucleation is described using two different physically rooted probabilities, which are related to the microscopic parameters of the model (i and l_a). We test our analytical model with extensive numerical simulations and previously established results. The proposed model describes the statistical behavior of the system excellently for arbitrary values of l_a and i = 1, 2, and 3.

  8. Validation of Point Clouds Segmentation Algorithms Through Their Application to Several Case Studies for Indoor Building Modelling

    NASA Astrophysics Data System (ADS)

    Macher, H.; Landes, T.; Grussenmeyer, P.

    2016-06-01

    Laser scanners are widely used for the modelling of existing buildings and particularly in the creation process of as-built BIM (Building Information Modelling). However, the generation of as-built BIM from point clouds involves mainly manual steps, and it is consequently time-consuming and error-prone. Along the path to automation, a three-step segmentation approach has been developed. This approach is composed of two phases: a segmentation into sub-spaces, namely floors and rooms, and a plane segmentation combined with the identification of building elements. In order to assess and validate the developed approach, different case studies are considered. Indeed, it is essential to apply algorithms to several datasets and not to develop them against a single dataset whose particularities could bias the development. Indoor point clouds of different types of buildings will be used as input for the developed algorithms, ranging from an individual house of almost one hundred square metres to larger buildings of several thousand square metres. The datasets provide various space configurations and present numerous different occluding objects such as desks, computer equipment, home furnishings and even wine barrels. For each dataset, the results will be illustrated. The analysis of the results will provide an insight into the transferability of the developed approach for the indoor modelling of several types of buildings.

  9. Gaussian mixed model in support of semiglobal matching leveraged by ground control points

    NASA Astrophysics Data System (ADS)

    Ma, Hao; Zheng, Shunyi; Li, Chang; Li, Yingsong; Gui, Li

    2017-04-01

    Semiglobal matching (SGM) has been widely applied to large aerial images because of its good tradeoff between complexity and robustness. The concept of ground control points (GCPs) is adopted to make SGM more robust. We model the effect of GCPs as two data terms for stereo matching between high-resolution aerial epipolar images in an iterative scheme. One term, based on GCPs, is formulated by a Gaussian mixture model, which strengthens the relationship between the GCPs and the pixels to be estimated and encodes some degree of consistency between them with respect to disparity values. The other term depends on pixel-wise confidence, and we further design a confidence-updating equation based on three rules. With this confidence-based term, the disparity assignment can be heuristically selected within the disparity search range during the iteration process. Several iterations are sufficient to produce satisfactory results according to our experiments. Experimental results validate that the proposed method outperforms surface reconstruction, which is a representative variant of SGM and performs excellently on aerial images.
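    The GCP-based data term can be sketched as a Gaussian-mixture penalty over candidate disparities, with each GCP's vote weighted by its spatial proximity to the pixel being estimated; the parameter names and values here are illustrative, not the paper's:

```python
import numpy as np

def gcp_cost(d_candidates, gcps, p, sigma_s=25.0, sigma_d=2.0):
    """Gaussian-mixture data term sketch: each ground control point
    (pixel position gp, disparity gd) contributes a Gaussian preference
    for candidate disparities near gd, weighted by the spatial distance
    between gp and pixel p. sigma_s / sigma_d are placeholder scales."""
    d = np.asarray(d_candidates, float)
    p = np.asarray(p, float)
    cost = np.zeros_like(d)
    for gp, gd in gcps:
        w = np.exp(-np.sum((p - np.asarray(gp, float)) ** 2) / (2 * sigma_s ** 2))
        cost += w * (1.0 - np.exp(-(d - gd) ** 2 / (2 * sigma_d ** 2)))
    return cost
```

For a pixel coincident with a single GCP, the minimum-cost candidate is the GCP's own disparity, which is the consistency the data term is meant to encode.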

  10. The prediction of the flash point for binary aqueous-organic solutions.

    PubMed

    Liaw, Horng-Jang; Chiu, Yi-Yu

    2003-07-18

    A mathematical model, which may be used for predicting the flash point of aqueous-organic solutions, has been proposed and subsequently verified by experimentally derived data. The results reveal that this model is able to precisely predict the flash point over the entire composition range of binary aqueous-organic solutions by utilizing the flash point data of the flammable component. The derivative of flash point with respect to composition (the effect of solution composition on flash point) can be applied in process safety design/operation to identify whether diluting a flammable liquid solution with water is effective in reducing the fire and explosion hazard of the solution at a specified composition. Such a derivative equation was derived based upon the flash point prediction model referred to above and then verified by the application of experimentally derived data.
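    A minimal sketch of this kind of prediction for one hypothetical flammable component (ethanol, with standard textbook Antoine constants), assuming an ideal solution (activity coefficient γ = 1), whereas the published model uses composition-dependent activity coefficients:

```python
import numpy as np

def psat_mmHg(T):
    """Antoine vapour pressure of ethanol in mmHg (T in deg C),
    standard textbook constants."""
    return 10.0 ** (8.20417 - 1642.89 / (T + 230.300))

def flash_point(x_flam, gamma=1.0, T_fp_pure=13.0):
    """Flash point of a binary aqueous-organic solution: the mixture
    flashes at the temperature where the flammable component's partial
    pressure x*gamma*Psat(T) reaches the pure component's vapour pressure
    at its own flash point (~13 deg C for ethanol). gamma=1 is an
    ideal-solution simplification of the published model."""
    target = psat_mmHg(T_fp_pure)
    f = lambda T: x_flam * gamma * psat_mmHg(T) - target
    lo, hi = -50.0, 150.0               # bracket: f(lo) < 0 < f(hi)
    for _ in range(100):                # bisection on the monotone residual
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Diluting the flammable component (smaller x_flam) raises the predicted flash point, which is exactly the hazard-reduction question the composition derivative addresses.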

  11. A new stochastic model considering satellite clock interpolation errors in precise point positioning

    NASA Astrophysics Data System (ADS)

    Wang, Shengli; Yang, Fanlin; Gao, Wang; Yan, Lizi; Ge, Yulong

    2018-03-01

    Precise clock products are typically interpolated based on the sampling interval of the observational data when they are used in precise point positioning. However, due to the white noise present in atomic clocks, a residual component of such noise will inevitably reside within the observations when clock errors are interpolated, and such noise will affect the resolution of the positioning results. In this paper, based on a twenty-one-week analysis of the atomic clock noise characteristics of numerous satellites, a new stochastic observation model that considers satellite clock interpolation errors is proposed. First, the systematic error of each satellite in the IGR clock product was extracted using a wavelet de-noising method to obtain the empirical characteristics of the atomic clock noise within each clock product. Then, based on those empirical characteristics, a stochastic observation model was structured that considers the satellite clock interpolation errors. Subsequently, the IGR and IGS clock products at different time intervals were used for experimental validation. A verification using 179 IGS stations worldwide showed that, compared with the conventional model, the convergence times using the stochastic model proposed in this study were shortened by 4.8% and 4.0%, respectively, when the IGR and IGS 300-s-interval clock products were used, and by 19.1% and 19.4% when the 900-s-interval clock products were used. Furthermore, the disturbances during the initial phase of the calculation were also effectively reduced.

  12. Limited Sampling Strategy for Accurate Prediction of Pharmacokinetics of Saroglitazar: A 3-point Linear Regression Model Development and Successful Prediction of Human Exposure.

    PubMed

    Joshi, Shuchi N; Srinivas, Nuggehally R; Parmar, Deven V

    2018-03-01

    Our aim was to develop and validate the extrapolative performance of a regression model using a limited sampling strategy for accurate estimation of the area under the plasma concentration versus time curve (AUC) for saroglitazar. Healthy-subject pharmacokinetic data from a well-powered food-effect study (fasted vs fed treatments; n = 50) were used in this work. The first 25 subjects' serial plasma concentration data up to 72 hours and the corresponding AUC(0-t) (i.e., 72 hours) from the fasting group comprised a training dataset to develop the limited sampling model. The internal datasets for prediction included the remaining 25 subjects from the fasting group and all 50 subjects from the fed condition of the same study. The external datasets included pharmacokinetic data for saroglitazar from previous single-dose clinical studies. Limited sampling models were composed of 1-, 2-, and 3-concentration-time-point correlations with the AUC(0-t) of saroglitazar. Only models with regression coefficients (R²) > 0.90 were screened for further evaluation. The best-R² model was validated for its utility based on mean prediction error, mean absolute prediction error, and root mean square error. Both correlation between predicted and observed AUC(0-t) of saroglitazar and verification of precision and bias using a Bland-Altman plot were carried out. None of the evaluated 1- and 2-concentration-time-point models achieved R² > 0.90. Among the various 3-concentration-time-point models, only 4 equations passed the predefined criterion of R² > 0.90. Limited sampling models with time points 0.5, 2, and 8 hours (R² = 0.9323) and 0.75, 2, and 8 hours (R² = 0.9375) were validated. Mean prediction error, mean absolute prediction error, and root mean square error were <30% (predefined criterion) and the correlation (r) was at least 0.7950 for the consolidated internal and external datasets of 102 healthy subjects for the AUC(0-t) prediction of saroglitazar. The same models, when applied to the AUC(0-t)
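    The limited-sampling idea — regressing AUC(0-t) on three concentration-time points, screening on R², and reporting the percentage prediction error — can be sketched on synthetic data (all concentrations, coefficients, and subject counts below are invented, not the study's):

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical training data: plasma concentrations at 0.5, 2 and 8 h for
# 25 subjects, and the "observed" AUC(0-t) built from an invented relation.
C = rng.lognormal(mean=[2.0, 2.5, 1.5], sigma=0.3, size=(25, 3))
auc = C @ np.array([1.2, 3.5, 6.0]) + 4.0

X = np.column_stack([np.ones(25), C])          # 3-point model with intercept
beta, *_ = np.linalg.lstsq(X, auc, rcond=None)
pred = X @ beta

ss_res = np.sum((auc - pred) ** 2)
r2 = 1 - ss_res / np.sum((auc - auc.mean()) ** 2)   # screen: keep if R^2 > 0.90
mpe = 100 * np.mean((pred - auc) / auc)             # mean prediction error, %
rmse = np.sqrt(np.mean((pred - auc) ** 2))          # root mean square error
```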

  13. 3DVEM Software Modules for Efficient Management of Point Clouds and Photorealistic 3d Models

    NASA Astrophysics Data System (ADS)

    Fabado, S.; Seguí, A. E.; Cabrelles, M.; Navarro, S.; García-De-San-Miguel, D.; Lerma, J. L.

    2013-07-01

    Cultural heritage managers in general, and information users in particular, are not usually accustomed to dealing with highly technological hardware and software. On the contrary, providers of metric surveys are most of the time applying the latest developments in real-life conservation and restoration projects. This paper addresses the software issue of handling and managing either 3D point clouds or (photorealistic) 3D models to bridge the gap between information users and information providers as regards the management of information which users and providers share as a tool for decision-making, analysis, visualization and management. There are not many viewers specifically designed to handle, manage and easily create animations of architectural and/or archaeological 3D objects, monuments and sites, among others. The 3DVEM - 3D Viewer, Editor & Meter software will be introduced to the scientific community, as well as 3DVEM - Live and 3DVEM - Register. The advantages of managing projects with both sets of data, 3D point clouds and photorealistic 3D models, will be introduced. Different visualizations of true documentation projects in the fields of architecture, archaeology and industry will be presented. Emphasis will be placed on highlighting the features of new user-friendly software to manage virtual projects. Furthermore, the ease of creating controlled interactive animations (both walk-through and fly-through) by the user, either on-the-fly or as a traditional movie file, will be demonstrated through 3DVEM - Live.

  14. Multiview point clouds denoising based on interference elimination

    NASA Astrophysics Data System (ADS)

    Hu, Yang; Wu, Qian; Wang, Le; Jiang, Huanyu

    2018-03-01

    Newly emerging low-cost depth sensors offer huge potential for three-dimensional (3-D) modeling, but their high noise levels restrict these sensors from obtaining accurate results. Thus, we propose a method for denoising registered multiview point clouds with high noise. The proposed method aims to fully use redundant information to eliminate the interferences among point clouds of different views based on an iterative procedure. In each iteration, noisy points are either deleted or moved to their weighted average targets in accordance with two cases. Simulated data and practical data captured by a Kinect v2 sensor were tested in experiments qualitatively and quantitatively. Results showed that the proposed method can effectively reduce noise and recover local features from highly noisy multiview point clouds with good robustness, compared to the truncated signed distance function and moving least squares (MLS) methods. Moreover, the resulting low-noise point clouds can be further smoothed by MLS to achieve improved results. This study demonstrates the feasibility of obtaining fine 3-D models with high-noise devices, especially depth sensors such as the Kinect.

  15. Inherently unstable networks collapse to a critical point

    NASA Astrophysics Data System (ADS)

    Sheinman, M.; Sharma, A.; Alvarado, J.; Koenderink, G. H.; MacKintosh, F. C.

    2015-07-01

    Nonequilibrium systems that are driven or drive themselves towards a critical point have been studied for almost three decades. Here we present a minimalist example of such a system, motivated by experiments on collapsing active elastic networks. Our model of an unstable elastic network exhibits a collapse towards a critical point from any macroscopically connected initial configuration. Taking into account steric interactions within the network, the model qualitatively and quantitatively reproduces results of the experiments on collapsing active gels.

  16. Fine pointing of the Solar Optical Telescope in the Space Shuttle environment

    NASA Astrophysics Data System (ADS)

    Gowrinathan, S.

    Instruments requiring fine (i.e., sub-arcsecond) pointing, such as the Solar Optical Telescope (SOT), must be equipped with two-stage pointing devices, coarse and fine. Coarse pointing will be performed by a gimbal system, such as the Instrument Pointing System, while the image motion compensation (IMC) will provide fine pointing. This paper describes work performed on the SOT concept design that illustrates IMC as applied to SOT. The SOT control system was modeled in the frequency domain to evaluate performance, stability, and bandwidth requirements. The two requirements of the pointing control, i.e., the 2 arcsecond reproducibility and 0.03 arcsecond rms pointing jitter, can be satisfied by use of IMC at about 20 Hz bandwidth. The need for this high bandwidth is related to Shuttle-induced disturbances that arise primarily from man push-offs and vernier thruster firings. A block diagram of SOT model/stability analysis, schematic illustrations of the SOT pointing system, and a structural model summary are included.

  17. Integrating Wind Profiling Radars and Radiosonde Observations with Model Point Data to Develop a Decision Support Tool to Assess Upper-Level Winds for Space Launch

    NASA Technical Reports Server (NTRS)

    Bauman, William H., III; Flinn, Clay

    2013-01-01

    On the day of launch, the 45th Weather Squadron (45 WS) Launch Weather Officers (LWOs) monitor the upper-level winds for their launch customers. During launch operations, the payload/launch team sometimes asks the LWOs if they expect the upper-level winds to change during the countdown. The LWOs used numerical weather prediction model point forecasts to provide the information, but did not have the capability to quickly retrieve or adequately display the upper-level observations and compare them directly in the same display to the model point forecasts to help them determine which model performed the best. The LWOs requested the Applied Meteorology Unit (AMU) develop a graphical user interface (GUI) that will plot upper-level wind speed and direction observations from the Cape Canaveral Air Force Station (CCAFS) Automated Meteorological Profiling System (AMPS) rawinsondes with point forecast wind profiles from the National Centers for Environmental Prediction (NCEP) North American Mesoscale (NAM), Rapid Refresh (RAP) and Global Forecast System (GFS) models to assess the performance of these models. The AMU suggested adding observations from the NASA 50 MHz wind profiler and one of the US Air Force 915 MHz wind profilers, both located near the Kennedy Space Center (KSC) Shuttle Landing Facility, to supplement the AMPS observations with more frequent upper-level profiles. Figure 1 shows a map of KSC/CCAFS with the locations of the observation sites and the model point forecasts.

  18. Approaches to highly parameterized inversion: Pilot-point theory, guidelines, and research directions

    USGS Publications Warehouse

    Doherty, John E.; Fienen, Michael N.; Hunt, Randall J.

    2011-01-01

    Pilot points have been used in geophysics and hydrogeology for at least 30 years as a means to bridge the gap between estimating a parameter value in every cell of a model and subdividing models into a small number of homogeneous zones. Pilot points serve as surrogate parameters at which values are estimated in the inverse-modeling process, and their values are interpolated onto the modeling domain in such a way that heterogeneity can be represented at a much lower computational cost than trying to estimate parameters in every cell of a model. Although the use of pilot points is increasingly common, there are few works documenting the mathematical implications of their use and even fewer sources of guidelines for their implementation in hydrogeologic modeling studies. This report describes the mathematics of pilot-point use, provides guidelines for their use in the parameter-estimation software suite (PEST), and outlines several research directions. Two key attributes for pilot-point definitions are highlighted. First, the difference between the information contained in the every-cell parameter field and the surrogate parameter field created using pilot points should be in the realm of parameters which are not informed by the observed data (the null space). Second, the interpolation scheme for projecting pilot-point values onto model cells ideally should be orthogonal. These attributes are informed by the mathematics and have important ramifications for both the guidelines and suggestions for future research.
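
    The projection step can be sketched as follows; inverse-distance weighting stands in for the kriging that PEST typically uses, and the pilot-point locations and log-conductivity values are invented for illustration:

    ```python
    import numpy as np

    # Hypothetical pilot-point locations and the log-conductivity values the
    # inversion has estimated at them
    pilots = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    values = np.array([-2.0, -1.0, -3.0, -2.5])

    def project_to_grid(pilots, values, grid, power=2.0):
        """Spread pilot-point values onto model cells by inverse-distance
        weighting (PEST typically uses kriging; IDW keeps the sketch short)."""
        out = np.empty(len(grid))
        for i, cell in enumerate(grid):
            d = np.linalg.norm(pilots - cell, axis=1)
            if d.min() < 1e-12:              # cell coincides with a pilot point
                out[i] = values[d.argmin()]
                continue
            w = 1.0 / d**power
            out[i] = (w * values).sum() / w.sum()
        return out

    xs, ys = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
    grid = np.c_[xs.ravel(), ys.ravel()]
    field = project_to_grid(pilots, values, grid)   # one value per model cell
    ```

    An interpolator closer to orthogonal, as the report recommends, would replace `project_to_grid` while keeping the same surrogate-parameter structure.
    
    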

  19. Two-point correlators revisited: fast and slow scales in multifield models of inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghersi, José T. Gálvez; Frolov, Andrei V., E-mail: joseg@sfu.ca, E-mail: frolov@sfu.ca

    2017-05-01

    We study the structure of two-point correlators of the inflationary field fluctuations in order to improve the accuracy and efficiency of the existing methods to calculate primordial spectra. We present a description motivated by the separation of the fast and slow evolving components of the spectrum, based on the Cholesky decomposition of the field correlator matrix. Our purpose is to rewrite all the relevant equations of motion in terms of slowly varying quantities. This is important in order to account for the contribution from high-frequency modes to the spectrum without affecting computational performance. The slow-roll approximation is not required to reproduce the main distinctive features in the power spectrum for each specific model of inflation.
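
    The Cholesky step at the core of this description is easy to illustrate; the 2×2 correlator matrix below is a made-up example, not a spectrum from any inflationary model:

    ```python
    import numpy as np

    # Toy two-field correlator matrix (symmetric positive definite)
    C = np.array([[2.0, 0.6],
                  [0.6, 1.5]])

    # Cholesky factor L with C = L L^T; evolving L rather than C keeps the
    # factorization exact and separates slowly varying amplitudes from the
    # rapidly oscillating parts of the correlator
    L = np.linalg.cholesky(C)
    ```

    Evolving the lower-triangular factor instead of the full matrix is what allows the equations of motion to be written in slowly varying quantities.
    
    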

  20. Configuration Analysis of the ERS Points in Large-Volume Metrology System

    PubMed Central

    Jin, Zhangjun; Yu, Cijun; Li, Jiangxiong; Ke, Yinglin

    2015-01-01

    In aircraft assembly, multiple laser trackers are used simultaneously to measure large-scale aircraft components. To combine the independent measurements, the transformation matrices between the laser trackers’ coordinate systems and the assembly coordinate system are calculated by measuring the enhanced reference system (ERS) points. This article aims to understand how the configuration of the ERS points affects the transformation matrix errors, and then to optimize the deployment of the ERS points to reduce those errors. To this end, an explicit model is derived to estimate the transformation matrix errors. The estimation model is verified by an experiment implemented on the factory floor. Based on the proposed model, a group of sensitivity coefficients is derived to evaluate the quality of a configuration of ERS points, and several typical configurations of the ERS points are analyzed in detail with the sensitivity coefficients. Finally, general guidance is established for the deployment of the ERS points in terms of the layout, the volume size, and the number of the ERS points, as well as the position and orientation of the assembly coordinate system. PMID:26402685
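
    Each tracker-to-assembly transformation matrix is estimated from matched ERS-point coordinates; a least-squares rigid-transform fit (Kabsch/SVD) is a standard way to do this, sketched here on invented point data:

    ```python
    import numpy as np

    def best_fit_transform(src, dst):
        """Least-squares rigid transform (R, t) with dst ≈ R @ src + t,
        computed from matched ERS-point coordinates via SVD (Kabsch)."""
        sc, dc = src.mean(axis=0), dst.mean(axis=0)
        H = (src - sc).T @ (dst - dc)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = dc - R @ sc
        return R, t

    rng = np.random.default_rng(1)
    ers_assembly = rng.uniform(-5, 5, (6, 3))        # ERS points, assembly frame
    theta = 0.3
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0, 0.0, 1.0]])
    t_true = np.array([0.5, -1.2, 2.0])
    ers_tracker = ers_assembly @ R_true.T + t_true   # same points, tracker frame
    R, t = best_fit_transform(ers_tracker, ers_assembly)
    ```

    With noisy measurements the fit residuals no longer vanish, and it is exactly their sensitivity to the ERS-point layout that the article's error model quantifies.
    
    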

  1. Combining points and lines in rectifying satellite images

    NASA Astrophysics Data System (ADS)

    Elaksher, Ahmed F.

    2017-09-01

    Rapid advances in remote sensing technologies have established the potential to gather accurate and reliable information about the Earth's surface using high-resolution satellite images. Remote sensing satellite images of less than one-meter pixel size are currently used in large-scale mapping. Rigorous photogrammetric equations are usually used to describe the relationship between image coordinates and ground coordinates. These equations require knowledge of the exterior and interior orientation parameters of the image, which might not be available. On the other hand, the parallel projection transformation can represent the mathematical relationship between the image-space and object-space coordinate systems, and it provides the required accuracy for large-scale mapping using fewer ground control features. This article investigates the differences between point-based and line-based parallel projection transformation models in rectifying satellite images of different resolutions. The point-based parallel projection transformation model and its extended form are presented, and the corresponding line-based forms are developed. Results showed that the RMS errors computed using the point- and line-based transformation models are equivalent and satisfy the requirements for large-scale mapping. The differences between the transformation parameters computed using the point- and line-based models are insignificant. The results also showed a high correlation between differences in ground elevation and the RMS error.

  2. Elucidating reactivity regimes in cyclopentane oxidation: Jet stirred reactor experiments, computational chemistry, and kinetic modeling

    DOE PAGES

    Al Rashidi, Mariam J.; Thion, Sebastien; Togbe, Casimir; ...

    2016-06-22

    This study is concerned with the identification and quantification of species generated during the combustion of cyclopentane in a jet stirred reactor (JSR). Experiments were carried out for temperatures between 740 and 1250 K, equivalence ratios from 0.5 to 3.0, and at an operating pressure of 10 atm. The fuel concentration was kept at 0.1% and the residence time of the fuel/O2/N2 mixture was maintained at 0.7 s. The reactant, product, and intermediate species concentration profiles were measured using gas chromatography and Fourier transform infrared spectroscopy. The concentration profiles of cyclopentane indicate inhibition of reactivity between 850 and 1000 K for φ=2.0 and φ=3.0. This behavior is interesting, as it has not been observed previously for other fuel molecules, cyclic or non-cyclic. A kinetic model including both low- and high-temperature reaction pathways was developed and used to simulate the JSR experiments. The pressure-dependent rate coefficients of all relevant reactions lying on the PES of cyclopentyl + O2, as well as the C-C and C-H scission reactions of the cyclopentyl radical, were calculated at the UCCSD(T)-F12b/cc-pVTZ-F12//M06-2X/6-311++G(d,p) level of theory. The simulations reproduced the unique reactivity trend of cyclopentane and the measured concentration profiles of intermediate and product species. Furthermore, sensitivity and reaction path analyses indicate that this reactivity trend may be attributed to differences in the reactivity of the allyl radical at different conditions, and it is highly sensitive to the C-C/C-H scission branching ratio of the cyclopentyl radical decomposition.

  3. Sparsity-based fast CGH generation using layer-based approach for 3D point cloud model

    NASA Astrophysics Data System (ADS)

    Kim, Hak Gu; Jeong, Hyunwook; Ro, Yong Man

    2017-03-01

    The computer-generated hologram (CGH) is becoming increasingly important for 3-D displays in various applications, including virtual reality. In CGH, holographic fringe patterns are generated by calculating them numerically on computer simulation systems. However, a heavy computational cost is required to calculate the complex amplitude on the CGH plane for all points of a 3D object. This paper proposes a new fast CGH generation method based on the sparsity of the CGH for a 3D point cloud model. The aim of the proposed method is to significantly reduce computational complexity while maintaining the quality of the holographic fringe patterns. To that end, we present a new layer-based approach for calculating the complex amplitude distribution on the CGH plane using a sparse FFT (sFFT). We observe that the CGH of a layer of a 3D object is sparse, so a dominant CGH can be rapidly generated from a small set of signals by sFFT. Experimental results have shown that the proposed method is one order of magnitude faster than recently reported fast CGH generation methods.
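
    The layer-based calculation can be sketched with plain angular-spectrum propagation per depth layer; the paper's speed-up comes from replacing the dense FFT below with a sparse FFT (sFFT), which this illustration omits, and the wavelength and pixel pitch are assumed values:

    ```python
    import numpy as np

    def layer_hologram(layers, wavelength=633e-9, pitch=8e-6, n=128):
        """Sum the hologram-plane fields of per-depth layers, each propagated
        with the angular-spectrum transfer function (dense FFT used here)."""
        fx = np.fft.fftfreq(n, d=pitch)
        FX, FY = np.meshgrid(fx, fx)
        k = 2 * np.pi / wavelength
        kz = np.sqrt(np.maximum(0.0, k**2 - (2*np.pi*FX)**2 - (2*np.pi*FY)**2))
        holo = np.zeros((n, n), dtype=complex)
        for field, z in layers:
            H = np.exp(1j * kz * z)          # propagation over distance z
            holo += np.fft.ifft2(np.fft.fft2(field) * H)
        return holo

    n = 128
    layer0 = np.zeros((n, n))
    layer0[n // 2, n // 2] = 1.0             # a single object point at depth 5 mm
    holo = layer_hologram([(layer0, 5e-3)], n=n)
    ```

    Because the transfer function has unit modulus, the propagation conserves energy; the sparsity the paper exploits is visible in how few Fourier coefficients of a single layer are significant.
    
    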

  4. Binary Colloidal Alloy Test-3 and 4: Critical Point

    NASA Technical Reports Server (NTRS)

    Weitz, David A.; Lu, Peter J.

    2007-01-01

    Binary Colloidal Alloy Test - 3 and 4: Critical Point (BCAT-3-4-CP) will determine phase separation rates and add needed points to the phase diagram of a model critical fluid system. Crewmembers photograph samples of polymer and colloidal particles (tiny nanoscale spheres suspended in liquid) that model liquid/gas phase changes. Results will help scientists develop fundamental physics concepts previously cloaked by the effects of gravity.

  5. Using Lunar Observations to Validate Pointing Accuracy and Geolocation, Detector Sensitivity Stability and Static Point Response of the CERES Instruments

    NASA Technical Reports Server (NTRS)

    Daniels, Janet L.; Smith, G. Louis; Priestley, Kory J.; Thomas, Susan

    2014-01-01

    Validation of in-orbit instrument performance depends on the stability of both the instrument and the calibration source. This paper describes a method using lunar observations, scanning near full moon, by the Clouds and the Earth's Radiant Energy System (CERES) instruments. The Moon offers an external source whose signal variance is predictable and non-degrading. From 2006 to present, these in-orbit observations have been standardized and compiled for Flight Models 1 and 2 aboard the Terra satellite, for Flight Models 3 and 4 aboard the Aqua satellite, and, beginning in 2012, for Flight Model 5 aboard Suomi-NPP. The instrument performance measures studied are detector sensitivity stability, pointing accuracy, and the static detector point response function. This validation method also shows trends per CERES data channel of 0.8% per decade or less for Flight Models 1-4. Using instrument gimbal data and the computed lunar position, the pointing error of each detector telescope and the accuracy and consistency of the alignment between the detectors can be determined. The maximum pointing error was 0.2 Deg. in azimuth and 0.17 Deg. in elevation, which corresponds to a geolocation error near nadir of 2.09 km. With the exception of one detector, all instruments were found to have consistent detector alignment from 2006 to present. All alignment error was within 0.1 Deg., with most detector telescopes showing a consistent alignment offset of less than 0.02 Deg.
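
    The quoted 2.09 km near-nadir geolocation error follows from small-angle geometry, assuming Terra's approximately 705 km orbit altitude (the altitude is our assumption; the abstract does not state it):

    ```python
    import math

    altitude_km = 705.0          # approximate Terra orbit altitude (assumption)
    elev_err_deg = 0.17          # maximum elevation pointing error from the text
    # Near nadir the ground displacement is altitude times the tangent of
    # the angular error, which for 0.17 deg lands close to the quoted 2.09 km
    ground_err_km = altitude_km * math.tan(math.radians(elev_err_deg))
    ```

    
    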

  6. The Effect of Stiffness Parameter on Mass Distribution in Heavy-Ion Induced Fission

    NASA Astrophysics Data System (ADS)

    Soheyli, Saeed; Khalil Khalili, Morteza; Ashrafi, Ghazaaleh

    2018-06-01

    The stiffness parameter of the composite system has been studied for several heavy-ion induced fission reactions without the contribution of non-compound-nucleus fission events. In this research, determination of the stiffness parameter is based on a comparison between experimental data on the mass widths of fission fragments and those predicted by statistical model treatments at the saddle and scission points. Analysis of the results shows that, for the induced fission of different targets by the same projectile, the stiffness parameter of the composite system decreases with increasing fissility parameter, as well as with increasing mass number of the compound nucleus. The parameter exhibits similar behavior for reactions of a given target induced by different projectiles. As expected, nearly the same stiffness values are obtained for different reactions leading to the same compound nucleus.

  7. Sediment Transport over a Dredge Pit, Sandy Point Southeast, west flank of the Mississippi River during Summer Upcoast Currents: a Coupled Wave, Current and Sediment Numerical Model

    NASA Astrophysics Data System (ADS)

    Chaichitehrani, N.; Li, C.; Xu, K.; Bentley, S. J.; Miner, M. D.

    2017-12-01

    Sandy Point Southeast, an elongated sand resource, was dredged in November 2012 to restore Pelican Island, Louisiana. Hydrodynamics and wave propagation patterns, along with fluvial sediments from the Mississippi River, influence the sediment and bottom boundary layer dynamics over Sandy Point. A state-of-the-art numerical model, Delft3D, was implemented to investigate current variations and wave transformation over Sandy Point, as well as the sediment transport pattern. The Delft3D FLOW and WAVE modules were coupled and validated using WAVCIS and NDBC data. The sediment transport model was run by introducing both bed and river sediments, consisting mainly of mud and a small fraction of sand, and was evaluated for surface sediment concentration using data derived from satellite images. The model results were used to study sediment dynamics and bottom boundary layer characteristics in the Sandy Point area during summer. Two contrasting bathymetric configurations, with and without the Sandy Point dredge pit, were used to conduct an experiment on the sediment and bottom boundary layer dynamics. Preliminary model results showed that the presence of the Sandy Point pit has a very limited effect on the hydrodynamics and wave pattern at the pit location. Sediments from the Mississippi River outlets, especially in the vicinity of the pit, are trapped in the pit under the easterly to northeasterly upcoast current that prevails in August. We also examined wave-induced sediment reworking and river-borne fluvial sediment over Sandy Point. Although wind-induced orbital velocity increases the bottom shear stress compared to times with no waves, the relatively small wave heights (lower than 1.5 meters) over the deepest part of the pit (about 20 meters) cause little bottom sediment reworking during this period. The results showed that in the summertime, river water is the more likely source of sedimentation in the pit.

  8. The Engelbourg's ruins: from 3D TLS point cloud acquisition to 3D virtual and historic models

    NASA Astrophysics Data System (ADS)

    Koehl, Mathieu; Berger, Solveig; Nobile, Sylvain

    2014-05-01

    The Castle of Engelbourg was built at the beginning of the 13th century at the top of the Schlossberg. It is situated on the territory of the municipality of Thann (France), at the crossroads of Alsace and Lorraine, and dominates the outlet of the valley of the Thur. Its strategic position was one of the causes of its repeated destruction during the 17th century, and Louis XIV sealed its fate by ordering its demolition in 1673. Today only a few vestiges remain, among them a section of the main tower, about 7 m in diameter and 4 m wide, lying on its side, a feature unique in the regional castral landscape. Visible from the valley, it was named "the Eye of the Witch" and became a key attraction of the region. The site, which extends over approximately one hectare, has for several years been the object of numerous archaeological studies and is today at the heart of a project to enhance the vestiges. A key objective, among the numerous planned works, was to produce a 3D model of the site in its current state, in other words an as-built virtual model, exploitable from a cultural and touristic point of view as well as by scientists in archaeological research. The team of the ICube/INSA lab was responsible for producing this model, from the acquisition of the data to the delivery of the virtual model, using 3D TLS and topographic surveying methods. It was also planned to integrate into this 3D model 2D archive data stemming from series of former excavations. The objectives of this project were the following: • Acquisition of 3D digital data of the site and 3D modelling • Digitization of the 2D archaeological data and integration into the 3D model • Implementation of a database connected to the 3D model • Virtual visit of the site The obtained results allowed us to visualize every 3D object individually, in several forms (point clouds, 3D meshed objects and models, etc.) and at several levels of detail

  9. Earth-Moon Libration Point Orbit Stationkeeping: Theory, Modeling and Operations

    NASA Technical Reports Server (NTRS)

    Folta, David C.; Pavlak, Thomas A.; Haapala, Amanda F.; Howell, Kathleen C.; Woodard, Mark A.

    2013-01-01

    Collinear Earth-Moon libration points have emerged as locations with immediate applications. Libration point orbits are inherently unstable and must be maintained regularly, which constrains operations and maneuver locations. Stationkeeping is challenging due to the relatively short time scales for divergence, the effects of the large orbital eccentricity of the secondary body, and third-body perturbations. Using the Acceleration, Reconnection, Turbulence and Electrodynamics of the Moon's Interaction with the Sun (ARTEMIS) mission orbit as a platform, the fundamental behavior of the trajectories is explored using Poincaré maps in the circular restricted three-body problem. Operational stationkeeping results obtained using the Optimal Continuation Strategy are presented and compared to orbit stability information generated from mode analysis based in dynamical systems theory.

  10. N-point functions in rolling tachyon background

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jokela, Niko; Keski-Vakkuri, Esko; Department of Physics, P.O. Box 64, FIN-00014, University of Helsinki

    2009-04-15

    We study n-point boundary correlation functions in timelike boundary Liouville theory, relevant for open string multiproduction by a decaying unstable D-brane. We give an exact result for the one-point function of the tachyon vertex operator and show that it is consistent with a previously proposed relation to a conserved charge in string theory. We also discuss when the one-point amplitude vanishes. Using a straightforward perturbative expansion, we find an explicit expression for a tachyon n-point amplitude for all n; however, the result is still a toy model. The calculation uses a new asymptotic approximation for Toeplitz determinants, derived by relating the system to a Dyson gas at finite temperature.

  11. Radar Image Simulation: Validation of the Point Scattering Model. Volume 1

    DTIC Science & Technology

    1977-09-01

    Volume I reports the work and results with technical details deferred to the appendices. Volume II is a collection of appendices containing the individual... separation between successive points on the ground. "Look-direction" is a very important concept to imaging radars. It means, given a particular... point, we have watched as the radar transmitted a pulse of energy to the ground. We observed the interaction of this pulse with the ground. We followed

  12. PynPoint code for exoplanet imaging

    NASA Astrophysics Data System (ADS)

    Amara, A.; Quanz, S. P.; Akeret, J.

    2015-04-01

    We announce the public release of PynPoint, a Python package we have developed for analysing exoplanet data taken with the angular differential imaging observing technique. In particular, PynPoint is designed to model the point spread function of the central star and to subtract its flux contribution to reveal nearby faint companion planets. The current version of the package performs this correction using a principal component analysis method to build a basis set for modelling the point spread function of the observations. We demonstrate the performance of the package by reanalysing publicly available data on the exoplanet β Pictoris b, which consist of close to 24,000 individual image frames. We show that PynPoint is able to analyse these typical data in roughly 1.5 min on a Mac Pro when the number of images is reduced by co-adding in sets of 5. The main computational work, the calculation of the singular value decomposition, parallelises well as a result of a reliance on the SciPy and NumPy packages. For this calculation the peak memory load is 6 GB, which can be run comfortably on most workstations. A simpler calculation, co-adding over 50, takes 3 s with a peak memory usage of 600 MB and can be performed easily on a laptop. In developing the package we have modularised the code so that we will be able to extend functionality in future releases, through the inclusion of more modules, without affecting the user's application programming interface. We distribute the PynPoint package under the GPLv3 licence through the central PyPI server, and the documentation is available online (http://pynpoint.ethz.ch).
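
    The principal-component correction can be sketched in a few lines of NumPy; this is a generic PSF-subtraction illustration on synthetic frames, not PynPoint's actual API:

    ```python
    import numpy as np

    def psf_subtract(frames, n_modes=5):
        """PCA-style PSF subtraction: build a basis from the frame stack with
        an SVD, project each frame onto the first n_modes components, and
        subtract that model of the stellar point spread function."""
        X = frames.reshape(len(frames), -1)
        X = X - X.mean(axis=0)                  # remove the mean frame
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        basis = Vt[:n_modes]                    # principal components
        model = (X @ basis.T) @ basis           # projection onto the basis
        return (X - model).reshape(frames.shape)

    rng = np.random.default_rng(2)
    n, size = 40, 32
    yy, xx = np.mgrid[:size, :size]
    psf = np.exp(-((xx - 16) ** 2 + (yy - 16) ** 2) / 20.0)   # static stellar PSF
    frames = psf[None] * rng.uniform(0.9, 1.1, (n, 1, 1))     # varying brightness
    residual = psf_subtract(frames, n_modes=3)
    ```

    On these purely rank-one synthetic frames the residual vanishes; on real angular-differential-imaging data, a faint companion rotating through the field survives the subtraction.
    
    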

  13. Modelling plume dispersion pattern from a point source using spatial auto-correlational analysis

    NASA Astrophysics Data System (ADS)

    Ujoh, F.; Kwabe, D.

    2014-02-01

    The main objective of the study is to estimate the rate and model the pattern of plume rise from Dangote Cement Plc. A handheld Garmin GPS was employed to collect coordinates at one-kilometre intervals from the centre of the factory out to 10 kilometres. The plume rate was estimated using the Gaussian model, while kriging in ArcGIS was adopted for modelling the pattern of plume dispersion over a 10-kilometre radius around the factory. An ANOVA test was applied for statistical analysis of the plume coefficients. The results indicate that plume dispersion is generally high, with the highest values recorded for atmospheric stability classes A and B and the lowest values for classes F and E. The variograms derived from the kriging reveal that the pattern of plume dispersion is outwardly radial and omni-directional. With the exception of 3 stability sub-classes (DH, EH and FH) out of a total of 12, the 24-hour average of particulate matter (PM10 and PM2.5) within the study area is far higher (highest value 21392.3) than the safety limit of 150-230 ug/m3 prescribed by the 2006 WHO guidelines. This indicates the presence of respirable and non-respirable pollutants that create poor ambient air quality. The study concludes that geospatial technology can be adopted for modelling the dispersion of pollutants from a point source, and recommends ameliorative measures to reduce the rate of plume emission at the factory.
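
    The Gaussian model referred to above has a standard closed form for a continuous point source with ground reflection; the emission rate, wind speed, stack height, and dispersion widths below are illustrative, not values from the study:

    ```python
    import math

    def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
        """Ground-reflected Gaussian plume concentration at crosswind offset y
        and height z: Q emission rate (g/s), u wind speed (m/s), H effective
        stack height (m); sigma_y and sigma_z (m) are the dispersion widths
        evaluated at the downwind distance of interest."""
        lateral = math.exp(-y**2 / (2 * sigma_y**2))
        vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2))
                    + math.exp(-(z + H)**2 / (2 * sigma_z**2)))
        return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

    # Ground-level concentration on the plume centreline vs. 100 m off-axis
    c_centre = gaussian_plume(Q=100.0, u=5.0, y=0.0, z=0.0, H=50.0,
                              sigma_y=80.0, sigma_z=40.0)
    c_offaxis = gaussian_plume(Q=100.0, u=5.0, y=100.0, z=0.0, H=50.0,
                               sigma_y=80.0, sigma_z=40.0)
    ```

    The dependence of sigma_y and sigma_z on downwind distance is what the Pasquill stability classes (A through F above) parameterize.
    
    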

  14. Turning Points: Priorities for Teacher Education in a Democracy

    ERIC Educational Resources Information Center

    Romano, Rosalie M.

    2009-01-01

    Every generation has its moment, some turning point that will mark its place in the historical record. Such points provide the direction of our history and our future. Turning points are, characteristically, times of turmoil based on a fundamental change in models or events--what Thomas Kuhn called a "paradigm shift." In terms of a democratic…

  15. Sampled control stability of the ESA instrument pointing system

    NASA Astrophysics Data System (ADS)

    Thieme, G.; Rogers, P.; Sciacovelli, D.

    Stability analysis and simulation results are presented for the ESA Instrument Pointing System (IPS) that is to be used in Spacelab's second launch. Of the two IPS plant dynamic models used in the ESA and NASA activities, one is based on six interconnected rigid bodies that represent the IPS and its payload, while the other follows the NASA practice of defining an IPS-Spacelab 2 plant configuration through a structural finite element model, which is then used to generate modal data for various pointing directions. In both cases, the IPS dynamic plant model is truncated, then discretized at the sampling frequency and interfaced to a PID-based control law. A stability analysis has been carried out in the discrete domain for various instrument pointing directions, taking into account suitable parameter variation ranges. A number of time simulations are presented.
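
    A sampled PID control law of the kind described can be sketched by closing a discrete loop around a double-integrator pointing axis; the gains, sample time, and plant are illustrative, not the IPS design values:

    ```python
    def make_pid(kp, ki, kd, dt):
        """Discrete PID: rectangular integral, backward-difference derivative."""
        state = {"i": 0.0, "e_prev": 0.0}
        def step(error):
            state["i"] += error * dt
            d = (error - state["e_prev"]) / dt
            state["e_prev"] = error
            return kp * error + ki * state["i"] + kd * d
        return step

    # Regulate a rigid double-integrator axis (torque -> angle) to zero offset
    dt = 0.01                               # 100 Hz sampling
    pid = make_pid(kp=50.0, ki=100.0, kd=10.0, dt=dt)
    theta, omega = 1.0, 0.0                 # initial pointing offset (rad), rate
    for _ in range(2000):                   # 20 s of closed-loop simulation
        u = pid(-theta)                     # control from the sampled error
        omega += u * dt                     # Euler integration of the plant
        theta += omega * dt
    ```

    Stability analysis of the real system works on the z-domain transfer of the truncated, discretized plant in series with this law, rather than on time simulation alone.
    
    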

  16. Collinear cluster tri-partition: Kinematics constraints and stability of collinearity

    NASA Astrophysics Data System (ADS)

    Holmvall, P.; Köster, U.; Heinz, A.; Nilsson, T.

    2017-01-01

    Background: A new mode of nuclear fission has been proposed by the FOBOS Collaboration, called collinear cluster tri-partition (CCT), which suggests that three heavy fission fragments can be emitted perfectly collinearly in low-energy fission. This claim is based on indirect observations via missing-energy events using the 2v2E method. This proposed CCT seems to be an extraordinary new aspect of nuclear fission. It is surprising that CCT escaped observation for so long given the relatively high reported yield of roughly 0.5% relative to binary fission. These claims call for an independent verification with a different experimental technique. Purpose: Verification experiments based on direct observation of CCT fragments with fission-fragment spectrometers require guidance with respect to the allowed kinetic-energy range, which we present in this paper. Furthermore, we discuss corresponding model calculations which, if CCT is found in such verification experiments, could indicate how the breakups proceed. Since CCT refers to collinear emission, we also study the intrinsic stability of collinearity. Methods: Three different decay models are used that together span the timescales of three-body fission. These models are used to calculate the possible kinetic-energy ranges of CCT fragments by varying fragment mass splits, excitation energies, neutron multiplicities, and scission-point configurations. Calculations are presented for the systems 235U(nth,f) and 252Cf(sf), and the fission fragments previously reported for CCT, namely isotopes of the elements Ni, Si, Ca, and Sn. In addition, we use semiclassical trajectory calculations with a Monte Carlo method to study the intrinsic stability of collinearity. Results: CCT has a high net Q value but, in a sequential decay, the intermediate steps are energetically and geometrically unfavorable or even forbidden. Moreover, perfect collinearity is extremely unstable and is broken by the slightest perturbation.

  17. Gravity Duals of Lifshitz-Like Fixed Points

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kachru, Shamit; /Stanford U., Phys. Dept. /SLAC; Liu, Xiao

    2008-11-05

    We find candidate macroscopic gravity duals for scale-invariant but non-Lorentz-invariant fixed points, which do not have particle number as a conserved quantity. We compute two-point correlation functions which exhibit novel behavior relative to their AdS counterparts, and find holographic renormalization group flows to conformal field theories. Our theories are characterized by a dynamical critical exponent z, which governs the anisotropy between spatial and temporal scaling t → λ^z t, x → λ x; we focus on the case with z = 2. Such theories describe multicritical points in certain magnetic materials and liquid crystals, and have been shown to arise at quantum critical points in toy models of the cuprate superconductors. This work can be considered a small step towards making useful dual descriptions of such critical points.

  18. Blood pressure long term regulation: A neural network model of the set point development

    PubMed Central

    2011-01-01

    Background: The notion of the nucleus tractus solitarius (NTS) as a comparator evaluating the error signal between its rostral neural structures (RNS) and the cardiovascular receptor afferents into it has recently been presented. From this perspective, stress can cause hypertension via set point changes, so offering an answer to an old question. Even though the local blood flow to tissues is influenced by circulating vasoactive hormones and also by local factors, there is still significant sympathetic control. It is well established that the state of maturation of sympathetic innervation of blood vessels at birth varies across animal species, and that it takes place mostly during the postnatal period. During ontogeny, chemoreceptors are functional; they discharge when the partial pressures of oxygen and carbon dioxide in the arterial blood are not normal. Methods: The model is a simple, biologically plausible adaptive neural network that simulates the development of sympathetic nervous control. It is hypothesized that during ontogeny, from the RNS afferents to the NTS, the optimal level of each sympathetic efferent discharge is learned through the chemoreceptors' feedback. Its mean discharge leads to normal oxygen and carbon dioxide levels in each tissue. Thus, the sympathetic efferent discharge settles at the optimal level if, despite maximal drift, the local blood flow is compensated for by autoregulation. Such an optimal level produces minimum chemoreceptor output, which must be maintained by the nervous system. Since blood flow is controlled by arterial blood pressure, the long-term mean level is stabilized to regulate oxygen and carbon dioxide levels. After development, the cardiopulmonary reflexes play an important role in controlling efferent sympathetic nerve activity to the kidneys and modulating sodium and water excretion. Results: Starting from fixed RNS afferents to the NTS and random synaptic weight values, the sympathetic efferents converged to the optimal values

  19. Acceleration of saddle-point searches with machine learning.

    PubMed

    Peterson, Andrew A

    2016-08-21

    In atomistic simulations, the location of the saddle point on the potential-energy surface (PES) gives important information on transitions between local minima, for example, via transition-state theory. However, the search for saddle points often involves hundreds or thousands of ab initio force calls, which are typically all done at full accuracy. This results in the vast majority of the computational effort being spent calculating the electronic structure of states not important to the researcher, and very little time performing the calculation of the saddle-point state itself. In this work, we describe how machine learning (ML) can reduce the number of intermediate ab initio calculations needed to locate saddle points. Since machine-learning models can learn from, and thus mimic, atomistic simulations, the saddle-point search can be conducted rapidly in the machine-learning representation. The saddle-point prediction can then be verified by an ab initio calculation; if it is incorrect, this strategy has identified a region of the PES where the machine-learning representation has insufficient training data. When these training data are used to improve the machine-learning model, the estimates greatly improve. This approach can be systematized, and in two simple example problems we demonstrate a dramatic reduction in the number of ab initio force calls. We expect that this approach and future refinements will greatly accelerate searches for saddle points, as well as other searches on the potential-energy surface, as machine-learning methods see greater adoption by the atomistics community.
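
    The accelerate-then-verify loop can be sketched in one dimension, with a polynomial fit standing in for the ML surrogate and an analytic double well standing in for the ab initio call. Everything here is an illustrative assumption (the double-well form, the tolerance, the surrogate family), not the paper's method.

```python
import numpy as np

# Hedged sketch of the verify-and-retrain loop: a cheap surrogate (a
# polynomial fit, standing in for an ML potential) locates the barrier top
# along a 1D reaction coordinate; each guess is verified with the "expensive"
# function, and wrong guesses become new training data.

def expensive_energy(x):
    # stand-in for an ab initio energy call: double well, barrier near x = 0
    return (x**2 - 1.0)**2 + 0.3 * x

def find_barrier_top(n_init=5, tol=1e-3, max_iter=20):
    xs = list(np.linspace(-1.0, 1.0, n_init))      # sparse initial training set
    es = [expensive_energy(x) for x in xs]
    for _ in range(max_iter):
        coeffs = np.polyfit(xs, es, deg=min(len(xs) - 1, 6))
        grid = np.linspace(-1.0, 1.0, 2001)
        x_guess = grid[np.argmax(np.polyval(coeffs, grid))]  # search on surrogate
        e_true = expensive_energy(x_guess)                   # verification call
        if abs(np.polyval(coeffs, x_guess) - e_true) < tol:
            return x_guess, e_true, len(xs)        # surrogate agrees: done
        xs.append(x_guess)                         # disagreement: retrain here
        es.append(e_true)
    return x_guess, e_true, len(xs)

x_top, e_top, n_calls = find_barrier_top()
```

    The count of "expensive" calls stays small because most of the search happens in the surrogate representation, which is the mechanism the abstract describes.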

  20. Understanding Learning: Assessment in the Turning Points School

    ERIC Educational Resources Information Center

    Center for Collaborative Education, 2005

    2005-01-01

    Turning Points helps middle schools create challenging, caring, and equitable learning communities that meet the needs of young adolescents as they reach the "turning point" between childhood and adulthood. Based on more than a decade of research and experience, this comprehensive school reform model focuses on improving student learning through…

  1. Thermal Stability of Fluorinated Polydienes Synthesized by Addition of Difluorocarbene

    DTIC Science & Technology

    2012-01-01

    Pyrolysis of the fluorinated polydienes proceeds through a two-stage decomposition involving chain scission, crosslinking, dehydrogenation, and dehalogenation, leading to graphite-like residues, whereas the polydiene precursors decompose completely under the same conditions.

  2. Reliable four-point flexion test and model for die-to-wafer direct bonding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tabata, T., E-mail: toshiyuki.tabata@cea.fr; Sanchez, L.; Fournel, F.

    2015-07-07

    For many years, wafer-to-wafer (W2W) direct bonding has been extensively developed, particularly in terms of bonding energy measurement and comprehension of the bonding mechanism. Nowadays, die-to-wafer (D2W) direct bonding has gained significant attention, for instance, in photonics and microelectromechanics, which presupposes controlled and reliable fabrication processes. Whatever the bonded materials may be, it is not obvious whether bonded D2W structures have the same bonding strength as bonded W2W ones, because of possible edge effects of the dies. For that reason, it has become necessary to develop a bonding energy measurement technique suitable for D2W structures. In this paper, both D2W- and W2W-type standard SiO2-to-SiO2 direct bonding samples are fabricated from the same full-wafer bonding. Modifications of the four-point flexion test (4PT) technique and its application to measuring D2W direct bonding energies are reported. A comparison between the modified 4PT and double-cantilever beam techniques is then drawn, also considering possible impacts of the measurement conditions, such as water stress corrosion at the debonding interface and friction error at the loading contact points. Finally, the reliability of the modified technique and of a new model established for measuring D2W direct bonding energies is demonstrated.

  3. Change point detection of the Persian Gulf sea surface temperature

    NASA Astrophysics Data System (ADS)

    Shirvani, A.

    2017-01-01

    In this study, the Student's t parametric and Mann-Whitney nonparametric change point models (CPMs) were applied to detect a change point in the annual Persian Gulf sea surface temperature anomalies (PGSSTA) time series for the period 1951-2013. The PGSSTA time series, which were serially correlated, were transformed to produce an uncorrelated, pre-whitened time series. The pre-whitened PGSSTA time series were used as the input to the change point models. Both the parametric and nonparametric CPMs estimated the change point in the PGSSTA at 1992. The PGSSTA follow a normal distribution both before and after 1992, but with a different mean value after the change point. The estimated slope of the linear trend in the PGSSTA time series for the period 1951-1992 was negative, whereas it was positive after the detected change point. Unlike for the PGSSTA, the applied CPMs suggested no change point in the Niño3.4SSTA time series.
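
    The core idea of a mean-shift CPM can be illustrated with a generic two-sample scan: score every candidate split with a t-type statistic and report the best-scoring split. This is a hedged illustration with synthetic data, not the specific Student's t or Mann-Whitney implementations used in the study.

```python
import numpy as np

# Hedged illustration of a mean-shift change point scan: each candidate split
# is scored with a pooled two-sample t-like statistic, and the split that
# maximizes the statistic is reported as the change point.

def detect_change_point(x, min_seg=5):
    x = np.asarray(x, dtype=float)
    best_k, best_stat = None, -np.inf
    for k in range(min_seg, len(x) - min_seg):
        a, b = x[:k], x[k:]
        pooled = np.sqrt(((len(a) - 1) * a.var(ddof=1)
                          + (len(b) - 1) * b.var(ddof=1)) / (len(x) - 2))
        t = abs(a.mean() - b.mean()) / (pooled * np.sqrt(1/len(a) + 1/len(b)))
        if t > best_stat:
            best_k, best_stat = k, t
    return best_k, best_stat

rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(0.0, 0.5, 42),    # pre-change regime
                         rng.normal(1.0, 0.5, 21)])   # shifted-mean regime
k, stat = detect_change_point(series)   # expect a split near index 42
```

    On a real, serially correlated series such as the PGSSTA, pre-whitening (as done in the study) is needed first, since autocorrelation inflates this kind of statistic.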

  4. Assimilation of concentration measurements for retrieving multiple point releases in atmosphere: A least-squares approach to inverse modelling

    NASA Astrophysics Data System (ADS)

    Singh, Sarvesh Kumar; Rani, Raj

    2015-10-01

    The study addresses the identification of multiple point sources, emitting the same tracer, from a limited set of merged concentration measurements. The identification here refers to the estimation of the locations and strengths of a known number of simultaneous point releases. The source-receptor relationship is described in the framework of adjoint modelling by using an analytical Gaussian dispersion model. A least-squares minimization framework, free from any initialization of the release parameters (locations and strengths), is presented to estimate the release parameters. It utilizes the distributed source information observable from the given monitoring design and number of measurements. The technique leads to an exact retrieval of the true release parameters when the measurements are noise free and exactly described by the dispersion model. The inversion algorithm is evaluated using real data from multiple (two, three and four) releases conducted during the Fusion Field Trials in September 2007 at Dugway Proving Ground, Utah. The release locations are retrieved, on average, within 25-45 m of the true sources, with the distance from the retrieved to the true source ranging from 0 to 130 m. The release strengths are also estimated within a factor of three of the true release rates. The average deviations in the retrieval of source locations are relatively large in the two-release trials in comparison with the three- and four-release trials.
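
    The linear part of such an inversion is easy to sketch: once candidate source locations are fixed, receptor concentrations are linear in the release strengths, so the strengths follow from ordinary least squares, and noise-free data are retrieved exactly. The Gaussian kernel, geometry, and strengths below are illustrative stand-ins, not the study's dispersion model or field data.

```python
import numpy as np

# Hedged sketch of the linear strength-retrieval step: with source locations
# fixed, measured concentrations c = A q are linear in the strengths q, so
# least squares recovers q exactly for noise-free synthetic data. The
# isotropic Gaussian kernel is a stand-in for an analytical dispersion model.

def kernel(receptor, source, sigma=30.0):
    d = np.linalg.norm(np.asarray(receptor) - np.asarray(source))
    return np.exp(-0.5 * (d / sigma) ** 2)

receptors = [(x, y) for x in (0, 50, 100, 150) for y in (0, 50, 100)]
sources = [(20.0, 30.0), (120.0, 80.0)]        # assumed true locations
true_q = np.array([5.0, 2.0])                  # assumed true strengths

A = np.array([[kernel(r, s) for s in sources] for r in receptors])
c = A @ true_q                                 # noise-free "measurements"
q_hat, *_ = np.linalg.lstsq(A, c, rcond=None)  # exact retrieval expected
```

    The harder, nonlinear part of the study's method is searching over the locations themselves; here that search is replaced by assuming the candidate locations are known.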

  5. Modeling of Laser Material Interactions

    NASA Astrophysics Data System (ADS)

    Garrison, Barbara

    2009-03-01

    Irradiation of a substrate by laser light initiates the complex chemical and physical process of ablation, in which large amounts of material are removed. Ablation has been successfully used in techniques such as nanolithography and LASIK surgery; however, a fundamental understanding of the process is necessary in order to further optimize and develop applications. To accurately describe the ablation phenomenon, a model must take into account the multitude of events which occur when a laser irradiates a target, including electronic excitation, bond cleavage, desorption of small molecules, ongoing chemical reactions, propagation of stress waves, and bulk ejection of material. A coarse-grained molecular dynamics (MD) protocol with an embedded Monte Carlo (MC) scheme has been developed which effectively addresses each of these events during the simulation. Using this simulation technique, thermal and chemical excitation channels are separately studied with a model polymethyl methacrylate system. The effects of the irradiation parameters and reaction pathways on the process dynamics are investigated. The mechanism of ablation for thermal processes is governed by a critical number of bond breaks following the deposition of energy. For the case where an absorbed photon directly causes a bond scission, ablation occurs following the rapid chemical decomposition of material. The study provides insight into the influence of thermal and chemical processes in polymethyl methacrylate and facilitates greater understanding of the complex nature of polymer ablation.

  6. Processing Uav and LIDAR Point Clouds in Grass GIS

    NASA Astrophysics Data System (ADS)

    Petras, V.; Petrasova, A.; Jeziorska, J.; Mitasova, H.

    2016-06-01

    Today's methods of acquiring Earth surface data, namely lidar and unmanned aerial vehicle (UAV) imagery, non-selectively collect or generate large amounts of points. Point clouds from different sources vary in properties such as number of returns, density, or quality. We present a set of tools with applications for different types of point clouds obtained by a lidar scanner, the structure-from-motion technique (SfM), and a low-cost 3D scanner. To take advantage of the vertical structure of multiple-return lidar point clouds, we demonstrate tools that process them using 3D raster techniques, which allow, for example, the development of custom vegetation classification methods. Dense point clouds obtained from UAV imagery, often containing redundant points, can be decimated using various techniques before further processing. We implemented and compared several decimation techniques with regard to their performance and the resulting digital surface model (DSM). Finally, we describe the processing of a point cloud from a low-cost 3D scanner, namely the Microsoft Kinect, and its application for interaction with physical models. All the presented tools are open source and integrated into GRASS GIS, a multi-purpose open source GIS with remote sensing capabilities. The tools integrate with other open source projects, specifically the Point Data Abstraction Library (PDAL), the Point Cloud Library (PCL), and the OpenKinect libfreenect2 library, to benefit from the open source point cloud ecosystem. The implementation in GRASS GIS ensures long-term maintenance and reproducibility not only by the scientific community but also by the original authors themselves.
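
    One of the simplest decimation techniques of the kind compared above is grid (bin) decimation: keep a single representative point per cell of a regular grid. The sketch below is a generic illustration of that idea, not the GRASS GIS implementation; cell size and the "first point wins" rule are arbitrary choices.

```python
import numpy as np

# Hedged sketch of grid/bin decimation: each point is assigned to a cubic
# grid cell, and only the first point encountered in each cell is kept.

def grid_decimate(points, cell=1.0):
    pts = np.asarray(points, dtype=float)
    keys = np.floor(pts / cell).astype(int)    # integer cell index per point
    seen, keep = set(), []
    for i, key in enumerate(map(tuple, keys)):
        if key not in seen:                    # first point wins in each cell
            seen.add(key)
            keep.append(i)
    return pts[keep]

rng = np.random.default_rng(1)
cloud = rng.uniform(0, 10, size=(5000, 3))     # dense synthetic cloud
thinned = grid_decimate(cloud, cell=2.0)       # at most 5**3 = 125 points
```

    Variants keep the centroid, the lowest, or the highest point per cell instead of the first one; the choice affects the resulting DSM, which is exactly the comparison the abstract describes.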

  7. Tropical cyclones over the North Indian Ocean: experiments with the high-resolution global icosahedral grid point model GME

    NASA Astrophysics Data System (ADS)

    Kumkar, Yogesh V.; Sen, P. N.; Chaudhari, Hemankumar S.; Oh, Jai-Ho

    2018-02-01

    In this paper, an attempt has been made to conduct a numerical experiment with the high-resolution global model GME to predict the tropical storms in the North Indian Ocean during the year 2007. Numerical integrations using the icosahedral-hexagonal grid point global model GME were performed to study the evolution of the tropical cyclones Akash, Gonu, Yemyin and Sidr over the North Indian Ocean during 2007. The GME forecast underestimates cyclone intensity, but the model captures the evolution of the intensity, especially the weakening during landfall, which is primarily due to the cutoff of the water-vapor supply in the boundary layer as cyclones approach the coastal region. A series of numerical simulations of tropical cyclones has been performed with GME to examine the model's capability in predicting the intensity and track of the cyclones. The model performance is evaluated by calculating root-mean-square cyclone track errors.

  8. An equilibrium-point model of electromyographic patterns during single-joint movements based on experimentally reconstructed control signals.

    PubMed

    Latash, M L; Goodman, S R

    1994-01-01

    The purpose of this work has been to develop a model of electromyographic (EMG) patterns during single-joint movements based on a version of the equilibrium-point hypothesis, a method for experimental reconstruction of the joint compliant characteristics, the dual-strategy hypothesis, and a kinematic model of movement trajectory. EMG patterns are considered emergent properties of hypothetical control patterns that are equally affected by the control signals and by peripheral feedback reflecting the actual movement trajectory. A computer model generated the EMG patterns based on simulated movement kinematics and hypothetical control signals derived from the reconstructed joint compliant characteristics. The model predictions have been compared to published recordings of movement kinematics and EMG patterns in a variety of movement conditions, including movements over different distances, at different speeds, against different known inertial loads, and in conditions of a possible unexpected decrease in the inertial load. Changes in task parameters within the model led to simulated EMG patterns qualitatively similar to the experimentally recorded ones. The model's predictive power compares favourably with that of existing models of EMG patterns. Copyright © 1994. Published by Elsevier Ltd.
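
    The equilibrium-point idea at the heart of the model can be caricatured with a damped spring whose equilibrium angle is shifted by a ramped control signal; the joint then relaxes toward the moving equilibrium. This is a hedged toy sketch under assumed parameters, not the article's reconstructed compliant characteristics or its EMG-generation rules.

```python
import numpy as np

# Hedged toy sketch of the equilibrium-point hypothesis: the controller ramps
# an equilibrium angle lambda(t); the joint behaves like a damped torsional
# spring pulled toward it. Stiffness, damping, inertia, and the ramp duration
# are illustrative numbers, not values from the article.

def simulate(t_end=1.0, dt=1e-3, k=30.0, b=4.0, inertia=0.1,
             lam_final=1.0, ramp=0.3):
    n = int(t_end / dt)
    theta = np.zeros(n)            # joint angle trajectory
    omega = 0.0                    # angular velocity
    for i in range(1, n):
        t = i * dt
        lam = lam_final * min(t / ramp, 1.0)        # ramped control signal
        torque = -k * (theta[i - 1] - lam) - b * omega
        omega += (torque / inertia) * dt            # explicit Euler step
        theta[i] = theta[i - 1] + omega * dt
    return theta

traj = simulate()   # relaxes toward the final equilibrium angle of 1.0
```

    In equilibrium-point models, signals analogous to EMG can be read off the instantaneous spring torque rather than being prescribed directly, which is the sense in which EMG patterns are "emergent" in the abstract.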

  9. Performance measurement of PSF modeling reconstruction (True X) on Siemens Biograph TruePoint TrueV PET/CT.

    PubMed

    Lee, Young Sub; Kim, Jin Su; Kim, Kyeong Min; Kang, Joo Hyun; Lim, Sang Moo; Kim, Hee-Joung

    2014-05-01

    The Siemens Biograph TruePoint TrueV (B-TPTV) positron emission tomography (PET) scanner performs 3D PET reconstruction using a system matrix with point spread function (PSF) modeling (called the True X reconstruction). PET resolution was dramatically improved with the True X method. In this study, we assessed the spatial resolution and image quality of a B-TPTV PET scanner. In addition, we assessed the feasibility of animal imaging with the B-TPTV PET and compared it with a microPET R4 scanner. Spatial resolution was measured at the center and at 8 cm offset from the center in the transverse plane with warm background activity. The True X, ordered-subset expectation maximization (OSEM) without PSF modeling, and filtered back-projection (FBP) reconstruction methods were used. Percent contrast (% contrast) and percent background variability (% BV) were assessed according to NEMA NU2-2007. The recovery coefficient (RC), non-uniformity, spill-over ratio (SOR), and PET imaging of the Micro Deluxe Phantom were assessed to compare the image quality of the B-TPTV PET with that of the microPET R4. When True X reconstruction was used, spatial resolution was <3.65 mm with warm background activity. The % contrast and % BV with True X reconstruction were higher than those with the OSEM reconstruction algorithm without PSF modeling. In addition, the RC with True X reconstruction was higher than that with the FBP method and with OSEM without PSF modeling on the microPET R4. The non-uniformity with True X reconstruction was higher than that with FBP and OSEM without PSF modeling on the microPET R4. The SOR with True X reconstruction was better than that with FBP or OSEM without PSF modeling on the microPET R4. This study assessed the performance of the True X reconstruction. Spatial resolution with True X reconstruction was improved by 45%, and the % contrast was significantly improved compared with the conventional OSEM reconstruction without PSF modeling. The noise level was higher than

  10. Estimation of dew point temperature using neuro-fuzzy and neural network techniques

    NASA Astrophysics Data System (ADS)

    Kisi, Ozgur; Kim, Sungwon; Shiri, Jalal

    2013-11-01

    This study investigates the ability of two different artificial neural network (ANN) models, the generalized regression neural networks model (GRNNM) and the Kohonen self-organizing feature maps neural networks model (KSOFM), and two different adaptive neuro-fuzzy inference system (ANFIS) models, an ANFIS model with subtractive-clustering identification (ANFIS-SC) and an ANFIS model with grid-partitioning identification (ANFIS-GP), for estimating daily dew point temperature. The climatic data consisted of 8 years of daily records of air temperature, sunshine hours, wind speed, saturation vapor pressure, relative humidity, and dew point temperature from three weather stations, Daegu, Pohang, and Ulsan, in South Korea. The estimates of the ANN and ANFIS models were compared according to three different statistics: root mean square error, mean absolute error, and the coefficient of determination. The comparison revealed that the ANFIS-SC, ANFIS-GP, and GRNNM models showed almost the same accuracy and performed better than the KSOFM model. The results also indicated that sunshine hours, wind speed, and saturation vapor pressure have little effect on dew point temperature, which could be successfully estimated from the mean air temperature (Tmean) and relative humidity (RH) alone.
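
    The finding that temperature and relative humidity suffice is consistent with classical closed-form approximations. As a hedged aside (this is the standard Magnus approximation with commonly used coefficients, not anything from the paper's ANN or ANFIS models):

```python
import math

# Hedged sketch: the Magnus approximation gives dew point directly from air
# temperature (deg C) and relative humidity (%). The coefficients a and b are
# the widely used Magnus values, not parameters from the study.

def dew_point_magnus(t_c, rh_percent, a=17.62, b=243.12):
    gamma = math.log(rh_percent / 100.0) + a * t_c / (b + t_c)
    return b * gamma / (a - gamma)

td = dew_point_magnus(25.0, 60.0)   # roughly 16-17 deg C
```

    At 100% relative humidity the formula returns the air temperature itself, as it should, since saturated air is at its dew point.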

  11. Flash-point prediction for binary partially miscible mixtures of flammable solvents.

    PubMed

    Liaw, Horng-Jang; Lu, Wen-Hung; Gerbaud, Vincent; Chen, Chan-Cheng

    2008-05-30

    Flash point is the most important variable used to characterize fire and explosion hazard of liquids. Herein, partially miscible mixtures are presented within the context of liquid-liquid extraction processes. This paper describes development of a model for predicting the flash point of binary partially miscible mixtures of flammable solvents. To confirm the predictive efficacy of the derived flash points, the model was verified by comparing the predicted values with the experimental data for the studied mixtures: methanol+octane; methanol+decane; acetone+decane; methanol+2,2,4-trimethylpentane; and, ethanol+tetradecane. Our results reveal that immiscibility in the two liquid phases should not be ignored in the prediction of flash point. Overall, the predictive results of this proposed model describe the experimental data well. Based on this evidence, therefore, it appears reasonable to suggest potential application for our model in assessment of fire and explosion hazards, and development of inherently safer designs for chemical processes containing binary partially miscible mixtures of flammable solvents.
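
    Liaw-type models of this kind define the mixture flash point as the temperature T at which the components' vapor-pressure contributions satisfy sum_i x_i * gamma_i * Psat_i(T) / Psat_i(Tfp_i) = 1. The sketch below takes the ideal-solution limit (all gamma_i = 1), which is far simpler than the paper's treatment of partial miscibility; the Antoine constants are illustrative placeholders, not vetted data.

```python
# Hedged sketch of the ideal-solution limit of a Liaw-type flash point model:
# bisection solves sum_i x_i * Psat_i(T) / Psat_i(Tfp_i) = 1 for the mixture
# flash point. Activity coefficients are set to 1; Antoine constants below
# are illustrative placeholders.

def psat(T, A, B, C):
    # Antoine equation, T in deg C, pressure in mmHg
    return 10.0 ** (A - B / (C + T))

def mixture_flash_point(x, antoine, t_fp, lo=-50.0, hi=150.0):
    def f(T):
        return sum(xi * psat(T, *ab) / psat(tf, *ab)
                   for xi, ab, tf in zip(x, antoine, t_fp)) - 1.0
    for _ in range(80):                  # f is monotone in T, so bisect
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# a pure component must recover its own flash point
t_pure = mixture_flash_point([1.0], [(8.0, 1600.0, 230.0)], [12.0])
# an ideal binary mixture's flash point lies between the pure values
t_mix = mixture_flash_point([0.5, 0.5],
                            [(8.0, 1600.0, 230.0), (7.2, 1750.0, 230.0)],
                            [12.0, 40.0])
```

    The paper's central point is precisely that for partially miscible mixtures this ideal assumption fails and the two-liquid-phase equilibrium must be taken into account.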

  12. The Point Sal–Point Piedras Blancas correlation and the problem of slip on the San Gregorio–Hosgri fault, central California Coast Ranges

    USGS Publications Warehouse

    Colgan, Joseph P.; Stanley, Richard G.

    2016-01-01

    Existing models for large-magnitude, right-lateral slip on the San Gregorio–Hosgri fault system imply much more deformation of the onshore block in the Santa Maria basin than is supported by geologic data. This problem is resolved by a model in which dextral slip on this fault system increases gradually from 0–10 km near Point Arguello to ∼150 km at Cape San Martin, but such a model requires abandoning the cross-fault tie between Point Sal and Point Piedras Blancas, which requires 90–100 km of right-lateral slip on the southern Hosgri fault. We collected stratigraphic and detrital zircon data from Miocene clastic rocks overlying Jurassic basement at both localities to determine if either section contained unique characteristics that could establish how far apart they were in the early Miocene. Our data indicate that these basins formed in the early Miocene during a period of widespread transtensional basin formation in the central Coast Ranges, and they filled with sediment derived from nearby pre-Cenozoic basement rocks. Although detrital zircon data do not indicate a unique source component in either section, they establish the maximum depositional age of the previously undated Point Piedras Blancas section to be 18 Ma. We also show that detrital zircon trace-element data can be used to discriminate between zircons of oceanic crust and arc affinity of the same age, a potentially useful tool in future studies of the California Coast Ranges. Overall, we find no characteristics in the stratigraphy and provenance of the Point Sal and Point Piedras Blancas sections that are sufficiently unique to prove whether they were far apart or close together in the early Miocene, making them of questionable utility as piercing points.

  13. One-dimensional gravity in infinite point distributions.

    PubMed

    Gabrielli, A; Joyce, M; Sicard, F

    2009-10-01

    The dynamics of infinite, asymptotically uniform distributions of purely self-gravitating particles in one spatial dimension provides a simple and interesting toy model for the analogous three-dimensional problem treated in cosmology. In this paper we focus on a limitation of such models as they have been treated so far in the literature: the force, as it has been specified, is well defined in infinite point distributions only if there is a centre of symmetry (i.e., the definition explicitly requires the breaking of statistical translational invariance). The problem arises because naive background subtraction (due to expansion, or by the "Jeans swindle" for the static case), applied as in three dimensions, leaves an unregulated contribution to the force due to surface mass fluctuations. Following a discussion by Kiessling of the Jeans swindle in three dimensions, we show that the problem may be resolved by defining the force in infinite point distributions as the limit of an exponentially screened pair interaction. We show explicitly that this prescription gives a well-defined (finite) force acting on particles in a class of perturbed infinite lattices, which are the point processes relevant to cosmological N-body simulations. For identical particles, the dynamics of the simplest toy model (without expansion) is equivalent to that of an infinite set of points with inverted harmonic oscillator potentials which bounce elastically when they collide. We discuss and compare with previous results in the literature and present new results for the specific case of this simplest (static) model starting from "shuffled lattice" initial conditions. These show qualitative properties of the evolution (notably its "self-similarity") like those in the analogous simulations in three dimensions, which in turn resemble those in the expanding universe.
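
    The screening prescription can be illustrated numerically: in 1D the force from each particle is a constant (sign) times the screened factor exp(-distance/L). For a test particle displaced by d inside a unit-spacing lattice, the screened sum tends to a finite value (2*d per unit coupling) as the screening length L grows, which is the inverted-harmonic restoring behavior described above. The lattice size, displacement, and screening lengths below are illustrative choices.

```python
import math

# Hedged sketch of the exponentially screened 1D force: neighbors sit at
# integer positions j = -N..N (j != 0) and the test particle at x = d. Each
# neighbor contributes sign(j - d) * exp(-|j - d| / L). As L grows (with the
# lattice kept much larger than L), the net force converges to ~2*d.

def screened_force(d, L, N):
    f = 0.0
    for j in range(1, N + 1):
        f += math.exp(-(j - d) / L)   # right-hand neighbors pull toward +x
        f -= math.exp(-(j + d) / L)   # left-hand neighbors pull toward -x
    return f

# increasing screening length, lattice always much larger than L
forces = [screened_force(0.3, L, 20 * L) for L in (10, 100, 1000)]
```

    Note the sign: a particle displaced toward one side is pulled further toward that side, i.e. the effective potential is an inverted harmonic oscillator, exactly as the abstract states for the unscreened limit.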

  14. Transitions in state public health law: comparative analysis of state public health law reform following the Turning Point Model State Public Health Act.

    PubMed

    Meier, Benjamin Mason; Hodge, James G; Gebbie, Kristine M

    2009-03-01

    Given the public health importance of law modernization, we undertook a comparative analysis of policy efforts in 4 states (Alaska, South Carolina, Wisconsin, and Nebraska) that have considered public health law reform based on the Turning Point Model State Public Health Act. Through national legislative tracking and state case studies, we investigated how the Turning Point Act's model legal language has been considered for incorporation into state law and analyzed key facilitating and inhibiting factors for public health law reform. Our findings provide the practice community with a research base to facilitate further law reform and inform future scholarship on the role of law as a determinant of the public's health.

  15. A pointing facilitation system for motor-impaired users combining polynomial smoothing and time-weighted gradient target prediction models.

    PubMed

    Blow, Nikolaus; Biswas, Pradipta

    2017-01-01

    As computers become more and more essential for everyday life, people who cannot use them are missing out on an important tool. The predominant method of interaction with a screen is a mouse, and difficulty in using a mouse can be a huge obstacle for people who would otherwise gain great value from using a computer. If mouse pointing were made easier, many users who were previously unable to use a computer efficiently might begin to do so. The present article aims to improve pointing speeds for people with arm or hand impairments. The authors investigated different smoothing and prediction models on a stored data set involving 25 people, and the best of these algorithms were chosen. A web-based prototype was developed combining a polynomial smoothing algorithm with a time-weighted gradient target prediction model. The adapted interface gave an average improvement of 13.5% in target selection times in a 10-person study of representative users of the system. A demonstration video of the system is available at https://youtu.be/sAzbrKHivEY.
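
    The flavor of "time-weighted gradient" target prediction can be sketched as follows: recent cursor displacements get exponentially larger weights, and the weighted mean velocity is extrapolated to guess the endpoint. The decay factor, horizon, and overall structure are illustrative assumptions, not the parameters of the article's model.

```python
# Hedged sketch of time-weighted gradient target prediction: displacements
# between successive cursor samples are averaged with exponentially decaying
# weights (newest weighted most), then extrapolated a fixed horizon ahead.

def predict_endpoint(samples, decay=0.7, horizon=5):
    # samples: list of (x, y) cursor positions, oldest first
    vels = [(x1 - x0, y1 - y0)
            for (x0, y0), (x1, y1) in zip(samples, samples[1:])]
    n = len(vels)
    weights = [decay ** (n - 1 - i) for i in range(n)]  # newest gets weight 1
    wsum = sum(weights)
    vx = sum(w * v[0] for w, v in zip(weights, vels)) / wsum
    vy = sum(w * v[1] for w, v in zip(weights, vels)) / wsum
    x, y = samples[-1]
    return (x + horizon * vx, y + horizon * vy)

# straight-line motion extrapolates along the line
px, py = predict_endpoint([(float(i), 2.0 * i) for i in range(6)])
```

    A practical system would combine this with the smoothing stage, so that tremor in the raw samples does not dominate the velocity estimate.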

  16. Scaling in the vicinity of the four-state Potts fixed point

    NASA Astrophysics Data System (ADS)

    Blöte, H. W. J.; Guo, Wenan; Nightingale, M. P.

    2017-08-01

    We study a self-dual generalization of the Baxter-Wu model, employing results obtained by transfer matrix calculations of the magnetic scaling dimension and the free energy. While the pure critical Baxter-Wu model displays the critical behavior of the four-state Potts fixed point in two dimensions, in the sense that logarithmic corrections are absent, the introduction of different couplings in the up- and down triangles moves the model away from this fixed point, so that logarithmic corrections appear. Real couplings move the model into the first-order range, away from the behavior displayed by the nearest-neighbor, four-state Potts model. We also use complex couplings, which bring the model in the opposite direction characterized by the same type of logarithmic corrections as present in the four-state Potts model. Our finite-size analysis confirms in detail the existing renormalization theory describing the immediate vicinity of the four-state Potts fixed point.

  17. Flash Points of Secondary Alcohol and n-Alkane Mixtures.

    PubMed

    Esina, Zoya N; Miroshnikov, Alexander M; Korchuganova, Margarita R

    2015-11-19

    The flash point is one of the most important characteristics used to assess the ignition hazard of mixtures of flammable liquids. To determine the flash points of mixtures of secondary alcohols with n-alkanes, it is necessary to calculate the activity coefficients. In this paper, we use a model that allows us to obtain enthalpy of fusion and enthalpy of vaporization data of the pure components to calculate the liquid-solid equilibrium (LSE) and vapor-liquid equilibrium (VLE). Enthalpy of fusion and enthalpy of vaporization data of secondary alcohols in the literature are limited; thus, the prediction of these characteristics was performed using the method of thermodynamic similarity. Additionally, the empirical models provided the critical temperatures and boiling temperatures of the secondary alcohols. The modeled melting enthalpy and enthalpy of vaporization as well as the calculated LSE and VLE flash points were determined for the secondary alcohol and n-alkane mixtures.

  18. Effect of gamma irradiation on the structural, mechanical and optical properties of polytetrafluoroethylene sheet

    NASA Astrophysics Data System (ADS)

    Mohammadian-Kohol, M.; Asgari, M.; Shakur, H. R.

    2018-04-01

    In this study, the effects of gamma radiation on the chemical structure and the mechanical and optical properties of polytetrafluoroethylene (PTFE) sheet were investigated at various doses up to 12 kGy. The chemical changes in the structure were studied by FTIR spectroscopy. The effects of radiation on mechanical parameters such as Young's modulus, toughness, strain, and stress were studied at the maximum tolerable force and at the fracture points. Furthermore, changes in optical parameters such as the absorption coefficient, Urbach energy, optical band gaps, refractive index, optical dispersion parameters and plasma resonance frequency were studied by UV-visible spectroscopy. The formation of a band at 1594 cm-1, attributed to carbon-carbon double bonds, indicated that chain scission occurred at the 12 kGy irradiation dose. The mechanical results also showed an increase in the elastic behavior of the PTFE sheets and a decrease in their plastic behavior with increasing absorbed dose. Moreover, the results showed that gamma irradiation can effectively change the optical properties of PTFE sheets through different phenomena such as degradation of the main chains, chain scission, formation of free radicals and cross-linking in the polymer structure.

  19. Thermal aging of interfacial polymer chains in ethylene-propylene-diene terpolymer/aluminum hydroxide composites: solid-state NMR study.

    PubMed

    Gabrielle, Brice; Lorthioir, Cédric; Lauprêtre, Françoise

    2011-11-03

    The possible influence of micrometric-size filler particles on the thermo-oxidative degradation behavior of polymer chains at polymer/filler interfaces is still an open question. In this study, a cross-linked ethylene-propylene-diene (EPDM) terpolymer filled with aluminum trihydrate (ATH) particles is investigated using (1)H solid-state NMR. The time evolution of the EPDM network microstructure under thermal aging at 80 °C is monitored as a function of the exposure time and compared to that of an unfilled EPDM network displaying a similar initial structure. While nearly no variation of the topology is observed in the neat EPDM network over 5 days at 80 °C, a significant number of chain scission events is evidenced in EPDM/ATH. A specific surface effect induced by ATH on the thermodegradative properties of the polymer chains located in its vicinity is thus pointed out. Close to the filler particles, a larger number of chain scissions is detected, and the characteristic length scale of these interfacial regions displaying significant thermo-oxidation is determined as a function of the aging time.

  20. Effect of Electroacupuncture at The Zusanli Point (Stomach-36) on Dorsal Random Pattern Skin Flap Survival in a Rat Model.

    PubMed

    Wang, Li-Ren; Cai, Le-Yi; Lin, Ding-Sheng; Cao, Bin; Li, Zhi-Jie

    2017-10-01

    Random skin flaps are commonly used for wound repair and reconstruction. Electroacupuncture at the Zusanli point could enhance microcirculation and blood perfusion in random skin flaps. To determine whether electroacupuncture at the Zusanli point can improve the survival of random skin flaps in a rat model, thirty-six male Sprague-Dawley rats were randomly divided into 3 groups: a control group (no electroacupuncture), Group A (electroacupuncture at a nonacupoint near the Zusanli point), and Group B (electroacupuncture at the Zusanli point). McFarlane flaps were established. On postoperative Day 2, malondialdehyde (MDA) and superoxide dismutase were measured. The flap survival rate was evaluated, inflammation was examined in hematoxylin and eosin-stained slices, and the expression of vascular endothelial growth factor (VEGF) was measured immunohistochemically on Day 7. The mean survival area of the flaps in Group B was significantly larger than those in the control group and Group A. Superoxide dismutase activity and VEGF expression were significantly higher in Group B than in the control group and Group A, whereas MDA and inflammation levels in Group B were significantly lower than in the other 2 groups. Electroacupuncture at the Zusanli point can effectively improve random flap survival.