Sample records for compound Poisson approximation

  1. Spatial event cluster detection using an approximate normal distribution.

    PubMed

    Torabi, Mahmoud; Rosychuk, Rhonda J

    2008-12-12

    In geographic surveillance of disease, areas with large numbers of disease cases are to be identified so that investigations of the causes of high disease rates can be pursued. Areas with high rates are called disease clusters and statistical cluster detection tests are used to identify geographic areas with higher disease rates than expected by chance alone. Typically, cluster detection tests are applied to incident or prevalent cases of disease, but surveillance of disease-related events, where an individual may have multiple events, may also be of interest. Previously, a compound Poisson approach that detects clusters of events by testing individual areas that may be combined with their neighbours has been proposed. However, the relevant probabilities from the compound Poisson distribution are obtained from a recursion relation that can be cumbersome if the number of events is large or analyses by strata are performed. We propose a simpler approach that uses an approximate normal distribution. This method is very easy to implement and is applicable to situations where the population sizes are large and the population distribution by important strata may differ by area. We demonstrate the approach on pediatric self-inflicted injury presentations to emergency departments and compare the results for probabilities based on the recursion and the normal approach. We also implement a Monte Carlo simulation to study the performance of the proposed approach. In a self-inflicted injury data example, the normal approach identifies twelve out of thirteen of the same clusters as the compound Poisson approach, noting that the compound Poisson method detects twelve significant clusters in total. Through simulation studies, the normal approach well approximates the compound Poisson approach for a variety of different population sizes and case and event thresholds. A drawback of the compound Poisson approach is that the relevant probabilities must be determined through a recursion relation and such calculations can be computationally intensive if the cluster size is relatively large or if analyses are conducted with strata variables. On the other hand, the normal approach is very flexible, easily implemented, and hence, more appealing for users. Moreover, the concepts may be more easily conveyed to non-statisticians interested in understanding the methodology associated with cluster detection test results.
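
    For intuition about the computation being replaced, the sketch below compares a Monte Carlo estimate of a compound Poisson upper-tail probability with the normal approximation based on its first two moments (mean λE[X], variance λE[X²]). The event-per-case distribution and the threshold are invented for illustration; this is a sketch of the general idea, not the authors' implementation.

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)

    lam = 12.0                              # expected number of cases in an area
    event_pmf = {1: 0.6, 2: 0.3, 3: 0.1}    # illustrative events-per-case distribution
    threshold = 30                          # observed number of events to test

    vals = np.array(list(event_pmf.keys()), dtype=float)
    probs = np.array(list(event_pmf.values()))
    m1 = np.sum(vals * probs)               # E[X]
    m2 = np.sum(vals ** 2 * probs)          # E[X^2]

    # Monte Carlo estimate of P(S >= threshold), with S the sum of X_i over N ~ Poisson(lam)
    n_sim = 50_000
    n_cases = rng.poisson(lam, size=n_sim)
    totals = np.array([rng.choice(vals, size=n, p=probs).sum() for n in n_cases])
    p_mc = np.mean(totals >= threshold)

    # Normal approximation: S is roughly N(lam*E[X], lam*E[X^2]), with continuity correction
    p_norm = norm.sf((threshold - 0.5 - lam * m1) / np.sqrt(lam * m2))

    print(f"Monte Carlo tail probability: {p_mc:.4f}")
    print(f"Normal approximation        : {p_norm:.4f}")
    ```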

  2. Normal and compound Poisson approximations for pattern occurrences in NGS reads.

    PubMed

    Zhai, Zhiyuan; Reinert, Gesine; Song, Kai; Waterman, Michael S; Luan, Yihui; Sun, Fengzhu

    2012-06-01

    Next generation sequencing (NGS) technologies are now widely used in many biological studies. In NGS, sequence reads are randomly sampled from the genome sequence of interest. Most computational approaches for NGS data first map the reads to the genome and then analyze the data based on the mapped reads. Since many organisms have unknown genome sequences and many reads cannot be uniquely mapped to the genomes even if the genome sequences are known, alternative analytical methods are needed for the study of NGS data. Here we suggest using word patterns to analyze NGS data. Word pattern counting (the study of the probabilistic distribution of the number of occurrences of word patterns in one or multiple long sequences) has played an important role in molecular sequence analysis. However, no studies are available on the distribution of the number of occurrences of word patterns in NGS reads. In this article, we build probabilistic models for the background sequence and the sampling process of the sequence reads from the genome. Based on the models, we provide normal and compound Poisson approximations for the number of occurrences of word patterns from the sequence reads, with bounds on the approximation error. The main challenge is to consider the randomness in generating the long background sequence, as well as in the sampling of the reads using NGS. We show the accuracy of these approximations under a variety of conditions for different patterns with various characteristics. Under realistic assumptions, the compound Poisson approximation seems to outperform the normal approximation in most situations. These approximate distributions can be used to evaluate the statistical significance of the occurrence of patterns from NGS data. The theory and the computational algorithm for calculating the approximate distributions are then used to analyze ChIP-Seq data using transcription factor GABP. Software is available online (www-rcf.usc.edu/∼fsun/Programs/NGS_motif_power/NGS_motif_power.html). In addition, Supplementary Material can be found online (www.liebertonline.com/cmb).

  3. DISCRETE COMPOUND POISSON PROCESSES AND TABLES OF THE GEOMETRIC POISSON DISTRIBUTION.

    DTIC Science & Technology

    A concise summary of the salient properties of discrete Poisson processes, with emphasis on comparing the geometric and logarithmic Poisson processes. The...the geometric Poisson process are given for 176 sets of parameter values. New discrete compound Poisson processes are also introduced. These...processes have properties that are particularly relevant when the summation of several different Poisson processes is to be analyzed. This study provides the
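
    The geometric Poisson (Pólya-Aeppli) distribution tabulated in this report is a compound Poisson whose summands are geometric. A hedged sketch of how such probabilities can be generated, using the standard Panjer recursion for a Poisson frequency (the report's own tabulation method is not specified in the abstract):

    ```python
    import numpy as np

    def geometric_poisson_pmf(lam, p, n_max):
        """pmf of S = X_1 + ... + X_N with N ~ Poisson(lam) and
        X_i ~ Geometric on {1, 2, ...}, P(X = j) = (1 - p) * p**(j - 1).
        Computed with the Panjer recursion for a Poisson frequency distribution."""
        f = np.zeros(n_max + 1)                  # severity pmf on 0..n_max
        j = np.arange(1, n_max + 1)
        f[1:] = (1.0 - p) * p ** (j - 1)
        g = np.zeros(n_max + 1)
        g[0] = np.exp(-lam)                      # P(S = 0) = P(N = 0), since each X_i >= 1
        for s in range(1, n_max + 1):
            k = np.arange(1, s + 1)
            g[s] = (lam / s) * np.sum(k * f[k] * g[s - k])
        return g

    pmf = geometric_poisson_pmf(lam=3.0, p=0.4, n_max=60)
    print("total mass captured:", pmf.sum())    # should be very close to 1
    print("P(S = 0..5):", np.round(pmf[:6], 5))
    ```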

  4. DFT investigation on electronic, magnetic, mechanical and thermodynamic properties under pressure of some EuMO3 (M  =  Ga, In) perovskites

    NASA Astrophysics Data System (ADS)

    Dar, Sajad Ahmad; Srivastava, Vipul; Sakalle, Umesh Kumar; Parey, Vanshree; Pagare, Gitanjali

    2017-10-01

    The structural, electronic, magnetic and elastic properties of cubic EuMO3 (M = Ga, In) perovskites have been successfully predicted within well-accepted density functional theory using the full-potential linearized augmented plane wave (FP-LAPW) method. The structural study reveals ferromagnetic stability for both compounds. The spin-polarized electronic bands and density of states calculated with the Hubbard correction (GGA+U) indicate a half-metallic nature for both compounds. The magnetic moments calculated with different approximations were found to be approximately 6 µB for EuGaO3 and approximately 7 µB for EuInO3. The three independent elastic constants (C11, C12, C44) have been used for the prediction of mechanical properties such as Young's modulus (Y), shear modulus (G), Poisson's ratio (ν) and the anisotropy factor (A) under pressure. The B/G ratio indicates ductile behaviour for both compounds. Thermodynamic parameters such as the specific heat capacity, thermal expansion, Grüneisen parameter and Debye temperature have also been analyzed in the temperature range 0-900 K and the pressure range 0-30 GPa.

  5. [Statistical (Poisson) motor unit number estimation. Methodological aspects and normal results in the extensor digitorum brevis muscle of healthy subjects].

    PubMed

    Murga Oporto, L; Menéndez-de León, C; Bauzano Poley, E; Núñez-Castaín, M J

    Among the different techniques for motor unit number estimation (MUNE) is the statistical (Poisson) one, in which the activation of motor units is carried out by electrical stimulation and the estimation is performed by means of a statistical analysis based on the Poisson distribution. The study was undertaken to provide an accessible account of the Poisson MUNE technique, giving a comprehensible view of its methodology, and to obtain normal values in the extensor digitorum brevis (EDB) muscle of a healthy population. One hundred fourteen normal volunteers with ages ranging from 10 to 88 years were studied using the MUNE software contained in a Viking IV system. The normal subjects were divided into two age groups (10-59 and 60-88 years). The EDB MUNE for the whole group was 184 ± 49. Both the MUNE and the amplitude of the compound muscle action potential (CMAP) were significantly lower in the older age group (p < 0.0001), with MUNE showing a better correlation with age than CMAP amplitude (0.5002 and 0.4142, respectively; p < 0.0001). The statistical MUNE method is an important tool for assessing the physiology of the motor unit. The value of MUNE correlates better with the neuromuscular aging process than CMAP amplitude does.

  6. Applying the compound Poisson process model to the reporting of injury-related mortality rates.

    PubMed

    Kegler, Scott R

    2007-02-16

    Injury-related mortality rate estimates are often analyzed under the assumption that case counts follow a Poisson distribution. Certain types of injury incidents occasionally involve multiple fatalities, however, resulting in dependencies between cases that are not reflected in the simple Poisson model and which can affect even basic statistical analyses. This paper explores the compound Poisson process model as an alternative, emphasizing adjustments to some commonly used interval estimators for population-based rates and rate ratios. The adjusted estimators involve relatively simple closed-form computations, which in the absence of multiple-case incidents reduce to familiar estimators based on the simpler Poisson model. Summary data from the National Violent Death Reporting System are referenced in several examples demonstrating application of the proposed methodology.
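
    The essential point is that when a single incident can involve several deaths, the variance of the total count is driven by the sum of squared incident sizes rather than by the count itself. The sketch below illustrates that adjustment on made-up incident data; it is a generic compound Poisson variance correction applied to a rate confidence interval, not necessarily Kegler's published estimator.

    ```python
    import numpy as np
    from scipy.stats import norm

    # Hypothetical incident-level data: number of deaths per incident
    deaths_per_incident = np.array([1, 1, 2, 1, 4, 1, 1, 3, 1, 1, 2, 1])
    population = 1_500_000
    per = 100_000                              # report the rate per 100,000

    total_deaths = deaths_per_incident.sum()
    rate = total_deaths / population * per

    # Simple Poisson model: every death independent, so Var(D) is estimated by D
    var_poisson = total_deaths
    # Compound Poisson model: Var(D) is estimated by the sum of squared incident sizes
    var_compound = np.sum(deaths_per_incident ** 2)

    z = norm.ppf(0.975)
    for name, v in [("simple Poisson", var_poisson), ("compound Poisson", var_compound)]:
        half = z * np.sqrt(v) / population * per
        print(f"{name:16s}: rate = {rate:.2f}, 95% CI = ({rate - half:.2f}, {rate + half:.2f})")
    ```

    When every incident involves exactly one death the two variance estimates coincide, which mirrors the paper's remark that the adjusted estimators reduce to the familiar Poisson-based ones in the absence of multiple-case incidents.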

  7. Approximations to camera sensor noise

    NASA Astrophysics Data System (ADS)

    Jin, Xiaodan; Hirakawa, Keigo

    2013-02-01

    Noise is present in all image sensor data. The Poisson distribution is said to model the stochastic nature of the photon arrival process, while it is common to approximate readout/thermal noise by additive white Gaussian noise (AWGN). Other sources of signal-dependent noise such as Fano and quantization noise also contribute to the overall noise profile. Questions remain, however, about how best to model the combined sensor noise. Although additive Gaussian noise with signal-dependent noise variance (SD-AWGN) and Poisson corruption are two widely used models to approximate the actual sensor noise distribution, the justification given for these models is based on limited evidence. The goal of this paper is to provide a more comprehensive characterization of random noise. We conclude by presenting concrete evidence that the Poisson model is a better approximation to real camera noise than SD-AWGN. We suggest further modifications to the Poisson model that may improve the noise model.
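
    As a rough illustration of the two candidate models (not the paper's characterization procedure), the following sketch corrupts a synthetic signal with Poisson noise and with signal-dependent AWGN matched to the same mean and variance, then compares simple empirical statistics; the skewness at low counts is what separates the two.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    signal = np.linspace(5, 200, 1000)     # mean photo-electron counts per pixel
    n_rep = 2000

    # Poisson model: variance equals the signal level, distribution is skewed at low counts
    poisson_obs = rng.poisson(signal, size=(n_rep, signal.size))

    # SD-AWGN model: Gaussian noise whose variance also equals the signal level
    sdawgn_obs = signal + rng.normal(0.0, np.sqrt(signal), size=(n_rep, signal.size))

    for name, obs in [("Poisson", poisson_obs), ("SD-AWGN", sdawgn_obs)]:
        var = obs.var(axis=0)
        skew = ((obs - obs.mean(axis=0)) ** 3).mean(axis=0) / var ** 1.5
        print(f"{name}: mean var/signal = {np.mean(var / signal):.3f}, "
              f"skewness at lowest signal = {skew[0]:.3f}")
    ```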

  8. Limitations of Poisson statistics in describing radioactive decay.

    PubMed

    Sitek, Arkadiusz; Celler, Anna M

    2015-12-01

    The assumption that nuclear decays are governed by Poisson statistics is an approximation. This approximation becomes unjustified when data acquisition times longer than or even comparable with the half-lives of the radioisotope in the sample are considered. In this work, the limits of the Poisson-statistics approximation are investigated. The formalism for the statistics of radioactive decay based on the binomial distribution is derived. The theoretical factor describing the deviation of the variance of the number of decays predicted by the Poisson distribution from the true variance is defined and investigated for several commonly used radiotracers such as (18)F, (15)O, (82)Rb, (13)N, (99m)Tc, (123)I, and (201)Tl. The variance of the number of decays estimated using the Poisson distribution is significantly different from the true variance for a 5-minute observation time of (11)C, (15)O, (13)N, and (82)Rb. Durations of nuclear medicine studies are often relatively long; they may even be a few times longer than the half-lives of some short-lived radiotracers. Our study shows that in such situations Poisson statistics are unsuitable and should not be applied to describe the statistics of the number of decays in radioactive samples. However, the above statement does not directly apply to counting statistics at the level of event detection. The low sensitivities of the detectors used in imaging studies make the Poisson approximation nearly perfect. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
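
    The deviation described here follows directly from the binomial model: if each of the N0 nuclei decays during the acquisition with probability p = 1 − exp(−ln 2 · t / T½), the true variance is N0·p·(1 − p) while the Poisson model gives N0·p, so their ratio is simply 1 − p. A small sketch (half-lives hard-coded as approximate, illustrative values):

    ```python
    import numpy as np

    # Approximate half-lives in seconds (illustrative values)
    half_life = {"C-11": 20.4 * 60, "O-15": 122.2, "N-13": 9.97 * 60,
                 "Rb-82": 76.4, "Tc-99m": 6.01 * 3600, "F-18": 109.8 * 60}

    t_obs = 5 * 60.0   # 5-minute observation time, as in the abstract

    print("isotope   decay prob p   true var / Poisson var = 1 - p")
    for iso, t_half in half_life.items():
        p = 1.0 - np.exp(-np.log(2) * t_obs / t_half)
        print(f"{iso:8s}  {p:12.3f}   {1.0 - p:10.3f}")
    ```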

  9. Guidelines for Use of the Approximate Beta-Poisson Dose-Response Model.

    PubMed

    Xie, Gang; Roiko, Anne; Stratton, Helen; Lemckert, Charles; Dunn, Peter K; Mengersen, Kerrie

    2017-07-01

    For dose-response analysis in quantitative microbial risk assessment (QMRA), the exact beta-Poisson model is a two-parameter mechanistic dose-response model with parameters α > 0 and β > 0, which involves the Kummer confluent hypergeometric function. Evaluation of a hypergeometric function is a computational challenge. Denoting P_I(d) as the probability of infection at a given mean dose d, the widely used dose-response model P_I(d) = 1 − (1 + d/β)^(−α) is an approximate formula for the exact beta-Poisson model. Notwithstanding the required conditions α ≪ β and β ≫ 1, issues related to the validity and approximation accuracy of this approximate formula have remained largely ignored in practice, partly because these conditions are too general to provide clear guidance. Consequently, this study proposes a probability measure Pr(0 < r < 1 | α̂, β̂) as a validity measure (r is a random variable that follows a gamma distribution; α̂ and β̂ are the maximum likelihood estimates of α and β in the approximate model), and the constraint condition β̂ > (22α̂)^0.50 for 0.02 < α̂ < 2 as a rule of thumb to ensure an accurate approximation (e.g., Pr(0 < r < 1 | α̂, β̂) > 0.99). This validity measure and rule of thumb were validated by application to all the completed beta-Poisson models (related to 85 data sets) from the QMRA community portal (QMRA Wiki). The results showed that the higher the probability Pr(0 < r < 1 | α̂, β̂), the better the approximation. The results further showed that, among the 85 models examined in total, 68 were identified as valid approximate model applications, all of which had a near-perfect match to the corresponding exact beta-Poisson model dose-response curve. © 2016 Society for Risk Analysis.
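
    For concreteness, the exact beta-Poisson dose-response can be written with the Kummer confluent hypergeometric function as P_I(d) = 1 − ₁F₁(α, α + β, −d). The sketch below compares it with the approximate formula for two invented parameter sets, one that satisfies the β ≫ 1 and β ≫ α type conditions well and one that does not:

    ```python
    import numpy as np
    from scipy.special import hyp1f1

    def exact_beta_poisson(d, alpha, beta):
        """Exact beta-Poisson: P(d) = 1 - 1F1(alpha, alpha + beta, -d)."""
        return 1.0 - hyp1f1(alpha, alpha + beta, -d)

    def approx_beta_poisson(d, alpha, beta):
        """Widely used approximation: P(d) = 1 - (1 + d/beta)**(-alpha)."""
        return 1.0 - (1.0 + d / beta) ** (-alpha)

    doses = np.logspace(-1, 2, 7)
    for alpha, beta in [(0.2, 50.0),   # beta >> 1 and beta >> alpha: approximation is good
                        (0.5, 1.2)]:   # conditions only weakly satisfied: larger error
        err = np.max(np.abs(exact_beta_poisson(doses, alpha, beta)
                            - approx_beta_poisson(doses, alpha, beta)))
        print(f"alpha={alpha}, beta={beta}: max |exact - approx| over doses = {err:.4f}")
    ```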

  10. A Comparative Study of Structural Stability and Mechanical and Optical Properties of Fluorapatite (Ca5(PO4)3F) and Lithium Disilicate (Li2Si2O5) Components Forming Dental Glass-Ceramics: First Principles Study

    NASA Astrophysics Data System (ADS)

    Biskri, Z. E.; Rached, H.; Bouchear, M.; Rached, D.; Aida, M. S.

    2016-10-01

    The aim of this paper is a comparative study of structural stability and mechanical and optical properties of fluorapatite (FA) (Ca5(PO4)3F) and lithium disilicate (LD) (Li2Si2O5), using the first principles pseudopotential method based on density functional theory (DFT) within the generalized gradient approximation (GGA). The stability of the fluorapatite and lithium disilicate compounds has been evaluated on the basis of their formation enthalpies. The results show that fluorapatite is more energetically stable than lithium disilicate. The independent elastic constants and related mechanical properties, including bulk modulus (B), shear modulus (G), Young's modulus (E) and Poisson's ratio (ν) as well as the Vickers hardness (Hv), have been calculated for the fluorapatite compound and compared with other theoretical and experimental results. The obtained values of the shear modulus, Young's modulus and Vickers hardness are smaller in comparison with those of the lithium disilicate compound, implying that lithium disilicate is more rigid than fluorapatite. The brittle and ductile properties were also discussed using the B/G ratio and Poisson's ratio. Optical properties such as the refractive index n(ω), extinction coefficient k(ω), absorption coefficient α(ω) and optical reflectivity R(ω) have been determined from the calculations of the complex dielectric function ε(ω), and interpreted on the basis of the electronic structures of both compounds. The calculated values of the static dielectric constant ε1(0) and static refractive index n(0) show that the Li2Si2O5 compound has larger values compared to those of the Ca5(PO4)3F compound. The results for the extinction coefficient show that the Li2Si2O5 compound exhibits a much stronger ultraviolet absorption. According to the absorption and reflectivity spectra, we inferred that both compounds are theoretically the best visible and infrared transparent materials.

  11. Ab Initio Study of Electronic Structure, Elastic and Transport Properties of Fluoroperovskite LiBeF3

    NASA Astrophysics Data System (ADS)

    Benmhidi, H.; Rached, H.; Rached, D.; Benkabou, M.

    2017-04-01

    The aim of this work is to investigate the electronic, mechanical, and transport properties of the fluoroperovskite compound LiBeF3 by first-principles calculations using the full-potential linear muffin-tin orbital method based on density functional theory within the local density approximation. The independent elastic constants and related mechanical properties including the bulk modulus (B), shear modulus (G), Young's modulus (E), and Poisson's ratio (ν) have been studied, yielding the elastic moduli, shear wave velocities, and Debye temperature. According to the electronic properties, this compound is an indirect-bandgap material, in good agreement with available theoretical data. The electron effective mass, hole effective mass, and energy bandgaps with their volume and pressure dependence are investigated for the first time.

  12. First principles predictions of electronic and elastic properties of BaPb2As2 in the ThCr2Si2-type structure

    NASA Astrophysics Data System (ADS)

    Bourourou, Y.; Amari, S.; Yahiaoui, I. E.; Bouhafs, B.

    2018-01-01

    A first-principles approach is used to predict the electronic and elastic properties of the BaPb2As2 superconductor compound, using the full-potential linearized augmented plane wave plus local orbitals (FP-L/APW+lo) scheme within the local density approximation (LDA). The calculated equilibrium structural parameter a agrees well with experiment, while the c/a ratio is far from the experimental result. The band structure and density of states, together with the charge density and chemical bonding, are discussed. The calculated elastic constants for our compound indicate that it is mechanically stable at ambient pressure. The polycrystalline elastic moduli (Young's modulus, bulk modulus, shear modulus and Poisson's ratio) were calculated according to the Voigt-Reuss-Hill (VRH) average.

  13. Structural stability, elastic and thermodynamic properties of Au-Cu alloys from first-principles calculations

    NASA Astrophysics Data System (ADS)

    Kong, Ge-Xing; Ma, Xiao-Juan; Liu, Qi-Jun; Li, Yong; Liu, Zheng-Tang

    2018-03-01

    Using a first-principles calculation method based on density functional theory (DFT) with the Perdew-Burke-Ernzerhof (PBE) implementation of the generalized gradient approximation (GGA), we investigate the structural, elastic and thermodynamic properties of gold-copper intermetallic compounds (Au-Cu ICs). The calculated lattice parameters are in excellent agreement with experimental data. The elastic constants show that all the investigated Au-Cu alloys are mechanically stable. Elastic properties, including the shear modulus, Young's modulus, Poisson's ratio and Pugh's indicator, of the intermetallic compounds are evaluated and discussed, with special attention to the remarkable anisotropy displayed by Au-Cu ICs. Thermodynamic and transport properties including the Debye temperature, thermal conductivity and melting point are predicted from the averaged sound velocity and elastic moduli, using semi-empirical formulas.

  14. Ferromagnetic Phase Stability, Magnetic, Electronic, Elasto-Mechanical and Thermodynamic Properties of BaCmO3 Perovskite Oxide

    NASA Astrophysics Data System (ADS)

    Dar, Sajad Ahmad; Srivastava, Vipul; Sakalle, Umesh Kumar; Parey, Vanshree

    2018-04-01

    The structural, electronic, elasto-mechanical and thermodynamic properties of the cubic ABO3 perovskite BaCmO3 have been successfully calculated within density functional theory via the full-potential linearized augmented plane wave method. The structural study reveals ferromagnetic stability for the compound. For the precise calculation of electronic and magnetic properties, the generalized gradient approximation (GGA), a Hubbard-corrected approximation (GGA + U) and the modified Becke-Johnson (mBJ) approximation have been incorporated. The electronic study portrays a half-metallic nature for the compound in all the approximations. The calculated magnetic moment with different approximations was found to be large, with an integer value of 6 μB; this integer value of the magnetic moment also supports the half-metallic nature of BaCmO3. The calculated elastic constants have been used to predict mechanical properties such as Young's modulus (Y), the shear modulus (G) and Poisson's ratio (ν). The calculated B/G ratio and Cauchy pressure (C12-C44) indicate brittle behaviour for BaCmO3. Thermodynamic parameters such as the heat capacity, thermal expansion and Debye temperature have been calculated and examined in the temperature range 0-700 K and at pressures between 0 GPa and 40 GPa. The melting temperature was also calculated and was found to be 1847 ± 300 K.

  15. Calculating pKa values for substituted phenols and hydration energies for other compounds with the first-order Fuzzy-Border continuum solvation model

    PubMed Central

    Sharma, Ity; Kaminski, George A.

    2012-01-01

    We have computed pKa values for eleven substituted phenol compounds using the continuum Fuzzy-Border (FB) solvation model. Hydration energies for 40 other compounds, including alkanes, alkenes, alkynes, ketones, amines, alcohols, ethers, aromatics, amides, heterocycles, thiols, sulfides and acids have been calculated. The overall average unsigned error in the calculated acidity constant values was equal to 0.41 pH units and the average error in the solvation energies was 0.076 kcal/mol. We have also reproduced pKa values of propanoic and butanoic acids within ca. 0.1 pH units from the experimental values by fitting the solvation parameters for carboxylate ion carbon and oxygen atoms. The FB model combines two distinguishing features. First, it limits the amount of noise which is common in numerical treatment of continuum solvation models by using fixed-position grid points. Second, it employs either second- or first-order approximation for the solvent polarization, depending on a particular implementation. These approximations are similar to those used for solute and explicit solvent fast polarization treatment which we developed previously. This article describes results of employing the first-order technique. This approximation places the presented methodology between the Generalized Born and Poisson-Boltzmann continuum solvation models with respect to their accuracy of reproducing the many-body effects in modeling a continuum solvent. PMID:22815192

  16. Segmentation algorithm for non-stationary compound Poisson processes. With an application to inventory time series of market members in a financial market

    NASA Astrophysics Data System (ADS)

    Tóth, B.; Lillo, F.; Farmer, J. D.

    2010-11-01

    We introduce an algorithm for the segmentation of a class of regime-switching processes. The segmentation algorithm is a non-parametric statistical method able to identify the regimes (patches) of a time series. The process is composed of consecutive patches of variable length. In each patch the process is described by a stationary compound Poisson process, i.e. a Poisson process where each count is associated with a fluctuating signal. The parameters of the process are different in each patch and therefore the time series is non-stationary. Our method is a generalization of the algorithm introduced by Bernaola-Galván et al. [Phys. Rev. Lett. 87, 168105 (2001)]. We show that the new algorithm outperforms the original one for regime-switching models of compound Poisson processes. As an application we use the algorithm to segment the time series of the inventory of market members of the London Stock Exchange and we observe that our method finds almost three times more patches than the original one.

  17. Itô and Stratonovich integrals on compound renewal processes: the normal/Poisson case

    NASA Astrophysics Data System (ADS)

    Germano, Guido; Politi, Mauro; Scalas, Enrico; Schilling, René L.

    2010-06-01

    Continuous-time random walks, or compound renewal processes, are pure-jump stochastic processes with several applications in insurance, finance, economics and physics. Based on heuristic considerations, a definition is given for stochastic integrals driven by continuous-time random walks, which includes the Itô and Stratonovich cases. It is then shown how the definition can be used to compute these two stochastic integrals by means of Monte Carlo simulations. Our example is based on the normal compound Poisson process, which in the diffusive limit converges to the Wiener process.
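
    A toy Monte Carlo in this spirit (a sketch only; the paper's definitions are more general) integrates ∫X dX for a normal compound Poisson process X using the left-point (Itô-type) and midpoint (Stratonovich-type) evaluation rules; their pathwise difference is half the quadratic variation, with expectation λT·E[J²]/2.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    lam, T = 5.0, 1.0          # jump rate and horizon of the normal compound Poisson process
    n_paths = 20_000

    diffs = np.empty(n_paths)
    for i in range(n_paths):
        n = rng.poisson(lam * T)
        jumps = rng.normal(0.0, 1.0, size=n)              # i.i.d. normal jump sizes
        x = np.concatenate(([0.0], np.cumsum(jumps)))     # X evaluated at the jump times
        dx = np.diff(x)
        ito = np.sum(x[:-1] * dx)                         # left-point (Ito-type) sum
        strat = np.sum(0.5 * (x[:-1] + x[1:]) * dx)       # midpoint (Stratonovich-type) sum
        diffs[i] = strat - ito                            # equals half the quadratic variation

    # E[Stratonovich - Ito] = 0.5 * E[sum of squared jumps] = 0.5 * lam * T * E[J^2]
    print("simulated mean difference :", diffs.mean())
    print("theoretical value         :", 0.5 * lam * T * 1.0)
    ```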

  18. A Method of Poisson's Ratio Imaging Within a Material Part

    NASA Technical Reports Server (NTRS)

    Roth, Don J. (Inventor)

    1994-01-01

    The present invention is directed to a method of displaying the Poisson's ratio image of a material part. In the present invention, longitudinal data is produced using a longitudinal wave transducer and shear wave data is produced using a shear wave transducer. The respective data is then used to calculate the Poisson's ratio for the entire material part. The Poisson's ratio approximations are then used to display the data.
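
    A plausible reading of the per-point computation, using the standard isotropic-elasticity relation between Poisson's ratio and the longitudinal and shear wave velocities (an assumption; the abstract does not give the formula), is sketched below with hypothetical velocity maps:

    ```python
    import numpy as np

    def poisson_ratio(v_long, v_shear):
        """Isotropic Poisson's ratio from longitudinal and shear wave velocities:
        nu = (v_l**2 - 2*v_s**2) / (2*(v_l**2 - v_s**2))."""
        r2 = (v_shear / v_long) ** 2
        return (1.0 - 2.0 * r2) / (2.0 * (1.0 - r2))

    # Hypothetical velocity maps (m/s) measured on a grid of scan points over the part
    rng = np.random.default_rng(3)
    v_long = 6400.0 + rng.normal(0.0, 50.0, size=(4, 4))
    v_shear = 3100.0 + rng.normal(0.0, 30.0, size=(4, 4))

    nu_image = poisson_ratio(v_long, v_shear)   # one Poisson's ratio value per scan point
    print(np.round(nu_image, 3))
    ```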

  19. Method of Poisson's ratio imaging within a material part

    NASA Technical Reports Server (NTRS)

    Roth, Don J. (Inventor)

    1996-01-01

    The present invention is directed to a method of displaying the Poisson's ratio image of a material part. In the present invention, longitudinal data is produced using a longitudinal wave transducer and shear wave data is produced using a shear wave transducer. The respective data is then used to calculate the Poisson's ratio for the entire material part. The Poisson's ratio approximations are then used to display the image.

  20. Computational prediction of new auxetic materials.

    PubMed

    Dagdelen, John; Montoya, Joseph; de Jong, Maarten; Persson, Kristin

    2017-08-22

    Auxetics comprise a rare family of materials that manifest a negative Poisson's ratio, which causes expansion instead of contraction under tension. Most known homogeneously auxetic materials are porous foams or artificial macrostructures, and there are few examples of inorganic materials that exhibit this behavior as polycrystalline solids. It is now possible to accelerate the discovery of materials with target properties, such as auxetics, using high-throughput computations, open databases, and efficient search algorithms. Candidates exhibiting features correlating with auxetic behavior were chosen from the set of more than 67 000 materials in the Materials Project database. Poisson's ratios were derived from the calculated elastic tensor of each material in this reduced set of compounds. We report that this strategy results in the prediction of three previously unidentified homogeneously auxetic materials as well as a number of compounds with a near-zero homogeneous Poisson's ratio, which are here denoted "anepirretic materials". There are very few inorganic materials with an auxetic homogeneous Poisson's ratio in polycrystalline form. Here the authors develop an approach to screening materials databases for target properties such as a negative Poisson's ratio by using stability and structural motifs to predict new instances of homogeneous auxetic behavior as well as a number of materials with a near-zero Poisson's ratio.
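
    One way such a screen can be expressed, assuming the homogeneous Poisson's ratio is computed from polycrystalline-average (e.g. Voigt-Reuss-Hill) bulk and shear moduli and that the near-zero cutoff is arbitrary (both are assumptions; the exact pipeline is not given in the abstract):

    ```python
    def homogeneous_poisson_ratio(k_vrh, g_vrh):
        """Isotropic Poisson's ratio from the bulk modulus K and shear modulus G (GPa)."""
        return (3.0 * k_vrh - 2.0 * g_vrh) / (2.0 * (3.0 * k_vrh + g_vrh))

    def classify(nu, near_zero_tol=0.05):
        if nu < 0.0:
            return "auxetic"
        if nu < near_zero_tol:
            return "near-zero Poisson's ratio (anepirretic)"
        return "conventional"

    # Hypothetical (K, G) pairs in GPa for a few candidate materials
    candidates = {"mat-A": (35.0, 60.0), "mat-B": (80.0, 45.0), "mat-C": (10.0, 14.5)}
    for name, (k, g) in candidates.items():
        nu = homogeneous_poisson_ratio(k, g)
        print(f"{name}: nu = {nu:+.3f}  ->  {classify(nu)}")
    ```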

  1. A Poisson equation formulation for pressure calculations in penalty finite element models for viscous incompressible flows

    NASA Technical Reports Server (NTRS)

    Sohn, J. L.; Heinrich, J. C.

    1990-01-01

    The calculation of pressures when the penalty-function approximation is used in finite-element solutions of laminar incompressible flows is addressed. A Poisson equation for the pressure is formulated that involves third derivatives of the velocity field. The second derivatives appearing in the weak formulation of the Poisson equation are calculated from the C0 velocity approximation using a least-squares method. The present scheme is shown to be efficient, free of spurious oscillations, and accurate. Examples of applications are given and compared with results obtained using mixed formulations.

  2. Nonparametric estimation of the heterogeneity of a random medium using compound Poisson process modeling of wave multiple scattering.

    PubMed

    Le Bihan, Nicolas; Margerin, Ludovic

    2009-07-01

    In this paper, we present a nonparametric method to estimate the heterogeneity of a random medium from the angular distribution of intensity of waves transmitted through a slab of random material. Our approach is based on the modeling of forward multiple scattering using compound Poisson processes on compact Lie groups. The estimation technique is validated through numerical simulations based on radiative transfer theory.

  3. Generation of Non-Homogeneous Poisson Processes by Thinning: Programming Considerations and Comparison with Competing Algorithms.

    DTIC Science & Technology

    1978-12-01

    Poisson processes. The method is valid for Poisson processes with any given intensity function. The basic thinning algorithm is modified to exploit several refinements which reduce computer execution time by approximately one-third. The basic and modified thinning programs are compared with the Poisson decomposition and gap-statistics algorithm, which is easily implemented for Poisson processes with intensity functions of the form exp(a0 + a1·t + a2·t²). The thinning programs are competitive in both execution
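
    A minimal version of the thinning (acceptance-rejection) algorithm for a non-homogeneous Poisson process with the quadratic-exponential intensity mentioned above; this is a sketch with invented coefficients, not a reconstruction of the report's programs.

    ```python
    import numpy as np

    def thin_nhpp(intensity, t_max, lam_max, rng):
        """Lewis-Shedler thinning: simulate a homogeneous Poisson process with rate lam_max
        on [0, t_max] and keep each point t with probability intensity(t) / lam_max."""
        t, events = 0.0, []
        while True:
            t += rng.exponential(1.0 / lam_max)
            if t > t_max:
                return np.array(events)
            if rng.random() < intensity(t) / lam_max:
                events.append(t)

    # Intensity of the form exp(a0 + a1*t + a2*t**2), as in the report (coefficients invented)
    a0, a1, a2 = 0.5, 0.8, -0.1
    intensity = lambda t: np.exp(a0 + a1 * t + a2 * t ** 2)

    t_max = 10.0
    grid = np.linspace(0.0, t_max, 2001)
    vals = intensity(grid)
    lam_max = 1.01 * vals.max()          # approximate upper bound from a fine grid, with margin

    rng = np.random.default_rng(5)
    events = thin_nhpp(intensity, t_max, lam_max, rng)
    expected = np.sum(0.5 * (vals[:-1] + vals[1:]) * np.diff(grid))   # integral of the intensity
    print(f"simulated events: {events.size}, expected count: {expected:.1f}")
    ```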

  4. Adiabatic elimination for systems with inertia driven by compound Poisson colored noise.

    PubMed

    Li, Tiejun; Min, Bin; Wang, Zhiming

    2014-02-01

    We consider the dynamics of systems driven by compound Poisson colored noise in the presence of inertia. We study the limit when the frictional relaxation time and the noise autocorrelation time both tend to zero. We show that the Itô and Marcus stochastic calculuses naturally arise depending on these two time scales, and an extra intermediate type occurs when the two time scales are comparable. This leads to three different limiting regimes which are supported by numerical simulations. Furthermore, we establish that when the resulting compound Poisson process tends to the Wiener process in the frequent jump limit the Itô and Marcus calculuses, respectively, tend to the classical Itô and Stratonovich calculuses for Gaussian white noise, and the crossover type calculus tends to a crossover between the Itô and Stratonovich calculuses. Our results would be very helpful for understanding relevant experiments when jump type noise is involved.

  5. This is SPIRAL-TAP: Sparse Poisson Intensity Reconstruction ALgorithms--theory and practice.

    PubMed

    Harmany, Zachary T; Marcia, Roummel F; Willett, Rebecca M

    2012-03-01

    Observations in many applications consist of counts of discrete events, such as photons hitting a detector, which cannot be effectively modeled using an additive bounded or Gaussian noise model, and instead require a Poisson noise model. As a result, accurate reconstruction of a spatially or temporally distributed phenomenon (f*) from Poisson data (y) cannot be effectively accomplished by minimizing a conventional penalized least-squares objective function. The problem addressed in this paper is the estimation of f* from y in an inverse problem setting, where the number of unknowns may potentially be larger than the number of observations and f* admits sparse approximation. The optimization formulation considered in this paper uses a penalized negative Poisson log-likelihood objective function with nonnegativity constraints (since Poisson intensities are naturally nonnegative). In particular, the proposed approach incorporates key ideas of using separable quadratic approximations to the objective function at each iteration and penalization terms related to l1 norms of coefficient vectors, total variation seminorms, and partition-based multiscale estimation methods.
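
    As a rough sketch of the kind of objective involved (a plain bound-constrained quasi-Newton solve, not the SPIRAL-TAP algorithm itself; the forward operator, penalty weight and problem sizes are made up), minimizing the penalized negative Poisson log-likelihood 1ᵀ(Af) − yᵀlog(Af) + τ‖f‖₁ subject to f ≥ 0 can look like:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(6)

    # Synthetic problem: y ~ Poisson(A f_true) with a sparse, nonnegative f_true
    n_obs, n_unknown = 80, 200
    A = rng.uniform(0.0, 1.0, size=(n_obs, n_unknown))
    f_true = np.zeros(n_unknown)
    f_true[rng.choice(n_unknown, size=10, replace=False)] = rng.uniform(5.0, 15.0, size=10)
    y = rng.poisson(A @ f_true)

    tau, eps = 0.5, 1e-10

    def objective(f):
        af = A @ f + eps
        val = np.sum(af) - np.sum(y * np.log(af)) + tau * np.sum(f)  # ||f||_1 = sum(f) for f >= 0
        grad = A.T @ (1.0 - y / af) + tau
        return val, grad

    res = minimize(objective, x0=np.ones(n_unknown), jac=True, method="L-BFGS-B",
                   bounds=[(0.0, None)] * n_unknown)
    f_hat = res.x

    print("true support recovered:", int(np.sum((f_hat > 1.0) & (f_true > 0))), "of 10")
    print("relative error        :", np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true))
    ```

    Because f is constrained to be nonnegative, the l1 penalty reduces to the linear term Σᵢ fᵢ, which is what makes a smooth bound-constrained solver usable in this simplified sketch.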

  6. More on approximations of Poisson probabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kao, C

    1980-05-01

    Calculation of Poisson probabilities frequently involves calculating high factorials, which becomes tedious and time-consuming with regular calculators. The usual way to overcome this difficulty has been to find approximations by making use of the table of the standard normal distribution. A new transformation proposed by Kao in 1978 appears to perform better for this purpose than traditional transformations. In the present paper several approximation methods are stated and compared numerically, including an approximation method that utilizes a modified version of Kao's transformation. An approximation based on a power transformation was found to outperform those based on the square-root type transformations as proposed in the literature. The traditional Wilson-Hilferty approximation and the Makabe-Morimura approximation are extremely poor compared with this approximation. 4 tables. (RWR)
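
    To make the comparison concrete, the sketch below evaluates the exact Poisson upper-tail probability against a plain normal approximation with continuity correction and a common square-root (variance-stabilizing) transformation; these are standard textbook variants, not necessarily the exact transformations examined in the report.

    ```python
    import numpy as np
    from scipy.stats import poisson, norm

    lam = 15.0
    ks = np.array([20, 25, 30, 35])

    exact = poisson.sf(ks, lam)                                    # P(X > k)
    plain = norm.sf((ks + 0.5 - lam) / np.sqrt(lam))               # normal, continuity-corrected
    sqrt_vs = norm.sf(2.0 * (np.sqrt(ks + 1.0) - np.sqrt(lam)))    # sqrt(X) ~ N(sqrt(lam), 1/4)

    print("k   exact        normal(cc)   sqrt-transform")
    for k, e, p, s in zip(ks, exact, plain, sqrt_vs):
        print(f"{k:2d}  {e:.5e}  {p:.5e}  {s:.5e}")
    ```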

  7. Distribution of apparent activation energy counterparts during thermo - And thermo-oxidative degradation of Aronia melanocarpa (black chokeberry).

    PubMed

    Janković, Bojan; Marinović-Cincović, Milena; Janković, Marija

    2017-09-01

    The kinetics of degradation of Aronia melanocarpa fresh fruits in argon and air atmospheres were investigated. The investigation was based on probability distributions of apparent activation energy counterparts (εa). Isoconversional analysis results indicated that the degradation process in an inert atmosphere was governed by decomposition reactions of esterified compounds. Also, based on the same kinetic approach, it was assumed that in an air atmosphere the primary compounds in the degradation pathways could be anthocyanins, which undergo rapid chemical reactions. A new model of reactivity demonstrated that, under inert atmospheres, expectation values for εa occurred at levels of statistical probability. These values corresponded to decomposition processes in which polyphenolic compounds might be involved. The εa values obeyed the laws of the binomial distribution. It was established that, for thermo-oxidative degradation, the Poisson distribution represented a very successful approximation for the εa values, where there was additional mechanistic complexity and the binomial distribution was no longer valid. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Theoretical investigations on structural, elastic and electronic properties of thallium halides

    NASA Astrophysics Data System (ADS)

    Singh, Rishi Pal; Singh, Rajendra Kumar; Rajagopalan, Mathrubutham

    2011-04-01

    Theoretical investigations on structural, elastic and electronic properties, viz. ground state lattice parameter, elastic moduli and density of states, of thallium halides (viz. TlCl and TlBr) have been made using the full potential linearized augmented plane wave method within the generalized gradient approximation (GGA). The ground state lattice parameter and bulk modulus and its pressure derivative have been obtained using optimization method. Young's modulus, shear modulus, Poisson ratio, sound velocities for longitudinal and shear waves, Debye average velocity, Debye temperature and Grüneisen parameter have also been calculated for these compounds. Calculated structural, elastic and other parameters are in good agreement with the available data.

  9. Characterization of x-ray framing cameras for the National Ignition Facility using single photon pulse height analysis.

    PubMed

    Holder, J P; Benedetti, L R; Bradley, D K

    2016-11-01

    Single hit pulse height analysis is applied to National Ignition Facility x-ray framing cameras to quantify gain and gain variation in a single micro-channel plate-based instrument. This method allows the separation of gain from detectability in these photon-detecting devices. While pulse heights measured by standard-DC calibration methods follow the expected exponential distribution at the limit of a compound-Poisson process, gain-gated pulse heights follow a more complex distribution that may be approximated as a weighted sum of a few exponentials. We can reproduce this behavior with a simple statistical-sampling model.

  10. The perturbed compound Poisson risk model with constant interest and a threshold dividend strategy

    NASA Astrophysics Data System (ADS)

    Gao, Shan; Liu, Zaiming

    2010-03-01

    In this paper, we consider the compound Poisson risk model perturbed by diffusion with constant interest and a threshold dividend strategy. Integro-differential equations with certain boundary conditions for the moment-generating function and the nth moment of the present value of all dividends until ruin are derived. We also derive integro-differential equations with boundary conditions for the Gerber-Shiu functions. The special case where the claim size distribution is exponential is considered in some detail.
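
    For intuition about the model itself (not about the integro-differential equations), here is a crude Euler-discretized Monte Carlo of the diffusion-perturbed compound Poisson surplus with a threshold dividend strategy; all parameter values are invented, dividends are left undiscounted, and constant interest is omitted for brevity.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    u0, c, sigma = 10.0, 1.5, 0.8        # initial surplus, premium rate, diffusion volatility
    lam, claim_mean = 1.0, 1.0           # Poisson claim rate, exponential mean claim size
    b, d_rate = 15.0, 0.5                # dividend threshold and dividend payout rate
    T, dt, n_paths = 25.0, 0.01, 20_000

    u = np.full(n_paths, u0)
    alive = np.ones(n_paths, dtype=bool)
    dividends = np.zeros(n_paths)

    for _ in range(int(T / dt)):
        pay = np.where(alive & (u > b), d_rate, 0.0)       # dividends only above the threshold
        n_claims = rng.poisson(lam * dt, n_paths)
        # dt is small enough that more than one claim per step is negligible
        claims = np.where(n_claims > 0, rng.exponential(claim_mean, n_paths), 0.0)
        du = (c - pay) * dt - claims + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
        u = np.where(alive, u + du, u)
        dividends += pay * dt
        alive &= u >= 0.0                                   # ruin when the surplus drops below 0

    print(f"estimated ruin probability by time T={T}: {1.0 - alive.mean():.3f}")
    print(f"mean (undiscounted) dividends paid      : {dividends.mean():.2f}")
    ```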

  11. Modeling laser velocimeter signals as triply stochastic Poisson processes

    NASA Technical Reports Server (NTRS)

    Mayo, W. T., Jr.

    1976-01-01

    Previous models of laser Doppler velocimeter (LDV) systems have not adequately described dual-scatter signals in a manner useful for analysis and simulation of low-level photon-limited signals. At low photon rates, an LDV signal at the output of a photomultiplier tube is a compound nonhomogeneous filtered Poisson process, whose intensity function is another (slower) Poisson process with the nonstationary rate and frequency parameters controlled by a random flow (slowest) process. In the present paper, generalized Poisson shot noise models are developed for low-level LDV signals. Theoretical results useful in detection error analysis and simulation are presented, along with measurements of burst amplitude statistics. Computer generated simulations illustrate the difference between Gaussian and Poisson models of low-level signals.

  12. Variational Gaussian approximation for Poisson data

    NASA Astrophysics Data System (ADS)

    Arridge, Simon R.; Ito, Kazufumi; Jin, Bangti; Zhang, Chen

    2018-02-01

    The Poisson model is frequently employed to describe count data, but in a Bayesian context it leads to an analytically intractable posterior probability distribution. In this work, we analyze a variational Gaussian approximation to the posterior distribution arising from the Poisson model with a Gaussian prior. This is achieved by seeking an optimal Gaussian distribution minimizing the Kullback-Leibler divergence from the posterior distribution to the approximation, or equivalently maximizing the lower bound for the model evidence. We derive an explicit expression for the lower bound, and show the existence and uniqueness of the optimal Gaussian approximation. The lower bound functional can be viewed as a variant of classical Tikhonov regularization that penalizes also the covariance. Then we develop an efficient alternating direction maximization algorithm for solving the optimization problem, and analyze its convergence. We discuss strategies for reducing the computational complexity via low rank structure of the forward operator and the sparsity of the covariance. Further, as an application of the lower bound, we discuss hierarchical Bayesian modeling for selecting the hyperparameter in the prior distribution, and propose a monotonically convergent algorithm for determining the hyperparameter. We present extensive numerical experiments to illustrate the Gaussian approximation and the algorithms.

  13. Evaluating the double Poisson generalized linear model.

    PubMed

    Zou, Yaotian; Geedipally, Srinivas Reddy; Lord, Dominique

    2013-10-01

    The objectives of this study are to: (1) examine the applicability of the double Poisson (DP) generalized linear model (GLM) for analyzing motor vehicle crash data characterized by over- and under-dispersion and (2) compare the performance of the DP GLM with the Conway-Maxwell-Poisson (COM-Poisson) GLM in terms of goodness-of-fit and theoretical soundness. The DP distribution has seldom been investigated and applied since its first introduction two decades ago. The hurdle for applying the DP is related to its normalizing constant (or multiplicative constant) which is not available in closed form. This study proposed a new method to approximate the normalizing constant of the DP with high accuracy and reliability. The DP GLM and COM-Poisson GLM were developed using two observed over-dispersed datasets and one observed under-dispersed dataset. The modeling results indicate that the DP GLM with its normalizing constant approximated by the new method can handle crash data characterized by over- and under-dispersion. Its performance is comparable to the COM-Poisson GLM in terms of goodness-of-fit (GOF), although COM-Poisson GLM provides a slightly better fit. For the over-dispersed data, the DP GLM performs similar to the NB GLM. Considering the fact that the DP GLM can be easily estimated with inexpensive computation and that it is simpler to interpret coefficients, it offers a flexible and efficient alternative for researchers to model count data. Copyright © 2013 Elsevier Ltd. All rights reserved.
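
    Using Efron's (1986) parameterization of the double Poisson density (an assumption about the exact form meant here), the normalizing constant can be obtained by brute-force summation, which is the quantity that approximation methods such as the one proposed in the paper target.

    ```python
    import numpy as np
    from scipy.special import gammaln

    def dp_log_kernel(y, mu, theta):
        """log of the (unnormalized) double Poisson kernel in Efron's parameterization:
        theta**0.5 * exp(-theta*mu) * (exp(-y) * y**y / y!) * (e*mu/y)**(theta*y)."""
        y = np.asarray(y, dtype=float)
        out = np.full(y.shape, 0.5 * np.log(theta) - theta * mu)    # value at y = 0
        pos = y > 0
        yp = y[pos]
        out[pos] += (-yp + yp * np.log(yp) - gammaln(yp + 1.0)
                     + theta * yp * (1.0 + np.log(mu) - np.log(yp)))
        return out

    mu, theta = 4.0, 0.6                 # theta < 1: over-dispersion relative to the Poisson
    ys = np.arange(0, 400)
    weights = np.exp(dp_log_kernel(ys, mu, theta))
    c = 1.0 / weights.sum()              # normalizing constant by direct (truncated) summation
    pmf = c * weights

    mean = np.sum(ys * pmf)
    print(f"normalizing constant c     : {c:.6f}")
    print(f"mean of the normalized pmf : {mean:.4f}  (approximately mu)")
    print(f"variance                   : {np.sum((ys - mean) ** 2 * pmf):.4f}  (roughly mu/theta)")
    ```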

  14. Schrödinger-Poisson-Vlasov-Poisson correspondence

    NASA Astrophysics Data System (ADS)

    Mocz, Philip; Lancaster, Lachlan; Fialkov, Anastasia; Becerra, Fernando; Chavanis, Pierre-Henri

    2018-04-01

    The Schrödinger-Poisson equations describe the behavior of a superfluid Bose-Einstein condensate under self-gravity with a 3D wave function. As ℏ/m → 0, m being the boson mass, the equations have been postulated to approximate the collisionless Vlasov-Poisson equations, also known as the collisionless Boltzmann-Poisson equations. The latter describe collisionless matter with a 6D classical distribution function. We investigate the nature of this correspondence with a suite of numerical test problems in 1D, 2D, and 3D along with analytic treatments when possible. We demonstrate that, while the density field of the superfluid always shows order unity oscillations as ℏ/m → 0 due to interference and the uncertainty principle, the potential field converges to the classical answer as (ℏ/m)². Thus, any dynamics coupled to the superfluid potential is expected to recover the classical collisionless limit as ℏ/m → 0. The quantum superfluid is able to capture rich phenomena such as multiple phase-sheets, shell-crossings, and warm distributions. Additionally, the quantum pressure tensor acts as a regularizer of caustics and singularities in classical solutions. This suggests the exciting prospect of using the Schrödinger-Poisson equations as a low-memory method for approximating the high-dimensional evolution of the Vlasov-Poisson equations. As a particular example we consider dark matter composed of ultralight axions, which in the classical limit (ℏ/m → 0) is expected to manifest itself as collisionless cold dark matter.

  15. Fractional Brownian motion and long term clinical trial recruitment

    PubMed Central

    Zhang, Qiang; Lai, Dejian

    2015-01-01

    Prediction of recruitment in clinical trials has been a challenging task. Many methods have been studied, including models based on the Poisson process and its large-sample approximation by Brownian motion (BM); however, when the independent-increments structure of the BM model is violated, fractional Brownian motion can be used to model and approximate the underlying Poisson processes with random rates. In this paper, fractional Brownian motion (FBM) is considered for such conditions and compared to the BM model with illustrative examples from different trials and simulations. PMID:26347306

  16. Fractional Brownian motion and long term clinical trial recruitment.

    PubMed

    Zhang, Qiang; Lai, Dejian

    2011-05-01

    Prediction of recruitment in clinical trials has been a challenging task. Many methods have been studied, including models based on the Poisson process and its large-sample approximation by Brownian motion (BM); however, when the independent-increments structure of the BM model is violated, fractional Brownian motion can be used to model and approximate the underlying Poisson processes with random rates. In this paper, fractional Brownian motion (FBM) is considered for such conditions and compared to the BM model with illustrative examples from different trials and simulations.

  17. Compound Poisson Law for Hitting Times to Periodic Orbits in Two-Dimensional Hyperbolic Systems

    NASA Astrophysics Data System (ADS)

    Carney, Meagan; Nicol, Matthew; Zhang, Hong-Kun

    2017-11-01

    We show that a compound Poisson distribution holds for scaled exceedances of observables φ uniquely maximized at a periodic point ζ in a variety of two-dimensional hyperbolic dynamical systems with singularities (M, T, μ), including the billiard maps of Sinai dispersing billiards in both the finite and infinite horizon case. The observable we consider is of the form φ(z) = -ln d(z, ζ), where d is a metric defined in terms of the stable and unstable foliations. The compound Poisson process we obtain is a Pólya-Aeppli distribution of index θ. We calculate θ in terms of the derivative of the map T. Furthermore, if we define M_n = max{φ, ..., φ∘T^n} and u_n(τ) by lim_{n→∞} n·μ(φ > u_n(τ)) = τ, the maximal process satisfies an extreme value law of the form μ(M_n ≤ u_n) = e^{-θτ}. These results generalize to a broader class of functions maximized at ζ, though the formulas regarding the parameters in the distribution need to be modified.

  18. A Poisson process approximation for generalized K-S confidence regions

    NASA Technical Reports Server (NTRS)

    Arsham, H.; Miller, D. R.

    1982-01-01

    One-sided confidence regions for continuous cumulative distribution functions are constructed using empirical cumulative distribution functions and the generalized Kolmogorov-Smirnov distance. The band width of such regions becomes narrower in the right or left tail of the distribution. To avoid tedious computation of confidence levels and critical values, an approximation based on the Poisson process is introduced. This approximation provides a conservative confidence region; moreover, the approximation error decreases monotonically to 0 as sample size increases. Critical values necessary for implementation are given. Applications are made to the areas of risk analysis, investment modeling, reliability assessment, and analysis of fault tolerant systems.

  19. Efficient exact motif discovery.

    PubMed

    Marschall, Tobias; Rahmann, Sven

    2009-06-15

    The motif discovery problem consists of finding over-represented patterns in a collection of biosequences. It is one of the classical sequence analysis problems, but still has not been satisfactorily solved in an exact and efficient manner. This is partly due to the large number of possibilities of defining the motif search space and the notion of over-representation. Even for well-defined formalizations, the problem is frequently solved in an ad hoc manner with heuristics that do not guarantee to find the best motif. We show how to solve the motif discovery problem (almost) exactly on a practically relevant space of IUPAC generalized string patterns, using the p-value with respect to an i.i.d. model or a Markov model as the measure of over-representation. In particular, (i) we use a highly accurate compound Poisson approximation for the null distribution of the number of motif occurrences. We show how to compute the exact clump size distribution using a recently introduced device called probabilistic arithmetic automaton (PAA). (ii) We define two p-value scores for over-representation, the first one based on the total number of motif occurrences, the second one based on the number of sequences in a collection with at least one occurrence. (iii) We describe an algorithm to discover the optimal pattern with respect to either of the scores. The method exploits monotonicity properties of the compound Poisson approximation and is by orders of magnitude faster than exhaustive enumeration of IUPAC strings (11.8 h compared with an extrapolated runtime of 4.8 years). (iv) We justify the use of the proposed scores for motif discovery by showing our method to outperform other motif discovery algorithms (e.g. MEME, Weeder) on benchmark datasets. We also propose new motifs on Mycobacterium tuberculosis. The method has been implemented in Java. It can be obtained from http://ls11-www.cs.tu-dortmund.de/people/marschal/paa_md/.

  20. Nonlinear Poisson Equation for Heterogeneous Media

    PubMed Central

    Hu, Langhua; Wei, Guo-Wei

    2012-01-01

    The Poisson equation is a widely accepted model for electrostatic analysis. However, the Poisson equation is derived based on electric polarizations in a linear, isotropic, and homogeneous dielectric medium. This article introduces a nonlinear Poisson equation to take into consideration of hyperpolarization effects due to intensive charges and possible nonlinear, anisotropic, and heterogeneous media. Variational principle is utilized to derive the nonlinear Poisson model from an electrostatic energy functional. To apply the proposed nonlinear Poisson equation for the solvation analysis, we also construct a nonpolar solvation energy functional based on the nonlinear Poisson equation by using the geometric measure theory. At a fixed temperature, the proposed nonlinear Poisson theory is extensively validated by the electrostatic analysis of the Kirkwood model and a set of 20 proteins, and the solvation analysis of a set of 17 small molecules whose experimental measurements are also available for a comparison. Moreover, the nonlinear Poisson equation is further applied to the solvation analysis of 21 compounds at different temperatures. Numerical results are compared to theoretical prediction, experimental measurements, and those obtained from other theoretical methods in the literature. A good agreement between our results and experimental data as well as theoretical results suggests that the proposed nonlinear Poisson model is a potentially useful model for electrostatic analysis involving hyperpolarization effects. PMID:22947937

  1. LD-SPatt: large deviations statistics for patterns on Markov chains.

    PubMed

    Nuel, G

    2004-01-01

    Statistics on Markov chains are widely used for the study of patterns in biological sequences. Statistics on these models can be computed through several approaches; central limit theorem (CLT) arguments producing Gaussian approximations are among the most popular. Unfortunately, in order to find a pattern of interest, these methods have to deal with tail-distribution events where the CLT approximation is especially bad. In this paper, we propose a new approach based on large deviations theory to assess pattern statistics. We first recall theoretical results for empirical mean (level 1) as well as empirical distribution (level 2) large deviations on Markov chains. Then, we present applications of these results, focusing on numerical issues. LD-SPatt is the name of the GPL software implementing these algorithms. We compare this approach to several existing ones in terms of complexity and reliability, and show that the large deviations approach is more reliable than the Gaussian approximations in absolute value as well as in terms of ranking, and is at least as reliable as compound Poisson approximations. We finally discuss some further possible improvements and applications of this new method.

  2. The scaling of oblique plasma double layers

    NASA Technical Reports Server (NTRS)

    Borovsky, J. E.

    1983-01-01

    Strong oblique plasma double layers are investigated using three methods, i.e., electrostatic particle-in-cell simulations, numerical solutions to the Poisson-Vlasov equations, and analytical approximations to the Poisson-Vlasov equations. The solutions to the Poisson-Vlasov equations and numerical simulations show that strong oblique double layers scale in terms of Debye lengths. For very large potential jumps, theory and numerical solutions indicate that all effects of the magnetic field vanish and the oblique double layers follow the same scaling relation as the field-aligned double layers.

  3. Application of zero-inflated Poisson mixed models in prognostic factors of hepatitis C.

    PubMed

    Akbarzadeh Baghban, Alireza; Pourhoseingholi, Asma; Zayeri, Farid; Jafari, Ali Akbar; Alavian, Seyed Moayed

    2013-01-01

    In recent years, hepatitis C virus (HCV) infection has represented a major public health problem. Evaluation of risk factors is one of the solutions which help protect people from the infection. This study aims to employ zero-inflated Poisson mixed models to evaluate prognostic factors of hepatitis C. The data were collected from a longitudinal study during 2005-2010. First, a mixed Poisson regression (PR) model was fitted to the data. Then, a mixed zero-inflated Poisson model was fitted with compound Poisson random effects. To evaluate the performance of the proposed mixed model, the standard errors of the estimators were compared. The results obtained from the mixed PR model showed that genotype 3 and the treatment protocol were statistically significant. Results of the zero-inflated Poisson mixed model showed that age, sex, genotypes 2 and 3, the treatment protocol, and having risk factors had significant effects on the viral load of HCV patients. Of the two models, the estimators of the zero-inflated Poisson mixed model had the smaller standard errors. The results showed that the mixed zero-inflated Poisson model provided almost the best fit. The proposed model can capture serial dependence, additional overdispersion, and excess zeros in longitudinal count data.
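
    As a stripped-down illustration of the fixed-effects part of such a model (no random effects, simulated data, and log/logit links assumed), a zero-inflated Poisson likelihood can be maximized directly:

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import gammaln, expit

    rng = np.random.default_rng(8)

    # Simulated data: one covariate affecting both the count mean and the zero-inflation
    n = 2000
    x = rng.normal(size=n)
    lam_true = np.exp(0.5 + 0.8 * x)           # Poisson mean, log link
    pi_true = expit(-1.0 + 0.6 * x)            # structural-zero probability, logit link
    y = np.where(rng.random(n) < pi_true, 0, rng.poisson(lam_true))

    def neg_loglik(params):
        b0, b1, g0, g1 = params
        lam = np.exp(b0 + b1 * x)
        pi = expit(g0 + g1 * x)
        log_pois = -lam + y * np.log(lam) - gammaln(y + 1)
        ll_zero = np.log(pi + (1.0 - pi) * np.exp(-lam))   # zeros: structural or Poisson
        ll_pos = np.log(1.0 - pi) + log_pois               # positive counts: Poisson part only
        return -np.sum(np.where(y == 0, ll_zero, ll_pos))

    res = minimize(neg_loglik, x0=np.zeros(4), method="BFGS")
    print("estimated (b0, b1, g0, g1):", np.round(res.x, 3))
    print("true      (b0, b1, g0, g1): [ 0.5  0.8 -1.   0.6]")
    ```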

  4. A novel multitarget model of radiation-induced cell killing based on the Gaussian distribution.

    PubMed

    Zhao, Lei; Mi, Dong; Sun, Yeqing

    2017-05-07

    The multitarget version of traditional target theory based on the Poisson distribution is still used to describe the dose-survival curves of cells after ionizing radiation in radiobiology and radiotherapy. However, noting that the usual ionizing radiation damage is the result of two sequential stochastic processes, the probability distribution of the number of damage events per cell should follow a compound Poisson distribution, such as Neyman's type A distribution (N. A.). Considering that the Gaussian distribution can be regarded as an approximation of the N. A. distribution in the case of high flux, a multitarget model based on the Gaussian distribution is proposed to describe cell inactivation effects under low linear energy transfer (LET) radiation with a high dose rate. Theoretical analysis and experimental data fitting indicate that the present theory is superior to the traditional multitarget model and similar to the Linear-Quadratic (LQ) model in describing the biological effects of low-LET radiation with a high dose rate, and the parameter ratio in the present model can be used as an alternative indicator to reflect the radiation damage and radiosensitivity of the cells. Copyright © 2017 Elsevier Ltd. All rights reserved.
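
    For reference, the two standard dose-survival models the abstract compares against can be written down directly; the parameter values below are invented, and the authors' Gaussian-based multitarget variant is not reproduced because its exact form is not given in the abstract.

    ```python
    import numpy as np

    def multitarget_survival(dose, n, d0):
        """Traditional (Poisson-based) multitarget model: S = 1 - (1 - exp(-D/D0))**n."""
        return 1.0 - (1.0 - np.exp(-dose / d0)) ** n

    def lq_survival(dose, alpha, beta):
        """Linear-Quadratic model: S = exp(-alpha*D - beta*D**2)."""
        return np.exp(-alpha * dose - beta * dose ** 2)

    doses = np.arange(0.0, 10.1, 2.0)   # Gy
    print("dose   multitarget(n=3, D0=1.5)   LQ(a=0.25, b=0.03)")
    for d in doses:
        print(f"{d:4.1f}   {multitarget_survival(d, 3, 1.5):>22.4f}   "
              f"{lq_survival(d, 0.25, 0.03):>16.4f}")
    ```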

  5. Nonlinear Poisson equation for heterogeneous media.

    PubMed

    Hu, Langhua; Wei, Guo-Wei

    2012-08-22

    The Poisson equation is a widely accepted model for electrostatic analysis. However, the Poisson equation is derived based on electric polarizations in a linear, isotropic, and homogeneous dielectric medium. This article introduces a nonlinear Poisson equation to take into consideration of hyperpolarization effects due to intensive charges and possible nonlinear, anisotropic, and heterogeneous media. Variational principle is utilized to derive the nonlinear Poisson model from an electrostatic energy functional. To apply the proposed nonlinear Poisson equation for the solvation analysis, we also construct a nonpolar solvation energy functional based on the nonlinear Poisson equation by using the geometric measure theory. At a fixed temperature, the proposed nonlinear Poisson theory is extensively validated by the electrostatic analysis of the Kirkwood model and a set of 20 proteins, and the solvation analysis of a set of 17 small molecules whose experimental measurements are also available for a comparison. Moreover, the nonlinear Poisson equation is further applied to the solvation analysis of 21 compounds at different temperatures. Numerical results are compared to theoretical prediction, experimental measurements, and those obtained from other theoretical methods in the literature. A good agreement between our results and experimental data as well as theoretical results suggests that the proposed nonlinear Poisson model is a potentially useful model for electrostatic analysis involving hyperpolarization effects. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  6. A Family of Poisson Processes for Use in Stochastic Models of Precipitation

    NASA Astrophysics Data System (ADS)

    Penland, C.

    2013-12-01

    Both modified Poisson processes and compound Poisson processes can be relevant to stochastic parameterization of precipitation. This presentation compares the dynamical properties of these systems and discusses the physical situations in which each might be appropriate. If the parameters describing either class of system originate in hydrodynamics, then proper consideration of stochastic calculus is required during numerical implementation of the parameterization. It is shown here that an improper numerical treatment can have severe implications for estimating rainfall distributions, particularly in the tails of the distributions, and thus for the frequency of extreme events.
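
    As a toy illustration of the compound Poisson alternative mentioned here, the sketch below draws a Poisson number of rain events per day and gives each event an exponentially distributed amount; the event rate and mean amount are arbitrary placeholder values, not parameters from any hydrodynamic derivation.

```python
# Toy compound Poisson rainfall generator (assumed, illustrative parameters):
# the number of events per day is Poisson and each event delivers an
# exponentially distributed rainfall amount.
import numpy as np

rng = np.random.default_rng(2)
rate_per_day = 0.8     # mean number of rain events per day (assumed)
mean_amount = 5.0      # mean rainfall per event in mm (assumed)
n_days = 10_000

n_events = rng.poisson(rate_per_day, size=n_days)
daily_rain = np.array([rng.exponential(mean_amount, k).sum() for k in n_events])

print("fraction of dry days:", np.mean(daily_rain == 0.0))
print("mean daily rainfall :", daily_rain.mean(),
      "   (theory:", rate_per_day * mean_amount, "mm)")
```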

  7. Poisson Approximation-Based Score Test for Detecting Association of Rare Variants.

    PubMed

    Fang, Hongyan; Zhang, Hong; Yang, Yaning

    2016-07-01

    Genome-wide association study (GWAS) has achieved great success in identifying genetic variants, but the nature of GWAS determines its inherent limitations. Under the common disease rare variants (CDRV) hypothesis, the traditional association analysis methods commonly used in GWAS for common variants do not have enough power for detecting rare variants with a limited sample size. As a solution to this problem, pooling rare variants by their functions provides an efficient way of identifying susceptible genes. Rare variants typically have low minor allele frequencies, and the distribution of the total number of minor alleles of the rare variants can be approximated by a Poisson distribution. Based on this fact, we propose a new test method, the Poisson Approximation-based Score Test (PAST), for association analysis of rare variants. Two testing methods, namely ePAST and mPAST, are proposed based on different strategies of pooling rare variants. Simulation results and application to the CRESCENDO cohort data show that our methods are more powerful than existing methods. © 2016 John Wiley & Sons Ltd/University College London.

  8. Solution of the nonlinear Poisson-Boltzmann equation: Application to ionic diffusion in cementitious materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnold, J.; Kosson, D.S., E-mail: david.s.kosson@vanderbilt.edu; Garrabrants, A.

    2013-02-15

    A robust numerical solution of the nonlinear Poisson-Boltzmann equation for asymmetric polyelectrolyte solutions in discrete pore geometries is presented. Comparisons to the linearized approximation of the Poisson-Boltzmann equation reveal that the assumptions leading to linearization may not be appropriate for the electrochemical regime in many cementitious materials. Implications of the electric double layer on both partitioning of species and on diffusive release are discussed. The influence of the electric double layer on anion diffusion relative to cation diffusion is examined.

  9. Oscillatory Reduction in Option Pricing Formula Using Shifted Poisson and Linear Approximation

    NASA Astrophysics Data System (ADS)

    Nur Rachmawati, Ro'fah; Irene; Budiharto, Widodo

    2014-03-01

    An option is one of the derivative instruments that can help investors improve their expected return and minimize risk. However, the Black-Scholes formula that is generally used to determine the option price does not involve a skewness factor, and it is difficult to apply in computation because it produces oscillations for skewness values close to zero. In this paper, we construct an option pricing formula that involves skewness by modifying the Black-Scholes formula using a shifted Poisson model and transforming it into the form of a linear approximation in the complete market to reduce the oscillation. The results show that the linear approximation formula can predict the option price very accurately and successfully reduces the oscillations in the calculation process.

  10. Harnessing the theoretical foundations of the exponential and beta-Poisson dose-response models to quantify parameter uncertainty using Markov Chain Monte Carlo.

    PubMed

    Schmidt, Philip J; Pintar, Katarina D M; Fazil, Aamir M; Topp, Edward

    2013-09-01

    Dose-response models are the essential link between exposure assessment and computed risk values in quantitative microbial risk assessment, yet the uncertainty that is inherent to computed risks because the dose-response model parameters are estimated using limited epidemiological data is rarely quantified. Second-order risk characterization approaches incorporating uncertainty in dose-response model parameters can provide more complete information to decisionmakers by separating variability and uncertainty to quantify the uncertainty in computed risks. Therefore, the objective of this work is to develop procedures to sample from posterior distributions describing uncertainty in the parameters of exponential and beta-Poisson dose-response models using Bayes's theorem and Markov Chain Monte Carlo (in OpenBUGS). The theoretical origins of the beta-Poisson dose-response model are used to identify a decomposed version of the model that enables Bayesian analysis without the need to evaluate Kummer confluent hypergeometric functions. Herein, it is also established that the beta distribution in the beta-Poisson dose-response model cannot address variation among individual pathogens, criteria to validate use of the conventional approximation to the beta-Poisson model are proposed, and simple algorithms to evaluate actual beta-Poisson probabilities of infection are investigated. The developed MCMC procedures are applied to analysis of a case study data set, and it is demonstrated that an important region of the posterior distribution of the beta-Poisson dose-response model parameters is attributable to the absence of low-dose data. This region includes beta-Poisson models for which the conventional approximation is especially invalid and in which many beta distributions have an extreme shape with questionable plausibility. © Her Majesty the Queen in Right of Canada 2013. Reproduced with the permission of the Minister of the Public Health Agency of Canada.
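
    The conventional approximation referred to above has a simple closed form, P(d) ≈ 1 − (1 + d/β)^(−α), while the exact beta-Poisson probability involves the Kummer confluent hypergeometric function, P(d) = 1 − 1F1(α; α + β; −d). The sketch below compares the two for illustrative parameter values, not the paper's posterior estimates.

```python
# Sketch (assumed parameter values): exact beta-Poisson dose-response via the
# Kummer confluent hypergeometric function versus the conventional
# approximation 1 - (1 + d/beta)**(-alpha).
import numpy as np
from scipy.special import hyp1f1

alpha, beta_par = 0.25, 16.0   # illustrative, not fitted values

def p_exact(dose):
    # P(infection) = 1 - E[exp(-r*dose)] with r ~ Beta(alpha, beta_par)
    return 1.0 - hyp1f1(alpha, alpha + beta_par, -dose)

def p_approx(dose):
    return 1.0 - (1.0 + dose / beta_par) ** (-alpha)

for d in [0.1, 1.0, 10.0, 50.0]:
    print(f"dose = {d:6.1f}   exact = {p_exact(d):.4f}   approx = {p_approx(d):.4f}")
```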

  11. Structural, electronic, mechanical, and thermoelectric properties of a novel half Heusler compound HfPtPb

    NASA Astrophysics Data System (ADS)

    Kaur, Kulwinder; Rai, D. P.; Thapa, R. K.; Srivastava, Sunita

    2017-07-01

    We explore the structural, electronic, mechanical, and thermoelectric properties of HfPtPb, a new half Heusler compound composed entirely of heavy metallic elements that was recently proposed to be stable [Gautier et al., Nat. Chem. 7, 308 (2015)]. In this work, we employ density functional theory and the semi-classical Boltzmann transport equations within the constant relaxation time approximation. The mechanical properties, such as the shear modulus, Young's modulus, elastic constants, Poisson's ratio, and shear anisotropy factor, have been investigated. The elastic and phonon properties reveal that this compound is mechanically and dynamically stable. Pugh's ratio and Frantsevich's ratio demonstrate its ductile behavior, and the shear anisotropy factor reveals the anisotropic nature of HfPtPb. The band structure predicts this compound to be a semiconductor with a band gap of 0.86 eV. The thermoelectric transport parameters, such as the Seebeck coefficient, electrical conductivity, electronic thermal conductivity, and lattice thermal conductivity, have been calculated as functions of temperature. The highest value of the Seebeck coefficient is obtained for n-type doping at an optimal carrier concentration of 1.0 × 10²⁰ e/cm³. We predict a maximum figure of merit of 0.25 at 1000 K. Our investigation suggests that this material is an n-type semiconductor.

  12. Lindley frailty model for a class of compound Poisson processes

    NASA Astrophysics Data System (ADS)

    Kadilar, Gamze Özel; Ata, Nihal

    2013-10-01

    The Lindley distribution has gained importance in survival analysis because of its similarity to the exponential distribution and its allowance for different shapes of the hazard function. Frailty models provide an alternative to the proportional hazards model when misspecified or omitted covariates are described by an unobservable random variable. Although the frailty distribution is generally assumed to be continuous, it is appropriate to consider discrete frailty distributions in some circumstances. In this paper, frailty models with a discrete compound Poisson process for Lindley-distributed failure times are introduced. Survival functions are derived and maximum likelihood estimation procedures for the parameters are studied. Then, the fit of the models to an earthquake data set from Turkey is examined.

  13. Orientational analysis of planar fibre systems observed as a Poisson shot-noise process.

    PubMed

    Kärkkäinen, Salme; Lantuéjoul, Christian

    2007-10-01

    We consider two-dimensional fibrous materials observed as a digital greyscale image. The problem addressed is to estimate the orientation distribution of unobservable thin fibres from a greyscale image modelled by a planar Poisson shot-noise process. The classical stereological approach is not straightforward, because the point intensities of thin fibres along sampling lines may not be observable. For such cases, Kärkkäinen et al. (2001) suggested the use of scaled variograms determined from grey values along sampling lines in several directions. Their method is based on the assumption that the ratio between the scaled variograms and the point intensities is constant in all directions of the sampling lines. This assumption is proved to be valid asymptotically for Boolean models and dead leaves models, under some regularity conditions. In this work, we derive the scaled variogram and its approximations for a planar Poisson shot-noise process using the modified Bessel function. In the case of reasonably high resolution of the observed image, the scaled variogram has an approximate functional relation to the point intensity, and in the case of high resolution the relation is proportional. As the obtained relations are approximate, they are tested on simulations. The existing orientation analysis method based on the proportional relation is further tested on images with different resolutions. The new result, the asymptotic proportionality between the scaled variograms and the point intensities for a Poisson shot-noise process, completes the earlier results for the Boolean models and for the dead leaves models.

  14. Determining the Uncertainty of X-Ray Absorption Measurements

    PubMed Central

    Wojcik, Gary S.

    2004-01-01

    X-ray absorption (or more properly, x-ray attenuation) techniques have been applied to study the moisture movement in and moisture content of materials like cement paste, mortar, and wood. An increase in the number of x-ray counts with time at a location in a specimen may indicate a decrease in moisture content. The uncertainty of measurements from an x-ray absorption system, which must be known to properly interpret the data, is often assumed to be the square root of the number of counts, as in a Poisson process. No detailed studies have heretofore been conducted to determine the uncertainty of x-ray absorption measurements or the effect of averaging data on the uncertainty. In this study, the Poisson estimate was found to adequately approximate normalized root mean square errors (a measure of uncertainty) of counts for point measurements and profile measurements of water specimens. The Poisson estimate, however, was not reliable in approximating the magnitude of the uncertainty when averaging data from paste and mortar specimens. Changes in uncertainty from differing averaging procedures were well-approximated by a Poisson process. The normalized root mean square errors decreased when the x-ray source intensity, integration time, collimator size, and number of scanning repetitions increased. Uncertainties in mean paste and mortar count profiles were kept below 2 % by averaging vertical profiles at horizontal spacings of 1 mm or larger with counts per point above 4000. Maximum normalized root mean square errors did not exceed 10 % in any of the tests conducted. PMID:27366627

  15. Multilevel Sequential Monte Carlo Samplers for Normalizing Constants

    DOE PAGES

    Moral, Pierre Del; Jasra, Ajay; Law, Kody J. H.; ...

    2017-08-24

    This article considers the sequential Monte Carlo (SMC) approximation of ratios of normalizing constants associated to posterior distributions which in principle rely on continuum models. Therefore, the Monte Carlo estimation error and the discrete approximation error must be balanced. A multilevel strategy is utilized to substantially reduce the cost to obtain a given error level in the approximation as compared to standard estimators. Two estimators are considered and relative variance bounds are given. The theoretical results are numerically illustrated for two Bayesian inverse problems arising from elliptic partial differential equations (PDEs). The examples involve the inversion of observations of the solution of (i) a 1-dimensional Poisson equation to infer the diffusion coefficient, and (ii) a 2-dimensional Poisson equation to infer the external forcing.

  16. Sample size calculations for comparative clinical trials with over-dispersed Poisson process data.

    PubMed

    Matsui, Shigeyuki

    2005-05-15

    This paper develops a new formula for sample size calculations for comparative clinical trials with Poisson or over-dispersed Poisson process data. The criterion for the sample size calculation is developed on the basis of asymptotic approximations for a two-sample non-parametric test that compares the empirical event rate function between treatment groups. This formula can accommodate time heterogeneity, inter-patient heterogeneity in event rate, and also time-varying treatment effects. An application of the formula to a trial for chronic granulomatous disease is provided. Copyright 2004 John Wiley & Sons, Ltd.

  17. Response analysis of a class of quasi-linear systems with fractional derivative excited by Poisson white noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Yongge; Xu, Wei, E-mail: weixu@nwpu.edu.cn; Yang, Guidong

    Poisson white noise, as a typical non-Gaussian excitation, has attracted much attention recently. However, little work has addressed stochastic systems with fractional derivatives under Poisson white noise excitation. This paper investigates the stationary response of a class of quasi-linear systems with fractional derivative excited by Poisson white noise. The equivalent stochastic system of the original stochastic system is obtained. Then, approximate stationary solutions are obtained with the help of the perturbation method. Finally, two typical examples are discussed in detail to demonstrate the effectiveness of the proposed method. The analysis also shows that the fractional order and the fractional coefficient significantly affect the responses of the stochastic systems with fractional derivative.

  18. Bayesian analysis of volcanic eruptions

    NASA Astrophysics Data System (ADS)

    Ho, Chih-Hsiang

    1990-10-01

    The simple Poisson model generally gives a good fit to many volcanoes for volcanic eruption forecasting. Nonetheless, empirical evidence suggests that volcanic activity in successive equal time-periods tends to be more variable than a simple Poisson with constant eruptive rate. An alternative model is therefore examined in which the eruptive rate (λ) for a given volcano or cluster(s) of volcanoes is described by a gamma distribution (prior) rather than treated as a constant value, as in the assumptions of a simple Poisson model. Bayesian analysis is performed to link the two distributions together and give the aggregate behavior of the volcanic activity. When the Poisson process is expanded to accommodate a gamma mixing distribution on λ, a consequence of this mixed (or compound) Poisson model is that the frequency distribution of eruptions in any given time-period of equal length follows the negative binomial distribution (NBD). Applications of the proposed model and comparisons between the generalized model and the simple Poisson model are discussed based on the historical eruptive count data of the volcanoes Mauna Loa (Hawaii) and Etna (Italy). Several relevant facts lead to the conclusion that the generalized model is preferable for practical use both in space and time.
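
    The gamma-mixed Poisson fact used here is easy to verify numerically: if λ is drawn from a gamma distribution and the count is Poisson given λ, the marginal count distribution is negative binomial. The sketch below uses illustrative shape and rate values rather than the Mauna Loa or Etna estimates.

```python
# Numerical check (assumed gamma parameters): a gamma-mixed Poisson count is
# negative binomial with r = shape and p = rate / (rate + 1).
import numpy as np
from scipy.stats import nbinom

rng = np.random.default_rng(3)
shape, rate, n = 4.0, 2.0, 100_000   # gamma(shape, rate) prior on lambda

lam = rng.gamma(shape, 1.0 / rate, size=n)   # numpy uses a scale parameter
counts = rng.poisson(lam)

r, p = shape, rate / (rate + 1.0)
for k in range(5):
    print(k, round(np.mean(counts == k), 4), round(nbinom.pmf(k, r, p), 4))
```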

  19. Estimating the intensity of a cyclic Poisson process in the presence of additive and multiplicative linear trend

    NASA Astrophysics Data System (ADS)

    Wayan Mangku, I.

    2017-10-01

    In this paper we survey some results on estimation of the intensity function of a cyclic Poisson process in the presence of additive and multiplicative linear trend. We do not assume any parametric form for the cyclic component of the intensity function, except that it is periodic. Moreover, we consider the case when only a single realization of the Poisson process is observed in a bounded interval. The considered estimators are weakly and strongly consistent when the size of the observation interval indefinitely expands. Asymptotic approximations to the bias and variance of those estimators are presented.

  20. Validation of the Poisson Stochastic Radiative Transfer Model

    NASA Technical Reports Server (NTRS)

    Zhuravleva, Tatiana; Marshak, Alexander

    2004-01-01

    A new approach to validation of the Poisson stochastic radiative transfer method is proposed. In contrast to other validations of stochastic models, the main parameter of the Poisson model responsible for cloud geometrical structure - the cloud aspect ratio - is determined entirely by matching measurements and calculations of the direct solar radiation. If measurements of the direct solar radiation are unavailable, it is shown that there is a range of aspect ratios that allows the stochastic model to accurately approximate the average measurements of surface downward and cloud top upward fluxes. Realizations of the fractionally integrated cascade model are taken as a prototype of real measurements.

  1. Irreversible thermodynamics of Poisson processes with reaction.

    PubMed

    Méndez, V; Fort, J

    1999-11-01

    A kinetic model is derived to study the successive movements of particles, described by a Poisson process, as well as their generation. The irreversible thermodynamics of this system is also studied from the kinetic model. This makes it possible to evaluate the differences between thermodynamical quantities computed exactly and up to second-order. Such differences determine the range of validity of the second-order approximation to extended irreversible thermodynamics.

  2. A Poisson-like closed-form expression for the steady-state wealth distribution in a kinetic model of gambling

    NASA Astrophysics Data System (ADS)

    Garcia, Jane Bernadette Denise M.; Esguerra, Jose Perico H.

    2017-08-01

    An approximate but closed-form expression for a Poisson-like steady-state wealth distribution in a kinetic model of gambling was formulated from a finite number of its moments, which were generated from a β_{a,b}(x) exchange distribution. The obtained steady-state wealth distributions have tails which are qualitatively similar to those observed in actual wealth distributions.

  3. Radio pulsar glitches as a state-dependent Poisson process

    NASA Astrophysics Data System (ADS)

    Fulgenzi, W.; Melatos, A.; Hughes, B. D.

    2017-10-01

    Gross-Pitaevskii simulations of vortex avalanches in a neutron star superfluid are limited computationally to ≲10² vortices and ≲10² avalanches, making it hard to study the long-term statistics of radio pulsar glitches in realistically sized systems. Here, an idealized, mean-field model of the observed Gross-Pitaevskii dynamics is presented, in which vortex unpinning is approximated as a state-dependent, compound Poisson process in a single random variable, the spatially averaged crust-superfluid lag. Both the lag-dependent Poisson rate and the conditional distribution of avalanche-driven lag decrements are inputs into the model, which is solved numerically (via Monte Carlo simulations) and analytically (via a master equation). The output statistics are controlled by two dimensionless free parameters: α, the glitch rate at a reference lag, multiplied by the critical lag for unpinning, divided by the spin-down rate; and β, the minimum fraction of the lag that can be restored by a glitch. The system evolves naturally to a self-regulated stationary state, whose properties are determined by α/α_c(β), where α_c(β) ≈ β^(-1/2) is a transition value. In the regime α ≳ α_c(β), one recovers qualitatively the power-law size and exponential waiting-time distributions observed in many radio pulsars and Gross-Pitaevskii simulations. For α ≪ α_c(β), the size and waiting-time distributions are both power-law-like, and a correlation emerges between size and waiting time until the next glitch, contrary to what is observed in most pulsars. Comparisons with astrophysical data are restricted by the small sample sizes available at present, with ≤35 events observed per pulsar.

  4. A note on Poisson goodness-of-fit tests for ionizing radiation induced chromosomal aberration samples.

    PubMed

    Higueras, Manuel; González, J E; Di Giorgio, Marina; Barquinero, J F

    2018-06-13

    To present Poisson exact goodness-of-fit tests as alternatives and complements to the asymptotic u-test, which is the most widely used test in cytogenetic biodosimetry, for deciding whether a sample of chromosomal aberrations in blood cells comes from a homogeneous or an inhomogeneous exposure. Three Poisson exact goodness-of-fit tests from the literature are introduced and implemented in the R environment. A Shiny R Studio application, named GOF Poisson, has been updated for the purpose of giving support to this work. The three exact tests and the u-test are applied to chromosomal aberration data from clinical and accidental radiation exposure patients. It is observed that the u-test is not an appropriate approximation in small samples with a small yield of chromosomal aberrations. Tools are provided to compute the three exact tests, whose implementation is not as trivial as that of the u-test. Poisson exact goodness-of-fit tests should be considered jointly with the u-test for detecting inhomogeneous exposures in cytogenetic biodosimetry practice.
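
    One exact-style alternative to the asymptotic u-test can be built by conditioning on the total number of aberrations: under a homogeneous Poisson model the counts, given their total, are multinomial with equal cell probabilities, so the reference distribution of the dispersion index can be simulated directly. The sketch below implements this generic Monte Carlo construction with toy data; it is not necessarily one of the three tests introduced in the paper.

```python
# Hedged sketch (toy data, generic construction): Monte Carlo reference
# distribution of the dispersion index conditional on the total number of
# aberrations, as an exact-style alternative to the asymptotic u-test.
import numpy as np

rng = np.random.default_rng(4)

# Toy per-cell aberration counts (assumed, not a real patient sample)
cells = np.array([0] * 90 + [1] * 6 + [2] * 3 + [5] * 1)
n, total = len(cells), cells.sum()

def dispersion(x):
    return x.var(ddof=1) / x.mean()

obs = dispersion(cells)

n_sim = 20_000
sims = np.empty(n_sim)
for i in range(n_sim):
    # Under homogeneous Poisson, counts given their total are equiprobable multinomial
    sims[i] = dispersion(rng.multinomial(total, np.full(n, 1.0 / n)))

p_value = (np.sum(sims >= obs) + 1) / (n_sim + 1)
print("observed dispersion index:", round(obs, 2), "   one-sided p-value:", round(p_value, 4))
```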

  5. Some functional limit theorems for compound Cox processes

    NASA Astrophysics Data System (ADS)

    Korolev, Victor Yu.; Chertok, A. V.; Korchagin, A. Yu.; Kossova, E. V.; Zeifman, Alexander I.

    2016-06-01

    An improved version of the functional limit theorem is proved establishing weak convergence of random walks generated by compound doubly stochastic Poisson processes (compound Cox processes) to Lévy processes in the Skorokhod space under more realistic moment conditions. As corollaries, theorems are proved on convergence of random walks with jumps having finite variances to Lévy processes with variance-mean mixed normal distributions, in particular, to stable Lévy processes.

  6. Some functional limit theorems for compound Cox processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korolev, Victor Yu.; Institute of Informatics Problems FRC CSC RAS; Chertok, A. V.

    2016-06-08

    An improved version of the functional limit theorem is proved establishing weak convergence of random walks generated by compound doubly stochastic Poisson processes (compound Cox processes) to Lévy processes in the Skorokhod space under more realistic moment conditions. As corollaries, theorems are proved on convergence of random walks with jumps having finite variances to Lévy processes with variance-mean mixed normal distributions, in particular, to stable Lévy processes.

  7. A Poisson process model for hip fracture risk.

    PubMed

    Schechner, Zvi; Luo, Gangming; Kaufman, Jonathan J; Siffert, Robert S

    2010-08-01

    The primary method for assessing fracture risk in osteoporosis relies on measurement of bone mass. Estimation of fracture risk is most often evaluated using logistic or proportional hazards models. Notwithstanding the success of these models, there is still much uncertainty as to who will or will not suffer a fracture. This has led to a search for other components besides mass that affect bone strength. The purpose of this paper is to introduce a new mechanistic stochastic model that characterizes the risk of hip fracture in an individual. A Poisson process is used to model the occurrence of falls, which are assumed to occur at a rate λ. The load induced by a fall is assumed to be a random variable that has a Weibull probability distribution. The combination of falls together with loads leads to a compound Poisson process. By retaining only those occurrences of the compound Poisson process that result in a hip fracture, a thinned Poisson process is defined, which is itself a Poisson process. The fall rate is modeled as an affine function of age, and hip strength is modeled as a power-law function of bone mineral density (BMD). The risk of hip fracture can then be computed as a function of age and BMD. By extending the analysis to a Bayesian framework, the conditional densities of BMD given a prior fracture and given no prior fracture can be computed and shown to be consistent with clinical observations. In addition, the conditional probabilities of fracture given a prior fracture and given no prior fracture can also be computed, and also demonstrate results similar to clinical data. The model elucidates the fact that the hip fracture process is inherently random and that improvements in hip strength estimation over and above that provided by BMD operate in a highly "noisy" environment and may therefore have little ability to impact clinical practice.
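
    The thinning argument in this abstract has a compact closed form: if falls occur at Poisson rate λ and each fall load exceeds the hip strength with probability p (the Weibull survival function evaluated at the strength), fractures form a Poisson process with rate λp. The sketch below evaluates this with entirely hypothetical rate, Weibull, and strength values, not the paper's fitted age- and BMD-dependent quantities.

```python
# Sketch of the thinned Poisson fracture model (all numbers are hypothetical
# placeholders, not the paper's fitted values).
import numpy as np

fall_rate = 2.0          # expected falls per year (assumed)
weibull_shape = 2.0      # assumed Weibull shape for fall loads
weibull_scale = 3000.0   # assumed Weibull scale, in Newtons
strength = 4000.0        # assumed hip strength, in Newtons
years = 5.0

# Probability that a single fall load exceeds the strength (Weibull survival)
p_fracture_per_fall = np.exp(-(strength / weibull_scale) ** weibull_shape)

# Fractures form a thinned Poisson process with rate fall_rate * p
thinned_rate = fall_rate * p_fracture_per_fall
p_at_least_one = 1.0 - np.exp(-thinned_rate * years)

print("P(fracture | one fall)     :", round(p_fracture_per_fall, 4))
print("P(>= 1 fracture in 5 years):", round(p_at_least_one, 4))
```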

  8. POSTPROCESSING MIXED FINITE ELEMENT METHODS FOR SOLVING CAHN-HILLIARD EQUATION: METHODS AND ERROR ANALYSIS

    PubMed Central

    Wang, Wansheng; Chen, Long; Zhou, Jie

    2015-01-01

    A postprocessing technique for mixed finite element methods for the Cahn-Hilliard equation is developed and analyzed. Once the mixed finite element approximations have been computed at a fixed time on the coarser mesh, the approximations are postprocessed by solving two decoupled Poisson equations in an enriched finite element space (either on a finer grid or a higher-order space) for which many fast Poisson solvers can be applied. The nonlinear iteration is only applied to a much smaller problem, and the computational cost using Newton and direct solvers is negligible compared with the cost of the linear problem. The analysis presented here shows that this technique retains the optimal rate of convergence for both the concentration and the chemical potential approximations. The corresponding error estimates obtained in our paper, especially the negative-norm error estimates, are non-trivial and differ from existing results in the literature. PMID:27110063

  9. Multiparameter linear least-squares fitting to Poisson data one count at a time

    NASA Technical Reports Server (NTRS)

    Wheaton, Wm. A.; Dunklee, Alfred L.; Jacobsen, Allan S.; Ling, James C.; Mahoney, William A.; Radocinski, Robert G.

    1995-01-01

    A standard problem in gamma-ray astronomy data analysis is the decomposition of a set of observed counts, described by Poisson statistics, according to a given multicomponent linear model, with underlying physical count rates or fluxes which are to be estimated from the data. Despite its conceptual simplicity, the linear least-squares (LLSQ) method for solving this problem has generally been limited to situations in which the number n_i of counts in each bin i is not too small, conventionally more than 5-30. It seems to be widely believed that the failure of the LLSQ method for small counts is due to the failure of the Poisson distribution to be even approximately normal for small numbers. The cause is more accurately the strong anticorrelation between the data and the weights w_i in the weighted LLSQ method when √n_i instead of √n̄_i is used to approximate the uncertainties, σ_i, in the data, where n̄_i = E(n_i) is the expected value of n_i. We show in an appendix that, avoiding this approximation, the correct equations for the Poisson LLSQ (PLLSQ) problem are actually identical to those for the maximum likelihood estimate using the exact Poisson distribution. We apply the method to solve a problem in high-resolution gamma-ray spectroscopy for the JPL High-Resolution Gamma-Ray Spectrometer flown on HEAO 3. Systematic error in subtracting the strong, highly variable background encountered in the low-energy gamma-ray region can be significantly reduced by closely pairing source and background data in short segments. Significant results can be built up by weighted averaging of the net fluxes obtained from the subtraction of many individual source/background pairs. Extension of the approach to complex situations, with multiple cosmic sources and realistic background parameterizations, requires a means of efficiently fitting data from single scans in the narrow (approximately 1.2 keV, HEAO 3) energy channels of a Ge spectrometer, where the expected number of counts obtained per scan may be very low. Such an analysis system is discussed and compared to the method previously used.
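
    The equivalence noted in the abstract (correct Poisson weighting coincides with the exact-Poisson maximum-likelihood estimate) can be illustrated by maximizing the Poisson log-likelihood of a linear two-component model directly. The sketch below is a generic illustration with an invented design matrix and parameters, not the HEAO 3 analysis system.

```python
# Generic sketch (assumed model and parameters): fit a linear multicomponent
# model to low-count Poisson data by maximizing the exact Poisson likelihood.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Two components: a flat background and a Gaussian-shaped line profile
n_bins = 50
A = np.column_stack([np.ones(n_bins),
                     np.exp(-0.5 * ((np.arange(n_bins) - 25.0) / 3.0) ** 2)])
x_true = np.array([0.5, 4.0])      # background level and line amplitude (assumed)
counts = rng.poisson(A @ x_true)

def neg_loglik(x):
    mu = A @ x
    if np.any(mu <= 0.0):
        return np.inf
    # Poisson negative log-likelihood up to an additive constant
    return np.sum(mu - counts * np.log(mu))

fit = minimize(neg_loglik, x0=np.array([1.0, 1.0]), method="Nelder-Mead")
print("true parameters:", x_true)
print("Poisson ML fit :", fit.x)
```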

  10. Nonparametric Bayesian Segmentation of a Multivariate Inhomogeneous Space-Time Poisson Process.

    PubMed

    Ding, Mingtao; He, Lihan; Dunson, David; Carin, Lawrence

    2012-12-01

    A nonparametric Bayesian model is proposed for segmenting time-evolving multivariate spatial point process data. An inhomogeneous Poisson process is assumed, with a logistic stick-breaking process (LSBP) used to encourage piecewise-constant spatial Poisson intensities. The LSBP explicitly favors spatially contiguous segments, and infers the number of segments based on the observed data. The temporal dynamics of the segmentation and of the Poisson intensities are modeled with exponential correlation in time, implemented in the form of a first-order autoregressive model for uniformly sampled discrete data, and via a Gaussian process with an exponential kernel for general temporal sampling. We consider and compare two different inference techniques: a Markov chain Monte Carlo sampler, which has relatively high computational complexity; and an approximate and efficient variational Bayesian analysis. The model is demonstrated with a simulated example and a real example of space-time crime events in Cincinnati, Ohio, USA.

  11. Image denoising in mixed Poisson-Gaussian noise.

    PubMed

    Luisier, Florian; Blu, Thierry; Unser, Michael

    2011-03-01

    We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.

  12. A spectral Poisson solver for kinetic plasma simulation

    NASA Astrophysics Data System (ADS)

    Szeremley, Daniel; Obberath, Jens; Brinkmann, Ralf

    2011-10-01

    Plasma resonance spectroscopy is a well established plasma diagnostic method, realized in several designs. One of these designs is the multipole resonance probe (MRP). In its idealized - geometrically simplified - version it consists of two dielectrically shielded, hemispherical electrodes to which an RF signal is applied. A numerical tool is under development which is capable of simulating the dynamics of the plasma surrounding the MRP in electrostatic approximation. In this contribution we concentrate on the specialized Poisson solver for that tool. The plasma is represented by an ensemble of point charges. By expanding both the charge density and the potential into spherical harmonics, a largely analytical solution of the Poisson problem can be employed. For a practical implementation, the expansion must be appropriately truncated. With this spectral solver we are able to efficiently solve the Poisson equation in a kinetic plasma simulation without the need of introducing a spatial discretization.

  13. A special case of the Poisson PDE formulated for Earth's surface and its capability to approximate the terrain mass density employing land-based gravity data, a case study in the south of Iran

    NASA Astrophysics Data System (ADS)

    AllahTavakoli, Yahya; Safari, Abdolreza; Vaníček, Petr

    2016-12-01

    This paper resurrects a version of Poisson's Partial Differential Equation (PDE) associated with the gravitational field at the Earth's surface and illustrates how the PDE possesses a capability to extract the mass density of Earth's topography from land-based gravity data. Herein, first we propound a theorem which mathematically introduces this version of Poisson's PDE adapted for the Earth's surface and then we use this PDE to develop a method of approximating the terrain mass density. Also, we carry out a real case study showing how the proposed approach is able to be applied to a set of land-based gravity data. In the case study, the method is summarized by an algorithm and applied to a set of gravity stations located along a part of the north coast of the Persian Gulf in the south of Iran. The results were numerically validated via rock-samplings as well as a geological map. Also, the method was compared with two conventional methods of mass density reduction. The numerical experiments indicate that the Poisson PDE at the Earth's surface has the capability to extract the mass density from land-based gravity data and is able to provide an alternative and somewhat more precise method of estimating the terrain mass density.

  14. Estimating effectiveness in HIV prevention trials with a Bayesian hierarchical compound Poisson frailty model

    PubMed Central

    Coley, Rebecca Yates; Brown, Elizabeth R.

    2016-01-01

    Inconsistent results in recent HIV prevention trials of pre-exposure prophylactic interventions may be due to heterogeneity in risk among study participants. Intervention effectiveness is most commonly estimated with the Cox model, which compares event times between populations. When heterogeneity is present, this population-level measure underestimates intervention effectiveness for individuals who are at risk. We propose a likelihood-based Bayesian hierarchical model that estimates the individual-level effectiveness of candidate interventions by accounting for heterogeneity in risk with a compound Poisson-distributed frailty term. This model reflects the mechanisms of HIV risk and allows that some participants are not exposed to HIV and, therefore, have no risk of seroconversion during the study. We assess model performance via simulation and apply the model to data from an HIV prevention trial. PMID:26869051

  15. A spatial scan statistic for compound Poisson data.

    PubMed

    Rosychuk, Rhonda J; Chang, Hsing-Ming

    2013-12-20

    The topic of spatial cluster detection gained attention in statistics during the late 1980s and early 1990s. Effort has been devoted to the development of methods for detecting spatial clustering of cases and events in the biological sciences, astronomy and epidemiology. More recently, research has examined detecting clusters of correlated count data associated with health conditions of individuals. Such a method allows researchers to examine spatial relationships of disease-related events rather than just incident or prevalent cases. We introduce a spatial scan test that identifies clusters of events in a study region. Because an individual case may have multiple (repeated) events, we base the test on a compound Poisson model. We illustrate our method for cluster detection on emergency department visits, where individuals may make multiple disease-related visits. Copyright © 2013 John Wiley & Sons, Ltd.

  16. Stochastic modeling for neural spiking events based on fractional superstatistical Poisson process

    NASA Astrophysics Data System (ADS)

    Konno, Hidetoshi; Tamura, Yoshiyasu

    2018-01-01

    In neural spike counting experiments, it is known that there are two main features: (i) the counting number has a fractional power-law growth with time, and (ii) the waiting time (i.e., the inter-spike interval) distribution has a heavy tail. The method of superstatistical Poisson processes (SSPPs) is examined to determine whether these main features are properly modeled. Although various mixed/compound Poisson processes can be generated by selecting a suitable distribution of the birth rate of spiking neurons, only the second feature (ii) can be modeled by the method of SSPPs; the first one (i), associated with the effect of long memory, cannot be modeled properly. It is then shown that the two main features can be modeled successfully by a class of fractional SSPP (FSSPP).

  17. Dynamics of a prey-predator system under Poisson white noise excitation

    NASA Astrophysics Data System (ADS)

    Pan, Shan-Shan; Zhu, Wei-Qiu

    2014-10-01

    The classical Lotka-Volterra (LV) model is a well-known mathematical model for prey-predator ecosystems. In the present paper, the pulse-type version of the stochastic LV model, in which the effect of a random natural environment has been modeled as Poisson white noise, is investigated by using the stochastic averaging method. The averaged generalized Itô stochastic differential equation and Fokker-Planck-Kolmogorov (FPK) equation are derived for a prey-predator ecosystem driven by Poisson white noise. An approximate stationary solution for the averaged generalized FPK equation is obtained by using the perturbation method. The effect of the prey self-competition parameter ε²s on ecosystem behavior is evaluated. The analytical result is confirmed by corresponding Monte Carlo (MC) simulation.

  18. Robustness of Quadratic Hedging Strategies in Finance via Backward Stochastic Differential Equations with Jumps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di Nunno, Giulia, E-mail: giulian@math.uio.no; Khedher, Asma, E-mail: asma.khedher@tum.de; Vanmaele, Michèle, E-mail: michele.vanmaele@ugent.be

    We consider a backward stochastic differential equation with jumps (BSDEJ) which is driven by a Brownian motion and a Poisson random measure. We present two candidate-approximations to this BSDEJ and we prove that the solution of each candidate-approximation converges to the solution of the original BSDEJ in a space which we specify. We use this result to investigate in further detail the consequences of the choice of the model for (partial) hedging in incomplete markets in finance. As an application, we consider models in which the small variations in the price dynamics are modeled with a Poisson random measure with infinite activity and models in which these small variations are modeled with a Brownian motion or are cut off. Using the convergence results on BSDEJs, we show that quadratic hedging strategies are robust towards the approximation of the market prices and we derive an estimation of the model risk.

  19. Retention for Stoploss reinsurance to minimize VaR in compound Poisson-Lognormal distribution

    NASA Astrophysics Data System (ADS)

    Soleh, Achmad Zanbar; Noviyanti, Lienda; Nurrahmawati, Irma

    2015-12-01

    Automobile insurance is one of the emerging general insurance products in Indonesia. Fluctuation in total premium revenues and total claim expenses leads to a risk that the insurance company will not be able to pay consumers' claims; thus, reinsurance is needed. Reinsurance is a risk transfer mechanism from the insurance company to another company called the reinsurer, and one type of reinsurance is stop-loss. Because the reinsurer charges a premium to the insurance company, it is important to determine the retention, i.e., the total claims to be retained solely by the insurance company. Thus, the retention is determined using Value at Risk (VaR), which minimizes the total risk of the insurance company in the presence of stop-loss reinsurance. The retention depends only on the distribution of total claims and the reinsurance loading factor. We use a compound Poisson claim-number distribution with lognormal claim severities to illustrate the retention value in a collective risk model.
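
    The collective risk model described here can be evaluated by Monte Carlo: simulate a Poisson number of claims with lognormal severities, apply the stop-loss retention, add the loaded reinsurance premium, and read off the desired quantile as the VaR. All parameter values in the sketch below (claim rate, severity parameters, retention, loading factor) are illustrative assumptions, not the paper's.

```python
# Monte Carlo sketch of a compound Poisson-lognormal collective risk model with
# a stop-loss treaty; every parameter value here is an assumed placeholder.
import numpy as np

rng = np.random.default_rng(6)
n_sim = 100_000
claim_rate = 20.0        # expected number of claims per period (assumed)
mu, sigma = 1.0, 0.8     # lognormal severity parameters (assumed)
retention = 80.0         # stop-loss retention d (assumed)
loading = 0.3            # reinsurer premium loading factor (assumed)

n_claims = rng.poisson(claim_rate, size=n_sim)
aggregate = np.array([rng.lognormal(mu, sigma, k).sum() for k in n_claims])

retained = np.minimum(aggregate, retention)      # insurer keeps min(S, d)
ceded = aggregate - retained                     # reinsurer pays the excess
reinsurance_premium = (1.0 + loading) * ceded.mean()

total_cost = retained + reinsurance_premium
print("95% VaR of aggregate loss (no reinsurance):", round(np.quantile(aggregate, 0.95), 2))
print("95% VaR of retained loss plus premium     :", round(np.quantile(total_cost, 0.95), 2))
```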

  20. Independence of the effective dielectric constant of an electrolytic solution on the ionic distribution in the linear Poisson-Nernst-Planck model.

    PubMed

    Alexe-Ionescu, A L; Barbero, G; Lelidis, I

    2014-08-28

    We consider the influence of the spatial dependence of the ion distribution on the effective dielectric constant of an electrolytic solution. We show that in the linear version of the Poisson-Nernst-Planck model, the effective dielectric constant of the solution has to be considered independent of any ionic distribution induced by the external field. This result follows from the fact that, in the linear approximation of the Poisson-Nernst-Planck model, the redistribution of the ions in the solvent due to the external field gives rise to a variation of the dielectric constant that is of first order in the effective potential, and therefore has to be neglected in the Poisson equation that relates the actual electric potential across the electrolytic cell to the bulk density of ions. The analysis is performed in the case where the electrodes are perfectly blocking and the adsorption at the electrodes is negligible, and in the absence of any ion dissociation-recombination effect.

  1. Temperature dependences of the time of electron-electron interactions in two-dimensional heterojunction

    NASA Astrophysics Data System (ADS)

    Bukhenskyy, K. V.; Dubois, A. B.; Kucheryavyy, S. I.; Mashnina, S. N.; Safoshkin, A. S.; Baukov, A. A.; Shchigorev, E. Yu

    2017-12-01

    The article discusses the joint solution of the Schrödinger and Poisson equations for a two-dimensional semiconductor heterojunction. The application of a triangular potential well approximation for the calculation of the electron-electron interaction is offered in the paper. The influence of the parameters of the selected approximation is analyzed.

  2. Anisotropic norm-oriented mesh adaptation for a Poisson problem

    NASA Astrophysics Data System (ADS)

    Brèthes, Gautier; Dervieux, Alain

    2016-10-01

    We present a novel formulation for the mesh adaptation of the approximation of a Partial Differential Equation (PDE). The discussion is restricted to a Poisson problem. The proposed norm-oriented formulation extends the goal-oriented formulation since it is equation-based and uses an adjoint. At the same time, the norm-oriented formulation somewhat supersedes the goal-oriented one since it is basically a solution-convergent method. Indeed, goal-oriented methods rely on the reduction of the error in evaluating a chosen scalar output, with the consequence that, as the mesh is refined (more degrees of freedom), only this output is proven to tend to its continuous analog while the solution field itself may not converge. A remarkable quality of goal-oriented metric-based adaptation is the mathematical formulation of the mesh adaptation problem in the form of an optimization, over the well-identified set of metrics, of a well-defined functional. In the new proposed formulation, we amplify this advantage. We search, in the same well-identified set of metrics, for the minimum of a norm of the approximation error. The norm is prescribed by the user, and the method allows addressing multi-objective adaptation, like, for example in aerodynamics, adapting the mesh for drag, lift, and moment in one shot. In this work, we consider the basic linear finite-element approximation and restrict our study to the L2 norm in order to enjoy second-order convergence. Numerical examples for the Poisson problem are computed.

  3. Theoretical simulations of the structural stabilities, elastic, thermodynamic and electronic properties of Pt3Sc and Pt3Y compounds

    NASA Astrophysics Data System (ADS)

    Boulechfar, R.; Khenioui, Y.; Drablia, S.; Meradji, H.; Abu-Jafar, M.; Omran, S. Bin; Khenata, R.; Ghemid, S.

    2018-05-01

    Ab-initio calculations based on density functional theory have been performed to study the structural, electronic, thermodynamic, and mechanical properties of the intermetallic compounds Pt3Sc and Pt3Y using the full-potential linearized augmented plane wave (FP-LAPW) method. The total energy calculations performed for the L12, D022, and D024 structures confirm the experimental phase stability. Using the generalized gradient approximation (GGA), the values of the formation enthalpies are -1.23 eV/atom and -1.18 eV/atom for Pt3Sc and Pt3Y, respectively. The density of states (DOS) spectra show the existence of a pseudo-gap at the Fermi level for both compounds, which indicates strong spd hybridization and directional covalent bonding. Furthermore, the density of states at the Fermi level N(EF), the electronic specific heat coefficient (γele), and the number of bonding electrons per atom are predicted, in addition to the elastic constants (C11, C12, and C44). The shear modulus (GH), Young's modulus (E), Poisson's ratio (ν), anisotropy factor (A), ratio B/GH, and Cauchy pressure (C12-C44) are also estimated. These parameters show that Pt3Sc and Pt3Y are ductile compounds. The thermodynamic properties were calculated using the quasi-harmonic Debye model to account for their lattice vibrations. In addition, the influence of temperature and pressure on the heat capacities (Cp and Cv), thermal expansion coefficient (α), Debye temperature (θD), and Grüneisen parameter (γ) was analyzed.
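
    The ductility indicators quoted above follow from standard isotropic-elasticity relations: E = 9BG/(3B + G), ν = (3B − 2G)/(2(3B + G)), and Pugh's ratio B/G (values above roughly 1.75 are usually read as ductile). The sketch below evaluates these relations for placeholder moduli, not the computed Pt3Sc and Pt3Y values.

```python
# Standard isotropic relations used to screen ductility from elastic moduli;
# the bulk and shear moduli below are assumed placeholders, not the paper's
# computed values for Pt3Sc or Pt3Y.
def elastic_summary(B, G):
    E = 9.0 * B * G / (3.0 * B + G)                   # Young's modulus
    nu = (3.0 * B - 2.0 * G) / (2.0 * (3.0 * B + G))  # Poisson's ratio
    pugh = B / G                                      # Pugh's ratio (ductile if > ~1.75)
    return E, nu, pugh

B_gpa, G_gpa = 180.0, 60.0   # assumed values in GPa
E, nu, pugh = elastic_summary(B_gpa, G_gpa)
print(f"E = {E:.1f} GPa, Poisson's ratio = {nu:.3f}, Pugh's ratio B/G = {pugh:.2f}")
```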

  4. Convergence of Spectral Discretizations of the Vlasov--Poisson System

    DOE PAGES

    Manzini, G.; Funaro, D.; Delzanno, G. L.

    2017-09-26

    Here we prove the convergence of a spectral discretization of the Vlasov-Poisson system. The velocity term of the Vlasov equation is discretized using either Hermite functions on the infinite domain or Legendre polynomials on a bounded domain. The spatial term of the Vlasov and Poisson equations is discretized using periodic Fourier expansions. Boundary conditions are treated in weak form through a penalty type term that can be applied also in the Hermite case. As a matter of fact, stability properties of the approximated scheme descend from this added term. The convergence analysis is carried out in detail for the 1D-1V case, but results can be generalized to multidimensional domains, obtained as Cartesian product, in both space and velocity. The error estimates show the spectral convergence under suitable regularity assumptions on the exact solution.

  5. Weak convergence to an isotropic complex symmetric α-stable random measure.

    PubMed

    Wang, Jun; Li, Yunmeng; Sang, Liheng

    2017-01-01

    In this paper, we prove that an isotropic complex symmetric α-stable random measure can be approximated by a complex process constructed from integrals based on a Poisson process with random intensity.

  6. Rayleigh-Sommerfeld Diffraction vs Fresnel-Kirchhoff, Fourier Propagation and Poisson's Spot

    NASA Technical Reports Server (NTRS)

    Lucke, Robert L.

    2004-01-01

    The boundary conditions imposed on the diffraction problem in order to obtain the Fresnel-Kirchhoff (FK) solution are well known to be mathematically inconsistent and to be violated by the solution when the observation point is close to the diffracting screen [1-3]. These problems are absent in the Rayleigh-Sommerfeld (RS) solution. The difference between RS and FK is in the inclination factor and is usually immaterial because the inclination factor is approximated by unity. But when this approximation is not valid, FK can lead to unacceptable answers. Calculating the on-axis intensity of Poisson's spot provides a critical test, a test passed by RS and failed by FK. FK fails because (a) convergence of the integral depends on how it is evaluated and (b) when the convergence problem is fixed, the predicted amplitude at points near the obscuring disk is not consistent with the assumed boundary conditions.

  7. A GPU accelerated and error-controlled solver for the unbounded Poisson equation in three dimensions

    NASA Astrophysics Data System (ADS)

    Exl, Lukas

    2017-12-01

    An efficient solver for the three dimensional free-space Poisson equation is presented. The underlying numerical method is based on finite Fourier series approximation. While the error of all involved approximations can be fully controlled, the overall computation error is driven by the convergence of the finite Fourier series of the density. For smooth and fast-decaying densities the proposed method will be spectrally accurate. The method scales with O(N log N) operations, where N is the total number of discretization points in the Cartesian grid. The majority of the computational costs come from fast Fourier transforms (FFT), which makes it ideal for GPU computation. Several numerical computations on CPU and GPU validate the method and show efficiency and convergence behavior. Tests are performed using the Vienna Scientific Cluster 3 (VSC3). A free MATLAB implementation for CPU and GPU is provided to the interested community.

  8. Maximum-likelihood fitting of data dominated by Poisson statistical uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoneking, M.R.; Den Hartog, D.J.

    1996-06-01

    The fitting of data by χ²-minimization is valid only when the uncertainties in the data are normally distributed. When analyzing spectroscopic or particle counting data at very low signal level (e.g., a Thomson scattering diagnostic), the uncertainties are distributed with a Poisson distribution. The authors have developed a maximum-likelihood method for fitting data that correctly treats the Poisson statistical character of the uncertainties. This method maximizes the total probability that the observed data are drawn from the assumed fit function, using the Poisson probability function to determine the probability for each data point. The algorithm also returns uncertainty estimates for the fit parameters. They compare this method with a χ²-minimization routine applied to both simulated and real data. Differences in the returned fits are greater at low signal level (less than approximately 20 counts per measurement). The maximum-likelihood method is found to be more accurate and robust, returning a narrower distribution of values for the fit parameters with fewer outliers.

  9. Development of a fractional-step method for the unsteady incompressible Navier-Stokes equations in generalized coordinate systems

    NASA Technical Reports Server (NTRS)

    Rosenfeld, Moshe; Kwak, Dochan; Vinokur, Marcel

    1992-01-01

    A fractional step method is developed for solving the time-dependent three-dimensional incompressible Navier-Stokes equations in generalized coordinate systems. The primitive variable formulation uses the pressure, defined at the center of the computational cell, and the volume fluxes across the faces of the cells as the dependent variables, instead of the Cartesian components of the velocity. This choice is equivalent to using the contravariant velocity components in a staggered grid multiplied by the volume of the computational cell. The governing equations are discretized by finite volumes using a staggered mesh system. The solution of the continuity equation is decoupled from the momentum equations by a fractional step method which enforces mass conservation by solving a Poisson equation. This procedure, combined with the consistent approximations of the geometric quantities, is done to satisfy the discretized mass conservation equation to machine accuracy, as well as to gain the favorable convergence properties of the Poisson solver. The momentum equations are solved by an approximate factorization method, and a novel ZEBRA scheme with four-color ordering is devised for the efficient solution of the Poisson equation. Several two- and three-dimensional laminar test cases are computed and compared with other numerical and experimental results to validate the solution method. Good agreement is obtained in all cases.

  10. Integrate-and-fire vs Poisson models of LGN input to V1 cortex: noisier inputs reduce orientation selectivity

    PubMed Central

    Lin, I-Chun; Xing, Dajun; Shapley, Robert

    2014-01-01

    One of the reasons the visual cortex has attracted the interest of computational neuroscience is that it has well-defined inputs. The lateral geniculate nucleus (LGN) of the thalamus is the source of visual signals to the primary visual cortex (V1). Most large-scale cortical network models approximate the spike trains of LGN neurons as simple Poisson point processes. However, many studies have shown that neurons in the early visual pathway are capable of spiking with high temporal precision and their discharges are not Poisson-like. To gain an understanding of how response variability in the LGN influences the behavior of V1, we study response properties of model V1 neurons that receive purely feedforward inputs from LGN cells modeled either as noisy leaky integrate-and-fire (NLIF) neurons or as inhomogeneous Poisson processes. We first demonstrate that the NLIF model is capable of reproducing many experimentally observed statistical properties of LGN neurons. Then we show that a V1 model in which the LGN input to a V1 neuron is modeled as a group of NLIF neurons produces higher orientation selectivity than the one with Poisson LGN input. The second result implies that statistical characteristics of LGN spike trains are important for V1's function. We conclude that physiologically motivated models of V1 need to include more realistic LGN spike trains that are less noisy than inhomogeneous Poisson processes. PMID:22684587

  11. Integrate-and-fire vs Poisson models of LGN input to V1 cortex: noisier inputs reduce orientation selectivity.

    PubMed

    Lin, I-Chun; Xing, Dajun; Shapley, Robert

    2012-12-01

    One of the reasons the visual cortex has attracted the interest of computational neuroscience is that it has well-defined inputs. The lateral geniculate nucleus (LGN) of the thalamus is the source of visual signals to the primary visual cortex (V1). Most large-scale cortical network models approximate the spike trains of LGN neurons as simple Poisson point processes. However, many studies have shown that neurons in the early visual pathway are capable of spiking with high temporal precision and their discharges are not Poisson-like. To gain an understanding of how response variability in the LGN influences the behavior of V1, we study response properties of model V1 neurons that receive purely feedforward inputs from LGN cells modeled either as noisy leaky integrate-and-fire (NLIF) neurons or as inhomogeneous Poisson processes. We first demonstrate that the NLIF model is capable of reproducing many experimentally observed statistical properties of LGN neurons. Then we show that a V1 model in which the LGN input to a V1 neuron is modeled as a group of NLIF neurons produces higher orientation selectivity than the one with Poisson LGN input. The second result implies that statistical characteristics of LGN spike trains are important for V1's function. We conclude that physiologically motivated models of V1 need to include more realistic LGN spike trains that are less noisy than inhomogeneous Poisson processes.
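
    For readers who want to reproduce the Poisson baseline these two records compare against, inhomogeneous Poisson spike trains are commonly generated by thinning a homogeneous process (the Lewis-Shedler method). The sketch below uses an arbitrary sinusoidal rate profile; it is not the authors' stimulus-driven LGN rate or their NLIF model.

```python
# Generic sketch: inhomogeneous Poisson spike train by thinning (Lewis-Shedler).
# The rate profile is an arbitrary illustration, not the authors' LGN model.
import numpy as np

rng = np.random.default_rng(7)

def rate(t):
    # firing rate in Hz: baseline 20 Hz with a 15 Hz modulation at 4 Hz (assumed)
    return 20.0 + 15.0 * np.sin(2.0 * np.pi * 4.0 * t)

T = 10.0          # duration in seconds
rate_max = 35.0   # upper bound on rate(t)

# Draw homogeneous candidates at rate_max, then keep each with prob rate(t)/rate_max
n_candidates = rng.poisson(rate_max * T)
candidate_times = np.sort(rng.uniform(0.0, T, n_candidates))
keep = rng.random(n_candidates) < rate(candidate_times) / rate_max
spike_times = candidate_times[keep]

print("generated", spike_times.size, "spikes, mean rate about",
      round(spike_times.size / T, 1), "Hz")
```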

  12. Poisson Statistics of Combinatorial Library Sampling Predict False Discovery Rates of Screening

    PubMed Central

    2017-01-01

    Microfluidic droplet-based screening of DNA-encoded one-bead-one-compound combinatorial libraries is a miniaturized, potentially widely distributable approach to small molecule discovery. In these screens, a microfluidic circuit distributes library beads into droplets of activity assay reagent, photochemically cleaves the compound from the bead, then incubates and sorts the droplets based on assay result for subsequent DNA sequencing-based hit compound structure elucidation. Pilot experimental studies revealed that Poisson statistics describe nearly all aspects of such screens, prompting the development of simulations to understand system behavior. Monte Carlo screening simulation data showed that increasing mean library sampling (ε), mean droplet occupancy, or library hit rate all increase the false discovery rate (FDR). Compounds identified as hits on k > 1 beads (the replicate k class) were much more likely to be authentic hits than singletons (k = 1), in agreement with previous findings. Here, we explain this observation by deriving an equation for authenticity, which reduces to the product of a library sampling bias term (exponential in k) and a sampling saturation term (exponential in ε) setting a threshold that the k-dependent bias must overcome. The equation thus quantitatively describes why each hit structure’s FDR is based on its k class, and further predicts the feasibility of intentionally populating droplets with multiple library beads, assaying the micromixtures for function, and identifying the active members by statistical deconvolution. PMID:28682059
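
    As a back-of-envelope illustration of the Poisson sampling framing (not the article's derived authenticity equation), the probability that a given library member appears on exactly k assayed beads at mean sampling depth ε is simply the Poisson probability mass function; the value of ε below is an assumption.

      from math import exp, factorial

      def poisson_pmf(k, eps):
          # Probability of being sampled on exactly k beads at mean sampling depth eps.
          return eps**k * exp(-eps) / factorial(k)

      eps = 2.0                                            # assumed mean library sampling
      for k in range(5):
          print(f"P(exactly {k} beads) = {poisson_pmf(k, eps):.3f}")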

  13. AQUEOUS PROTONATION PROPERTIES OF AMPHOTERIC NANOPARTICLES

    EPA Science Inventory

    A divergence is predicted between the acidity behavior of charged sites on micron sized colloidal particles and nanoparticles. Utilizing the approximate analytical solution to the Poisson-Boltzmann equation published by Ohshima et al. (1982), findings from the work included: 1):...

  14. Extended Poisson process modelling and analysis of grouped binary data.

    PubMed

    Faddy, Malcolm J; Smith, David M

    2012-05-01

    A simple extension of the Poisson process results in binomially distributed counts of events in a time interval. A further extension generalises this to probability distributions under- or over-dispersed relative to the binomial distribution. Substantial levels of under-dispersion are possible with this modelling, but only modest levels of over-dispersion - up to Poisson-like variation. Although simple analytical expressions for the moments of these probability distributions are not available, approximate expressions for the mean and variance are derived, and used to re-parameterise the models. The modelling is applied in the analysis of two published data sets, one showing under-dispersion and the other over-dispersion. More appropriate assessment of the precision of estimated parameters and reliable model checking diagnostics follow from this more general modelling of these data sets. © 2012 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Poisson, Poisson-gamma and zero-inflated regression models of motor vehicle crashes: balancing statistical fit and theory.

    PubMed

    Lord, Dominique; Washington, Simon P; Ivan, John N

    2005-01-01

    There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions with each modeling approach, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states: perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of "excess" zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows a Bernoulli trial with unequal probability of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data. A simulation experiment is then conducted to demonstrate how crash data give rise to the "excess" zeros frequently observed in crash data. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed, and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales and not an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
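
    In the spirit of the simulation experiment described above, the sketch below shows how heterogeneous, low-exposure Poisson data alone produce an apparent excess of zeros relative to a single Poisson fit; the exposure distribution and crash rate are assumptions, not values from the study.

      import numpy as np

      rng = np.random.default_rng(2)
      n_sites = 10_000
      exposure = rng.gamma(shape=0.2, scale=5.0, size=n_sites)   # assumed heterogeneous, mostly low exposure
      counts = rng.poisson(0.5 * exposure)                       # plain Poisson crashes, no dual-state mechanism

      print("observed fraction of zeros:        ", np.mean(counts == 0))
      print("zero fraction of a single Poisson: ", np.exp(-counts.mean()))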

  16. Assessing historical rate changes in global tsunami occurrence

    USGS Publications Warehouse

    Geist, E.L.; Parsons, T.

    2011-01-01

    The global catalogue of tsunami events is examined to determine if transient variations in tsunami rates are consistent with a Poisson process commonly assumed for tsunami hazard assessments. The primary data analyzed are tsunamis with maximum sizes >1m. The record of these tsunamis appears to be complete since approximately 1890. A secondary data set of tsunamis >0.1m is also analyzed that appears to be complete since approximately 1960. Various kernel density estimates used to determine the rate distribution with time indicate a prominent rate change in global tsunamis during the mid-1990s. Less prominent rate changes occur in the early- and mid-20th century. To determine whether these rate fluctuations are anomalous, the distribution of annual event numbers for the tsunami catalogue is compared to Poisson and negative binomial distributions, the latter of which includes the effects of temporal clustering. Compared to a Poisson distribution, the negative binomial distribution model provides a consistent fit to tsunami event numbers for the >1m data set, but the Poisson null hypothesis cannot be falsified for the shorter duration >0.1m data set. Temporal clustering of tsunami sources is also indicated by the distribution of interevent times for both data sets. Tsunami event clusters consist only of two to four events, in contrast to protracted sequences of earthquakes that make up foreshock-main shock-aftershock sequences. From past studies of seismicity, it is likely that there is a physical triggering mechanism responsible for events within the tsunami source 'mini-clusters'. In conclusion, prominent transient rate increases in the occurrence of global tsunamis appear to be caused by temporal grouping of geographically distinct mini-clusters, in addition to the random preferential location of global M >7 earthquakes along offshore fault zones.

  17. Fractional models of seismoacoustic and electromagnetic activity

    NASA Astrophysics Data System (ADS)

    Shevtsov, Boris; Sheremetyeva, Olga

    2017-10-01

    Statistical models of the seismoacoustic and electromagnetic activity caused by deformation disturbances are considered on the basis of the compound Poisson process and its fractional generalizations. Wave representations of these processes are also used. Five regimes of deformation activity are discussed, along with their role in understanding the nature of earthquake precursors.

  18. QMRA for Drinking Water: 2. The Effect of Pathogen Clustering in Single-Hit Dose-Response Models.

    PubMed

    Nilsen, Vegard; Wyller, John

    2016-01-01

    Spatial and/or temporal clustering of pathogens will invalidate the commonly used assumption of Poisson-distributed pathogen counts (doses) in quantitative microbial risk assessment. In this work, the theoretically predicted effect of spatial clustering in conventional "single-hit" dose-response models is investigated by employing the stuttering Poisson distribution, a very general family of count distributions that naturally models pathogen clustering and contains the Poisson and negative binomial distributions as special cases. The analysis is facilitated by formulating the dose-response models in terms of probability generating functions. It is shown formally that the theoretical single-hit risk obtained with a stuttering Poisson distribution is lower than that obtained with a Poisson distribution, assuming identical mean doses. A similar result holds for mixed Poisson distributions. Numerical examples indicate that the theoretical single-hit risk is fairly insensitive to moderate clustering, though the effect tends to be more pronounced for low mean doses. Furthermore, using Jensen's inequality, an upper bound on risk is derived that tends to better approximate the exact theoretical single-hit risk for highly overdispersed dose distributions. The bound holds with any dose distribution (characterized by its mean and zero inflation index) and any conditional dose-response model that is concave in the dose variable. Its application is exemplified with published data from Norovirus feeding trials, for which some of the administered doses were prepared from an inoculum of aggregated viruses. The potential implications of clustering for dose-response assessment as well as practical risk characterization are discussed. © 2016 Society for Risk Analysis.
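
    For orientation, a minimal sketch of the single-hit calculation that the article formalizes with probability generating functions: risk = 1 - G(1 - r), with G the PGF of the dose distribution and r the per-pathogen hit probability. The Poisson and negative binomial forms below are standard; the mean dose, r and dispersion values are assumptions.

      import numpy as np

      def risk_poisson(mean_dose, r):
          # Poisson PGF G(s) = exp(mu*(s-1))  =>  risk = 1 - exp(-r*mu)
          return 1.0 - np.exp(-r * mean_dose)

      def risk_negative_binomial(mean_dose, r, k):
          # Negative binomial PGF (mean mu, dispersion k): G(s) = (1 + mu*(1-s)/k)**(-k)
          return 1.0 - (1.0 + mean_dose * r / k) ** (-k)

      mu, r = 10.0, 0.05                                   # assumed mean dose and single-hit probability
      print("Poisson dose          :", risk_poisson(mu, r))
      print("clustered dose (k = 1):", risk_negative_binomial(mu, r, k=1.0))
      print("clustered dose (k = 5):", risk_negative_binomial(mu, r, k=5.0))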

  19. Elastic, Optoelectronic and Thermoelectric Properties of the Lead-Free Halide Semiconductors Cs2AgBiX6 (X = Cl, Br): Ab Initio Investigation

    NASA Astrophysics Data System (ADS)

    Guechi, N.; Bouhemadou, A.; Bin-Omran, S.; Bourzami, A.; Louail, L.

    2018-02-01

    We report a detailed investigation of the elastic moduli, electronic band structure, density of states, chemical bonding, electron and hole effective masses, optical response functions and thermoelectric properties of the lead-free halide double perovskites Cs2AgBiCl6 and Cs2AgBiBr6 using the full potential linearized augmented plane wave (FP-LAPW) method with the generalized gradient approximation (GGA-PBEsol) and the Tran-Blaha modified Becke-Johnson (TB-mBJ) potential. Because of the presence of heavy elements in the studied compounds, we include the spin-orbit coupling (SOC) effect. Our calculated structural parameters agree very well with the available experimental and theoretical findings. Single-crystal and polycrystalline elastic constants are predicted using the total-energy versus strain approach. Three-dimensional representations of the dependence of the shear modulus, Young's modulus and Poisson's ratio on crystallographic direction demonstrate a noticeable elastic anisotropy. The TB-mBJ potential with SOC yields an indirect band gap of 2.44 (1.93) eV for Cs2AgBiCl6 (Cs2AgBiBr6), in good agreement with the existing experimental data. The chemical bonding features are probed via density of states and valence electron density distribution calculations. Optical response functions were predicted from the calculated band structure. Both of the investigated compounds have a significant absorption coefficient (~25 × 10^4 cm^-1) in the visible range of sunlight. The thermoelectric properties of the title compounds were investigated using the FP-LAPW approach in combination with the semi-classical Boltzmann transport theory. The Cs2AgBiCl6 and Cs2AgBiBr6 compounds have a large thermopower S, which makes them potential candidates for thermoelectric applications.

  20. A numerical investigation into the ability of the Poisson PDE to extract the mass-density from land-based gravity data: A case study of salt diapirs in the north coast of the Persian Gulf

    NASA Astrophysics Data System (ADS)

    AllahTavakoli, Yahya; Safari, Abdolreza

    2017-08-01

    This paper presents a numerical investigation into the capability of Poisson's Partial Differential Equation (PDE) at Earth's surface to extract the near-surface mass-density from land-based gravity data. For this purpose, it first focuses on approximating the gradient tensor of Earth's gravitational potential by means of land-based gravity data. Then, based on the concepts of both the gradient tensor and Poisson's PDE at the Earth's surface, certain formulae are proposed for the mass-density determination. Furthermore, this paper shows how the generalized Tikhonov regularization strategy can be used to enhance the efficiency of the proposed approach. Finally, in a real case study, the formulae are applied to 6350 gravity stations located within a part of the north coast of the Persian Gulf. The case study numerically indicates that the proposed formulae, provided by Poisson's PDE, have the ability to convert land-based gravity data into the terrain mass-density, which has been used for depicting areas of salt diapirs in the study region.
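
    For orientation, the relation such density-from-gravity approaches exploit is Poisson's equation at the evaluation point, which ties the trace of the gravitational gradient tensor to the local mass-density (attraction-positive potential V, gravitational constant G); the paper's working formulae at the Earth's surface are not reproduced here.

      \nabla^{2} V \;=\; V_{xx} + V_{yy} + V_{zz} \;=\; -4\pi G \rho
      \qquad\Longrightarrow\qquad
      \rho \;=\; -\frac{V_{xx} + V_{yy} + V_{zz}}{4\pi G}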

  1. Long-term statistics of extreme tsunami height at Crescent City

    NASA Astrophysics Data System (ADS)

    Dong, Sheng; Zhai, Jinjin; Tao, Shanshan

    2017-06-01

    Historically, Crescent City has been one of the communities most vulnerable to tsunamis along the west coast of the United States, largely because of its offshore geography. Trans-ocean tsunamis usually produce large wave runup at Crescent Harbor, resulting in catastrophic damage, property loss and loss of life. Determining the return values of tsunami height from relatively short-term observation data is of great significance for assessing tsunami hazards and improving engineering design along the coast of Crescent City. In the present study, the extreme tsunami heights observed along the coast of Crescent City from 1938 to 2015 are fitted using six different probabilistic distributions, namely, the Gumbel distribution, the Weibull distribution, the maximum entropy distribution, the lognormal distribution, the generalized extreme value distribution and the generalized Pareto distribution. The maximum likelihood method is applied to estimate the parameters of all the above distributions. Both the Kolmogorov-Smirnov test and the root-mean-square error method are used to assess goodness of fit, and the better-fitting distribution is selected. Assuming that the annual occurrence frequency of tsunamis follows the Poisson distribution, the Poisson compound extreme value distribution can be used to fit the annual maximum tsunami amplitude, and the point and interval estimates of return tsunami heights are then calculated for structural design. The results show that the Poisson compound extreme value distribution fits the tsunami heights very well and is suitable for determining return tsunami heights for coastal disaster prevention.

  2. Modeling salt-mediated electrostatics of macromolecules: the discrete surface charge optimization algorithm and its application to the nucleosome.

    PubMed

    Beard, D A; Schlick, T

    2001-01-01

    Much progress has been achieved on quantitative assessment of electrostatic interactions on the all-atom level by molecular mechanics and dynamics, as well as on the macroscopic level by models of continuum solvation. Bridging the two representations, an area of active research, is necessary for studying integrated functions of large systems of biological importance. Following perspectives of both discrete (N-body) interaction and continuum solvation, we present a new algorithm, DiSCO (Discrete Surface Charge Optimization), for economically describing the electrostatic field predicted by Poisson-Boltzmann theory using a discrete set of Debye-Hückel charges distributed on a virtual surface enclosing the macromolecule. The procedure in DiSCO relies on the linear behavior of the Poisson-Boltzmann equation in the far zone; thus contributions from a number of molecules may be superimposed, and the electrostatic potential, or equivalently the electrostatic field, may be quickly and efficiently approximated by the summation of contributions from the set of charges. The desired accuracy of this approximation is achieved by minimizing the difference between the Poisson-Boltzmann electrostatic field and that produced by the linearized Debye-Hückel approximation using our truncated Newton optimization package. DiSCO is applied here to describe the salt-dependent electrostatic environment of the nucleosome core particle in terms of several hundred surface charges. This representation forms the basis for modeling, by dynamic simulations (or Monte Carlo), the folding of chromatin. DiSCO can be applied more generally to many macromolecular systems whose size and complexity warrant a model resolution between the all-atom and macroscopic levels. Copyright 2000 John Wiley & Sons, Inc.

  3. Self-energy-modified Poisson-Nernst-Planck equations: WKB approximation and finite-difference approaches.

    PubMed

    Xu, Zhenli; Ma, Manman; Liu, Pei

    2014-07-01

    We propose a modified Poisson-Nernst-Planck (PNP) model to investigate charge transport in electrolytes in an inhomogeneous dielectric environment. The model includes the ionic polarization due to the dielectric inhomogeneity and the ion-ion correlation. This is achieved through the self energy of test ions, obtained by solving a generalized Debye-Hückel (DH) equation. We develop numerical methods for the system composed of the PNP and DH equations. In particular, to address the numerical challenge of solving the high-dimensional DH equation, we develop an analytical WKB approximation and a numerical approach based on the selective inversion of sparse matrices. The model and numerical methods are validated by simulating charge diffusion in electrolytes between two electrodes, for which the effects of dielectrics and correlation are investigated by comparing the results with the predictions of classical PNP theory. We find that, at length scales of the interface separation comparable to the Bjerrum length, the results of the modified equations are significantly different from the classical PNP predictions, mostly due to the dielectric effect. It is also shown that when the ion self energy is of weak or moderate strength, the WKB approximation presents high accuracy, compared to precise finite-difference results.

  4. The electric double layer at a metal electrode in pure water

    NASA Astrophysics Data System (ADS)

    Brüesch, Peter; Christen, Thomas

    2004-03-01

    Pure water is a weak electrolyte that dissociates into hydronium ions and hydroxide ions. In contact with a charged electrode a double layer forms for which neither experimental nor theoretical studies exist, in contrast to electrolytes containing extrinsic ions like acids, bases, and solute salts. Starting from a self-consistent solution of the one-dimensional modified Poisson-Boltzmann equation, which takes into account activity coefficients of point-like ions, we explore the properties of the electric double layer by successive incorporation of various correction terms like finite ion size, polarization, image charge, and field dissociation. We also discuss the effect of the usual approximation of an average potential as required for the one-dimensional Poisson-Boltzmann equation, and conclude that the one-dimensional approximation underestimates the ion density. We calculate the electric potential, the ion distributions, the pH-values, the ion-size corrected activity coefficients, and the dissociation constants close to the electric double layer and compare the results for the various model corrections.

  5. A multiscale filter for noise reduction of low-dose cone beam projections.

    PubMed

    Yao, Weiguang; Farr, Jonathan B

    2015-08-21

    The Poisson or compound Poisson process governs the randomness of photon fluence in cone beam computed tomography (CBCT) imaging systems. The probability density function depends on the mean (noiseless) fluence at a certain detector. This dependence indicates the natural requirement of multiscale filters to smooth noise while preserving structures of the imaged object on the low-dose cone beam projection. In this work, we used a Gaussian filter, exp(-x^2/2σ_f^2), as the multiscale filter to de-noise the low-dose cone beam projections. We analytically obtained the expression of σ_f, which represents the scale of the filter, by minimizing the local noise-to-signal ratio. We analytically derived the variance of residual noise from the Poisson or compound Poisson processes after Gaussian filtering. From the derived analytical form of the variance of residual noise, the optimal σ_f^2 is proved to be proportional to the noiseless fluence and modulated by local structure strength expressed as the linear fitting error of the structure. A strategy was used to obtain a reliable linear fitting error: smoothing the projection along the longitudinal direction to calculate the linear fitting error along the lateral direction and vice versa. The performance of our multiscale filter was examined on low-dose cone beam projections of a Catphan phantom and a head-and-neck patient. After performing the filter on the Catphan phantom projections scanned with pulse time 4 ms, the number of visible line pairs was similar to that scanned with 16 ms, and the contrast-to-noise ratio of the inserts was about 64% higher on average than that scanned with 16 ms. For the simulated head-and-neck patient projections with pulse time 4 ms, the visibility of soft tissue structures in the patient was comparable to that scanned with 20 ms. The image processing took less than 0.5 s per projection with 1024 × 768 pixels.
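
    A loose illustration of the idea (not the authors' derived optimum) is to let the Gaussian smoothing scale grow with a locally estimated fluence and shrink where a local fit-error proxy indicates structure; every parameter and proxy below is an assumption made for the sketch.

      import numpy as np
      from scipy.ndimage import gaussian_filter, uniform_filter

      def adaptive_gaussian(proj, c=0.05, sigma_max=3.0):
          est = uniform_filter(proj, size=5)                            # crude estimate of the noiseless fluence
          fit_err = uniform_filter((proj - est) ** 2, size=5) + 1e-6    # crude local structure/error proxy
          sigma_map = np.clip(np.sqrt(c * est / fit_err), 0.0, sigma_max)

          # Blend a small bank of fixed-scale Gaussian filters according to the local scale.
          scales = np.linspace(0.0, sigma_max, 7)
          bank = np.stack([proj if s == 0 else gaussian_filter(proj, s) for s in scales])
          idx = np.clip(np.searchsorted(scales, sigma_map), 0, len(scales) - 1)
          return np.take_along_axis(bank, idx[None, ...], axis=0)[0]

      rng = np.random.default_rng(3)
      fluence = 200.0 + 800.0 * np.outer(np.hanning(256), np.hanning(256))   # synthetic projection
      denoised = adaptive_gaussian(rng.poisson(fluence).astype(float))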

  6. On-Orbit Collision Hazard Analysis in Low Earth Orbit Using the Poisson Probability Distribution (Version 1.0)

    DOT National Transportation Integrated Search

    1992-08-26

    This document provides the basic information needed to estimate a general probability of collision in Low Earth Orbit (LEO). Although the method described in this primer is a first order approximation, its results are reasonable. Furthermore, t...

  7. Sparse Poisson noisy image deblurring.

    PubMed

    Carlavan, Mikael; Blanc-Féraud, Laure

    2012-04-01

    Deblurring noisy Poisson images has recently been the subject of an increasing number of works in many areas such as astronomy and biological imaging. In this paper, we focus on confocal microscopy, which is a very popular technique for 3-D imaging of living biological specimens that gives images with very good resolution (several hundred nanometers), although degraded by both blur and Poisson noise. Deconvolution methods have been proposed to reduce these degradations, and in this paper, we focus on techniques that promote the introduction of an explicit prior on the solution. One difficulty of these techniques is setting the value of the parameter that weights the tradeoff between the data term and the regularizing term. Only a few works have been devoted to the automatic selection of this regularizing parameter when considering Poisson noise; therefore, it is often set manually such that it gives the best visual results. We present here two recent methods to estimate this regularizing parameter, and we first propose an improvement of these estimators, which takes advantage of confocal images. Following these estimators, we then propose to express the problem of the deconvolution of Poisson noisy images as the minimization of a new constrained problem. The proposed constrained formulation is well suited to this application domain since it is directly expressed using the antilog likelihood of the Poisson distribution and therefore does not require any approximation. We show how to solve the unconstrained and constrained problems using the recent alternating-direction technique, and we present results on synthetic and real data using well-known priors, such as total variation and wavelet transforms. Among these wavelet transforms, we especially focus on the dual-tree complex wavelet transform and on the dictionary composed of curvelets and an undecimated wavelet transform.

  8. Extremal Theory for Stochastic Processes.

    DTIC Science & Technology

    1985-11-01

    ... intensity measure has the Laplace transform L(f) = exp(-λ(1 - e^{-f})), whereas a compound Poisson process has Laplace transform (2.3.1) L(f) ... (see Example 2.2.4 as an illustration of this). The result is a clustering of exceedances, leading to a compounding of events in the limiting point

  9. AN EFFICIENT HIGHER-ORDER FAST MULTIPOLE BOUNDARY ELEMENT SOLUTION FOR POISSON-BOLTZMANN BASED MOLECULAR ELECTROSTATICS

    PubMed Central

    Bajaj, Chandrajit; Chen, Shun-Chuan; Rand, Alexander

    2011-01-01

    In order to compute polarization energy of biomolecules, we describe a boundary element approach to solving the linearized Poisson-Boltzmann equation. Our approach combines several important features including the derivative boundary formulation of the problem and a smooth approximation of the molecular surface based on the algebraic spline molecular surface. State of the art software for numerical linear algebra and the kernel independent fast multipole method is used for both simplicity and efficiency of our implementation. We perform a variety of computational experiments, testing our method on a number of actual proteins involved in molecular docking and demonstrating the effectiveness of our solver for computing molecular polarization energy. PMID:21660123

  10. Calculation of Protein Heat Capacity from Replica-Exchange Molecular Dynamics Simulations with Different Implicit Solvent Models

    DTIC Science & Technology

    2008-10-30

    rigorous Poisson-based methods generally apply a Lee-Richards molecular surface. This surface is considered the de facto description for continuum ... definition and calculation of the Born radii. To evaluate the Born radii, two approximations are invoked. The first is the Coulomb field approximation (CFA) ... energy term, and depending on the particular GB formulation, higher-order non-Coulomb correction terms may be added to the Born radii to account for the

  11. Assessment of Linear Finite-Difference Poisson-Boltzmann Solvers

    PubMed Central

    Wang, Jun; Luo, Ray

    2009-01-01

    CPU time and memory usage are two vital issues that any numerical solver for the Poisson-Boltzmann equation has to face in biomolecular applications. In this study we systematically analyzed the CPU time and memory usage of five commonly used finite-difference solvers with a large and diversified set of biomolecular structures. Our comparative analysis shows that modified incomplete Cholesky conjugate gradient and geometric multigrid are the most efficient in the diversified test set. For the two efficient solvers, our test shows that their CPU times increase approximately linearly with the number of grid points. Their CPU times also increase almost linearly with the negative logarithm of the convergence criterion, at very similar rates. Our comparison further shows that geometric multigrid performs better in the large set of tested biomolecules. However, modified incomplete Cholesky conjugate gradient is superior to geometric multigrid in molecular dynamics simulations of the tested molecules. We also investigated other significant components in numerical solutions of the Poisson-Boltzmann equation. It turns out that the time-limiting step is the free boundary condition setup for the linear systems for the selected proteins if electrostatic focusing is not used. Thus, development of future numerical solvers for the Poisson-Boltzmann equation should balance all aspects of the numerical procedures in realistic biomolecular applications. PMID:20063271

  12. Casimir meets Poisson: improved quark/gluon discrimination with counting observables

    DOE PAGES

    Frye, Christopher; Larkoski, Andrew J.; Thaler, Jesse; ...

    2017-09-19

    Charged track multiplicity is among the most powerful observables for discriminating quark- from gluon-initiated jets. Despite its utility, it is not infrared and collinear (IRC) safe, so perturbative calculations are limited to studying the energy evolution of multiplicity moments. While IRC-safe observables, like jet mass, are perturbatively calculable, their distributions often exhibit Casimir scaling, such that their quark/gluon discrimination power is limited by the ratio of quark to gluon color factors. In this paper, we introduce new IRC-safe counting observables whose discrimination performance exceeds that of jet mass and approaches that of track multiplicity. The key observation is that track multiplicity is approximately Poisson distributed, with more suppressed tails than the Sudakov peak structure from jet mass. By using an iterated version of the soft drop jet grooming algorithm, we can define a “soft drop multiplicity” which is Poisson distributed at leading-logarithmic accuracy. In addition, we calculate the next-to-leading-logarithmic corrections to this Poisson structure. If we allow the soft drop groomer to proceed to the end of the jet branching history, we can define a collinear-unsafe (but still infrared-safe) counting observable. Exploiting the universality of the collinear limit, we define generalized fragmentation functions to study the perturbative energy evolution of collinear-unsafe multiplicity.

  13. Casimir meets Poisson: improved quark/gluon discrimination with counting observables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frye, Christopher; Larkoski, Andrew J.; Thaler, Jesse

    Charged track multiplicity is among the most powerful observables for discriminating quark- from gluon-initiated jets. Despite its utility, it is not infrared and collinear (IRC) safe, so perturbative calculations are limited to studying the energy evolution of multiplicity moments. While IRC-safe observables, like jet mass, are perturbatively calculable, their distributions often exhibit Casimir scaling, such that their quark/gluon discrimination power is limited by the ratio of quark to gluon color factors. In this paper, we introduce new IRC-safe counting observables whose discrimination performance exceeds that of jet mass and approaches that of track multiplicity. The key observation is that track multiplicity is approximately Poisson distributed, with more suppressed tails than the Sudakov peak structure from jet mass. By using an iterated version of the soft drop jet grooming algorithm, we can define a “soft drop multiplicity” which is Poisson distributed at leading-logarithmic accuracy. In addition, we calculate the next-to-leading-logarithmic corrections to this Poisson structure. If we allow the soft drop groomer to proceed to the end of the jet branching history, we can define a collinear-unsafe (but still infrared-safe) counting observable. Exploiting the universality of the collinear limit, we define generalized fragmentation functions to study the perturbative energy evolution of collinear-unsafe multiplicity.

  14. Bluues: a program for the analysis of the electrostatic properties of proteins based on generalized Born radii

    PubMed Central

    2012-01-01

    Background The Poisson-Boltzmann (PB) equation and its linear approximation have been widely used to describe biomolecular electrostatics. Generalized Born (GB) models offer a convenient computational approximation for the more fundamental approach based on the Poisson-Boltzmann equation, and allow estimation of pairwise contributions to electrostatic effects in the molecular context. Results We have implemented in a single program the most common analyses of the electrostatic properties of proteins. The program first computes generalized Born radii via a surface integral and then uses the generalized Born radii (with a finite-radius test particle) to perform electrostatic analyses. In particular, the output of the program entails, depending on the user's requirements: 1) the generalized Born radius of each atom; 2) the electrostatic solvation free energy; 3) the electrostatic forces on each atom (currently in a developmental stage); 4) the pH-dependent properties (total charge and pH-dependent free energy of folding) in the pH range -2 to 18; 5) the pKa of all ionizable groups; 6) the electrostatic potential at the surface of the molecule; 7) the electrostatic potential in a volume surrounding the molecule. Conclusions Although at the expense of limited flexibility, the program provides the most common analyses while requiring only a single input file in PQR format. The results obtained are comparable to those obtained using state-of-the-art Poisson-Boltzmann solvers. A Linux executable with example input and output files is provided as supplementary material. PMID:22536964

  15. Theoretical Investigation of Half-Metallic Oxides XFeO3 (X = Sr, Ba) via Modified Becke-Johnson Potential Scheme

    NASA Astrophysics Data System (ADS)

    Maqsood, Saba; Rashid, Muhammad; Din, Fasih Ud; Saddique, M. Bilal; Laref, A.

    2018-03-01

    The cubic XFeO3 (X = Sr, Ba) perovskite oxides are studied for their thermodynamic stability in the ferromagnetic phase by using density functional theory calculations. We also explore the elastic properties of these compounds in terms of the elastic constants Cij, bulk modulus B, shear modulus G, anisotropy factor A, Poisson's ratio ν and the B/G ratio. The electronic properties are examined to elucidate the magnetic order, and the thermoelectric properties of XFeO3 (X = Sr, Ba) materials are also presented. The modified Becke-Johnson local density approximation scheme has been used to compute the electronic band structure and density of states, which show that these materials are half-metallic ferromagnets. We study the magnetic properties by computing the crystal field energy (ΔCF), Jahn-Teller energy (ΔJT) and the exchange splitting energies Δx(d) and Δx(pd). Our results indicate that strong hybridization causes a decrease in the magnetic moment of Fe, which then produces permanent magnetic moments at the nonmagnetic sites.

  16. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    PubMed

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
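
    A minimal sketch of the piecewise-exponential trick described above: each survival record is "exploded" into one row per time piece and a Poisson GLM is fitted with log(exposure) as offset. The log-normal frailty would enter as a normally distributed random intercept per cluster, omitted here for brevity; the simulated data, cut points and covariate effect are made-up illustrations.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(4)
      n = 500
      x = rng.integers(0, 2, n)                                  # binary covariate
      t_event = rng.exponential(1.0 / (0.5 * np.exp(0.7 * x)))   # true log hazard ratio 0.7
      time = np.minimum(t_event, 3.0)                            # administrative censoring at t = 3
      event = (t_event <= 3.0).astype(int)

      cuts = np.array([0.0, 0.5, 1.0, 2.0, 4.0])                 # assumed cut points for the baseline hazard
      rows = []
      for ti, di, xi in zip(time, event, x):
          for j in range(len(cuts) - 1):
              lo, hi = cuts[j], cuts[j + 1]
              if ti <= lo:
                  break
              rows.append(dict(piece=j, x=float(xi),
                               exposure=min(ti, hi) - lo,
                               y=int(di and ti <= hi)))
      df = pd.DataFrame(rows)

      X = pd.get_dummies(df["piece"], prefix="piece").astype(float)   # piecewise baseline log-hazards
      X["x"] = df["x"]
      fit = sm.GLM(df["y"], X, family=sm.families.Poisson(),
                   offset=np.log(df["exposure"])).fit()
      print(fit.params["x"])                                     # should be close to 0.7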

  17. The origin of bursts and heavy tails in human dynamics.

    PubMed

    Barabási, Albert-László

    2005-05-12

    The dynamics of many social, technological and economic phenomena are driven by individual human actions, turning the quantitative understanding of human behaviour into a central question of modern science. Current models of human dynamics, used from risk assessment to communications, assume that human actions are randomly distributed in time and thus well approximated by Poisson processes. In contrast, there is increasing evidence that the timing of many human activities, ranging from communication to entertainment and work patterns, follows non-Poisson statistics, characterized by bursts of rapidly occurring events separated by long periods of inactivity. Here I show that the bursty nature of human behaviour is a consequence of a decision-based queuing process: when individuals execute tasks based on some perceived priority, the timing of the tasks will be heavy tailed, with most tasks being rapidly executed, whereas a few experience very long waiting times. In contrast, random or priority-blind execution is well approximated by uniform inter-event statistics. These findings have important implications, ranging from resource management to service allocation, in both communications and retail.
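
    A minimal simulation of the decision-based queuing mechanism described above: a fixed-length task list from which the highest-priority task is executed with probability p and a randomly chosen task otherwise; the list length, p and run length are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(5)
      L, p, steps = 10, 0.99, 200_000
      priority = rng.random(L)
      age = np.zeros(L, dtype=int)
      waits = []

      for _ in range(steps):
          i = int(np.argmax(priority)) if rng.random() < p else int(rng.integers(L))
          waits.append(age[i])                  # waiting time of the executed task
          age += 1
          age[i] = 0                            # the executed task is replaced by a new one
          priority[i] = rng.random()

      waits = np.array(waits)
      print("mean wait:", waits.mean(), " 99.9th percentile:", np.quantile(waits, 0.999))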

  18. Survival analysis of clinical mastitis data using a nested frailty Cox model fit as a mixed-effects Poisson model.

    PubMed

    Elghafghuf, Adel; Dufour, Simon; Reyher, Kristen; Dohoo, Ian; Stryhn, Henrik

    2014-12-01

    Mastitis is a complex disease affecting dairy cows and is considered to be the most costly disease of dairy herds. The hazard of mastitis is a function of many factors, both managerial and environmental, making its control a difficult issue to milk producers. Observational studies of clinical mastitis (CM) often generate datasets with a number of characteristics which influence the analysis of those data: the outcome of interest may be the time to occurrence of a case of mastitis, predictors may change over time (time-dependent predictors), the effects of factors may change over time (time-dependent effects), there are usually multiple hierarchical levels, and datasets may be very large. Analysis of such data often requires expansion of the data into the counting-process format - leading to larger datasets - thus complicating the analysis and requiring excessive computing time. In this study, a nested frailty Cox model with time-dependent predictors and effects was applied to Canadian Bovine Mastitis Research Network data in which 10,831 lactations of 8035 cows from 69 herds were followed through lactation until the first occurrence of CM. The model was fit to the data as a Poisson model with nested normally distributed random effects at the cow and herd levels. Risk factors associated with the hazard of CM during the lactation were identified, such as parity, calving season, herd somatic cell score, pasture access, fore-stripping, and proportion of treated cases of CM in a herd. The analysis showed that most of the predictors had a strong effect early in lactation and also demonstrated substantial variation in the baseline hazard among cows and between herds. A small simulation study for a setting similar to the real data was conducted to evaluate the Poisson maximum likelihood estimation approach with both Gaussian quadrature method and Laplace approximation. Further, the performance of the two methods was compared with the performance of a widely used estimation approach for frailty Cox models based on the penalized partial likelihood. The simulation study showed good performance for the Poisson maximum likelihood approach with Gaussian quadrature and biased variance component estimates for both the Poisson maximum likelihood with Laplace approximation and penalized partial likelihood approaches. Copyright © 2014. Published by Elsevier B.V.

  19. Research in Stochastic Processes.

    DTIC Science & Technology

    1983-10-01

    increases. A more detailed investigation of the exceedances themselves (rather than just the cluster centers) was undertaken, together with J. Hüsler and ... J. Hüsler and M.R. Leadbetter, Compound Poisson limit theorems for high level exceedances by stationary sequences, Center for Stochastic Processes ... stability by a random linear operator. C.D. Hardin, General (asymmetric) stable variables and processes. T. Hsing, J. Hüsler and M.R. Leadbetter, Compound

  20. A Mixed-Effects Heterogeneous Negative Binomial Model for Postfire Conifer Regeneration in Northeastern California, USA

    Treesearch

    Justin S. Crotteau; Martin W. Ritchie; J. Morgan Varner

    2014-01-01

    Many western USA fire regimes are typified by mixed-severity fire, which compounds the variability inherent to natural regeneration densities in associated forests. Tree regeneration data are often discrete and nonnegative; accordingly, we fit a series of Poisson and negative binomial variation models to conifer seedling counts across four distinct burn severities and...

  1. A Bayesian analysis for identifying DNA copy number variations using a compound Poisson process.

    PubMed

    Chen, Jie; Yiğiter, Ayten; Wang, Yu-Ping; Deng, Hong-Wen

    2010-01-01

    To study chromosomal aberrations that may lead to cancer formation or genetic diseases, the array-based Comparative Genomic Hybridization (aCGH) technique is often used for detecting DNA copy number variants (CNVs). Various methods have been developed for gaining CNVs information based on aCGH data. However, most of these methods make use of the log-intensity ratios in aCGH data without taking advantage of other information such as the DNA probe (e.g., biomarker) positions/distances contained in the data. Motivated by the specific features of aCGH data, we developed a novel method that takes into account the estimation of a change point or locus of the CNV in aCGH data with its associated biomarker position on the chromosome using a compound Poisson process. We used a Bayesian approach to derive the posterior probability for the estimation of the CNV locus. To detect loci of multiple CNVs in the data, a sliding window process combined with our derived Bayesian posterior probability was proposed. To evaluate the performance of the method in the estimation of the CNV locus, we first performed simulation studies. Finally, we applied our approach to real data from aCGH experiments, demonstrating its applicability.

  2. Atomic clocks and the continuous-time random-walk

    NASA Astrophysics Data System (ADS)

    Formichella, Valerio; Camparo, James; Tavella, Patrizia

    2017-11-01

    Atomic clocks play a fundamental role in many fields; most notably, they generate Coordinated Universal Time and are at the heart of all global navigation satellite systems. Notwithstanding their excellent timekeeping performance, their output frequency does vary: it can display deterministic frequency drift; diverse continuous noise processes result in nonstationary clock noise (e.g., random-walk frequency noise, modelled as a Wiener process); and the clock frequency may display sudden changes (i.e., "jumps"). Typically, the clock's frequency instability is evaluated by the Allan or Hadamard variances, whose functional forms can identify the different operative noise processes. Here, we show that the Allan and Hadamard variances of a particular continuous-time random walk, the compound Poisson process, have the same functional form as for a Wiener process with drift. The compound Poisson process, introduced as a model for observed frequency jumps, is an alternative to the Wiener process for modelling random-walk frequency noise. This alternative model fits the behavior of the rubidium clocks flying on GPS Block-IIR satellites well. Further, starting from jump statistics, the model can be improved by considering a more general form of continuous-time random walk, and this could bring new insights into the physics of atomic clocks.
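
    An illustrative comparison (not the authors' fitted model) of the two frequency-noise descriptions mentioned above, a Wiener random walk versus a compound Poisson jump process; all parameter values are assumptions, and at most one jump per sample step is assumed.

      import numpy as np

      rng = np.random.default_rng(6)
      n, dt = 100_000, 1.0                      # samples and sampling interval (s), illustrative
      # Wiener (random-walk) fractional-frequency noise.
      wiener_freq = np.cumsum(rng.normal(0.0, 1e-13, n))
      # Compound Poisson frequency: jumps arrive at rate lam with Gaussian sizes.
      lam, jump_sigma = 1e-3, 3e-12
      cpp_freq = np.cumsum(rng.poisson(lam * dt, n) * rng.normal(0.0, jump_sigma, n))
      # Phase (time error) is the running integral of fractional frequency.
      wiener_phase = np.cumsum(wiener_freq) * dt
      cpp_phase = np.cumsum(cpp_freq) * dt
      print(wiener_phase[-1], cpp_phase[-1])    # both wander like a random walk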

  3. Beyond Poisson-Boltzmann: Fluctuation effects and correlation functions

    NASA Astrophysics Data System (ADS)

    Netz, R. R.; Orland, H.

    2000-02-01

    We formulate the exact non-linear field theory for a fluctuating counter-ion distribution in the presence of a fixed, arbitrary charge distribution. The Poisson-Boltzmann equation is obtained as the saddle-point of the field-theoretic action, and the effects of counter-ion fluctuations are included by a loop-wise expansion around this saddle point. The Poisson equation is obeyed at each order in this loop expansion. We explicitly give the expansion of the Gibbs potential up to two loops. We then apply our field-theoretic formalism to the case of a single impenetrable wall with counter ions only (in the absence of salt ions). We obtain the fluctuation corrections to the electrostatic potential and the counter-ion density to one-loop order without further approximations. The relative importance of fluctuation corrections is controlled by a single parameter, which is proportional to the cube of the counter-ion valency and to the surface charge density. The effective interactions and correlation functions between charged particles close to the charged wall are obtained on the one-loop level.

  4. A comparative study of a theoretical neural net model with MEG data from epileptic patients and normal individuals.

    PubMed

    Kotini, A; Anninos, P; Anastasiadis, A N; Tamiolakis, D

    2005-09-07

    The aim of this study was to compare a theoretical neural net model with MEG data from epileptic patients and normal individuals. Our experimental study population included 10 epilepsy sufferers and 10 healthy subjects. The recordings were obtained with a one-channel biomagnetometer SQUID in a magnetically shielded room. Using the method of χ²-fitting, it was found that the MEG amplitudes in epileptic patients and normal subjects had Poisson and Gauss distributions, respectively. The Poisson connectivity derived from the theoretical neural model represents the state of epilepsy, whereas the Gauss connectivity represents normal behavior. The MEG data obtained from epileptic areas had higher amplitudes than the MEG from normal regions and were comparable with the theoretical magnetic fields from Poisson and Gauss distributions. Furthermore, the magnetic field derived from the theoretical model had amplitudes of the same order as the recorded MEG from the 20 participants. The approximation of the theoretical neural net model to real MEG data provides information about the structure of brain function in epileptic and normal states, encouraging further studies to be conducted.

  5. A multiscale filter for noise reduction of low-dose cone beam projections

    NASA Astrophysics Data System (ADS)

    Yao, Weiguang; Farr, Jonathan B.

    2015-08-01

    The Poisson or compound Poisson process governs the randomness of photon fluence in cone beam computed tomography (CBCT) imaging systems. The probability density function depends on the mean (noiseless) fluence at a certain detector. This dependence indicates the natural requirement of multiscale filters to smooth noise while preserving structures of the imaged object on the low-dose cone beam projection. In this work, we used a Gaussian filter, exp(-x^2/2σ_f^2), as the multiscale filter to de-noise the low-dose cone beam projections. We analytically obtained the expression of σ_f, which represents the scale of the filter, by minimizing the local noise-to-signal ratio. We analytically derived the variance of residual noise from the Poisson or compound Poisson processes after Gaussian filtering. From the derived analytical form of the variance of residual noise, the optimal σ_f^2 is proved to be proportional to the noiseless fluence and modulated by local structure strength expressed as the linear fitting error of the structure. A strategy was used to obtain a reliable linear fitting error: smoothing the projection along the longitudinal direction to calculate the linear fitting error along the lateral direction and vice versa. The performance of our multiscale filter was examined on low-dose cone beam projections of a Catphan phantom and a head-and-neck patient. After performing the filter on the Catphan phantom projections scanned with pulse time 4 ms, the number of visible line pairs was similar to that scanned with 16 ms, and the contrast-to-noise ratio of the inserts was about 64% higher on average than that scanned with 16 ms. For the simulated head-and-neck patient projections with pulse time 4 ms, the visibility of soft tissue structures in the patient was comparable to that scanned with 20 ms. The image processing took less than 0.5 s per projection with 1024 × 768 pixels.

  6. Elasticity, slowness, thermal conductivity and the anisotropies in the Mn3Cu1-xGexN compounds

    NASA Astrophysics Data System (ADS)

    Li, Guan-Nan; Chen, Zhi-Qian; Lu, Yu-Ming; Hu, Meng; Jiao, Li-Na; Zhao, Hao-Ting

    2018-03-01

    We perform first-principles calculations to systematically investigate the elastic properties, minimum thermal conductivity and anisotropy of the negative thermal expansion compounds Mn3Cu1-xGexN. The elastic constants, bulk modulus, shear modulus, Young's modulus and Poisson ratio are calculated for all the compounds. The results for the elastic constants indicate that all the compounds are mechanically stable and that Ge doping can adjust the ductile character of the compounds. According to the values of the percent ratios of the elastic anisotropy AB, AE and AG and the shear anisotropic factors A1, A2 and A3, all the Mn3Cu1-xGexN compounds are elastically anisotropic. The three-dimensional diagrams of the elastic moduli in space also show that all the compounds are elastically anisotropic. In addition, the acoustic wave speed, slowness, minimum thermal conductivity and Debye temperature are also calculated. When the Cu:Ge content ratio reaches 1:1, the compound has the lowest thermal conductivity and the highest Debye temperature.

  7. High order discretization techniques for real-space ab initio simulations

    NASA Astrophysics Data System (ADS)

    Anderson, Christopher R.

    2018-03-01

    In this paper, we present discretization techniques to address numerical problems that arise when constructing ab initio approximations that use real-space computational grids. We present techniques to accommodate the singular nature of idealized nuclear and idealized electronic potentials, and we demonstrate the utility of using high order accurate grid based approximations to Poisson's equation in unbounded domains. To demonstrate the accuracy of these techniques, we present results for a Full Configuration Interaction computation of the dissociation of H2 using a computed, configuration dependent, orbital basis set.

  8. Receiver design for SPAD-based VLC systems under Poisson-Gaussian mixed noise model.

    PubMed

    Mao, Tianqi; Wang, Zhaocheng; Wang, Qi

    2017-01-23

    The single-photon avalanche diode (SPAD) is a promising photosensor because of its high sensitivity to optical signals in weak-illuminance environments. Recently, it has drawn much attention from researchers in visible light communications (VLC). However, the existing literature deals only with a simplified channel model, which considers the effects of the Poisson noise introduced by the SPAD but neglects other noise sources. Specifically, when an analog SPAD detector is applied, there exists Gaussian thermal noise generated by the transimpedance amplifier (TIA) and the digital-to-analog converter (D/A). Therefore, in this paper, we propose an SPAD-based VLC system with pulse-amplitude modulation (PAM) under a Poisson-Gaussian mixed noise model, where Gaussian-distributed thermal noise at the receiver is also investigated. The closed-form conditional likelihood of received signals is derived using the Laplace transform and the saddle-point approximation method, and the corresponding quasi-maximum-likelihood (quasi-ML) detector is proposed. Furthermore, the Poisson-Gaussian-distributed signals are converted to Gaussian variables with the aid of the generalized Anscombe transform (GAT), leading to an equivalent additive white Gaussian noise (AWGN) channel, and a hard-decision-based detector is invoked. Simulation results demonstrate that the proposed GAT-based detector can reduce the computational complexity with marginal performance loss compared with the proposed quasi-ML detector, and both detectors are capable of accurately demodulating the SPAD-based PAM signals.
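
    For orientation, a sketch of the generalized Anscombe transform in its common unit-gain, zero-offset form, which approximately variance-stabilizes Poisson plus Gaussian data; the noise level and signal means below are assumptions, not parameters from the paper.

      import numpy as np

      def gat(x, sigma):
          # Generalized Anscombe transform for Poisson + Gaussian noise (unit gain, zero offset).
          return 2.0 * np.sqrt(np.maximum(x + 3.0 / 8.0 + sigma**2, 0.0))

      rng = np.random.default_rng(7)
      sigma = 2.0                                            # assumed Gaussian (thermal) noise std
      for mu in (2.0, 50.0, 500.0):
          x = rng.poisson(mu, 100_000) + rng.normal(0.0, sigma, 100_000)
          print(f"mean {mu:5.0f}: std after GAT = {np.std(gat(x, sigma)):.3f}")   # roughly 1 at each level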

  9. Coupling finite element and spectral methods: First results

    NASA Technical Reports Server (NTRS)

    Bernardi, Christine; Debit, Naima; Maday, Yvon

    1987-01-01

    A Poisson equation on a rectangular domain is solved by coupling two methods: the domain is divided in two squares, a finite element approximation is used on the first square and a spectral discretization is used on the second one. Two kinds of matching conditions on the interface are presented and compared. In both cases, error estimates are proved.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moradi, Afshin, E-mail: a.moradi@kut.ac.ir

    We develop the Maxwell-Garnett theory for the effective medium approximation of composite materials with metallic nanoparticles by taking into account quantum spatial dispersion effects in the dielectric response of the nanoparticles. We derive a quantum nonlocal generalization of the standard Maxwell-Garnett formula by means of the linearized quantum hydrodynamic theory in conjunction with the Poisson equation as well as the appropriate additional quantum boundary conditions.

  11. Explanation of the Reaction of Monoclonal Antibodies with Candida Albicans Cell Surface in Terms of Compound Poisson Process

    NASA Astrophysics Data System (ADS)

    Dudek, Mirosław R.; Mleczko, Józef

    Surprisingly, still very little is known about the mathematical modeling of peaks in the binding affinity distribution function. In general, it is believed that the peaks represent antibodies directed towards single epitopes. In this paper, we refer to fluorescence flow cytometry experiments and show that even monoclonal antibodies can display multi-modal histograms of the affinity distribution. This result takes place when obstacles appear in the paratope-epitope reaction such that the process of reaching the specific epitope ceases to be a Poisson point process. A typical example is a large area of the cell surface that is unreachable by antibodies, leading to heterogeneity in the repletion of the cell surface. In this case, the affinity of cells to bind the antibodies should be described by a process more complex than the pure Poisson point process. We suggest using a doubly stochastic Poisson process, in which the points are replaced by a binomial point process, resulting in the Neyman distribution. The distribution can have a strongly multimodal character, with the number of modes depending on the concentration of antibodies and epitopes. All this means that it is possible to go beyond the simplified theory of one response towards one epitope. As a consequence, our description provides perspectives for describing antigen-antibody reactions, both qualitatively and quantitatively, even in the case when some peaks result from more than one binding mechanism.
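
    An illustrative simulation of the compound (Neyman-type) counting idea described above, with a Poisson number of reachable surface patches per cell and a binomial number of bound antibodies from each patch; all parameter values are assumptions, and the multi-modality appears as clusters of frequent counts near multiples of the per-patch mean.

      import numpy as np

      rng = np.random.default_rng(8)
      n_cells = 100_000
      patches = rng.poisson(1.5, n_cells)                    # Poisson number of reachable patches per cell
      counts = np.array([rng.binomial(20, 0.6, m).sum() for m in patches])

      values, freq = np.unique(counts, return_counts=True)
      top = np.sort(values[np.argsort(freq)[::-1][:6]])
      print("most frequent counts:", top)                    # clusters near 0, ~12, ~24, ...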

  12. An Estimation of the Likelihood of Significant Eruptions During 2000-2009 Using Poisson Statistics on Two-Point Moving Averages of the Volcanic Time Series

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    2001-01-01

    Since 1750, the number of cataclysmic volcanic eruptions (volcanic explosivity index (VEI)>=4) per decade spans 2-11, with 96 percent located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the volcanic time series has higher values since the 1860's than before, being 8.00 in the 1910's (the highest value) and 6.50 in the 1980's, the highest since the 1910's peak. Because of the usual behavior of the first difference of the two-point moving averages, one infers that its value for the 1990's will measure approximately 6.50 +/- 1, implying that approximately 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI>=5) nearly always have been associated with short-term episodes of global cooling, the occurrence of even one might confuse our ability to assess the effects of global warming. Poisson probability distributions reveal that the probability of one or more events with a VEI>=4 within the next ten years is >99 percent. It is approximately 49 percent for an event with a VEI>=5, and 18 percent for an event with a VEI>=6. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next ten years appears reasonably high.
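
    The probability statements above follow from the elementary Poisson relation P(N >= 1) = 1 - exp(-lambda), where lambda is the expected number of eruptions in the decade. A minimal check is sketched below; the VEI>=5 and VEI>=6 rates are back-solved from the quoted probabilities for illustration and are not figures taken from the paper.

```python
import math

def prob_at_least_one(rate_per_decade):
    """P(N >= 1) for a Poisson count with the given expected number of eruptions per decade."""
    return 1.0 - math.exp(-rate_per_decade)

# Expected eruptions per decade: ~7 for VEI>=4 (from the moving-average argument);
# the VEI>=5 and VEI>=6 rates are illustrative values chosen to reproduce the quoted probabilities.
for label, rate in (("VEI>=4", 7.0), ("VEI>=5", 0.67), ("VEI>=6", 0.20)):
    print(f"{label}: P(>=1 eruption in 2000-2009) ~ {prob_at_least_one(rate):.2f}")
```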

  13. Morphology and linear-elastic moduli of random network solids.

    PubMed

    Nachtrab, Susan; Kapfer, Sebastian C; Arns, Christoph H; Madadi, Mahyar; Mecke, Klaus; Schröder-Turk, Gerd E

    2011-06-17

    The effective linear-elastic moduli of disordered network solids are analyzed by voxel-based finite element calculations. We analyze network solids given by Poisson-Voronoi processes and by the structure of collagen fiber networks imaged by confocal microscopy. The solid volume fraction ϕ is varied by adjusting the fiber radius, while keeping the structural mesh or pore size of the underlying network fixed. For intermediate ϕ, the bulk and shear modulus are approximated by the empirical power laws K(ϕ) ∝ ϕ^n and G(ϕ) ∝ ϕ^m with n≈1.4 and m≈1.7. The exponents for the collagen and the Poisson-Voronoi network solids are similar, and are close to the values n=1.22 and m=2.11 found in a previous voxel-based finite element study of Poisson-Voronoi systems with different boundary conditions. However, the exponents of these empirical power laws are at odds with the analytic values of n=1 and m=2, valid for low-density cellular structures in the limit of thin beams. We propose a functional form for K(ϕ) that models the cross-over from a power law at low densities to a porous solid at high densities; a fit of the data to this functional form yields the asymptotic exponent n≈1.00, as expected. Further, both the intensity of the Poisson-Voronoi process and the collagen concentration in the samples, which alter the typical pore or mesh size, affect the effective moduli only through the resulting change of the solid volume fraction. These findings suggest that a network solid with the structure of the collagen networks can be modeled in quantitative agreement by a Poisson-Voronoi process. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Time‐dependent renewal‐model probabilities when date of last earthquake is unknown

    USGS Publications Warehouse

    Field, Edward H.; Jordan, Thomas H.

    2015-01-01

    We derive time-dependent, renewal-model earthquake probabilities for the case in which the date of the last event is completely unknown, and compare these with the time-independent Poisson probabilities that are customarily used as an approximation in this situation. For typical parameter values, the renewal-model probabilities exceed Poisson results by more than 10% when the forecast duration exceeds ~20% of the mean recurrence interval. We also derive probabilities for the case in which the last event is further constrained to have occurred before historical record keeping began (the historic open interval), which can only serve to increase earthquake probabilities for typically applied renewal models. We conclude that accounting for the historic open interval can improve long-term earthquake rupture forecasts for California and elsewhere.

  15. Effect of collisions on photoelectron sheath in a gas

    NASA Astrophysics Data System (ADS)

    Sodha, Mahendra Singh; Mishra, S. K.

    2016-02-01

    This paper presents a study of the effect of the collision of electrons with atoms/molecules on the structure of a photoelectron sheath. Considering the half Fermi-Dirac distribution of photo-emitted electrons, an expression for the electron density in the sheath has been derived in terms of the electric potential and the structure of the sheath has been investigated by incorporating Poisson's equation in the analysis. The method of successive approximations has been used to solve Poisson's equation with the solution for the electric potential in the case of vacuum, obtained earlier [Sodha and Mishra, Phys. Plasmas 21, 093704 (2014)], being used as the zeroth order solution for the present analysis. The inclusion of collisions influences the photoelectron sheath structure significantly; a reduction in the sheath width with increasing collisions is obtained.

  16. Dynamics of moment neuronal networks.

    PubMed

    Feng, Jianfeng; Deng, Yingchun; Rossoni, Enrico

    2006-04-01

    A theoretical framework is developed for moment neuronal networks (MNNs). Within this framework, the behavior of the system of spiking neurons is specified in terms of the first- and second-order statistics of their interspike intervals, i.e., the mean, the variance, and the cross correlations of spike activity. Since neurons emit and receive spike trains which can be described by renewal--but generally non-Poisson--processes, we first derive a suitable diffusion-type approximation of such processes. Two approximation schemes are introduced: the usual approximation scheme (UAS) and the Ornstein-Uhlenbeck scheme. It is found that both schemes approximate well the input-output characteristics of spiking models such as the IF and the Hodgkin-Huxley models. The MNN framework is then developed according to the UAS scheme, and its predictions are tested on a few examples.

  17. Fast Poisson noise removal by biorthogonal Haar domain hypothesis testing

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Fadili, M. J.; Starck, J.-L.; Digel, S. W.

    2008-07-01

    Methods based on hypothesis tests (HTs) in the Haar domain are widely used to denoise Poisson count data. Facing large datasets or real-time applications, Haar-based denoisers have to use the decimated transform to meet limited-memory or computation-time constraints. Unfortunately, for regular underlying intensities, decimation yields discontinuous estimates and strong “staircase” artifacts. In this paper, we propose to combine the HT framework with the decimated biorthogonal Haar (Bi-Haar) transform instead of the classical Haar. The Bi-Haar filter bank is normalized such that the p-values of the Bi-Haar coefficients (p) provide a good approximation to those of Haar (p_H) for high-intensity settings or large scales; for low-intensity settings and small scales, we show that p is essentially upper-bounded by p_H. Thus, we may apply the Haar-based HTs to Bi-Haar coefficients to control a prefixed false positive rate. By doing so, we benefit from the regular Bi-Haar filter bank to gain a smooth estimate while always maintaining a low computational complexity. A Fisher-approximation-based threshold implementing the HTs is also established. The efficiency of this method is illustrated on an example of hyperspectral-source-flux estimation.

  18. Bayesian Inference and Online Learning in Poisson Neuronal Networks.

    PubMed

    Huang, Yanping; Rao, Rajesh P N

    2016-08-01

    Motivated by the growing evidence for Bayesian computation in the brain, we show how a two-layer recurrent network of Poisson neurons can perform both approximate Bayesian inference and learning for any hidden Markov model. The lower-layer sensory neurons receive noisy measurements of hidden world states. The higher-layer neurons infer a posterior distribution over world states via Bayesian inference from inputs generated by sensory neurons. We demonstrate how such a neuronal network with synaptic plasticity can implement a form of Bayesian inference similar to Monte Carlo methods such as particle filtering. Each spike in a higher-layer neuron represents a sample of a particular hidden world state. The spiking activity across the neural population approximates the posterior distribution over hidden states. In this model, variability in spiking is regarded not as a nuisance but as an integral feature that provides the variability necessary for sampling during inference. We demonstrate how the network can learn the likelihood model, as well as the transition probabilities underlying the dynamics, using a Hebbian learning rule. We present results illustrating the ability of the network to perform inference and learning for arbitrary hidden Markov models.

  19. Mixed-Poisson Point Process with Partially-Observed Covariates: Ecological Momentary Assessment of Smoking.

    PubMed

    Neustifter, Benjamin; Rathbun, Stephen L; Shiffman, Saul

    2012-01-01

    Ecological Momentary Assessment is an emerging method of data collection in behavioral research that may be used to capture the times of repeated behavioral events on electronic devices, and information on subjects' psychological states through the electronic administration of questionnaires at times selected from a probability-based design as well as the event times. A method for fitting a mixed Poisson point process model is proposed for the impact of partially-observed, time-varying covariates on the timing of repeated behavioral events. A random frailty is included in the point-process intensity to describe variation among subjects in baseline rates of event occurrence. Covariate coefficients are estimated using estimating equations constructed by replacing the integrated intensity in the Poisson score equations with a design-unbiased estimator. An estimator is also proposed for the variance of the random frailties. Our estimators are robust in the sense that no model assumptions are made regarding the distribution of the time-varying covariates or the distribution of the random effects. However, subject effects are estimated under gamma frailties using an approximate hierarchical likelihood. The proposed approach is illustrated using smoking data.
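
    A minimal sketch of the random-frailty idea described above, assuming a homogeneous baseline intensity and a mean-one gamma frailty; all parameter values are illustrative, and the partially observed time-varying covariates of the actual model are omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_frailty_counts(n_subjects, base_rate, window, shape):
    """Mixed-Poisson sketch: each subject gets a gamma frailty u_i with mean 1 and
    shape `shape`; event counts then follow Poisson(u_i * base_rate * window)."""
    u = rng.gamma(shape, 1.0 / shape, n_subjects)      # mean-one frailties
    return rng.poisson(u * base_rate * window)

counts = simulate_frailty_counts(n_subjects=50_000, base_rate=8.0, window=1.0, shape=0.8)
print(counts.mean(), counts.var())   # variance exceeds the mean: between-subject heterogeneity
```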

  20. Numerical Solution of 3D Poisson-Nernst-Planck Equations Coupled with Classical Density Functional Theory for Modeling Ion and Electron Transport in a Confined Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, Da; Zheng, Bin; Lin, Guang

    2014-08-29

    We have developed efficient numerical algorithms for the solution of 3D steady-state Poisson-Nernst-Planck (PNP) equations with excess chemical potentials described by classical density functional theory (cDFT). The coupled PNP equations are discretized by a finite difference scheme and solved iteratively by the Gummel method with relaxation. The Nernst-Planck equations are transformed into Laplace equations through the Slotboom transformation. An algebraic multigrid method is then applied to efficiently solve the Poisson equation and the transformed Nernst-Planck equations. A novel strategy for calculating excess chemical potentials through fast Fourier transforms is proposed, which reduces the computational complexity from O(N^2) to O(N log N), where N is the number of grid points. Integrals involving the Dirac delta function are evaluated directly by coordinate transformation, which yields more accurate results than applying numerical quadrature to an approximated delta function. Numerical results for ion and electron transport in solid electrolyte for Li ion batteries are shown to be in good agreement with the experimental data and the results from previous studies.
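
    The O(N^2) to O(N log N) reduction mentioned above is the usual gain from evaluating convolution-type integrals with FFTs. Below is a toy one-dimensional sketch of that idea; the kernel and density are placeholders, not the cDFT weight functions of the paper.

```python
import numpy as np

N = 4096
x = np.linspace(-10.0, 10.0, N)
rho = np.exp(-x**2)          # toy density profile
w = np.exp(-np.abs(x))       # toy weight (convolution kernel)

direct = np.convolve(rho, w)                                          # O(N^2) linear convolution
M = 2 * N - 1                                                         # zero-padded length
fft_based = np.fft.irfft(np.fft.rfft(rho, M) * np.fft.rfft(w, M), M)  # O(N log N)

print(np.max(np.abs(direct - fft_based)))   # agreement to round-off error
```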

  1. Multiscale modeling of a rectifying bipolar nanopore: Comparing Poisson-Nernst-Planck to Monte Carlo

    NASA Astrophysics Data System (ADS)

    Matejczyk, Bartłomiej; Valiskó, Mónika; Wolfram, Marie-Therese; Pietschmann, Jan-Frederik; Boda, Dezső

    2017-03-01

    In the framework of a multiscale modeling approach, we present a systematic study of a bipolar rectifying nanopore using a continuum and a particle simulation method. The common ground in the two methods is the application of the Nernst-Planck (NP) equation to compute ion transport in the framework of the implicit-water electrolyte model. The difference is that the Poisson-Boltzmann theory is used in the Poisson-Nernst-Planck (PNP) approach, while the Local Equilibrium Monte Carlo (LEMC) method is used in the particle simulation approach (NP+LEMC) to relate the concentration profile to the electrochemical potential profile. Since we consider a bipolar pore which is short and narrow, we perform simulations using two-dimensional PNP. In addition, results of a non-linear version of PNP that takes crowding of ions into account are shown. We observe that the mean field approximation applied in PNP is appropriate to reproduce the basic behavior of the bipolar nanopore (e.g., rectification) for varying parameters of the system (voltage, surface charge, electrolyte concentration, and pore radius). We present current data that characterize the nanopore's behavior as a device, as well as concentration, electrical potential, and electrochemical potential profiles.

  2. Multiscale modeling of a rectifying bipolar nanopore: Comparing Poisson-Nernst-Planck to Monte Carlo.

    PubMed

    Matejczyk, Bartłomiej; Valiskó, Mónika; Wolfram, Marie-Therese; Pietschmann, Jan-Frederik; Boda, Dezső

    2017-03-28

    In the framework of a multiscale modeling approach, we present a systematic study of a bipolar rectifying nanopore using a continuum and a particle simulation method. The common ground in the two methods is the application of the Nernst-Planck (NP) equation to compute ion transport in the framework of the implicit-water electrolyte model. The difference is that the Poisson-Boltzmann theory is used in the Poisson-Nernst-Planck (PNP) approach, while the Local Equilibrium Monte Carlo (LEMC) method is used in the particle simulation approach (NP+LEMC) to relate the concentration profile to the electrochemical potential profile. Since we consider a bipolar pore which is short and narrow, we perform simulations using two-dimensional PNP. In addition, results of a non-linear version of PNP that takes crowding of ions into account are shown. We observe that the mean field approximation applied in PNP is appropriate to reproduce the basic behavior of the bipolar nanopore (e.g., rectification) for varying parameters of the system (voltage, surface charge, electrolyte concentration, and pore radius). We present current data that characterize the nanopore's behavior as a device, as well as concentration, electrical potential, and electrochemical potential profiles.

  3. Geometrical Effects on Nonlinear Electrodiffusion in Cell Physiology

    NASA Astrophysics Data System (ADS)

    Cartailler, J.; Schuss, Z.; Holcman, D.

    2017-12-01

    We report here new electrical laws, derived from nonlinear electrodiffusion theory, about the effect of the local geometrical structure, such as curvature, on the electrical properties of a cell. We adopt the Poisson-Nernst-Planck equations for charge concentration and electric potential as a model of electrodiffusion. In the case at hand, the entire boundary is impermeable to ions and the electric field satisfies the compatibility condition of Poisson's equation. We construct an asymptotic approximation for certain singular limits to the steady-state solution in a ball with an attached cusp-shaped funnel on its surface. As the number of charges increases, they concentrate at the end of the cusp-shaped funnel. These results can be used in the design of nanopipettes and help to understand the local voltage changes inside dendrites and axons with heterogeneous local geometry.

  4. Effective implementation of wavelet Galerkin method

    NASA Astrophysics Data System (ADS)

    Finěk, Václav; Šimunková, Martina

    2012-11-01

    It was proved by W. Dahmen et al. that an adaptive wavelet scheme is asymptotically optimal for a wide class of elliptic equations. This scheme approximates the solution u by a linear combination of N wavelets, and a benchmark for its performance is the best N-term approximation, which is obtained by retaining the N largest wavelet coefficients of the unknown solution. Moreover, the number of arithmetic operations needed to compute the approximate solution is proportional to N. The most time-consuming part of this scheme is the approximate matrix-vector multiplication. In this contribution, we introduce our implementation of the wavelet Galerkin method for the Poisson equation -Δu = f on a hypercube with homogeneous Dirichlet boundary conditions. In our implementation, we identify the nonzero elements of the stiffness matrix corresponding to the above problem and perform the matrix-vector multiplication only with these nonzero elements.
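
    The best N-term benchmark referred to above simply retains the N wavelet coefficients of largest magnitude. The small illustration below uses a synthetic, rapidly decaying coefficient vector as a stand-in for the unknown solution's coefficients; it is an assumption for demonstration only.

```python
import numpy as np

def best_n_term(coeffs, n):
    """Best N-term approximation: keep the n largest coefficients in magnitude, zero the rest."""
    keep = np.argsort(np.abs(coeffs))[-n:]
    out = np.zeros_like(coeffs)
    out[keep] = coeffs[keep]
    return out

rng = np.random.default_rng(3)
c = rng.standard_normal(1024) / (1.0 + np.arange(1024)) ** 1.5   # decaying (sparse-like) coefficients
for n in (16, 64, 256):
    err = np.linalg.norm(c - best_n_term(c, n))
    print(f"N={n:4d}  l2 error of best N-term approximation = {err:.3e}")
```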

  5. On the connection between multigrid and cyclic reduction

    NASA Technical Reports Server (NTRS)

    Merriam, M. L.

    1984-01-01

    A technique is shown whereby it is possible to relate a particular multigrid process to cyclic reduction using purely mathematical arguments. This technique suggests methods for solving Poisson's equation in one, two, or three dimensions with Dirichlet or Neumann boundary conditions. In one dimension the method is exact and, in fact, reduces to cyclic reduction. This provides a valuable reference point for understanding multigrid techniques. The particular multigrid process analyzed is referred to here as Approximate Cyclic Reduction (ACR) and is one of a class known as Multigrid Reduction methods in the literature. It involves one approximation with a known error term. It is possible to relate the error term in this approximation to certain eigenvector components of the error. These are sharply reduced in amplitude by classical relaxation techniques. The approximation can thus be made a very good one.

  6. ADAPTIVE FINITE ELEMENT MODELING TECHNIQUES FOR THE POISSON-BOLTZMANN EQUATION

    PubMed Central

    HOLST, MICHAEL; MCCAMMON, JAMES ANDREW; YU, ZEYUN; ZHOU, YOUNGCHENG; ZHU, YUNRONG

    2011-01-01

    We consider the design of an effective and reliable adaptive finite element method (AFEM) for the nonlinear Poisson-Boltzmann equation (PBE). We first examine the two-term regularization technique for the continuous problem recently proposed by Chen, Holst, and Xu based on the removal of the singular electrostatic potential inside biomolecules; this technique made possible the development of the first complete solution and approximation theory for the Poisson-Boltzmann equation, the first provably convergent discretization, and also allowed for the development of a provably convergent AFEM. However, in practical implementation, this two-term regularization exhibits numerical instability. Therefore, we examine a variation of this regularization technique which can be shown to be less susceptible to such instability. We establish a priori estimates and other basic results for the continuous regularized problem, as well as for Galerkin finite element approximations. We show that the new approach produces regularized continuous and discrete problems with the same mathematical advantages of the original regularization. We then design an AFEM scheme for the new regularized problem, and show that the resulting AFEM scheme is accurate and reliable, by proving a contraction result for the error. This result, which is one of the first results of this type for nonlinear elliptic problems, is based on using continuous and discrete a priori L∞ estimates to establish quasi-orthogonality. To provide a high-quality geometric model as input to the AFEM algorithm, we also describe a class of feature-preserving adaptive mesh generation algorithms designed specifically for constructing meshes of biomolecular structures, based on the intrinsic local structure tensor of the molecular surface. All of the algorithms described in the article are implemented in the Finite Element Toolkit (FETK), developed and maintained at UCSD. The stability advantages of the new regularization scheme are demonstrated with FETK through comparisons with the original regularization approach for a model problem. The convergence and accuracy of the overall AFEM algorithm is also illustrated by numerical approximation of electrostatic solvation energy for an insulin protein. PMID:21949541

  7. Fractional properties of geophysical field variability on the example of hydrochemical parameters

    NASA Astrophysics Data System (ADS)

    Shevtsov, Boris; Shevtsova, Olga

    2017-10-01

    Using the properties of the compound Poisson process and its fractional generalizations, statistical models of geophysical field variability are considered, using a system of hydrochemical parameters as an example. These models are universal in describing objects of different nature and allow us to explain various pulsing regimes. Manifestations of non-conservatism in the hydrochemical parameter system and the advantages of the systems approach in describing geophysical field variability are discussed.
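
    A minimal sketch of the compound Poisson building block used in such variability models: event times from a Poisson process and random jump sizes accumulated into a path. The rate, horizon and exponential jump law are illustrative assumptions, not the hydrochemical parameterization of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def compound_poisson_path(rate, t_max, jump_sampler):
    """Sample one path of a compound Poisson process: a Poisson(rate*t_max) number of
    events at uniform times, with jump sizes drawn from `jump_sampler`."""
    n = rng.poisson(rate * t_max)
    times = np.sort(rng.uniform(0.0, t_max, n))
    jumps = jump_sampler(n)
    return times, np.cumsum(jumps)

times, path = compound_poisson_path(rate=3.0, t_max=10.0,
                                    jump_sampler=lambda n: rng.exponential(1.0, n))
print(len(times), path[-1] if len(path) else 0.0)   # number of events and final level
```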

  8. An ab-initio investigation on SrLa intermetallic compound

    NASA Astrophysics Data System (ADS)

    Kumar, S. Ramesh; Jaiganesh, G.; Jayalakshmi, V.

    2018-05-01

    The electronic, elastic and thermodynamic properties of the CsCl-type SrLa compound are investigated through density functional theory. The energy-volume relation for this compound has been obtained. The band structure, density of states and charge density in the (110) plane are also examined. The elastic constants (C11, C12 and C44) of SrLa are computed; then, using these elastic constants, the bulk modulus, shear modulus, Young's modulus and Poisson's ratio are derived. The calculated results show that CsCl-type SrLa is ductile at ambient conditions. Thermodynamic quantities such as the free energy, entropy and heat capacity as a function of temperature are estimated and the results obtained are discussed.

  9. Order-disorder effects on the elastic properties of CuMPt6 (M=Cr and Co) compounds

    NASA Astrophysics Data System (ADS)

    Huang, Shuo; Li, Rui-Zi; Qi, San-Tao; Chen, Bao; Shen, Jiang

    2014-04-01

    The elastic properties of CuMPt6 (M=Cr and Co) in the disordered face-centered cubic (fcc) structure and the ordered Cu3Au-type structure are studied with the lattice inversion embedded-atom method. The calculated lattice constant and Debye temperature agree quite well with the comparable experimental data. The obtained formation enthalpy demonstrates that the Cu3Au-type structure is energetically more favorable. Numerical estimates of the elastic constants, bulk/shear modulus, Young's modulus, Poisson's ratio, elastic anisotropy, and Debye temperature for both compounds are performed, and the results suggest that the disordered fcc structure is much softer than the ordered Cu3Au-type structure.

  10. On the estimation variance for the specific Euler-Poincaré characteristic of random networks.

    PubMed

    Tscheschel, A; Stoyan, D

    2003-07-01

    The specific Euler number is an important topological characteristic in many applications. It is considered here for the case of random networks, which may appear in microscopy either as primary objects of investigation or as secondary objects describing in an approximate way other structures such as, for example, porous media. For random networks there is a simple and natural estimator of the specific Euler number. For its estimation variance, a simple Poisson approximation is given. It is based on the general exact formula for the estimation variance. In two examples of quite different nature and topology application of the formulas is demonstrated.

  11. ANALYZING NUMERICAL ERRORS IN DOMAIN HEAT TRANSPORT MODELS USING THE CVBEM.

    USGS Publications Warehouse

    Hromadka, T.V.

    1987-01-01

    Besides providing an exact solution for steady-state heat conduction processes (Laplace-Poisson equations), the CVBEM (complex variable boundary element method) can be used for the numerical error analysis of domain model solutions. For problems where soil-water phase change latent heat effects dominate the thermal regime, heat transport can be approximately modeled as a time-stepped steady-state condition in the thawed and frozen regions, respectively. The CVBEM provides an exact solution of the two-dimensional steady-state heat transport problem, and also provides the error in matching the prescribed boundary conditions by the development of a modeling error distribution or an approximate boundary generation.

  12. Electrostatic potential of B-DNA: effect of interionic correlations.

    PubMed Central

    Gavryushov, S; Zielenkiewicz, P

    1998-01-01

    Modified Poisson-Boltzmann (MPB) equations have been numerically solved to study ionic distributions and mean electrostatic potentials around a macromolecule of arbitrarily complex shape and charge distribution. Results for DNA are compared with those obtained by classical Poisson-Boltzmann (PB) calculations. The comparisons were made for 1:1 and 2:1 electrolytes at ionic strengths up to 1 M. It is found that ion-image charge interactions and interionic correlations, which are neglected by the PB equation, have relatively weak effects on the electrostatic potential at charged groups of the DNA. The PB equation predicts errors in the long-range electrostatic part of the free energy that are only approximately 1.5 kJ/mol per nucleotide even in the case of an asymmetrical electrolyte. In contrast, the spatial correlations between ions drastically affect the electrostatic potential at significant separations from the macromolecule leading to a clearly predicted effect of charge overneutralization. PMID:9826596

  13. Statistical analysis of excitation energies in actinide and rare-earth nuclei

    NASA Astrophysics Data System (ADS)

    Levon, A. I.; Magner, A. G.; Radionov, S. V.

    2018-04-01

    A statistical analysis of the distributions of collective states in actinide and rare-earth nuclei is performed in terms of the nearest-neighbor spacing distribution (NNSD). Several approximations, such as the linear approach to the level-repulsion density and the one suggested by Brody for the NNSDs, were applied in the analysis. We found an intermediate character of the experimental spectra between order and chaos for a number of rare-earth and actinide nuclei. The spectra are closer to the Wigner distribution for energies limited to 3 MeV, and to the Poisson distribution for data including higher excitation energies and higher spins. The latter result is in agreement with theoretical calculations. These features are confirmed by the cumulative distributions, where the Wigner contribution dominates at smaller spacings while the Poisson one is more important at larger spacings, and our linear approach improves the comparison with experimental data at all desired spacings.
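
    For reference, the two limiting nearest-neighbor spacing densities discussed above, together with the Brody interpolation between them, are sketched below. This is a hedged sketch of the standard formulas only; the fitting procedure of the paper is not reproduced.

```python
import numpy as np
from math import gamma, pi

def poisson_nnsd(s):
    """Nearest-neighbor spacing density of an uncorrelated (regular) spectrum."""
    return np.exp(-s)

def wigner_nnsd(s):
    """Wigner surmise, the chaotic (GOE-like) limit."""
    return (pi * s / 2.0) * np.exp(-pi * s**2 / 4.0)

def brody_nnsd(s, q):
    """Brody interpolation: q=0 recovers the Poisson form, q=1 the Wigner surmise."""
    b = gamma((q + 2.0) / (q + 1.0)) ** (q + 1.0)
    return (q + 1.0) * b * s**q * np.exp(-b * s ** (q + 1.0))

s = np.linspace(0.0, 6.0, 601)
print(np.max(np.abs(brody_nnsd(s, 0.0) - poisson_nnsd(s))))   # ~0: Poisson limit
print(np.max(np.abs(brody_nnsd(s, 1.0) - wigner_nnsd(s))))    # ~0: Wigner limit
print(np.sum(brody_nnsd(s, 0.5)) * (s[1] - s[0]))             # density integrates to ~1
```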

  14. Study of the Anisotropic Elastoplastic Properties of β-Ga2O3 Films Synthesized on SiC/Si Substrates

    NASA Astrophysics Data System (ADS)

    Grashchenko, A. S.; Kukushkin, S. A.; Nikolaev, V. I.; Osipov, A. V.; Osipova, E. V.; Soshnikov, I. P.

    2018-05-01

    The structural and mechanical properties of gallium oxide films grown on silicon crystallographic planes (001), (011), and (111) with a buffer layer of silicon carbide are investigated. Nanoindentation was used to study the elastoplastic properties of gallium oxide and also to determine the elastic recovery parameter of the films under study. The tensile strength, hardness, elasticity tensor, compliance tensor, Young's modulus, Poisson's ratio, and other characteristics of gallium oxide were calculated using quantum chemistry methods. It was found that the gallium oxide crystal is auxetic because, for some stretching directions, the Poisson's ratio takes on negative values. The calculated values correspond quantitatively to the experimental data. It is concluded that the elastoplastic properties of gallium oxide films approximately correspond to the properties of bulk crystals and that a change in the orientation of the silicon surface leads to a significant change in the orientation of gallium oxide.

  15. Calculated and measured fields in superferric wiggler magnets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blum, E.B.; Solomon, L.

    1995-02-01

    Although Klaus Halbach is widely known and appreciated as the originator of the computer program POISSON for electromagnetic field calculation, Klaus has always believed that analytical methods can give much more insight into the performance of a magnet than numerical simulation. Analytical approximations readily show how the different aspects of a magnet's design, such as pole dimensions, current, and coil configuration, contribute to the performance. These methods yield accuracies of better than 10%. Analytical methods should therefore be used when conceptualizing a magnet design. Computer analysis can then be used for refinement. A simple model is presented for the peak on-axis field of an electromagnetic wiggler with iron poles and superconducting coils. The model is applied to the radiator section of the superconducting wiggler for the BNL Harmonic Generation Free Electron Laser. The predictions of the model are compared to the measured field and the results from POISSON.

  16. Performance of cellular frequency-hopped spread-spectrum radio networks

    NASA Astrophysics Data System (ADS)

    Gluck, Jeffrey W.; Geraniotis, Evaggelos

    1989-10-01

    Multiple access interference is characterized for cellular mobile networks, in which users are assumed to be Poisson-distributed in the plane and employ frequency-hopped spread-spectrum signaling with transmitter-oriented assignment of frequency-hopping patterns. Exact expressions for the bit error probabilities are derived for binary coherently demodulated systems without coding. Approximations for the packet error probability are derived for coherent and noncoherent systems and these approximations are applied when forward-error-control coding is employed. In all cases, the effects of varying interference power are accurately taken into account according to some propagation law. Numerical results are given in terms of bit error probability for the exact case and throughput for the approximate analyses. Comparisons are made with previously derived bounds and it is shown that these tend to be very pessimistic.

  17. First-principles study on the structure, elastic properties, hardness and electronic structure of TMB4 (TM=Cr, Re, Ru and Os) compounds

    NASA Astrophysics Data System (ADS)

    Pan, Y.; Zheng, W. T.; Guan, W. M.; Zhang, K. H.; Fan, X. F.

    2013-11-01

    The structural formation, elastic properties, hardness and electronic structure of TMB4 (TM=Cr, Re, Ru and Os) compounds are investigated using a first-principles approach. The value of C22 for these compounds is almost twice as large as C11 and C33. The intrinsic hardness, shear modulus and Young's modulus follow the sequence CrB4>ReB4>RuB4>OsB4, and the Poisson's ratio and B/G ratio of TMB4 follow the order of CrB4

  18. TRUNCATED RANDOM MEASURES

    DTIC Science & Technology

    2018-01-12

    ... sequential representations, a method is required for determining which to use for the application at hand and, once a representation is selected, for ... Consider a Poisson point process on R+ := [0 ... the heart of the study of truncated CRMs. They provide an iterative method that can be terminated at any point to yield a finite approximation to the ...

  19. Logistic quantile regression provides improved estimates for bounded avian counts: a case study of California Spotted Owl fledgling production

    Treesearch

    Brian S. Cade; Barry R. Noon; Rick D. Scherer; John J. Keane

    2017-01-01

    Counts of avian fledglings, nestlings, or clutch size that are bounded below by zero and above by some small integer form a discrete random variable distribution that is not approximated well by conventional parametric count distributions such as the Poisson or negative binomial. We developed a logistic quantile regression model to provide estimates of the empirical...

  20. A Boussinesq-scaled, pressure-Poisson water wave model

    NASA Astrophysics Data System (ADS)

    Donahue, Aaron S.; Zhang, Yao; Kennedy, Andrew B.; Westerink, Joannes J.; Panda, Nishant; Dawson, Clint

    2015-02-01

    Through the use of Boussinesq scaling we develop and test a model for resolving non-hydrostatic pressure profiles in nonlinear wave systems over varying bathymetry. A Green-Naghdi type polynomial expansion is used to resolve the pressure profile along the vertical axis; this is then inserted into the pressure-Poisson equation, retaining terms up to a prescribed order, and solved using a weighted residual approach. The model shows rapid convergence with increasing order of the polynomial expansion, which can be greatly improved through the application of asymptotic rearrangement. Models of Boussinesq scaling of the fully nonlinear O(μ^2) and weakly nonlinear O(μ^N) type are presented, and the analytical and numerical properties of the O(μ^2) and O(μ^4) models are discussed. Optimal basis functions in the Green-Naghdi expansion are determined through manipulation of the free parameters which arise from the Boussinesq scaling. The optimal O(μ^2) model has dispersion accuracy equivalent to a Padé [2,2] approximation with one extra free parameter. The optimal O(μ^4) model obtains dispersion accuracy equivalent to a Padé [4,4] approximation with two free parameters which can be used to optimize shoaling or nonlinear properties. The O(μ^4) model shows excellent agreement with experimental data.

  1. A Bayesian destructive weighted Poisson cure rate model and an application to a cutaneous melanoma data.

    PubMed

    Rodrigues, Josemar; Cancho, Vicente G; de Castro, Mário; Balakrishnan, N

    2012-12-01

    In this article, we propose a new Bayesian flexible cure rate survival model, which generalises the stochastic model of Klebanov et al. [Klebanov LB, Rachev ST and Yakovlev AY. A stochastic-model of radiation carcinogenesis--latent time distributions and their properties. Math Biosci 1993; 113: 51-75], and has much in common with the destructive model formulated by Rodrigues et al. [Rodrigues J, de Castro M, Balakrishnan N and Cancho VG. Destructive weighted Poisson cure rate models. Technical Report, Universidade Federal de São Carlos, São Carlos-SP. Brazil, 2009 (accepted in Lifetime Data Analysis)]. In our approach, the accumulated number of lesions or altered cells follows a compound weighted Poisson distribution. This model is more flexible than the promotion time cure model in terms of dispersion. Moreover, it possesses an interesting and realistic interpretation of the biological mechanism of the occurrence of the event of interest as it includes a destructive process of tumour cells after an initial treatment or the capacity of an individual exposed to irradiation to repair altered cells that results in cancer induction. In other words, what is recorded is only the damaged portion of the original number of altered cells not eliminated by the treatment or repaired by the repair system of an individual. Markov Chain Monte Carlo (MCMC) methods are then used to develop Bayesian inference for the proposed model. Also, some discussions on the model selection and an illustration with a cutaneous melanoma data set analysed by Rodrigues et al. [Rodrigues J, de Castro M, Balakrishnan N and Cancho VG. Destructive weighted Poisson cure rate models. Technical Report, Universidade Federal de São Carlos, São Carlos-SP. Brazil, 2009 (accepted in Lifetime Data Analysis)] are presented.

  2. Polishing compound for plastic surfaces

    DOEpatents

    Stowell, M.S.

    1991-01-01

    This invention comprises a polishing compound for plastic materials. The compound includes, approximately by weight, 25 to 80 parts of at least one petroleum distillate lubricant, 1 to 12 parts mineral spirits, 50 to 155 parts abrasive paste, and 15 to 60 parts water. Preferably, the compound includes approximately 37 to 42 parts of at least one petroleum distillate lubricant, up to 8 parts mineral spirits, 95 to 110 parts abrasive paste, and 50 to 55 parts water. The proportions of the ingredients are varied in accordance with the particular application. The compound is used on PLEXIGLAS{trademark}, LEXAN{trademark}, LUCITE{trademark}, polyvinyl chloride (PVC), and similar plastic materials whenever a smooth, clear polished surface is desired.

  3. Prediction study of structural, elastic and electronic properties of FeMP (M = Ti, Zr, Hf) compounds

    NASA Astrophysics Data System (ADS)

    Tanto, A.; Chihi, T.; Ghebouli, M. A.; Reffas, M.; Fatmi, M.; Ghebouli, B.

    2018-06-01

    First principles calculations are applied in the study of FeMP (M = Ti, Zr, Hf) compounds. We investigate the structural, elastic, mechanical and electronic properties by combining first-principles calculations with the CASTEP approach. For ideal polycrystalline FeMP (M = Ti, Zr, Hf) the shear modulus, Young's modulus, Poisson's ratio, elastic anisotropy indexes, Pugh's criterion, elastic wave velocities and Debye temperature are also calculated from the single crystal elastic constants. The shear anisotropic factors and anisotropy are obtained from the single crystal elastic constants. The Debye temperature is calculated from the average elastic wave velocity obtained from shear and bulk modulus as well as the integration of elastic wave velocities in different directions of the single crystal.

  4. Normal forms for Poisson maps and symplectic groupoids around Poisson transversals

    NASA Astrophysics Data System (ADS)

    Frejlich, Pedro; Mărcuț, Ioan

    2018-03-01

    Poisson transversals are submanifolds in a Poisson manifold which intersect all symplectic leaves transversally and symplectically. In this communication, we prove a normal form theorem for Poisson maps around Poisson transversals. A Poisson map pulls a Poisson transversal back to a Poisson transversal, and our first main result states that simultaneous normal forms exist around such transversals, for which the Poisson map becomes transversally linear, and intertwines the normal form data of the transversals. Our second result concerns symplectic integrations. We prove that a neighborhood of a Poisson transversal is integrable exactly when the Poisson transversal itself is integrable, and in that case we prove a normal form theorem for the symplectic groupoid around its restriction to the Poisson transversal, which puts all structure maps in normal form. We conclude by illustrating our results with examples arising from Lie algebras.

  5. Normal forms for Poisson maps and symplectic groupoids around Poisson transversals.

    PubMed

    Frejlich, Pedro; Mărcuț, Ioan

    2018-01-01

    Poisson transversals are submanifolds in a Poisson manifold which intersect all symplectic leaves transversally and symplectically. In this communication, we prove a normal form theorem for Poisson maps around Poisson transversals. A Poisson map pulls a Poisson transversal back to a Poisson transversal, and our first main result states that simultaneous normal forms exist around such transversals, for which the Poisson map becomes transversally linear, and intertwines the normal form data of the transversals. Our second result concerns symplectic integrations. We prove that a neighborhood of a Poisson transversal is integrable exactly when the Poisson transversal itself is integrable, and in that case we prove a normal form theorem for the symplectic groupoid around its restriction to the Poisson transversal, which puts all structure maps in normal form. We conclude by illustrating our results with examples arising from Lie algebras.

  6. Polishing compound for plastic surfaces

    DOEpatents

    Stowell, Michael S.

    1995-01-01

    A polishing compound for plastic surfaces. The compound contains by weight approximately 4 to 17 parts of at least one petroleum distillate lubricant, 1 to 6 parts mineral spirits, 2.5 to 15 parts abrasive particles, and 2.5 to 10 parts water. The abrasive is tripoli or a similar material that contains fine particles of silica. Preferably, most of the abrasive particles are less than approximately 10 microns, more preferably less than approximately 5 microns, in size. The compound is used on PLEXIGLAS.TM., LEXAN.TM., LUCITE.TM., polyvinyl chloride (PVC) and similar plastic materials whenever a smooth, clear polished surface is desired.

  7. Polishing compound for plastic surfaces

    DOEpatents

    Stowell, M.S.

    1993-01-01

    A polishing compound for plastic surfaces is disclosed. The compound contains by weight approximately 4 to 17 parts at least one petroleum distillate lubricant, 1 to 6 parts mineral spirits, 2.5 to 15 parts abrasive particles, and 2.5 to 10 parts water. The abrasive is tripoli or a similar material that contains colloidal silica. Preferably, most of the abrasive particles are less than approximately 10 microns, more preferably less than approximately 5 microns in size. The compound is used on PLEXIGLAS{sup TM}, LEXAN{sup TM}, LUCITE{sup TM}, polyvinyl chloride (PVC) and similar plastic materials whenever a smooth, clear polished surface is desired.

  8. Polishing compound for plastic surfaces

    DOEpatents

    Stowell, M.S.

    1995-08-22

    A polishing compound for plastic surfaces is disclosed. The compound contains by weight approximately 4 to 17 parts of at least one petroleum distillate lubricant, 1 to 6 parts mineral spirits, 2.5 to 15 parts abrasive particles, and 2.5 to 10 parts water. The abrasive is tripoli or a similar material that contains fine particles of silica. Preferably, most of the abrasive particles are less than approximately 10 microns, more preferably less than approximately 5 microns, in size. The compound is used on PLEXIGLAS{trademark}, LEXAN{trademark}, LUCITE{trademark}, polyvinyl chloride (PVC) and similar plastic materials whenever a smooth, clear polished surface is desired. 5 figs.

  9. A variational approach to moment-closure approximations for the kinetics of biomolecular reaction networks

    NASA Astrophysics Data System (ADS)

    Bronstein, Leo; Koeppl, Heinz

    2018-01-01

    Approximate solutions of the chemical master equation and the chemical Fokker-Planck equation are an important tool in the analysis of biomolecular reaction networks. Previous studies have highlighted a number of problems with the moment-closure approach used to obtain such approximations, calling it an ad hoc method. In this article, we give a new variational derivation of moment-closure equations which provides us with an intuitive understanding of their properties and failure modes and allows us to correct some of these problems. We use mixtures of product-Poisson distributions to obtain a flexible parametric family which solves the commonly observed problem of divergences at low system sizes. We also extend the recently introduced entropic matching approach to arbitrary ansatz distributions and Markov processes, demonstrating that it is a special case of variational moment closure. This provides us with a particularly principled approximation method. Finally, we extend the above approaches to cover the approximation of multi-time joint distributions, resulting in a viable alternative to process-level approximations which are often intractable.

  10. Electrostatic Solvation Free Energy of Amino Acid Side Chain Analogs: Implications for the Validity of Electrostatic Linear Response in Water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Bin; Pettitt, Bernard M.

    Electrostatic free energies of solvation for 15 neutral amino acid side chain analogs are computed. We compare three methods of varying computational complexity and accuracy for three force fields: free energy simulations, Poisson-Boltzmann (PB), and the linear response approximation (LRA), using the AMBER, CHARMM, and OPLSAA force fields. We find that deviations from simulation start at low charges for solutes. The approximate PB and LRA methods produce an overestimation of electrostatic solvation free energies for most of the molecules studied here. These deviations are remarkably systematic. The variations among force fields are almost as large as the variations found among methods. Our study confirms that the success of the approximate methods for electrostatic solvation free energies comes from their ability to evaluate free energy differences accurately.

  11. Accurate spectral solutions for the parabolic and elliptic partial differential equations by the ultraspherical tau method

    NASA Astrophysics Data System (ADS)

    Doha, E. H.; Abd-Elhameed, W. M.

    2005-09-01

    We present double ultraspherical spectral methods that allow the efficient approximate solution of parabolic partial differential equations in a square subject to the most general inhomogeneous mixed boundary conditions. The differential equations with their boundary and initial conditions are reduced to systems of ordinary differential equations for the time-dependent expansion coefficients. These systems are greatly simplified by using tensor matrix algebra and are solved by the step-by-step method. Numerical applications of how to use these methods are described. Numerical results obtained compare favorably with those of the analytical solutions. Accurate double ultraspherical spectral approximations for Poisson's and Helmholtz's equations are also noted. Numerical experiments show that a spectral approximation based on Chebyshev polynomials of the first kind is not always better than others based on ultraspherical polynomials.

  12. A Generalized QMRA Beta-Poisson Dose-Response Model.

    PubMed

    Xie, Gang; Roiko, Anne; Stratton, Helen; Lemckert, Charles; Dunn, Peter K; Mengersen, Kerrie

    2016-10-01

    Quantitative microbial risk assessment (QMRA) is widely accepted for characterizing the microbial risks associated with food, water, and wastewater. Single-hit dose-response models are the most commonly used dose-response models in QMRA. Denoting P_I(d) as the probability of infection at a given mean dose d, a three-parameter generalized QMRA beta-Poisson dose-response model, P_I(d|α,β,r*), is proposed in which the minimum number of organisms required for causing infection, K_min, is not fixed, but a random variable following a geometric distribution with parameter 0
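
    For context, the widely used two-parameter approximate beta-Poisson dose-response curve that the three-parameter model above generalizes is sketched below; the alpha and beta values are illustrative, not fitted values from the paper.

```python
import numpy as np

def beta_poisson(dose, alpha, beta):
    """Approximate two-parameter beta-Poisson dose-response curve used in QMRA:
    P_I(d) = 1 - (1 + d/beta)^(-alpha)."""
    return 1.0 - (1.0 + np.asarray(dose, dtype=float) / beta) ** (-alpha)

doses = np.logspace(0, 6, 7)                    # illustrative mean doses
print(beta_poisson(doses, alpha=0.3, beta=1000.0))
```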

  13. A new method for extracting near-surface mass-density anomalies from land-based gravity data, based on a special case of Poisson's PDE at the Earth's surface: A case study of salt diapirs in the south of Iran

    NASA Astrophysics Data System (ADS)

    AllahTavakoli, Y.; Safari, A.; Ardalan, A.; Bahroudi, A.

    2015-12-01

    The current research provides a method for tracking near-surface mass-density anomalies using only land-based gravity data, based on a special version of Poisson's partial differential equation (PDE) for the gravitational field at the Earth's surface. The research demonstrates how Poisson's PDE provides the capability to extract near-surface mass-density anomalies from land-based gravity data. Herein, this version of Poisson's PDE is mathematically formulated at the Earth's surface and then used to develop the new method for approximating the mass density via derivatives of the Earth's gravitational field (i.e., via the gradient tensor). The author believes that the PDE can give new knowledge about the behavior of the Earth's gravitational field at the Earth's surface, which can be useful for developing new methods of Earth mass-density determination. In a case study, the proposed method is applied to a set of gravity stations located in the south of Iran. The results were numerically validated against existing knowledge of the geological structures in the area of the case study. The method was also compared with two standard methods of mass-density determination. All the numerical experiments show that the proposed approach is well suited for tracking near-surface mass-density anomalies using only gravity data. Finally, the approach is also applied to petroleum exploration studies of salt diapirs in the south of Iran.

  14. Distribution of Escherichia coli O157:H7 in ground beef: Assessing the clustering intensity for an industrial-scale grinder and a low and localized initial contamination.

    PubMed

    Loukiadis, Estelle; Bièche-Terrier, Clémence; Malayrat, Catherine; Ferré, Franck; Cartier, Philippe; Augustin, Jean-Christophe

    2017-06-05

    Undercooked ground beef is regularly implicated in food-borne outbreaks involving pathogenic Shiga toxin-producing Escherichia coli. The dispersion of bacteria during mixing processes is of major concern for quantitative microbiological risk assessment, since clustering will influence the number of bacteria consumers might be exposed to as well as the performance of sampling plans used to detect contaminated ground beef batches. In this study, batches of 25 kg of ground beef were manufactured according to a process mimicking industrial-scale grinding with three successive steps: primary grinding, mixing and final grinding. The ground beef batches were made with 100% chilled trims or with 2/3 chilled trims and 1/3 frozen trims. Prior to grinding, one beef trim was contaminated with approximately 10^6-10^7 CFU of E. coli O157:H7 on a surface of 0.5 cm^2 to reach a concentration of 10-100 cells/g in the ground beef. The E. coli O157:H7 distribution in ground beef was characterized by enumerating 60 samples (20 samples of 5 g, 20 samples of 25 g and 20 samples of 100 g) and fitting a Poisson-gamma model to describe the variability of bacterial counts. The shape parameter of the gamma distribution, also known as the dispersion parameter reflecting the amount of clustering, was estimated between 1.0 and 1.6. This k-value of approximately 1 expresses a moderate level of clustering of bacterial cells in the ground beef. The impact of this clustering on the performance of sampling strategies was relatively limited in comparison to the classical hypothesis of a random repartition of pathogenic cells in mixed materials (a purely Poisson distribution instead of a Poisson-gamma distribution). Copyright © 2017 Elsevier B.V. All rights reserved.
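
    A small sketch of why the estimated dispersion matters for sampling plans: the probability that a test portion contains at least one cell under a purely Poisson (random) dispersion versus a Poisson-gamma model with dispersion parameter k near 1. The contamination level used here is illustrative, not an estimate from the study.

```python
import numpy as np

def p_positive_poisson(conc, mass):
    """Probability that a sample of `mass` grams contains at least one cell when
    cells are randomly (Poisson) dispersed at `conc` CFU/g."""
    return 1.0 - np.exp(-conc * mass)

def p_positive_poisson_gamma(conc, mass, k):
    """Same probability under a Poisson-gamma (negative binomial) model with
    dispersion parameter k; k -> infinity recovers the Poisson case."""
    return 1.0 - (1.0 + conc * mass / k) ** (-k)

conc = 0.02                      # illustrative contamination level, CFU/g
for mass in (5.0, 25.0, 100.0):  # test-portion sizes matching the study design
    print(mass, p_positive_poisson(conc, mass), p_positive_poisson_gamma(conc, mass, k=1.3))
```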

  15. Random transitions described by the stochastic Smoluchowski-Poisson system and by the stochastic Keller-Segel model.

    PubMed

    Chavanis, P H; Delfini, L

    2014-03-01

    We study random transitions between two metastable states that appear below a critical temperature in a one-dimensional self-gravitating Brownian gas with a modified Poisson equation experiencing a second order phase transition from a homogeneous phase to an inhomogeneous phase [P. H. Chavanis and L. Delfini, Phys. Rev. E 81, 051103 (2010)]. We numerically solve the N-body Langevin equations and the stochastic Smoluchowski-Poisson system, which takes fluctuations (finite N effects) into account. The system switches back and forth between the two metastable states (bistability) and the particles accumulate successively at the center or at the boundary of the domain. We explicitly show that these random transitions exhibit the phenomenology of the ordinary Kramers problem for a Brownian particle in a double-well potential. The distribution of the residence time is Poissonian and the average lifetime of a metastable state is given by the Arrhenius law; i.e., it is proportional to the exponential of the barrier of free energy ΔF divided by the energy of thermal excitation k_B T. Since the free energy is proportional to the number of particles N for a system with long-range interactions, the lifetime of metastable states scales as e^N and is considerable for N≫1. As a result, in many applications, metastable states of systems with long-range interactions can be considered as stable states. However, for moderate values of N, or close to a critical point, the lifetime of the metastable states is reduced since the barrier of free energy decreases. In that case, the fluctuations become important and the mean field approximation is no longer valid. This is the situation considered in this paper. By an appropriate change of notations, our results also apply to bacterial populations experiencing chemotaxis in biology. Their dynamics can be described by a stochastic Keller-Segel model that takes fluctuations into account and goes beyond the usual mean field approximation.

  16. Zeroth Poisson Homology, Foliated Cohomology and Perfect Poisson Manifolds

    NASA Astrophysics Data System (ADS)

    Martínez-Torres, David; Miranda, Eva

    2018-01-01

    We prove that, for compact regular Poisson manifolds, the zeroth homology group is isomorphic to the top foliated cohomology group, and we give some applications. In particular, we show that, for regular unimodular Poisson manifolds, top Poisson and foliated cohomology groups are isomorphic. Inspired by the symplectic setting, we define what a perfect Poisson manifold is. We use these Poisson homology computations to provide families of perfect Poisson manifolds.

  17. Analysis of a Compressible Fluid Soft Recoil (CFSR) Concept Applied to a 155 MM Howitzer

    DTIC Science & Technology

    1979-03-01

    Nitrile or Buna-N (NBR) rubber with backup rings of nylotron [figure residue: piston seal labels]. An unresolved problem is that the coefficient of ... [nomenclature fragment: fluid at atmospheric pressure; Poisson's ratio for nitrile rubber; dynamic coefficient of friction for rubber; mass of recoiling parts; weight of ...] (Greene, Tweed & Co. Palmetto catalog.) μ = 0.50 = coefficient of friction (an approximate figure for rubber supplied by RIA Rubber ...

  18. Development of a Fuel Spill/Vapor Migration Modeling System.

    DTIC Science & Technology

    1985-12-01

    ... transforms resulting in a direct solution of the differential equation. A second-order finite difference approximation to the Poisson equation is ... Development of a Fuel Spill/Vapor Migration Modeling System, W.G. England and L.H. Teuscher, Tracer Technologies, AFWAL-TR-85-2089, Dec. 1985.

  19. Structural, electronic, elastic, thermoelectric and thermodynamic properties of the NbMSb half-Heusler (M=Fe, Ru, Os) compounds with first-principles calculations

    NASA Astrophysics Data System (ADS)

    Abid, O. Miloud; Menouer, S.; Yakoubi, A.; Khachai, H.; Omran, S. Bin; Murtaza, G.; Prakash, Deo; Khenata, R.; Verma, K. D.

    2016-05-01

    The structural, electronic, elastic, thermoelectric and thermodynamic properties of the NbMSb (M = Fe, Ru, Os) half-Heusler compounds are reported. The full-potential linearized augmented plane wave (FP-LAPW) plus local orbital (lo) method, based on density functional theory (DFT), was employed for the present study. The equilibrium lattice parameters are in good agreement with the available experimental measurements. The electronic band structure and Boltzmann transport calculations indicate a narrow indirect energy band gap for the compounds, with an electronic structure favorable for thermoelectric performance as well as substantial thermopowers over the temperature range 300 K to 800 K. Furthermore, good potential for thermoelectric performance (thermopower S ≥ 500 μeV) was found at higher temperatures. In addition, the analysis of the charge density and the partial and total densities of states (DOS) of the three compounds demonstrates their semiconducting, ionic and covalent characters. Conversely, the calculated values of the Poisson's ratio and the B/G ratio indicate their ductile nature. The thermal properties of the compounds were calculated with the quasi-harmonic Debye model as implemented in the GIBBS code.

  20. First Principles Investigation of Fluorine Based Strontium Series of Perovskites

    NASA Astrophysics Data System (ADS)

    Erum, Nazia; Azhar Iqbal, Muhammad

    2016-11-01

    Density functional theory is used to explore the structural, elastic, and mechanical properties of SrLiF3, SrNaF3, SrKF3 and SrRbF3 fluoroperovskite compounds by means of an ab-initio full-potential linearized augmented plane wave (FP-LAPW) method. Several lattice parameters are employed to obtain an accurate equilibrium volume (Vo). The resultant quantities include the ground state energy, elastic constants, shear modulus, bulk modulus, Young's modulus, Cauchy's pressure, Poisson's ratio, shear constant, elastic anisotropy factor, Kleinman's parameter, melting temperature, and Lamé's coefficient. The calculated structural parameters via DFT as well as analytical methods are found to be consistent with experimental findings. Chemical bonding is used to investigate the corresponding chemical trends, which authenticate a combination of covalent-ionic behavior. Furthermore, electron density plots as well as elastic and mechanical properties are reported for the first time, which reveal that the fluorine-based strontium series of perovskites are mechanically stable and possess weak resistance to shear deformation compared with their resistance to unidirectional compression, while brittleness and ionic behavior dominate in them and decrease from SrLiF3 to SrRbF3. The calculated Cauchy's pressure, Poisson's ratio and B/G ratio also prove the ionic nature of these compounds. The present methodology represents an effective and influential approach to calculate the whole set of elastic and mechanical parameters, which would support the understanding of various physical phenomena and empower device engineers to implement these materials in numerous applications.

  1. Maximum Likelihood Time-of-Arrival Estimation of Optical Pulses via Photon-Counting Photodetectors

    NASA Technical Reports Server (NTRS)

    Erkmen, Baris I.; Moision, Bruce E.

    2010-01-01

    Many optical imaging, ranging, and communications systems rely on the estimation of the arrival time of an optical pulse. Recently, such systems have been increasingly employing photon-counting photodetector technology, which changes the statistics of the observed photocurrent. This requires time-of-arrival estimators to be developed and their performances characterized. The statistics of the output of an ideal photodetector, which are well modeled as a Poisson point process, were considered. An analytical model was developed for the mean-square error of the maximum likelihood (ML) estimator, demonstrating two phenomena that cause deviations from the minimum achievable error at low signal power. An approximation was derived to the threshold at which the ML estimator essentially fails to provide better than a random guess of the pulse arrival time. Comparing the analytic model performance predictions to those obtained via simulations, it was verified that the model accurately predicts the ML performance over all regimes considered. There is little prior art that attempts to understand the fundamental limitations to time-of-arrival estimation from Poisson statistics. This work establishes both a simple mathematical description of the error behavior, and the associated physical processes that yield this behavior. Previous work on mean-square error characterization for ML estimators has predominantly focused on additive Gaussian noise. This work demonstrates that the discrete nature of the Poisson noise process leads to a distinctly different error behavior.
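
    As a hedged illustration of the Poisson point-process estimation problem described above (not the paper's analytical error model), the sketch below simulates photon arrival times for a Gaussian-shaped pulse on a flat background and recovers the arrival time by maximizing the Poisson-process log-likelihood over a grid; the pulse shape, rates and observation window are hypothetical.

```python
import numpy as np

# Illustrative sketch: ML time-of-arrival for a Gaussian optical pulse
# observed through Poisson photon counts.  All parameter values are made up.
rng = np.random.default_rng(1)
T, tau_true, sigma = 10.0, 4.3, 0.3
ns, nb = 50.0, 20.0                        # mean signal / background photons

def rate(t, tau):                          # photons per unit time
    return nb / T + ns * np.exp(-0.5 * ((t - tau) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Simulate the inhomogeneous Poisson process by thinning.
lam_max = rate(tau_true, tau_true)
cand = rng.uniform(0, T, rng.poisson(lam_max * T))
times = cand[rng.uniform(0, lam_max, cand.size) < rate(cand, tau_true)]

# For a fixed window the integrated rate is (nearly) independent of tau, so
# the ML estimate maximizes the sum of log-rates at the photon arrival times.
taus = np.linspace(0, T, 2001)
loglik = [np.sum(np.log(rate(times, tau))) for tau in taus]
print(f"true arrival {tau_true:.2f}, ML estimate {taus[int(np.argmax(loglik))]:.2f}")
```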

  2. Bayesian inference for unidirectional misclassification of a binary response trait.

    PubMed

    Xia, Michelle; Gustafson, Paul

    2018-03-15

    When assessing association between a binary trait and some covariates, the binary response may be subject to unidirectional misclassification. Unidirectional misclassification can occur when revealing a particular level of the trait is associated with a type of cost, such as a social desirability or financial cost. The feasibility of addressing misclassification is commonly obscured by model identification issues. The current paper attempts to study the efficacy of inference when the binary response variable is subject to unidirectional misclassification. From a theoretical perspective, we demonstrate that the key model parameters possess identifiability, except for the case with a single binary covariate. From a practical standpoint, the logistic model with quantitative covariates can be weakly identified, in the sense that the Fisher information matrix may be near singular. This can make learning some parameters difficult under certain parameter settings, even with quite large samples. In other cases, the stronger identification enables the model to provide more effective adjustment for unidirectional misclassification. An extension to the Poisson approximation of the binomial model reveals the identifiability of the Poisson and zero-inflated Poisson models. For fully identified models, the proposed method adjusts for misclassification based on learning from data. For binary models where there is difficulty in identification, the method is useful for sensitivity analyses on the potential impact from unidirectional misclassification. Copyright © 2017 John Wiley & Sons, Ltd.

  3. Protein-ion binding process on finite macromolecular concentration. A Poisson-Boltzmann and Monte Carlo study.

    PubMed

    de Carvalho, Sidney Jurado; Fenley, Márcia O; da Silva, Fernando Luís Barroso

    2008-12-25

    Electrostatic interactions are one of the key driving forces for protein-ligand complexation. Different levels of theoretical modeling of such processes are available in the literature. Most of the studies in the molecular biology field are performed within the framework of numerical solutions of the Poisson-Boltzmann equation and dielectric continuum models. In such dielectric continuum models, there are two pivotal questions: (a) how the protein dielectric medium should be modeled, and (b) what protocol should be used when solving this effective Hamiltonian. By means of Monte Carlo (MC) and Poisson-Boltzmann (PB) calculations, we define the applicability of the PB approach with linear and nonlinear responses for macromolecular electrostatic interactions in electrolyte solution, revealing some physical mechanisms and limitations behind it, especially due to the increase of both macromolecular charge and concentration out of the strong coupling regime. A discrepancy between PB and MC for binding constant shifts is shown and explained in terms of the manner in which PB approximates the excess chemical potentials of the ligand, and not as a consequence of the nonlinear thermal treatment and/or explicit ion-ion interactions as could be argued. Our findings also show that the nonlinear PB predictions with a low dielectric response reproduce well the pK shift calculations carried out with a uniform dielectric model. This confirms and completes previous results obtained by both MC and linear PB calculations.

  4. Poisson-Nernst-Planck equations for simulating biomolecular diffusion-reaction processes I: Finite element solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu Benzhuo; Holst, Michael J.; Center for Theoretical Biological Physics, University of California San Diego, La Jolla, CA 92093

    2010-09-20

    In this paper we developed accurate finite element methods for solving 3-D Poisson-Nernst-Planck (PNP) equations with singular permanent charges for simulating electrodiffusion in solvated biomolecular systems. The electrostatic Poisson equation was defined in the biomolecules and in the solvent, while the Nernst-Planck equation was defined only in the solvent. We applied a stable regularization scheme to remove the singular component of the electrostatic potential induced by the permanent charges inside biomolecules, and formulated regular, well-posed PNP equations. An inexact-Newton method was used to solve the coupled nonlinear elliptic equations for the steady problems; while an Adams-Bashforth-Crank-Nicolson method was devised for time integration for the unsteady electrodiffusion. We numerically investigated the conditioning of the stiffness matrices for the finite element approximations of the two formulations of the Nernst-Planck equation, and theoretically proved that the transformed formulation is always associated with an ill-conditioned stiffness matrix. We also studied the electroneutrality of the solution and its relation with the boundary conditions on the molecular surface, and concluded that a large net charge concentration is always present near the molecular surface due to the presence of multiple species of charged particles in the solution. The numerical methods are shown to be accurate and stable by various test problems, and are applicable to real large-scale biophysical electrodiffusion problems.

  5. Poisson-Nernst-Planck Equations for Simulating Biomolecular Diffusion-Reaction Processes I: Finite Element Solutions

    PubMed Central

    Lu, Benzhuo; Holst, Michael J.; McCammon, J. Andrew; Zhou, Y. C.

    2010-01-01

    In this paper we developed accurate finite element methods for solving 3-D Poisson-Nernst-Planck (PNP) equations with singular permanent charges for electrodiffusion in solvated biomolecular systems. The electrostatic Poisson equation was defined in the biomolecules and in the solvent, while the Nernst-Planck equation was defined only in the solvent. We applied a stable regularization scheme to remove the singular component of the electrostatic potential induced by the permanent charges inside biomolecules, and formulated regular, well-posed PNP equations. An inexact-Newton method was used to solve the coupled nonlinear elliptic equations for the steady problems; while an Adams-Bashforth-Crank-Nicolson method was devised for time integration for the unsteady electrodiffusion. We numerically investigated the conditioning of the stiffness matrices for the finite element approximations of the two formulations of the Nernst-Planck equation, and theoretically proved that the transformed formulation is always associated with an ill-conditioned stiffness matrix. We also studied the electroneutrality of the solution and its relation with the boundary conditions on the molecular surface, and concluded that a large net charge concentration is always present near the molecular surface due to the presence of multiple species of charged particles in the solution. The numerical methods are shown to be accurate and stable by various test problems, and are applicable to real large-scale biophysical electrodiffusion problems. PMID:21709855

  6. Poisson-Nernst-Planck Equations for Simulating Biomolecular Diffusion-Reaction Processes I: Finite Element Solutions.

    PubMed

    Lu, Benzhuo; Holst, Michael J; McCammon, J Andrew; Zhou, Y C

    2010-09-20

    In this paper we developed accurate finite element methods for solving 3-D Poisson-Nernst-Planck (PNP) equations with singular permanent charges for electrodiffusion in solvated biomolecular systems. The electrostatic Poisson equation was defined in the biomolecules and in the solvent, while the Nernst-Planck equation was defined only in the solvent. We applied a stable regularization scheme to remove the singular component of the electrostatic potential induced by the permanent charges inside biomolecules, and formulated regular, well-posed PNP equations. An inexact-Newton method was used to solve the coupled nonlinear elliptic equations for the steady problems; while an Adams-Bashforth-Crank-Nicolson method was devised for time integration for the unsteady electrodiffusion. We numerically investigated the conditioning of the stiffness matrices for the finite element approximations of the two formulations of the Nernst-Planck equation, and theoretically proved that the transformed formulation is always associated with an ill-conditioned stiffness matrix. We also studied the electroneutrality of the solution and its relation with the boundary conditions on the molecular surface, and concluded that a large net charge concentration is always present near the molecular surface due to the presence of multiple species of charged particles in the solution. The numerical methods are shown to be accurate and stable by various test problems, and are applicable to real large-scale biophysical electrodiffusion problems.

  7. Applying Flammability Limit Probabilities and the Normoxic Upward Limiting Pressure Concept to NASA STD-6001 Test 1

    NASA Technical Reports Server (NTRS)

    Olson, Sandra L.; Beeson, Harold; Fernandez-Pello, A. Carlos

    2014-01-01

    Repeated Test 1 extinction tests near the upward flammability limit are expected to follow a Poisson process trend. This trend suggests that, rather than defining a ULOI and MOC (which requires two limits to be determined), it might be better to define a single upward limit as being where 1/e of the materials burn (e ≈ 2.7183, 1/e corresponding to the characteristic time of the normalized Poisson process), or, rounding, where approximately 1/3 of the samples fail the test (and burn). Recognizing that spacecraft atmospheres will not bound the entire oxygen-pressure parameter space, but actually lie along the normoxic atmosphere control band, we can focus the materials flammability testing along this normoxic band. A Normoxic Upward Limiting Pressure (NULP) is defined that determines the minimum safe total pressure for a material within the constant partial pressure control band. Then, increasing this pressure limit by a factor of safety, we can define the material as being safe to use at the NULP + SF (where SF is on the order of 10 kilopascals, based on existing flammability data). It is recommended that the thickest material to be tested with the current Test 1 igniter should be 3 mm thick (1/8 inch) to avoid the problem of differentiating between an ignition limit and a true flammability limit.
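
    A minimal sketch of the idea of locating a pressure at which roughly 1/e of samples burn is shown below. It is not the NASA STD-6001 procedure: the pressures, pass/fail counts, and the assumed logistic form of the burn-probability curve are all hypothetical, and the fit is just a maximum-likelihood binomial fit followed by a root solve.

```python
import numpy as np
from scipy.optimize import brentq, minimize
from scipy.stats import binom

# Hypothetical repeated extinction-test outcomes at several total pressures (kPa).
pressures = np.array([40., 50., 60., 70., 80.])
n_tests   = np.array([20,  20,  20,  20,  20])
n_burned  = np.array([1,   4,   9,   15,  19])

def burn_prob(p, a, b):                       # assumed logistic burn-probability curve
    return 1.0 / (1.0 + np.exp(-(p - a) / b))

def neg_loglik(theta):
    a, b = theta
    q = np.clip(burn_prob(pressures, a, b), 1e-9, 1 - 1e-9)
    return -np.sum(binom.logpmf(n_burned, n_tests, q))

a, b = minimize(neg_loglik, x0=[60.0, 5.0], method="Nelder-Mead").x
p_limit = brentq(lambda p: burn_prob(p, a, b) - 1.0 / np.e, 10.0, 120.0)
print(f"pressure at which ~1/e of samples burn: {p_limit:.1f} kPa")
```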

  8. Hydrodynamic model of temperature change in open ionic channels.

    PubMed Central

    Chen, D P; Eisenberg, R S; Jerome, J W; Shu, C W

    1995-01-01

    Most theories of open ionic channels ignore heat generated by current flow, but that heat is known to be significant when analogous currents flow in semiconductors, so a generalization of the Poisson-Nernst-Planck theory of channels, called the hydrodynamic model, is needed. The hydrodynamic theory is a combination of the Poisson and Euler field equations of electrostatics and fluid dynamics, conservation laws that describe diffusive and convective flow of mass, heat, and charge (i.e., current), and their coupling. That is to say, it is a kinetic theory of solute and solvent flow, allowing heat and current flow as well, taking into account density changes, temperature changes, and electrical potential gradients. We integrate the equations with an essentially nonoscillatory shock-capturing numerical scheme previously shown to be stable and accurate. Our calculations show that 1) a significant amount of electrical energy is exchanged with the permeating ions; 2) the local temperature of the ions rises some tens of degrees, and this temperature rise significantly alters the ionic flux in a channel 25 Å long, such as gramicidin-A; and 3) a critical parameter, called the saturation velocity, determines whether ionic motion is overdamped (Poisson-Nernst-Planck theory), is in an intermediate regime (called the adiabatic approximation in semiconductor theory), or is altogether unrestricted (requiring the full hydrodynamic model). It seems that significant temperature changes are likely to accompany current flow in the open ionic channel. PMID:8599638

  9. Filling of a Poisson trap by a population of random intermittent searchers.

    PubMed

    Bressloff, Paul C; Newby, Jay M

    2012-03-01

    We extend the continuum theory of random intermittent search processes to the case of N independent searchers looking to deliver cargo to a single hidden target located somewhere on a semi-infinite track. Each searcher randomly switches between a stationary state and either a leftward or rightward constant velocity state. We assume that all of the particles start at one end of the track and realize sample trajectories independently generated from the same underlying stochastic process. The hidden target is treated as a partially absorbing trap in which a particle can only detect the target and deliver its cargo if it is stationary and within range of the target; the particle is removed from the system after delivering its cargo. As a further generalization of previous models, we assume that up to n successive particles can find the target and deliver their cargo. Assuming that the rate of target detection scales as 1/N, we show that there exists a well-defined mean-field limit N→∞, in which the stochastic model reduces to a deterministic system of linear reaction-hyperbolic equations for the concentrations of particles in each of the internal states. These equations decouple from the stochastic process associated with filling the target with cargo. The latter can be modeled as a Poisson process in which the time-dependent rate of filling λ(t) depends on the concentration of stationary particles within the target domain. Hence, we refer to the target as a Poisson trap. We analyze the efficiency of filling the Poisson trap with n particles in terms of the waiting time density f_n(t). The latter is determined by the integrated Poisson rate μ(t) = ∫_0^t λ(s) ds, which in turn depends on the solution to the reaction-hyperbolic equations. We obtain an approximate solution for the particle concentrations by reducing the system of reaction-hyperbolic equations to a scalar advection-diffusion equation using a quasisteady-state analysis. We compare our analytical results for the mean-field model with Monte Carlo simulations for finite N. We thus determine how the mean first passage time (MFPT) for filling the target depends on N and n.
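
    For a time-inhomogeneous Poisson process with rate λ(t) and integrated rate μ(t), the waiting-time density for the n-th event is f_n(t) = λ(t) μ(t)^(n-1) e^(-μ(t)) / (n-1)!. The sketch below evaluates this density numerically for a hypothetical saturating rate; it is only a generic illustration of the Poisson-trap filling time, not the rate obtained from the paper's reaction-hyperbolic equations.

```python
import numpy as np
from scipy.special import gammaln

# Hypothetical time-dependent filling rate lambda(t) (saturating form).
lam0, t0, n = 2.0, 5.0, 3

def lam(t):
    return lam0 * (1.0 - np.exp(-t / t0))

t = np.linspace(0.0, 30.0, 3001)
# Integrated rate mu(t) by the trapezoid rule.
mu = np.concatenate(([0.0], np.cumsum(0.5 * (lam(t[1:]) + lam(t[:-1])) * np.diff(t))))
# Density of the n-th event time: f_n(t) = lam(t) * mu(t)**(n-1) * exp(-mu(t)) / (n-1)!
log_f = np.log(np.maximum(lam(t), 1e-300)) + (n - 1) * np.log(np.maximum(mu, 1e-300)) - mu - gammaln(n)
f_n = np.exp(log_f)
# Sanity check: the density should integrate to (approximately) 1.
print("integral of f_n:", float(np.sum(0.5 * (f_n[1:] + f_n[:-1]) * np.diff(t))))
```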

  10. Stochastic analysis of three-dimensional flow in a bounded domain

    USGS Publications Warehouse

    Naff, R.L.; Vecchia, A.V.

    1986-01-01

    A commonly accepted first-order approximation of the equation for steady state flow in a fully saturated spatially random medium has the form of Poisson's equation. This form allows for the advantageous use of Green's functions to solve for the random output (hydraulic heads) in terms of a convolution over the random input (the logarithm of hydraulic conductivity). A solution for steady state three-dimensional flow in an aquifer bounded above and below is presented; consideration of these boundaries is made possible by use of Green's functions to solve Poisson's equation. Within the bounded domain the medium hydraulic conductivity is assumed to be a second-order stationary random process as represented by a simple three-dimensional covariance function. Upper and lower boundaries are taken to be no-flow boundaries; the mean flow vector lies entirely in the horizontal dimensions. The resulting hydraulic head covariance function exhibits nonstationary effects resulting from the imposition of boundary conditions. Comparisons are made with existing infinite domain solutions.

  11. Galerkin methods for Boltzmann-Poisson transport with reflection conditions on rough boundaries

    NASA Astrophysics Data System (ADS)

    Morales Escalante, José A.; Gamba, Irene M.

    2018-06-01

    We consider in this paper the mathematical and numerical modeling of reflective boundary conditions (BC) associated with Boltzmann-Poisson systems, including diffusive reflection in addition to specularity, in the context of electron transport in semiconductor device modeling at nano scales, and their implementation in Discontinuous Galerkin (DG) schemes. We study these BC on the physical boundaries of the device and develop a numerical approximation to model an insulating boundary condition, or equivalently, a pointwise zero flux mathematical condition for the electron transport equation. Such a condition balances the incident and reflective momentum flux at the microscopic level, pointwise at the boundary, in the case of a more general mixed reflection with momentum-dependent specularity probability p(k⃗). We compare the computational prediction of physical observables given by the numerical implementation of these different reflection conditions in our DG scheme for BP models, and observe that the diffusive condition influences the kinetic moments over the whole domain in position space.

  12. The spatial distribution of fixed mutations within genes coding for proteins

    NASA Technical Reports Server (NTRS)

    Holmquist, R.; Goodman, M.; Conroy, T.; Czelusniak, J.

    1983-01-01

    An examination has been conducted of the extensive amino acid sequence data now available for five protein families - the alpha crystallin A chain, myoglobin, alpha and beta hemoglobin, and the cytochromes c - with the goal of estimating the true spatial distribution of base substitutions within genes that code for proteins. In every case the commonly used Poisson density failed to even approximate the experimental pattern of base substitution. For the 87 species of beta hemoglobin examined, for example, the probability that the observed results were from a Poisson process was a minuscule 10^(-44). Analogous results were obtained for the other functional families. All the data were reasonably, but not perfectly, described by the negative binomial density. In particular, most of the data were described by one of the very simple limiting forms of this density, the geometric density. The implications of this for evolutionary inference are discussed. It is evident that most estimates of total base substitutions between genes are badly in need of revision.
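
    The kind of comparison described above can be illustrated with a small maximum-likelihood fit of Poisson versus negative binomial distributions to per-site substitution counts. The sketch below uses hypothetical counts (not the hemoglobin data) and profiles the negative binomial shape parameter numerically.

```python
import numpy as np
from scipy.stats import poisson, nbinom
from scipy.optimize import minimize_scalar

# Hypothetical per-site substitution counts (overdispersed on purpose).
counts = np.array([0]*60 + [1]*30 + [2]*18 + [3]*10 + [4]*6 + [5]*4 + [7]*2 + [10]*1)
mean = counts.mean()

# Poisson log-likelihood at its MLE (the sample mean).
ll_pois = poisson.logpmf(counts, mean).sum()

# Negative binomial: for a fixed shape r, the MLE of p satisfies p = r / (r + mean).
def neg_ll_nb(r):
    p = r / (r + mean)
    return -nbinom.logpmf(counts, r, p).sum()

r_hat = minimize_scalar(neg_ll_nb, bounds=(1e-3, 100.0), method="bounded").x
ll_nb = -neg_ll_nb(r_hat)
print(f"Poisson logL = {ll_pois:.1f}, negative binomial logL = {ll_nb:.1f} (r = {r_hat:.2f})")
```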

  13. Electromagnetic gyrokinetic simulation in GTS

    NASA Astrophysics Data System (ADS)

    Ma, Chenhao; Wang, Weixing; Startsev, Edward; Lee, W. W.; Ethier, Stephane

    2017-10-01

    We report on recent developments in electromagnetic simulations for general toroidal geometry based on the particle-in-cell gyrokinetic code GTS. Because of the cancellation problem, EM gyrokinetic simulation has numerical difficulties in the MHD limit where k⊥ρi → 0 and/or β > me/mi. Recently several approaches have been developed to circumvent this problem: (1) a p∥ formulation with the analytical skin term iteratively approximated by simulation particles (Yang Chen); (2) a modified p∥ formulation with ∫ dt E∥ used in place of A∥ (Mishchenko); (3) a conservative scheme where the electron density perturbation for the Poisson equation is calculated from an electron continuity equation (Bao); (4) a double-split-weight scheme with two weights, one for the Poisson equation and one for the time derivative of Ampere's law, each with different splits designed to remove large terms from the Vlasov equation (Startsev). These algorithms are being implemented into the GTS framework for general toroidal geometry. The performance of these different algorithms will be compared for various EM modes.

  14. A regularized vortex-particle mesh method for large eddy simulation

    NASA Astrophysics Data System (ADS)

    Spietz, H. J.; Walther, J. H.; Hejlesen, M. M.

    2017-11-01

    We present recent developments of the remeshed vortex particle-mesh method for simulating incompressible fluid flow. The presented method relies on a parallel higher-order FFT-based solver for the Poisson equation. Arbitrarily high order is achieved through regularization of singular Green's function solutions to the Poisson equation, and recently we have derived novel high order solutions for a mixture of open and periodic domains. With this approach the simulated variables may formally be viewed as the approximate solution to the filtered Navier-Stokes equations, hence we use the method for Large Eddy Simulation by including a dynamic subfilter-scale model based on test-filters compatible with the aforementioned regularization functions. Further, the subfilter-scale model uses Lagrangian averaging, which is a natural candidate in light of the Lagrangian nature of vortex particle methods. A multiresolution variation of the method is applied to simulate the benchmark problem of the flow past a square cylinder at Re = 22000 and the obtained results are compared to results from the literature.
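
    The core ingredient, an FFT-based Poisson solve, can be illustrated on a fully periodic box. The sketch below is a plain spectral solve with a test field chosen so the answer is known analytically; it is not the paper's regularized, high-order, mixed open/periodic Green's function solver.

```python
import numpy as np

# Minimal FFT Poisson solver on a periodic 2-D box: solve -lap(psi) = omega.
N, L = 128, 2 * np.pi
x = np.linspace(0, L, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
omega = np.sin(3 * X) * np.cos(2 * Y)            # test vorticity; exact psi = omega / 13

k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)       # angular wavenumbers
KX, KY = np.meshgrid(k, k, indexing="ij")
k2 = KX**2 + KY**2
k2[0, 0] = 1.0                                   # avoid division by zero for the mean mode

psi_hat = np.fft.fft2(omega) / k2
psi_hat[0, 0] = 0.0                              # fix the arbitrary constant
psi = np.real(np.fft.ifft2(psi_hat))
print("max error vs analytic solution:", np.abs(psi - omega / 13.0).max())
```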

  15. Single- and multiple-pulse noncoherent detection statistics associated with partially developed speckle.

    PubMed

    Osche, G R

    2000-08-20

    Single- and multiple-pulse detection statistics are presented for aperture-averaged direct detection optical receivers operating against partially developed speckle fields. A partially developed speckle field arises when the probability density function of the received intensity does not follow negative exponential statistics. The case of interest here is the target surface that exhibits diffuse as well as specular components in the scattered radiation. An approximate expression is derived for the integrated intensity at the aperture, which leads to single- and multiple-pulse discrete probability density functions for the case of a Poisson signal in Poisson noise with an additive coherent component. In the absence of noise, the single-pulse discrete density function is shown to reduce to a generalized negative binomial distribution. The radar concept of integration loss is discussed in the context of direct detection optical systems where it is shown that, given an appropriate set of system parameters, multiple-pulse processing can be more efficient than single-pulse processing over a finite range of the integration parameter n.

  16. The NonConforming Virtual Element Method for the Stokes Equations

    DOE PAGES

    Cangiani, Andrea; Gyrya, Vitaliy; Manzini, Gianmarco

    2016-01-01

    In this paper, we present the nonconforming virtual element method (VEM) for the numerical approximation of velocity and pressure in the steady Stokes problem. The pressure is approximated using discontinuous piecewise polynomials, while each component of the velocity is approximated using the nonconforming virtual element space. On each mesh element the local virtual space contains the space of polynomials of up to a given degree, plus suitable nonpolynomial functions. The virtual element functions are implicitly defined as the solution of local Poisson problems with polynomial Neumann boundary conditions. As typical in VEM approaches, the explicit evaluation of the non-polynomial functions is not required. This approach makes it possible to construct nonconforming (virtual) spaces for any polynomial degree regardless of the parity, for two- and three-dimensional problems, and for meshes with very general polygonal and polyhedral elements. We show that the nonconforming VEM is inf-sup stable and establish optimal a priori error estimates for the velocity and pressure approximations. Finally, numerical examples confirm the convergence analysis and the effectiveness of the method in providing high-order accurate approximations.

  18. Counting statistics for genetic switches based on effective interaction approximation

    NASA Astrophysics Data System (ADS)

    Ohkubo, Jun

    2012-09-01

    The applicability of counting statistics to a system with an infinite number of states is investigated. Counting statistics has been studied extensively for systems with a finite number of states. While it is in principle possible to use the scheme to count specific transitions in a system with an infinite number of states, one obtains non-closed equations in general. A simple genetic switch can be described by a master equation with an infinite number of states, and we use counting statistics to count the number of transitions from inactive to active states in the gene. To avoid the non-closed equations, an effective interaction approximation is employed. As a result, it is shown that the switching problem can be treated approximately as a simple two-state model, which immediately indicates that the switching obeys non-Poisson statistics.
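
    A direct way to see non-Poisson counting statistics in a two-state switch is a Monte Carlo estimate of the variance-to-mean (Fano) ratio of the number of activation events in a window. The sketch below is only an illustration of that point, not the effective-interaction calculation of the paper; the switching rates are hypothetical.

```python
import numpy as np

# Count inactive -> active switching events of a two-state gene in a window T
# and compare the Fano factor with the Poisson value of 1.
rng = np.random.default_rng(2)
k_on, k_off, T, n_runs = 1.0, 0.5, 50.0, 5000    # hypothetical rates and window

counts = np.empty(n_runs, dtype=int)
for i in range(n_runs):
    t, state, n_on = 0.0, 0, 0                    # state 0 = inactive, 1 = active
    while True:
        t += rng.exponential(1.0 / (k_on if state == 0 else k_off))
        if t > T:
            break
        if state == 0:
            n_on += 1                             # count the activation event
        state = 1 - state
    counts[i] = n_on

print(f"mean = {counts.mean():.2f}, Fano factor = {counts.var()/counts.mean():.2f} (Poisson gives 1)")
```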

  19. Method for resonant measurement

    DOEpatents

    Rhodes, G.W.; Migliori, A.; Dixon, R.D.

    1996-03-05

    A method of measurement of objects to determine object flaws, Poisson's ratio (σ) and the shear modulus (μ) is shown and described. First, the frequency for expected degenerate responses is determined for one or more input frequencies, and then splitting of degenerate resonant modes is observed to identify the presence of flaws in the object. Poisson's ratio and the shear modulus can be determined by identifying resonances dependent only on the shear modulus, and then using that shear modulus to find Poisson's ratio using other modes dependent on both the shear modulus and Poisson's ratio. 1 fig.

  20. Online sequential Monte Carlo smoother for partially observed diffusion processes

    NASA Astrophysics Data System (ADS)

    Gloaguen, Pierre; Étienne, Marie-Pierre; Le Corff, Sylvain

    2018-12-01

    This paper introduces a new algorithm to approximate smoothed additive functionals of partially observed diffusion processes. This method relies on a new sequential Monte Carlo method which makes it possible to compute such approximations online, i.e., as the observations are received, with a computational complexity growing linearly with the number of Monte Carlo samples. The original algorithm cannot be used in the case of partially observed stochastic differential equations since the transition density of the latent data is usually unknown. We prove that it may be extended to partially observed continuous processes by replacing this unknown quantity with an unbiased estimator obtained, for instance, using general Poisson estimators. This estimator is proved to be consistent and its performance is illustrated using data from two models.

  1. Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laurence, T; Chromy, B

    2009-11-10

    Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the use of the Levenberg-Marquardt algorithm commonly used for nonlinear least squares minimization for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the non-linear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Non-linear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice - which requires a large number of events. It has been well-known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper providing extensive characterization of these biases in exponential fitting is given. The more appropriate measure based on the maximum likelihood estimator (MLE) for the Poisson distribution is also well known, but has not become generally used. This is primarily because, in contrast to non-linear least squares fitting, there has been no quick, robust, and general fitting method. In the field of fluorescence lifetime spectroscopy and imaging, there have been some efforts to use this estimator through minimization routines such as Nelder-Mead optimization, exhaustive line searches, and Gauss-Newton minimization. Minimization based on specific one- or multi-exponential models has been used to obtain quick results, but this procedure does not allow the incorporation of the instrument response, and is not generally applicable to models found in other fields. Methods for using the MLE for Poisson-distributed data have been published by the wider spectroscopic community, including iterative minimization schemes based on Gauss-Newton minimization. The slow acceptance of these procedures for fitting event counting histograms may also be explained by the use of the ubiquitous, fast Levenberg-Marquardt (L-M) fitting procedure for fitting non-linear models using least squares fitting (simple searches obtain approximately 10,000 references - this doesn't include those who use it, but don't know they are using it). The benefits of L-M include a seamless transition between Gauss-Newton minimization and downward gradient minimization through the use of a regularization parameter. This transition is desirable because Gauss-Newton methods converge quickly, but only within a limited domain of convergence; on the other hand the downward gradient methods have a much wider domain of convergence, but converge extremely slowly nearer the minimum. L-M has the advantages of both procedures: relative insensitivity to initial parameters and rapid convergence. Scientists, when wanting an answer quickly, will fit data using L-M, get an answer, and move on. Only those that are aware of the bias issues will bother to fit using the more appropriate MLE for Poisson deviates. However, since there is a simple, analytical formula for the appropriate MLE measure for Poisson deviates, it is inexcusable that least squares estimators are used almost exclusively when fitting event counting histograms. There have been ways found to use successive non-linear least squares fitting to obtain similarly unbiased results, but this procedure is justified by simulation, must be re-tested when conditions change significantly, and requires two successive fits. There is a great need for a fitting routine for the MLE estimator for Poisson deviates that has convergence domains and rates comparable to the non-linear least squares L-M fitting. We show in this report that a simple way to achieve that goal is to use the L-M fitting procedure not to minimize the least squares measure, but the MLE for Poisson deviates.
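
    One common way to reuse an off-the-shelf Levenberg-Marquardt routine for the Poisson MLE measure is to hand it the square roots of the per-bin Poisson deviances as residuals, so that the sum of squared residuals equals the Poisson MLE measure. The sketch below illustrates that idea on a single-exponential decay histogram; it is a hedged, generic example, not the authors' implementation, and all parameter values are made up.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
t = np.linspace(0, 10, 200)                      # bin centers of a lifetime histogram
true_amp, true_tau, true_bg = 400.0, 2.0, 5.0
model = lambda p: p[0] * np.exp(-t / p[1]) + p[2]
counts = rng.poisson(model([true_amp, true_tau, true_bg]))

def poisson_residuals(p):
    m = np.maximum(model(p), 1e-12)
    # Per-bin Poisson deviance 2*(m - n + n*ln(n/m)), with n*ln(n/m) -> 0 at n = 0.
    dev = 2.0 * (m - counts + np.where(counts > 0,
                                       counts * np.log(np.maximum(counts, 1) / m), 0.0))
    return np.sqrt(np.maximum(dev, 0.0))

fit = least_squares(poisson_residuals, x0=[100.0, 1.0, 1.0], method="lm")
print("fitted amplitude, lifetime, background:", np.round(fit.x, 2))
```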

  2. Necessary and sufficient conditions for the stability of a sleeping top described by three forms of dynamic equations

    NASA Astrophysics Data System (ADS)

    Ge, Zheng-Ming

    2008-04-01

    Necessary and sufficient conditions for the stability of a sleeping top described by dynamic equations with six state variables (the Euler and Poisson equations), by a two-degree-of-freedom system (the Krylov equations), and by a one-degree-of-freedom system (the nutation angle equation) are obtained by the Lyapunov direct method, the Ge-Liu second instability theorem, an instability theorem, and a Ge-Yao-Chen partial region stability theorem, without using first approximation theory at all.

  3. Lateral trapping of DNA inside a voltage gated nanopore

    NASA Astrophysics Data System (ADS)

    Töws, Thomas; Reimann, Peter

    2017-06-01

    The translocation of a short DNA fragment through a nanopore is addressed when the perforated membrane contains an embedded electrode. Accurate numerical solutions of the coupled Poisson, Nernst-Planck, and Stokes equations for a realistic, fully three-dimensional setup as well as analytical approximations for a simplified model are worked out. By applying a suitable voltage to the membrane electrode, the DNA can be forced to preferably traverse the pore either along the pore axis or at a small but finite distance from the pore wall.

  4. Error analysis of finite element method for Poisson–Nernst–Planck equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yuzhou; Sun, Pengtao; Zheng, Bin

    A priori error estimates of the finite element method for time-dependent Poisson-Nernst-Planck equations are studied in this work. We obtain optimal error estimates in the L∞(H1) and L2(H1) norms and suboptimal error estimates in the L∞(L2) norm with linear elements, and optimal error estimates in the L∞(L2) norm with quadratic or higher-order elements, for both semi- and fully discrete finite element approximations. Numerical experiments are also given to validate the theoretical results.

  5. Numerical solutions for patterns statistics on Markov chains.

    PubMed

    Nuel, Gregory

    2006-01-01

    We propose here a review of the methods available to compute pattern statistics on text generated by a Markov source. Theoretical, but also numerical aspects are detailed for a wide range of techniques (exact, Gaussian, large deviations, binomial and compound Poisson). The SPatt package (Statistics for Pattern, free software available at http://stat.genopole.cnrs.fr/spatt) implementing all these methods is then used to compare all these approaches in terms of computational time and reliability in the most complete pattern statistics benchmark available at the present time.
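
    One of the approximations mentioned above, the compound Poisson distribution for pattern counts, is often evaluated with a Pólya-Aeppli model in which clumps of overlapping occurrences arrive as a Poisson process and clump sizes are geometric. The sketch below computes that probability mass function with the standard Panjer-type recursion; the clump rate lam and geometric parameter a are hypothetical, whereas in practice they are derived from the pattern's overlap structure and the Markov source.

```python
import numpy as np

lam, a, n_max = 3.0, 0.4, 40
q = (1 - a) * a ** np.arange(n_max)              # q[j-1] = P(clump size = j)

p = np.zeros(n_max + 1)
p[0] = np.exp(-lam)                              # P(no clump at all)
for n in range(1, n_max + 1):                    # Panjer recursion for compound Poisson
    j = np.arange(1, n + 1)
    p[n] = (lam / n) * np.sum(j * q[j - 1] * p[n - j])

print("P(pattern count >= 10) ≈", 1.0 - p[:10].sum())
```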

  6. Functional Data Approximation on Bounded Domains using Polygonal Finite Elements.

    PubMed

    Cao, Juan; Xiao, Yanyang; Chen, Zhonggui; Wang, Wenping; Bajaj, Chandrajit

    2018-07-01

    We construct and analyze piecewise approximations of functional data on arbitrary 2D bounded domains using generalized barycentric finite elements, and particularly quadratic serendipity elements for planar polygons. We compare approximation qualities (precision/convergence) of these partition-of-unity finite elements through numerical experiments, using Wachspress coordinates, natural neighbor coordinates, Poisson coordinates, mean value coordinates, and quadratic serendipity bases over polygonal meshes on the domain. For a convex n-sided polygon, the quadratic serendipity elements have 2n basis functions, associated in a Lagrange-like fashion to each vertex and each edge midpoint, rather than the usual n(n+1)/2 basis functions to achieve quadratic convergence. Two greedy algorithms are proposed to generate Voronoi meshes for adaptive functional/scattered data approximations. Experimental results show space/accuracy advantages for these quadratic serendipity finite elements on polygonal domains versus traditional finite elements over simplicial meshes. Polygonal meshes and parameter coefficients of the quadratic serendipity finite elements obtained by our greedy algorithms can be further refined using an L2-optimization to improve the piecewise functional approximation. We conduct several experiments to demonstrate the efficacy of our algorithm for modeling features/discontinuities in functional data/image approximation.

  7. Properties of the Bivariate Delayed Poisson Process

    DTIC Science & Technology

    1974-07-01

    ... and Lewis (1972) in their Berkeley Symposium paper, and here their analysis of the bivariate Poisson processes (without Poisson noise) is carried ... Poisson processes. They cannot, however, be independent Poisson processes because their events are associated in pairs by the displacement centres ... process because its marginal processes for events of each type are themselves (univariate) Poisson processes. Cox and Lewis (1972) assumed a ...

  8. Atomic Charge Parameters for the Finite Difference Poisson-Boltzmann Method Using Electronegativity Neutralization.

    PubMed

    Yang, Qingyi; Sharp, Kim A

    2006-07-01

    An optimization of Rappe and Goddard's charge equilibration (QEq) method of assigning atomic partial charges is described. This optimization is designed for fast and accurate calculation of solvation free energies using the finite difference Poisson-Boltzmann (FDPB) method. The optimization is performed against experimental small molecule solvation free energies using the FDPB method and adjusting Rappe and Goddard's atomic electronegativity values. Using a test set of compounds for which experimental solvation energies are available and a rather small number of parameters, very good agreement was obtained with experiment, with a mean unsigned error of about 0.5 kcal/mol. The QEq atomic partial charge assignment method can reflect the effects of the conformational changes and solvent induction on charge distribution in molecules. In the second section of the paper we examined this feature with a study of the alanine dipeptide conformations in water solvent. The different contributions to the energy surface of the dipeptide were examined and compared with the results from fixed CHARMm charge potential, which is widely used for molecular dynamics studies.

  9. First principle study of structural, elastic and electronic properties of APt3 (A=Mg, Sc, Y and Zr)

    NASA Astrophysics Data System (ADS)

    Benamer, A.; Roumili, A.; Medkour, Y.; Charifi, Z.

    2018-02-01

    We report results obtained from first-principles calculations on APt3 compounds with A = Mg, Sc, Y and Zr. Our results for the lattice parameter a are in good agreement with experimental data, with deviations of less than 0.8%. Single-crystal elastic constants are calculated, and then the polycrystalline elastic moduli (bulk, shear and Young moduli, Poisson ratio, anisotropy factor) are presented. Based on the Debye model, the Debye temperature ΘD is calculated from the sound velocities Vl, Vt and Vm. Band structure results show that the studied compounds are electrical conductors; the conduction mechanism is assured by Pt-d electrons. Different hybridisation states are observed between Pt-d and A-d orbitals. The study of the charge density distribution and the population analysis shows the coexistence of ionic, covalent and metallic bonds.

  10. Imaging Analysis of the Hard X-Ray Telescope ProtoEXIST2 and New Techniques for High-Resolution Coded-Aperture Telescopes

    NASA Technical Reports Server (NTRS)

    Hong, Jaesub; Allen, Branden; Grindlay, Jonathan; Barthelmy, Scott D.

    2016-01-01

    Wide-field (greater than or approximately equal to 100 degrees squared) hard X-ray coded-aperture telescopes with high angular resolution (greater than or approximately equal to 2 arcminutes) will enable a wide range of time domain astrophysics. For instance, transient sources such as gamma-ray bursts can be precisely localized without the assistance of secondary focusing X-ray telescopes to enable rapid follow-up studies. On the other hand, high angular resolution in coded-aperture imaging introduces a new challenge in handling the systematic uncertainty: the average photon count per pixel is often too small to establish a proper background pattern or model the systematic uncertainty in a timescale where the model remains invariant. We introduce two new techniques to improve detection sensitivity, which are designed for, but not limited to, a high-resolution coded-aperture system: a self-background modeling scheme which utilizes continuous scan or dithering operations, and a Poisson-statistics-based probabilistic approach to evaluate the significance of source detection without subtraction in handling the background. We illustrate these new imaging analysis techniques for a high-resolution coded-aperture telescope using the data acquired by the wide-field hard X-ray telescope ProtoEXIST2 during a high-altitude balloon flight in fall 2012. We review the imaging sensitivity of ProtoEXIST2 during the flight, and demonstrate the performance of the new techniques using our balloon flight data in comparison with a simulated ideal Poisson background.
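
    The Poisson-statistics detection idea can be boiled down to a tail-probability test: given the expected background counts in a sky pixel, how likely are the observed counts from background alone? The two-line sketch below illustrates that generic test with hypothetical numbers; it is not the ProtoEXIST2 pipeline.

```python
from scipy.stats import poisson

b, n = 4.2, 13                       # hypothetical expected background and observed counts
p_value = poisson.sf(n - 1, b)       # P(X >= n) for X ~ Poisson(b)
print(f"chance probability of >= {n} counts from background alone: {p_value:.2e}")
```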

  11. Convex reformulation of biologically-based multi-criteria intensity-modulated radiation therapy optimization including fractionation effects

    NASA Astrophysics Data System (ADS)

    Hoffmann, Aswin L.; den Hertog, Dick; Siem, Alex Y. D.; Kaanders, Johannes H. A. M.; Huizenga, Henk

    2008-11-01

    Finding fluence maps for intensity-modulated radiation therapy (IMRT) can be formulated as a multi-criteria optimization problem for which Pareto optimal treatment plans exist. To account for the dose-per-fraction effect of fractionated IMRT, it is desirable to exploit radiobiological treatment plan evaluation criteria based on the linear-quadratic (LQ) cell survival model as a means to balance the radiation benefits and risks in terms of biologic response. Unfortunately, the LQ-model-based radiobiological criteria are nonconvex functions, which make the optimization problem hard to solve. We apply the framework proposed by Romeijn et al (2004 Phys. Med. Biol. 49 1991-2013) to find transformations of LQ-model-based radiobiological functions and establish conditions under which transformed functions result in equivalent convex criteria that do not change the set of Pareto optimal treatment plans. The functions analysed are: the LQ-Poisson-based model for tumour control probability (TCP) with and without inter-patient heterogeneity in radiation sensitivity, the LQ-Poisson-based relative seriality s-model for normal tissue complication probability (NTCP), the equivalent uniform dose (EUD) under the LQ-Poisson model and the fractionation-corrected Probit-based model for NTCP according to Lyman, Kutcher and Burman. These functions differ from those analysed before in that they cannot be decomposed into elementary EUD or generalized-EUD functions. In addition, we show that applying increasing and concave transformations to the convexified functions is beneficial for the piecewise approximation of the Pareto efficient frontier.
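
    For reference, the basic LQ-Poisson tumour control probability used as one of the criteria above combines the linear-quadratic surviving fraction with Poisson statistics for the number of surviving clonogens. The sketch below evaluates it for a uniform dose without the inter-patient heterogeneity or the convex reformulation discussed in the paper; all parameter values are hypothetical.

```python
import numpy as np

alpha, beta, N0 = 0.3, 0.03, 1e7       # Gy^-1, Gy^-2, initial clonogen number (hypothetical)
n_frac, d = 30, 2.0                     # 30 fractions of 2 Gy each

surviving_fraction = np.exp(-n_frac * (alpha * d + beta * d**2))   # LQ model
tcp = np.exp(-N0 * surviving_fraction)                             # Poisson TCP
print(f"surviving fraction = {surviving_fraction:.2e}, TCP = {tcp:.3f}")
```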

  12. Relation Between Firing Statistics of Spiking Neuron with Delayed Fast Inhibitory Feedback and Without Feedback

    NASA Astrophysics Data System (ADS)

    Vidybida, Alexander; Shchur, Olha

    We consider a class of spiking neuronal models, defined by a set of conditions typical for basic threshold-type models, such as the leaky integrate-and-fire or the binding neuron model, and also for some artificial neurons. A neuron is fed with a Poisson process. Each output impulse is applied to the neuron itself after a finite delay Δ. This impulse acts as being delivered through a fast Cl-type inhibitory synapse. We derive a general relation which allows calculating exactly the probability density function (pdf) p(t) of output interspike intervals of a neuron with feedback based on the known pdf p0(t) for the same neuron without feedback and on the properties of the feedback line (the Δ value). Similar relations between corresponding moments are derived. Furthermore, we prove that the initial segment of pdf p0(t) for a neuron with a fixed threshold level is the same for any neuron satisfying the imposed conditions and is completely determined by the input stream. For the Poisson input stream, we calculate that initial segment exactly and, based on it, obtain exactly the initial segment of pdf p(t) for a neuron with feedback. That is, the initial segment of p(t) is model-independent as well. The obtained expressions are checked by means of Monte Carlo simulation. The course of p(t) has a pronounced peculiarity, which makes it impossible to approximate p(t) by a Poisson or another simple stochastic process.

  13. Ab Initio Study of the Electronic Structure, Elastic Properties, Magnetic Feature and Thermodynamic Properties of the Ba2NiMoO6 Material

    NASA Astrophysics Data System (ADS)

    Deluque Toro, C. E.; Mosquera Polo, A. S.; Gil Rebaza, A. V.; Landínez Téllez, D. A.; Roa-Rojas, J.

    2018-04-01

    We report first-principles calculations of the elastic properties, electronic structure and magnetic behavior performed for the Ba2NiMoO6 double perovskite. Calculations are carried out through the full-potential linear augmented plane-wave method within the framework of the Density Functional Theory (DFT) with exchange and correlation effects in the Generalized Gradient and Local Density Approximations, including spin polarization. The elastic properties calculated are the bulk modulus (B), the elastic constants (C11, C12 and C44), the Zener anisotropy factor (A), the isotropic shear modulus (G), the Young modulus (Y) and the Poisson ratio (ν). Structural parameters, total energies and cohesive properties of the perovskite are studied by means of minimization of internal parameters with the Murnaghan equation, where the structural parameters are in good agreement with experimental data. Furthermore, we have explored different antiferromagnetic configurations in order to describe the magnetic ground state of this compound. The pressure and temperature dependence of the specific heat, thermal expansion coefficient, Debye temperature and Grüneisen parameter were calculated by DFT from the equation of state using the quasi-harmonic Debye model. A specific heat behavior CV ≈ CP was found at temperatures below T = 400 K, with Dulong-Petit limit values, which are higher than those reported for simple perovskites.

  14. Measuring gravel transport and dispersion in a mountain river using passive radio tracers

    USGS Publications Warehouse

    Bradley, D. N.; Tucker, G. E.

    2012-01-01

    Random walk models of fluvial sediment transport recognize that grains move intermittently, with short duration steps separated by rests that are comparatively long. These models are built upon the probability distributions of the step length and the resting time. Motivated by these models, tracer experiments have attempted to measure directly the steps and rests of sediment grains in natural streams. This paper describes results from a large tracer experiment designed to test stochastic transport models. We used passive integrated transponder (PIT) tags to label 893 coarse gravel clasts and placed them in Halfmoon Creek, a small alpine stream near Leadville, Colorado, USA. The PIT tags allow us to locate and identify tracers without picking them up or digging them out of the streambed. They also enable us to find a very high percentage of our rocks, 98% after three years and 96% after the fourth year. We use the annual tracer displacement to test two stochastic transport models, the Einstein–Hubbell–Sayre (EHS) model and the Yang–Sayre gamma-exponential model (GEM). We find that the GEM is a better fit to the observations, particularly for slower moving tracers and suggest that the strength of the GEM is that the gamma distribution of step lengths approximates a compound Poisson distribution. Published in 2012. This article is a US Government work and is in the public domain in the USA.
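
    The compound Poisson picture suggested above is easy to illustrate: draw a Poisson number of steps per year with exponential step lengths, and compare the resulting displacements with a fitted gamma distribution. The sketch below does exactly that with hypothetical rates and step scales; it is not a fit to the Halfmoon Creek tracer data.

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(4)
mean_steps, mean_step_length, n_tracers = 4.0, 2.5, 2000   # steps/yr, meters, tracers

n_steps = rng.poisson(mean_steps, n_tracers)                 # Poisson number of steps
displacement = np.array([rng.exponential(mean_step_length, k).sum() for k in n_steps])

moved = displacement[displacement > 0]                       # gamma fit to tracers that moved
shape, loc, scale = gamma.fit(moved, floc=0.0)
print(f"fraction immobile = {(n_steps == 0).mean():.2f}, "
      f"gamma shape = {shape:.2f}, scale = {scale:.2f} m")
```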

  15. Fractional Poisson Fields and Martingales

    NASA Astrophysics Data System (ADS)

    Aletti, Giacomo; Leonenko, Nikolai; Merzbach, Ely

    2018-02-01

    We present new properties for the Fractional Poisson process (FPP) and the Fractional Poisson field on the plane. A martingale characterization for FPPs is given. We extend this result to Fractional Poisson fields, obtaining some other characterizations. The fractional differential equations are studied. We consider a more general Mixed-Fractional Poisson process and show that this process is the stochastic solution of a system of fractional differential-difference equations. Finally, we give some simulations of the Fractional Poisson field on the plane.

  16. On a Poisson homogeneous space of bilinear forms with a Poisson-Lie action

    NASA Astrophysics Data System (ADS)

    Chekhov, L. O.; Mazzocco, M.

    2017-12-01

    Let \\mathscr A be the space of bilinear forms on C^N with defining matrices A endowed with a quadratic Poisson structure of reflection equation type. The paper begins with a short description of previous studies of the structure, and then this structure is extended to systems of bilinear forms whose dynamics is governed by the natural action A\\mapsto B ABT} of the {GL}_N Poisson-Lie group on \\mathscr A. A classification is given of all possible quadratic brackets on (B, A)\\in {GL}_N× \\mathscr A preserving the Poisson property of the action, thus endowing \\mathscr A with the structure of a Poisson homogeneous space. Besides the product Poisson structure on {GL}_N× \\mathscr A, there are two other (mutually dual) structures, which (unlike the product Poisson structure) admit reductions by the Dirac procedure to a space of bilinear forms with block upper triangular defining matrices. Further generalisations of this construction are considered, to triples (B,C, A)\\in {GL}_N× {GL}_N× \\mathscr A with the Poisson action A\\mapsto B ACT}, and it is shown that \\mathscr A then acquires the structure of a Poisson symmetric space. Generalisations to chains of transformations and to the quantum and quantum affine algebras are investigated, as well as the relations between constructions of Poisson symmetric spaces and the Poisson groupoid. Bibliography: 30 titles.

  17. Stability and Elastic, Electronic, and Thermodynamic Properties of Fe2TiSi1-xSnx Compounds

    NASA Astrophysics Data System (ADS)

    Jong, Ju-Yong; Yan, Jihong; Zhu, Jingchuan; Kim, Chol-Jin

    2017-10-01

    We have systematically studied the structural, phase, and mechanical stability and the elastic, electronic, and thermodynamic properties of Fe2TiSi1-xSnx (x = 0, 0.25, 0.5, 0.75, 1) compounds using first-principles calculations. The structural and phase stability and elastic properties of Fe2TiSi1-xSnx (x = 0, 0.25, 0.5, 0.75, 1) indicated that all of the compounds are thermodynamically and mechanically stable. The shear modulus, bulk modulus, Young's modulus, Poisson's ratio, electronic band structure, density of states, Debye temperature, and Grüneisen parameter of all the substituted compounds were studied. The results show that Sn substitution in Fe2TiSi enhances its stability and its mechanical and thermoelectric properties. The Fe2TiSi1-xSnx compounds have narrow bandgaps ranging from 0.144 eV to 0.472 eV as the Sn substitution varies from 0 to 1. The calculated band structure and density of states (DOS) of Fe2TiSi1-xSnx show that the thermoelectric properties can be improved at a substituent concentration x of 0.75. The lattice thermal conductivity was significantly decreased in the Sn-substituted compounds, and all the results indicate that Fe2TiSi0.25Sn0.75 could be a new candidate high-performance thermoelectric material.

  18. Quantum chemistry in arbitrary dielectric environments: Theory and implementation of nonequilibrium Poisson boundary conditions and application to compute vertical ionization energies at the air/water interface

    NASA Astrophysics Data System (ADS)

    Coons, Marc P.; Herbert, John M.

    2018-06-01

    Widely used continuum solvation models for electronic structure calculations, including popular polarizable continuum models (PCMs), usually assume that the continuum environment is isotropic and characterized by a scalar dielectric constant, ɛ. This assumption is invalid at a liquid/vapor interface or any other anisotropic solvation environment. To address such scenarios, we introduce a more general formalism based on solution of Poisson's equation for a spatially varying dielectric function, ɛ(r). Inspired by nonequilibrium versions of PCMs, we develop a similar formalism within the context of Poisson's equation that includes the out-of-equilibrium dielectric response that accompanies a sudden change in the electron density of the solute, such as that which occurs in a vertical ionization process. A multigrid solver for Poisson's equation is developed to accommodate the large spatial grids necessary to discretize the three-dimensional electron density. We apply this methodology to compute vertical ionization energies (VIEs) of various solutes at the air/water interface and compare them to VIEs computed in bulk water, finding only very small differences between the two environments. VIEs computed using approximately two solvation shells of explicit water molecules are in excellent agreement with experiment for F-(aq), Cl-(aq), neat liquid water, and the hydrated electron, although errors for Li+(aq) and Na+(aq) are somewhat larger. Nonequilibrium corrections modify VIEs by up to 1.2 eV, relative to models based only on the static dielectric constant, and are therefore essential to obtain agreement with experiment. Given that the experiments (liquid microjet photoelectron spectroscopy) may be more sensitive to solutes situated at the air/water interface as compared to those in bulk water, our calculations provide some confidence that these experiments can indeed be interpreted as measurements of VIEs in bulk water.

  19. Investigating on the Differences between Triggered and Background Seismicity in Italy and Southern California.

    NASA Astrophysics Data System (ADS)

    Stallone, A.; Marzocchi, W.

    2017-12-01

Earthquake occurrence may be approximated by a multidimensional Poisson clustering process, where each point of the Poisson process is replaced by a cluster of points, the latter corresponding to the well-known aftershock sequence (triggered events). Earthquake clusters and their parents are assumed to occur according to a Poisson process at a constant temporal rate proportional to the tectonic strain rate, while events within a cluster are modeled as generations of dependent events reproduced by a branching process. Although the occurrence of such space-time clusters is a general feature across different tectonic settings, seismic sequences seem to show marked differences from region to region: one example, among many others, is that seismic sequences of moderate magnitude in the Italian Apennines seem to last longer than similar seismic sequences in California. In this work we investigate the existence of possible differences in the earthquake clustering process in these two areas. First, we separate the triggered and background components of seismicity in the Italian and Southern California seismic catalogs. Then we study the space-time domain of the triggered earthquakes with the aim of identifying possible variations in the triggering properties across the two regions. In the second part of the work we focus on the characteristics of the background seismicity in both catalogs. The assumption of time stationarity of the background seismicity (which includes both cluster parents and isolated events) is still under debate. Some authors suggest that the independent component of seismicity could undergo transient perturbations at various time scales due to different physical mechanisms, such as viscoelastic relaxation, the presence of fluids, or non-stationary plate motion, whose impact may depend on the tectonic setting. Here we test whether the background seismicity in the two regions can be satisfactorily described by a time-homogeneous Poisson process and, if not, we quantify the discrepancies from this reference process and the differences between the two regions.
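
    As a rough illustration of the clustering model described above, the following sketch simulates a purely temporal Poisson cluster process (homogeneous Poisson parents, first-generation triggered events only, exponentially decaying delays); the rates are arbitrary placeholders, not values fitted to the Italian or Californian catalogs.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_poisson_cluster(t_max, parent_rate, mean_offspring, decay):
        """Temporal Poisson cluster process: parents form a homogeneous Poisson
        process; each parent triggers a Poisson(mean_offspring) number of
        first-generation events at exponentially decaying delays."""
        n_parents = rng.poisson(parent_rate * t_max)
        parents = rng.uniform(0.0, t_max, n_parents)
        events = [parents]
        for t0 in parents:
            n_children = rng.poisson(mean_offspring)
            events.append(t0 + rng.exponential(1.0 / decay, n_children))
        times = np.sort(np.concatenate(events))
        return times[times <= t_max]

    times = simulate_poisson_cluster(t_max=365.0, parent_rate=0.05,
                                     mean_offspring=4.0, decay=0.5)
    print(times.size, "events; first few:", np.round(times[:5], 2))
    ```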

  20. Quantum chemistry in arbitrary dielectric environments: Theory and implementation of nonequilibrium Poisson boundary conditions and application to compute vertical ionization energies at the air/water interface.

    PubMed

    Coons, Marc P; Herbert, John M

    2018-06-14

    Widely used continuum solvation models for electronic structure calculations, including popular polarizable continuum models (PCMs), usually assume that the continuum environment is isotropic and characterized by a scalar dielectric constant, ε. This assumption is invalid at a liquid/vapor interface or any other anisotropic solvation environment. To address such scenarios, we introduce a more general formalism based on solution of Poisson's equation for a spatially varying dielectric function, ε(r). Inspired by nonequilibrium versions of PCMs, we develop a similar formalism within the context of Poisson's equation that includes the out-of-equilibrium dielectric response that accompanies a sudden change in the electron density of the solute, such as that which occurs in a vertical ionization process. A multigrid solver for Poisson's equation is developed to accommodate the large spatial grids necessary to discretize the three-dimensional electron density. We apply this methodology to compute vertical ionization energies (VIEs) of various solutes at the air/water interface and compare them to VIEs computed in bulk water, finding only very small differences between the two environments. VIEs computed using approximately two solvation shells of explicit water molecules are in excellent agreement with experiment for F - (aq), Cl - (aq), neat liquid water, and the hydrated electron, although errors for Li + (aq) and Na + (aq) are somewhat larger. Nonequilibrium corrections modify VIEs by up to 1.2 eV, relative to models based only on the static dielectric constant, and are therefore essential to obtain agreement with experiment. Given that the experiments (liquid microjet photoelectron spectroscopy) may be more sensitive to solutes situated at the air/water interface as compared to those in bulk water, our calculations provide some confidence that these experiments can indeed be interpreted as measurements of VIEs in bulk water.

  1. On the Singularity of the Vlasov-Poisson System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Qin, Hong; Zheng, Jian

    2013-04-26

The Vlasov-Poisson system can be viewed as the collisionless limit of the corresponding Fokker-Planck-Poisson system. It is reasonable to expect that the result of Landau damping can also be obtained from the Fokker-Planck-Poisson system when the collision frequency ν approaches zero. However, we show that the collisionless Vlasov-Poisson system is a singular limit of the collisional Fokker-Planck-Poisson system, and Landau's result can be recovered only as ν approaches zero from the positive side.

  2. On the singularity of the Vlasov-Poisson system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Jian; Qin, Hong; Plasma Physics Laboratory, Princeton University, Princeton, New Jersey 08550

    2013-09-15

    The Vlasov-Poisson system can be viewed as the collisionless limit of the corresponding Fokker-Planck-Poisson system. It is reasonable to expect that the result of Landau damping can also be obtained from the Fokker-Planck-Poisson system when the collision frequency ν approaches zero. However, we show that the collisionless Vlasov-Poisson system is a singular limit of the collisional Fokker-Planck-Poisson system, and Landau's result can be recovered only as the ν approaches zero from the positive side.

  3. Low-Rank Correction Methods for Algebraic Domain Decomposition Preconditioners

    DOE PAGES

    Li, Ruipeng; Saad, Yousef

    2017-08-01

This study presents a parallel preconditioning method for distributed sparse linear systems, based on an approximate inverse of the original matrix, that adopts a general framework of distributed sparse matrices and exploits domain decomposition (DD) and low-rank corrections. The DD approach decouples the matrix and, once inverted, a low-rank approximation is applied by exploiting the Sherman-Morrison-Woodbury formula, which yields two variants of the preconditioning methods. The low-rank expansion is computed by the Lanczos procedure with reorthogonalizations. Numerical experiments indicate that, when combined with Krylov subspace accelerators, this preconditioner can be efficient and robust for solving symmetric sparse linear systems. Comparisons with pARMS, a DD-based parallel incomplete LU (ILU) preconditioning method, are presented for solving Poisson's equation and linear elasticity problems.
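
    As a toy illustration of the low-rank correction idea (not the authors' distributed implementation), the sketch below applies the Sherman-Morrison-Woodbury identity to solve with a matrix of the form D + UVᵀ, where D stands in for the decoupled, domain-local part and UVᵀ for the low-rank coupling; all matrices are randomly generated placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, k = 200, 5

    # A = D + U V^T : D plays the role of the decoupled (domain-local) part,
    # U V^T the low-rank coupling between subdomains.
    D = np.diag(rng.uniform(1.0, 2.0, n))
    U = rng.standard_normal((n, k))
    V = rng.standard_normal((n, k))
    A = D + U @ V.T
    b = rng.standard_normal(n)

    # Sherman-Morrison-Woodbury:
    # (D + U V^T)^{-1} b = D^{-1} b - D^{-1} U (I_k + V^T D^{-1} U)^{-1} V^T D^{-1} b
    Dinv_b = np.linalg.solve(D, b)
    Dinv_U = np.linalg.solve(D, U)
    small = np.eye(k) + V.T @ Dinv_U           # only a k x k system to solve
    x = Dinv_b - Dinv_U @ np.linalg.solve(small, V.T @ Dinv_b)

    print(np.allclose(A @ x, b))               # True: exact solve via the identity
    ```

    In an actual DD preconditioner the action of D⁻¹ would itself be approximate (e.g. local ILU or subdomain solves), so the same identity is applied as a preconditioner inside a Krylov iteration rather than as an exact solve.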

  4. Low-Rank Correction Methods for Algebraic Domain Decomposition Preconditioners

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Ruipeng; Saad, Yousef

This study presents a parallel preconditioning method for distributed sparse linear systems, based on an approximate inverse of the original matrix, that adopts a general framework of distributed sparse matrices and exploits domain decomposition (DD) and low-rank corrections. The DD approach decouples the matrix and, once inverted, a low-rank approximation is applied by exploiting the Sherman-Morrison-Woodbury formula, which yields two variants of the preconditioning methods. The low-rank expansion is computed by the Lanczos procedure with reorthogonalizations. Numerical experiments indicate that, when combined with Krylov subspace accelerators, this preconditioner can be efficient and robust for solving symmetric sparse linear systems. Comparisons with pARMS, a DD-based parallel incomplete LU (ILU) preconditioning method, are presented for solving Poisson's equation and linear elasticity problems.

  5. Mathematical analysis of the boundary-integral based electrostatics estimation approximation for molecular solvation: exact results for spherical inclusions.

    PubMed

    Bardhan, Jaydeep P; Knepley, Matthew G

    2011-09-28

    We analyze the mathematically rigorous BIBEE (boundary-integral based electrostatics estimation) approximation of the mixed-dielectric continuum model of molecular electrostatics, using the analytically solvable case of a spherical solute containing an arbitrary charge distribution. Our analysis, which builds on Kirkwood's solution using spherical harmonics, clarifies important aspects of the approximation and its relationship to generalized Born models. First, our results suggest a new perspective for analyzing fast electrostatic models: the separation of variables between material properties (the dielectric constants) and geometry (the solute dielectric boundary and charge distribution). Second, we find that the eigenfunctions of the reaction-potential operator are exactly preserved in the BIBEE model for the sphere, which supports the use of this approximation for analyzing charge-charge interactions in molecular binding. Third, a comparison of BIBEE to the recent GBε theory suggests a modified BIBEE model capable of predicting electrostatic solvation free energies to within 4% of a full numerical Poisson calculation. This modified model leads to a projection-framework understanding of BIBEE and suggests opportunities for future improvements. © 2011 American Institute of Physics

  6. Saddlepoint approximation to the distribution of the total distance of the continuous time random walk

    NASA Astrophysics Data System (ADS)

    Gatto, Riccardo

    2017-12-01

This article considers the random walk over R^p, with p ≥ 2, where a given particle starts at the origin and moves stepwise with uniformly distributed step directions and step lengths following a common distribution. Step directions and step lengths are independent. The case where the number of steps of the particle is fixed and the more general case where it follows an independent continuous time inhomogeneous counting process are considered. Saddlepoint approximations to the distribution of the distance from the position of the particle to the origin are provided. Despite the p-dimensional nature of the random walk, the computations of the saddlepoint approximations are one-dimensional and thus simple. Explicit formulae are derived with dimension p = 3: for uniformly and exponentially distributed step lengths, for fixed and for Poisson distributed number of steps. In these situations, the high accuracy of the saddlepoint approximations is illustrated by numerical comparisons with Monte Carlo simulation. Contribution to the "Topical Issue: Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
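
    A simple Monte Carlo check of the setting considered above (not the saddlepoint approximation itself) might look like the following sketch: uniformly distributed step directions in p = 3, exponentially distributed step lengths, and a Poisson-distributed number of steps; the rate and step-length parameters are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def random_walk_distance(n_walks, lam=10.0, mean_step=1.0):
        """Distance to the origin after a Poisson(lam) number of isotropic 3-D
        steps with exponentially distributed lengths (mean `mean_step`)."""
        dist = np.empty(n_walks)
        for i in range(n_walks):
            n_steps = rng.poisson(lam)
            if n_steps == 0:
                dist[i] = 0.0
                continue
            dirs = rng.standard_normal((n_steps, 3))            # isotropic directions
            dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
            lengths = rng.exponential(mean_step, n_steps)
            dist[i] = np.linalg.norm((dirs * lengths[:, None]).sum(axis=0))
        return dist

    d = random_walk_distance(20_000)
    print("estimated P(distance <= 3):", np.mean(d <= 3.0))
    ```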

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dechant, Lawrence J.

Wave packet analysis provides a connection between linear small-disturbance theory and subsequent nonlinear turbulent spot flow behavior. The traditional association between linear stability analysis and nonlinear wave form is developed via the method of stationary phase, whereby asymptotic (simplified) mean flow solutions are used to estimate dispersion behavior and stationary phase approximations are used to invert the associated Fourier transform. The resulting process typically requires inversion of nonlinear algebraic equations that is best performed numerically, which partially diminishes the value of the approximation as compared to more complete approaches, e.g., DNS or linear/nonlinear adjoint methods. To obtain a simpler, closed-form analytical result, the complete packet solution is modeled via approximate amplitude (linear convected kinematic wave initial value problem) and local sinusoidal (wave equation) expressions. Significantly, the initial value for the kinematic wave transport expression follows from a separable variable-coefficient approximation to the linearized pressure fluctuation Poisson expression. The resulting amplitude solution, while approximate in nature, nonetheless appears to mimic many of the global features, e.g., transitional flow intermittency and pressure fluctuation magnitude behavior. A low-wave-number wave packet model also recovers meaningful auto-correlation and low-frequency spectral behaviors.

  8. Accelerating Electrostatic Surface Potential Calculation with Multiscale Approximation on Graphics Processing Units

    PubMed Central

    Anandakrishnan, Ramu; Scogland, Tom R. W.; Fenley, Andrew T.; Gordon, John C.; Feng, Wu-chun; Onufriev, Alexey V.

    2010-01-01

    Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. Two commonly used techniques to speed up these types of electrostatic computations are approximations based on multi-scale coarse-graining and parallelization across multiple processors. This paper demonstrates that for the computation of electrostatic surface potential, these two techniques can be combined to deliver significantly greater speed-up than either one separately, something that is in general not always possible. Specifically, the electrostatic potential computation, using an analytical linearized Poisson Boltzmann (ALPB) method, is approximated using the hierarchical charge partitioning (HCP) multiscale method, and parallelized on an ATI Radeon 4870 graphical processing unit (GPU). The implementation delivers a combined 934-fold speed-up for a 476,040 atom viral capsid, compared to an equivalent non-parallel implementation on an Intel E6550 CPU without the approximation. This speed-up is significantly greater than the 42-fold speed-up for the HCP approximation alone or the 182-fold speed-up for the GPU alone. PMID:20452792

  9. On the fractal characterization of Paretian Poisson processes

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo I.; Sokolov, Igor M.

    2012-06-01

    Paretian Poisson processes are Poisson processes which are defined on the positive half-line, have maximal points, and are quantified by power-law intensities. Paretian Poisson processes are elemental in statistical physics, and are the bedrock of a host of power-law statistics ranging from Pareto's law to anomalous diffusion. In this paper we establish evenness-based fractal characterizations of Paretian Poisson processes. Considering an array of socioeconomic evenness-based measures of statistical heterogeneity, we show that: amongst the realm of Poisson processes which are defined on the positive half-line, and have maximal points, Paretian Poisson processes are the unique class of 'fractal processes' exhibiting scale-invariance. The results established in this paper are diametric to previous results asserting that the scale-invariance of Poisson processes-with respect to physical randomness-based measures of statistical heterogeneity-is characterized by exponential Poissonian intensities.
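
    A Paretian Poisson process of the kind described above can be sketched numerically by mapping the arrival times of a unit-rate Poisson process through the inverse of a power-law tail intensity Λ(x) = C x^(-α); the values of C and α below are arbitrary, and the check simply compares the number of points above a given level with its Poisson mean.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def paretian_poisson_points(n_points, alpha=1.5, C=1.0):
        """Largest `n_points` points of a Poisson process on (0, inf) with
        power-law (Paretian) tail intensity Lambda(x) = C * x**(-alpha)."""
        gamma = np.cumsum(rng.exponential(1.0, n_points))   # unit-rate arrival times
        return (C / gamma) ** (1.0 / alpha)                 # points in decreasing order

    pts = paretian_poisson_points(100_000, alpha=1.5, C=2.0)
    x = 0.05
    # sanity check: #{points > x} should be Poisson with mean C * x**(-alpha)
    print(np.sum(pts > x), "points observed above x;", 2.0 * x ** -1.5, "expected on average")
    ```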

  10. Method of making thermally removable polyurethanes

    DOEpatents

    Loy, Douglas A.; Wheeler, David R.; McElhanon, James R.; Saunders, Randall S.; Durbin-Voss, Marvie Lou

    2002-01-01

A method of making a thermally-removable polyurethane material by heating a mixture of a maleimide compound and a furan compound and introducing alcohol and isocyanate functional groups, where the alcohol and isocyanate groups react to form the urethane linkages, and the furan and maleimide compounds react to form thermally weak Diels-Alder adducts that are incorporated into the backbone of the urethane linkages during formation of the polyurethane material at temperatures from above room temperature to less than approximately 90 °C. The polyurethane material can be easily removed within approximately an hour by heating to temperatures greater than approximately 90 °C in a polar solvent. The polyurethane material can be used to protect electronic components that may require subsequent removal of the solid material for component repair, modification, or quality control.

  11. Nambu-Poisson gauge theory

    NASA Astrophysics Data System (ADS)

    Jurčo, Branislav; Schupp, Peter; Vysoký, Jan

    2014-06-01

    We generalize noncommutative gauge theory using Nambu-Poisson structures to obtain a new type of gauge theory with higher brackets and gauge fields. The approach is based on covariant coordinates and higher versions of the Seiberg-Witten map. We construct a covariant Nambu-Poisson gauge theory action, give its first order expansion in the Nambu-Poisson tensor and relate it to a Nambu-Poisson matrix model.

  12. Derivation of Poisson and Nernst-Planck equations in a bath and channel from a molecular model.

    PubMed

    Schuss, Z; Nadler, B; Eisenberg, R S

    2001-09-01

    Permeation of ions from one electrolytic solution to another, through a protein channel, is a biological process of considerable importance. Permeation occurs on a time scale of micro- to milliseconds, far longer than the femtosecond time scales of atomic motion. Direct simulations of atomic dynamics are not yet possible for such long-time scales; thus, averaging is unavoidable. The question is what and how to average. In this paper, we average a Langevin model of ionic motion in a bulk solution and protein channel. The main result is a coupled system of averaged Poisson and Nernst-Planck equations (CPNP) involving conditional and unconditional charge densities and conditional potentials. The resulting NP equations contain the averaged force on a single ion, which is the sum of two components. The first component is the gradient of a conditional electric potential that is the solution of Poisson's equation with conditional and permanent charge densities and boundary conditions of the applied voltage. The second component is the self-induced force on an ion due to surface charges induced only by that ion at dielectric interfaces. The ion induces surface polarization charge that exerts a significant force on the ion itself, not present in earlier PNP equations. The proposed CPNP system is not complete, however, because the electric potential satisfies Poisson's equation with conditional charge densities, conditioned on the location of an ion, while the NP equations contain unconditional densities. The conditional densities are closely related to the well-studied pair-correlation functions of equilibrium statistical mechanics. We examine a specific closure relation, which on the one hand replaces the conditional charge densities by the unconditional ones in the Poisson equation, and on the other hand replaces the self-induced force in the NP equation by an effective self-induced force. This effective self-induced force is nearly zero in the baths but is approximately equal to the self-induced force in and near the channel. The charge densities in the NP equations are interpreted as time averages over long times of the motion of a quasiparticle that diffuses with the same diffusion coefficient as that of a real ion, but is driven by the averaged force. In this way, continuum equations with averaged charge densities and mean-fields can be used to describe permeation through a protein channel.

  13. Identification of lithology in Gulf of Mexico Miocene rocks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hilterman, F.J.; Sherwood, J.W.C.; Schellhorn, R.

    1996-12-31

In the Gulf of Mexico, many gas-saturated sands are not Bright Spots and thus are difficult to detect on conventional 3D seismic data. These small-amplitude reflections occur frequently in Pliocene-Miocene exploration plays when the acoustic impedances of the gas-saturated sands and shales are approximately the same. In these areas, geophysicists have had limited success using AVO to reduce the exploration risk. The interpretation of the conventional AVO attributes is often difficult, and their relationships to the physical properties of the media are questionable. A 3D AVO study was conducted utilizing numerous well-log suites, core analyses, and production histories to help calibrate the seismic response to the petrophysical properties. This study resulted in an extension of the AVO method to a technique that now displays Bright Spots where very clean sands and gas-saturated sands occur. These litho-stratigraphic reflections from the new AVO technique are related to Poisson's ratio, a petrophysical property that is normally mixed with the acoustic impedance on conventional 3D migrated data.

  14. Square Root Graphical Models: Multivariate Generalizations of Univariate Exponential Families that Permit Positive Dependencies

    PubMed Central

    Inouye, David I.; Ravikumar, Pradeep; Dhillon, Inderjit S.

    2016-01-01

    We develop Square Root Graphical Models (SQR), a novel class of parametric graphical models that provides multivariate generalizations of univariate exponential family distributions. Previous multivariate graphical models (Yang et al., 2015) did not allow positive dependencies for the exponential and Poisson generalizations. However, in many real-world datasets, variables clearly have positive dependencies. For example, the airport delay time in New York—modeled as an exponential distribution—is positively related to the delay time in Boston. With this motivation, we give an example of our model class derived from the univariate exponential distribution that allows for almost arbitrary positive and negative dependencies with only a mild condition on the parameter matrix—a condition akin to the positive definiteness of the Gaussian covariance matrix. Our Poisson generalization allows for both positive and negative dependencies without any constraints on the parameter values. We also develop parameter estimation methods using node-wise regressions with ℓ1 regularization and likelihood approximation methods using sampling. Finally, we demonstrate our exponential generalization on a synthetic dataset and a real-world dataset of airport delay times. PMID:27563373

  15. A two-phase Poisson process model and its application to analysis of cancer mortality among A-bomb survivors.

    PubMed

    Ohtaki, Megu; Tonda, Tetsuji; Aihara, Kazuyuki

    2015-10-01

    We consider a two-phase Poisson process model where only early successive transitions are assumed to be sensitive to exposure. In the case where intensity transitions are low, we derive analytically an approximate formula for the distribution of time to event for the excess hazard ratio (EHR) due to a single point exposure. The formula for EHR is a polynomial in exposure dose. Since the formula for EHR contains no unknown parameters except for the number of total stages, number of exposure-sensitive stages, and a coefficient of exposure effect, it is applicable easily under a variety of situations where there exists a possible latency time from a single point exposure to occurrence of event. Based on the multistage hypothesis of cancer, we formulate a radiation carcinogenesis model in which only some early consecutive stages of the process are sensitive to exposure, whereas later stages are not affected. An illustrative analysis using the proposed model is given for cancer mortality among A-bomb survivors. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Numerical solution of the Hele-Shaw equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitaker, N.

    1987-04-01

An algorithm is presented for approximating the motion of the interface between two immiscible fluids in a Hele-Shaw cell. The interface is represented by a set of volume fractions. We use the Simple Line Interface Calculation method along with the method of fractional steps to transport the interface. The equation of continuity leads to a Poisson equation for the pressure. The Poisson equation is discretized. Near the interface, where the velocity field is discontinuous, the discretization is based on a weak formulation of the continuity equation. Interpolation is used on each side of the interface to increase the accuracy of the algorithm. The weak formulation as well as the interpolation are based on the computed volume fractions. This treatment of the interface is new. The discretized equations are solved by a modified conjugate gradient method. Surface tension is included and the curvature is computed through the use of osculating circles. For perturbations of small amplitude, a surprisingly good agreement is found between the numerical results and linearized perturbation theory. Numerical results are presented for the finite amplitude growth of unstable fingers. 62 refs., 13 figs.

  17. Strong and weak adsorptions of polyelectrolyte chains onto oppositely charged spheres

    NASA Astrophysics Data System (ADS)

    Cherstvy, A. G.; Winkler, R. G.

    2006-08-01

    We investigate the complexation of long thin polyelectrolyte (PE) chains with oppositely charged spheres. In the limit of strong adsorption, when strongly charged PE chains adapt a definite wrapped conformation on the sphere surface, we analytically solve the linear Poisson-Boltzmann equation and calculate the electrostatic potential and the energy of the complex. We discuss some biological applications of the obtained results. For weak adsorption, when a flexible weakly charged PE chain is localized next to the sphere in solution, we solve the Edwards equation for PE conformations in the Hulthén potential, which is used as an approximation for the screened Debye-Hückel potential of the sphere. We predict the critical conditions for PE adsorption. We find that the critical sphere charge density exhibits a distinctively different dependence on the Debye screening length than for PE adsorption onto a flat surface. We compare our findings with experimental measurements on complexation of various PEs with oppositely charged colloidal particles. We also present some numerical results of the coupled Poisson-Boltzmann and self-consistent field equation for PE adsorption in an assembly of oppositely charged spheres.

  18. SELF-GRAVITATIONAL FORCE CALCULATION OF SECOND-ORDER ACCURACY FOR INFINITESIMALLY THIN GASEOUS DISKS IN POLAR COORDINATES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hsiang-Hsu; Taam, Ronald E.; Yen, David C. C., E-mail: yen@math.fju.edu.tw

Investigating the evolution of disk galaxies and the dynamics of proto-stellar disks can involve the use of both a hydrodynamical and a Poisson solver. These systems are usually approximated as infinitesimally thin disks using two-dimensional Cartesian or polar coordinates. In Cartesian coordinates, the calculations of the hydrodynamics and self-gravitational forces are relatively straightforward for attaining second-order accuracy. However, in polar coordinates, a second-order calculation of self-gravitational forces is required for matching the second-order accuracy of hydrodynamical schemes. We present a direct algorithm for calculating self-gravitational forces with second-order accuracy without artificial boundary conditions. The Poisson integral in polar coordinates is expressed in a convolution form and the corresponding numerical complexity is nearly linear using a fast Fourier transform. Examples with analytic solutions are used to verify that the truncated error of this algorithm is of second order. The kernel integral around the singularity is applied to modify the particle method. The use of a softening length is avoided and the accuracy of the particle method is significantly improved.

  19. A Poisson Log-Normal Model for Constructing Gene Covariation Network Using RNA-seq Data.

    PubMed

    Choi, Yoonha; Coram, Marc; Peng, Jie; Tang, Hua

    2017-07-01

    Constructing expression networks using transcriptomic data is an effective approach for studying gene regulation. A popular approach for constructing such a network is based on the Gaussian graphical model (GGM), in which an edge between a pair of genes indicates that the expression levels of these two genes are conditionally dependent, given the expression levels of all other genes. However, GGMs are not appropriate for non-Gaussian data, such as those generated in RNA-seq experiments. We propose a novel statistical framework that maximizes a penalized likelihood, in which the observed count data follow a Poisson log-normal distribution. To overcome the computational challenges, we use Laplace's method to approximate the likelihood and its gradients, and apply the alternating directions method of multipliers to find the penalized maximum likelihood estimates. The proposed method is evaluated and compared with GGMs using both simulated and real RNA-seq data. The proposed method shows improved performance in detecting edges that represent covarying pairs of genes, particularly for edges connecting low-abundant genes and edges around regulatory hubs.

  20. Analysis and control of hourglass instabilities in underintegrated linear and nonlinear elasticity

    NASA Technical Reports Server (NTRS)

    Jacquotte, Olivier P.; Oden, J. Tinsley

    1994-01-01

    Methods are described to identify and correct a bad finite element approximation of the governing operator obtained when under-integration is used in numerical code for several model problems: the Poisson problem, the linear elasticity problem, and for problems in the nonlinear theory of elasticity. For each of these problems, the reason for the occurrence of instabilities is given, a way to control or eliminate them is presented, and theorems of existence, uniqueness, and convergence for the given methods are established. Finally, numerical results are included which illustrate the theory.

  1. Solving the Helmholtz equation in conformal mapped ARROW structures using homotopy perturbation method.

    PubMed

    Reck, Kasper; Thomsen, Erik V; Hansen, Ole

    2011-01-31

    The scalar wave equation, or Helmholtz equation, describes within a certain approximation the electromagnetic field distribution in a given system. In this paper we show how to solve the Helmholtz equation in complex geometries using conformal mapping and the homotopy perturbation method. The solution of the mapped Helmholtz equation is found by solving an infinite series of Poisson equations using two dimensional Fourier series. The solution is entirely based on analytical expressions and is not mesh dependent. The analytical results are compared to a numerical (finite element method) solution.
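
    The elementary building block mentioned above, a single Poisson problem solved with a two-dimensional Fourier (sine) series, can be sketched as follows for the unit square with homogeneous Dirichlet data and a manufactured right-hand side; the conformal mapping and the homotopy iteration of the paper are not reproduced here.

    ```python
    import numpy as np

    # Solve  -Laplace(u) = f  on the unit square with u = 0 on the boundary,
    # using a truncated two-dimensional Fourier (sine) series.
    M = 16                                    # sine modes per direction
    n_quad = 200                              # quadrature points per direction
    x = (np.arange(n_quad) + 0.5) / n_quad    # midpoint-rule nodes
    X, Y = np.meshgrid(x, x, indexing="ij")

    # Manufactured right-hand side with known exact solution
    u_exact = np.sin(np.pi * X) * np.sin(2 * np.pi * Y)
    f = 5 * np.pi**2 * u_exact                # since -Laplace(u_exact) = 5*pi^2*u_exact

    # Sine coefficients b_mn = 4 * integral of f(x,y) sin(m*pi*x) sin(n*pi*y)
    m = np.arange(1, M + 1)
    Sx = np.sin(np.pi * np.outer(m, x))       # (M, n_quad) evaluation matrix
    b = 4.0 * Sx @ f @ Sx.T / n_quad**2       # midpoint-rule quadrature

    # Divide by the eigenvalues pi^2 (m^2 + n^2) of -Laplace and sum the series
    lam = np.pi**2 * (m[:, None] ** 2 + m[None, :] ** 2)
    u = Sx.T @ (b / lam) @ Sx                 # series evaluated on the grid

    print("max error:", np.abs(u - u_exact).max())
    ```

    With this manufactured f only the (m, n) = (1, 2) sine coefficient is nonzero, so the recovered field matches the exact solution up to quadrature error.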

  2. Effects of biaxial strains on electronic and elastic properties of hexagonal XSi2 (X = Cr, Mo, W) from first-principles

    NASA Astrophysics Data System (ADS)

    Zhu, Haiyan; Shi, Liwei; Li, Shuaiqi; Zhang, Shaobo; Xia, Wangsuo

    2018-02-01

Structural and electronic properties and the elastic anisotropy of hexagonal C40 XSi2 (X = Cr, Mo, W) under equibiaxial in-plane strains are systematically studied using first-principles calculations. The energy gaps show significant changes with biaxial strain, whereas the materials remain indirect band-gap materials for -6% < εxx < 6%. All elastic constants, as well as the bulk modulus, shear modulus, and Young's modulus, increase (decrease) almost linearly with increasing compressive (tensile) strain. The evolutions of the B_H/G_H ratio and Poisson's ratio indicate that these compounds have better (worse) ductile behaviour under compressive (tensile) strains. A set of 3D plots shows a large directional variability in the Young's modulus E and shear modulus G at different strains for the three compounds, which is consistent with the values of the anisotropy factors. Moreover, the evolution of the Debye temperature and the anisotropy of the sound velocities with biaxial strain are discussed.

  3. A Three-dimensional Polymer Scaffolding Material Exhibiting a Zero Poisson's Ratio.

    PubMed

    Soman, Pranav; Fozdar, David Y; Lee, Jin Woo; Phadke, Ameya; Varghese, Shyni; Chen, Shaochen

    2012-05-14

Poisson's ratio describes the degree to which a material contracts (expands) transversally when axially strained. A material with a zero Poisson's ratio does not deform transversally in response to an axial strain (stretching). In tissue engineering applications, scaffolding having a zero Poisson's ratio (ZPR) may be more suitable for emulating the behavior of native tissues and for accommodating and transmitting forces to the host tissue site during wound healing (or tissue regrowth). For example, scaffolding with a zero Poisson's ratio may be beneficial in the engineering of cartilage, ligament, corneal, and brain tissues, which are known to possess Poisson's ratios of nearly zero. Here, we report a 3D biomaterial constructed from polyethylene glycol (PEG) exhibiting in-plane Poisson's ratios of zero for large values of axial strain. We use digital micro-mirror device projection printing (DMD-PP) to create single- and double-layer scaffolds composed of semi re-entrant pores whose arrangement and deformation mechanisms contribute to the zero Poisson's ratio. Strain experiments prove the zero-Poisson's-ratio behavior of the scaffolds and show that the addition of layers does not change the Poisson's ratio. Human mesenchymal stem cells (hMSCs) cultured on biomaterials with zero Poisson's ratio demonstrate the feasibility of utilizing these novel materials for biological applications which require little to no transverse deformation resulting from axial strains. The techniques used in this work allow Poisson's ratio to be both scale-independent and independent of the choice of strut material for strains in the elastic regime, and therefore ZPR behavior can be imparted to a variety of photocurable biomaterials.

  4. A Local Approximation of Fundamental Measure Theory Incorporated into Three Dimensional Poisson-Nernst-Planck Equations to Account for Hard Sphere Repulsion Among Ions

    NASA Astrophysics Data System (ADS)

    Qiao, Yu; Liu, Xuejiao; Chen, Minxin; Lu, Benzhuo

    2016-04-01

    The hard sphere repulsion among ions can be considered in the Poisson-Nernst-Planck (PNP) equations by combining the fundamental measure theory (FMT). To reduce the nonlocal computational complexity in 3D simulation of biological systems, a local approximation of FMT is derived, which forms a local hard sphere PNP (LHSPNP) model. In the derivation, the excess chemical potential from hard sphere repulsion is obtained with the FMT and has six integration components. For the integrands and weighted densities in each component, Taylor expansions are performed and the lowest order approximations are taken, which result in the final local hard sphere (LHS) excess chemical potential with four components. By plugging the LHS excess chemical potential into the ionic flux expression in the Nernst-Planck equation, the three dimensional LHSPNP is obtained. It is interestingly found that the essential part of free energy term of the previous size modified model (Borukhov et al. in Phys Rev Lett 79:435-438, 1997; Kilic et al. in Phys Rev E 75:021502, 2007; Lu and Zhou in Biophys J 100:2475-2485, 2011; Liu and Eisenberg in J Chem Phys 141:22D532, 2014) has a very similar form to one term of the LHS model, but LHSPNP has more additional terms accounting for size effects. Equation of state for one component homogeneous fluid is studied for the local hard sphere approximation of FMT and is proved to be exact for the first two virial coefficients, while the previous size modified model only presents the first virial coefficient accurately. To investigate the effects of LHS model and the competitions among different counterion species, numerical experiments are performed for the traditional PNP model, the LHSPNP model, the previous size modified PNP (SMPNP) model and the Monte Carlo simulation. It's observed that in steady state the LHSPNP results are quite different from the PNP results, but are close to the SMPNP results under a wide range of boundary conditions. Besides, in both LHSPNP and SMPNP models the stratification of one counterion species can be observed under certain bulk concentrations.

  5. Equivalent Discrete-Time Channel Modeling for Molecular Communication With Emphasize on an Absorbing Receiver.

    PubMed

    Damrath, Martin; Korte, Sebastian; Hoeher, Peter Adam

    2017-01-01

    This paper introduces the equivalent discrete-time channel model (EDTCM) to the area of diffusion-based molecular communication (DBMC). Emphasis is on an absorbing receiver, which is based on the so-called first passage time concept. In the wireless communications community the EDTCM is well known. Therefore, it is anticipated that the EDTCM improves the accessibility of DBMC and supports the adaptation of classical wireless communication algorithms to the area of DBMC. Furthermore, the EDTCM has the capability to provide a remarkable reduction of computational complexity compared to random walk based DBMC simulators. Besides the exact EDTCM, three approximations thereof based on binomial, Gaussian, and Poisson approximation are proposed and analyzed in order to further reduce computational complexity. In addition, the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm is adapted to all four channel models. Numerical results show the performance of the exact EDTCM, illustrate the performance of the adapted BCJR algorithm, and demonstrate the accuracy of the approximations.
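
    The role of the Poisson and Gaussian approximations can be illustrated with a minimal sketch: if N released molecules are each absorbed within a given time slot with probability p, the received count is Binomial(N, p), which is compared below with Poisson(Np) and a moment-matched Gaussian; the values of N and p are placeholders rather than the first-passage-time probabilities of the paper.

    ```python
    import numpy as np
    from scipy import stats

    N, p = 1000, 0.02          # released molecules, per-molecule hit probability (illustrative)
    k = np.arange(0, 50)

    binom = stats.binom.pmf(k, N, p)
    poiss = stats.poisson.pmf(k, N * p)
    gauss = stats.norm.pdf(k, loc=N * p, scale=np.sqrt(N * p * (1 - p)))

    print("max |binomial - Poisson|  :", np.abs(binom - poiss).max())
    print("max |binomial - Gaussian| :", np.abs(binom - gauss).max())
    ```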

  6. Mutation-selection balance in mixed mating populations.

    PubMed

    Kelly, John K

    2007-05-21

    An approximation to the average number of deleterious mutations per gamete, Q, is derived from a model allowing selection on both zygotes and male gametes. Progeny are produced by either outcrossing or self-fertilization with fixed probabilities. The genetic model is a standard in evolutionary biology: mutations occur at unlinked loci, have equivalent effects, and combine multiplicatively to determine fitness. The approximation developed here treats individual mutation counts with a generalized Poisson model conditioned on the distribution of selfing histories in the population. The approximation is accurate across the range of parameter sets considered and provides both analytical insights and greatly increased computational speed. Model predictions are discussed in relation to several outstanding problems, including the estimation of the genomic deleterious mutation rates (U), the generality of "selective interference" among loci, and the consequences of gametic selection for the joint distribution of inbreeding depression and mating system across species. Finally, conflicting results from previous analytical treatments of mutation-selection balance are resolved to assumptions about the life-cycle and the initial fate of mutations.

  7. From Loss of Memory to Poisson.

    ERIC Educational Resources Information Center

    Johnson, Bruce R.

    1983-01-01

    A way of presenting the Poisson process and deriving the Poisson distribution for upper-division courses in probability or mathematical statistics is presented. The main feature of the approach lies in the formulation of Poisson postulates with immediate intuitive appeal. (MNS)
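
    A small numerical illustration of the idea, assuming the usual construction: if inter-event times are exponential (memoryless), the number of events in a fixed window is Poisson distributed, so the empirical mean and variance of the counts should coincide.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    rate, t_window, n_rep = 3.0, 2.0, 100_000

    counts = np.empty(n_rep, dtype=int)
    for i in range(n_rep):
        # accumulate memoryless (exponential) waiting times until the window ends
        t, n = 0.0, 0
        while True:
            t += rng.exponential(1.0 / rate)
            if t > t_window:
                break
            n += 1
        counts[i] = n

    print("empirical mean, variance:", counts.mean(), counts.var())
    print("Poisson mean = variance :", rate * t_window)
    ```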

  8. Discovery of HIV Type 1 Aspartic Protease Hit Compounds through Combined Computational Approaches.

    PubMed

    Xanthopoulos, Dimitrios; Kritsi, Eftichia; Supuran, Claudiu T; Papadopoulos, Manthos G; Leonis, Georgios; Zoumpoulakis, Panagiotis

    2016-08-05

    A combination of computational techniques and inhibition assay experiments was employed to identify hit compounds from commercial libraries with enhanced inhibitory potency against HIV type 1 aspartic protease (HIV PR). Extensive virtual screening with the aid of reliable pharmacophore models yielded five candidate protease inhibitors. Subsequent molecular dynamics and molecular mechanics Poisson-Boltzmann surface area free-energy calculations for the five ligand-HIV PR complexes suggested a high stability of the systems through hydrogen-bond interactions between the ligands and the protease's flaps (Ile50/50'), as well as interactions with residues of the active site (Asp25/25'/29/29'/30/30'). Binding-energy calculations for the three most promising compounds yielded values between -5 and -10 kcal mol(-1) and suggested that van der Waals interactions contribute most favorably to the total energy. The predicted binding-energy values were verified by in vitro inhibition assays, which showed promising results in the high nanomolar range. These results provide structural considerations that may guide further hit-to-lead optimization toward improved anti-HIV drugs. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Process for epoxy foam production

    DOEpatents

    Celina, Mathias C [Albuquerque, NM

    2011-08-23

An epoxy resin mixture with at least one epoxy resin of between approximately 60 wt % and 90 wt %, a maleic anhydride of between approximately 1 wt % and approximately 30 wt %, and an imidazole catalyst of less than approximately 2 wt %, where the resin mixture is formed from at least one epoxy resin with a 1-30 wt % maleic anhydride compound and an imidazole catalyst at a temperature sufficient to keep the maleic anhydride compound molten, the resin mixture reacting to form a foaming resin which can then be cured at a temperature greater than 50 °C to form an epoxy foam.

  10. Nonlocal Poisson-Fermi model for ionic solvent.

    PubMed

    Xie, Dexuan; Liu, Jinn-Liang; Eisenberg, Bob

    2016-07-01

    We propose a nonlocal Poisson-Fermi model for ionic solvent that includes ion size effects and polarization correlations among water molecules in the calculation of electrostatic potential. It includes the previous Poisson-Fermi models as special cases, and its solution is the convolution of a solution of the corresponding nonlocal Poisson dielectric model with a Yukawa-like kernel function. The Fermi distribution is shown to be a set of optimal ionic concentration functions in the sense of minimizing an electrostatic potential free energy. Numerical results are reported to show the difference between a Poisson-Fermi solution and a corresponding Poisson solution.

  11. Saint-Venant end effects for materials with negative Poisson's ratios

    NASA Technical Reports Server (NTRS)

    Lakes, R. S.

    1992-01-01

Results are presented from an analysis of Saint-Venant end effects for materials with negative Poisson's ratio. Examples are presented showing that slow decay of end stress occurs in circular cylinders of negative Poisson's ratio, whereas a sandwich panel containing rigid face sheets and a compliant core exhibits no anomalous effects for negative Poisson's ratio (but exhibits slow stress decay for core Poisson's ratios approaching 0.5). In sandwich panels with stiff but not perfectly rigid face sheets, a negative Poisson's ratio results in end stress decay that is faster than it would be otherwise. It is suggested that the slow decay previously predicted for sandwich strips in plane deformation as a result of the geometry can be mitigated by the use of a negative Poisson's ratio material for the core.

  12. Poisson's ratio of fiber-reinforced composites

    NASA Astrophysics Data System (ADS)

    Christiansson, Henrik; Helsing, Johan

    1996-05-01

    Poisson's ratio flow diagrams, that is, the Poisson's ratio versus the fiber fraction, are obtained numerically for hexagonal arrays of elastic circular fibers in an elastic matrix. High numerical accuracy is achieved through the use of an interface integral equation method. Questions concerning fixed point theorems and the validity of existing asymptotic relations are investigated and partially resolved. Our findings for the transverse effective Poisson's ratio, together with earlier results for random systems by other authors, make it possible to formulate a general statement for Poisson's ratio flow diagrams: For composites with circular fibers and where the phase Poisson's ratios are equal to 1/3, the system with the lowest stiffness ratio has the highest Poisson's ratio. For other choices of the elastic moduli for the phases, no simple statement can be made.

  13. Modeling bursts and heavy tails in human dynamics.

    PubMed

    Vázquez, Alexei; Oliveira, João Gama; Dezsö, Zoltán; Goh, Kwang-Il; Kondor, Imre; Barabási, Albert-László

    2006-03-01

The dynamics of many social, technological and economic phenomena are driven by individual human actions, turning the quantitative understanding of human behavior into a central question of modern science. Current models of human dynamics, used from risk assessment to communications, assume that human actions are randomly distributed in time and thus well approximated by Poisson processes. Here we provide direct evidence that for five human activity patterns, such as email and letter based communications, web browsing, library visits and stock trading, the timing of individual human actions follows non-Poisson statistics, characterized by bursts of rapidly occurring events separated by long periods of inactivity. We show that the bursty nature of human behavior is a consequence of a decision-based queuing process: when individuals execute tasks based on some perceived priority, the timing of the tasks will be heavy tailed, with most tasks being rapidly executed while a few experience very long waiting times. In contrast, priority-blind execution is well approximated by uniform interevent statistics. We discuss two queuing models that capture human activity. The first model assumes that there are no limitations on the number of tasks an individual can handle at any time, predicting that the waiting times of the individual tasks follow a heavy-tailed distribution P(τ_w) ∼ τ_w^(−α) with α = 3/2. The second model imposes a limitation on the queue length, resulting in a heavy-tailed waiting time distribution characterized by α = 1. We provide empirical evidence supporting the relevance of these two models to human activity patterns, showing that while emails, web browsing and library visitation display α = 1, surface-mail-based communication belongs to the α = 3/2 universality class. Finally, we discuss possible extensions of the proposed queuing models and outline some future challenges in exploring the statistical mechanics of human dynamics.
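
    A qualitative sketch of the fixed-queue-length model, taken here in its deterministic highest-priority limit with illustrative parameters (the paper's exact model also allows random task selection with some probability): most tasks are executed immediately, while a few wait for times spanning orders of magnitude, in contrast with the priority-blind protocol.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    def queue_waits(n_steps, protocol, L=2):
        """Fixed-length queue of L tasks: each step one task is executed and
        replaced by a fresh task with a uniform random priority.  'priority'
        always executes the highest-priority task; 'random' is priority-blind."""
        prio = rng.random(L)
        birth = np.zeros(L, dtype=int)
        waits = np.empty(n_steps, dtype=int)
        for t in range(1, n_steps + 1):
            i = int(np.argmax(prio)) if protocol == "priority" else int(rng.integers(L))
            waits[t - 1] = t - birth[i]
            prio[i] = rng.random()
            birth[i] = t
        return waits

    for proto in ("priority", "random"):
        w = queue_waits(100_000, proto)
        print(f"{proto:8s} median={np.median(w):.0f}  max={w.max()}  "
              f"waits > 100: {(w > 100).sum()}")
    ```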

  14. Characterization of Nonhomogeneous Poisson Processes Via Moment Conditions.

    DTIC Science & Technology

    1986-08-01

Poisson processes play an important role in many fields. The Poisson process is one of the simplest counting processes and is a building block for...place of independent increments. This provides a somewhat different viewpoint for examining Poisson processes. In addition, new characterizations for

  15. Poisson Mixture Regression Models for Heart Disease Prediction.

    PubMed

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criteria value. Furthermore, a Zero Inflated Poisson Mixture Regression model turned out to be the best model for heart prediction over all models as it both clusters individuals into high or low risk category and predicts rate to heart disease componentwise given clusters available. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using Poisson mixture regression model.
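
    A much-simplified sketch of the idea, assuming a plain two-component Poisson mixture without covariates (a stand-in for the mixture regression models discussed above, not the models themselves), fitted by EM on synthetic low-risk/high-risk counts:

    ```python
    import numpy as np
    from scipy.stats import poisson

    rng = np.random.default_rng(9)

    # Synthetic counts from a two-component Poisson mixture (e.g. low/high risk groups)
    z = rng.random(2000) < 0.3
    y = np.where(z, rng.poisson(9.0, 2000), rng.poisson(2.0, 2000))

    # EM for a two-component Poisson mixture
    pi, lam = 0.5, np.array([1.0, 5.0])
    for _ in range(200):
        # E-step: posterior probability that each observation belongs to component 2
        w = pi * poisson.pmf(y, lam[1])
        w = w / (w + (1 - pi) * poisson.pmf(y, lam[0]))
        # M-step: update mixing weight and component means
        pi = w.mean()
        lam = np.array([np.sum((1 - w) * y) / np.sum(1 - w),
                        np.sum(w * y) / np.sum(w)])

    print("mixing weight (true 0.3):", round(pi, 3), " rates (true 2, 9):", np.round(lam, 2))
    ```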

  16. Constructions and classifications of projective Poisson varieties.

    PubMed

    Pym, Brent

    2018-01-01

    This paper is intended both as an introduction to the algebraic geometry of holomorphic Poisson brackets, and as a survey of results on the classification of projective Poisson manifolds that have been obtained in the past 20 years. It is based on the lecture series delivered by the author at the Poisson 2016 Summer School in Geneva. The paper begins with a detailed treatment of Poisson surfaces, including adjunction, ruled surfaces and blowups, and leading to a statement of the full birational classification. We then describe several constructions of Poisson threefolds, outlining the classification in the regular case, and the case of rank-one Fano threefolds (such as projective space). Following a brief introduction to the notion of Poisson subspaces, we discuss Bondal's conjecture on the dimensions of degeneracy loci on Poisson Fano manifolds. We close with a discussion of log symplectic manifolds with simple normal crossings degeneracy divisor, including a new proof of the classification in the case of rank-one Fano manifolds.

  17. Poisson Mixture Regression Models for Heart Disease Prediction

    PubMed Central

    Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criteria value. Furthermore, a Zero Inflated Poisson Mixture Regression model turned out to be the best model for heart prediction over all models as it both clusters individuals into high or low risk category and predicts rate to heart disease componentwise given clusters available. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using Poisson mixture regression model. PMID:27999611

  18. Constructions and classifications of projective Poisson varieties

    NASA Astrophysics Data System (ADS)

    Pym, Brent

    2018-03-01

    This paper is intended both as an introduction to the algebraic geometry of holomorphic Poisson brackets, and as a survey of results on the classification of projective Poisson manifolds that have been obtained in the past 20 years. It is based on the lecture series delivered by the author at the Poisson 2016 Summer School in Geneva. The paper begins with a detailed treatment of Poisson surfaces, including adjunction, ruled surfaces and blowups, and leading to a statement of the full birational classification. We then describe several constructions of Poisson threefolds, outlining the classification in the regular case, and the case of rank-one Fano threefolds (such as projective space). Following a brief introduction to the notion of Poisson subspaces, we discuss Bondal's conjecture on the dimensions of degeneracy loci on Poisson Fano manifolds. We close with a discussion of log symplectic manifolds with simple normal crossings degeneracy divisor, including a new proof of the classification in the case of rank-one Fano manifolds.

  19. Adiabatic reduction of a model of stochastic gene expression with jump Markov process.

    PubMed

    Yvinec, Romain; Zhuge, Changjing; Lei, Jinzhi; Mackey, Michael C

    2014-04-01

    This paper considers adiabatic reduction in a model of stochastic gene expression with bursting transcription considered as a jump Markov process. In this model, the process of gene expression with auto-regulation is described by fast/slow dynamics. The production of mRNA is assumed to follow a compound Poisson process occurring at a rate depending on protein levels (the phenomena called bursting in molecular biology) and the production of protein is a linear function of mRNA numbers. When the dynamics of mRNA is assumed to be a fast process (due to faster mRNA degradation than that of protein) we prove that, with appropriate scalings in the burst rate, jump size or translational rate, the bursting phenomena can be transmitted to the slow variable. We show that, depending on the scaling, the reduced equation is either a stochastic differential equation with a jump Poisson process or a deterministic ordinary differential equation. These results are significant because adiabatic reduction techniques seem to have not been rigorously justified for a stochastic differential system containing a jump Markov process. We expect that the results can be generalized to adiabatic methods in more general stochastic hybrid systems.
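
    The bursting mechanism can be sketched with a crude Euler-type simulation, under the assumption of a compound Poisson mRNA input (bursts arriving at a constant rate with exponentially distributed sizes, i.e. without the protein-dependent feedback of the full model) and linear protein production; all rates are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    dt, t_max = 0.01, 500.0
    k_burst, mean_burst = 0.5, 10.0      # burst frequency and mean burst size (mRNA)
    k_p, g_m, g_p = 2.0, 1.0, 0.05       # translation rate, mRNA and protein decay rates

    n_steps = int(t_max / dt)
    m = np.empty(n_steps); m[0] = k_burst * mean_burst / g_m   # start near steady state
    p = np.empty(n_steps); p[0] = k_p * m[0] / g_p
    for i in range(1, n_steps):
        # compound Poisson input: bursts at rate k_burst, exponential burst sizes
        burst = rng.exponential(mean_burst) if rng.random() < k_burst * dt else 0.0
        m[i] = m[i - 1] + burst - g_m * m[i - 1] * dt          # fast variable (mRNA)
        p[i] = p[i - 1] + (k_p * m[i - 1] - g_p * p[i - 1]) * dt   # slow variable (protein)

    print("mRNA    mean ~", k_burst * mean_burst / g_m, " simulated:", round(m.mean(), 2))
    print("protein mean ~", k_p * k_burst * mean_burst / (g_m * g_p), " simulated:", round(p.mean(), 1))
    ```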

  20. Topics in elementary particle physics

    NASA Astrophysics Data System (ADS)

    Jin, Xiang

The author of this thesis discusses two topics in elementary particle physics: n-ary algebras and their applications to M-theory (Part I), and functional evolution and Renormalization Group flows (Part II). In Part I, the Lie algebra is extended to four different n-ary algebraic structures: generalized Lie algebras, Filippov algebras, Nambu algebras and Nambu-Poisson tensors, though there are still many other n-ary algebras. A natural property of generalized Lie algebras, the Bremner identity, is studied and proved with a method entirely different from that of its original version. We extend the Bremner identity to n-bracket cases, where n is an arbitrary odd integer. Filippov algebras do not focus on associativity and are defined by the Fundamental identity. We add associativity to Filippov algebras and give examples of how to construct Filippov algebras from su(2), the bosonic oscillator, and the Virasoro algebra. We try to include fermionic charges in the ternary Virasoro-Witt algebra, but the attempt fails because the fermionic charges keep generating new charges, so that the algebra does not close. We also study the restrictions imposed by the Bremner identity on Nambu algebras and Nambu-Poisson tensors. So far, the only example of a 3-algebra used in physics is the BLG model with the 3-algebra A4, describing the interaction of two M2-branes. Its extension with a Nambu algebra, the BLG-NB model, is believed to describe the condensation of infinitely many M2-branes. There is also another proposal for M2-brane interactions, the ABJM model, which is constructed from an ordinary Lie algebra. We compare the symmetry properties of these models and discuss possible approaches to including all three in a grand unification theory. In Part II, we give an approximate solution for Schroeder's equations, based on series and conjugation methods. We use the logistic map as an example and demonstrate that this approximate solution converges to known analytical solutions around the fixed point about which the approximation is constructed. Although closed-form solutions for Schroeder's equations cannot always be obtained analytically, by fitting the approximate solutions one can sometimes still obtain closed-form solutions. Based on Schroeder's theory, approximate solutions for trajectories, velocities and potentials can also be constructed. The approximate solution is particularly useful for calculating the beta function along a renormalization group trajectory. By "wrapping" the series solutions with conjugations from different inverse functions, we generate different branches of the trajectory and construct a counterexample to a folk theorem about limit cycles.

  1. On-the-fly Numerical Surface Integration for Finite-Difference Poisson-Boltzmann Methods.

    PubMed

    Cai, Qin; Ye, Xiang; Wang, Jun; Luo, Ray

    2011-11-01

    Most implicit solvation models require the definition of a molecular surface as the interface that separates the solute in atomic detail from the solvent approximated as a continuous medium. Commonly used surface definitions include the solvent accessible surface (SAS), the solvent excluded surface (SES), and the van der Waals surface. In this study, we present an efficient numerical algorithm to compute the SES and SAS areas to facilitate the application of finite-difference Poisson-Boltzmann methods in biomolecular simulations. Different from previous numerical approaches, our algorithm is physics-inspired and intimately coupled to the finite-difference Poisson-Boltzmann methods to fully take advantage of their existing data structures. Our analysis shows that the algorithm can achieve very good agreement with the analytical method in the calculation of the SES and SAS areas. Specifically, in our comprehensive test of 1,555 molecules, the average unsigned relative error is 0.27% in the SES area calculations and 1.05% in the SAS area calculations at a grid spacing of 0.5 Å. In addition, a systematic correction analysis can be used to improve the accuracy of the coarse-grid SES area calculations, with the average unsigned relative error in the SES areas reduced to 0.13%. These validation studies indicate that the proposed algorithm can be applied to biomolecules over a broad range of sizes and structures. Finally, the numerical algorithm can also be adapted to evaluate the surface integral of either a vector field or a scalar field defined on the molecular surface for additional solvation energetics and force calculations.

  2. Monitoring Poisson observations using combined applications of Shewhart and EWMA charts

    NASA Astrophysics Data System (ADS)

    Abujiya, Mu'azu Ramat

    2017-11-01

    The Shewhart and exponentially weighted moving average (EWMA) charts for nonconformities are the most widely used procedures of choice for monitoring Poisson observations in modern industries. Individually, the Shewhart and EWMA charts are only sensitive to large and small shifts, respectively. To enhance the detection ability of the two schemes for all kinds of shifts in Poisson count data, this study examines the performance of combined applications of the Shewhart and EWMA Poisson control charts. Furthermore, the study proposes modifications based on a well-structured statistical data collection technique, ranked set sampling (RSS), to detect shifts in the mean of a Poisson process more quickly. The relative performance of the proposed Shewhart-EWMA Poisson location charts is evaluated in terms of the average run length (ARL), standard deviation of the run length (SDRL), median run length (MRL), average ratio ARL (ARARL), average extra quadratic loss (AEQL) and performance comparison index (PCI). The new Poisson control charts based on the RSS method are generally superior to most of the existing schemes for monitoring Poisson processes. The use of the combined Shewhart-EWMA Poisson charts is illustrated with an example to demonstrate the practical implementation of the design procedure.
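
    A minimal sketch (Python) of a combined Shewhart/EWMA scheme for Poisson counts: the Shewhart c-chart catches large shifts, the EWMA statistic accumulates evidence of small ones, and the combined scheme signals when either chart does. The in-control mean, shift size, and control-limit constants below are illustrative assumptions, and the paper's RSS-based design is not reproduced.

        import numpy as np

        rng = np.random.default_rng(0)
        lam0 = 10.0                          # in-control mean of the Poisson counts (assumed)
        w, L_ewma, L_shew = 0.2, 2.8, 3.0    # smoothing weight and limit multipliers (assumed)

        # Shewhart c-chart limits: mean +/- 3 sqrt(mean), since Var = mean for Poisson counts
        shew_lo, shew_hi = lam0 - L_shew * np.sqrt(lam0), lam0 + L_shew * np.sqrt(lam0)
        # asymptotic EWMA limits: mean +/- L sqrt(w/(2-w) * mean)
        ewma_sd = np.sqrt(w / (2.0 - w) * lam0)
        ewma_lo, ewma_hi = lam0 - L_ewma * ewma_sd, lam0 + L_ewma * ewma_sd

        # simulate an in-control stretch followed by a small upward shift in the mean
        counts = np.concatenate([rng.poisson(lam0, 50), rng.poisson(lam0 + 1.5, 50)])

        z = lam0  # the EWMA statistic starts at the in-control mean
        for i, x in enumerate(counts, start=1):
            z = w * x + (1.0 - w) * z
            shewhart_signal = (x < shew_lo) or (x > shew_hi)
            ewma_signal = (z < ewma_lo) or (z > ewma_hi)
            if shewhart_signal or ewma_signal:
                print(f"signal at sample {i}: count={x}, EWMA={z:.2f}")
                break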

  3. Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'.

    PubMed

    de Nijs, Robin

    2015-07-21

    In order to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling, but also two direct redrawing methods were investigated. The redrawing methods were based on a Poisson and a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods and compared with the theoretical values for a Poisson distribution. The statistical parameters showed the same behavior as in the original note and confirmed the superiority of the Poisson resampling method. Rounding off before saving the half-count image had a severe impact on the counting statistics for counts below 100. Only Poisson resampling was unaffected by this, while Gaussian redrawing was less affected than Poisson redrawing. Poisson resampling is the method of choice when simulating half-count (or lower-count) images from full-count images. It correctly simulates the statistical properties, also in the case of rounding off of the images.
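
    For intuition, a short sketch of the three schemes compared in the comment, under the assumption that the resampling method corresponds to binomial thinning of the measured counts (which exactly preserves Poisson statistics), while the redrawing methods draw fresh Poisson or Gaussian values with half the measured mean. Whether this matches the exact procedure of White and Lawson is an assumption; the pixel values are synthetic.

        import numpy as np

        rng = np.random.default_rng(1)
        full = rng.poisson(50.0, size=(256, 256))   # synthetic full-count image, mean 50 counts/pixel

        # resampling via binomial thinning: each recorded count is kept with probability 1/2
        half_resampled = rng.binomial(full, 0.5)

        # direct redrawing from a Poisson or a Gaussian with half the measured mean
        half_poisson = rng.poisson(full / 2.0)
        half_gauss = np.maximum(rng.normal(full / 2.0, np.sqrt(full / 2.0)), 0.0)

        for name, img in [("thinning", half_resampled),
                          ("Poisson redraw", half_poisson),
                          ("Gaussian redraw", half_gauss)]:
            print(f"{name:15s} mean={img.mean():6.2f} var={img.var():6.2f}")
        # for Poisson statistics the half-count mean and variance should both be ~25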

  4. Optimal decision making on the basis of evidence represented in spike trains.

    PubMed

    Zhang, Jiaxiang; Bogacz, Rafal

    2010-05-01

    Experimental data indicate that perceptual decision making involves integration of sensory evidence in certain cortical areas. Theoretical studies have proposed that the computation in neural decision circuits approximates statistically optimal decision procedures (e.g., sequential probability ratio test) that maximize the reward rate in sequential choice tasks. However, these previous studies assumed that the sensory evidence was represented by continuous values from gaussian distributions with the same variance across alternatives. In this article, we make a more realistic assumption that sensory evidence is represented in spike trains described by the Poisson processes, which naturally satisfy the mean-variance relationship observed in sensory neurons. We show that for such a representation, the neural circuits involving cortical integrators and basal ganglia can approximate the optimal decision procedures for two and multiple alternative choice tasks.
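
    As a toy illustration of evidence accumulation with Poisson spike trains (not the authors' circuit model; rates and threshold are arbitrary assumptions), the sequential probability ratio test accumulates the log-likelihood ratio of two Poisson rates, which after observing N(t) spikes in time t is N(t)·log(λ1/λ2) − (λ1 − λ2)·t.

        import numpy as np

        rng = np.random.default_rng(2)
        lam1, lam2 = 40.0, 30.0      # firing rates (Hz) under the two alternatives (assumed)
        dt, theta = 0.001, 3.0       # time step (s) and decision threshold on the log-LR (assumed)

        true_rate = lam1             # evidence is generated under alternative 1
        llr, t = 0.0, 0.0
        while abs(llr) < theta:
            spike = rng.random() < true_rate * dt          # Bernoulli approximation of a Poisson spike
            llr += spike * np.log(lam1 / lam2) - (lam1 - lam2) * dt
            t += dt

        choice = 1 if llr >= theta else 2
        print(f"decided for alternative {choice} after {t * 1000:.0f} ms")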

  5. In the linear quadratic model, the Poisson approximation and the Zaider-Minerbo formula agree on the ranking of tumor control probabilities, up to a critical cell birth rate.

    PubMed

    Ballhausen, Hendrik; Belka, Claus

    2017-03-01

    To provide a rule for the agreement or disagreement of the Poisson approximation (PA) and the Zaider-Minerbo formula (ZM) on the ranking of treatment alternatives in terms of tumor control probability (TCP) in the linear quadratic model. A general criterion involving a critical cell birth rate was formally derived. For demonstration, the criterion was applied to a distinct radiobiological model of fast growing head and neck tumors and a respective range of 22 conventional and nonconventional head and neck schedules. There is a critical cell birth rate b_crit below which PA and ZM agree on which one of two alternative treatment schemes with single-cell survival curves S'(t) and S''(t) offers the better TCP: [Formula: see text] PA and ZM can disagree if and only if b > b_crit > 0. In the case of the exemplary head and neck schedules, PA and ZM disagreed for only 16 of the 231 possible combinations (7%). In all 231 cases the prediction of the criterion was numerically confirmed, and the cell birth rates at crossovers between schedules matched the calculated critical cell birth rates. TCP values estimated by PA and ZM almost never coincide numerically. Still, in many cases both formulas at least agree on which of two alternative fractionation schemes offers the better TCP. For fast growing tumors with a high cell birth rate, however, ZM may suggest a re-evaluation of treatment options.
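
    For orientation only, a sketch of the standard Poisson-approximation TCP under the linear quadratic model, TCP_PA = exp(−N·S), with N clonogens and per-fraction survival exp(−αd − βd²). The numbers are illustrative, and neither the Zaider-Minerbo expression nor the paper's critical-birth-rate criterion is reproduced here.

        import numpy as np

        alpha, beta = 0.3, 0.03      # LQ parameters in 1/Gy and 1/Gy^2 (illustrative)
        N0 = 1e7                     # initial number of clonogenic cells (illustrative)

        def tcp_poisson(n_fractions, dose_per_fraction):
            """Poisson-approximation TCP under the LQ model, ignoring repopulation."""
            surviving_fraction = np.exp(-n_fractions * (alpha * dose_per_fraction
                                                        + beta * dose_per_fraction ** 2))
            return np.exp(-N0 * surviving_fraction)

        for n, d in [(30, 2.0), (20, 2.75), (10, 4.0)]:   # a few schedules with similar total dose
            print(f"{n} x {d} Gy : TCP_PA = {tcp_poisson(n, d):.3f}")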

  6. Determination of the Temperature Dependence of Heat Capacity for Some Molecular Crystals of Nitro Compounds

    NASA Astrophysics Data System (ADS)

    Kovalev, Yu. M.; Kuropatenko, V. F.

    2018-05-01

    An analysis of the existing approximations used to describe the temperature dependence of the constant-volume heat capacity of a molecular crystal has been carried out. It is shown that the Debye and Einstein approximations considered do not adequately describe the temperature dependence of the constant-volume heat capacity of the molecular crystals of nitro compounds. This finding calls for the development of special approximations that describe both the low-frequency and the high-frequency parts of the vibrational spectra of molecular crystals. This work presents a universal relation that describes the constant-volume heat capacity as a function of temperature for a number of molecular crystals of nitro compounds.
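
    The Debye and Einstein approximations mentioned above have simple closed forms; a short sketch comparing them follows, with characteristic temperatures chosen as arbitrary placeholders rather than fitted values for any nitro compound.

        import numpy as np
        from scipy.integrate import quad

        R = 8.314  # gas constant, J/(mol K)

        def c_v_einstein(T, theta_E):
            x = theta_E / T
            return 3.0 * R * x**2 * np.exp(x) / (np.exp(x) - 1.0) ** 2

        def c_v_debye(T, theta_D):
            x_max = theta_D / T
            integral, _ = quad(lambda x: x**4 * np.exp(x) / (np.exp(x) - 1.0) ** 2, 0.0, x_max)
            return 9.0 * R * (T / theta_D) ** 3 * integral

        for T in (50.0, 150.0, 300.0):
            print(f"T={T:5.0f} K  Einstein={c_v_einstein(T, 150.0):6.2f}  "
                  f"Debye={c_v_debye(T, 200.0):6.2f}  (J/(mol K), per mole of atoms)")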

  7. [Study on solubility of Chinese herbal compound by solubility parameter].

    PubMed

    Wu, Dezhi; Chen, Lihua; Wang, Sen; Zhu, Weifeng; Guan, Yongmei

    2010-02-01

    To demonstrate the solubility of a Chinese herbal compound with solubility parameters. The solubility parameters of the Liangfu effective components and the Liangfu compound were determined by inverse gas chromatography (IGC) and the group contribution method. The Hansen solubility sphere was plotted with HSPiP and used to investigate the solubility of the Liangfu effective components and the Liangfu compound in different solvents. The results were verified by approximate solubility tests. The Liangfu effective components and the Liangfu compound dissolved in chloroform, ethyl acetate, acetone, octanol and ether, were slightly soluble in glycerol, methanol, ethanol and propanediol, and did not dissolve in water. They are all liposoluble, and the results agreed with the approximate solubility tests. The solubility of a Chinese herbal compound can be expressed by solubility parameters, and the approach is accurate, convenient and visual.

  8. Systematic design of 3D auxetic lattice materials with programmable Poisson's ratio for finite strains

    NASA Astrophysics Data System (ADS)

    Wang, Fengwen

    2018-05-01

    This paper presents a systematic approach for designing 3D auxetic lattice materials, which exhibit constant negative Poisson's ratios over large strain intervals. A unit cell model mimicking tensile tests is established and based on the proposed model, the secant Poisson's ratio is defined as the negative ratio between the lateral and the longitudinal engineering strains. The optimization problem for designing a material unit cell with a target Poisson's ratio is formulated to minimize the average lateral engineering stresses under the prescribed deformations. Numerical results demonstrate that 3D auxetic lattice materials with constant Poisson's ratios can be achieved by the proposed optimization formulation and that two sets of material architectures are obtained by imposing different symmetry on the unit cell. Moreover, inspired by the topology-optimized material architecture, a subsequent shape optimization is proposed by parametrizing material architectures using super-ellipsoids. By designing two geometrical parameters, simple optimized material microstructures with different target Poisson's ratios are obtained. By interpolating these two parameters as polynomial functions of Poisson's ratios, material architectures for any Poisson's ratio in the interval of ν ∈ [ - 0.78 , 0.00 ] are explicitly presented. Numerical evaluations show that interpolated auxetic lattice materials exhibit constant Poisson's ratios in the target strain interval of [0.00, 0.20] and that 3D auxetic lattice material architectures with programmable Poisson's ratio are achievable.

  9. Method of making thermally removable epoxies

    DOEpatents

    Loy, Douglas A.; Wheeler, David R.; Russick, Edward M.; McElhanon, James R.; Saunders, Randall S.

    2002-01-01

    A method of making a thermally-removable epoxy by mixing a bis(maleimide) compound with a monomeric furan compound containing an oxirane group to form a di-epoxy mixture and then adding a curing agent at temperatures from approximately room temperature to less than approximately 90 °C to form a thermally-removable epoxy. The thermally-removable epoxy can be easily removed within approximately an hour by heating to temperatures greater than approximately 90 °C in a polar solvent. The epoxy material can be used to protect electronic components that may require subsequent removal of the solid material for component repair, modification or quality control.

  10. Unimodularity criteria for Poisson structures on foliated manifolds

    NASA Astrophysics Data System (ADS)

    Pedroza, Andrés; Velasco-Barreras, Eduardo; Vorobiev, Yury

    2018-03-01

    We study the behavior of the modular class of an orientable Poisson manifold and formulate some unimodularity criteria in the semilocal context, around a (singular) symplectic leaf. Our results generalize some known unimodularity criteria for regular Poisson manifolds related to the notion of the Reeb class. In particular, we show that the unimodularity of the transverse Poisson structure of the leaf is a necessary condition for the semilocal unimodular property. Our main tool is an explicit formula for a bigraded decomposition of modular vector fields of a coupling Poisson structure on a foliated manifold. Moreover, we also exploit the notion of the modular class of a Poisson foliation and its relationship with the Reeb class.

  11. Ensemble docking to difficult targets in early-stage drug discovery: Methodology and application to fibroblast growth factor 23.

    PubMed

    Velazquez, Hector A; Riccardi, Demian; Xiao, Zhousheng; Quarles, Leigh Darryl; Yates, Charless Ryan; Baudry, Jerome; Smith, Jeremy C

    2018-02-01

    Ensemble docking is now commonly used in early-stage in silico drug discovery and can be used to attack difficult problems such as finding lead compounds which can disrupt protein-protein interactions. We give an example of this methodology here, as applied to fibroblast growth factor 23 (FGF23), a protein hormone that is responsible for regulating phosphate homeostasis. The first small-molecule antagonists of FGF23 were recently discovered by combining ensemble docking with extensive experimental target validation data (Science Signaling, 9, 2016, ra113). Here, we provide a detailed account of how ensemble-based high-throughput virtual screening was used to identify the antagonist compounds discovered in reference (Science Signaling, 9, 2016, ra113). Moreover, we perform further calculations, redocking those antagonist compounds identified in reference (Science Signaling, 9, 2016, ra113) that performed well on drug-likeness filters, to predict possible binding regions. These predicted binding modes are rescored with the molecular mechanics Poisson-Boltzmann surface area (MM/PBSA) approach to calculate the most likely binding site. Our findings suggest that the antagonist compounds antagonize FGF23 through the disruption of protein-protein interactions between FGF23 and fibroblast growth factor receptor (FGFR). © 2017 John Wiley & Sons A/S.

  12. Cyclocurcumin, a curcumin derivative, exhibits immune-modulating ability and is a potential compound for the treatment of rheumatoid arthritis as predicted by the MM-PBSA method.

    PubMed

    Fu, Min; Chen, Lihui; Zhang, Limin; Yu, Xiao; Yang, Qingrui

    2017-05-01

    The control and treatment of rheumatoid arthritis is a challenge in today's world. Therefore, the pursuit of natural disease-modifying antirheumatic drugs (DMARDs) remains a top priority in rheumatology. The present study focused on curcumin and its derivatives in the search for new DMARDs. We focused on the prominent p38 mitogen-activated protein (MAP) kinase p38α, which is a prime regulator of tumor necrosis factor-α (TNF-α), a key mediator of rheumatoid arthritis. In the present study, we used the X-ray crystallographic structure of p38α for molecular docking simulations and molecular dynamics simulations to study the binding modes of curcumin and its derivatives with the active site of p38α. The ATP-binding domain was used for evaluating curcumin and its derivatives. Molecular docking simulation results were used to select 4 out of 8 compounds. These 4 compounds were simulated using the GROMACS molecular simulation platform, and the resulting trajectories were subjected to molecular mechanics-Poisson Boltzmann surface area (MM-PBSA) calculations. The results identified cyclocurcumin as a potential natural compound for the development of a potent DMARD. These data were further supported by inhibition of TNF-α release from lipopolysaccharide (LPS)-stimulated human macrophages following cyclocurcumin treatment.

  13. Thermoelectric Materials

    NASA Astrophysics Data System (ADS)

    Gao, Peng; Berkun, Isil; Schmidt, Robert D.; Luzenski, Matthew F.; Lu, Xu; Bordon Sarac, Patricia; Case, Eldon D.; Hogan, Timothy P.

    2014-06-01

    Mg2(Si,Sn) compounds are promising candidate low-cost, lightweight, nontoxic thermoelectric materials made from abundant elements and are suited for power generation applications in the intermediate temperature range of 600 K to 800 K. Knowledge of the transport and mechanical properties of Mg2(Si,Sn) compounds is essential to the design of Mg2(Si,Sn)-based thermoelectric devices. In this work, such materials were synthesized using the molten-salt sealing method and were powder processed, followed by pulsed electric sintering densification. A set of Mg2.08Si0.4-xSn0.6Sbx (0 ≤ x ≤ 0.072) compounds was investigated, and a peak ZT of 1.50 was obtained at 716 K in Mg2.08Si0.364Sn0.6Sb0.036. The high ZT is attributed to a high electrical conductivity in these samples, possibly caused by a magnesium deficiency in the final product. The mechanical response of the material to stresses is a function of the elastic moduli. The temperature-dependent Young's modulus, shear modulus, bulk modulus, Poisson's ratio, acoustic wave speeds, and acoustic Debye temperature of the undoped Mg2(Si,Sn) compounds were measured using resonant ultrasound spectroscopy from 295 K to 603 K. In addition, the hardness and fracture toughness were measured at room temperature.

  14. Stationary and non-stationary occurrences of miniature end plate potentials are well described as stationary and non-stationary Poisson processes in the mollusc Navanax inermis.

    PubMed

    Cappell, M S; Spray, D C; Bennett, M V

    1988-06-28

    Protractor muscles in the gastropod mollusc Navanax inermis exhibit typical spontaneous miniature end plate potentials with mean amplitude 1.71 +/- 1.19 (standard deviation) mV. The evoked end plate potential is quantized, with a quantum equal to the miniature end plate potential amplitude. When their rate is stationary, occurrence of miniature end plate potentials is a random, Poisson process. When non-stationary, spontaneous miniature end plate potential occurrence is a non-stationary Poisson process, a Poisson process with the mean frequency changing with time. This extends the random Poisson model for miniature end plate potentials to the frequently observed non-stationary occurrence. Reported deviations from a Poisson process can sometimes be accounted for by the non-stationary Poisson process and more complex models, such as clustered release, are not always needed.

  15. A test of inflated zeros for Poisson regression models.

    PubMed

    He, Hua; Zhang, Hui; Ye, Peng; Tang, Wan

    2017-01-01

    Excessive zeros are common in practice and may cause overdispersion and invalidate inference when fitting Poisson regression models. There is a large body of literature on zero-inflated Poisson models. However, methods for testing whether there are excessive zeros are less well developed. The Vuong test comparing a Poisson and a zero-inflated Poisson model is commonly applied in practice. However, the type I error of the test often deviates seriously from the nominal level, casting serious doubt on the validity of the test in such applications. In this paper, we develop a new approach for testing inflated zeros under the Poisson model. Unlike the Vuong test for inflated zeros, our method does not require fitting a zero-inflated Poisson model to perform the test. Simulation studies show that, compared with the Vuong test, our approach is not only better at controlling the type I error rate but also yields more power.

  16. Calculation of the Poisson cumulative distribution function

    NASA Technical Reports Server (NTRS)

    Bowerman, Paul N.; Nolty, Robert G.; Scheuer, Ernest M.

    1990-01-01

    A method for calculating the Poisson cdf (cumulative distribution function) is presented. The method avoids computer underflow and overflow during the process. The computer program uses this technique to calculate the Poisson cdf for arbitrary inputs. An algorithm that determines the Poisson parameter required to yield a specified value of the cdf is presented.
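
    A minimal sketch of the same idea, assuming the underflow/overflow is avoided by working with log-probabilities and a log-sum-exp accumulation (this is not the authors' program, only one common way to implement it), together with a bisection search for the Poisson parameter that yields a specified cdf value.

        import math

        def poisson_cdf(k, lam):
            """P(X <= k) for X ~ Poisson(lam), accumulated in log space to avoid under/overflow."""
            log_terms = [i * math.log(lam) - lam - math.lgamma(i + 1) for i in range(k + 1)]
            m = max(log_terms)
            return math.exp(m) * sum(math.exp(t - m) for t in log_terms)

        def lambda_for_cdf(k, target, lo=1e-9, hi=1e6, tol=1e-10):
            """Find lam such that P(X <= k) = target; the cdf is decreasing in lam."""
            while hi - lo > tol * max(1.0, hi):
                mid = 0.5 * (lo + hi)
                if poisson_cdf(k, mid) > target:
                    lo = mid
                else:
                    hi = mid
            return 0.5 * (lo + hi)

        print(poisson_cdf(1000, 1000.0))     # ~0.5 even for large means, no overflow
        print(lambda_for_cdf(10, 0.95))      # parameter giving P(X <= 10) = 0.95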

  17. Poisson's Ratio of a Hyperelastic Foam Under Quasi-static and Dynamic Loading

    DOE PAGES

    Sanborn, Brett; Song, Bo

    2018-06-03

    Poisson's ratio is a material constant representing compressibility of material volume. However, when soft, hyperelastic materials such as silicone foam are subjected to large deformation into densification, the Poisson's ratio may rather significantly change, which warrants careful consideration in modeling and simulation of impact/shock mitigation scenarios where foams are used as isolators. The evolution of Poisson's ratio of silicone foam materials has not yet been characterized, particularly under dynamic loading. In this study, radial and axial measurements of specimen strain are conducted simultaneously during quasi-static and dynamic compression tests to determine the Poisson's ratio of silicone foam. The Poisson's ratio of silicone foam exhibited a transition from compressible to nearly incompressible at a threshold strain that coincided with the onset of densification in the material. Poisson's ratio as a function of engineering strain was different at quasi-static and dynamic rates. Here, the Poisson's ratio behavior is presented and can be used to improve constitutive modeling of silicone foams subjected to a broad range of mechanical loading.

  18. Poisson's Ratio of a Hyperelastic Foam Under Quasi-static and Dynamic Loading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanborn, Brett; Song, Bo

    Poisson's ratio is a material constant representing compressibility of material volume. However, when soft, hyperelastic materials such as silicone foam are subjected to large deformation into densification, the Poisson's ratio may rather significantly change, which warrants careful consideration in modeling and simulation of impact/shock mitigation scenarios where foams are used as isolators. The evolution of Poisson's ratio of silicone foam materials has not yet been characterized, particularly under dynamic loading. In this study, radial and axial measurements of specimen strain are conducted simultaneously during quasi-static and dynamic compression tests to determine the Poisson's ratio of silicone foam. The Poisson's ratio of silicone foam exhibited a transition from compressible to nearly incompressible at a threshold strain that coincided with the onset of densification in the material. Poisson's ratio as a function of engineering strain was different at quasi-static and dynamic rates. Here, the Poisson's ratio behavior is presented and can be used to improve constitutive modeling of silicone foams subjected to a broad range of mechanical loading.

  19. Application of the Hyper-Poisson Generalized Linear Model for Analyzing Motor Vehicle Crashes.

    PubMed

    Khazraee, S Hadi; Sáez-Castillo, Antonio Jose; Geedipally, Srinivas Reddy; Lord, Dominique

    2015-05-01

    The hyper-Poisson distribution can handle both over- and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation-specific and dependent on model covariates. This study's objective is to examine the potential applicability of a newly proposed generalized linear model framework for the hyper-Poisson distribution in analyzing motor vehicle crash count data. The hyper-Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway-highway crossings in Korea, characterized by underdispersion. The results of this study are promising. When fitted to the Toronto data set, the goodness-of-fit measures indicated that the hyper-Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper-Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway-Maxwell-Poisson model previously developed for the same data set. The advantages of the hyper-Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulties in handling underdispersed data, the hyper-Poisson model can handle both over- and underdispersed crash data. Although not a major issue for the Conway-Maxwell-Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the case of this new model. © 2014 Society for Risk Analysis.
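
    For reference, a sketch of the hyper-Poisson probability mass function in the Bardwell-Crow parametrization, P(Y = y) = λ^y / [(γ)_y · 1F1(1; γ; λ)], where (γ)_y is the rising factorial; the ordinary Poisson is recovered at γ = 1 and other values of γ give over- or underdispersion. This is the distribution only, not the paper's GLM link or fitting procedure, and the parameter values are arbitrary.

        import numpy as np
        from scipy.special import hyp1f1, poch

        def hyper_poisson_pmf(y, lam, gamma):
            """Hyper-Poisson pmf: lam**y / ((gamma)_y * 1F1(1; gamma; lam))."""
            return lam ** y / (poch(gamma, y) * hyp1f1(1.0, gamma, lam))

        y = np.arange(0, 15)
        for g in (0.5, 1.0, 2.0):           # gamma = 1 recovers the ordinary Poisson
            p = hyper_poisson_pmf(y, 3.0, g)
            mean = np.sum(y * p) / np.sum(p)                 # truncated moments, for illustration
            var = np.sum((y - mean) ** 2 * p) / np.sum(p)
            print(f"gamma={g:3.1f}  mean={mean:.2f}  var={var:.2f}")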

  20. Reference analysis of the signal + background model in counting experiments II. Approximate reference prior

    NASA Astrophysics Data System (ADS)

    Casadei, D.

    2014-10-01

    The objective Bayesian treatment of a model representing two independent Poisson processes, labelled as ``signal'' and ``background'' and both contributing additively to the total number of counted events, is considered. It is shown that the reference prior for the parameter of interest (the signal intensity) can be well approximated by the widely (ab)used flat prior only when the expected background is very high. On the other hand, a very simple approximation (the limiting form of the reference prior for perfect prior background knowledge) can be safely used over a large portion of the background parameters space. The resulting approximate reference posterior is a Gamma density whose parameters are related to the observed counts. This limiting form is simpler than the result obtained with a flat prior, with the additional advantage of representing a much closer approximation to the reference posterior in all cases. Hence such limiting prior should be considered a better default or conventional prior than the uniform prior. On the computing side, it is shown that a 2-parameter fitting function is able to reproduce extremely well the reference prior for any background prior. Thus, it can be useful in applications requiring the evaluation of the reference prior for a very large number of times.

  1. Accelerating electrostatic surface potential calculation with multi-scale approximation on graphics processing units.

    PubMed

    Anandakrishnan, Ramu; Scogland, Tom R W; Fenley, Andrew T; Gordon, John C; Feng, Wu-chun; Onufriev, Alexey V

    2010-06-01

    Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. Two commonly used techniques to speed-up these types of electrostatic computations are approximations based on multi-scale coarse-graining and parallelization across multiple processors. This paper demonstrates that for the computation of electrostatic surface potential, these two techniques can be combined to deliver significantly greater speed-up than either one separately, something that is in general not always possible. Specifically, the electrostatic potential computation, using an analytical linearized Poisson-Boltzmann (ALPB) method, is approximated using the hierarchical charge partitioning (HCP) multi-scale method, and parallelized on an ATI Radeon 4870 graphical processing unit (GPU). The implementation delivers a combined 934-fold speed-up for a 476,040 atom viral capsid, compared to an equivalent non-parallel implementation on an Intel E6550 CPU without the approximation. This speed-up is significantly greater than the 42-fold speed-up for the HCP approximation alone or the 182-fold speed-up for the GPU alone. Copyright (c) 2010 Elsevier Inc. All rights reserved.

  2. Performance of the strongly constrained and appropriately normed density functional for solid-state materials

    DOE PAGES

    Isaacs, Eric B.; Wolverton, Chris

    2018-06-22

    Constructed to satisfy 17 known exact constraints for a semilocal density functional, the strongly constrained and appropriately normed (SCAN) meta-generalized-gradient-approximation functional has shown early promise for accurately describing the electronic structure of molecules and solids. One open question is how well SCAN predicts the formation energy, a key quantity for describing the thermodynamic stability of solid-state compounds. To answer this question, we perform an extensive benchmark of SCAN by computing the formation energies for a diverse group of nearly 1000 crystalline compounds for which experimental values are known. Due to an enhanced exchange interaction in the covalent bonding regime, SCAN substantially decreases the formation energy errors for strongly bound compounds, by approximately 50%, to 110 meV/atom, as compared to the generalized gradient approximation of Perdew, Burke, and Ernzerhof (PBE). However, for intermetallic compounds, SCAN performs moderately worse than PBE with an increase in formation energy error of approximately 20%, stemming from SCAN's distinct behavior in the weak bonding regime. The formation energy errors can be further reduced via elemental chemical potential fitting. We find that SCAN leads to significantly more accurate predicted crystal volumes, moderately enhanced magnetism, and mildly improved band gaps as compared to PBE. Altogether, SCAN represents a significant improvement in accurately describing the thermodynamics of strongly bound compounds.

  3. Performance of the strongly constrained and appropriately normed density functional for solid-state materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Isaacs, Eric B.; Wolverton, Chris

    Constructed to satisfy 17 known exact constraints for a semilocal density functional, the strongly constrained and appropriately normed (SCAN) meta-generalized-gradient-approximation functional has shown early promise for accurately describing the electronic structure of molecules and solids. One open question is how well SCAN predicts the formation energy, a key quantity for describing the thermodynamic stability of solid-state compounds. To answer this question, we perform an extensive benchmark of SCAN by computing the formation energies for a diverse group of nearly 1000 crystalline compounds for which experimental values are known. Due to an enhanced exchange interaction in the covalent bonding regime, SCAN substantially decreases the formation energy errors for strongly bound compounds, by approximately 50%, to 110 meV/atom, as compared to the generalized gradient approximation of Perdew, Burke, and Ernzerhof (PBE). However, for intermetallic compounds, SCAN performs moderately worse than PBE with an increase in formation energy error of approximately 20%, stemming from SCAN's distinct behavior in the weak bonding regime. The formation energy errors can be further reduced via elemental chemical potential fitting. We find that SCAN leads to significantly more accurate predicted crystal volumes, moderately enhanced magnetism, and mildly improved band gaps as compared to PBE. Altogether, SCAN represents a significant improvement in accurately describing the thermodynamics of strongly bound compounds.

  4. Characterization of Ice for Return-to-Flight of the Space Shuttle. Part 2; Soft Ice

    NASA Technical Reports Server (NTRS)

    Schulson, Erland M.; Iliescu, Daniel

    2005-01-01

    In support of characterizing ice debris for return-to-flight (RTF) of NASA's space shuttle, we have determined the microstructure, density and compressive strength (at -10 °C at a strain rate of approximately 0.3 per second) of porous or soft ice that was produced from both atmospheric water and consolidated snow. The study showed that the atmospheric material was generally composed of a mixture of very fine (0.1 to 0.3 millimeters) and coarser (5 to 10 millimeter) grains, plus air bubbles distributed preferentially within the more finely-grained part of the microstructure. The snow ice was composed of even finer grains (approximately 0.05 millimeters) and contained more pores. Correspondingly, the snow ice was of lower density than the atmospheric ice, and both materials were significantly less dense than hard ice. The atmospheric ice was stronger (approximately 3.8 MPa) than the snow ice (approximately 1.9 MPa), but weaker by a factor of 2 to 5 than pore-free hard ice deformed under the same conditions. Values are given for Young's modulus, compressive strength and Poisson's ratio that can be used for modeling soft ice from the external tank (ET).

  5. A Martingale Characterization of Mixed Poisson Processes.

    DTIC Science & Technology

    1985-10-01

    Pfeifer, Dietmar (Technical University Aachen). A martingale characterization of mixed Poisson processes. Mixed Poisson processes play an important role in many branches of applied probability, for instance in insurance mathematics and physics (see Albrecht ...).

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hwang, Jai-chan; Noh, Hyerim

    Special relativistic hydrodynamics with weak gravity has hitherto been unknown in the literature. Whether such an asymmetric combination is possible has been unclear. Here, the hydrodynamic equations with Poisson-type gravity, considering fully relativistic velocity and pressure under the weak gravity and the action-at-a-distance limit, are consistently derived from Einstein’s theory of general relativity. An analysis is made in the maximal slicing, where the Poisson equation becomes much simpler than in our previous study in the zero-shear gauge. Also presented are the hydrodynamic equations in the first post-Newtonian approximation, now under a general hypersurface condition. Our formulation includes the anisotropic stress.

  7. A general dead-time correction method based on live-time stamping. Application to the measurement of short-lived radionuclides.

    PubMed

    Chauvenet, B; Bobin, C; Bouchard, J

    2017-12-01

    Dead-time correction formulae are established in the general case of superimposed non-homogeneous Poisson processes. Based on the same principles as conventional live-timed counting, this method exploits the additional information made available using digital signal processing systems, and especially the possibility to store the time stamps of live-time intervals. No approximation needs to be made to obtain those formulae. Estimates of the variances of corrected rates are also presented. This method is applied to the activity measurement of short-lived radionuclides. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. A three-ions model of electrodiffusion kinetics in a nanochannel

    NASA Astrophysics Data System (ADS)

    Sebechlebská, Táňa; Neogrády, Pavel; Valent, Ivan

    2016-10-01

    Nanoscale electrodiffusion transport is involved in many electrochemical, technological and biological processes. Developments in computer power and numerical algorithms allow for solving full time-dependent Nernst-Planck and Poisson equations without simplifying approximations. We simulate spatio-temporal profiles of concentration and electric potential changes after a potential jump in a 10 nm channel with two cations (with opposite concentration gradients and different mobilities) and one anion (of uniform concentration). The temporal dynamics shows three exponential phases and damped oscillations of the electric potential. Despite the absence of surface charges in the studied model, an asymmetric current-voltage characteristic was observed.

  9. Deformation mechanisms in negative Poisson's ratio materials - Structural aspects

    NASA Technical Reports Server (NTRS)

    Lakes, R.

    1991-01-01

    Poisson's ratio in materials is governed by the following aspects of the microstructure: the presence of rotational degrees of freedom, non-affine deformation kinematics, or anisotropic structure. Several structural models are examined. The non-affine kinematics are seen to be essential for the production of negative Poisson's ratios for isotropic materials containing central force linkages of positive stiffness. Non-central forces combined with pre-load can also give rise to a negative Poisson's ratio in isotropic materials. A chiral microstructure with non-central force interaction or non-affine deformation can also exhibit a negative Poisson's ratio. Toughness and damage resistance in these materials may be affected by the Poisson's ratio itself, as well as by generalized continuum aspects associated with the microstructure.

  10. Exact solution for the Poisson field in a semi-infinite strip.

    PubMed

    Cohen, Yossi; Rothman, Daniel H

    2017-04-01

    The Poisson equation is associated with many physical processes. Yet exact analytic solutions for the two-dimensional Poisson field are scarce. Here we derive an analytic solution for the Poisson equation with constant forcing in a semi-infinite strip. We provide a method that can be used to solve the field in other intricate geometries. We show that the Poisson flux reveals an inverse square-root singularity at a tip of a slit, and identify a characteristic length scale in which a small perturbation, in a form of a new slit, is screened by the field. We suggest that this length scale expresses itself as a characteristic spacing between tips in real Poisson networks that grow in response to fluxes at tips.
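
    A small numerical counterpart, assuming constant forcing and homogeneous Dirichlet boundaries on a truncated strip (a five-point finite-difference Jacobi sketch, not the analytic solution derived in the paper). Away from the ends, the solution should approach the one-dimensional parabolic profile y(1 - y)/2.

        import numpy as np

        # solve -(u_xx + u_yy) = 1 on a truncated strip [0, Lx] x [0, 1], with u = 0 on the boundary
        Lx, nx, ny = 6.0, 121, 21
        hx, hy = Lx / (nx - 1), 1.0 / (ny - 1)
        u = np.zeros((nx, ny))

        # Jacobi-style sweeps of the five-point Laplacian (slow but transparent)
        for _ in range(5000):
            u[1:-1, 1:-1] = ((u[2:, 1:-1] + u[:-2, 1:-1]) / hx**2 +
                             (u[1:-1, 2:] + u[1:-1, :-2]) / hy**2 + 1.0) / (2.0 / hx**2 + 2.0 / hy**2)

        # far from the strip's ends the solution approaches y(1 - y)/2
        y = np.linspace(0.0, 1.0, ny)
        print(np.max(np.abs(u[nx // 2, :] - y * (1.0 - y) / 2.0)))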

  11. Studying the time trend of Methicillin-resistant Staphylococcus aureus (MRSA) in Norway by use of non-stationary γ-Poisson distributions.

    PubMed

    Moxnes, John F; Moen, Aina E Fossum; Leegaard, Truls Michael

    2015-10-05

    To study the time development of methicillin-resistant Staphylococcus aureus (MRSA) and forecast future behaviour. The major question: is the number of MRSA isolates in Norway increasing, and will it continue to increase? Time trend analysis using non-stationary γ-Poisson distributions. Two data sets were analysed. The first data set (data set I) consists of all MRSA isolates collected in Oslo County from 1997 to 2010; the study area includes the Norwegian capital of Oslo and nearby surrounding areas, covering approximately 11% of the Norwegian population. The second data set (data set II) consists of all MRSA isolates collected in Health Region East from 2002 to 2011. Health Region East consists of Oslo County and four neighbouring counties, and is the most populated area of Norway. Both data sets I and II consist of all persons in the area and time period described in the Settings from whom MRSA has been isolated. MRSA infections have been mandatorily notifiable in Norway since 1995, and MRSA colonisation since 2004. In the time period studied, all bacterial samples in Norway have been sent to a medical microbiological laboratory at the regional hospital for testing. In collaboration with the regional hospitals in five counties, we have collected all MRSA findings in the south-eastern part of Norway over long time periods. On average, a linear or exponential increase in MRSA numbers was observed in the data sets. A Poisson process with increasing intensity did not capture the dispersion of the time series, but a γ-Poisson process showed good agreement and captured the overdispersion. The numerical model was internally consistent. In the present study, we find that the number of MRSA isolates is increasing in the most populated area of Norway during the time period studied. We also forecast a continued increase until the year 2017. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
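
    A small sketch of the idea behind a non-stationary γ-Poisson (negative binomial) process: yearly counts are Poisson conditional on a Gamma-distributed intensity whose mean increases with time, producing the overdispersion that a plain Poisson process with increasing intensity cannot capture. All parameter values here are arbitrary, not estimates from the Norwegian data.

        import numpy as np

        rng = np.random.default_rng(3)
        years = np.arange(1997, 2011)
        mu = 20.0 * np.exp(0.15 * (years - years[0]))   # increasing mean intensity (assumed)
        shape = 5.0                                     # Gamma shape; smaller -> more overdispersion

        # gamma-Poisson: lambda_t ~ Gamma(shape, scale=mu_t/shape), N_t | lambda_t ~ Poisson(lambda_t)
        lam_t = rng.gamma(shape, mu / shape)
        counts = rng.poisson(lam_t)

        print("year  mean  simulated count")
        for yr, m, c in zip(years, mu, counts):
            print(f"{yr}  {m:6.1f}  {c}")
        # marginally each count is negative binomial, with mean mu_t and variance mu_t + mu_t**2/shape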

  12. Fuzzy classifier based support vector regression framework for Poisson ratio determination

    NASA Astrophysics Data System (ADS)

    Asoodeh, Mojtaba; Bagheripour, Parisa

    2013-09-01

    The Poisson ratio is considered one of the most important rock mechanical properties of hydrocarbon reservoirs. Determination of this parameter through laboratory measurement is time, cost, and labor intensive. Furthermore, laboratory measurements do not provide continuous data along the reservoir intervals. Hence, a fast, accurate, and inexpensive way of determining the Poisson ratio that produces continuous data over the whole reservoir interval is desirable. For this purpose, the support vector regression (SVR) method based on statistical learning theory (SLT) was employed as a supervised learning algorithm to estimate the Poisson ratio from conventional well log data. SVR is capable of accurately extracting the implicit knowledge contained in conventional well logs and converting the gained knowledge into Poisson ratio data. The structural risk minimization (SRM) principle, which is embedded in the SVR structure in addition to the empirical risk minimization (ERM) principle, provides a robust model for finding a quantitative formulation between conventional well log data and the Poisson ratio. Although satisfactory results were obtained from an individual SVR model, it tended to overestimate low Poisson ratios and underestimate high ones. These errors were eliminated through the implementation of a fuzzy classifier based SVR (FCBSVR), which significantly improved the accuracy of the final prediction. The strategy was successfully applied to data from carbonate reservoir rocks of an Iranian oil field. The results indicate that SVR-predicted Poisson ratio values are in good agreement with measured values.
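
    A minimal regression sketch in the same spirit, using scikit-learn's SVR on synthetic stand-ins for conventional well-log features. The features, data, and hyperparameters are assumptions, and the paper's fuzzy-classifier stage is not reproduced.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(4)
        n = 500
        X = rng.normal(size=(n, 3))          # synthetic stand-ins for three conventional logs
        poisson_ratio = (0.25 + 0.05 * X[:, 0] - 0.03 * X[:, 1] + 0.02 * np.tanh(X[:, 2])
                         + rng.normal(scale=0.01, size=n))

        X_tr, X_te, y_tr, y_te = train_test_split(X, poisson_ratio, random_state=0)
        model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.005))
        model.fit(X_tr, y_tr)
        print("R^2 on held-out samples:", round(model.score(X_te, y_te), 3))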

  13. Biofluidic Transport and Molecular Recognition in Polymer Microdevices

    DTIC Science & Technology

    2005-04-29

    A flexible membrane separates the particles and the reservoir. Using photopolymerizable wires, an electrolysis pump was fabricated on a microdevice. Antigen detection was accomplished by grafting the appropriate antibody or sensing compound to the surface via acrylation and polymerization (Figure 14). ... were detected with assay times of approximately 10 minutes. Figure 15 shows detection data for a compound (glucagon) that is impossible to detect by ...

  14. Performance of the modified Poisson regression approach for estimating relative risks from clustered prospective data.

    PubMed

    Yelland, Lisa N; Salter, Amy B; Ryan, Philip

    2011-10-15

    Modified Poisson regression, which combines a log Poisson regression model with robust variance estimation, is a useful alternative to log binomial regression for estimating relative risks. Previous studies have shown both analytically and by simulation that modified Poisson regression is appropriate for independent prospective data. This method is often applied to clustered prospective data, despite a lack of evidence to support its use in this setting. The purpose of this article is to evaluate the performance of the modified Poisson regression approach for estimating relative risks from clustered prospective data, by using generalized estimating equations to account for clustering. A simulation study is conducted to compare log binomial regression and modified Poisson regression for analyzing clustered data from intervention and observational studies. Both methods generally perform well in terms of bias, type I error, and coverage. Unlike log binomial regression, modified Poisson regression is not prone to convergence problems. The methods are contrasted by using example data sets from 2 large studies. The results presented in this article support the use of modified Poisson regression as an alternative to log binomial regression for analyzing clustered prospective data when clustering is taken into account by using generalized estimating equations.
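
    A sketch of the estimator being evaluated, assuming a pandas data frame with a binary outcome y, covariates x1 and x2, and a cluster identifier (the data and column names are placeholders): a log-link Poisson GEE with an exchangeable working correlation, whose robust (sandwich) standard errors give the "modified Poisson" relative-risk estimates for clustered data.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        # synthetic clustered data with a binary outcome (placeholder for a real study data set)
        rng = np.random.default_rng(5)
        n_clusters, m = 100, 10
        cluster = np.repeat(np.arange(n_clusters), m)
        u = np.repeat(rng.normal(scale=0.3, size=n_clusters), m)       # shared cluster effect
        x1 = rng.binomial(1, 0.5, n_clusters * m)
        x2 = rng.normal(size=n_clusters * m)
        p = np.clip(np.exp(-1.2 + 0.4 * x1 + 0.2 * x2 + u), 0.0, 1.0)  # log link keeps risks multiplicative
        y = rng.binomial(1, p)
        df = pd.DataFrame({"y": y, "x1": x1, "x2": x2, "cluster": cluster})

        # modified Poisson regression for clustered data: Poisson GEE with robust variance
        model = smf.gee("y ~ x1 + x2", groups="cluster", data=df,
                        family=sm.families.Poisson(), cov_struct=sm.cov_struct.Exchangeable())
        result = model.fit()
        print(np.exp(result.params))     # exponentiated coefficients = estimated relative risks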

  15. A generalized Poisson and Poisson-Boltzmann solver for electrostatic environments.

    PubMed

    Fisicaro, G; Genovese, L; Andreussi, O; Marzari, N; Goedecker, S

    2016-01-07

    The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, taking into account the non-trivial electrostatic screening coming from the solvent and the electrolytes. As a consequence, the electrostatic potential has to be found by solving the generalized Poisson and the Poisson-Boltzmann equations for neutral and ionic solutions, respectively. In the present work, solvers for both problems have been developed. A preconditioned conjugate gradient method has been implemented for the solution of the generalized Poisson equation and the linear regime of the Poisson-Boltzmann equation, allowing the minimization problem to be solved iteratively with some ten iterations of the ordinary Poisson equation solver. In addition, a self-consistent procedure enables us to solve the non-linear Poisson-Boltzmann problem. Both solvers exhibit very high accuracy and parallel efficiency and allow for the treatment of periodic, free, and slab boundary conditions. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes.
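
    For orientation, a tiny sketch of the kind of solver described: a preconditioned conjugate gradient iteration applied to a finite-difference generalized Poisson problem d/dx(ε dφ/dx) = −ρ in one dimension, with a Jacobi (diagonal) preconditioner. The discretization, preconditioner, and boundary handling are deliberately simplistic and are not those of the BigDFT/Quantum-ESPRESSO solver.

        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import cg, LinearOperator

        n = 400
        h = 1.0 / (n + 1)
        x = np.linspace(h, 1.0 - h, n)
        eps = 1.0 + 79.0 * (x > 0.5)                 # dielectric step (vacuum-like to water-like)
        rho = np.exp(-((x - 0.3) ** 2) / 0.002)      # localized charge density

        # interface permittivities, including the two boundary faces (phi = 0 at both ends)
        eps_iface = np.concatenate(([eps[0]], 0.5 * (eps[:-1] + eps[1:]), [eps[-1]]))
        main = eps_iface[:-1] + eps_iface[1:]
        off = -eps_iface[1:-1]
        A = diags([off, main, off], offsets=[-1, 0, 1], format="csr") / h**2

        diag = A.diagonal()
        M = LinearOperator((n, n), matvec=lambda r: r / diag)   # Jacobi (diagonal) preconditioner
        phi, info = cg(A, rho, M=M)
        print("CG converged:", info == 0, " max potential:", float(phi.max()))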

  16. A Review of Multivariate Distributions for Count Data Derived from the Poisson Distribution.

    PubMed

    Inouye, David; Yang, Eunho; Allen, Genevera; Ravikumar, Pradeep

    2017-01-01

    The Poisson distribution has been widely studied and used for modeling univariate count-valued data. Multivariate generalizations of the Poisson distribution that permit dependencies, however, have been far less popular. Yet, real-world high-dimensional count-valued data found in word counts, genomics, and crime statistics, for example, exhibit rich dependencies, and motivate the need for multivariate distributions that can appropriately model this data. We review multivariate distributions derived from the univariate Poisson, categorizing these models into three main classes: 1) where the marginal distributions are Poisson, 2) where the joint distribution is a mixture of independent multivariate Poisson distributions, and 3) where the node-conditional distributions are derived from the Poisson. We discuss the development of multiple instances of these classes and compare the models in terms of interpretability and theory. Then, we empirically compare multiple models from each class on three real-world datasets that have varying data characteristics from different domains, namely traffic accident data, biological next generation sequencing data, and text data. These empirical experiments develop intuition about the comparative advantages and disadvantages of each class of multivariate distribution that was derived from the Poisson. Finally, we suggest new research directions as explored in the subsequent discussion section.
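
    As a concrete instance of the first class (Poisson marginals with dependence), a sketch of the classical common-shock construction: X1 = Y1 + Y0 and X2 = Y2 + Y0 with independent Poisson components, which gives Poisson marginals and covariance equal to the shared rate λ0. The rates below are illustrative.

        import numpy as np

        rng = np.random.default_rng(6)
        lam1, lam2, lam0 = 3.0, 5.0, 2.0      # individual and shared rates (illustrative)
        n = 200_000

        y0 = rng.poisson(lam0, n)             # common shock shared by both coordinates
        x1 = rng.poisson(lam1, n) + y0
        x2 = rng.poisson(lam2, n) + y0

        print("marginal means:", x1.mean(), x2.mean())     # ~ lam1 + lam0 and lam2 + lam0
        print("marginal vars: ", x1.var(), x2.var())       # equal to the means (Poisson marginals)
        print("covariance:    ", np.cov(x1, x2)[0, 1])     # ~ lam0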

  17. A generalized Poisson and Poisson-Boltzmann solver for electrostatic environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fisicaro, G., E-mail: giuseppe.fisicaro@unibas.ch; Goedecker, S.; Genovese, L.

    2016-01-07

    The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, taking into account the non-trivial electrostatic screening coming from the solvent and the electrolytes. As a consequence, the electrostatic potential has to be found by solving the generalized Poisson and the Poisson-Boltzmann equations for neutral and ionic solutions, respectively. In the present work, solvers for both problems have been developed. A preconditioned conjugate gradient method has been implemented for the solution of the generalized Poisson equation and the linear regime of the Poisson-Boltzmann equation, allowing the minimization problem to be solved iteratively with some ten iterations of the ordinary Poisson equation solver. In addition, a self-consistent procedure enables us to solve the non-linear Poisson-Boltzmann problem. Both solvers exhibit very high accuracy and parallel efficiency and allow for the treatment of periodic, free, and slab boundary conditions. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes.

  18. Poisson Coordinates.

    PubMed

    Li, Xian-Ying; Hu, Shi-Min

    2013-02-01

    Harmonic functions are the critical points of a Dirichlet energy functional, the linear projections of conformal maps. They play an important role in computer graphics, particularly for gradient-domain image processing and shape-preserving geometric computation. We propose Poisson coordinates, a novel transfinite interpolation scheme based on the Poisson integral formula, as a rapid way to estimate a harmonic function on a certain domain with desired boundary values. Poisson coordinates are an extension of the Mean Value coordinates (MVCs) which inherit their linear precision, smoothness, and kernel positivity. We give explicit formulas for Poisson coordinates in both continuous and 2D discrete forms. Superior to MVCs, Poisson coordinates are proved to be pseudoharmonic (i.e., they reproduce harmonic functions on n-dimensional balls). Our experimental results show that Poisson coordinates have lower Dirichlet energies than MVCs on a number of typical 2D domains (particularly convex domains). As well as presenting a formula, our approach provides useful insights for further studies on coordinates-based interpolation and fast estimation of harmonic functions.
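
    A minimal numerical illustration of the underlying Poisson integral formula on the unit disk (the continuous ingredient behind Poisson coordinates; the paper's discrete polygonal construction is not reproduced): u(r, θ) = (1/2π) ∫ (1 − r²) / (1 − 2r cos(θ − φ) + r²) f(φ) dφ reproduces a harmonic function from its boundary values.

        import numpy as np

        def poisson_integral(r, theta, boundary_f, n_quad=2000):
            """Evaluate the Poisson integral of boundary values f(phi) at (r, theta) in the unit disk."""
            phi = np.linspace(0.0, 2.0 * np.pi, n_quad, endpoint=False)
            kernel = (1.0 - r**2) / (1.0 - 2.0 * r * np.cos(theta - phi) + r**2)
            return np.mean(kernel * boundary_f(phi))   # midpoint rule; the 1/(2*pi) is absorbed by mean

        # boundary values of the harmonic function u(x, y) = x^2 - y^2: f(phi) = cos(2*phi)
        f = lambda phi: np.cos(2.0 * phi)
        r, theta = 0.6, 0.8
        exact = (r * np.cos(theta)) ** 2 - (r * np.sin(theta)) ** 2
        print(poisson_integral(r, theta, f), "vs exact", exact)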

  19. Markov model of fatigue of a composite material with the poisson process of defect initiation

    NASA Astrophysics Data System (ADS)

    Paramonov, Yu.; Chatys, R.; Andersons, J.; Kleinhofs, M.

    2012-05-01

    As a development of the model where only one weak microvolume (WMV) and only a pulsating cyclic loading are considered, in the current version of the model, we take into account the presence of several weak sites where fatigue damage can accumulate and a loading with an arbitrary (but positive) stress ratio. The Poisson process of initiation of WMVs is considered, whose rate depends on the size of a specimen. The cumulative distribution function (cdf) of the fatigue life of every individual WMV is calculated using the Markov model of fatigue. For the case where this function is approximated by a lognormal distribution, a formula for calculating the cdf of fatigue life of the specimen (modeled as a chain of WMVs) is obtained. Only a pulsating cyclic loading was considered in the previous version of the model. Now, using the modified energy method, a loading cycle with an arbitrary stress ratio is "transformed" into an equivalent cycle with some other stress ratio. In such a way, the entire probabilistic fatigue diagram for any stress ratio with a positive cycle stress can be obtained. Numerical examples are presented.
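
    A sketch of the compounding step described, under a common weakest-link assumption (whether it matches the paper's exact formula is an assumption): if weak microvolumes initiate according to a Poisson law with mean μ depending on specimen size, and each WMV independently fails by N cycles with a lognormal cdf F1(N), then the specimen, modeled as a chain of WMVs, fails with probability F(N) = 1 − exp(−μ·F1(N)). Parameter values are arbitrary placeholders, and the Markov/energy-method machinery of the paper is not reproduced.

        import numpy as np
        from scipy.stats import lognorm

        mu_wmv = 4.0                      # mean number of weak microvolumes per specimen (assumed)
        sigma, scale = 0.5, 2.0e5         # lognormal shape and median fatigue life of one WMV (assumed)

        def specimen_cdf(n_cycles):
            """Weakest-link cdf for a Poisson(mu_wmv) number of independent WMVs."""
            f1 = lognorm.cdf(n_cycles, s=sigma, scale=scale)
            return 1.0 - np.exp(-mu_wmv * f1)

        for n in (5e4, 1e5, 2e5, 5e5):
            print(f"N = {n:8.0f} cycles : P(specimen failed) = {specimen_cdf(n):.3f}")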

  20. A 12 year EDF study of concrete creep under uniaxial and biaxial loading

    DOE PAGES

    Charpin, Laurent; Le Pape, Yann; Coustabeau, Eric; ...

    2017-11-04

    This paper presents a 12-year-long creep and shrinkage experimental campaign on cylindrical and prismatic concrete samples under uniaxial and biaxial stress, respectively. The motivation for the study is the need for predicting the delayed strains and the pre-stress loss of concrete containment buildings of nuclear power plants. Two subjects are central in this regard: the creep strain's long-term evolution and the creep Poisson's ratio. A greater understanding of these areas is necessary to ensure reliable predictions of the long-term behavior of the concrete containment buildings. Long-term basic creep appears to evolve as a logarithmic function of time in the range of 3 to 10 years of testing. Similar trends are observed for drying creep, autogenous shrinkage, and drying shrinkage testing, which suggests that all delayed strains obtained using different loading and drying conditions originate from a common mechanism. The creep Poisson's ratio derived from the biaxial tests is approximately constant over time for both the basic and drying creep tests (creep strains corrected by the shrinkage strain). It is also shown that the biaxial non-drying samples undergo a significant increase in Young's modulus after 10 years.

  1. Predictors for the Number of Warning Information Sources During Tornadoes.

    PubMed

    Cong, Zhen; Luo, Jianjun; Liang, Daan; Nejat, Ali

    2017-04-01

    People may receive tornado warnings from multiple information sources, but little is known about factors that affect the number of warning information sources (WISs). This study examined predictors for the number of WISs with a telephone survey on randomly sampled residents in Tuscaloosa, Alabama, and Joplin, Missouri, approximately 1 year after both cities were struck by violent tornadoes (EF4 and EF5) in 2011. The survey included 1006 finished interviews and the working sample included 903 respondents. Poisson regression and Zero-Inflated Poisson regression showed that older age and having an emergency plan predicted more WISs in both cities. Education, marital status, and gender affected the possibilities of receiving warnings and the number of WISs either in Joplin or in Tuscaloosa. The findings suggest that social disparity affects the access to warnings not only with respect to the likelihood of receiving any warnings but also with respect to the number of WISs. In addition, historical and social contexts are important for examining predictors for the number of WISs. We recommend that the number of WISs should be regarded as an important measure to evaluate access to warnings in addition to the likelihood of receiving warnings. (Disaster Med Public Health Preparedness. 2017;11:168-172).
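
    A sketch of the two count models used, assuming a pandas data frame with the number of warning information sources (n_wis), respondent age, and an emergency-plan indicator; the data and column names are synthetic placeholders for the survey variables, and a constant-only inflation part is assumed for the zero-inflated fit.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        from statsmodels.discrete.count_model import ZeroInflatedPoisson

        rng = np.random.default_rng(7)
        n = 900
        age = rng.uniform(18, 85, n)
        has_plan = rng.binomial(1, 0.4, n)
        # synthetic counts: some structural zeros plus a Poisson part that grows with age and planning
        received = rng.binomial(1, 0.85, n)
        lam = np.exp(-0.5 + 0.01 * age + 0.3 * has_plan)
        n_wis = received * rng.poisson(lam)
        df = pd.DataFrame({"n_wis": n_wis, "age": age, "has_plan": has_plan})

        X = sm.add_constant(df[["age", "has_plan"]])
        poisson_fit = sm.Poisson(df["n_wis"], X).fit(disp=False)
        zip_fit = ZeroInflatedPoisson(df["n_wis"], X, exog_infl=np.ones((n, 1))).fit(
            method="bfgs", maxiter=200, disp=False)
        print(poisson_fit.params)
        print(zip_fit.params)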

  2. Continuum description of ionic and dielectric shielding for molecular-dynamics simulations of proteins in solution

    NASA Astrophysics Data System (ADS)

    Egwolf, Bernhard; Tavan, Paul

    2004-01-01

    We extend our continuum description of solvent dielectrics in molecular-dynamics (MD) simulations [B. Egwolf and P. Tavan, J. Chem. Phys. 118, 2039 (2003)], which has provided an efficient and accurate solution of the Poisson equation, to ionic solvents as described by the linearized Poisson-Boltzmann (LPB) equation. We start with the formulation of a general theory for the electrostatics of an arbitrarily shaped molecular system, which consists of partially charged atoms and is embedded in a LPB continuum. This theory represents the reaction field induced by the continuum in terms of charge and dipole densities localized within the molecular system. Because these densities cannot be calculated analytically for systems of arbitrary shape, we introduce an atom-based discretization and a set of carefully designed approximations. This allows us to represent the densities by charges and dipoles located at the atoms. Coupled systems of linear equations determine these multipoles and can be rapidly solved by iteration during a MD simulation. The multipoles yield the reaction field forces and energies. Finally, we scrutinize the quality of our approach by comparisons with an analytical solution restricted to perfectly spherical systems and with results of a finite difference method.

  3. Solving the problem of negative populations in approximate accelerated stochastic simulations using the representative reaction approach.

    PubMed

    Kadam, Shantanu; Vanka, Kumar

    2013-02-15

    Methods based on the stochastic formulation of chemical kinetics have the potential to accurately reproduce the dynamical behavior of various biochemical systems of interest. However, the computational expense makes them impractical for the study of real systems. Attempts to render these methods practical have led to the development of accelerated methods, where the reaction numbers are modeled by Poisson random numbers. However, for certain systems, such methods give rise to physically unrealistic negative numbers for species populations. The methods which make use of binomial variables, in place of Poisson random numbers, have since become popular, and have been partially successful in addressing this problem. In this manuscript, the development of two new computational methods, based on the representative reaction approach (RRA), has been discussed. The new methods endeavor to solve the problem of negative numbers, by making use of tools like the stochastic simulation algorithm and the binomial method, in conjunction with the RRA. It is found that these newly developed methods perform better than other binomial methods used for stochastic simulations, in resolving the problem of negative populations. Copyright © 2012 Wiley Periodicals, Inc.
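
    A toy illustration of the negative-population problem and the binomial remedy, using a plain tau-leap for a single decay reaction A → ∅. This is not the representative reaction approach itself, only the Poisson-versus-binomial contrast it builds on; the rate constant and leap size are arbitrary.

        import numpy as np

        rng = np.random.default_rng(8)
        k, tau = 2.0, 0.4          # rate constant and a deliberately large leap size
        n_runs = 10_000

        negatives = 0
        for _ in range(n_runs):
            a = 5                                    # small starting population
            # Poisson leap: number of firings ~ Poisson(k * a * tau), which can exceed a
            firings = rng.poisson(k * a * tau)
            if a - firings < 0:
                negatives += 1
        print("fraction of Poisson leaps that went negative:", negatives / n_runs)

        # binomial leap: each of the a molecules reacts with probability 1 - exp(-k*tau); never exceeds a
        a = 5
        firings = rng.binomial(a, 1.0 - np.exp(-k * tau), size=n_runs)
        print("max firings with binomial leap:", firings.max(), "(never more than", a, ")")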

  4. Limits on Achievable Dimensional and Photon Efficiencies with Intensity-Modulation and Photon-Counting Due to Non-Ideal Photon-Counter Behavior

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Erkmen, Baris I.; Farr, William; Dolinar, Samuel J.; Birnbaum, Kevin M.

    2012-01-01

    An ideal intensity-modulated photon-counting channel can achieve unbounded photon information efficiencies (PIEs). However, a number of limitations of a physical system limit the practically achievable PIE. In this paper, we discuss several of these limitations and illustrate their impact on the channel. We show that, for the Poisson channel, noise does not strictly bound PIE, although there is an effective limit, as the dimensional information efficiency goes as e^(-PIE) beyond a threshold PIE. Since the Holevo limit is bounded in the presence of noise, this illustrates that the Poisson approximation is invalid at large PIE for any number of noise modes. We show that a finite transmitter extinction ratio bounds the achievable PIE to a maximum that is logarithmic in the extinction ratio. We show how detector jitter limits the ability to mitigate noise in the PPM signaling framework. We illustrate a method to model detector blocking when the number of detectors is large, and illustrate mitigation of blocking with spatial spreading and altering. Finally, we illustrate the design of a high photon efficiency system using state-of-the-art photo-detectors and taking all these effects into account.

  5. A 12 year EDF study of concrete creep under uniaxial and biaxial loading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Charpin, Laurent; Le Pape, Yann; Coustabeau, Eric

    This paper presents a 12-year-long creep and shrinkage experimental campaign on cylindrical and prismatic concrete samples under uniaxial and biaxial stress, respectively. The motivation for the study is the need for predicting the delayed strains and the pre-stress loss of concrete containment buildings of nuclear power plants. Two subjects are central in this regard: the creep strain's long-term evolution and the creep Poisson's ratio. A greater understanding of these areas is necessary to ensure reliable predictions of the long-term behavior of the concrete containment buildings. Long-term basic creep appears to evolve as a logarithmic function of time in the range of 3 to 10 years of testing. Similar trends are observed for drying creep, autogenous shrinkage, and drying shrinkage testing, which suggests that all delayed strains obtained using different loading and drying conditions originate from a common mechanism. The creep Poisson's ratio derived from the biaxial tests is approximately constant over time for both the basic and drying creep tests (creep strains corrected by the shrinkage strain). It is also shown that the biaxial non-drying samples undergo a significant increase in Young's modulus after 10 years.

  6. Probabilistic assessment of precipitation-triggered landslides using historical records of landslide occurrence, Seattle, Washington

    USGS Publications Warehouse

    Coe, J.A.; Michael, J.A.; Crovelli, R.A.; Savage, W.Z.; Laprade, W.T.; Nashem, W.D.

    2004-01-01

    Ninety years of historical landslide records were used as input to the Poisson and binomial probability models. Results from these models show that, for precipitation-triggered landslides, approximately 9 percent of the area of Seattle has annual exceedance probabilities of 1 percent or greater. Application of the Poisson model for estimating the future occurrence of individual landslides results in a worst-case scenario map, with a maximum annual exceedance probability of 25 percent on a hillslope near Duwamish Head in West Seattle. Application of the binomial model for estimating the future occurrence of a year with one or more landslides results in a map with a maximum annual exceedance probability of 17 percent (also near Duwamish Head). Slope and geology both play a role in localizing the occurrence of landslides in Seattle. A positive correlation exists between slope and mean exceedance probability, with probability tending to increase as slope increases. Sixty-four percent of all historical landslide locations are within 150 m (500 ft, horizontal distance) of the Esperance Sand/Lawton Clay contact, but within this zone, no positive or negative correlation exists between exceedance probability and distance to the contact.
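
    The two exceedance probabilities mentioned above can be illustrated with a short calculation; the record length and counts below are made up and only show the general form of the Poisson and binomial estimates (an annual rate from a historical record, then the probability of at least one event next year).

      import math

      years_of_record = 90        # length of the historical landslide record
      events_in_cell = 5          # hypothetical landslides recorded in one map cell
      years_with_events = 4       # hypothetical years with at least one landslide

      lam = events_in_cell / years_of_record            # mean events per year
      p_poisson = 1.0 - math.exp(-lam)                  # P(one or more events next year)
      p_binomial = years_with_events / years_of_record  # P(next year is a landslide year)

      print(f"Poisson annual exceedance probability:  {p_poisson:.3f}")
      print(f"Binomial annual exceedance probability: {p_binomial:.3f}")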

  7. Subaru HDS transmission spectroscopy of the transiting extrasolar planet HD209458b

    NASA Astrophysics Data System (ADS)

    Narita, N.; Suto, Y.; Winn, J. N.; Turner, E. L.; Aoki, W.; Leigh, C. J.; Sato, B.; Tamura, M.; Yamada, T.

    2006-02-01

    We have searched for absorption in several common atomic species due to the atmosphere or exosphere of the transiting extrasolar planet HD 209458b, using high precision optical spectra obtained with the Subaru High Dispersion Spectrograph (HDS). Previously we reported an upper limit on Hα absorption of 0.1% (3σ) within a 5.1Å band. Using the same procedure, we now report upper limits on absorption due to the optical transitions of Na D, Li, Hα, Hβ, Hγ, Fe, and Ca. The 3σ upper limit for each transition is approximately 1% within a 0.3Å band (the core of the line), and a few tenths of a per cent within a 2Å band (the full line width). The wide-band results are close to the expected limit due to photon-counting (Poisson) statistics, although in the narrow-band case we have encountered unexplained systematic errors at a few times the Poisson level. These results are consistent with all previously reported detections (Charbonneau et al. 2002, ApJ, 568, 377) and upper limits (Bundy & Marcy 2000, PASP, 112, 1421; Moutou et al. 2001, A&A, 371, 260), but are significantly more sensitive than those previously achieved from ground-based observations.

  8. A coarse-grid-projection acceleration method for finite-element incompressible flow computations

    NASA Astrophysics Data System (ADS)

    Kashefi, Ali; Staples, Anne; FiN Lab Team

    2015-11-01

    Coarse grid projection (CGP) methodology provides a framework for accelerating computations by performing some part of the computation on a coarsened grid. We apply the CGP to pressure projection methods for finite element-based incompressible flow simulations. In this approach, the predicted velocity field is restricted to a coarsened grid, the pressure is determined by solving the Poisson equation on the coarse grid, and the resulting data are prolonged to the preset fine grid. The contributions of the CGP method to the pressure correction technique are twofold: first, it substantially lessens the computational cost devoted to the Poisson equation, which is the most time-consuming part of the simulation process. Second, it preserves the accuracy of the velocity field. The velocity and pressure spaces are approximated with a Galerkin spectral element formulation using piecewise linear basis functions. A restriction operator is designed so that fine data are directly injected into the coarse grid. The Laplacian and divergence matrices are derived by taking inner products of coarse grid shape functions. Linear interpolation is implemented to construct a prolongation operator. A study of the data accuracy and the CPU time for the CGP-based versus non-CGP computations is presented. Laboratory for Fluid Dynamics in Nature.
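
    A minimal one-dimensional sketch of the coarse-grid-projection ingredients described above (restriction by direct injection, a Poisson solve on the coarse grid, and prolongation by linear interpolation) is given below; it is only an illustration under simplified assumptions, not the authors' finite element code.

      import numpy as np

      def poisson_solve(rhs, h):
          """Solve -u'' = rhs on a uniform grid with homogeneous Dirichlet BCs."""
          n = len(rhs)
          A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
               - np.diag(np.ones(n - 1), -1)) / h**2
          return np.linalg.solve(A, rhs)

      n_fine = 63                                  # interior fine-grid points
      h_fine = 1.0 / (n_fine + 1)
      x_fine = np.linspace(h_fine, 1.0 - h_fine, n_fine)
      rhs_fine = np.sin(np.pi * x_fine)

      rhs_coarse = rhs_fine[1::2]                  # restriction by direct injection
      u_coarse = poisson_solve(rhs_coarse, 2.0 * h_fine)
      u_fine = np.interp(x_fine, x_fine[1::2], u_coarse)   # linear prolongation

      exact = np.sin(np.pi * x_fine) / np.pi**2    # exact solution of -u'' = sin(pi x)
      print(np.max(np.abs(u_fine - exact)))        # error of the coarse-grid approximation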

  9. Ionic size effects to molecular solvation energy and to ion current across a channel resulted from the nonuniform size-modified PNP equations.

    PubMed

    Qiao, Yu; Tu, Bin; Lu, Benzhuo

    2014-05-07

    Ionic finite size can have considerable effects on both the equilibrium and non-equilibrium properties of a solvated molecular system, such as the solvation energy, ionic concentration, and transport in a channel. As discussed in our former work [B. Lu and Y. C. Zhou, Biophys. J. 100, 2475 (2011)], a class of size-modified Poisson-Boltzmann (PB)/Poisson-Nernst-Planck (PNP) models can be uniformly studied through the general nonuniform size-modified PNP (SMPNP) equations deduced from the extended free energy functional of Borukhov et al. [I. Borukhov, D. Andelman, and H. Orland, Phys. Rev. Lett. 79, 435 (1997)]. This work focuses on the nonuniform size effects on molecular solvation energy and on ion current across a channel for real biomolecular systems. The main contributions are: (1) we prove that for solvation energy calculation with nonuniform size effects (through equilibrium SMPNP simulation), there exists a simplified approximation formulation which is the same as the one widely used in the PB community. This approximate form avoids integration over the whole domain and makes energy calculations convenient. (2) Numerical calculations show that ionic size effects tend to negate the solvation effects, which indicates that a higher molecular solvation energy (lower absolute value) is to be predicted when ionic size effects are considered. For calculations on both a protein and a DNA fragment system in a 0.5 M 1:1 ionic solution, a difference of about 10 kcal/mol in solvation energies is found between the PB and the SMPNP predictions. Moreover, it is observed that the solvation energy decreases as ionic strength increases, behavior similar to that predicted by the traditional PB equation (without size effects) and by the uniform size-modified Poisson-Boltzmann equation. (3) Nonequilibrium SMPNP simulations of ion permeation through a gramicidin A channel show that the ionic size effects lead to reduced ion current inside the channel compared with results that neglect size effects. The drift term is the main contribution to the total current; the ionic size effects on the total current come almost entirely through the drift term and have little influence on the diffusion terms in the SMPNP model.

  10. Generalized Born Models of Macromolecular Solvation Effects

    NASA Astrophysics Data System (ADS)

    Bashford, Donald; Case, David A.

    2000-10-01

    It would often be useful in computer simulations to use a simple description of solvation effects, instead of explicitly representing the individual solvent molecules. Continuum dielectric models often work well in describing the thermodynamic aspects of aqueous solvation, and approximations to such models that avoid the need to solve the Poisson equation are attractive because of their computational efficiency. Here we give an overview of one such approximation, the generalized Born model, which is simple and fast enough to be used for molecular dynamics simulations of proteins and nucleic acids. We discuss its strengths and weaknesses, both for its fidelity to the underlying continuum model and for its ability to replace explicit consideration of solvent molecules in macromolecular simulations. We focus particularly on versions of the generalized Born model that have a pair-wise analytical form, and therefore fit most naturally into conventional molecular mechanics calculations.

  11. ANALYZING NUMERICAL ERRORS IN DOMAIN HEAT TRANSPORT MODELS USING THE CVBEM.

    USGS Publications Warehouse

    Hromadka, T.V.; ,

    1985-01-01

    Besides providing an exact solution for steady-state heat conduction processes (Laplace Poisson equations), the CVBEM (complex variable boundary element method) can be used for the numerical error analysis of domain model solutions. For problems where soil water phase change latent heat effects dominate the thermal regime, heat transport can be approximately modeled as a time-stepped steady-state condition in the thawed and frozen regions, respectively. The CVBEM provides an exact solution of the two-dimensional steady-state heat transport problem, and also provides the error in matching the prescribed boundary conditions by the development of a modeling error distribution or an approximative boundary generation. This error evaluation can be used to develop highly accurate CVBEM models of the heat transport process, and the resulting model can be used as a test case for evaluating the precision of domain models based on finite elements or finite differences.

  12. Edge Vortex Flow Due to Inhomogeneous Ion Concentration

    NASA Astrophysics Data System (ADS)

    Sugioka, Hideyuki

    2017-04-01

    The ion distribution of an open parallel electrode system is not known even though it is often used to measure the electrical characteristics of an electrolyte. Thus, for an open electrode system, we perform a non-steady direct multiphysics simulation based on the coupled Poisson-Nernst-Planck and Stokes equations and find that inhomogeneous ion concentrations at edges cause vortex flows and suppress the anomalous increase in the ion concentration near the electrodes. A surprising aspect of our findings is that the large vortex flows at the edges approximately maintain the ion-conserving condition, and thus the ion distribution of an open electrode system can be approximated by the solution of a closed electrode system that considers the ion-conserving condition rather than the Gouy-Chapman solution, which neglects the ion-conserving condition. We believe that our findings make a significant contribution to the understanding of surface science.

  13. A simple quantum mechanical treatment of scattering in nanoscale transistors

    NASA Astrophysics Data System (ADS)

    Venugopal, R.; Paulsson, M.; Goasguen, S.; Datta, S.; Lundstrom, M. S.

    2003-05-01

    We present a computationally efficient, two-dimensional quantum mechanical simulation scheme for modeling dissipative electron transport in thin body, fully depleted, n-channel, silicon-on-insulator transistors. The simulation scheme, which solves the nonequilibrium Green's function equations self-consistently with Poisson's equation, treats the effect of scattering using a simple approximation inspired by the "Büttiker probes," often used in mesoscopic physics. It is based on an expansion of the active device Hamiltonian in decoupled mode space. Simulation results are used to highlight quantum effects, discuss the physics of scattering and to relate the quantum mechanical quantities used in our model to experimentally measured low field mobilities. Additionally, quantum boundary conditions are rigorously derived and the effects of strong off-equilibrium transport are examined. This paper shows that our approximate treatment of scattering is an efficient and useful simulation method for modeling electron transport in nanoscale, silicon-on-insulator transistors.

  14. Fedosov’s formal symplectic groupoids and contravariant connections

    NASA Astrophysics Data System (ADS)

    Karabegov, Alexander V.

    2006-10-01

    Using Fedosov's approach we give a geometric construction of a formal symplectic groupoid over any Poisson manifold endowed with a torsion-free Poisson contravariant connection. In the case of Kähler-Poisson manifolds this construction provides, in particular, the formal symplectic groupoids with separation of variables. We show that the dual of a semisimple Lie algebra does not admit torsion-free Poisson contravariant connections.

  15. Complete synchronization of the global coupled dynamical network induced by Poisson noises.

    PubMed

    Guo, Qing; Wan, Fangyi

    2017-01-01

    Complete synchronization of the globally coupled dynamical network induced by different Poisson noises is investigated. Based on the stability theory of stochastic differential equations driven by Poisson processes, we prove that Poisson noises can induce synchronization, and sufficient conditions are established to achieve complete synchronization with probability 1. Furthermore, numerical examples are provided to show the agreement between theoretical and numerical analysis.

  16. Repairable-conditionally repairable damage model based on dual Poisson processes.

    PubMed

    Lind, B K; Persson, L M; Edgren, M R; Hedlöf, I; Brahme, A

    2003-09-01

    The advent of intensity-modulated radiation therapy makes it increasingly important to model the response accurately when large volumes of normal tissues are irradiated by controlled graded dose distributions aimed at maximizing tumor cure and minimizing normal tissue toxicity. The cell survival model proposed here is very useful and flexible for accurate description of the response of healthy tissues as well as tumors in classical and truly radiobiologically optimized radiation therapy. The repairable-conditionally repairable (RCR) model distinguishes between two different types of damage, namely the potentially repairable, which may also be lethal, i.e. if unrepaired or misrepaired, and the conditionally repairable, which may be repaired or may lead to apoptosis if it has not been repaired correctly. When potentially repairable damage is being repaired, for example by nonhomologous end joining, conditionally repairable damage may require in addition a high-fidelity correction by homologous repair. The induction of both types of damage is assumed to be described by Poisson statistics. The resultant cell survival expression has the unique ability to fit most experimental data well at low doses (the initial hypersensitive range), intermediate doses (on the shoulder of the survival curve), and high doses (on the quasi-exponential region of the survival curve). The complete Poisson expression can be approximated well by a simple bi-exponential cell survival expression, S(D) = e(-aD) + bDe(-cD), where the first term describes the survival of undamaged cells and the last term represents survival after complete repair of sublethal damage. The bi-exponential expression makes it easy to derive D(0), D(q), n and alpha, beta values to facilitate comparison with classical cell survival models.
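
    The bi-exponential approximation quoted above, S(D) = e^(-aD) + bD e^(-cD), is easy to evaluate directly; the sketch below uses illustrative parameter values only.

      import numpy as np

      def rcr_survival(dose, a, b, c):
          """Bi-exponential RCR approximation: undamaged cells plus fully repaired cells."""
          return np.exp(-a * dose) + b * dose * np.exp(-c * dose)

      doses = np.linspace(0.0, 10.0, 6)            # dose points in Gy
      print(rcr_survival(doses, a=1.0, b=1.5, c=0.8))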

  17. A Fock space representation for the quantum Lorentz gas

    NASA Astrophysics Data System (ADS)

    Maassen, H.; Tip, A.

    1995-02-01

    A Fock space representation is given for the quantum Lorentz gas, i.e., for random Schrödinger operators of the form H(ω) = p^2 + V_ω = p^2 + ∑_j φ(x − x_j(ω)), acting in H = L^2(R^d), with Poisson-distributed x_j's. An operator H is defined in K = H ⊗ P = H ⊗ L^2(Ω, P(dω)) = L^2(Ω, P(dω); H) by the action of H(ω) on its fibers in a direct integral decomposition. The stationarity of the Poisson process allows a unitarily equivalent description in terms of a new family {H(k) | k ∈ R^d}, where each H(k) acts in P [A. Tip, J. Math. Phys. 35, 113 (1994)]. The space P is then unitarily mapped upon the symmetric Fock space over L^2(R^d, ρ dx), with ρ the intensity of the Poisson process (the average number of points x_j per unit volume; the scatterer density), and the equivalent of H(k) is determined. Averages now become vacuum expectation values and a further unitary transformation (removing ρ in ρ dx) is made which leaves the former invariant. The resulting operator H_F(k) has an interesting structure: On the nth Fock layer we encounter a single particle moving in the field of n scatterers and the randomness now appears in the coefficient √ρ in a coupling term connecting neighboring Fock layers. We also give a simple direct self-adjointness proof for H_F(k), based upon Nelson's commutator theorem. Restriction to a finite number of layers (a kind of low scatterer density approximation) still gives nontrivial results, as is demonstrated by considering an example.

  18. Statistical distributions of earthquake numbers: consequence of branching process

    NASA Astrophysics Data System (ADS)

    Kagan, Yan Y.

    2010-03-01

    We discuss various statistical distributions of earthquake numbers. Previously, we derived several discrete distributions to describe earthquake numbers for the branching model of earthquake occurrence: these distributions are the Poisson, geometric, logarithmic and the negative binomial (NBD). The theoretical model is the `birth and immigration' population process. The first three distributions above can be considered special cases of the NBD. In particular, a point branching process along the magnitude (or log seismic moment) axis with independent events (immigrants) explains the magnitude/moment-frequency relation and the NBD of earthquake counts in large time/space windows, as well as the dependence of the NBD parameters on the magnitude threshold (magnitude of an earthquake catalogue completeness). We discuss applying these distributions, especially the NBD, to approximate event numbers in earthquake catalogues. There are many different representations of the NBD. Most can be traced either to the Pascal distribution or to the mixture of the Poisson distribution with the gamma law. We discuss advantages and drawbacks of both representations for statistical analysis of earthquake catalogues. We also consider applying the NBD to earthquake forecasts and describe the limits of the application for the given equations. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrence, the NBD has two parameters. The second parameter can be used to characterize clustering or overdispersion of a process. We determine the parameter values and their uncertainties for several local and global catalogues, and their subdivisions in various time intervals, magnitude thresholds, spatial windows, and tectonic categories. The theoretical model of how the clustering parameter depends on the corner (maximum) magnitude can be used to predict future earthquake number distribution in regions where very large earthquakes have not yet occurred.
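
    The overdispersion argument above can be illustrated with a small comparison of Poisson and negative binomial fits to synthetic window counts; the parameter values are arbitrary stand-ins for catalogue data.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      counts = rng.negative_binomial(n=2.0, p=0.3, size=500)   # overdispersed counts

      mean, var = counts.mean(), counts.var(ddof=1)
      print(f"mean={mean:.2f}, variance={var:.2f} (a Poisson fit forces variance = mean)")

      # Moment-based NBD parameters (size r and success probability p)
      r = mean**2 / (var - mean)
      p = r / (r + mean)
      ll_nbd = stats.nbinom.logpmf(counts, r, p).sum()
      ll_poisson = stats.poisson.logpmf(counts, mean).sum()
      print(f"log-likelihood: NBD={ll_nbd:.1f}, Poisson={ll_poisson:.1f}")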

  19. Finite element model predictions of static deformation from dislocation sources in a subduction zone: Sensitivities to homogeneous, isotropic, Poisson-solid, and half-space assumptions

    USGS Publications Warehouse

    Masterlark, Timothy

    2003-01-01

    Dislocation models can simulate static deformation caused by slip along a fault. These models usually take the form of a dislocation embedded in a homogeneous, isotropic, Poisson-solid half-space (HIPSHS). However, the widely accepted HIPSHS assumptions poorly approximate subduction zone systems of converging oceanic and continental crust. This study uses three-dimensional finite element models (FEMs) that allow for any combination (including none) of the HIPSHS assumptions to compute synthetic Green's functions for displacement. Using the 1995 Mw = 8.0 Jalisco-Colima, Mexico, subduction zone earthquake and associated measurements from a nearby GPS array as an example, FEM-generated synthetic Green's functions are combined with standard linear inverse methods to estimate dislocation distributions along the subduction interface. Loading a forward HIPSHS model with dislocation distributions, estimated from FEMs that sequentially relax the HIPSHS assumptions, yields the sensitivity of predicted displacements to each of the HIPSHS assumptions. For the subduction zone models tested and the specific field situation considered, sensitivities to the individual Poisson-solid, isotropy, and homogeneity assumptions can be substantially greater than GPS measurement uncertainties. Forward modeling quantifies stress coupling between the Mw = 8.0 earthquake and a nearby Mw = 6.3 earthquake that occurred 63 days later. Coulomb stress changes predicted from static HIPSHS models cannot account for the 63-day lag time between events. Alternatively, an FEM that includes a poroelastic oceanic crust, which allows for postseismic pore fluid pressure recovery, can account for the lag time. The pore fluid pressure recovery rate puts an upper limit of 10^-17 m^2 on the bulk permeability of the oceanic crust. Copyright 2003 by the American Geophysical Union.

  20. Human dynamics scaling characteristics for aerial inbound logistics operation

    NASA Astrophysics Data System (ADS)

    Wang, Qing; Guo, Jin-Li

    2010-05-01

    In recent years, the study of power-law scaling characteristics of real-life networks has attracted much interest from scholars; such behavior deviates from the Poisson process. In this paper, we take the whole process of aerial inbound operation in a logistics company as the empirical object. The main aim of this work is to study the statistical scaling characteristics of the task-restricted work patterns. We found that the statistical variables have the scaling characteristics of a unimodal distribution with a power-law tail in five statistical distributions - that is to say, there is an obvious peak in each distribution, the shape of the left part is close to a Poisson distribution, and the right part has heavy-tailed scaling statistics. Furthermore, to our surprise, there is only one distribution whose right part can be approximated by the power-law form with exponent α=1.50. The others are bigger than 1.50 (three of four are about 2.50, one of four is about 3.00). We then draw two inferences from these empirical results: first, human behaviors are probably close to both Poisson statistics and power-law distributions on certain levels, and human-computer interaction behaviors may be the most common in logistics operational areas, even in the whole task-restricted work pattern area. Second, the hypothesis in Vázquez et al. (2006) [A. Vázquez, J. G. Oliveira, Z. Dezsö, K.-I. Goh, I. Kondor, A.-L. Barabási. Modeling burst and heavy tails in human dynamics, Phys. Rev. E 73 (2006) 036127] is probably not sufficient; it claimed that human dynamics can be classified into two discrete universality classes. There may be a new human dynamics mechanism that is different from the classical Barabási models.

  1. Conjugate-gradient preconditioning methods for shift-variant PET image reconstruction.

    PubMed

    Fessler, J A; Booth, S D

    1999-01-01

    Gradient-based iterative methods often converge slowly for tomographic image reconstruction and image restoration problems, but can be accelerated by suitable preconditioners. Diagonal preconditioners offer some improvement in convergence rate, but do not incorporate the structure of the Hessian matrices in imaging problems. Circulant preconditioners can provide remarkable acceleration for inverse problems that are approximately shift-invariant, i.e., for those with approximately block-Toeplitz or block-circulant Hessians. However, in applications with nonuniform noise variance, such as arises from Poisson statistics in emission tomography and in quantum-limited optical imaging, the Hessian of the weighted least-squares objective function is quite shift-variant, and circulant preconditioners perform poorly. Additional shift-variance is caused by edge-preserving regularization methods based on nonquadratic penalty functions. This paper describes new preconditioners that approximate more accurately the Hessian matrices of shift-variant imaging problems. Compared to diagonal or circulant preconditioning, the new preconditioners lead to significantly faster convergence rates for the unconstrained conjugate-gradient (CG) iteration. We also propose a new efficient method for the line-search step required by CG methods. Applications to positron emission tomography (PET) illustrate the method.
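
    As a generic illustration of preconditioned conjugate gradients on a system with a nonuniform (shift-variant) diagonal, the sketch below applies a simple diagonal preconditioner with SciPy; it only shows the mechanics of supplying a preconditioner, not the paper's proposed preconditioners.

      import numpy as np
      from scipy.sparse import diags
      from scipy.sparse.linalg import LinearOperator, cg

      n = 500
      w = np.linspace(0.1, 5.0, n)                 # nonuniform weights (shift-variant)
      H = diags([-np.ones(n - 1), 2.0 + w, -np.ones(n - 1)], [-1, 0, 1]).tocsc()
      b = np.ones(n)

      d = H.diagonal()                             # diagonal preconditioner M ~ diag(H)
      M = LinearOperator((n, n), matvec=lambda v: v / d)

      x, info = cg(H, b, M=M)
      print("converged" if info == 0 else f"cg stopped with info={info}")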

  2. Semi-analytical modelling of positive corona discharge in air

    NASA Astrophysics Data System (ADS)

    Pontiga, Francisco; Yanallah, Khelifa; Chen, Junhong

    2013-09-01

    Semianalytical approximate solutions of the spatial distribution of electric field and electron and ion densities have been obtained by solving Poisson's equations and the continuity equations for the charged species along the Laplacian field lines. The need to iterate for the correct value of space charge on the corona electrode has been eliminated by using the corona current distribution over the grounded plane derived by Deutsch, which predicts a cos^m(θ) law similar to Warburg's law. Based on the results of the approximated model, a parametric study of the influence of gas pressure, the corona wire radius, and the inter-electrode wire-plate separation has been carried out. Also, the approximate solutions of the electron number density have been combined with a simplified plasma chemistry model in order to compute the ozone density generated by the corona discharge in the presence of a gas flow. This work was supported by the Consejeria de Innovacion, Ciencia y Empresa (Junta de Andalucia) and by the Ministerio de Ciencia e Innovacion, Spain, within the European Regional Development Fund contracts FQM-4983 and FIS2011-25161.

  3. Anticoagulant activity of marine green and brown algae collected from Jeju Island in Korea.

    PubMed

    Athukorala, Yasantha; Lee, Ki-Wan; Kim, Se-Kwon; Jeon, You-Jin

    2007-07-01

    Twenty-two algal species were evaluated for their potential anticoagulant activities. Hot water extracts from selected species, Codium fragile and Sargassum horneri, showed high activated partial thromboplastin time (APTT). Ultraflo extract of C. fragile and S. horneri exhibited the most potent anticoagulant activity. Furthermore, in both algal species, active compounds were mainly concentrated in the >30 kDa fraction. The crude polysaccharide fraction (>30 kDa; CpoF) of C. fragile was composed of approximately 80% carbohydrate and approximately 19% protein; the crude polysaccharide fraction (>30 kDa; CpoF) of S. horneri was composed of 97% carbohydrate and approximately 2% protein. Therefore, the active compound or compounds of these algal species are most probably high molecular weight polysaccharides, or complexes of carbohydrate and protein (proteoglycans).

  4. Application of the Conway-Maxwell-Poisson generalized linear model for analyzing motor vehicle crashes.

    PubMed

    Lord, Dominique; Guikema, Seth D; Geedipally, Srinivas Reddy

    2008-05-01

    This paper documents the application of the Conway-Maxwell-Poisson (COM-Poisson) generalized linear model (GLM) for modeling motor vehicle crashes. The COM-Poisson distribution, originally developed in 1962, has recently been re-introduced by statisticians for analyzing count data subjected to over- and under-dispersion. This innovative distribution is an extension of the Poisson distribution. The objectives of this study were to evaluate the application of the COM-Poisson GLM for analyzing motor vehicle crashes and compare the results with the traditional negative binomial (NB) model. The comparison analysis was carried out using the most common functional forms employed by transportation safety analysts, which link crashes to the entering flows at intersections or on segments. To accomplish the objectives of the study, several NB and COM-Poisson GLMs were developed and compared using two datasets. The first dataset contained crash data collected at signalized four-legged intersections in Toronto, Ont. The second dataset included data collected for rural four-lane divided and undivided highways in Texas. Several methods were used to assess the statistical fit and predictive performance of the models. The results of this study show that COM-Poisson GLMs perform as well as NB models in terms of GOF statistics and predictive performance. Given that the COM-Poisson distribution can also handle under-dispersed data (while the NB distribution cannot or has difficulties converging), which have sometimes been observed in crash databases, the COM-Poisson GLM offers a better alternative to the NB model for modeling motor vehicle crashes, especially given the important limitations recently documented in the safety literature about the latter type of model.
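
    For reference, the COM-Poisson probability mass function has the form P(Y=y) ∝ λ^y / (y!)^ν with a normalizing constant Z(λ, ν) that is usually approximated by a truncated sum; a minimal sketch (illustrative truncation and parameter values) is given below.

      import math

      def com_poisson_pmf(y, lam, nu, max_terms=100):
          # Truncated normalizing constant Z(lam, nu) = sum_j lam**j / (j!)**nu,
          # computed in log space to avoid overflow for moderate lam.
          log_terms = [j * math.log(lam) - nu * math.lgamma(j + 1) for j in range(max_terms)]
          m = max(log_terms)
          log_z = m + math.log(sum(math.exp(t - m) for t in log_terms))
          return math.exp(y * math.log(lam) - nu * math.lgamma(y + 1) - log_z)

      # nu = 1 recovers the Poisson; nu < 1 allows over-dispersion, nu > 1 under-dispersion
      print(com_poisson_pmf(3, lam=2.5, nu=1.0))   # matches a Poisson(2.5) pmf at y = 3
      print(com_poisson_pmf(3, lam=2.5, nu=0.7))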

  5. Conditional Poisson models: a flexible alternative to conditional logistic case cross-over analysis.

    PubMed

    Armstrong, Ben G; Gasparrini, Antonio; Tobias, Aurelio

    2014-11-24

    The time stratified case cross-over approach is a popular alternative to conventional time series regression for analysing associations between time series of environmental exposures (air pollution, weather) and counts of health outcomes. These are almost always analyzed using conditional logistic regression on data expanded to case-control (case crossover) format, but this has some limitations. In particular, adjusting for overdispersion and auto-correlation in the counts is not possible. It has been established that a Poisson model for counts with stratum indicators gives identical estimates to those from conditional logistic regression and does not have these limitations, but it is little used, probably because of the overheads in estimating many stratum parameters. The conditional Poisson model avoids estimating stratum parameters by conditioning on the total event count in each stratum, thus simplifying the computing and increasing the number of strata for which fitting is feasible compared with the standard unconditional Poisson model. Unlike the conditional logistic model, the conditional Poisson model does not require expanding the data, and can adjust for overdispersion and auto-correlation. It is available in Stata, R, and other packages. By applying the methods to real data and using simulations, we demonstrate that conditional Poisson models are simpler to code and faster to run than conditional logistic analyses and can be fitted to larger data sets than is possible with standard Poisson models. Allowing for overdispersion or autocorrelation was possible with the conditional Poisson model but when not required this model gave identical estimates to those from conditional logistic regression. Conditional Poisson regression models provide an alternative to case crossover analysis of stratified time series data with some advantages. The conditional Poisson model can also be used in other contexts in which primary control for confounding is by fine stratification.
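
    The equivalence noted above (a Poisson model with stratum indicators reproduces the stratified analysis) can be sketched in Python with statsmodels; the strata, exposure variable, and effect size below are simulated and purely illustrative.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(2)
      n_strata, per_stratum = 50, 10
      stratum = np.repeat(np.arange(n_strata), per_stratum)
      exposure = rng.normal(size=n_strata * per_stratum)      # e.g., daily temperature
      log_mu = 1.0 + 0.1 * exposure + rng.normal(0.0, 0.3, n_strata)[stratum]
      counts = rng.poisson(np.exp(log_mu))

      df = pd.DataFrame({"y": counts, "x": exposure, "stratum": stratum})
      fit = smf.poisson("y ~ x + C(stratum)", data=df).fit(disp=0)
      print(fit.params["x"])                                  # exposure effect within strata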

  6. Do bacterial cell numbers follow a theoretical Poisson distribution? Comparison of experimentally obtained numbers of single cells with random number generation via computer simulation.

    PubMed

    Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso; Koseki, Shigenobu

    2016-12-01

    We investigated a bacterial sample preparation procedure for single-cell studies. In the present study, we examined whether single bacterial cells obtained via 10-fold dilution followed a theoretical Poisson distribution. Four serotypes of Salmonella enterica, three serotypes of enterohaemorrhagic Escherichia coli and one serotype of Listeria monocytogenes were used as sample bacteria. An inoculum of each serotype was prepared via a 10-fold dilution series to obtain bacterial cell counts with mean values of one or two. To determine whether the experimentally obtained bacterial cell counts follow a theoretical Poisson distribution, a likelihood ratio test was conducted between the experimentally obtained cell counts and a Poisson distribution whose parameter was estimated by maximum likelihood estimation (MLE). The bacterial cell counts of each serotype sufficiently followed a Poisson distribution. Furthermore, to examine the validity of the parameters of the Poisson distribution obtained from experimental bacterial cell counts, we compared these with the parameters of a Poisson distribution that were estimated using random number generation via computer simulation. The Poisson distribution parameters experimentally obtained from bacterial cell counts were within the range of the parameters estimated using a computer simulation. These results demonstrate that the bacterial cell counts of each serotype obtained via 10-fold dilution followed a Poisson distribution. The fact that the frequency of bacterial cell counts follows a Poisson distribution at low numbers can be applied to single-cell studies with a few bacterial cells. In particular, the procedure presented in this study enables us to develop an inactivation model at the single-cell level that can estimate the variability of surviving bacterial numbers during the bacterial death process. Copyright © 2016 Elsevier Ltd. All rights reserved.
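
    A likelihood-ratio-style check of Poisson fit, in the spirit of the test described above, can be sketched as follows; the simulated counts and the grouping of categories are illustrative simplifications.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      counts = rng.poisson(1.5, size=50)                # e.g., cells per aliquot

      lam_hat = counts.mean()                           # Poisson MLE of the mean
      observed = np.bincount(counts)
      k = np.arange(len(observed))
      expected = len(counts) * stats.poisson.pmf(k, lam_hat)

      mask = observed > 0                               # keep categories actually observed
      g = 2.0 * np.sum(observed[mask] * np.log(observed[mask] / expected[mask]))
      p_value = stats.chi2.sf(g, df=max(mask.sum() - 2, 1))
      print(g, p_value)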

  7. The effect of vaccination coverage and climate on Japanese encephalitis in Sarawak, Malaysia.

    PubMed

    Impoinvil, Daniel E; Ooi, Mong How; Diggle, Peter J; Caminade, Cyril; Cardosa, Mary Jane; Morse, Andrew P; Baylis, Matthew; Solomon, Tom

    2013-01-01

    Japanese encephalitis (JE) is the leading cause of viral encephalitis across Asia with approximately 70,000 cases a year and 10,000 to 15,000 deaths. Because JE incidence varies widely over time, partly due to inter-annual climate variability effects on mosquito vector abundance, it becomes more complex to assess the effects of a vaccination programme since more or less climatically favourable years could also contribute to a change in incidence post-vaccination. Therefore, the objective of this study was to quantify vaccination effect on confirmed Japanese encephalitis (JE) cases in Sarawak, Malaysia after controlling for climate variability to better understand temporal dynamics of JE virus transmission and control. Monthly data on serologically confirmed JE cases were acquired from Sibu Hospital in Sarawak from 1997 to 2006. JE vaccine coverage (non-vaccine years vs. vaccine years) and meteorological predictor variables, including temperature, rainfall and the Southern Oscillation index (SOI) were tested for their association with JE cases using Poisson time series analysis and controlling for seasonality and long-term trend. Over the 10-years surveillance period, 133 confirmed JE cases were identified. There was an estimated 61% reduction in JE risk after the introduction of vaccination, when no account is taken of the effects of climate. This reduction is only approximately 45% when the effects of inter-annual variability in climate are controlled for in the model. The Poisson model indicated that rainfall (lag 1-month), minimum temperature (lag 6-months) and SOI (lag 6-months) were positively associated with JE cases. This study provides the first improved estimate of JE reduction through vaccination by taking account of climate inter-annual variability. Our analysis confirms that vaccination has substantially reduced JE risk in Sarawak but this benefit may be overestimated if climate effects are ignored.

  8. Evolution of deep gray matter volume across the human lifespan.

    PubMed

    Narvacan, Karl; Treit, Sarah; Camicioli, Richard; Martin, Wayne; Beaulieu, Christian

    2017-08-01

    Magnetic resonance imaging of subcortical gray matter structures, which mediate behavior, cognition and the pathophysiology of several diseases, is crucial for establishing typical maturation patterns across the human lifespan. This single site study examines T1-weighted MPRAGE images of 3 healthy cohorts: (i) a cross-sectional cohort of 406 subjects aged 5-83 years; (ii) a longitudinal neurodevelopment cohort of 84 subjects scanned twice approximately 4 years apart, aged 5-27 years at first scan; and (iii) a longitudinal aging cohort of 55 subjects scanned twice approximately 3 years apart, aged 46-83 years at first scan. First scans from longitudinal subjects were included in the cross-sectional analysis. Age-dependent changes in thalamus, caudate, putamen, globus pallidus, nucleus accumbens, hippocampus, and amygdala volumes were tested with Poisson, quadratic, and linear models in the cross-sectional cohort, and quadratic and linear models in the longitudinal cohorts. Most deep gray matter structures were best fit by Poisson regressions in the cross-sectional cohort and by quadratic curves in the young longitudinal cohort, whereas the volume of all structures except the caudate and globus pallidus decreased linearly in the longitudinal aging cohort. Males had larger volumes than females for all subcortical structures, but sex differences in trajectories of change with age were not significant. Within subject analysis showed that 65%-80% of 13-17 year olds underwent a longitudinal decrease in volume between scans (∼4 years apart) for the putamen, globus pallidus, and hippocampus, suggesting unique developmental processes during adolescence. This lifespan study of healthy participants will form a basis for comparison to neurological and psychiatric disorders. Hum Brain Mapp 38:3771-3790, 2017. © 2017 Wiley Periodicals, Inc.

  9. Characterizing the performance of the Conway-Maxwell Poisson generalized linear model.

    PubMed

    Francis, Royce A; Geedipally, Srinivas Reddy; Guikema, Seth D; Dhavala, Soma Sekhar; Lord, Dominique; LaRocca, Sarah

    2012-01-01

    Count data are pervasive in many areas of risk analysis; deaths, adverse health outcomes, infrastructure system failures, and traffic accidents are all recorded as count events, for example. Risk analysts often wish to estimate the probability distribution for the number of discrete events as part of doing a risk assessment. Traditional count data regression models of the type often used in risk assessment for this problem suffer from limitations due to the assumed variance structure. A more flexible model based on the Conway-Maxwell Poisson (COM-Poisson) distribution was recently proposed, a model that has the potential to overcome the limitations of the traditional model. However, the statistical performance of this new model has not yet been fully characterized. This article assesses the performance of a maximum likelihood estimation method for fitting the COM-Poisson generalized linear model (GLM). The objectives of this article are to (1) characterize the parameter estimation accuracy of the MLE implementation of the COM-Poisson GLM, and (2) estimate the prediction accuracy of the COM-Poisson GLM using simulated data sets. The results of the study indicate that the COM-Poisson GLM is flexible enough to model under-, equi-, and overdispersed data sets with different sample mean values. The results also show that the COM-Poisson GLM yields accurate parameter estimates. The COM-Poisson GLM provides a promising and flexible approach for performing count data regression. © 2011 Society for Risk Analysis.

  10. Identification of Novel Compounds against an R294K Substitution of Influenza A (H7N9) Virus Using Ensemble Based Drug Virtual Screening

    PubMed Central

    Tran, Nhut; Van, Thanh; Nguyen, Hieu; Le, Ly

    2015-01-01

    Influenza virus H7N9 first emerged in China in 2013 and has killed hundreds of people in Asia, since these strains possessed all of the mutations that enable them to resist all existing influenza drugs, resulting in high mortality in humans. In an effort to identify novel inhibitors to combat resistant strains of influenza virus H7N9, we performed virtual screening targeting the neuraminidase (NA) protein against natural compounds of the traditional Chinese medicine database (TCM) and ZINC natural products. Compounds that expressed high binding affinity for the target protein were then evaluated for molecular properties to determine drug-like molecules. Four compounds that showed binding energies of less than -11 kcal/mol were selected for molecular dynamics (MD) simulation to capture intermolecular interactions of the ligand-protein complexes. The molecular mechanics/Poisson-Boltzmann surface area (MM/PBSA) method was utilized to estimate the binding free energy of the complex. In terms of stability, NA-7181 (IUPAC name {9-Hydroxy-10-[3-(trifluoromethyl)cyclohexyl]-4,8-diazatricyclo[6.4.0.0^{2,6}]dodec-4-yl}(perhydro-1H-inden-5-yl)formaldehyde) achieved a stable conformation after 20 ns and 27 ns in the ligand and protein root mean square deviations, respectively. In terms of binding free energy, 7181 gave a negative value of -30.031 kJ/mol, indicating that the compound attains a favourable state in the active site of the protein. PMID:25589893

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Y.; Zheng, W.T., E-mail: WTZheng@jlu.edu.cn; Guan, W.M.

    The structural formation, elastic properties, hardness and electronic structure of TMB4 (TM = Cr, Re, Ru and Os) compounds are investigated using a first-principles approach. The value of C22 for these compounds is almost two times bigger than C11 and C33. The intrinsic hardness, shear modulus and Young's modulus are calculated to follow the sequence CrB4 > ReB4 > RuB4 > OsB4, and the Poisson's ratio and B/G ratio of TMB4 follow the order CrB4ReB4 > RuB4 > OsB4. • The trend of hardness for TMB4 is consistent with the variation of elastic modulus. • The C22 value of TMB4 is bigger than that of C11 and C33. • The high hardness of TMB4 originates from the B-B bond cage.

  12. Structural and elastic properties of A^IB^IIIC_2^VI semiconductors

    NASA Astrophysics Data System (ADS)

    Kumar, V.; Singh, Bhanu P.

    2018-01-01

    The plane wave pseudo-potential method within density functional theory has been used to calculate the structural and elastic properties of A^IB^IIIC_2^VI semiconductors. The electronic band structure, density of states, lattice constants (a and c), internal parameter (u), tetragonal distortion (η), energy gap (Eg), and bond lengths of the A-C (d_AC) and B-C (d_BC) bonds in A^IB^IIIC_2^VI semiconductors have been calculated. The values of elastic constants (Cij), bulk modulus (B), shear modulus (G), Young's modulus (Y), Poisson's ratio (υ), Zener anisotropy factor (A), Debye temperature (Θ_D) and G/B ratio have also been calculated. The values of all 15 parameters of CuTlS2 and CuTlSe2 compounds, and 8 parameters of 20 compounds of the A^IB^IIIC_2^VI family, except AgInS2 and AgInSe2, have been calculated for the first time. Reasonably good agreement has been obtained between the calculated, reported and available experimental values.

  13. Effect of Poisson's loss factor of rubbery material on underwater sound absorption of anechoic coatings

    NASA Astrophysics Data System (ADS)

    Zhong, Jie; Zhao, Honggang; Yang, Haibin; Yin, Jianfei; Wen, Jihong

    2018-06-01

    Rubbery coatings embedded with air cavities are commonly used on underwater structures to reduce reflection of incoming sound waves. In this paper, the relationships between Poisson's and modulus loss factors of rubbery materials are theoretically derived, the different effects of the tiny Poisson's loss factor on characterizing the loss factors of shear and longitudinal moduli are revealed. Given complex Young's modulus and dynamic Poisson's ratio, it is found that the shear loss factor has almost invisible variation with the Poisson's loss factor and is very close to the loss factor of Young's modulus, while the longitudinal loss factor almost linearly decreases with the increase of Poisson's loss factor. Then, a finite element (FE) model is used to investigate the effect of the tiny Poisson's loss factor, which is generally neglected in some FE models, on the underwater sound absorption of rubbery coatings. Results show that the tiny Poisson's loss factor has a significant effect on the sound absorption of homogeneous coatings within the concerned frequency range, while it has both frequency- and structure-dependent influence on the sound absorption of inhomogeneous coatings with embedded air cavities. Given the material parameters and cavity dimensions, more obvious effect can be observed for the rubbery coating with a larger lattice constant and/or a thicker cover layer.

  14. A Review of Multivariate Distributions for Count Data Derived from the Poisson Distribution

    PubMed Central

    Inouye, David; Yang, Eunho; Allen, Genevera; Ravikumar, Pradeep

    2017-01-01

    The Poisson distribution has been widely studied and used for modeling univariate count-valued data. Multivariate generalizations of the Poisson distribution that permit dependencies, however, have been far less popular. Yet, real-world high-dimensional count-valued data found in word counts, genomics, and crime statistics, for example, exhibit rich dependencies, and motivate the need for multivariate distributions that can appropriately model this data. We review multivariate distributions derived from the univariate Poisson, categorizing these models into three main classes: 1) where the marginal distributions are Poisson, 2) where the joint distribution is a mixture of independent multivariate Poisson distributions, and 3) where the node-conditional distributions are derived from the Poisson. We discuss the development of multiple instances of these classes and compare the models in terms of interpretability and theory. Then, we empirically compare multiple models from each class on three real-world datasets that have varying data characteristics from different domains, namely traffic accident data, biological next generation sequencing data, and text data. These empirical experiments develop intuition about the comparative advantages and disadvantages of each class of multivariate distribution that was derived from the Poisson. Finally, we suggest new research directions as explored in the subsequent discussion section. PMID:28983398
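
    One construction from the first class reviewed above (Poisson marginals with dependence induced by a shared component) is easy to simulate; the rates below are arbitrary.

      import numpy as np

      rng = np.random.default_rng(4)
      lam0, lam1, lam2 = 1.0, 2.0, 3.0
      size = 100_000
      y0 = rng.poisson(lam0, size)                 # shared component
      x1 = y0 + rng.poisson(lam1, size)            # Poisson(lam0 + lam1) marginal
      x2 = y0 + rng.poisson(lam2, size)            # Poisson(lam0 + lam2) marginal

      print(x1.mean(), x2.mean())                  # ~ 3.0 and ~ 4.0
      print(np.cov(x1, x2)[0, 1])                  # ~ lam0: covariance from the shared part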

  15. Non-linear properties of metallic cellular materials with a negative Poisson's ratio

    NASA Technical Reports Server (NTRS)

    Choi, J. B.; Lakes, R. S.

    1992-01-01

    Negative Poisson's ratio copper foam was prepared and characterized experimentally. The transformation into re-entrant foam was accomplished by applying sequential permanent compressions above the yield point to achieve a triaxial compression. The Poisson's ratio of the re-entrant foam depended on strain and attained a relative minimum at strains near zero. Poisson's ratio as small as -0.8 was achieved. The strain dependence of properties occurred over a narrower range of strain than in the polymer foams studied earlier. Annealing of the foam resulted in a slightly greater magnitude of negative Poisson's ratio and greater toughness at the expense of a decrease in the Young's modulus.

  16. Compositions, Random Sums and Continued Random Fractions of Poisson and Fractional Poisson Processes

    NASA Astrophysics Data System (ADS)

    Orsingher, Enzo; Polito, Federico

    2012-08-01

    In this paper we consider the relation between random sums and compositions of different processes. In particular, for independent Poisson processes N_α(t), N_β(t), t > 0, we have that N_α(N_β(t)) =_d ∑_{j=1}^{N_β(t)} X_j, where the X_j are Poisson random variables. We present a series of similar cases, where the outer process is Poisson with different inner processes. We highlight generalisations of these results where the external process is infinitely divisible. A section of the paper concerns compositions of the form N_α(τ_k^ν), ν ∈ (0,1], where τ_k^ν is the inverse of the fractional Poisson process, and we show how these compositions can be represented as random sums. Furthermore we study compositions of the form Θ(N(t)), t > 0, which can be represented as random products. The last section is devoted to studying continued fractions of Cauchy random variables with a Poisson number of levels. We evaluate the exact distribution and derive the scale parameter in terms of ratios of Fibonacci numbers.
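
    The random-sum identity quoted above can be checked by simulation: generating N_α(N_β(t)) directly and building the corresponding random sum of independent Poisson variables gives matching moments. The rates and horizon below are arbitrary.

      import numpy as np

      rng = np.random.default_rng(5)
      alpha, beta, t, n_sim = 1.5, 2.0, 3.0, 20_000

      inner = rng.poisson(beta * t, size=n_sim)            # N_beta(t)
      composed = rng.poisson(alpha * inner)                # N_alpha evaluated at N_beta(t)
      random_sum = np.array([rng.poisson(alpha, size=k).sum() for k in inner])

      print(composed.mean(), random_sum.mean())            # both close to alpha*beta*t = 9
      print(composed.var(), random_sum.var())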

  17. Functional response and capture timing in an individual-based model: predation by northern squawfish (Ptychocheilus oregonensis) on juvenile salmonids in the Columbia River

    USGS Publications Warehouse

    Petersen, James H.; DeAngelis, Donald L.

    1992-01-01

    The behavior of individual northern squawfish (Ptychocheilus oregonensis) preying on juvenile salmonids was modeled to address questions about capture rate and the timing of prey captures (random versus contagious). Prey density, predator weight, prey weight, temperature, and diel feeding pattern were first incorporated into predation equations analogous to Holling Type 2 and Type 3 functional response models. Type 2 and Type 3 equations fit field data from the Columbia River equally well, and both models predicted predation rates on five of seven independent dates. Selecting a functional response type may be complicated by variable predation rates, analytical methods, and assumptions of the model equations. Using the Type 2 functional response, random versus contagious timing of prey capture was tested using two related models. In the simpler model, salmon captures were assumed to be controlled by a Poisson renewal process; in the second model, several salmon captures were assumed to occur during brief "feeding bouts", modeled with a compound Poisson process. Salmon captures by individual northern squawfish were clustered through time, rather than random, based on comparison of model simulations and field data. The contagious-feeding result suggests that salmonids may be encountered as patches or schools in the river.
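
    The two capture-timing hypotheses described above (purely random captures versus captures clustered into feeding bouts) can be contrasted with a small simulation; the rates and bout sizes below are illustrative only, not fitted to the Columbia River data.

      import numpy as np

      rng = np.random.default_rng(6)
      t_end = 24.0                                   # hours of observation

      # Model 1: simple Poisson process of captures
      n_captures = rng.poisson(0.5 * t_end)
      captures_random = np.sort(rng.uniform(0.0, t_end, n_captures))

      # Model 2: compound Poisson process: bouts arrive at rate 0.1/h,
      # and each bout contains 1-6 captures
      n_bouts = rng.poisson(0.1 * t_end)
      bout_times = np.sort(rng.uniform(0.0, t_end, n_bouts))
      captures_clustered = np.repeat(bout_times, rng.integers(1, 7, n_bouts))

      print(len(captures_random), len(captures_clustered))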

  18. Exploring the existence of a stayer population with mover-stayer counting process models: application to joint damage in psoriatic arthritis.

    PubMed

    Yiu, Sean; Farewell, Vernon T; Tom, Brian D M

    2017-08-01

    Many psoriatic arthritis patients do not progress to permanent joint damage in any of the 28 hand joints, even under prolonged follow-up. This has led several researchers to fit models that estimate the proportion of stayers (those who do not have the propensity to experience the event of interest) and to characterize the rate of developing damaged joints in the movers (those who have the propensity to experience the event of interest). However, when fitted to the same data, the paper demonstrates that the choice of model for the movers can lead to widely varying conclusions on a stayer population, thus implying that, if interest lies in a stayer population, a single analysis should not generally be adopted. The aim of the paper is to provide greater understanding regarding estimation of a stayer population by comparing the inferences, performance and features of multiple fitted models to real and simulated data sets. The models for the movers are based on Poisson processes with patient level random effects and/or dynamic covariates, which are used to induce within-patient correlation, and observation level random effects are used to account for time varying unobserved heterogeneity. The gamma, inverse Gaussian and compound Poisson distributions are considered for the random effects.

  19. Non-Poisson Processes: Regression to Equilibrium Versus Equilibrium Correlation Functions

    DTIC Science & Technology

    2004-07-07

    Physica A 347 (2005) 268-288. PACS: 05.40.-a; 89.75.-k; 02.50.Ey. Keywords: stochastic processes; non-Poisson processes; Liouville and Liouville-like equations; correlation function. Abstract fragment: "... which is not legitimate with renewal non-Poisson processes, is a correct property if the deviation from the exponential relaxation is obtained by time ..."

  20. Probabilistic Estimation of Rare Random Collisions in 3 Space

    DTIC Science & Technology

    2009-03-01

    extended Poisson process as a feature of probability theory. With the bulk of research in extended Poisson processes going into parameter estimation, the ... application of extended Poisson processes to spatial processes is largely untouched. Faddy performed a short study of spatial data, but overtly ... the theory of extended Poisson processes. To date, the processes are limited in that the rates only depend on the number of arrivals at some time

  1. Evolution of the orbitals Dy-4f in the DyB2 compound using the LDA, PBE approximations, and the PBE0 hybrid functional

    NASA Astrophysics Data System (ADS)

    Rasero Causil, Diego; Ortega López, César; Espitia Rico, Miguel

    2018-04-01

    Computational calculations of total energy based on density functional theory were used to investigate the structural, electronic, and magnetic properties of the DyB2 compound in the hexagonal structure. The calculations were carried out by means of the full-potential linearized augmented plane wave (FP-LAPW) method, employing the Wien2k computational package. The local density approximation (LDA) and the generalized gradient approximation (GGA) were used for the electron-electron interactions. Additionally, we used the hybrid functional PBE0 for a better description of the electronic and magnetic properties, because the DyB2 compound is a strongly correlated system. We found that the calculated lattice constant agrees well with the values reported theoretically and experimentally. The density of states (DOS) calculation shows that the compound exhibits metallic behavior and has magnetic properties, with a total magnetic moment of 5.47 μB/cell determined mainly by the 4f states of the rare earth element. The functional PBE0 shows a strong localization of the Dy-4f orbitals.

  2. Chemical reaction networks as a model to describe UVC- and radiolytically-induced reactions of simple compounds.

    PubMed

    Dondi, Daniele; Merli, Daniele; Albini, Angelo; Zeffiro, Alberto; Serpone, Nick

    2012-05-01

    When a chemical system is submitted to high energy sources (UV, ionizing radiation, plasma sparks, etc.), as is expected to be the case of prebiotic chemistry studies, a plethora of reactive intermediates could form. If oxygen is present in excess, carbon dioxide and water are the major products. More interesting is the case of reducing conditions where synthetic pathways are also possible. This article examines the theoretical modeling of such systems with random-generated chemical networks. Four types of random-generated chemical networks were considered that originated from a combination of two connection topologies (viz., Poisson and scale-free) with reversible and irreversible chemical reactions. The results were analyzed taking into account the number of the most abundant products required for reaching 50% of the total number of moles of compounds at equilibrium, as this may be related to an actual problem of complex mixture analysis. The model accounts for multi-component reaction systems with no a priori knowledge of reacting species and the intermediates involved if system components are sufficiently interconnected. The approach taken is relevant to an earlier study on reactions that may have occurred in prebiotic systems where only a few compounds were detected. A validation of the model was attained on the basis of results of UVC and radiolytic reactions of prebiotic mixtures of low molecular weight compounds likely present on the primeval Earth.

  3. Batteries using molten salt electrolyte

    DOEpatents

    Guidotti, Ronald A.

    2003-04-08

    An electrolyte system suitable for a molten salt electrolyte battery is described, where the electrolyte system is a molten nitrate compound, an organic compound containing dissolved lithium salts, or a 1-ethyl-3-methylimidazolium salt with a melting temperature between approximately room temperature and approximately 250°C. With a compatible anode and cathode, the electrolyte system is utilized in a battery as a power source suitable for oil/gas borehole applications and in heat sensors.

  4. The Electrochemical Fluorination of Organosilicon Compounds

    NASA Technical Reports Server (NTRS)

    Seaver, Robert E.

    1961-01-01

    The electrochemical fluorination of tetramethylsilane, hexamethyldisiloxane, diethyldichlorosilane, amyltrichlorosilane, and phenyltrichlorosilane was conducted in an Inconel cell equipped with nickel electrodes. A potential of approximately 5.0 volts and a current of approximately 1.0 ampere were used for the electrolysis reaction. In all cases the fluorinations resulted in considerable scission of the carbon-silicon bonds yielding hydrogen and the various fluorinated decomposition products; no fluoroorganosilicon compounds were identified. The main decomposition products were silicon tetrafluoride, the corresponding fluorinated carbon compounds, and the various organofluorosilanes. It is suggested that this is due to the nucleophilic attack of the fluoride ion (or complex fluoride ion) on the carbon-silicon bond.

  5. Poisson-type inequalities for growth properties of positive superharmonic functions.

    PubMed

    Luan, Kuan; Vieira, John

    2017-01-01

    In this paper, we present new Poisson-type inequalities for Poisson integrals with continuous data on the boundary. The obtained inequalities are used to obtain growth properties at infinity of positive superharmonic functions in a smooth cone.

  6. Information transmission using non-poisson regular firing.

    PubMed

    Koyama, Shinsuke; Omi, Takahiro; Kass, Robert E; Shinomoto, Shigeru

    2013-04-01

    In many cortical areas, neural spike trains do not follow a Poisson process. In this study, we investigate a possible benefit of non-Poisson spiking for information transmission by studying the minimal rate fluctuation that can be detected by a Bayesian estimator. The idea is that an inhomogeneous Poisson process may make it difficult for downstream decoders to resolve subtle changes in rate fluctuation, but by using a more regular non-Poisson process, the nervous system can make rate fluctuations easier to detect. We evaluate the degree to which regular firing reduces the rate fluctuation detection threshold. We find that the threshold for detection is reduced in proportion to the coefficient of variation of interspike intervals.

  7. Graphic Simulations of the Poisson Process.

    DTIC Science & Technology

    1982-10-01

    The indexed excerpt contains table-of-contents fragments (random numbers and transformations; the random number generator; Poisson processes user guide) and describes the simulator's superimposed mode, in which two Poisson processes are active, each with a different rate parameter (Type I and Type II with respective rates L1 and L2). The probability p that a given event is of Type I is given by p = L1 / (L1 + L2).
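
    The superposition property quoted above lends itself to a short simulation. The hedged Python sketch below merges two homogeneous Poisson processes and labels each event Type I with probability p = L1 / (L1 + L2); the rates and the time horizon are illustrative assumptions, and the code is not taken from the report.

```python
# Hypothetical superimposed-mode simulation; rates and horizon are assumptions.
import numpy as np

rng = np.random.default_rng(1)
L1, L2, horizon = 2.0, 5.0, 1000.0
p = L1 / (L1 + L2)                          # probability that an event is Type I

t, labels = 0.0, []
while True:
    t += rng.exponential(1.0 / (L1 + L2))   # merged process has rate L1 + L2
    if t > horizon:
        break
    labels.append("I" if rng.random() < p else "II")

frac_type1 = labels.count("I") / len(labels)
print(f"fraction of Type I events: {frac_type1:.3f} (expected {p:.3f})")
```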

  8. Soft network materials with isotropic negative Poisson's ratios over large strains.

    PubMed

    Liu, Jianxing; Zhang, Yihui

    2018-01-31

    Auxetic materials with negative Poisson's ratios have important applications across a broad range of engineering areas, such as biomedical devices, aerospace engineering and automotive engineering. A variety of design strategies have been developed to achieve artificial auxetic materials with controllable responses in the Poisson's ratio. The development of designs that can offer isotropic negative Poisson's ratios over large strains can open up new opportunities in emerging biomedical applications, which, however, remains a challenge. Here, we introduce deterministic routes to soft architected materials that can be tailored precisely to yield the values of Poisson's ratio in the range from -1 to 1, in an isotropic manner, with a tunable strain range from 0% to ∼90%. The designs rely on a network construction in a periodic lattice topology, which incorporates zigzag microstructures as building blocks to connect lattice nodes. Combined experimental and theoretical studies on broad classes of network topologies illustrate the wide-ranging utility of these concepts. Quantitative mechanics modeling under both infinitesimal and finite deformations allows the development of a rigorous design algorithm that determines the necessary network geometries to yield target Poisson ratios over desired strain ranges. Demonstrative examples in artificial skin with both the negative Poisson's ratio and the nonlinear stress-strain curve precisely matching those of the cat's skin and in unusual cylindrical structures with engineered Poisson effect and shape memory effect suggest potential applications of these network materials.

  9. Universal Poisson Statistics of mRNAs with Complex Decay Pathways.

    PubMed

    Thattai, Mukund

    2016-01-19

    Messenger RNA (mRNA) dynamics in single cells are often modeled as a memoryless birth-death process with a constant probability per unit time that an mRNA molecule is synthesized or degraded. This predicts a Poisson steady-state distribution of mRNA number, in close agreement with experiments. This is surprising, since mRNA decay is known to be a complex process. The paradox is resolved by realizing that the Poisson steady state generalizes to arbitrary mRNA lifetime distributions. A mapping between mRNA dynamics and queueing theory highlights an identifiability problem: a measured Poisson steady state is consistent with a large variety of microscopic models. Here, I provide a rigorous and intuitive explanation for the universality of the Poisson steady state. I show that the mRNA birth-death process and its complex decay variants all take the form of the familiar Poisson law of rare events, under a nonlinear rescaling of time. As a corollary, not only steady-states but also transients are Poisson distributed. Deviations from the Poisson form occur only under two conditions, promoter fluctuations leading to transcriptional bursts or nonindependent degradation of mRNA molecules. These results place severe limits on the power of single-cell experiments to probe microscopic mechanisms, and they highlight the need for single-molecule measurements. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
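
    The birth-death picture summarized above is easy to check numerically. The hedged Python sketch below runs a Gillespie simulation of constant-rate synthesis and first-order degradation and compares the sampled mean and variance, which for a Poisson steady state should both equal k_syn / k_deg; the rate constants and the sampling scheme are illustrative assumptions.

```python
# Hypothetical birth-death (Gillespie) simulation; rate constants are assumptions.
import numpy as np

rng = np.random.default_rng(2)
k_syn, k_deg = 5.0, 0.1            # synthesis (molecules/time), degradation (per molecule)

n, t, t_end = 0, 0.0, 10_000.0
next_sample, samples = 1_000.0, [] # discard an initial burn-in before sampling

while t < t_end:
    birth, death = k_syn, k_deg * n
    total = birth + death
    dt = rng.exponential(1.0 / total)
    while next_sample < t + dt and next_sample < t_end:
        samples.append(n)          # the state is constant on [t, t + dt)
        next_sample += 1.0
    t += dt
    n += 1 if rng.random() < birth / total else -1

samples = np.array(samples)
print(f"mean {samples.mean():.1f}, variance {samples.var():.1f} "
      f"(Poisson steady state predicts both ~ {k_syn / k_deg:.1f})")
```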

  10. Quantum confined Stark effect on the binding energy of exciton in type II quantum heterostructure

    NASA Astrophysics Data System (ADS)

    Suseel, Rahul K.; Mathew, Vincent

    2018-05-01

    In this work, we have investigated the effect of an external electric field on the strongly confined excitonic properties of CdTe/CdSe/CdTe/CdSe type-II quantum dot heterostructures. Within the effective mass approximation, we solved the Poisson-Schrödinger equations for the exciton in the nanostructure using the relaxation method in a self-consistent iterative manner. We varied both the external electric field and the core radius of the quantum dot to study the behavior of the exciton binding energy. Our studies show that the external electric field destroys the positional flipped state of the exciton by modifying the confining potentials of the electron and hole.

  11. Approximating SIR-B response characteristics and estimating wave height and wavelength for ocean imagery

    NASA Technical Reports Server (NTRS)

    Tilley, David G.

    1987-01-01

    NASA Space Shuttle Challenger SIR-B ocean scenes are used to derive directional wave spectra for which speckle noise is modeled as a function of Rayleigh random phase coherence downrange and Poisson random amplitude errors inherent in the Doppler measurement of along-track position. A Fourier filter that preserves SIR-B image phase relations is used to correct the stationary and dynamic response characteristics of the remote sensor and scene correlator, as well as to subtract an estimate of the speckle noise component. A two-dimensional map of sea surface elevation is obtained after the filtered image is corrected for both random and deterministic motions.

  12. An assessment of the risk arising from electrical effects associated with the release of carbon fibers from general aviation aircraft fires

    NASA Technical Reports Server (NTRS)

    Rosenfield, D.; Fiksel, J.

    1980-01-01

    A Poisson type model was developed and exercised to estimate the risk of economic losses through 1993 due to potential electric effects of carbon fibers released from United States general aviation aircraft in the aftermath of a fire. Of the expected 354 annual general aviation aircraft accidents with fire projected for 1993, approximately 88 could involve carbon fibers. The average annual loss was estimated to be about $250 (1977 dollars) and the likelihood of exceeding $107,000 (1977 dollars) in annual loss in any one year was estimated to be at most one in ten thousand.

  13. The first principles study of elastic and thermodynamic properties of ZnSe

    NASA Astrophysics Data System (ADS)

    Khatta, Swati; Kaur, Veerpal; Tripathi, S. K.; Prakash, Satya

    2018-05-01

    The elastic and thermodynamic properties of ZnSe are investigated using the thermo_pw package implemented in the Quantum ESPRESSO code within the framework of density functional theory. The pseudopotential method within the local density approximation is used for the exchange-correlation potential. The physical parameters of ZnSe, namely the bulk modulus, shear modulus, anisotropy factor, Young's modulus, Poisson's ratio, Pugh's ratio and Frantsevich's ratio, are calculated. The sound velocity and Debye temperature are obtained from elastic constant calculations. The Helmholtz free energy and internal energy of ZnSe are also calculated. The results are compared with available theoretical calculations and experimental data.

  14. EPA Toxicologists Focus Innovative Research on PFAS Compounds

    EPA Pesticide Factsheets

    EPA researchers have partnered with researchers at the National Toxicology Program to develop a tiered testing approach to quickly generate toxicity and kinetic information for approximately 75 PFAS compounds.

  15. The solution of large multi-dimensional Poisson problems

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1974-01-01

    The Buneman algorithm for solving Poisson problems can be adapted to solve large Poisson problems on computers with a rotating drum memory so that the computation is done with very little time lost due to rotational latency of the drum.

  16. Method of making thermally removable adhesives

    DOEpatents

    Aubert, James H.

    2004-11-30

    A method of making a thermally-removable adhesive is provided where a bismaleimide compound, a monomeric furan compound containing an oxirane group, and an amine curative are mixed together at an elevated temperature of greater than approximately 90°C to form a homogeneous solution, which, when cooled to less than approximately 70°C, simultaneously initiates a Diels-Alder reaction between the furan and the bismaleimide and an epoxy curing reaction between the amine curative and the oxirane group to form a thermally-removable adhesive. Subsequent heating to a temperature greater than approximately 100°C causes the adhesive to melt and allows separation of adhered pieces.

  17. Method of making thermally removable polymeric encapsulants

    DOEpatents

    Small, James H.; Loy, Douglas A.; Wheeler, David R.; McElhanon, James R.; Saunders, Randall S.

    2001-01-01

    A method of making a thermally-removable encapsulant by heating a mixture of at least one bis(maleimide) compound and at least one monomeric tris(furan) or tetrakis(furan) compound at temperatures from above room temperature to less than approximately 90°C to form a gel and cooling the gel to form the thermally-removable encapsulant. The encapsulant can be easily removed within approximately an hour by heating to temperatures greater than approximately 90°C, preferably in a polar solvent. The encapsulant can be used in protecting electronic components that may require subsequent removal of the encapsulant for component repair, modification or quality control.

  18. Poisson Noise Removal in Spherical Multichannel Images: Application to Fermi data

    NASA Astrophysics Data System (ADS)

    Schmitt, Jérémy; Starck, Jean-Luc; Fadili, Jalal; Digel, Seth

    2012-03-01

    The Fermi Gamma-ray Space Telescope, which was launched by NASA in June 2008, is a powerful space observatory which studies the high-energy gamma-ray sky [5]. Fermi's main instrument, the Large Area Telescope (LAT), detects photons in an energy range between 20 MeV and >300 GeV. The LAT is much more sensitive than its predecessor, the Energetic Gamma Ray Experiment Telescope (EGRET) on the Compton Gamma-ray Observatory, and is expected to find several thousand gamma-ray point sources, an order of magnitude more than its predecessor EGRET [13]. Even with its relatively large acceptance (∼2 m² sr), the number of photons detected by the LAT outside the Galactic plane and away from intense sources is relatively low, and the sky overall has a diffuse glow from cosmic-ray interactions with interstellar gas and low-energy photons that makes a background against which point sources need to be detected. In addition, the per-photon angular resolution of the LAT is relatively poor and strongly energy dependent, ranging from >10° at 20 MeV to ∼0.1° above 100 GeV. Consequently, the spherical photon count images obtained by Fermi are degraded by fluctuations in the number of detected photons. This kind of noise is strongly signal dependent: on the brightest parts of the image, like the Galactic plane or the brightest sources, there are many photons per pixel, and so the photon noise is low. Outside the Galactic plane, the number of photons per pixel is low, which means that the photon noise is high. Such signal-dependent noise cannot be accurately modeled by a Gaussian distribution. The basic photon-imaging model assumes that the number of detected photons at each pixel location is Poisson distributed. More specifically, the image is considered as a realization of an inhomogeneous Poisson process. This statistical noise makes source detection more difficult; consequently, it is highly desirable to have an efficient denoising method for spherical Poisson data. Several techniques have been proposed in the literature to estimate Poisson intensity in two dimensions (2D). A major class of methods adopts a multiscale Bayesian framework specifically tailored for Poisson data [18], independently initiated by Timmerman and Nowak [23] and Kolaczyk [14]. Lefkimmiatis et al. [15] proposed an improved Bayesian framework for analyzing Poisson processes, based on a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities in adjacent scales are modeled as mixtures of conjugate parametric distributions. Another approach preprocesses the count data with a variance stabilizing transform (VST), such as the Anscombe [4] and Fisz [10] transforms, applied respectively in the spatial [8] or wavelet [11] domain. The transform reforms the data so that the noise approximately becomes Gaussian with a constant variance. Standard techniques for independent identically distributed Gaussian noise are then used for denoising. Zhang et al. [25] proposed a powerful method called the multiscale variance stabilizing transform (MS-VST). It consists of combining a VST with a multiscale transform (wavelets, ridgelets, or curvelets), yielding asymptotically normally distributed coefficients with known variances. The interest of using a multiscale method is to exploit the sparsity properties of the data: the data are transformed into a domain in which they are sparse, and, since the noise is not sparse in any transform domain, it is easy to separate it from the signal. When the noise is Gaussian with known variance, it is easy to remove it by thresholding in the wavelet domain. The choice of the multiscale transform depends on the morphology of the data. Wavelets represent regular structures and isotropic singularities more efficiently, whereas ridgelets are designed to represent global lines in an image, and curvelets represent curvilinear contours efficiently. Significant coefficients are then detected with binary hypothesis testing, and the final estimate is reconstructed with an iterative scheme. In Ref ...
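
    To make the variance-stabilization step concrete, the hedged Python sketch below applies the Anscombe transform, x -> 2*sqrt(x + 3/8), to simulated Poisson counts and shows that the transformed values have approximately unit variance. The test intensity, image size, and the placeholder denoiser are illustrative assumptions; this is not the MS-VST pipeline itself.

```python
# Hypothetical Anscombe variance-stabilization demo; intensity and sizes are assumptions.
import numpy as np

rng = np.random.default_rng(3)

intensity = 20.0 * np.ones((64, 64))             # assumed constant test intensity
counts = rng.poisson(intensity)                  # simulated Poisson "image"

stabilized = 2.0 * np.sqrt(counts + 3.0 / 8.0)   # Anscombe transform
print(f"raw variance        ~ {counts.var():.1f} (tracks the intensity)")
print(f"stabilized variance ~ {stabilized.var():.2f} (close to 1, independent of intensity)")

# A Gaussian denoiser (e.g., wavelet thresholding) would act here on `stabilized`;
# the algebraic inverse then maps the result back to the intensity scale.
denoised = stabilized                            # placeholder for the Gaussian denoiser
estimate = (denoised / 2.0) ** 2 - 3.0 / 8.0
```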

  19. EXACT DISTRIBUTIONS OF INTRACLASS CORRELATION AND CRONBACH'S ALPHA WITH GAUSSIAN DATA AND GENERAL COVARIANCE.

    PubMed

    Kistner, Emily O; Muller, Keith E

    2004-09-01

    Intraclass correlation and Cronbach's alpha are widely used to describe reliability of tests and measurements. Even with Gaussian data, exact distributions are known only for compound symmetric covariance (equal variances and equal correlations). Recently, large sample Gaussian approximations were derived for the distribution functions. New exact results allow calculating the exact distribution function and other properties of intraclass correlation and Cronbach's alpha, for Gaussian data with any covariance pattern, not just compound symmetry. Probabilities are computed in terms of the distribution function of a weighted sum of independent chi-square random variables. New F approximations for the distribution functions of intraclass correlation and Cronbach's alpha are much simpler and faster to compute than the exact forms. Assuming the covariance matrix is known, the approximations typically provide sufficient accuracy, even with as few as ten observations. Either the exact or approximate distributions may be used to create confidence intervals around an estimate of reliability. Monte Carlo simulations led to a number of conclusions. Correctly assuming that the covariance matrix is compound symmetric leads to accurate confidence intervals, as was expected from previously known results. However, assuming and estimating a general covariance matrix produces somewhat optimistically narrow confidence intervals with 10 observations. Increasing sample size to 100 gives essentially unbiased coverage. Incorrectly assuming compound symmetry leads to pessimistically large confidence intervals, with pessimism increasing with sample size. In contrast, incorrectly assuming general covariance introduces only a modest optimistic bias in small samples. Hence the new methods seem preferable for creating confidence intervals, except when compound symmetry definitely holds.

  20. On the Determination of Poisson Statistics for Haystack Radar Observations of Orbital Debris

    NASA Technical Reports Server (NTRS)

    Stokely, Christopher L.; Benbrook, James R.; Horstman, Matt

    2007-01-01

    A convenient and powerful method is used to determine if radar detections of orbital debris are observed according to Poisson statistics. This is done by analyzing the time interval between detection events. For Poisson statistics, the probability distribution of the time interval between events is shown to be an exponential distribution. This distribution is a special case of the Erlang distribution that is used in estimating traffic loads on telecommunication networks. Poisson statistics form the basis of many orbital debris models but the statistical basis of these models has not been clearly demonstrated empirically until now. Interestingly, during the fiscal year 2003 observations with the Haystack radar in a fixed staring mode, there are no statistically significant deviations observed from that expected with Poisson statistics, either independent or dependent of altitude or inclination. One would potentially expect some significant clustering of events in time as a result of satellite breakups, but the presence of Poisson statistics indicates that such debris disperse rapidly with respect to Haystack's very narrow radar beam. An exception to Poisson statistics is observed in the months following the intentional breakup of the Fengyun satellite in January 2007.
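
    The exponential inter-arrival property described above suggests a simple check that can be applied to any detection-time series. The hedged Python sketch below generates stand-in detection epochs, computes the intervals between consecutive detections, and compares them to an exponential distribution with a Kolmogorov-Smirnov test; the synthetic data and the use of a plain KS test (whose p-value is only approximate when the rate is estimated from the same data) are assumptions, and no Haystack data are used.

```python
# Hypothetical check for Poisson arrivals via exponential inter-arrival times.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Stand-in detection epochs (seconds); real radar detection times would go here.
detection_times = np.sort(rng.uniform(0.0, 3600.0, size=200))
intervals = np.diff(detection_times)

mean_interval = intervals.mean()                 # fitted rate = 1 / mean_interval
ks_stat, p_value = stats.kstest(intervals, "expon", args=(0.0, mean_interval))
print(f"KS statistic {ks_stat:.3f}, p-value {p_value:.3f}")
# Note: the p-value is approximate because the scale was estimated from the same data.
```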

  1. Bayesian inference on multiscale models for poisson intensity estimation: applications to photon-limited image denoising.

    PubMed

    Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George

    2009-08-01

    We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.

  2. Simulation Methods for Poisson Processes in Nonstationary Systems.

    DTIC Science & Technology

    1978-08-01

    ... for simulation of nonhomogeneous Poisson processes is stated, with a log-linear rate function. The method is based on an identity relating the ... and relatively efficient new method for simulation of one-dimensional and two-dimensional nonhomogeneous Poisson processes is described. The method is ...
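
    One standard way to simulate a nonhomogeneous Poisson process with a log-linear rate, sketched below in Python under assumed coefficients, is Lewis-Shedler thinning: candidate events are drawn from a homogeneous process at an upper-bound rate and accepted with probability lambda(t) divided by that bound. This is offered as a generic illustration, not as the specific identity-based method of the report.

```python
# Hypothetical thinning simulation of a log-linear-rate Poisson process;
# the coefficients a, b and the horizon T are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(5)
a, b, T = 0.0, 0.5, 5.0

def rate(t):
    """Log-linear intensity lambda(t) = exp(a + b*t)."""
    return np.exp(a + b * t)

lam_max = rate(T) if b >= 0 else rate(0.0)   # bound on lambda(t) over [0, T]

events, t = [], 0.0
while True:
    t += rng.exponential(1.0 / lam_max)      # candidate from the homogeneous process
    if t > T:
        break
    if rng.random() < rate(t) / lam_max:     # accept with probability lambda(t)/lam_max
        events.append(t)

expected = (np.exp(a + b * T) - np.exp(a)) / b   # integral of lambda over [0, T] (b != 0)
print(f"simulated {len(events)} events; expected about {expected:.1f}")
```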

  3. Poisson geometry from a Dirac perspective

    NASA Astrophysics Data System (ADS)

    Meinrenken, Eckhard

    2018-03-01

    We present proofs of classical results in Poisson geometry using techniques from Dirac geometry. This article is based on mini-courses at the Poisson summer school in Geneva, June 2016, and at the workshop Quantum Groups and Gravity at the University of Waterloo, April 2016.

  4. Identification of a Class of Filtered Poisson Processes.

    DTIC Science & Technology

    1981-01-01

    AD-A135 371. Identification of a Class of Filtered Poisson Processes. North Carolina Univ. at Chapel Hill, Dept. of Statistics; De Brucq, Denis; Gualtierotti, Antonio; 1981. A class of filtered Poisson processes is introduced: the amplitude has a law which is spherically invariant and the filter is real, linear and causal. It is shown how such a model can be identified from experimental data.

  5. Interactive Graphic Simulation of Rolling Element Bearings. Phase I. Low Frequency Phenomenon and RAPIDREB Development.

    DTIC Science & Technology

    1981-11-01

    The indexed excerpt reproduces input-card definitions from the RAPIDREB program, including fields for the housing and cage elastic modulus, Poisson's ratio, and material density.

  6. Minimum risk wavelet shrinkage operator for Poisson image denoising.

    PubMed

    Cheng, Wu; Hirakawa, Keigo

    2015-05-01

    The pixel values of images taken by an image sensor are said to be corrupted by Poisson noise. To date, multiscale Poisson image denoising techniques have processed Haar frame and wavelet coefficients--the modeling of coefficients is enabled by the Skellam distribution analysis. We extend these results by solving for shrinkage operators for Skellam that minimizes the risk functional in the multiscale Poisson image denoising setting. The minimum risk shrinkage operator of this kind effectively produces denoised wavelet coefficients with minimum attainable L2 error.

  7. Cumulative Poisson Distribution Program

    NASA Technical Reports Server (NTRS)

    Bowerman, Paul N.; Scheuer, Ernest M.; Nolty, Robert

    1990-01-01

    Overflow and underflow in sums prevented. Cumulative Poisson Distribution Program, CUMPOIS, one of two computer programs that make calculations involving cumulative Poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), used independently of one another. CUMPOIS determines cumulative Poisson distribution, used to evaluate cumulative distribution function (cdf) for gamma distributions with integer shape parameters and cdf for chi-square (χ²) distributions with even degrees of freedom. Used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. Written in C.
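
    The gamma and chi-square connection mentioned above is the classical identity P(N <= k; lambda) = Q(k+1, lambda): the Poisson cdf equals the upper tail of a gamma distribution with integer shape k+1, equivalently a chi-square tail with 2(k+1) degrees of freedom. The Python sketch below checks the identity with SciPy; it is an independent illustration, not the CUMPOIS code, and the example values of lambda and k are arbitrary.

```python
# Check of the Poisson/gamma/chi-square identity; lambda and k are arbitrary examples.
from scipy import stats
from scipy.special import gammaincc

lam, k = 7.5, 10

poisson_cdf = stats.poisson.cdf(k, lam)                  # P(N <= k; lambda)
gamma_tail = stats.gamma.sf(lam, a=k + 1)                # P(Gamma(k+1, scale=1) > lambda)
chi2_tail = stats.chi2.sf(2 * lam, df=2 * (k + 1))       # chi-square tail, 2(k+1) dof
regularized = gammaincc(k + 1, lam)                      # Q(k+1, lambda)

print(poisson_cdf, gamma_tail, chi2_tail, regularized)   # all four values coincide
```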

  8. Poly-symplectic Groupoids and Poly-Poisson Structures

    NASA Astrophysics Data System (ADS)

    Martinez, Nicolas

    2015-05-01

    We introduce poly-symplectic groupoids, which are natural extensions of symplectic groupoids to the context of poly-symplectic geometry, and define poly-Poisson structures as their infinitesimal counterparts. We present equivalent descriptions of poly-Poisson structures, including one related with AV-Dirac structures. We also discuss symmetries and reduction in the setting of poly-symplectic groupoids and poly-Poisson structures, and use our viewpoint to revisit results and develop new aspects of the theory initiated in Iglesias et al. (Lett Math Phys 103:1103-1133, 2013).

  9. Fractional poisson--a simple dose-response model for human norovirus.

    PubMed

    Messner, Michael J; Berger, Philip; Nappier, Sharon P

    2014-10-01

    This study utilizes old and new Norovirus (NoV) human challenge data to model the dose-response relationship for human NoV infection. The combined data set is used to update estimates from a previously published beta-Poisson dose-response model that includes parameters for virus aggregation and for a beta-distribution that describes variable susceptibility among hosts. The quality of the beta-Poisson model is examined and a simpler model is proposed. The new model (fractional Poisson) characterizes hosts as either perfectly susceptible or perfectly immune, requiring a single parameter (the fraction of perfectly susceptible hosts) in place of the two-parameter beta-distribution. A second parameter is included to account for virus aggregation in the same fashion as it is added to the beta-Poisson model. Infection probability is simply the product of the probability of nonzero exposure (at least one virus or aggregate is ingested) and the fraction of susceptible hosts. The model is computationally simple and appears to be well suited to the data from the NoV human challenge studies. The model's deviance is similar to that of the beta-Poisson, but with one parameter, rather than two. As a result, the Akaike information criterion favors the fractional Poisson over the beta-Poisson model. At low, environmentally relevant exposure levels (<100), estimation error is small for the fractional Poisson model; however, caution is advised because no subjects were challenged at such a low dose. New low-dose data would be of great value to further clarify the NoV dose-response relationship and to support improved risk assessment for environmentally relevant exposures. © 2014 Society for Risk Analysis Published 2014. This article is a U.S. Government work and is in the public domain for the U.S.A.
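
    The product form stated above translates directly into a one-line dose-response function. The hedged Python sketch below evaluates P(infection) as the susceptible fraction times the Poisson probability of ingesting at least one virus or aggregate; the parameter values and the simple dose/mean-aggregate-size scaling are assumptions for illustration, not the fitted values from the study.

```python
# Hypothetical fractional Poisson dose-response curve; parameters are assumptions.
import numpy as np

def fractional_poisson(dose, p_susceptible, mean_aggregate_size=1.0):
    """P(infection) = p_susceptible * P(at least one virus/aggregate ingested)."""
    dose = np.asarray(dose, dtype=float)
    return p_susceptible * (1.0 - np.exp(-dose / mean_aggregate_size))

doses = np.array([1.0, 10.0, 100.0, 1000.0])   # assumed mean ingested doses
print(fractional_poisson(doses, p_susceptible=0.7, mean_aggregate_size=1.0))
```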

  10. Modeling animal-vehicle collisions using diagonal inflated bivariate Poisson regression.

    PubMed

    Lao, Yunteng; Wu, Yao-Jan; Corey, Jonathan; Wang, Yinhai

    2011-01-01

    Two types of animal-vehicle collision (AVC) data are commonly adopted for AVC-related risk analysis research: reported AVC data and carcass removal data. One issue with these two data sets is that they were found to have significant discrepancies by previous studies. In order to model these two types of data together and provide a better understanding of highway AVCs, this study adopts a diagonal inflated bivariate Poisson regression method, an inflated version of bivariate Poisson regression model, to fit the reported AVC and carcass removal data sets collected in Washington State during 2002-2006. The diagonal inflated bivariate Poisson model not only can model paired data with correlation, but also handle under- or over-dispersed data sets as well. Compared with three other types of models, double Poisson, bivariate Poisson, and zero-inflated double Poisson, the diagonal inflated bivariate Poisson model demonstrates its capability of fitting two data sets with remarkable overlapping portions resulting from the same stochastic process. Therefore, the diagonal inflated bivariate Poisson model provides researchers a new approach to investigating AVCs from a different perspective involving the three distribution parameters (λ(1), λ(2) and λ(3)). The modeling results show the impacts of traffic elements, geometric design and geographic characteristics on the occurrences of both reported AVC and carcass removal data. It is found that the increase of some associated factors, such as speed limit, annual average daily traffic, and shoulder width, will increase the numbers of reported AVCs and carcass removals. Conversely, the presence of some geometric factors, such as rolling and mountainous terrain, will decrease the number of reported AVCs. Published by Elsevier Ltd.

  11. Crystallization of a Keplerate-type polyoxometalate into a superposed kagome-lattice with huge channels.

    PubMed

    Saito, Masaki; Ozeki, Tomoji

    2012-09-07

    Crystal structures of two Sr(2+) salts of the Keplerate-type polyoxometalate, [Mo(VI)(72)Mo(V)(60)O(372)(CH(3)COO)(30)(H(2)O)(72)](42-), have been determined by single crystal X-ray diffraction. One compound exhibits a superposed kagome-lattice with huge channels whose diameters measure approximately 3.0 nm, while the arrangement of the Keplerate anions in the other compound approximates to a distorted cubic close packing.

  12. Identification d’une Classe de Processus de Poisson Filtres (Identification of a Class of Filtered Poisson Processes).

    DTIC Science & Technology

    1983-05-20

    Poisson processes is introduced: the amplitude has a law which is spherically invariant and the filter is real, linear and causal. It is shown how such a model can be identified from experimental data. (Author)

  13. Concurrent topological design of composite structures and materials containing multiple phases of distinct Poisson's ratios

    NASA Astrophysics Data System (ADS)

    Long, Kai; Yuan, Philip F.; Xu, Shanqing; Xie, Yi Min

    2018-04-01

    Most studies on composites assume that the constituent phases have different values of stiffness. Little attention has been paid to the effect of constituent phases having distinct Poisson's ratios. This research focuses on a concurrent optimization method for simultaneously designing composite structures and materials with distinct Poisson's ratios. The proposed method aims to minimize the mean compliance of the macrostructure with a given mass of base materials. In contrast to the traditional interpolation of the stiffness matrix through numerical results, an interpolation scheme of the Young's modulus and Poisson's ratio using different parameters is adopted. The numerical results demonstrate that the Poisson effect plays a key role in reducing the mean compliance of the final design. An important contribution of the present study is that the proposed concurrent optimization method can automatically distribute base materials with distinct Poisson's ratios between the macrostructural and microstructural levels under a single constraint of the total mass.

  14. Poisson pre-processing of nonstationary photonic signals: Signals with equality between mean and variance.

    PubMed

    Poplová, Michaela; Sovka, Pavel; Cifra, Michal

    2017-01-01

    Photonic signals are broadly exploited in communication and sensing and they typically exhibit Poisson-like statistics. In a common scenario where the intensity of the photonic signals is low and one needs to remove a nonstationary trend of the signals for any further analysis, one faces an obstacle: due to the dependence between the mean and variance typical for a Poisson-like process, information about the trend remains in the variance even after the trend has been subtracted, possibly yielding artifactual results in further analyses. Commonly available detrending or normalizing methods cannot cope with this issue. To alleviate this issue we developed a suitable pre-processing method for the signals that originate from a Poisson-like process. In this paper, a Poisson pre-processing method for nonstationary time series with Poisson distribution is developed and tested on computer-generated model data and experimental data of chemiluminescence from human neutrophils and mung seeds. The presented method transforms a nonstationary Poisson signal into a stationary signal with a Poisson distribution while preserving the type of photocount distribution and phase-space structure of the signal. The importance of the suggested pre-processing method is shown in Fano factor and Hurst exponent analysis of both computer-generated model signals and experimental photonic signals. It is demonstrated that our pre-processing method is superior to standard detrending-based methods whenever further signal analysis is sensitive to variance of the signal.

  15. Poisson pre-processing of nonstationary photonic signals: Signals with equality between mean and variance

    PubMed Central

    Poplová, Michaela; Sovka, Pavel

    2017-01-01

    Photonic signals are broadly exploited in communication and sensing and they typically exhibit Poisson-like statistics. In a common scenario where the intensity of the photonic signals is low and one needs to remove a nonstationary trend of the signals for any further analysis, one faces an obstacle: due to the dependence between the mean and variance typical for a Poisson-like process, information about the trend remains in the variance even after the trend has been subtracted, possibly yielding artifactual results in further analyses. Commonly available detrending or normalizing methods cannot cope with this issue. To alleviate this issue we developed a suitable pre-processing method for the signals that originate from a Poisson-like process. In this paper, a Poisson pre-processing method for nonstationary time series with Poisson distribution is developed and tested on computer-generated model data and experimental data of chemiluminescence from human neutrophils and mung seeds. The presented method transforms a nonstationary Poisson signal into a stationary signal with a Poisson distribution while preserving the type of photocount distribution and phase-space structure of the signal. The importance of the suggested pre-processing method is shown in Fano factor and Hurst exponent analysis of both computer-generated model signals and experimental photonic signals. It is demonstrated that our pre-processing method is superior to standard detrending-based methods whenever further signal analysis is sensitive to variance of the signal. PMID:29216207

  16. Algorithm Calculates Cumulative Poisson Distribution

    NASA Technical Reports Server (NTRS)

    Bowerman, Paul N.; Nolty, Robert C.; Scheuer, Ernest M.

    1992-01-01

    Algorithm calculates accurate values of cumulative Poisson distribution under conditions where other algorithms fail because numbers are so small (underflow) or so large (overflow) that computer cannot process them. Factors inserted temporarily to prevent underflow and overflow. Implemented in CUMPOIS computer program described in "Cumulative Poisson Distribution Program" (NPO-17714).

  17. Leaf wound induced ultraweak photon emission is suppressed under anoxic stress: Observations of Spathiphyllum under aerobic and anaerobic conditions using novel in vivo methodology.

    PubMed

    Oros, Carl L; Alves, Fabio

    2018-01-01

    Plants have evolved a variety of means to energetically sense and respond to abiotic and biotic environmental stress. Two typical photochemical signaling responses involve the emission of volatile organic compounds and light. The emission of certain leaf wound volatiles and light are mutually dependent upon oxygen which is subsequently required for the wound-induced lipoxygenase reactions that trigger the formation of fatty acids and hydroperoxides; ultimately leading to photon emission by chlorophyll molecules. A low noise photomultiplier with sensitivity in the visible spectrum (300-720 nm) is used to continuously measure long duration ultraweak photon emission of dark-adapting whole Spathiphyllum leaves (in vivo). Leaves were mechanically wounded after two hours of dark adaptation in aerobic and anaerobic conditions. It was found that (1) nitrogen incubation did not affect the pre-wound basal photocounts; (2) wound induced leaf biophoton emission was significantly suppressed when under anoxic stress; and (3) the aerobic wound induced emission spectra observed was > 650 nm, implicating chlorophyll as the likely emitter. Limitations of the PMT photocathode's radiant sensitivity, however, prevented accurate analysis from 700-720 nm. Further examination of leaf wounding profile photon counts revealed that the pre-wounding basal state (aerobic and anoxic), the anoxic wounding state, and the post-wounding aerobic state statistics all approximate a Poisson distribution. It is additionally observed that aerobic wounding induces two distinct exponential decay events. These observations contribute to the body of plant wound-induced luminescence research and provide a novel methodology to measure this phenomenon in vivo.

  18. Rapid detection of contaminant bacteria in platelet concentrate using differential impedance.

    PubMed

    Zhao, Z; Chalmers, A; Rieder, R

    2014-08-01

    Current FDA-approved culture-based methods for the bacterial testing of platelet concentrate (PC) can yield false-negative results attributed to Poisson-limited sampling errors incurred near the time of collection that result in undetectable bacterial concentrations. Testing PC at the point of issue (POI) extends the incubation period for any contaminant bacteria, increasing the probability of detection. Data are presented from time-course experiments designed to simulate POI testing of bacterially contaminated PCs at different stages of growth using differential impedance sensing. Whole-blood-derived PCs were typically spiked with low numbers of bacteria (approximately 100 CFU/ml) and incubated under standard PC storage conditions. Each infected unit was evaluated every two hours over a 12-h period. All samples were treated with a chemical compound that induces stress in the bacterial cells only. The development of any bacterial stress was monitored by detecting changes in the dielectric properties of the PC using differential impedance. Differential impedance measurements and corresponding cell counts at the different time-points are presented for six organisms implicated in post-transfusion septic reactions. All infected PCs were detected once contaminant bacteria reached concentrations ranging between 0.6 × 10³ and 6 × 10³ CFU/ml irrespective of the phase of growth. Results were obtained within 30 min after the start of the assay and without the need for cell lysis or centrifugation. Differential impedance sensing can detect bacterial contamination in PC rapidly at concentrations below clinical thresholds known to cause adverse effects. © 2014 International Society of Blood Transfusion.
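
    The Poisson sampling limitation mentioned at the start of this abstract is easy to quantify. The hedged Python sketch below computes the probability that a culture sample drawn near collection contains no organisms at all, exp(-c*v), for a sample volume v and bacterial concentration c; the volume and concentration values are illustrative assumptions, not figures from the study.

```python
# Hypothetical Poisson sampling calculation; volume and concentrations are assumptions.
import numpy as np

sample_volume_ml = 8.0                               # assumed culture sample volume
concentrations = np.array([0.01, 0.1, 0.5, 1.0])     # CFU/ml near the time of collection

p_sample_sterile = np.exp(-concentrations * sample_volume_ml)
for c, p in zip(concentrations, p_sample_sterile):
    print(f"{c:5.2f} CFU/ml -> P(no organism in the sample) = {p:.3f}")
```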

  19. Measurements of elastic moduli of pharmaceutical compacts: a new methodology using double compaction on a compaction simulator.

    PubMed

    Mazel, Vincent; Busignies, Virginie; Diarra, Harona; Tchoreloff, Pierre

    2012-06-01

    The elastic properties of pharmaceutical powders play an important role during the compaction process. The elastic behavior can be represented by Young's modulus (E) and Poisson's ratio (ν). However, during the compaction, the density of the powder bed changes and the moduli must be determined as a function of the porosity. This study proposes a new methodology to determine E and ν as a function of the porosity using double compaction in an instrumented compaction simulator. Precompression is used to form the compact, and the elastic properties are measured during the beginning of the main compaction. By measuring the axial and radial pressure and the powder bed thickness, E and ν can be determined as a function of the porosity. Two excipients were studied, microcrystalline cellulose (MCC) and anhydrous calcium phosphate (aCP). The values of E measured are comparable to those obtained using the classical three-point bending test. Poisson's ratio was found to be close to 0.24 for aCP with only small variations with the porosity, and to increase with a decreasing porosity for MCC (0.23-0.38). The classical approximation of a value of 0.3 for ν of pharmaceutical powders should therefore be taken with caution. Copyright © 2012 Wiley Periodicals, Inc.

  20. Self-consistent field model for strong electrostatic correlations and inhomogeneous dielectric media.

    PubMed

    Ma, Manman; Xu, Zhenli

    2014-12-28

    Electrostatic correlations and variable permittivity of electrolytes are essential for exploring many chemical and physical properties of interfaces in aqueous solutions. We propose a continuum electrostatic model for the treatment of these effects in the framework of the self-consistent field theory. The model incorporates a space- or field-dependent dielectric permittivity and an excluded ion-size effect for the correlation energy. This results in a self-energy modified Poisson-Nernst-Planck or Poisson-Boltzmann equation together with state equations for the self energy and the dielectric function. We show that the ionic size is of significant importance in predicting a finite self energy for an ion in an inhomogeneous medium. An asymptotic approximation is proposed for the solution of a generalized Debye-Hückel equation, which has been shown to capture the ionic correlation and dielectric self energy. Through simulating the ionic distribution surrounding a macroion, the modified self-consistent field model is shown to agree with particle-based Monte Carlo simulations. Numerical results for symmetric and asymmetric electrolytes demonstrate that the model is able to predict charge inversion in the high-correlation regime in the presence of multivalent interfacial ions, which is beyond the mean-field theory, and also show a strong effect on the double-layer structure due to the space- or field-dependent dielectric permittivity.

  1. Electrodiffusion Models of Neurons and Extracellular Space Using the Poisson-Nernst-Planck Equations—Numerical Simulation of the Intra- and Extracellular Potential for an Axon Model

    PubMed Central

    Pods, Jurgis; Schönke, Johannes; Bastian, Peter

    2013-01-01

    In neurophysiology, extracellular signals—as measured by local field potentials (LFP) or electroencephalography—are of great significance. Their exact biophysical basis is, however, still not fully understood. We present a three-dimensional model exploiting the cylinder symmetry of a single axon in extracellular fluid based on the Poisson-Nernst-Planck equations of electrodiffusion. The propagation of an action potential along the axonal membrane is investigated by means of numerical simulations. Special attention is paid to the Debye layer, the region with strong concentration gradients close to the membrane, which is explicitly resolved by the computational mesh. We focus on the evolution of the extracellular electric potential. A characteristic up-down-up LFP waveform in the far-field is found. Close to the membrane, the potential shows a more intricate shape. A comparison with the widely used line source approximation reveals similarities and demonstrates the strong influence of membrane currents. However, the electrodiffusion model shows another signal component stemming directly from the intracellular electric field, called the action potential echo. Depending on the neuronal configuration, this might have a significant effect on the LFP. In these situations, electrodiffusion models should be used for quantitative comparisons with experimental data. PMID:23823244

  2. A nonlinear equation for ionic diffusion in a strong binary electrolyte

    PubMed Central

    Ghosal, Sandip; Chen, Zhen

    2010-01-01

    The problem of the one-dimensional electro-diffusion of ions in a strong binary electrolyte is considered. The mathematical description, known as the Poisson–Nernst–Planck (PNP) system, consists of a diffusion equation for each species augmented by transport owing to a self-consistent electrostatic field determined by the Poisson equation. This description is also relevant to other important problems in physics, such as electron and hole diffusion across semiconductor junctions and the diffusion of ions in plasmas. If concentrations do not vary appreciably over distances of the order of the Debye length, the Poisson equation can be replaced by the condition of local charge neutrality first introduced by Planck. It can then be shown that both species diffuse at the same rate with a common diffusivity that is intermediate between that of the slow and fast species (ambipolar diffusion). Here, we derive a more general theory by exploiting the ratio of the Debye length to a characteristic length scale as a small asymptotic parameter. It is shown that the concentration of either species may be described by a nonlinear partial differential equation that provides a better approximation than the classical linear equation for ambipolar diffusion, but reduces to it in the appropriate limit. PMID:21818176

  3. On the extraction of pressure fields from PIV velocity measurements in turbines

    NASA Astrophysics Data System (ADS)

    Villegas, Arturo; Diez, Fancisco J.

    2012-11-01

    In this study, the pressure field for a water turbine is derived from particle image velocimetry (PIV) measurements. Measurements are performed in a recirculating water channel facility. The PIV measurements include calculating the tangential and axial forces applied to the turbine by solving the integral momentum equation around the airfoil. The results are compared with the forces obtained from the Blade Element Momentum theory (BEMT). Forces are calculated by using three different methods. In the first method, the pressure fields are obtained from PIV velocity fields by solving the Poisson equation. The boundary conditions are obtained from the Navier-Stokes momentum equations. In the second method, the pressure at the boundaries is determined by spatial integration of the pressure gradients along the boundaries. In the third method, applicable only to incompressible, inviscid, irrotational, and steady flow, the pressure is calculated using the Bernoulli equation. This approximated pressure is known to be accurate far from the airfoil and outside of the wake for steady flows. Additionally, the pressure is used to solve for the force from the integral momentum equation on the blade. From the three methods proposed to solve for pressure and forces from PIV measurements, the first one, which is solved by using the Poisson equation, provides the best match to the BEM theory calculations.
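
    As a concrete illustration of the first method, the hedged Python sketch below forms the source term of the pressure Poisson equation for a steady, incompressible, two-dimensional velocity field and relaxes it with Jacobi iterations under zero Dirichlet boundary values. The test velocity field, grid, density, boundary condition, and solver choice are all assumptions standing in for real PIV data and the paper's actual boundary treatment.

```python
# Hypothetical pressure-Poisson solve from a 2-D velocity field; all inputs are assumptions.
import numpy as np

def pressure_poisson(u, v, dx, rho=1000.0, n_iter=5000):
    """Solve lap(p) = -rho*(u_x**2 + 2*u_y*v_x + v_y**2) with p = 0 on the boundary."""
    dudx, dudy = np.gradient(u, dx, axis=1), np.gradient(u, dx, axis=0)
    dvdx, dvdy = np.gradient(v, dx, axis=1), np.gradient(v, dx, axis=0)
    src = -rho * (dudx**2 + 2.0 * dudy * dvdx + dvdy**2)

    p = np.zeros_like(u)
    for _ in range(n_iter):                                  # Jacobi sweeps on the interior
        p[1:-1, 1:-1] = 0.25 * (p[1:-1, 2:] + p[1:-1, :-2] +
                                p[2:, 1:-1] + p[:-2, 1:-1] - dx**2 * src[1:-1, 1:-1])
    return p

# Hypothetical smooth, divergence-free test field on a small uniform grid.
n = 64
dx = 1.0 / (n - 1)
y, x = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
u = np.sin(np.pi * x) * np.cos(np.pi * y)
v = -np.cos(np.pi * x) * np.sin(np.pi * y)
p = pressure_poisson(u, v, dx)
print(p.min(), p.max())
```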

  4. Poisson-Box Sampling algorithms for three-dimensional Markov binary mixtures

    NASA Astrophysics Data System (ADS)

    Larmier, Coline; Zoia, Andrea; Malvagi, Fausto; Dumonteil, Eric; Mazzolo, Alain

    2018-02-01

    Particle transport in Markov mixtures can be addressed by the so-called Chord Length Sampling (CLS) methods, a family of Monte Carlo algorithms taking into account the effects of stochastic media on particle propagation by generating on-the-fly the material interfaces crossed by the random walkers during their trajectories. Such methods enable a significant reduction of computational resources as opposed to reference solutions obtained by solving the Boltzmann equation for a large number of realizations of random media. CLS solutions, which neglect correlations induced by the spatial disorder, are faster albeit approximate, and might thus show discrepancies with respect to reference solutions. In this work we propose a new family of algorithms (called 'Poisson Box Sampling', PBS) aimed at improving the accuracy of the CLS approach for transport in d-dimensional binary Markov mixtures. In order to probe the features of PBS methods, we will focus on three-dimensional Markov media and revisit the benchmark problem originally proposed by Adams, Larsen and Pomraning [1] and extended by Brantley [2]: for these configurations we will compare reference solutions, standard CLS solutions and the new PBS solutions for scalar particle flux, transmission and reflection coefficients. PBS will be shown to perform better than CLS at the expense of a reasonable increase in computational time.

  5. Universality in the distance between two teams in a football tournament

    NASA Astrophysics Data System (ADS)

    da Silva, Roberto; Dahmen, Silvio R.

    2014-03-01

    Is football (soccer) a universal sport? Beyond the question of geographical distribution, where the answer is most certainly yes, when looked at from a mathematical viewpoint the scoring process during a match can be thought of, in a first approximation, as being modeled by a Poisson distribution. Recently, it was shown that the scoring of real tournaments can be reproduced by means of an agent-based model (da Silva et al. (2013) [24]) based on two simple hypotheses: (i) the ability of a team to win a match is given by the rate of a Poisson distribution that governs its scoring during a match; and (ii) such ability evolves over time according to results of previous matches. In this article we are interested in the question of whether the time series represented by the scores of teams have universal properties. For this purpose we define a distance between two teams as the square root of the sum of squares of the score differences between teams over all rounds in a double-round-robin-system and study how this distance evolves over time. Our results suggest a universal distance distribution of tournaments of different major leagues which is better characterized by an exponentially modified Gaussian (EMG). This result is corroborated by our agent-based model.

  6. Characteristics of service requests and service processes of fire and rescue service dispatch centers: analysis of real world data and the underlying probability distributions.

    PubMed

    Krueger, Ute; Schimmelpfeng, Katja

    2013-03-01

    A sufficient staffing level in fire and rescue dispatch centers is crucial for saving lives. Therefore, it is important to estimate the expected workload properly. For this purpose, we analyzed whether a dispatch center can be considered as a call center. Current call center publications very often model call arrivals as a non-homogeneous Poisson process. This bases on the underlying assumption of the caller's independent decision to call or not to call. In case of an emergency, however, there are often calls from more than one person reporting the same incident and thus, these calls are not independent. Therefore, this paper focuses on the dependency of calls in a fire and rescue dispatch center. We analyzed and evaluated several distributions in this setting. Results are illustrated using real-world data collected from a typical German dispatch center in Cottbus ("Leitstelle Lausitz"). We identified the Pólya distribution as being superior to the Poisson distribution in describing the call arrival rate and the Weibull distribution to be more suitable than the exponential distribution for interarrival times and service times. However, the commonly used distributions offer acceptable approximations. This is important for estimating a sufficient staffing level in practice using, e.g., the Erlang-C model.

  7. Determination of Poisson Ratio of Bovine Extraocular Muscle by Computed X-Ray Tomography

    PubMed Central

    Kim, Hansang; Yoo, Lawrence; Shin, Andrew; Demer, Joseph L.

    2013-01-01

    The Poisson ratio (PR) is a fundamental mechanical parameter that approximates the ratio of relative change in cross sectional area to tensile elongation. However, the PR of extraocular muscle (EOM) is almost never measured because of experimental constraints. The problem was overcome by determining changes in EOM dimensions using computed X-ray tomography (CT) at microscopic resolution during tensile elongation to determine transverse strain indicated by the change in cross-section. Fresh bovine EOM specimens were prepared. Specimens were clamped in a tensile fixture within a CT scanner (SkyScan, Belgium) with temperature and humidity control and stretched up to 35% of initial length. Sets of 500–800 contiguous CT images were obtained at 10-micron resolution before and after tensile loading. Digital 3D models were then built and discretized into 6–8-micron-thick elements. Changes in longitudinal thickness of each microscopic element were determined to calculate strain. Green's theorem was used to calculate areal strain in transverse directions orthogonal to the stretching direction. The mean PR from discretized 3D models for every microscopic element in 14 EOM specimens averaged 0.457 ± 0.004 (SD). The measured PR of bovine EOM is thus near the limit of incompressibility. PMID:23484091

  8. Hierarchical Approach to 'Atomistic' 3-D MOSFET Simulation

    NASA Technical Reports Server (NTRS)

    Asenov, Asen; Brown, Andrew R.; Davies, John H.; Saini, Subhash

    1999-01-01

    We present a hierarchical approach to the 'atomistic' simulation of aggressively scaled sub-0.1 micron MOSFET's. These devices are so small that their characteristics depend on the precise location of dopant atoms within them, not just on their average density. A full-scale three-dimensional drift-diffusion atomistic simulation approach is first described and used to verify more economical, but restricted, options. To reduce processor time and memory requirements at high drain voltage, we have developed a self-consistent option based on a solution of the current continuity equation restricted to a thin slab of the channel. This is coupled to the solution of the Poisson equation in the whole simulation domain in the Gummel iteration cycles. The accuracy of this approach is investigated in comparison to the full self-consistent solution. At low drain voltage, a single solution of the nonlinear Poisson equation is sufficient to extract the current with satisfactory accuracy. In this case, the current is calculated by solving the current continuity equation in a drift approximation only, also in a thin slab containing the MOSFET channel. The regions of applicability for the different components of this hierarchical approach are illustrated in example simulations covering the random dopant-induced threshold voltage fluctuations, threshold voltage lowering, threshold voltage asymmetry, and drain current fluctuations.

  9. State Estimation for Linear Systems Driven Simultaneously by Wiener and Poisson Processes.

    DTIC Science & Technology

    1978-12-01

    The state estimation problem of linear stochastic systems driven simultaneously by Wiener and Poisson processes is considered, especially the case...where the incident intensities of the Poisson processes are low and the system is observed in an additive white Gaussian noise. The minimum mean squared

  10. The Validity of Poisson Assumptions in a Combined Loglinear/MDS Mapping Model.

    ERIC Educational Resources Information Center

    Everett, James E.

    1993-01-01

    Addresses objections to the validity of assuming a Poisson loglinear model as the generating process for citations from one journal into another. Fluctuations in citation rate, serial dependence on citations, impossibility of distinguishing between rate changes and serial dependence, evidence for changes in Poisson rate, and transitivity…

  11. Method for resonant measurement

    DOEpatents

    Rhodes, George W.; Migliori, Albert; Dixon, Raymond D.

    1996-01-01

    A method of measurement of objects to determine object flaws, Poisson's ratio (σ) and shear modulus (μ) is shown and described. First, the frequency for expected degenerate responses is determined for one or more input frequencies, and then splitting of degenerate resonant modes is observed to identify the presence of flaws in the object. Poisson's ratio and the shear modulus can be determined by identification of resonances dependent only on the shear modulus, and then using that shear modulus to find Poisson's ratio using other modes dependent on both the shear modulus and Poisson's ratio.

  12. Elasticity of α-Cristobalite: A Silicon Dioxide with a Negative Poisson's Ratio

    NASA Astrophysics Data System (ADS)

    Yeganeh-Haeri, Amir; Weidner, Donald J.; Parise, John B.

    1992-07-01

    Laser Brillouin spectroscopy was used to determine the adiabatic single-crystal elastic stiffness coefficients of silicon dioxide (SiO_2) in the α-cristobalite structure. This SiO_2 polymorph, unlike other silicas and silicates, exhibits a negative Poisson's ratio; α-cristobalite contracts laterally when compressed and expands laterally when stretched. Tensorial analysis of the elastic coefficients shows that Poisson's ratio reaches a maximum value of -0.5 in some directions, whereas averaged values for the single-phased aggregate yield a Poisson's ratio of -0.16.

  13. Zero-inflated Conway-Maxwell Poisson Distribution to Analyze Discrete Data.

    PubMed

    Sim, Shin Zhu; Gupta, Ramesh C; Ong, Seng Huat

    2018-01-09

    In this paper, we study the zero-inflated Conway-Maxwell Poisson (ZICMP) distribution and develop a regression model. Score and likelihood ratio tests are also implemented for testing the inflation/deflation parameter. Simulation studies are carried out to examine the performance of these tests. A data example is presented to illustrate the concepts. In this example, the proposed model is compared to the well-known zero-inflated Poisson (ZIP) and the zero-inflated generalized Poisson (ZIGP) regression models. It is shown that the fit by ZICMP is comparable to or better than that of these models.
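
    For readers who want to see the distribution itself, the sketch below is a minimal implementation of the zero-inflated Conway-Maxwell-Poisson probability mass function under the usual parameterization (rate lambda, dispersion nu, zero-inflation weight pi); the normalizing constant is approximated by truncating its infinite series, and the parameter values are illustrative assumptions, not estimates from the paper.

      from math import exp, lgamma, log

      def cmp_log_pmf(k, lam, nu, truncation=200):
          """Log pmf of the Conway-Maxwell-Poisson(lam, nu) distribution,
          with the normalizing constant truncated at `truncation` terms."""
          log_terms = [j * log(lam) - nu * lgamma(j + 1) for j in range(truncation)]
          m = max(log_terms)
          log_z = m + log(sum(exp(t - m) for t in log_terms))   # log-sum-exp for stability
          return k * log(lam) - nu * lgamma(k + 1) - log_z

      def zicmp_pmf(k, pi, lam, nu):
          """Zero-inflated CMP: a point mass at zero with weight pi, mixed with CMP."""
          p_cmp = exp(cmp_log_pmf(k, lam, nu))
          if k == 0:
              return pi + (1.0 - pi) * p_cmp
          return (1.0 - pi) * p_cmp

      # Example: 30% structural zeros on top of a CMP part with lam=2.5, nu=1.2.
      print([round(zicmp_pmf(k, pi=0.3, lam=2.5, nu=1.2), 4) for k in range(6)])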

  14. A System of Poisson Equations for a Nonconstant Varadhan Functional on a Finite State Space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cavazos-Cadena, Rolando; Hernandez-Hernandez, Daniel

    2006-01-15

    Given a discrete-time Markov chain with finite state space and a stationary transition matrix, a system of 'local' Poisson equations characterizing the (exponential) Varadhan functional J(.) is given. The main results, which are derived for an arbitrary transition structure so that J(.) may be nonconstant, are as follows: (i) any solution to the local Poisson equations immediately renders Varadhan's functional, and (ii) a solution of the system always exists. The proof of this latter result is constructive and suggests a method to solve the local Poisson equations.

  15. Comparison of the Nernst-Planck model and the Poisson-Boltzmann model for electroosmotic flows in microchannels.

    PubMed

    Park, H M; Lee, J S; Kim, T W

    2007-11-15

    In the analysis of electroosmotic flows, the internal electric potential is usually modeled by the Poisson-Boltzmann equation. The Poisson-Boltzmann equation is derived from the assumption of thermodynamic equilibrium where the ionic distributions are not affected by fluid flows. Although this is a reasonable assumption for steady electroosmotic flows through straight microchannels, there are some important cases where convective transport of ions has nontrivial effects. In these cases, it is necessary to adopt the Nernst-Planck equation instead of the Poisson-Boltzmann equation to model the internal electric field. In the present work, the predictions of the Nernst-Planck equation are compared with those of the Poisson-Boltzmann equation for electroosmotic flows in various microchannels where the convective transport of ions is not negligible.

  16. Efficiency optimization of a fast Poisson solver in beam dynamics simulation

    NASA Astrophysics Data System (ADS)

    Zheng, Dawei; Pöplau, Gisela; van Rienen, Ursula

    2016-01-01

    Calculating the solution of Poisson's equation for the space-charge force is still the major time consumer in beam dynamics simulations and calls for further improvement. In this paper, we summarize a classical fast Poisson solver in beam dynamics simulations: the integrated Green's function method. We introduce three optimization steps for the classical Poisson solver routine: using the reduced integrated Green's function instead of the integrated Green's function; using the discrete cosine transform instead of the discrete Fourier transform for the Green's function; and using a novel fast convolution routine instead of an explicitly zero-padded convolution. The new Poisson solver routine preserves the advantages of fast computation and high accuracy. This provides a fast routine for high-performance calculation of the space charge effect in accelerators.
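
    To make the "explicitly zero-padded convolution" step concrete, the sketch below shows the textbook Hockney-style free-space solve: the charge density is zero-padded to twice the grid size and convolved with a sampled Green's function via FFTs. This is a generic 2-D illustration with an assumed regularization of the Green's function at the origin, not the reduced or integrated Green's function routine developed in the paper.

      import numpy as np

      def free_space_poisson_2d(rho, h):
          """Solve lap(phi) = -rho on an n x n grid with open boundaries via
          zero-padded FFT convolution with the 2-D free-space Green's function."""
          n = rho.shape[0]
          # Sampled Green's function G(r) = -ln(r) / (2*pi); the r = 0 cell is
          # regularized with the value at half a cell width (an assumption of this sketch).
          idx = np.arange(2 * n)
          idx = np.minimum(idx, 2 * n - idx)          # minimal-image distance on the padded grid
          x, y = np.meshgrid(idx * h, idx * h, indexing="ij")
          r = np.hypot(x, y)
          r[0, 0] = 0.5 * h
          green = -np.log(r) / (2.0 * np.pi)
          # Zero-pad the source to the doubled grid and convolve spectrally.
          rho_pad = np.zeros((2 * n, 2 * n))
          rho_pad[:n, :n] = rho
          phi_pad = np.fft.ifft2(np.fft.fft2(rho_pad) * np.fft.fft2(green)).real
          return phi_pad[:n, :n] * h * h              # quadrature weight of each cell

      # Point-like charge in the middle of a 64 x 64 grid (illustrative only).
      rho = np.zeros((64, 64))
      rho[32, 32] = 1.0
      phi = free_space_poisson_2d(rho, h=0.1)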

  17. Improved Denoising via Poisson Mixture Modeling of Image Sensor Noise.

    PubMed

    Zhang, Jiachao; Hirakawa, Keigo

    2017-04-01

    This paper describes a study aimed at comparing the real image sensor noise distribution to the models of noise often assumed in image denoising designs. A quantile analysis in pixel, wavelet transform, and variance stabilization domains reveals that the tails of Poisson, signal-dependent Gaussian, and Poisson-Gaussian models are too short to capture real sensor noise behavior. A new Poisson mixture noise model is proposed to correct the mismatch of tail behavior. Based on the fact that noise model mismatch results in image denoising that undersmoothes real sensor data, we propose a mixture-of-Poisson denoising method to remove the denoising artifacts without affecting image details, such as edges and textures. Experiments with real sensor data verify that denoising for real image sensor data is indeed improved by this new technique.

  18. A generalized right truncated bivariate Poisson regression model with applications to health data.

    PubMed

    Islam, M Ataharul; Chowdhury, Rafiqul I

    2017-01-01

    A generalized right truncated bivariate Poisson regression model is proposed in this paper. Estimation and tests for goodness of fit and over- or underdispersion are illustrated for both untruncated and right truncated bivariate Poisson regression models using a marginal-conditional approach. Estimation and test procedures are illustrated for bivariate Poisson regression models with applications to Health and Retirement Study data on the number of health conditions and the number of health care services utilized. The proposed test statistics are easy to compute, and it is evident from the results that the models fit the data very well. A comparison between the right truncated and untruncated bivariate Poisson regression models using the test for nonnested models clearly shows that the truncated model performs significantly better than the untruncated model.

  19. A generalized right truncated bivariate Poisson regression model with applications to health data

    PubMed Central

    Islam, M. Ataharul; Chowdhury, Rafiqul I.

    2017-01-01

    A generalized right truncated bivariate Poisson regression model is proposed in this paper. Estimation and tests for goodness of fit and over- or underdispersion are illustrated for both untruncated and right truncated bivariate Poisson regression models using a marginal-conditional approach. Estimation and test procedures are illustrated for bivariate Poisson regression models with applications to Health and Retirement Study data on the number of health conditions and the number of health care services utilized. The proposed test statistics are easy to compute, and it is evident from the results that the models fit the data very well. A comparison between the right truncated and untruncated bivariate Poisson regression models using the test for nonnested models clearly shows that the truncated model performs significantly better than the untruncated model. PMID:28586344

  20. Complex wet-environments in electronic-structure calculations

    NASA Astrophysics Data System (ADS)

    Fisicaro, Giuseppe; Genovese, Luigi; Andreussi, Oliviero; Marzari, Nicola; Goedecker, Stefan

    The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of an applied electrochemical potential, including complex electrostatic screening coming from the solvent. In the present work we present a solver to handle both the Generalized Poisson and the Poisson-Boltzmann equation. A preconditioned conjugate gradient (PCG) method has been implemented for the Generalized Poisson equation and the linear regime of the Poisson-Boltzmann equation, allowing the minimization problem to be solved iteratively in some ten iterations. On the other hand, a self-consistent procedure enables us to solve the Poisson-Boltzmann problem. The algorithms take advantage of a preconditioning procedure based on the BigDFT Poisson solver for the standard Poisson equation. They exhibit very high accuracy and parallel efficiency, and allow different boundary conditions, including surfaces. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and it will be released as an independent program, suitable for integration in other codes. We present test calculations for large proteins to demonstrate its efficiency and performance. This work was done within the PASC and NCCR MARVEL projects. Computer resources were provided by the Swiss National Supercomputing Centre (CSCS) under Project ID s499. LG also acknowledges support from the EXTMOS EU project.

  1. A comparison between Poisson and zero-inflated Poisson regression models with an application to number of black spots in Corriedale sheep

    PubMed Central

    Naya, Hugo; Urioste, Jorge I; Chang, Yu-Mei; Rodrigues-Motta, Mariana; Kremer, Roberto; Gianola, Daniel

    2008-01-01

    Dark spots in the fleece area are often associated with dark fibres in wool, which limits its competitiveness with other textile fibres. Field data from a sheep experiment in Uruguay revealed an excess number of zeros for dark spots. We compared the performance of four Poisson and zero-inflated Poisson (ZIP) models under four simulation scenarios. All models performed reasonably well under the same scenario for which the data were simulated. The deviance information criterion favoured a Poisson model with residual, while the ZIP model with a residual gave estimates closer to their true values under all simulation scenarios. Both Poisson and ZIP models with an error term at the regression level performed better than their counterparts without such an error. Field data from Corriedale sheep were analysed with Poisson and ZIP models with residuals. Parameter estimates were similar for both models. Although the posterior distribution of the sire variance was skewed due to a small number of rams in the dataset, the median of this variance suggested a scope for genetic selection. The main environmental factor was the age of the sheep at shearing. In summary, age related processes seem to drive the number of dark spots in this breed of sheep. PMID:18558072

  2. Persistently Auxetic Materials: Engineering the Poisson Ratio of 2D Self-Avoiding Membranes under Conditions of Non-Zero Anisotropic Strain.

    PubMed

    Ulissi, Zachary W; Govind Rajan, Ananth; Strano, Michael S

    2016-08-23

    Entropic surfaces represented by fluctuating two-dimensional (2D) membranes are predicted to have desirable mechanical properties when unstressed, including a negative Poisson's ratio ("auxetic" behavior). Herein, we present calculations of the strain-dependent Poisson ratio of self-avoiding 2D membranes demonstrating desirable auxetic properties over a range of mechanical strain. Finite-size membranes with unclamped boundary conditions have positive Poisson's ratio due to spontaneous non-zero mean curvature, which can be suppressed with an explicit bending rigidity in agreement with prior findings. Applying longitudinal strain along a singular axis to this system suppresses this mean curvature and the entropic out-of-plane fluctuations, resulting in a molecular-scale mechanism for realizing a negative Poisson's ratio above a critical strain, with values significantly more negative than the previously observed zero-strain limit for infinite sheets. We find that auxetic behavior persists over surprisingly high strains of more than 20% for the smallest surfaces, with desirable finite-size scaling producing surfaces with negative Poisson's ratio over a wide range of strains. These results promise the design of surfaces and composite materials with tunable Poisson's ratio by prestressing platelet inclusions or controlling the surface rigidity of a matrix of 2D materials.

  3. An intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces.

    PubMed

    Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying

    2013-09-01

    Poisson disk sampling has excellent spatial and spectral properties, and plays an important role in a variety of visual computing applications. Although many promising algorithms have been proposed for multidimensional sampling in Euclidean space, very few studies have been reported on the problem of generating Poisson disks on surfaces due to the complicated nature of the surface. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. In sharp contrast to conventional parallel approaches, our method neither partitions the given surface into small patches nor uses any spatial data structure to maintain the voids in the sampling domain. Instead, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. Our algorithm guarantees that the generated Poisson disks are uniformly and randomly distributed without bias. It is worth noting that our method is intrinsic and independent of the embedding space. This intrinsic feature allows us to generate Poisson disk patterns on arbitrary surfaces in R^n. To our knowledge, this is the first intrinsic, parallel, and accurate algorithm for surface Poisson disk sampling. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
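
    As background for readers unfamiliar with the sampling criterion, the sketch below is a plain sequential dart-throwing sampler in the unit square: uniform candidates are accepted only if they stay at least a distance r from every accepted sample. It is a baseline illustration only, not the intrinsic, priority-based parallel algorithm on surfaces proposed in the paper.

      import random

      def dart_throwing_poisson_disk(r, max_attempts=20000, seed=0):
          """Naive Poisson disk sampling in the unit square: accept a uniform
          candidate only if it is at least r away from all accepted samples."""
          rng = random.Random(seed)
          samples = []
          for _ in range(max_attempts):
              x, y = rng.random(), rng.random()
              if all((x - px) ** 2 + (y - py) ** 2 >= r * r for px, py in samples):
                  samples.append((x, y))
          return samples

      points = dart_throwing_poisson_disk(r=0.05)
      print(f"accepted {len(points)} samples")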

  4. Mechanical, Thermodynamic and Electronic Properties of Wurtzite and Zinc-Blende GaN Crystals.

    PubMed

    Qin, Hongbo; Luan, Xinghe; Feng, Chuang; Yang, Daoguo; Zhang, Guoqi

    2017-12-12

    Owing to the limitations of experimental methods in crystal characterization, in this study the mechanical, thermodynamic and electronic properties of wurtzite and zinc-blende GaN crystals were investigated by first-principles calculations based on density functional theory. Firstly, the bulk moduli, shear moduli, elastic moduli and Poisson's ratios of the two GaN polycrystals were calculated using the Voigt and Hill approximations, and the results show that wurtzite GaN has larger shear and elastic moduli and exhibits more obvious brittleness. Moreover, both wurtzite and zinc-blende GaN monocrystals present obvious mechanical anisotropic behavior. For the wurtzite GaN monocrystal, the maximum and minimum elastic moduli are located at orientations [001] and <111>, respectively, while they are in the orientations <111> and <100> for the zinc-blende GaN monocrystal, respectively. Compared to the elastic modulus, the shear moduli of the two GaN monocrystals have completely opposite direction dependences. However, different from the elastic and shear moduli, the bulk moduli of the two monocrystals are nearly isotropic, especially for the zinc-blende GaN. Besides, in the wurtzite GaN, Poisson's ratios in the planes containing the [001] axis are anisotropic, and the maximum value is 0.31, which occurs in the directions perpendicular to the [001] axis. For zinc-blende GaN, Poisson's ratios in the (100) and (111) planes are isotropic, while the Poisson's ratio in the (110) plane exhibits dramatically anisotropic behavior. Additionally, the calculated Debye temperatures of wurtzite and zinc-blende GaN are 641.8 and 620.2 K, respectively. At 300 K, the calculated heat capacities of wurtzite and zinc-blende GaN are 33.6 and 33.5 J mol⁻¹ K⁻¹, respectively. Finally, the band gap is located at the G point for both crystals, and the band gaps of wurtzite and zinc-blende GaN are 3.62 eV and 3.06 eV, respectively. At the G point, the lowest energy of the conduction band in the wurtzite GaN is larger, resulting in a wider band gap. Densities of states in the orbital hybridization between Ga and N atoms of wurtzite GaN are much higher, indicating that more electrons participate in forming Ga-N ionic bonds in the wurtzite GaN.

  5. GRAPE- TWO-DIMENSIONAL GRIDS ABOUT AIRFOILS AND OTHER SHAPES BY THE USE OF POISSON'S EQUATION

    NASA Technical Reports Server (NTRS)

    Sorenson, R. L.

    1994-01-01

    The ability to treat arbitrary boundary shapes is one of the most desirable characteristics of a method for generating grids, including those about airfoils. In a grid used for computing aerodynamic flow over an airfoil, or any other body shape, the surface of the body is usually treated as an inner boundary and often cannot be easily represented as an analytic function. The GRAPE computer program was developed to incorporate a method for generating two-dimensional finite-difference grids about airfoils and other shapes by the use of the Poisson differential equation. GRAPE can be used with any boundary shape, even one specified by tabulated points and including a limited number of sharp corners. The GRAPE program has been developed to be numerically stable and computationally fast. GRAPE can provide the aerodynamic analyst with an efficient and consistent means of grid generation. The GRAPE procedure generates a grid between an inner and an outer boundary by utilizing an iterative procedure to solve the Poisson differential equation subject to geometrical restraints. In this method, the inhomogeneous terms of the equation are automatically chosen such that two important effects are imposed on the grid. The first effect is control of the spacing between mesh points along mesh lines intersecting the boundaries. The second effect is control of the angles with which mesh lines intersect the boundaries. Along with the iterative solution to Poisson's equation, a technique of coarse-fine sequencing is employed to accelerate numerical convergence. GRAPE program control cards and input data are entered via the NAMELIST feature. Each variable has a default value such that user supplied data is kept to a minimum. Basic input data consists of the boundary specification, mesh point spacings on the boundaries, and mesh line angles at the boundaries. Output consists of a dataset containing the grid data and, if requested, a plot of the generated mesh. The GRAPE program is written in FORTRAN IV for batch execution and has been implemented on a CDC 6000 series computer with a central memory requirement of approximately 135K (octal) of 60 bit words. For plotted output the commercially available DISSPLA graphics software package is required. The GRAPE program was developed in 1980.

  6. Structure and osmotic pressure of ionic microgel dispersions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hedrick, Mary M.; Department of Chemistry and Biochemistry, North Dakota State University, Fargo, North Dakota 58108-6050; Chung, Jun Kyung

    We investigate structural and thermodynamic properties of aqueous dispersions of ionic microgels, soft colloidal gel particles that exhibit unusual phase behavior. Starting from a coarse-grained model of microgel macroions as charged spheres that are permeable to microions, we perform simulations and theoretical calculations using two complementary implementations of Poisson-Boltzmann (PB) theory. Within a one-component model, based on a linear-screening approximation for effective electrostatic pair interactions, we perform molecular dynamics simulations to compute macroion-macroion radial distribution functions, static structure factors, and macroion contributions to the osmotic pressure. For the same model, using a variational approximation for the free energy, we compute both macroion and microion contributions to the osmotic pressure. Within a spherical cell model, which neglects macroion correlations, we solve the nonlinear PB equation to compute microion distributions and osmotic pressures. By comparing the one-component and cell model implementations of PB theory, we demonstrate that the linear-screening approximation is valid for moderately charged microgels. By further comparing cell model predictions with simulation data for osmotic pressure, we chart the cell model's limits in predicting osmotic pressures of salty dispersions.

  7. On the validity of the Poisson assumption in sampling nanometer-sized aerosols

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Damit, Brian E; Wu, Dr. Chang-Yu; Cheng, Mengdawn

    2014-01-01

    A Poisson process is traditionally believed to apply to the sampling of aerosols. For a constant aerosol concentration, it is assumed that a Poisson process describes the fluctuation in the measured concentration because aerosols are stochastically distributed in space. Recent studies, however, have shown that sampling of micrometer-sized aerosols has non-Poissonian behavior with positive correlations. The validity of the Poisson assumption for nanometer-sized aerosols has not been examined and thus was tested in this study. Its validity was tested for four particle sizes - 10 nm, 25 nm, 50 nm and 100 nm - by sampling from indoor air with a DMA-CPC setup to obtain a time series of particle counts. Five metrics were calculated from the data: pair-correlation function (PCF), time-averaged PCF, coefficient of variation, probability of measuring a concentration at least 25% greater than average, and posterior distributions from Bayesian inference. To identify departures from Poissonian behavior, these metrics were also calculated for 1,000 computer-generated Poisson time series with the same mean as the experimental data. For nearly all comparisons, the experimental data fell within the range of 80% of the Poisson-simulation values. Essentially, the metrics for the experimental data were indistinguishable from a simulated Poisson process. The greater influence of Brownian motion for nanometer-sized aerosols may explain the Poissonian behavior observed for smaller aerosols. Although the Poisson assumption was found to be valid in this study, it must be carefully applied as the results here do not definitively prove applicability in all sampling situations.
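
    One of the simpler checks used above, comparing the coefficient of variation of the measured counts against simulated Poisson series with the same mean, takes only a few lines; the particle-count series below is synthetic, since the original measurement data are not reproduced here.

      import numpy as np

      rng = np.random.default_rng(11)
      counts = rng.poisson(8.0, size=600)              # stand-in for a measured count time series

      def coefficient_of_variation(x):
          return np.std(x, ddof=1) / np.mean(x)

      cv_data = coefficient_of_variation(counts)
      # Reference distribution: CVs of many simulated Poisson series with the same mean.
      cv_sim = np.array([coefficient_of_variation(rng.poisson(counts.mean(), counts.size))
                         for _ in range(1000)])
      lo, hi = np.percentile(cv_sim, [10, 90])         # central 80% band, as in the study
      print(f"CV of data: {cv_data:.3f}; 80% Poisson band: [{lo:.3f}, {hi:.3f}]")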

  8. Bringing consistency to simulation of population models--Poisson simulation as a bridge between micro and macro simulation.

    PubMed

    Gustafsson, Leif; Sternad, Mikael

    2007-10-01

    Population models concern collections of discrete entities such as atoms, cells, humans, animals, etc., where the focus is on the number of entities in a population. Because of the complexity of such models, simulation is usually needed to reproduce their complete dynamic and stochastic behaviour. Two main types of simulation models are used for different purposes, namely micro-simulation models, where each individual is described with its particular attributes and behaviour, and macro-simulation models based on stochastic differential equations, where the population is described in aggregated terms by the number of individuals in different states. Consistency between micro- and macro-models is a crucial but often neglected aspect. This paper demonstrates how the Poisson Simulation technique can be used to produce a population macro-model consistent with the corresponding micro-model. This is accomplished by defining Poisson Simulation in strictly mathematical terms as a series of Poisson processes that generate sequences of Poisson distributions with dynamically varying parameters. The method can be applied to any population model. It provides the unique stochastic and dynamic macro-model consistent with a correct micro-model. The paper also presents a general macro form for stochastic and dynamic population models. In an appendix, Poisson Simulation is compared with Markov Simulation, showing a number of advantages. In particular, aggregation into state variables and aggregation of many events per time step make Poisson Simulation orders of magnitude faster than Markov Simulation. Furthermore, you can build and execute much larger and more complicated models with Poisson Simulation than is possible with the Markov approach.
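
    The core of the Poisson Simulation technique described above fits in a few lines: in each time step the number of events of every kind is drawn from a Poisson distribution whose parameter is the current rate multiplied by the step length. The sketch below applies this to a simple birth-death population model; the rate constants and step size are made-up illustrative values.

      import numpy as np

      def poisson_simulation_birth_death(x0=100, birth_rate=0.3, death_rate=0.25,
                                         dt=0.1, steps=500, seed=1):
          """Macro-level stochastic simulation: per time step, births and deaths
          are Poisson-distributed counts with parameter rate(state) * dt."""
          rng = np.random.default_rng(seed)
          x = x0
          trajectory = [x]
          for _ in range(steps):
              births = rng.poisson(birth_rate * x * dt)
              deaths = rng.poisson(death_rate * x * dt)
              x = max(x + births - deaths, 0)
              trajectory.append(x)
          return trajectory

      print(poisson_simulation_birth_death()[-5:])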

  9. A class of renormalised meshless Laplacians for boundary value problems

    NASA Astrophysics Data System (ADS)

    Basic, Josip; Degiuli, Nastia; Ban, Dario

    2018-02-01

    A meshless approach to approximating spatial derivatives on scattered point arrangements is presented in this paper. Three different derivations of approximate discrete Laplace operator formulations are produced using the Taylor series expansion and a renormalised least-squares correction of the first spatial derivatives. Numerical analyses are performed for the introduced Laplacian formulations, and their convergence rate and computational efficiency are examined. The tests are conducted on regular and highly irregular scattered point arrangements. The results are compared to those obtained by the smoothed particle hydrodynamics method and the finite difference method on a regular grid. Finally, the strong form of various Poisson and diffusion equations with Dirichlet or Robin boundary conditions is solved in two and three dimensions by making use of the introduced operators in order to examine their stability and accuracy for boundary value problems. The introduced Laplacian operators perform well for highly irregular point distributions and offer adequate accuracy for mesh-based and mesh-free numerical methods that require frequent movement of the grid or point cloud.

  10. Burst of virus infection and a possibly largest epidemic threshold of non-Markovian susceptible-infected-susceptible processes on networks

    NASA Astrophysics Data System (ADS)

    Liu, Qiang; Van Mieghem, Piet

    2018-02-01

    Since a real epidemic process is not necessarily Markovian, the epidemic threshold obtained under the Markovian assumption may not be realistic. To understand general non-Markovian epidemic processes on networks, we study the Weibullian susceptible-infected-susceptible (SIS) process in which the infection process is a renewal process with a Weibull time distribution. We find that, if the infection rate exceeds 1/ln(λ1+1), where λ1 is the largest eigenvalue of the network's adjacency matrix, then the infection will persist on the network under the mean-field approximation. Thus, 1/ln(λ1+1) is possibly the largest epidemic threshold for a general non-Markovian SIS process with a Poisson curing process under the mean-field approximation. Furthermore, non-Markovian SIS processes may result in a multimodal prevalence. As a byproduct, we show that a limiting Weibullian SIS process has the potential to model bursts of a synchronized infection.
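
    The quoted threshold only requires the largest adjacency eigenvalue, so it is easy to evaluate for any given graph. The sketch below computes 1/ln(lambda_1 + 1) for a small random graph; the graph generator and its parameters are illustrative assumptions, not part of the paper.

      import numpy as np

      def weibullian_sis_threshold(adjacency):
          """Mean-field threshold 1 / ln(lambda_1 + 1) for the infection rate,
          where lambda_1 is the largest eigenvalue of the adjacency matrix."""
          lam1 = np.max(np.linalg.eigvalsh(adjacency))
          return 1.0 / np.log(lam1 + 1.0)

      # Small symmetric Erdos-Renyi graph as a toy example.
      rng = np.random.default_rng(7)
      a = np.triu(rng.random((50, 50)) < 0.1, k=1).astype(float)
      a = a + a.T
      print(f"largest eigenvalue: {np.max(np.linalg.eigvalsh(a)):.3f}, "
            f"threshold: {weibullian_sis_threshold(a):.3f}")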

  11. Parallel Computing of Upwelling in a Rotating Stratified Flow

    NASA Astrophysics Data System (ADS)

    Cui, A.; Street, R. L.

    1997-11-01

    A code for three-dimensional, unsteady, incompressible, turbulent flow has been implemented on the IBM SP2, using message passing. The effects of rotation and variable density are included. A finite volume method is used to discretize the Navier-Stokes equations in general curvilinear coordinates on a non-staggered grid. All the spatial derivatives are approximated using second-order central differences with the exception of the convection terms, which are handled with special upwind-difference schemes. The semi-implicit, second-order accurate, time-advancement scheme employs the Adams-Bashforth method for the explicit terms and Crank-Nicolson for the implicit terms. A multigrid method, with the four-color ZEBRA as smoother, is used to solve the Poisson equation for pressure, while the momentum equations are solved with an approximate factorization technique. The code was successfully validated for a variety of test cases. Simulations of a laboratory model of coastal upwelling in a rotating annulus are in progress and will be presented.

  12. Mapping quantum-classical Liouville equation: projectors and trajectories.

    PubMed

    Kelly, Aaron; van Zon, Ramses; Schofield, Jeremy; Kapral, Raymond

    2012-02-28

    The evolution of a mixed quantum-classical system is expressed in the mapping formalism where discrete quantum states are mapped onto oscillator states, resulting in a phase space description of the quantum degrees of freedom. By defining projection operators onto the mapping states corresponding to the physical quantum states, it is shown that the mapping quantum-classical Liouville operator commutes with the projection operator so that the dynamics is confined to the physical space. It is also shown that a trajectory-based solution of this equation can be constructed that requires the simulation of an ensemble of entangled trajectories. An approximation to this evolution equation which retains only the Poisson bracket contribution to the evolution operator does admit a solution in an ensemble of independent trajectories but it is shown that this operator does not commute with the projection operators and the dynamics may take the system outside the physical space. The dynamical instabilities, utility, and domain of validity of this approximate dynamics are discussed. The effects are illustrated by simulations on several quantum systems.

  13. Adaptive Detector Arrays for Optical Communications Receivers

    NASA Technical Reports Server (NTRS)

    Vilnrotter, V.; Srinivasan, M.

    2000-01-01

    The structure of an optimal adaptive array receiver for ground-based optical communications is described and its performance investigated. Kolmogorov phase screen simulations are used to model the sample functions of the focal-plane signal distribution due to turbulence and to generate realistic spatial distributions of the received optical field. This novel array detector concept reduces interference from background radiation by effectively assigning higher confidence levels at each instant of time to those detector elements that contain significant signal energy and suppressing those that do not. A simpler suboptimum structure that replaces the continuous weighting function of the optimal receiver by a hard decision on the selection of the signal detector elements also is described and evaluated. Approximations and bounds to the error probability are derived and compared with the exact calculations and receiver simulation results. It is shown that, for photon-counting receivers observing Poisson-distributed signals, performance improvements of approximately 5 dB can be obtained over conventional single-detector photon-counting receivers, when operating in high background environments.

  14. Poisson Spot with Magnetic Levitation

    ERIC Educational Resources Information Center

    Hoover, Matthew; Everhart, Michael; D'Arruda, Jose

    2010-01-01

    In this paper we describe a unique method for obtaining the famous Poisson spot without adding obstacles to the light path, which could interfere with the effect. A Poisson spot is the interference effect from parallel rays of light diffracting around a solid spherical object, creating a bright spot in the center of the shadow.

  15. Modelling infant mortality rate in Central Java, Indonesia use generalized poisson regression method

    NASA Astrophysics Data System (ADS)

    Prahutama, Alan; Sudarno

    2018-05-01

    The infant mortality rate is the number of deaths under one year of age occurring among the live births in a given geographical area during a given year, per 1,000 live births occurring among the population of the given geographical area during the same year. This problem needs to be addressed because it is an important element of a country’s economic development. A high infant mortality rate will disrupt the stability of a country as it relates to the sustainability of the population in the country. One regression model that can be used to analyze the relationship between a dependent variable Y in the form of discrete data and independent variables X is the Poisson regression model. Regression models recently used for data with a discrete dependent variable include, among others, Poisson regression, negative binomial regression and generalized Poisson regression. In this research, generalized Poisson regression modeling gives a better AIC value than Poisson regression. The most significant variable is the number of health facilities (X1), while the variable with the greatest influence on the infant mortality rate is the average breastfeeding (X9).

  16. Modeling health survey data with excessive zero and K responses.

    PubMed

    Lin, Ting Hsiang; Tsai, Min-Hsiao

    2013-04-30

    Zero-inflated Poisson regression is a popular tool used to analyze data with excessive zeros. Although much work has already been performed to fit zero-inflated data, most models heavily depend on special features of the individual data. To be specific, this means that there is a sizable group of respondents who endorse the same answers making the data have peaks. In this paper, we propose a new model with the flexibility to model excessive counts other than zero, and the model is a mixture of multinomial logistic and Poisson regression, in which the multinomial logistic component models the occurrence of excessive counts, including zeros, K (where K is a positive integer) and all other values. The Poisson regression component models the counts that are assumed to follow a Poisson distribution. Two examples are provided to illustrate our models when the data have counts containing many ones and sixes. As a result, the zero-inflated and K-inflated models exhibit a better fit than the zero-inflated Poisson and standard Poisson regressions. Copyright © 2012 John Wiley & Sons, Ltd.
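
    A minimal way to see the mixture structure described above is to write down its probability mass function: with some probability the response is an excess zero, with some probability an excess K, and otherwise it follows a Poisson count. The sketch below implements that pmf directly; in the full model of the paper the mixing probabilities and the Poisson mean are tied to covariates through multinomial logistic and Poisson regressions, which is omitted here, and the parameter values are illustrative.

      from math import exp, lgamma, log

      def poisson_pmf(y, mu):
          return exp(y * log(mu) - mu - lgamma(y + 1))

      def zero_k_inflated_pmf(y, p0, pk, k, mu):
          """Mixture pmf: excess zeros with probability p0, excess K's with
          probability pk, otherwise a Poisson(mu) count."""
          base = (1.0 - p0 - pk) * poisson_pmf(y, mu)
          if y == 0:
              return p0 + base
          if y == k:
              return pk + base
          return base

      # Toy parameters: inflation at 0 and at K = 6 on top of a Poisson(2.0) count.
      print([round(zero_k_inflated_pmf(y, p0=0.2, pk=0.1, k=6, mu=2.0), 4) for y in range(8)])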

  17. Structural stability, mechanical properties, electronic structures and thermal properties of XS (X = Ti, V, Cr, Mn, Fe, Co, Ni) binary compounds

    NASA Astrophysics Data System (ADS)

    Liu, Yangzhen; Xing, Jiandong; Fu, Hanguang; Li, Yefei; Sun, Liang; Lv, Zheng

    2017-08-01

    The properties of sulfides are important in the design of new iron and steel materials. In this study, first-principles calculations were used to estimate the structural stability, mechanical properties, electronic structures and thermal properties of XS (X = Ti, V, Cr, Mn, Fe, Co, Ni) binary compounds. The results reveal that these XS binary compounds are thermodynamically stable, because their formation enthalpies are negative. The elastic constants, Cij, and the moduli (B, G, E) were investigated using the stress-strain method and the Voigt-Reuss-Hill approximation, respectively. The anisotropy of the sulfides is discussed in terms of an anisotropy index and three-dimensional surface contours. The electronic structures reveal that the bonding characteristics of the XS compounds are a mixture of metallic and covalent bonds. Using a quasi-harmonic Debye approximation, the heat capacities at constant pressure and constant volume were estimated. NiS possesses the largest CP and CV of the sulfides.

  18. Analyzing hospitalization data: potential limitations of Poisson regression.

    PubMed

    Weaver, Colin G; Ravani, Pietro; Oliver, Matthew J; Austin, Peter C; Quinn, Robert R

    2015-08-01

    Poisson regression is commonly used to analyze hospitalization data when outcomes are expressed as counts (e.g. number of days in hospital). However, data often violate the assumptions on which Poisson regression is based. More appropriate extensions of this model, while available, are rarely used. We compared hospitalization data between 206 patients treated with hemodialysis (HD) and 107 treated with peritoneal dialysis (PD) using Poisson regression and compared results from standard Poisson regression with those obtained using three other approaches for modeling count data: negative binomial (NB) regression, zero-inflated Poisson (ZIP) regression and zero-inflated negative binomial (ZINB) regression. We examined the appropriateness of each model and compared the results obtained with each approach. During a mean 1.9 years of follow-up, 183 of 313 patients (58%) were never hospitalized (indicating an excess of 'zeros'). The data also displayed overdispersion (variance greater than mean), violating another assumption of the Poisson model. Using four criteria, we determined that the NB and ZINB models performed best. According to these two models, patients treated with HD experienced similar hospitalization rates as those receiving PD {NB rate ratio (RR): 1.04 [bootstrapped 95% confidence interval (CI): 0.49-2.20]; ZINB summary RR: 1.21 (bootstrapped 95% CI 0.60-2.46)}. Poisson and ZIP models fit the data poorly and had much larger point estimates than the NB and ZINB models [Poisson RR: 1.93 (bootstrapped 95% CI 0.88-4.23); ZIP summary RR: 1.84 (bootstrapped 95% CI 0.88-3.84)]. We found substantially different results when modeling hospitalization data, depending on the approach used. Our results argue strongly for a sound model selection process and improved reporting around statistical methods used for modeling count data. © The Author 2015. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
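
    The kind of model comparison described above can be reproduced in outline with standard count-model routines. The sketch below fits Poisson, negative binomial, ZIP and ZINB models with statsmodels on simulated data; the dialysis data are not public, so the variable names, simulated rates and zero fraction are illustrative assumptions only.

      import numpy as np
      import pandas as pd
      from statsmodels.discrete.discrete_model import Poisson, NegativeBinomial
      from statsmodels.discrete.count_model import (ZeroInflatedPoisson,
                                                    ZeroInflatedNegativeBinomialP)

      # Simulated stand-in for the cohort: a treatment indicator and zero-inflated,
      # overdispersed hospital-day counts (illustrative only).
      rng = np.random.default_rng(0)
      n = 313
      hd = rng.integers(0, 2, n)                       # 1 = hemodialysis, 0 = peritoneal dialysis
      never = rng.random(n) < 0.58                     # excess zeros (never hospitalized)
      days = np.where(never, 0, rng.negative_binomial(1, 1 / (1 + 5 * np.exp(0.3 * hd)), n))
      x = pd.DataFrame({"const": 1.0, "hd": hd})

      for name, model in [
          ("Poisson", Poisson(days, x)),
          ("NegBin", NegativeBinomial(days, x)),
          ("ZIP", ZeroInflatedPoisson(days, x, exog_infl=x)),
          ("ZINB", ZeroInflatedNegativeBinomialP(days, x, exog_infl=x)),
      ]:
          res = model.fit(disp=0)                      # zero-inflated fits may need start_params tuning
          print(f"{name:7s} AIC = {res.aic:8.1f}  rate ratio (HD vs PD) = {np.exp(res.params['hd']):.2f}")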

  19. First-principles prediction of Si-doped Fe carbide as one of the possible constituents of Earth's inner core

    NASA Astrophysics Data System (ADS)

    Das, Tilak; Chatterjee, Swastika; Ghosh, Sujoy; Saha-Dasgupta, Tanusri

    2017-09-01

    We perform a computational study based on first-principles calculations to investigate the relative stability and elastic properties of the doped and undoped Fe carbide compounds at 200-364 GPa. We find that upon doping a few weight percent of Si impurities at the carbon sites in Fe7C3 carbide phases, the values of Poisson's ratio and density increase while VP and VS decrease compared to their undoped counterparts. This leads to a marked improvement in the agreement of seismic parameters such as P wave and S wave velocity, Poisson's ratio, and density with the Preliminary Reference Earth Model (PREM) data. The agreement with PREM data is found to be better for the orthorhombic phase of iron carbide (o-Fe7C3) compared to the hexagonal phase (h-Fe7C3). Our theoretical analysis indicates that Fe carbide containing Si impurities can be a possible constituent of the Earth's inner core. Since the density of undoped Fe7C3 is low compared to that of the inner core, as discussed in a recent theoretical study, our proposal of Si-doped Fe7C3 can provide an alternative solution as an important component of the Earth's inner core.

  20. Holographic study of conventional and negative Poisson's ratio metallic foams - Elasticity, yield and micro-deformation

    NASA Technical Reports Server (NTRS)

    Chen, C. P.; Lakes, R. S.

    1991-01-01

    An experimental study by holographic interferometry is reported of the following material properties of conventional and negative Poisson's ratio copper foams: Young's moduli, Poisson's ratios, yield strengths and characteristic lengths associated with inhomogeneous deformation. The Young's modulus and yield strength of the conventional copper foam were comparable to those predicted by microstructural modeling on the basis of cellular rib bending. The reentrant copper foam exhibited a negative Poisson's ratio, as indicated by the elliptical contour fringes on the specimen surface in the bending tests. Inhomogeneous, non-affine deformation was observed holographically in both foam materials.

  1. Polymers at interfaces and in colloidal dispersions.

    PubMed

    Fleer, Gerard J

    2010-09-15

    This review is an extended version of the Overbeek lecture 2009, given at the occasion of the 23rd Conference of ECIS (European Colloid and Interface Society) in Antalya, where I received the fifth Overbeek Gold Medal awarded by ECIS. I first summarize the basics of numerical SF-SCF: the Scheutjens-Fleer version of Self-Consistent-Field theory for inhomogeneous systems, including polymer adsorption and depletion. The conformational statistics are taken from the (non-SCF) DiMarzio-Rubin lattice model for homopolymer adsorption, which enumerates the conformational details exactly by a discrete propagator for the endpoint distribution but does not account for polymer-solvent interaction and for the volume-filling constraint. SF-SCF corrects for this by adjusting the field such that it becomes self-consistent. The model can be generalized to more complex systems: polydispersity, brushes, random and block copolymers, polyelectrolytes, branching, surfactants, micelles, membranes, vesicles, wetting, etc. On a mean-field level the results are exact; the disadvantage is that only numerical data are obtained. Extensions to excluded-volume polymers are in progress. Analytical approximations for simple systems are based upon solving the Edwards diffusion equation. This equation is the continuum variant of the lattice propagator, but ignores the finite segment size (analogous to the Poisson-Boltzmann equation without a Stern layer). By using the discrete propagator for segments next to the surface as the boundary condition in the continuum model, the finite segment size can be introduced into the continuum description, like the ion size in the Stern-Poisson-Boltzmann model. In most cases a ground-state approximation is needed to find analytical solutions. In this way realistic analytical approximations for simple cases can be found, including depletion effects that occur in mixtures of colloids plus non-adsorbing polymers. In the final part of this review I discuss a generalization of the free-volume theory (FVT) for the phase behavior of colloids and non-adsorbing polymer. In FVT the polymer is considered to be ideal: the osmotic pressure Pi follows the Van 't Hoff law, the depletion thickness delta equals the radius of gyration. This restricts the validity of FVT to the so-called colloid limit (polymer much smaller than the colloids). We have been able to find simple analytical approximations for Pi and delta which account for non-ideality and include established results for the semidilute limit. So we could generalize FVT to GFVT, and can now also describe the so-called protein limit (polymer larger than the 'protein-like' colloids), where the binodal polymer concentrations scale in a simple way with the polymer/colloid size ratio. For an intermediate case (polymer size approximately colloid size) we could give a quantitative description of careful experimental data. Copyright 2010 Elsevier B.V. All rights reserved.

  2. Thermoelastic Residual Stresses and Deformations at Laser Treatment

    NASA Astrophysics Data System (ADS)

    Gusarov, A. V.; Malakhova-Ziablova, I. S.; Pavlov, M. D.

    A thermoelastic model implying relaxation of stresses at melting is applied for materials with arbitrary thermoelastic properties and melting points. The range of Poisson's ratio from 0.17 to 0.34 is studied numerically. The residual stresses are independent of the space scale. In narrow remelted zones and beads the maximum longitudinal tensile stress is approximately twice as high as the transverse one. The calculations predict cracking of alumina, even with 1600 °C preheating, plastic deformation or cracking of the hard metal alloys H13 and TA6V, and no destruction of polystyrene and the strongest grades of quartz glass. The calculation results can be used for predicting the thermomechanical stability of materials under laser treatment.

  3. Fabrication and characterization of a co-planar detector in diamond for low energy single ion implantation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abraham, John Bishoy Sam; Pacheco, Jose L.; Aguirre, Brandon Adrian

    2016-08-09

    We demonstrate low energy single ion detection using a co-planar detector fabricated on a diamond substrate and characterized by ion beam induced charge collection. Histograms are taken with low fluence ion pulses illustrating quantized ion detection down to a single ion with a signal-to-noise ratio of approximately 10. We anticipate that this detection technique can serve as a basis to optimize the yield of single color centers in diamond. In conclusion, the ability to count ions into a diamond substrate is expected to reduce the uncertainty in the yield of color center formation by removing Poisson statistics from the implantation process.

  4. Nodal infection in Markovian susceptible-infected-susceptible and susceptible-infected-removed epidemics on networks are non-negatively correlated

    NASA Astrophysics Data System (ADS)

    Cator, E.; Van Mieghem, P.

    2014-05-01

    By invoking the famous Fortuin, Kasteleyn, and Ginibre (FKG) inequality, we prove the conjecture that the correlation of infection at the same time between any pair of nodes in a network cannot be negative for (exact) Markovian susceptible-infected-susceptible (SIS) and susceptible-infected-removed (SIR) epidemics on networks. The truth of the conjecture establishes that the N-intertwined mean-field approximation (NIMFA) upper bounds the infection probability in any graph so that network design based on NIMFA always leads to safe protections against malware spread. However, when the infection or/and curing are not Poisson processes, the infection correlation between two nodes can be negative.

  5. Nodal infection in Markovian susceptible-infected-susceptible and susceptible-infected-removed epidemics on networks are non-negatively correlated.

    PubMed

    Cator, E; Van Mieghem, P

    2014-05-01

    By invoking the famous Fortuin, Kasteleyn, and Ginibre (FKG) inequality, we prove the conjecture that the correlation of infection at the same time between any pair of nodes in a network cannot be negative for (exact) Markovian susceptible-infected-susceptible (SIS) and susceptible-infected-removed (SIR) epidemics on networks. The truth of the conjecture establishes that the N-intertwined mean-field approximation (NIMFA) upper bounds the infection probability in any graph so that network design based on NIMFA always leads to safe protections against malware spread. However, when the infection or/and curing are not Poisson processes, the infection correlation between two nodes can be negative.

  6. On the gap between an empirical distribution and an exponential distribution of waiting times for price changes in a financial market

    NASA Astrophysics Data System (ADS)

    Sazuka, Naoya

    2007-03-01

    We analyze waiting times for price changes in a foreign currency exchange rate. Recent empirical studies of high-frequency financial data support the view that trades in financial markets do not follow a Poisson process and that the waiting times between trades are not exponentially distributed. Here we show that our data are well approximated by a Weibull distribution rather than an exponential distribution in the non-asymptotic regime. Moreover, we quantitatively evaluate how far the empirical data are from an exponential distribution using a Weibull fit. Finally, we discuss a transition between a Weibull law and a power law in the long-time asymptotic regime.
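
    A quick way to check whether waiting times look exponential or Weibull, in the spirit of the analysis above, is to fit both distributions and compare log-likelihoods; the waiting times below are synthetic Weibull draws, since the exchange-rate data are not included here, and the chosen shape and scale are arbitrary.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(42)
      waits = rng.weibull(0.6, size=5000) * 30.0       # synthetic waiting times (seconds)

      # Fit both candidate distributions with the location fixed at zero.
      expon_params = stats.expon.fit(waits, floc=0)
      weibull_params = stats.weibull_min.fit(waits, floc=0)

      ll_expon = stats.expon.logpdf(waits, *expon_params).sum()
      ll_weibull = stats.weibull_min.logpdf(waits, *weibull_params).sum()
      print(f"log-likelihood: exponential {ll_expon:.1f}, Weibull {ll_weibull:.1f}")
      print(f"fitted Weibull shape: {weibull_params[0]:.2f}  (a value of 1.0 would mean exponential)")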

  7. 2-D modeling and analysis of short-channel behavior of a front high-K gate stack triple-material gate SB SON MOSFET

    NASA Astrophysics Data System (ADS)

    Banerjee, Pritha; Kumari, Tripty; Sarkar, Subir Kumar

    2018-02-01

    This paper presents the 2-D analytical modeling of a front high-K gate stack triple-material gate Schottky Barrier Silicon-On-Nothing MOSFET. Using the two-dimensional Poisson's equation and the popular parabolic potential approximation, expressions for the surface potential and the electric field have been derived. In addition, the response of the proposed device to aggressive downscaling, that is, the extent of its immunity to the different short-channel effects, has also been examined in this work. The analytical results obtained have been validated against simulation results obtained using ATLAS, a two-dimensional device simulator from SILVACO.

  8. Study on longitudinal dispersion relation in one-dimensional relativistic plasma: Linear theory and Vlasov simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, H.; Wu, S. Z.; Zhou, C. T.

    2013-09-15

    The dispersion relation of one-dimensional longitudinal plasma waves in relativistic homogeneous plasmas is investigated with both linear theory and Vlasov simulation in this paper. From the Vlasov-Poisson equations, the linear dispersion relation is derived for the proper one-dimensional Jüttner distribution. The numerically obtained linear dispersion relation, as well as an approximate formula for the plasma wave frequency in the long-wavelength limit, is given. The dispersion of the longitudinal wave is also simulated with a relativistic Vlasov code. The real and imaginary parts of the dispersion relation are studied in detail by varying the wave number and plasma temperature. Simulation results are in agreement with the established linear theory.

  9. Background stratified Poisson regression analysis of cohort data.

    PubMed

    Richardson, David B; Langholz, Bryan

    2012-03-01

    Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models.
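
    As a point of reference for the conditional approach, the sketch below fits the corresponding unconditional model, an ordinary Poisson regression with explicit stratum indicator terms and a log person-years offset, using the statsmodels formula interface; the cohort table is a simulated placeholder, not the Life Span Study or miner data, and passing the offset this way is an assumption about the statsmodels API rather than anything from the paper.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      # Simulated cohort table: person-years and case counts by dose and background
      # stratum (e.g., age/sex/period cells). All numbers are illustrative placeholders.
      rng = np.random.default_rng(3)
      cells = pd.DataFrame({
          "stratum": np.repeat(np.arange(20), 5),
          "dose": np.tile([0.0, 0.1, 0.5, 1.0, 2.0], 20),
          "pyr": rng.uniform(500, 5000, 100),
      })
      baseline = np.exp(rng.normal(-6, 0.4, 20))[cells["stratum"]]
      cells["cases"] = rng.poisson(cells["pyr"] * baseline * np.exp(0.4 * cells["dose"]))

      # Unconditional background-stratified fit: one indicator per stratum plus the
      # log-linear dose term of primary interest, with a log person-years offset.
      fit = smf.poisson("cases ~ dose + C(stratum)", data=cells,
                        offset=np.log(cells["pyr"])).fit(disp=0)
      print(f"estimated log rate ratio per unit dose: {fit.params['dose']:.3f}")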

  10. [Application of negative binomial regression and modified Poisson regression in the research of risk factors for injury frequency].

    PubMed

    Cao, Qingqing; Wu, Zhenqiang; Sun, Ying; Wang, Tiezhu; Han, Tengwei; Gu, Chaomei; Sun, Yehuan

    2011-11-01

    To explore the application of negative binomial regression and modified Poisson regression in analyzing the factors influencing injury frequency and the risk factors leading to an increase in injury frequency. 2917 primary and secondary school students were selected from Hefei by a cluster random sampling method and surveyed by questionnaire. Modified Poisson regression and negative binomial regression models were fitted to the count data of injury events. The risk factors associated with an increased frequency of unintentional injury among the students were explored, in order to assess the performance of these two models in studying the factors influencing injury frequency. The Poisson model showed over-dispersion (P < 0.0001) according to the Lagrange multiplier test. Therefore, the over-dispersed data were better fitted by the modified Poisson regression and negative binomial regression models. Both showed that male gender, younger age, having a father working away from the hometown, a guardian with an education level above junior high school, and smoking might be associated with higher injury frequencies. For clustered frequency data on injury events, both modified Poisson regression and negative binomial regression analyses can be used. However, based on our data, the modified Poisson regression fitted better, and this model could give a more accurate interpretation of the relevant factors affecting the frequency of injury.

  11. Fast and Accurate Poisson Denoising With Trainable Nonlinear Diffusion.

    PubMed

    Feng, Wensen; Qiao, Peng; Chen, Yunjin

    2018-06-01

    The degradation of the acquired signal by Poisson noise is a common problem for various imaging applications, such as medical imaging, night vision, and microscopy. Up to now, many state-of-the-art Poisson denoising techniques have mainly concentrated on achieving utmost performance, with little consideration for computational efficiency. Therefore, in this paper we aim to propose an efficient Poisson denoising model with both high computational efficiency and high recovery quality. To this end, we exploit the newly developed trainable nonlinear reaction diffusion (TNRD) model, which has proven to be an extremely fast image restoration approach with performance surpassing recent state-of-the-art methods. However, the straightforward direct gradient descent employed in the original TNRD-based denoising task is not applicable in this paper. To solve this problem, we resort to the proximal gradient descent method. We retrain the model parameters, including the linear filters and influence functions, by taking into account the Poisson noise statistics, and end up with a well-trained nonlinear diffusion model specialized for Poisson denoising. The trained model provides strongly competitive results against state-of-the-art approaches, meanwhile bearing the properties of simple structure and high efficiency. Furthermore, our proposed model comes along with an additional advantage, that the diffusion process is well-suited for parallel computation on graphics processing units (GPUs). For images of size , our GPU implementation takes less than 0.1 s to produce state-of-the-art Poisson denoising performance.

  12. High order solution of Poisson problems with piecewise constant coefficients and interface jumps

    NASA Astrophysics Data System (ADS)

    Marques, Alexandre Noll; Nave, Jean-Christophe; Rosales, Rodolfo Ruben

    2017-04-01

    We present a fast and accurate algorithm to solve Poisson problems in complex geometries, using regular Cartesian grids. We consider a variety of configurations, including Poisson problems with interfaces across which the solution is discontinuous (of the type arising in multi-fluid flows). The algorithm is based on a combination of the Correction Function Method (CFM) and Boundary Integral Methods (BIM). Interface and boundary conditions can be treated in a fast and accurate manner using boundary integral equations, and the associated BIM. Unfortunately, BIM can be costly when the solution is needed everywhere in a grid, e.g. fluid flow problems. We use the CFM to circumvent this issue. The solution from the BIM is used to rewrite the problem as a series of Poisson problems in rectangular domains, which requires the BIM solution at interfaces/boundaries only. These Poisson problems involve discontinuities at interfaces, of the type that the CFM can handle. Hence we use the CFM to solve them (to high order of accuracy) with finite differences and a Fast Fourier Transform based fast Poisson solver. We present 2-D examples of the algorithm applied to Poisson problems involving complex geometries, including cases in which the solution is discontinuous. We show that the algorithm produces solutions that converge with either 3rd or 4th order of accuracy, depending on the type of boundary condition and solution discontinuity.

  13. Modeling Zero-Inflated and Overdispersed Count Data: An Empirical Study of School Suspensions

    ERIC Educational Resources Information Center

    Desjardins, Christopher David

    2016-01-01

    The purpose of this article is to develop a statistical model that best explains variability in the number of school days suspended. Number of school days suspended is a count variable that may be zero-inflated and overdispersed relative to a Poisson model. Four models were examined: Poisson, negative binomial, Poisson hurdle, and negative…

  14. Alternative Derivations for the Poisson Integral Formula

    ERIC Educational Resources Information Center

    Chen, J. T.; Wu, C. S.

    2006-01-01

    Poisson integral formula is revisited. The kernel in the Poisson integral formula can be derived in a series form through the direct BEM free of the concept of image point by using the null-field integral equation in conjunction with the degenerate kernels. The degenerate kernels for the closed-form Green's function and the series form of Poisson…

  15. DL_MG: A Parallel Multigrid Poisson and Poisson-Boltzmann Solver for Electronic Structure Calculations in Vacuum and Solution.

    PubMed

    Womack, James C; Anton, Lucian; Dziedzic, Jacek; Hasnip, Phil J; Probert, Matt I J; Skylaris, Chris-Kriton

    2018-03-13

    The solution of the Poisson equation is a crucial step in electronic structure calculations, yielding the electrostatic potential, a key component of the quantum mechanical Hamiltonian. In recent decades, theoretical advances and increases in computer performance have made it possible to simulate the electronic structure of extended systems in complex environments. This requires the solution of more complicated variants of the Poisson equation, featuring nonhomogeneous dielectric permittivities, ionic concentrations with nonlinear dependencies, and diverse boundary conditions. The analytic solutions generally used to solve the Poisson equation in vacuum (or with homogeneous permittivity) are not applicable in these circumstances, and numerical methods must be used. In this work, we present DL_MG, a flexible, scalable, and accurate solver library, developed specifically to tackle the challenges of solving the Poisson equation in modern large-scale electronic structure calculations on parallel computers. Our solver is based on the multigrid approach and uses an iterative high-order defect correction method to improve the accuracy of solutions. Using two chemically relevant model systems, we tested the accuracy and computational performance of DL_MG when solving the generalized Poisson and Poisson-Boltzmann equations, demonstrating excellent agreement with analytic solutions and efficient scaling to ∼10^9 unknowns and 100s of CPU cores. We also applied DL_MG in actual large-scale electronic structure calculations, using the ONETEP linear-scaling electronic structure package to study a 2615 atom protein-ligand complex with routinely available computational resources. In these calculations, the overall execution time with DL_MG was not significantly greater than the time required for calculations using a conventional FFT-based solver.

  16. Constructing irregular surfaces to enclose macromolecular complexes for mesoscale modeling using the discrete surface charge optimization (DISCO) algorithm.

    PubMed

    Zhang, Qing; Beard, Daniel A; Schlick, Tamar

    2003-12-01

    Salt-mediated electrostatic interactions play an essential role in biomolecular structures and dynamics. Because macromolecular systems modeled at atomic resolution contain thousands of solute atoms, the electrostatic computations constitute an expensive part of the force and energy calculations. Implicit solvent models are one way to simplify the model and associated calculations, but they are generally used in combination with standard atomic models for the solute. To approximate electrostatic interactions in models at the polymer level (e.g., supercoiled DNA) that are simulated over long times (e.g., milliseconds) using Brownian dynamics, Beard and Schlick have developed the DiSCO (Discrete Surface Charge Optimization) algorithm. DiSCO represents a macromolecular complex by a few hundred discrete charges on a surface enclosing the system, modeled by the Debye-Hückel (screened Coulombic) approximation to the Poisson-Boltzmann equation, and treats the salt solution as a continuum. DiSCO can represent the nucleosome core particle (>12,000 atoms), for example, by 353 discrete surface charges distributed on the surfaces of a large disk for the nucleosome core particle and a slender cylinder for the histone tail; the charges are optimized with respect to the Poisson-Boltzmann solution for the electric field, yielding a residual of approximately 5.5%. Because regular surfaces enclosing macromolecules are not sufficiently general and may be suboptimal for certain systems, we develop a general method to construct irregular models tailored to the geometry of macromolecules. We also compare charge optimization based on both the electric field and electrostatic potential refinement. Results indicate that irregular surfaces can lead to a more accurate approximation (lower residuals), and the refinement in terms of the electric field is more robust. We also show that surface smoothing for irregular models is important, that the charge optimization (by the TNPACK minimizer) is efficient and does not depend on the initial assigned values, and that the residual is acceptable when the distance to the model surface is close to, or larger than, the Debye length. We illustrate applications of DiSCO's model-building procedure to chromatin folding and supercoiled DNA bound to Hin and Fis proteins. DiSCO is generally applicable to other interesting macromolecular systems for which mesoscale models are appropriate, to yield a resolution between the all-atom representation and the polymer level. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 2063-2074, 2003

  17. Understanding Poisson regression.

    PubMed

    Hayat, Matthew J; Higgins, Melinda

    2014-04-01

    Nurse investigators often collect study data in the form of counts. Traditional methods of data analysis have historically approached analysis of count data either as if the count data were continuous and normally distributed or with dichotomization of the counts into the categories of occurred or did not occur. These outdated methods for analyzing count data have been replaced with more appropriate statistical methods that make use of the Poisson probability distribution, which is useful for analyzing count data. The purpose of this article is to provide an overview of the Poisson distribution and its use in Poisson regression. Assumption violations for the standard Poisson regression model are addressed with alternative approaches, including addition of an overdispersion parameter or negative binomial regression. An illustrative example is presented with an application from the ENSPIRE study, and regression modeling of comorbidity data is included for illustrative purposes. Copyright 2014, SLACK Incorporated.

  18. Modified Regression Correlation Coefficient for Poisson Regression Model

    NASA Astrophysics Data System (ADS)

    Kaengthong, Nattacha; Domthong, Uthumporn

    2017-09-01

    This study gives attention to indicators of the predictive power of the Generalized Linear Model (GLM), which are widely used but often subject to some restrictions. We are interested in the regression correlation coefficient for a Poisson regression model. This is a measure of predictive power, defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables [E(Y|X)] for the Poisson regression model. The dependent variable is Poisson distributed. The purpose of this research was to modify the regression correlation coefficient for the Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables and in the presence of multicollinearity among the independent variables. The results show that the proposed regression correlation coefficient is better than the traditional regression correlation coefficient in terms of bias and the root mean square error (RMSE).

  19. Isolated and synergistic effects of PM10 and average temperature on cardiovascular and respiratory mortality.

    PubMed

    Pinheiro, Samya de Lara Lins de Araujo; Saldiva, Paulo Hilário Nascimento; Schwartz, Joel; Zanobetti, Antonella

    2014-12-01

    OBJECTIVE To analyze the effect of air pollution and temperature on mortality due to cardiovascular and respiratory diseases. METHODS We evaluated the isolated and synergistic effects of temperature and particulate matter with aerodynamic diameter < 10 µm (PM10) on the mortality of individuals > 40 years old due to cardiovascular disease and that of individuals > 60 years old due to respiratory diseases in Sao Paulo, SP, Southeastern Brazil, between 1998 and 2008. Three methodologies were used to evaluate the isolated association: time-series analysis using a Poisson regression model, bidirectional case-crossover analysis matched by period, and case-crossover analysis matched by the confounding factor, i.e., average temperature or pollutant concentration. The graphical representation of the response surface, generated by the interaction term between these factors added to the Poisson regression model, was interpreted to evaluate the synergistic effect of the risk factors. RESULTS No differences were observed between the results of the case-crossover and time-series analyses. The percentage change in the relative risk of cardiovascular and respiratory mortality was 0.85% (0.45;1.25) and 1.60% (0.74;2.46), respectively, due to an increase of 10 μg/m3 in the PM10 concentration. The pattern of correlation of the temperature with cardiovascular mortality was U-shaped and that with respiratory mortality was J-shaped, indicating an increased relative risk at high temperatures. The values for the interaction term indicated a higher relative risk for cardiovascular and respiratory mortalities at low temperatures and high temperatures, respectively, when the pollution levels reached approximately 60 μg/m3. CONCLUSIONS The positive association estimated in the Poisson regression model for pollutant concentration is not confounded by temperature, and the effect of temperature is not confounded by the pollutant levels in the time-series analysis. The simultaneous exposure to different levels of environmental factors can create synergistic effects that are as disturbing as those caused by extreme concentrations.
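
    The reported percentage changes relate to the Poisson regression coefficient through a simple exponential transformation. The arithmetic below uses a hypothetical coefficient (not the study's fitted value) purely to show the conversion per 10 μg/m3 increment.

    ```python
    # Illustrative arithmetic only; beta is a hypothetical log-scale slope per 1 ug/m3.
    import math

    beta = 0.00085          # hypothetical coefficient per 1 ug/m3 of PM10
    increment = 10.0        # ug/m3
    pct_change = (math.exp(beta * increment) - 1.0) * 100.0
    print(f"% change in relative risk per {increment:.0f} ug/m3: {pct_change:.2f}%")
    # with this beta the result is ~0.85%, the order of magnitude reported above
    ```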

  20. CUMPOIS- CUMULATIVE POISSON DISTRIBUTION PROGRAM

    NASA Technical Reports Server (NTRS)

    Bowerman, P. N.

    1994-01-01

    The Cumulative Poisson distribution program, CUMPOIS, is one of two programs which make calculations involving cumulative Poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), can be used independently of one another. CUMPOIS determines the approximate cumulative binomial distribution, evaluates the cumulative distribution function (cdf) for gamma distributions with integer shape parameters, and evaluates the cdf for chi-square distributions with even degrees of freedom. It can be used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. CUMPOIS calculates the probability that n or fewer events (i.e., cumulative) will occur within any unit when the expected number of events is given as lambda. Normally, this probability is calculated by a direct summation, from i=0 to n, of terms involving the exponential function, lambda, and inverse factorials. This approach, however, eventually fails due to underflow for sufficiently large values of n. Additionally, when the exponential term is moved outside of the summation for simplification purposes, there is a risk that the terms remaining within the summation, and the summation itself, will overflow for certain values of i and lambda. CUMPOIS eliminates these possibilities by multiplying an additional exponential factor into the summation terms and the partial sum whenever overflow/underflow situations threaten. The reciprocal of this term is then multiplied into the completed sum giving the cumulative probability. The CUMPOIS program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly on most C compilers. The program format is interactive, accepting lambda and n as inputs. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CUMPOIS was developed in 1988.
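
    The overflow/underflow protection described above can be mimicked by carrying the running sum in log space, as in the short sketch below (a Python illustration of the idea, not the original C program).

    ```python
    # Stable cumulative Poisson probability P(X <= n | lam) via log-space summation.
    import math

    def cumulative_poisson(n, lam):
        log_term = -lam                  # log P(X = 0)
        log_sum = log_term
        for i in range(1, n + 1):
            log_term += math.log(lam) - math.log(i)          # log P(X = i)
            # log-sum-exp update keeps the partial sum in a safe range
            big, small = max(log_sum, log_term), min(log_sum, log_term)
            log_sum = big + math.log1p(math.exp(small - big))
        return math.exp(log_sum)

    print(cumulative_poisson(5, 3.0))    # ~0.9161
    ```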

  1. Method selection and adaptation for distributed monitoring of infectious diseases for syndromic surveillance.

    PubMed

    Xing, Jian; Burkom, Howard; Tokars, Jerome

    2011-12-01

    Automated surveillance systems require statistical methods to recognize increases in visit counts that might indicate an outbreak. In prior work we presented methods to enhance the sensitivity of C2, a commonly used time series method. In this study, we compared the enhanced C2 method with five regression models. We used emergency department chief complaint data from US CDC BioSense surveillance system, aggregated by city (total of 206 hospitals, 16 cities) during 5/2008-4/2009. Data for six syndromes (asthma, gastrointestinal, nausea and vomiting, rash, respiratory, and influenza-like illness) was used and was stratified by mean count (1-19, 20-49, ≥50 per day) into 14 syndrome-count categories. We compared the sensitivity for detecting single-day artificially-added increases in syndrome counts. Four modifications of the C2 time series method, and five regression models (two linear and three Poisson), were tested. A constant alert rate of 1% was used for all methods. Among the regression models tested, we found that a Poisson model controlling for the logarithm of total visits (i.e., visits both meeting and not meeting a syndrome definition), day of week, and 14-day time period was best. Among 14 syndrome-count categories, time series and regression methods produced approximately the same sensitivity (<5% difference) in 6; in six categories, the regression method had higher sensitivity (range 6-14% improvement), and in two categories the time series method had higher sensitivity. When automated data are aggregated to the city level, a Poisson regression model that controls for total visits produces the best overall sensitivity for detecting artificially added visit counts. This improvement was achieved without increasing the alert rate, which was held constant at 1% for all methods. These findings will improve our ability to detect outbreaks in automated surveillance system data. Published by Elsevier Inc.
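
    A minimal sketch of the best-performing model class described here (a Poisson regression controlling for log total visits, day of week, and 14-day period) is given below; the data file, column names, and alerting rule are hypothetical stand-ins, not the BioSense implementation.

    ```python
    # Hedged sketch of a daily-count Poisson regression with covariates (illustrative).
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf
    from scipy.stats import poisson

    daily = pd.read_csv("city_syndrome_counts.csv", parse_dates=["date"])
    daily["dow"] = daily["date"].dt.dayofweek.astype(str)
    daily["period14"] = ((daily["date"] - daily["date"].min()).dt.days // 14).astype(str)
    daily["log_total"] = np.log(daily["total_visits"])

    model = smf.glm("syndrome_count ~ log_total + C(dow) + C(period14)",
                    data=daily, family=sm.families.Poisson()).fit()

    # Flag days whose observed count exceeds the 99th percentile of the fitted Poisson.
    mu = model.predict(daily)
    alerts = daily["syndrome_count"] > poisson.ppf(0.99, mu)
    print(daily.loc[alerts, ["date", "syndrome_count"]])
    ```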

  2. Products and mechanism of secondary organic aerosol formation from reactions of n-alkanes with OH radicals in the presence of NOx.

    PubMed

    Lim, Yong Bin; Ziemann, Paul J

    2005-12-01

    Secondary organic aerosol (SOA) formation from reactions of n-alkanes with OH radicals in the presence of NOx was investigated in an environmental chamber using a thermal desorption particle beam mass spectrometer for particle analysis. SOA consisted of both first- and higher-generation products, all of which were nitrates. Major first-generation products were δ-hydroxynitrates, while higher-generation products consisted of dinitrates, hydroxydinitrates, and substituted tetrahydrofurans containing nitrooxy, hydroxyl, and carbonyl groups. The substituted tetrahydrofurans are formed by a series of reactions in which δ-hydroxycarbonyls isomerize to cyclic hemiacetals, which then dehydrate to form substituted dihydrofurans (unsaturated compounds) that quickly react with OH radicals to form lower volatility products. SOA yields ranged from approximately 0.5% for C8 to approximately 53% for C15, with a sharp increase from approximately 8% for C11 to approximately 50% for C13. This was probably due to an increase in the contribution of first-generation products, as well as other factors. For example, SOA formed from the C10 reaction contained no first-generation products, while for the C15 reaction SOA was approximately 40% first-generation and approximately 60% higher-generation products. First-generation δ-hydroxycarbonyls are especially important in SOA formation, since their subsequent reactions can rapidly form low volatility compounds. In the atmosphere, substituted dihydrofurans created from δ-hydroxycarbonyls will primarily react with O3 or NO3 radicals, thereby opening reaction pathways not normally accessible to saturated compounds.

  3. Noncommutative gauge theory for Poisson manifolds

    NASA Astrophysics Data System (ADS)

    Jurčo, Branislav; Schupp, Peter; Wess, Julius

    2000-09-01

    A noncommutative gauge theory is associated to every Abelian gauge theory on a Poisson manifold. The semi-classical and full quantum version of the map from the ordinary gauge theory to the noncommutative gauge theory (Seiberg-Witten map) is given explicitly to all orders for any Poisson manifold in the Abelian case. In the quantum case the construction is based on Kontsevich's formality theorem.

  4. Nearly associative deformation quantization

    NASA Astrophysics Data System (ADS)

    Vassilevich, Dmitri; Oliveira, Fernando Martins Costa

    2018-04-01

    We study several classes of non-associative algebras as possible candidates for deformation quantization in the direction of a Poisson bracket that does not satisfy Jacobi identities. We show that in fact alternative deformation quantization algebras require the Jacobi identities on the Poisson bracket and, under very general assumptions, are associative. At the same time, flexible deformation quantization algebras exist for any Poisson bracket.

  5. Intertime jump statistics of state-dependent Poisson processes.

    PubMed

    Daly, Edoardo; Porporato, Amilcare

    2007-01-01

    A method to obtain the probability distribution of the interarrival times of jump occurrences in systems driven by state-dependent Poisson noise is proposed. Such a method uses the survivor function obtained by a modified version of the master equation associated to the stochastic process under analysis. A model for the timing of human activities shows the capability of state-dependent Poisson noise to generate power-law distributions. The application of the method to a model for neuron dynamics and to a hydrological model accounting for land-atmosphere interaction elucidates the origin of characteristic recurrence intervals and possible persistence in state-dependent Poisson models.

  6. Effect of non-Poisson samples on turbulence spectra from laser velocimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sree, D.; Kjelgaard, S.O.; Sellers, W.L. III

    1994-12-01

    Spectral estimations from LV data are typically based on the assumption of a Poisson sampling process. It is demonstrated here that the sampling distribution must be considered before spectral estimates are used to infer turbulence scales. A non-Poisson sampling process can occur if there is nonhomogeneous distribution of particles in the flow. Based on the study of a simulated first-order spectrum, it has been shown that a non-Poisson sampling process causes the estimated spectrum to deviate from the true spectrum. Also, in this case the prefiltering techniques do not improve the spectral estimates at higher frequencies. 4 refs.

  7. An Intrinsic Algorithm for Parallel Poisson Disk Sampling on Arbitrary Surfaces.

    PubMed

    Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying

    2013-03-08

    Poisson disk sampling plays an important role in a variety of visual computing applications, due to its useful statistical distribution properties and the absence of aliasing artifacts. While many effective techniques have been proposed to generate Poisson disk distributions in Euclidean space, relatively little work has been reported on the surface counterpart. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. We propose a new technique for parallelizing the dart throwing. Rather than the conventional approaches that explicitly partition the spatial domain to generate the samples in parallel, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. It is worth noting that our algorithm is accurate, as the generated Poisson disks are uniformly and randomly distributed without bias. Our method is intrinsic in that all the computations are based on the intrinsic metric and are independent of the embedding space. This intrinsic feature allows us to generate Poisson disk distributions on arbitrary surfaces. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
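
    For readers unfamiliar with the baseline that the paper parallelizes, the sketch below shows plain serial dart throwing in the 2D Euclidean plane, which enforces only the basic Poisson disk property (no two samples closer than r); the paper's parallel, surface-intrinsic algorithm is not reproduced here.

    ```python
    # Simple serial dart-throwing Poisson disk sampler in the unit square (illustrative).
    import math
    import random

    def dart_throwing(r, width=1.0, height=1.0, max_failures=2000):
        samples, failures = [], 0
        while failures < max_failures:
            p = (random.uniform(0, width), random.uniform(0, height))
            if all(math.dist(p, q) >= r for q in samples):
                samples.append(p)
                failures = 0
            else:
                failures += 1
        return samples

    print(len(dart_throwing(r=0.05)), "samples accepted")
    ```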

  8. Solving the Fluid Pressure Poisson Equation Using Multigrid-Evaluation and Improvements.

    PubMed

    Dick, Christian; Rogowsky, Marcus; Westermann, Rudiger

    2016-11-01

    In many numerical simulations of fluids governed by the incompressible Navier-Stokes equations, the pressure Poisson equation needs to be solved to enforce mass conservation. Multigrid solvers show excellent convergence in simple scenarios, yet they can converge slowly in domains where physically separated regions are combined at coarser scales. Moreover, existing multigrid solvers are tailored to specific discretizations of the pressure Poisson equation, and they cannot easily be adapted to other discretizations. In this paper we analyze the convergence properties of existing multigrid solvers for the pressure Poisson equation in different simulation domains, and we show how to further improve the multigrid convergence rate by using a graph-based extension to determine the coarse grid hierarchy. The proposed multigrid solver is generic in that it can be applied to different kinds of discretizations of the pressure Poisson equation, by using solely the specification of the simulation domain and pre-assembled computational stencils. We analyze the proposed solver in combination with finite difference and finite volume discretizations of the pressure Poisson equation. Our evaluations show that, despite the common assumption, multigrid schemes can exploit their potential even in the most complicated simulation scenarios, yet this behavior is obtained at the price of higher memory consumption.

  9. Poisson image reconstruction with Hessian Schatten-norm regularization.

    PubMed

    Lefkimmiatis, Stamatios; Unser, Michael

    2013-11-01

    Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an l(p) norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework.

  10. From wine to pepper: rotundone, an obscure sesquiterpene, is a potent spicy aroma compound.

    PubMed

    Wood, Claudia; Siebert, Tracey E; Parker, Mango; Capone, Dimitra L; Elsey, Gordon M; Pollnitz, Alan P; Eggers, Marcus; Meier, Manfred; Vössing, Tobias; Widder, Sabine; Krammer, Gerhard; Sefton, Mark A; Herderich, Markus J

    2008-05-28

    An obscure sesquiterpene, rotundone, has been identified as a hitherto unrecognized important aroma impact compound with a strong spicy, peppercorn aroma. Excellent correlations were observed between the concentration of rotundone and the mean 'black pepper' aroma intensity rated by sensory panels for both grape and wine samples, indicating that rotundone is a major contributor to peppery characters in Shiraz grapes and wine (and to a lesser extent in wine of other varieties). Approximately 80% of a sensory panel were very sensitive to the aroma of rotundone (aroma detection threshold levels of 16 ng/L in red wine and 8 ng/L in water). Above these concentrations, these panelists described the spiked samples as more 'peppery' and 'spicy'. However, approximately 20% of panelists could not detect this compound at the highest concentration tested (4000 ng/L), even in water. Thus, the sensory experiences of two consumers enjoying the same glass of Shiraz wine might be very different. Rotundone was found in much higher amounts in other common herbs and spices, especially black and white peppercorns, where it was present at approximately 10000 times the level found in very 'peppery' wine. Rotundone is the first compound found in black or white peppercorns that has a distinctive peppery aroma. Rotundone has an odor activity value in pepper on the order of 50000-250000 and is, on this criterion, by far the most powerful aroma compound yet found in that most important spice.

  11. Quantification of integrated HIV DNA by repetitive-sampling Alu-HIV PCR on the basis of poisson statistics.

    PubMed

    De Spiegelaere, Ward; Malatinkova, Eva; Lynch, Lindsay; Van Nieuwerburgh, Filip; Messiaen, Peter; O'Doherty, Una; Vandekerckhove, Linos

    2014-06-01

    Quantification of integrated proviral HIV DNA by repetitive-sampling Alu-HIV PCR is a candidate virological tool to monitor the HIV reservoir in patients. However, the experimental procedures and data analysis of the assay are complex and hinder its widespread use. Here, we provide an improved and simplified data analysis method by adopting binomial and Poisson statistics. A modified analysis method on the basis of Poisson statistics was used to analyze the binomial data of positive and negative reactions from a 42-replicate Alu-HIV PCR by use of dilutions of an integration standard and on samples of 57 HIV-infected patients. Results were compared with the quantitative output of the previously described Alu-HIV PCR method. Poisson-based quantification of the Alu-HIV PCR was linearly correlated with the standard dilution series, indicating that absolute quantification with the Poisson method is a valid alternative for data analysis of repetitive-sampling Alu-HIV PCR data. Quantitative outputs of patient samples assessed by the Poisson method correlated with the previously described Alu-HIV PCR analysis, indicating that this method is a valid alternative for quantifying integrated HIV DNA. Poisson-based analysis of the Alu-HIV PCR data enables absolute quantification without the need of a standard dilution curve. Implementation of the CI estimation permits improved qualitative analysis of the data and provides a statistical basis for the required minimal number of technical replicates. © 2014 The American Association for Clinical Chemistry.
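
    The core Poisson relation underlying this kind of replicate-based quantification is the single-hit formula: if a fraction p0 of replicates is negative, the mean template count per reaction is −ln(p0). The sketch below illustrates that relation with an approximate delta-method confidence interval; it is a generic illustration, not the authors' full analysis pipeline.

    ```python
    # Single-hit Poisson quantification from positive/negative replicate reactions.
    import math

    def poisson_quantify(n_replicates, n_negative):
        p0 = n_negative / n_replicates
        if p0 in (0.0, 1.0):
            raise ValueError("all-positive or all-negative runs are not quantifiable")
        lam = -math.log(p0)                              # mean copies per reaction
        se = math.sqrt((1 - p0) / (n_replicates * p0))   # delta-method SE of lam
        return lam, (lam - 1.96 * se, lam + 1.96 * se)

    lam, ci = poisson_quantify(n_replicates=42, n_negative=30)
    print(f"estimated copies/reaction: {lam:.3f}, approx. 95% CI {ci[0]:.3f}-{ci[1]:.3f}")
    ```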

  12. Modeling bursts and heavy tails in human dynamics

    NASA Astrophysics Data System (ADS)

    Vázquez, Alexei; Oliveira, João Gama; Dezsö, Zoltán; Goh, Kwang-Il; Kondor, Imre; Barabási, Albert-László

    2006-03-01

    The dynamics of many social, technological and economic phenomena are driven by individual human actions, turning the quantitative understanding of human behavior into a central question of modern science. Current models of human dynamics, used from risk assessment to communications, assume that human actions are randomly distributed in time and thus well approximated by Poisson processes. Here we provide direct evidence that for five human activity patterns, such as email- and letter-based communications, web browsing, library visits and stock trading, the timing of individual human actions follows non-Poisson statistics, characterized by bursts of rapidly occurring events separated by long periods of inactivity. We show that the bursty nature of human behavior is a consequence of a decision-based queuing process: when individuals execute tasks based on some perceived priority, the timing of the tasks will be heavy tailed, with most tasks being rapidly executed while a few experience very long waiting times. In contrast, priority-blind execution is well approximated by uniform interevent statistics. We discuss two queuing models that capture human activity. The first model assumes that there are no limitations on the number of tasks an individual can handle at any time, predicting that the waiting times of the individual tasks follow a heavy-tailed distribution P(τ_w) ∼ τ_w^(-α) with α = 3/2. The second model imposes limitations on the queue length, resulting in a heavy-tailed waiting time distribution characterized by α = 1. We provide empirical evidence supporting the relevance of these two models to human activity patterns, showing that while emails, web browsing and library visitation display α = 1, surface-mail-based communication belongs to the α = 3/2 universality class. Finally, we discuss possible extensions of the proposed queuing models and outline some future challenges in exploring the statistical mechanics of human dynamics.
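
    The fixed-length priority queue (the α = 1 regime) is easy to simulate, as in the toy sketch below: with probability p the highest-priority task is executed, otherwise a random one, and each executed task is replaced by a fresh task with a new random priority. Parameter values are illustrative.

    ```python
    # Toy simulation of a fixed-length priority queue with heavy-tailed waiting times.
    import random

    def simulate_queue(steps=200_000, length=2, p=0.99999):
        queue = [(random.random(), 0) for _ in range(length)]  # (priority, arrival time)
        waits = []
        for t in range(1, steps + 1):
            if random.random() < p:
                idx = max(range(length), key=lambda i: queue[i][0])
            else:
                idx = random.randrange(length)
            waits.append(t - queue[idx][1])        # waiting time of the executed task
            queue[idx] = (random.random(), t)      # replace it with a fresh task
        return waits

    waits = simulate_queue()
    print("fraction of waiting times > 100 steps:", sum(w > 100 for w in waits) / len(waits))
    ```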

  13. The Effect of Vaccination Coverage and Climate on Japanese Encephalitis in Sarawak, Malaysia

    PubMed Central

    Impoinvil, Daniel E.; Ooi, Mong How; Diggle, Peter J.; Caminade, Cyril; Cardosa, Mary Jane; Morse, Andrew P.

    2013-01-01

    Background Japanese encephalitis (JE) is the leading cause of viral encephalitis across Asia with approximately 70,000 cases a year and 10,000 to 15,000 deaths. Because JE incidence varies widely over time, partly due to inter-annual climate variability effects on mosquito vector abundance, it becomes more complex to assess the effects of a vaccination programme since more or less climatically favourable years could also contribute to a change in incidence post-vaccination. Therefore, the objective of this study was to quantify vaccination effect on confirmed Japanese encephalitis (JE) cases in Sarawak, Malaysia after controlling for climate variability to better understand temporal dynamics of JE virus transmission and control. Methodology/principal findings Monthly data on serologically confirmed JE cases were acquired from Sibu Hospital in Sarawak from 1997 to 2006. JE vaccine coverage (non-vaccine years vs. vaccine years) and meteorological predictor variables, including temperature, rainfall and the Southern Oscillation index (SOI) were tested for their association with JE cases using Poisson time series analysis and controlling for seasonality and long-term trend. Over the 10-years surveillance period, 133 confirmed JE cases were identified. There was an estimated 61% reduction in JE risk after the introduction of vaccination, when no account is taken of the effects of climate. This reduction is only approximately 45% when the effects of inter-annual variability in climate are controlled for in the model. The Poisson model indicated that rainfall (lag 1-month), minimum temperature (lag 6-months) and SOI (lag 6-months) were positively associated with JE cases. Conclusions/significance This study provides the first improved estimate of JE reduction through vaccination by taking account of climate inter-annual variability. Our analysis confirms that vaccination has substantially reduced JE risk in Sarawak but this benefit may be overestimated if climate effects are ignored. PMID:23951373

  14. GW study of the half-metallic Heusler compounds Co2MnSi and Co2FeSi

    NASA Astrophysics Data System (ADS)

    Meinert, Markus; Friedrich, Christoph; Reiss, Günter; Blügel, Stefan

    2012-12-01

    Quasiparticle spectra of potentially half-metallic Co2MnSi and Co2FeSi Heusler compounds have been calculated within the one-shot GW approximation in an all-electron framework without adjustable parameters. For Co2FeSi the many-body corrections are crucial: a pseudogap opens and good agreement of the magnetic moment with experiment is obtained. Otherwise, however, the changes with respect to the density-functional-theory starting point are moderate. For both cases we find that photoemission and x-ray absorption spectra are well described by the calculations. By comparison with the GW density of states, we conclude that the Kohn-Sham eigenvalue spectrum provides a reasonable approximation for the quasiparticle spectrum of the Heusler compounds considered in this work.

  15. Limiting Distributions of Functionals of Markov Chains.

    DTIC Science & Technology

    1984-08-01

    limiting distributions; periodic nonhomogeneous Poisson processes. [...] nonhomogeneous Poisson processes is of interest in itself. The problem considered in this paper is of interest in the theory of partially observable [...] where we obtain the limiting distribution of the interevent times. Key Words: Markov Chains, Limiting Distributions, Periodic Nonhomogeneous Poisson Processes.

  16. On the Dequantization of Fedosov's Deformation Quantization

    NASA Astrophysics Data System (ADS)

    Karabegov, Alexander V.

    2003-08-01

    To each natural deformation quantization on a Poisson manifold M we associate a Poisson morphism from the formal neighborhood of the zero section of the cotangent bundle to M to the formal neighborhood of the diagonal of the product M x M~, where M~ is a copy of M with the opposite Poisson structure. We call it dequantization of the natural deformation quantization. Then we "dequantize" Fedosov's quantization.

  17. Including long-range dependence in integrate-and-fire models of the high interspike-interval variability of cortical neurons.

    PubMed

    Jackson, B Scott

    2004-10-01

    Many different types of integrate-and-fire models have been designed in order to explain how it is possible for a cortical neuron to integrate over many independent inputs while still producing highly variable spike trains. Within this context, the variability of spike trains has been almost exclusively measured using the coefficient of variation of interspike intervals. However, another important statistical property that has been found in cortical spike trains and is closely associated with their high firing variability is long-range dependence. We investigate the conditions, if any, under which such models produce output spike trains with both interspike-interval variability and long-range dependence similar to those that have previously been measured from actual cortical neurons. We first show analytically that a large class of high-variability integrate-and-fire models is incapable of producing such outputs based on the fact that their output spike trains are always mathematically equivalent to renewal processes. This class of models subsumes a majority of previously published models, including those that use excitation-inhibition balance, correlated inputs, partial reset, or nonlinear leakage to produce outputs with high variability. Next, we study integrate-and-fire models that have (non-Poissonian) renewal point process inputs instead of the Poisson point process inputs used in the preceding class of models. The confluence of our analytical and simulation results implies that the renewal-input model is capable of producing high variability and long-range dependence comparable to that seen in spike trains recorded from cortical neurons, but only if the interspike intervals of the inputs have infinite variance, a physiologically unrealistic condition. Finally, we suggest a new integrate-and-fire model that does not suffer any of the previously mentioned shortcomings. By analyzing simulation results for this model, we show that it is capable of producing output spike trains with interspike-interval variability and long-range dependence that match empirical data from cortical spike trains. This model is similar to the other models in this study, except that its inputs are fractional-Gaussian-noise-driven Poisson processes rather than renewal point processes. In addition to this model's success in producing realistic output spike trains, its inputs have long-range dependence similar to that found in most subcortical neurons in sensory pathways, including the inputs to cortex. Analysis of output spike trains from simulations of this model also shows that a tight balance between the amounts of excitation and inhibition at the inputs to cortical neurons is not necessary for high interspike-interval variability at their outputs. Furthermore, in our analysis of this model, we show that the superposition of many fractional-Gaussian-noise-driven Poisson processes does not approximate a Poisson process, which challenges the common assumption that the total effect of a large number of inputs on a neuron is well represented by a Poisson process.

  18. Analysis of overdispersed count data by mixtures of Poisson variables and Poisson processes.

    PubMed

    Hougaard, P; Lee, M L; Whitmore, G A

    1997-12-01

    Count data often show overdispersion compared to the Poisson distribution. Overdispersion is typically modeled by a random effect for the mean, based on the gamma distribution, leading to the negative binomial distribution for the count. This paper considers a larger family of mixture distributions, including the inverse Gaussian mixture distribution. It is demonstrated that it gives a significantly better fit for a data set on the frequency of epileptic seizures. The same approach can be used to generate counting processes from Poisson processes, where the rate or the time is random. A random rate corresponds to variation between patients, whereas a random time corresponds to variation within patients.
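
    As a quick numerical illustration of the mixing idea (parameters are arbitrary, not the epileptic-seizure fit), the sketch below draws counts whose Poisson mean is itself random: a gamma mixing distribution yields the negative binomial, while an inverse Gaussian (Wald) mixing distribution yields the alternative family discussed above; both show variance exceeding the mean.

    ```python
    # Overdispersion from Poisson mixtures: gamma-mixed vs. inverse-Gaussian-mixed counts.
    import numpy as np

    rng = np.random.default_rng(0)
    n, mean_rate = 100_000, 5.0

    plain = rng.poisson(mean_rate, n)
    gamma_mix = rng.poisson(rng.gamma(shape=2.0, scale=mean_rate / 2.0, size=n))
    ig_mix = rng.poisson(rng.wald(mean=mean_rate, scale=10.0, size=n))

    for name, x in [("Poisson", plain), ("gamma-mixed", gamma_mix), ("IG-mixed", ig_mix)]:
        print(f"{name:12s} mean = {x.mean():5.2f}   var/mean = {x.var() / x.mean():5.2f}")
    ```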

  19. Super-stable Poissonian structures

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo

    2012-10-01

    In this paper we characterize classes of Poisson processes whose statistical structures are super-stable. We consider a flow generated by a one-dimensional ordinary differential equation, and an ensemble of particles ‘surfing’ the flow. The particles start from random initial positions, and are propagated along the flow by stochastic ‘wave processes’ with general statistics and general cross correlations. Setting the initial positions to be Poisson processes, we characterize the classes of Poisson processes that render the particles’ positions—at all times, and invariantly with respect to the wave processes—statistically identical to their initial positions. These Poisson processes are termed ‘super-stable’ and facilitate the generalization of the notion of stationary distributions far beyond the realm of Markov dynamics.

  20. Quantization with maximally degenerate Poisson brackets: the harmonic oscillator!

    NASA Astrophysics Data System (ADS)

    Nutku, Yavuz

    2003-07-01

    Nambu's construction of multi-linear brackets for super-integrable systems can be thought of as degenerate Poisson brackets with a maximal set of Casimirs in their kernel. By introducing privileged coordinates in phase space these degenerate Poisson brackets are brought to the form of Heisenberg's equations. We propose a definition for constructing quantum operators for classical functions, which enables us to turn the maximally degenerate Poisson brackets into operators. They pose a set of eigenvalue problems for a new state vector. The requirement of the single-valuedness of this eigenfunction leads to quantization. The example of the harmonic oscillator is used to illustrate this general procedure for quantizing a class of maximally super-integrable systems.

  1. Effects of unstratified and centre-stratified randomization in multi-centre clinical trials.

    PubMed

    Anisimov, Vladimir V

    2011-01-01

    This paper deals with the analysis of randomization effects in multi-centre clinical trials. The two randomization schemes most often used in clinical trials are considered: unstratified and centre-stratified block-permuted randomization. The prediction of the number of patients randomized to different treatment arms in different regions during the recruitment period accounting for the stochastic nature of the recruitment and effects of multiple centres is investigated. A new analytic approach using a Poisson-gamma patient recruitment model (patients arrive at different centres according to Poisson processes with rates sampled from a gamma distributed population) and its further extensions is proposed. Closed-form expressions for corresponding distributions of the predicted number of the patients randomized in different regions are derived. In the case of two treatments, the properties of the total imbalance in the number of patients on treatment arms caused by using centre-stratified randomization are investigated and for a large number of centres a normal approximation of imbalance is proved. The impact of imbalance on the power of the study is considered. It is shown that the loss of statistical power is practically negligible and can be compensated by a minor increase in sample size. The influence of patient dropout is also investigated. The impact of randomization on predicted drug supply overage is discussed. Copyright © 2010 John Wiley & Sons, Ltd.
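
    A small simulation conveys the flavour of the Poisson-gamma recruitment model (it does not reproduce the paper's closed-form results): centre rates are gamma distributed, per-centre recruitment is Poisson, and centre-stratified blocks of size 2 leave at most one unmatched patient per centre. All parameter values are illustrative.

    ```python
    # Illustrative Poisson-gamma recruitment simulation with centre-stratified blocks.
    import numpy as np

    rng = np.random.default_rng(1)

    def simulate_trial(n_centres=50, months=12, shape=2.0, mean_rate=1.5):
        """Return total patients recruited and the final two-arm imbalance."""
        rates = rng.gamma(shape, mean_rate / shape, size=n_centres)  # patients/month
        counts = rng.poisson(rates * months)                         # per-centre totals
        # a centre contributes 0 imbalance if its count is even, +/-1 if it is odd
        imbalance = int(sum(rng.choice([-1, 1]) * (c % 2) for c in counts))
        return counts.sum(), imbalance

    totals, imbs = zip(*(simulate_trial() for _ in range(2000)))
    print("mean recruited:", np.mean(totals), " SD of arm imbalance:", np.std(imbs))
    ```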

  2. Stabilization of memory States by stochastic facilitating synapses.

    PubMed

    Miller, Paul

    2013-12-06

    Bistability within a small neural circuit can arise through an appropriate strength of excitatory recurrent feedback. The stability of a state of neural activity, measured by the mean dwelling time before a noise-induced transition to another state, depends on the neural firing-rate curves, the net strength of excitatory feedback, the statistics of spike times, and increases exponentially with the number of equivalent neurons in the circuit. Here, we show that such stability is greatly enhanced by synaptic facilitation and reduced by synaptic depression. We take into account the alteration in times of synaptic vesicle release, by calculating distributions of inter-release intervals of a synapse, which differ from the distribution of its incoming interspike intervals when the synapse is dynamic. In particular, release intervals produced by a Poisson spike train have a coefficient of variation greater than one when synapses are probabilistic and facilitating, whereas the coefficient of variation is less than one when synapses are depressing. However, in spite of the increased variability in postsynaptic input produced by facilitating synapses, their dominant effect is reduced synaptic efficacy at low input rates compared to high rates, which increases the curvature of neural input-output functions, leading to wider regions of bistability in parameter space and enhanced lifetimes of memory states. Our results are based on analytic methods with approximate formulae and bolstered by simulations of both Poisson processes and of circuits of noisy spiking model neurons.

  3. The lunar libration: comparisons between various models - a model fitted to LLR observations

    NASA Astrophysics Data System (ADS)

    Chapront, J.; Francou, G.

    2005-09-01

    We consider 4 libration models: 3 numerical models built by JPL (ephemerides for the libration in DE245, DE403 and DE405) and an analytical model improved with numerical complements fitted to recent LLR observations. The analytical solution uses 3 angular variables (ρ1, ρ2, τ) which represent the deviations with respect to Cassini's laws. After having referred the models to a common reference frame, we study the differences between the models, which depend on gravitational and tidal parameters of the Moon, as well as amplitudes and frequencies of the free librations. It appears that the differences vary widely depending on the above quantities. They correspond to a few meters of displacement on the lunar surface, recalling that LLR distances are precise to the centimeter level. Taking advantage of the lunar libration theory built by Moons (1984) and improved by Chapront et al. (1999), we are able to establish 4 solutions and to represent their differences by Fourier series after a numerical substitution of the gravitational constants and free libration parameters. The results are confirmed by frequency analyses performed separately. Using DE245 as a basic reference ephemeris, we approximate the differences between the analytical and numerical models with Poisson series. The analytical solution, improved with numerical complements in the form of Poisson series, is valid over several centuries with an internal precision better than 5 centimeters.

  4. Unsteady electroosmosis in a microchannel with Poisson-Boltzmann charge distribution.

    PubMed

    Chang, Chien C; Kuo, Chih-Yu; Wang, Chang-Yi

    2011-11-01

    The present study is concerned with unsteady electroosmotic flow (EOF) in a microchannel with the electric charge distribution described by the Poisson-Boltzmann (PB) equation. The nonlinear PB equation is solved by a systematic perturbation with respect to the parameter λ, which measures the strength of the wall zeta potential relative to the thermal potential. In the small-λ limit (λ < 1), we recover the linearized PB equation, i.e., the Debye-Hückel approximation. The solutions obtained by using only three terms in the perturbation series are shown to be accurate with errors <1% for λ up to 2. The accurate solution to the PB equation is then used to solve the electrokinetic fluid transport equation for two types of unsteady flow: transient flow driven by a suddenly applied voltage and oscillatory flow driven by a time-harmonic voltage. The solution for the transient flow has important implications for EOF as an effective means for transporting electrolytes in microchannels with various electrokinetic widths. On the other hand, the solution for the oscillatory flow is shown to have important physical implications for EOF in mixing electrolytes in terms of the amplitude and phase of the resulting time-harmonic EOF rate, which depends on the applied frequency and the electrokinetic width of the microchannel as well as on the parameter λ. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Dynamic properties of small-scale solar wind plasma fluctuations.

    PubMed

    Riazantseva, M O; Budaev, V P; Zelenyi, L M; Zastenker, G N; Pavlos, G P; Safrankova, J; Nemecek, Z; Prech, L; Nemec, F

    2015-05-13

    The paper presents the latest results of the studies of small-scale fluctuations in a turbulent flow of solar wind (SW) using measurements with extremely high temporal resolution (up to 0.03 s) of the bright monitor of SW (BMSW) plasma spectrometer operating on astrophysical SPECTR-R spacecraft at distances up to 350,000 km from the Earth. The spectra of SW ion flux fluctuations in the range of scales between 0.03 and 100 s are systematically analysed. The difference of slopes in low- and high-frequency parts of spectra and the frequency of the break point between these two characteristic slopes was analysed for different conditions in the SW. The statistical properties of the SW ion flux fluctuations were thoroughly analysed on scales less than 10 s. A high level of intermittency is demonstrated. The extended self-similarity of SW ion flux turbulent flow is constantly observed. The approximation of non-Gaussian probability distribution function of ion flux fluctuations by the Tsallis statistics shows the non-extensive character of SW fluctuations. Statistical characteristics of ion flux fluctuations are compared with the predictions of a log-Poisson model. The log-Poisson parametrization of the structure function scaling has shown that well-defined filament-like plasma structures are, as a rule, observed in the turbulent SW flows. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  6. Explanation of temporal clustering of tsunami sources using the epidemic-type aftershock sequence model

    USGS Publications Warehouse

    Geist, Eric L.

    2014-01-01

    Temporal clustering of tsunami sources is examined in terms of a branching process model. It previously was observed that there are more short interevent times between consecutive tsunami sources than expected from a stationary Poisson process. The epidemic‐type aftershock sequence (ETAS) branching process model is fitted to tsunami catalog events, using the earthquake magnitude of the causative event from the Centennial and Global Centroid Moment Tensor (CMT) catalogs and tsunami sizes above a completeness level as a mark to indicate that a tsunami was generated. The ETAS parameters are estimated using the maximum‐likelihood method. The interevent distribution associated with the ETAS model provides a better fit to the data than the Poisson model or other temporal clustering models. When tsunamigenic conditions (magnitude threshold, submarine location, dip‐slip mechanism) are applied to the Global CMT catalog, ETAS parameters are obtained that are consistent with those estimated from the tsunami catalog. In particular, the dip‐slip condition appears to result in a near zero magnitude effect for triggered tsunami sources. The overall consistency between results from the tsunami catalog and that from the earthquake catalog under tsunamigenic conditions indicates that ETAS models based on seismicity can provide the structure for understanding patterns of tsunami source occurrence. The fractional rate of triggered tsunami sources on a global basis is approximately 14%.

  7. Local box-counting dimensions of discrete quantum eigenvalue spectra: Analytical connection to quantum spectral statistics

    NASA Astrophysics Data System (ADS)

    Sakhr, Jamal; Nieminen, John M.

    2018-03-01

    Two decades ago, Wang and Ong [Phys. Rev. A 55, 1522 (1997); doi:10.1103/PhysRevA.55.1522] hypothesized that the local box-counting dimension of a discrete quantum spectrum should depend exclusively on the nearest-neighbor spacing distribution (NNSD) of the spectrum. In this Rapid Communication, we validate their hypothesis by deriving an explicit formula for the local box-counting dimension of a countably-infinite discrete quantum spectrum. This formula expresses the local box-counting dimension of a spectrum in terms of single and double integrals of the NNSD of the spectrum. As applications, we derive an analytical formula for Poisson spectra and closed-form approximations to the local box-counting dimension for spectra having Gaussian orthogonal ensemble (GOE), Gaussian unitary ensemble (GUE), and Gaussian symplectic ensemble (GSE) spacing statistics. In the Poisson and GOE cases, we compare our theoretical formulas with the published numerical data of Wang and Ong and observe excellent agreement between their data and our theory. We also study numerically the local box-counting dimensions of the Riemann zeta function zeros and the alternate levels of GOE spectra, which are often used as numerical models of spectra possessing GUE and GSE spacing statistics, respectively. In each case, the corresponding theoretical formula is found to accurately describe the numerically computed local box-counting dimension.

  8. Predicting rates of inbreeding in populations undergoing selection.

    PubMed Central

    Woolliams, J A; Bijma, P

    2000-01-01

    Tractable forms of predicting rates of inbreeding (ΔF) in selected populations with general indices, nonrandom mating, and overlapping generations were developed, with the principal results assuming a period of equilibrium in the selection process. An existing theorem concerning the relationship between squared long-term genetic contributions and rates of inbreeding was extended to nonrandom mating and to overlapping generations. ΔF was shown to be approximately (1/4)(1 − ω) times the expected sum of squared lifetime contributions, where ω is the deviation from Hardy-Weinberg proportions. This relationship cannot be used for prediction since it is based upon observed quantities. Therefore, the relationship was further developed to express ΔF in terms of expected long-term contributions that are conditional on a set of selective advantages that relate the selection processes in two consecutive generations and are predictable quantities. With random mating, if selected family sizes are assumed to be independent Poisson variables then the expected long-term contribution could be substituted for the observed, provided that (1/4) (since ω = 0) was increased to (1/2). Established theory was used to provide a correction term to account for deviations from the Poisson assumptions. The equations were successfully applied, using simple linear models, to the problem of predicting ΔF with sib indices in discrete generations since previously published solutions had proved complex. PMID:10747074

  9. An unbiased risk estimator for image denoising in the presence of mixed poisson-gaussian noise.

    PubMed

    Le Montagner, Yoann; Angelini, Elsa D; Olivo-Marin, Jean-Christophe

    2014-03-01

    The behavior and performance of denoising algorithms are governed by one or several parameters, whose optimal settings depend on the content of the processed image and the characteristics of the noise, and are generally designed to minimize the mean squared error (MSE) between the denoised image returned by the algorithm and a virtual ground truth. In this paper, we introduce a new Poisson-Gaussian unbiased risk estimator (PG-URE) of the MSE applicable to a mixed Poisson-Gaussian noise model that unifies the widely used Gaussian and Poisson noise models in fluorescence bioimaging applications. We propose a stochastic methodology to evaluate this estimator in the case when little is known about the internal machinery of the considered denoising algorithm, and we analyze both theoretically and empirically the characteristics of the PG-URE estimator. Finally, we evaluate the PG-URE-driven parametrization for three standard denoising algorithms, with and without variance stabilizing transforms, and different characteristics of the Poisson-Gaussian noise mixture.

  10. On covariant Poisson brackets in classical field theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forger, Michael; Salles, Mário O.

    2015-10-15

    How to give a natural geometric definition of a covariant Poisson bracket in classical field theory has for a long time been an open problem—as testified by the extensive literature on “multisymplectic Poisson brackets,” together with the fact that all these proposals suffer from serious defects. On the other hand, the functional approach does provide a good candidate which has come to be known as the Peierls–De Witt bracket and whose construction in a geometrical setting is now well understood. Here, we show how the basic “multisymplectic Poisson bracket” already proposed in the 1970s can be derived from the Peierls–De Witt bracket, applied to a special class of functionals. This relation allows us to trace back most (if not all) of the problems encountered in the past to ambiguities (the relation between differential forms on multiphase space and the functionals they define is not one-to-one) and also to the fact that this class of functionals does not form a Poisson subalgebra.

  11. The Use of Crow-AMSAA Plots to Assess Mishap Trends

    NASA Technical Reports Server (NTRS)

    Dawson, Jeffrey W.

    2011-01-01

    Crow-AMSAA (CA) plots are used to model reliability growth. Use of CA plots has expanded into other areas, such as tracking events of interest to management, maintenance problems, and safety mishaps. Safety mishaps can often be successfully modeled using a Poisson probability distribution. CA plots show a Poisson process in log-log space. If the safety mishaps follow a stable homogeneous Poisson process, a linear fit to the points in a CA plot will have a slope of one. Slopes greater than one indicate a nonhomogeneous Poisson process with increasing occurrence. Slopes less than one indicate a nonhomogeneous Poisson process with decreasing occurrence. Changes in slope, known as "cusps," indicate a change in process, which could be an improvement or a degradation. After presenting the CA conceptual framework, examples are given of trending slips, trips and falls, and ergonomic incidents at NASA (from Agency-level data). Crow-AMSAA plotting is a robust tool for trending safety mishaps that can provide insight into safety performance over time.
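
    A minimal sketch of the slope check described above, assuming only event times are available (this is the generic log-log fit of cumulative events versus cumulative time, not NASA's internal tooling):

```python
import numpy as np

def crow_amsaa_slope(event_times):
    """Fit a line to log(cumulative events) vs log(cumulative time).

    event_times: sorted times (e.g., days since start of tracking) at which
    mishaps occurred. A slope near 1 is consistent with a homogeneous Poisson
    process; >1 suggests increasing occurrence, <1 decreasing occurrence.
    """
    t = np.asarray(sorted(event_times), dtype=float)
    n = np.arange(1, len(t) + 1)                 # cumulative event count
    slope, intercept = np.polyfit(np.log(t), np.log(n), 1)
    return slope, intercept

# Example: a constant-rate (homogeneous) Poisson process should give slope ~ 1.
rng = np.random.default_rng(1)
times = np.cumsum(rng.exponential(scale=30.0, size=200))   # ~1 mishap per 30 days
print(crow_amsaa_slope(times))
```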

  12. Quantization of Poisson Manifolds from the Integrability of the Modular Function

    NASA Astrophysics Data System (ADS)

    Bonechi, F.; Ciccoli, N.; Qiu, J.; Tarlini, M.

    2014-10-01

    We discuss a framework for quantizing a Poisson manifold via the quantization of its symplectic groupoid, combining the tools of geometric quantization with the results of Renault's theory of groupoid C*-algebras. This setting allows very singular polarizations. In particular, we consider the case when the modular function is multiplicatively integrable, i.e., when the space of leaves of the polarization inherits a groupoid structure. If suitable regularity conditions are satisfied, then one can define the quantum algebra as the convolution algebra of the subgroupoid of leaves satisfying the Bohr-Sommerfeld conditions. We apply this procedure to the case of a family of Poisson structures on , seen as Poisson homogeneous spaces of the standard Poisson-Lie group SU( n + 1). We show that a bihamiltonian system on defines a multiplicative integrable model on the symplectic groupoid; we compute the Bohr-Sommerfeld groupoid and show that it satisfies the needed properties for applying Renault theory. We recover and extend Sheu's description of quantum homogeneous spaces as groupoid C*-algebras.

  13. Spatial Gradients and Source Apportionment of Volatile Organic Compounds Near Roadways

    EPA Science Inventory

    Concentrations of 55 volatile organic compounds (VOCs) are reported near a highway in Raleigh, NC (traffic volume of approximately 125,000 vehicles/day). Levels of VOCs generally decreased exponentially with perpendicular distance from the roadway (10-100 m). The EPA Chemical Mass ...

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Yang; Xiao, Jianyuan; Zhang, Ruili

    Hamiltonian time integrators for the Vlasov-Maxwell equations are developed by a Hamiltonian splitting technique. The Hamiltonian functional is split into five parts, which produces five exactly solvable subsystems. Each subsystem is a Hamiltonian system equipped with the Morrison-Marsden-Weinstein Poisson bracket. Compositions of the exact solutions provide Poisson-structure-preserving/Hamiltonian methods of arbitrarily high order for the Vlasov-Maxwell equations. They are therefore accurate and conservative over long times because of their Poisson-preserving nature.

  15. A novel method for the accurate evaluation of Poisson's ratio of soft polymer materials.

    PubMed

    Lee, Jae-Hoon; Lee, Sang-Soo; Chang, Jun-Dong; Thompson, Mark S; Kang, Dong-Joong; Park, Sungchan; Park, Seonghun

    2013-01-01

    A new method with a simple algorithm was developed to accurately measure Poisson's ratio of soft materials such as polyvinyl alcohol hydrogel (PVA-H) with a custom experimental apparatus consisting of a tension device, a micro X-Y stage, an optical microscope, and a charge-coupled device camera. In the proposed method, the initial positions of the four vertices of an arbitrarily selected quadrilateral from the sample surface were first measured to generate a 2D 1st-order 4-node quadrilateral element for finite element numerical analysis. Next, minimum and maximum principal strains were calculated from differences between the initial and deformed shapes of the quadrilateral under tension. Finally, Poisson's ratio of PVA-H was determined by the ratio of minimum principal strain to maximum principal strain. This novel method has an advantage in the accurate evaluation of Poisson's ratio despite misalignment between specimens and experimental devices. In this study, Poisson's ratio of PVA-H was 0.44 ± 0.025 (n = 6) for 2.6-47.0% elongations with a tendency to decrease with increasing elongation. The current evaluation method of Poisson's ratio with a simple measurement system can be employed in a real-time automated vision-tracking system used to accurately evaluate the material properties of various soft materials.
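
    A simplified sketch of the measurement idea: instead of the paper's 4-node quadrilateral finite element, a least-squares affine fit to the four tracked vertices is used here to obtain the small-strain tensor, whose principal strains give the Poisson's ratio estimate. The vertex coordinates in the example are invented for illustration.

```python
import numpy as np

def poissons_ratio_from_quad(X0, X1):
    """Estimate Poisson's ratio from four tracked surface points.

    X0, X1: (4, 2) arrays of vertex coordinates before and after stretching.
    Sketch only: a least-squares affine fit x1 ~ F @ x0 + c replaces the
    quadrilateral finite element, and eps = (F + F.T)/2 - I gives the
    small-strain tensor. The principal strains do not depend on the
    orientation of the camera axes, so axis misalignment is harmless.
    """
    X0 = np.asarray(X0, float)
    X1 = np.asarray(X1, float)
    A = np.hstack([X0, np.ones((4, 1))])          # solve [F | c] by least squares
    coeff, *_ = np.linalg.lstsq(A, X1, rcond=None)
    F = coeff[:2, :].T                            # 2x2 deformation gradient
    eps = 0.5 * (F + F.T) - np.eye(2)             # small-strain tensor
    e_min, e_max = np.sort(np.linalg.eigvalsh(eps))
    return -e_min / e_max                         # Poisson's ratio estimate

# Quick check: 5% axial stretch with 2.2% lateral contraction -> about 0.44
X0 = [[0, 0], [10, 0], [10, 10], [0, 10]]
X1 = [[0, 0], [10.5, 0], [10.5, 9.78], [0, 9.78]]
print(poissons_ratio_from_quad(X0, X1))
```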

  16. Analysis of Blood Transfusion Data Using Bivariate Zero-Inflated Poisson Model: A Bayesian Approach.

    PubMed

    Mohammadi, Tayeb; Kheiri, Soleiman; Sedehi, Morteza

    2016-01-01

    Recognizing the factors affecting the number of blood donation and blood deferral has a major impact on blood transfusion. There is a positive correlation between the variables "number of blood donation" and "number of blood deferral": as the number of return for donation increases, so does the number of blood deferral. On the other hand, due to the fact that many donors never return to donate, there is an extra zero frequency for both of the above-mentioned variables. In this study, in order to apply the correlation and to explain the frequency of the excessive zero, the bivariate zero-inflated Poisson regression model was used for joint modeling of the number of blood donation and number of blood deferral. The data was analyzed using the Bayesian approach applying noninformative priors at the presence and absence of covariates. Estimating the parameters of the model, that is, correlation, zero-inflation parameter, and regression coefficients, was done through MCMC simulation. Eventually double-Poisson model, bivariate Poisson model, and bivariate zero-inflated Poisson model were fitted on the data and were compared using the deviance information criteria (DIC). The results showed that the bivariate zero-inflated Poisson regression model fitted the data better than the other models.

  17. Analysis of Blood Transfusion Data Using Bivariate Zero-Inflated Poisson Model: A Bayesian Approach

    PubMed Central

    Mohammadi, Tayeb; Sedehi, Morteza

    2016-01-01

    Recognizing the factors affecting the number of blood donation and blood deferral has a major impact on blood transfusion. There is a positive correlation between the variables “number of blood donation” and “number of blood deferral”: as the number of return for donation increases, so does the number of blood deferral. On the other hand, due to the fact that many donors never return to donate, there is an extra zero frequency for both of the above-mentioned variables. In this study, in order to apply the correlation and to explain the frequency of the excessive zero, the bivariate zero-inflated Poisson regression model was used for joint modeling of the number of blood donation and number of blood deferral. The data was analyzed using the Bayesian approach applying noninformative priors at the presence and absence of covariates. Estimating the parameters of the model, that is, correlation, zero-inflation parameter, and regression coefficients, was done through MCMC simulation. Eventually double-Poisson model, bivariate Poisson model, and bivariate zero-inflated Poisson model were fitted on the data and were compared using the deviance information criteria (DIC). The results showed that the bivariate zero-inflated Poisson regression model fitted the data better than the other models. PMID:27703493

  18. Poisson process stimulation of an excitable membrane cable model.

    PubMed Central

    Goldfinger, M D

    1986-01-01

    The convergence of multiple inputs within a single-neuronal substrate is a common design feature of both peripheral and central nervous systems. Typically, the result of such convergence impinges upon an intracellularly contiguous axon, where it is encoded into a train of action potentials. The simplest representation of the result of convergence of multiple inputs is a Poisson process; a general representation of axonal excitability is the Hodgkin-Huxley/cable theory formalism. The present work addressed multiple input convergence upon an axon by applying Poisson process stimulation to the Hodgkin-Huxley axonal cable. The results showed that both absolute and relative refractory periods yielded in the axonal output a random but non-Poisson process. While smaller amplitude stimuli elicited a type of short-interval conditioning, larger amplitude stimuli elicited impulse trains approaching Poisson criteria except for the effects of refractoriness. These results were obtained for stimulus trains consisting of pulses of constant amplitude and constant or variable durations. By contrast, with or without stimulus pulse shape variability, the post-impulse conditional probability for impulse initiation in the steady-state was a Poisson-like process. For stimulus variability consisting of randomly smaller amplitudes or randomly longer durations, mean impulse frequency was attenuated or potentiated, respectively. Limitations and implications of these computations are discussed. PMID:3730505
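
    The input-generation step alone is simple to reproduce; below is a sketch assuming a homogeneous Poisson process of stimulus pulse onsets (the Hodgkin-Huxley cable itself is not simulated here).

```python
import numpy as np

def poisson_pulse_train(rate_hz, duration_s, rng=None):
    """Generate stimulus pulse onset times from a homogeneous Poisson process.

    Inter-pulse intervals are exponential with mean 1/rate; this is only the
    input-generation step, not the excitable-membrane cable model itself.
    """
    rng = np.random.default_rng() if rng is None else rng
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate_hz)
        if t >= duration_s:
            break
        times.append(t)
    return np.array(times)

onsets = poisson_pulse_train(rate_hz=100.0, duration_s=2.0, rng=np.random.default_rng(2))
isi = np.diff(onsets)
print(len(onsets), isi.std() / isi.mean())   # interval CV ~ 1 for a Poisson process
```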

  19. Species abundance in a forest community in South China: A case of poisson lognormal distribution

    USGS Publications Warehouse

    Yin, Z.-Y.; Ren, H.; Zhang, Q.-M.; Peng, S.-L.; Guo, Q.-F.; Zhou, G.-Y.

    2005-01-01

    Case studies on Poisson lognormal distribution of species abundance have been rare, especially in forest communities. We propose a numerical method to fit the Poisson lognormal to the species abundance data at an evergreen mixed forest in the Dinghushan Biosphere Reserve, South China. Plants in the tree, shrub and herb layers in 25 quadrats of 20 m × 20 m, 5 m × 5 m, and 1 m × 1 m were surveyed. Results indicated that: (i) for each layer, the observed species abundance with a similarly small median, mode, and a variance larger than the mean was reverse J-shaped and followed well the zero-truncated Poisson lognormal; (ii) the coefficient of variation, skewness and kurtosis of abundance, and two Poisson lognormal parameters (?? and ??) for shrub layer were closer to those for the herb layer than those for the tree layer; and (iii) from the tree to the shrub to the herb layer, the ?? and the coefficient of variation decreased, whereas diversity increased. We suggest that: (i) the species abundance distributions in the three layers reflect the overall community characteristics; (ii) the Poisson lognormal can describe the species abundance distribution in diverse communities with a few abundant species but many rare species; and (iii) 1/?? should be an alternative measure of diversity.
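
    A sketch of the sampling model behind such a fit: each species' expected abundance is lognormal and the observed count is Poisson around that expectation, with zeros truncated because unobserved species do not appear in the data. The parameter values are illustrative, not the paper's estimates.

```python
import numpy as np

def sample_poisson_lognormal(mu, sigma, n_species, rng=None, zero_truncated=True):
    """Draw species abundances from a Poisson lognormal distribution.

    Each species' expected abundance is exp(N(mu, sigma^2)); the observed count
    is Poisson around that expectation. With zero_truncated=True, species with
    count 0 are dropped, matching a zero-truncated fit to observed abundances.
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = np.exp(rng.normal(mu, sigma, size=n_species))
    counts = rng.poisson(lam)
    return counts[counts > 0] if zero_truncated else counts

abund = sample_poisson_lognormal(mu=1.0, sigma=1.5, n_species=500,
                                 rng=np.random.default_rng(3))
print(abund.mean(), abund.var(), np.median(abund))   # variance > mean, reverse-J shape
```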

  20. Hydrogen storage in lithium hydride: A theoretical approach

    NASA Astrophysics Data System (ADS)

    Banger, Suman; Nayak, Vikas; Verma, U. P.

    2018-04-01

    First principles calculations have been carried out to analyze structural stability of lithium hydride (LiH) in the NaCl phase using the full potential linearized augmented plane wave (FP-LAPW) method within the framework of density functional theory (DFT). Calculations have been extended to the physisorbed hydrogen compounds LiH·H2, LiH·3H2 and LiH·4H2. The obtained results are discussed in the paper. The results for LiH are in excellent agreement with earlier reported data. The obtained direct energy band gap of LiH is 3.0 eV, which is in excellent agreement with the earlier reported theoretical band gap. The electronic band structure plots of the hydrogen adsorbed compounds show metallic behavior. The elastic constants, anisotropy factor, shear modulus, Young's modulus, Poisson's ratio and cohesive energies of all the compounds are calculated. Calculation of the optical spectra such as the real and imaginary parts of the dielectric function, optical reflectivity, absorption coefficient, optical conductivity, refractive index, extinction coefficient and electron energy loss are performed for the energy range 0-15 eV. The obtained results for LiH·H2, LiH·3H2 and LiH·4H2 are reported for the first time. This study has been made in search of materials for hydrogen storage. It is concluded that LiH is a promising material for hydrogen storage.

  1. The impact of short term synaptic depression and stochastic vesicle dynamics on neuronal variability

    PubMed Central

    Reich, Steven

    2014-01-01

    Neuronal variability plays a central role in neural coding and impacts the dynamics of neuronal networks. Unreliability of synaptic transmission is a major source of neural variability: synaptic neurotransmitter vesicles are released probabilistically in response to presynaptic action potentials and are recovered stochastically in time. The dynamics of this process of vesicle release and recovery interacts with variability in the arrival times of presynaptic spikes to shape the variability of the postsynaptic response. We use continuous time Markov chain methods to analyze a model of short term synaptic depression with stochastic vesicle dynamics coupled with three different models of presynaptic spiking: one model in which the timing of presynaptic action potentials is modeled as a Poisson process, one in which action potentials occur more regularly than a Poisson process (sub-Poisson) and one in which action potentials occur more irregularly (super-Poisson). We use this analysis to investigate how variability in a presynaptic spike train is transformed by short term depression and stochastic vesicle dynamics to determine the variability of the postsynaptic response. We find that sub-Poisson presynaptic spiking increases the average rate at which vesicles are released, that the number of vesicles released over a time window is more variable for smaller time windows than larger time windows and that fast presynaptic spiking gives rise to Poisson-like variability of the postsynaptic response even when presynaptic spike times are non-Poisson. Our results complement and extend previously reported theoretical results and provide possible explanations for some trends observed in recorded data. PMID:23354693

  2. NEWTPOIS- NEWTON POISSON DISTRIBUTION PROGRAM

    NASA Technical Reports Server (NTRS)

    Bowerman, P. N.

    1994-01-01

    The cumulative Poisson distribution program, NEWTPOIS, is one of two programs which make calculations involving cumulative Poisson distributions. Both programs, NEWTPOIS (NPO-17715) and CUMPOIS (NPO-17714), can be used independently of one another. NEWTPOIS determines percentiles for gamma distributions with integer shape parameters and calculates percentiles for chi-square distributions with even degrees of freedom. It can be used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. NEWTPOIS determines the Poisson parameter (lambda), that is, the mean (or expected) number of events occurring in a given unit of time, area, or space. Given that the user already knows the cumulative probability for a specific number of occurrences (n), it is usually a simple matter of substitution into the Poisson distribution summation to arrive at lambda. However, direct calculation of the Poisson parameter becomes difficult for small positive values of n and unmanageable for large values. NEWTPOIS uses Newton's iteration method to extract lambda from the initial value condition of the Poisson distribution where n=0, taking successive estimations until some user specified error term (epsilon) is reached. The NEWTPOIS program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly on most C compilers. The program format is interactive, accepting epsilon, n, and the cumulative probability of the occurrence of n as inputs. It has been implemented under DOS 3.2 and has a memory requirement of 30K. NEWTPOIS was developed in 1988.
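
    The core computation is easy to restate. The sketch below solves P(X <= n; lambda) = p for lambda with Newton's method, using the identity d/dlambda P(X <= n; lambda) = -exp(-lambda) lambda^n / n!. It follows the idea described in the abstract but is not the NEWTPOIS C source; the starting value and step damping are choices made here for illustration.

```python
import math

def poisson_cdf(n, lam):
    """P(X <= n) for X ~ Poisson(lam), summed term by term."""
    term, total = math.exp(-lam), math.exp(-lam)
    for k in range(1, n + 1):
        term *= lam / k
        total += term
    return total

def poisson_lambda_from_cdf(n, p, eps=1e-10, max_iter=200):
    """Solve poisson_cdf(n, lam) = p for lam by Newton's method.

    Uses d/dlam P(X <= n; lam) = -exp(-lam) * lam**n / n!. Starting value and
    step damping are illustrative choices, not those of the original program.
    """
    lam = n + 1.0                                    # start near the answer
    for _ in range(max_iter):
        f = poisson_cdf(n, lam) - p
        dfdlam = -math.exp(-lam) * lam ** n / math.factorial(n)
        step = f / dfdlam
        while lam - step <= 0.0:                     # keep the iterate positive
            step *= 0.5
        lam -= step
        if abs(step) < eps * max(lam, 1.0):
            break
    return lam

# Example: find lam such that P(X <= 3) = 0.05 (lam comes out near 7.75)
print(poisson_lambda_from_cdf(n=3, p=0.05))
```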

  3. Zero-inflated Poisson model based likelihood ratio test for drug safety signal detection.

    PubMed

    Huang, Lan; Zheng, Dan; Zalkikar, Jyoti; Tiwari, Ram

    2017-02-01

    In recent decades, numerous methods have been developed for data mining of large drug safety databases, such as the Food and Drug Administration's (FDA's) Adverse Event Reporting System, where data matrices are formed with drugs as columns and adverse events as rows. Often, a large number of cells in these data matrices have zero cell counts, and some of them are "true zeros", indicating that the drug-adverse event pairs cannot occur; these are distinguished from the remaining, modeled zero counts, which simply indicate that the drug-adverse event pairs have not occurred yet or have not been reported yet. In this paper, a zero-inflated Poisson model based likelihood ratio test method is proposed to identify drug-adverse event pairs that have disproportionately high reporting rates, which are also called signals. The maximum likelihood estimates of the model parameters of zero-inflated Poisson model based likelihood ratio test are obtained using the expectation and maximization algorithm. The zero-inflated Poisson model based likelihood ratio test is also modified to handle the stratified analyses for binary and categorical covariates (e.g. gender and age) in the data. The proposed zero-inflated Poisson model based likelihood ratio test method is shown to asymptotically control the type I error and false discovery rate, and its finite sample performance for signal detection is evaluated through a simulation study. The simulation results show that the zero-inflated Poisson model based likelihood ratio test method performs similar to the Poisson model based likelihood ratio test method when the estimated percentage of true zeros in the database is small. Both the zero-inflated Poisson model based likelihood ratio test and likelihood ratio test methods are applied to six selected drugs, from the 2006 to 2011 Adverse Event Reporting System database, with varying percentages of observed zero-count cells.
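
    The univariate building block is shown below: an EM fit of a zero-inflated Poisson (structural zero with probability pi, otherwise Poisson(lam)). This is only the basic model underlying the likelihood ratio test described above, not the full signal-detection procedure with stratification; the simulated data are illustrative.

```python
import numpy as np

def fit_zip_em(counts, n_iter=200, tol=1e-10):
    """Maximum-likelihood fit of a univariate zero-inflated Poisson by EM.

    Model: with probability pi a count is a structural zero; otherwise it is
    Poisson(lam). Returns (pi, lam).
    """
    y = np.asarray(counts, dtype=float)
    pi, lam = 0.5, max(y.mean(), 1e-6)          # crude starting values
    for _ in range(n_iter):
        # E-step: probability that each observed zero is a structural zero
        z = np.zeros_like(y)
        zero = (y == 0)
        z[zero] = pi / (pi + (1.0 - pi) * np.exp(-lam))
        # M-step
        pi_new = z.mean()
        lam_new = y.sum() / (1.0 - z).sum()
        converged = abs(pi_new - pi) + abs(lam_new - lam) < tol
        pi, lam = pi_new, lam_new
        if converged:
            break
    return pi, lam

rng = np.random.default_rng(4)
data = np.where(rng.random(5000) < 0.3, 0, rng.poisson(2.5, 5000))  # 30% structural zeros
print(fit_zip_em(data))   # should recover roughly (0.3, 2.5)
```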

  4. Tweedie convergence: a mathematical basis for Taylor's power law, 1/f noise, and multifractality.

    PubMed

    Kendal, Wayne S; Jørgensen, Bent

    2011-12-01

    Plants and animals of a given species tend to cluster within their habitats in accordance with a power function between their mean density and the variance. This relationship, Taylor's power law, has been variously explained by ecologists in terms of animal behavior, interspecies interactions, demographic effects, etc., all without consensus. Taylor's law also manifests within a wide range of other biological and physical processes, sometimes being referred to as fluctuation scaling and attributed to effects of the second law of thermodynamics. 1/f noise refers to power spectra that have an approximately inverse dependence on frequency. Like Taylor's law these spectra manifest from a wide range of biological and physical processes, without general agreement as to cause. One contemporary paradigm for 1/f noise has been based on the physics of self-organized criticality. We show here that Taylor's law (when derived from sequential data using the method of expanding bins) implies 1/f noise, and that both phenomena can be explained by a central limit-like effect that establishes the class of Tweedie exponential dispersion models as foci for this convergence. These Tweedie models are probabilistic models characterized by closure under additive and reproductive convolution as well as under scale transformation, and consequently manifest a variance to mean power function. We provide examples of Taylor's law, 1/f noise, and multifractality within the eigenvalue deviations of the Gaussian unitary and orthogonal ensembles, and show that these deviations conform to the Tweedie compound Poisson distribution. The Tweedie convergence theorem provides a unified mathematical explanation for the origin of Taylor's law and 1/f noise applicable to a wide range of biological, physical, and mathematical processes, as well as to multifractality.
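
    In symbols, the two power laws being connected are (Y a count or measurement with mean mu; the notation is generic rather than the authors'):

```latex
\operatorname{Var}(Y) = a\,\mu^{\,b} \quad\text{(Taylor's power law)},
\qquad
\operatorname{Var}(Y) = \phi\,\mu^{\,p} \quad\text{(Tweedie variance function)}.
```

    Tweedie models with index 1 < p < 2 are compound Poisson-gamma distributions, which is the case most directly relevant to this collection of records.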

  5. Evaluation of copper, aluminum, and nickel interatomic potentials on predicting the elastic properties

    NASA Astrophysics Data System (ADS)

    Rassoulinejad-Mousavi, Seyed Moein; Mao, Yijin; Zhang, Yuwen

    2016-06-01

    Choice of an appropriate force field is one of the main concerns of any atomistic simulation and needs to be seriously considered in order to yield reliable results. Since investigations of the mechanical behavior of materials at the micro/nanoscale have become much more widespread, it is necessary to determine an adequate potential which accurately models the interaction of the atoms for the desired applications. In this framework, the reliability of multiple embedded atom method based interatomic potentials for predicting the elastic properties was investigated. Assessments were carried out for different copper, aluminum, and nickel interatomic potentials at room temperature, which is considered the most applicable case. Examined force fields for the three species were taken from the online repositories of the National Institute of Standards and Technology as well as the Sandia National Laboratories LAMMPS database. Using molecular dynamics simulations, the three independent elastic constants, C11, C12, and C44, were found for Cu, Al, and Ni cubic single crystals. The Voigt-Reuss-Hill approximation was then implemented to convert elastic constants of the single crystals into isotropic polycrystalline elastic moduli including bulk modulus, shear modulus, and Young's modulus as well as Poisson's ratio. Results from large-scale molecular dynamics simulations were compared with available experimental data in the literature to justify the robustness of each potential for each species. Eventually, accurate interatomic potentials have been recommended for finding each of the elastic properties of the pure species. The accuracy of the elastic properties was found to be sensitive to the choice of force field. Potentials that were fitted for a specific compound may not necessarily work accurately for all the existing pure species. Tabulated results in this paper might be used as a benchmark to increase assurance of using the interatomic potential that was designated for a problem.
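
    The Voigt-Reuss-Hill step itself is standard for cubic crystals and can be restated directly. The sketch below converts C11, C12, C44 into polycrystalline K, G, E and Poisson's ratio using the generic textbook formulas (not the authors' scripts); the copper constants in the example are typical experimental values in GPa.

```python
def voigt_reuss_hill_cubic(C11, C12, C44):
    """Isotropic polycrystalline moduli from cubic single-crystal constants.

    Standard Voigt-Reuss-Hill averaging for cubic symmetry; units follow the
    inputs (e.g., GPa).
    """
    K = (C11 + 2.0 * C12) / 3.0                       # bulk modulus (Voigt = Reuss for cubic)
    G_voigt = (C11 - C12 + 3.0 * C44) / 5.0
    G_reuss = 5.0 * (C11 - C12) * C44 / (4.0 * C44 + 3.0 * (C11 - C12))
    G = 0.5 * (G_voigt + G_reuss)                     # Hill average shear modulus
    E = 9.0 * K * G / (3.0 * K + G)                   # Young's modulus
    nu = (3.0 * K - 2.0 * G) / (2.0 * (3.0 * K + G))  # Poisson's ratio
    A = 2.0 * C44 / (C11 - C12)                       # Zener anisotropy factor
    return {"K": K, "G": G, "E": E, "nu": nu, "A": A}

# Example with typical experimental elastic constants of copper (GPa)
print(voigt_reuss_hill_cubic(168.0, 121.0, 75.0))     # E ~ 127 GPa, nu ~ 0.35
```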

  6. Intense Photosensitized Emission from Stoichiometric Compounds Featuring Mn(2+) in Seven- and Eightfold Coordination Environments.

    PubMed

    Reid, Howard O. N.; Kahwa, Ishenkumba A.; White, Andrew J. P.; Williams, David J.

    1998-07-27

    Synthetic, structural and luminescence studies of stoichiometric crown ether compounds of Mn(2+) in well-defined coordination environments were undertaken in an effort to understand the origin of emitting crystal defects found in cubic F23 [(K18C6)(4)MnBr(4)][TlBr(4)](2) crystals (Fender, N. S.; et al. Inorg. Chem. 1997, 36, 5539). The new compound [Mn(12C4)(2)][MnBr(4)](2)[N(CH(3))(4)](2) (3) features Mn(2+) ions in eight- and fourfold coordination environments of [Mn(12C4)(2)](2+) and MnBr(4)(2)(-) respectively, while Mn(2+) in [Mn(15C5)(H(2)O)(2)][TlBr(5)] (4) is in the sevenfold coordination polyhedron of [Mn(15C5)(H(2)O)(2)](2+). Crystal data for 3: monoclinic, P2(1)/c (No. 14); a = 14.131(3) Å, b = 12.158(1) Å, c = 14.239(2) Å, beta = 110.37(1) degrees, Z = 2, R1 = 0.039 and wR2 = 0.083. For 3, long-lived emission (77 K decay rate approximately 3 x 10 s(-)(1)) from [Mn(12C4)(2)](2+) (the first for eight-coordinate Mn(2+) in stoichiometric compounds) is observed (lambda(max) approximately 546 nm) along with that of the sensitizing MnBr(4)(2)(-) (lambda(max) approximately 513 nm), which is partially quenched. Emission from the seven-coordinate [Mn(15C5)(H(2)O)(2)](2+) species of 4 and [Mn(15C5)(H(2)O)(2)][MnBr(4)] (the first for seven-coordinate Mn(2+) in stoichiometric compounds) peaks at lambda(max) approximately 592 nm. Unusually intense absorptions attributable to the seven-coordinate species are observed at 317 ((2)T(2)((2)I) <-- (6)A(1)), 342 ((4)T(1)((4)P) <-- (6)A(1)), 406 ((4)E((4)G) <-- (6)A(1)), and 531 ((4)T(1)((4)G) <-- (6)A(1)) nm.

  7. Modeling of First-Passage Processes in Financial Markets

    NASA Astrophysics Data System (ADS)

    Inoue, Jun-Ichi; Hino, Hikaru; Sazuka, Naoya; Scalas, Enrico

    2010-03-01

    In this talk, we attempt to make a microscopic modeling the first-passage process (or the first-exit process) of the BUND future by minority game with market history. We find that the first-passage process of the minority game with appropriate history length generates the same properties as the BTP future (the middle and long term Italian Government bonds with fixed interest rates), namely, both first-passage time distributions have a crossover at some specific time scale as is the case for the Mittag-Leffler function. We also provide a macroscopic (or a phenomenological) modeling of the first-passage process of the BTP future and show analytically that the first-passage time distribution of a simplest mixture of the normal compound Poisson processes does not have such a crossover.

  8. Modeling Stochastic Variability in the Numbers of Surviving Salmonella enterica, Enterohemorrhagic Escherichia coli, and Listeria monocytogenes Cells at the Single-Cell Level in a Desiccated Environment

    PubMed Central

    Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso

    2016-01-01

    ABSTRACT Despite effective inactivation procedures, small numbers of bacterial cells may still remain in food samples. The risk that bacteria will survive these procedures has not been estimated precisely because deterministic models cannot be used to describe the uncertain behavior of bacterial populations. We used the Poisson distribution as a representative probability distribution to estimate the variability in bacterial numbers during the inactivation process. Strains of four serotypes of Salmonella enterica, three serotypes of enterohemorrhagic Escherichia coli, and one serotype of Listeria monocytogenes were evaluated for survival. We prepared bacterial cell numbers following a Poisson distribution (indicated by the parameter λ, which was equal to 2) and plated the cells in 96-well microplates, which were stored in a desiccated environment at 10% to 20% relative humidity and at 5, 15, and 25°C. The survival or death of the bacterial cells in each well was confirmed by adding tryptic soy broth as an enrichment culture. Changes in the Poisson distribution parameter during the inactivation process, which represent the variability in the numbers of surviving bacteria, were described by nonlinear regression with an exponential function based on a Weibull distribution. We also examined random changes in the number of surviving bacteria using a random number generator and computer simulations to determine whether the number of surviving bacteria followed a Poisson distribution during the bacterial death process by use of the Poisson process. For small initial cell numbers, more than 80% of the simulated distributions (λ = 2 or 10) followed a Poisson distribution. The results demonstrate that variability in the number of surviving bacteria can be described as a Poisson distribution by use of the model developed by use of the Poisson process. IMPORTANCE We developed a model to enable the quantitative assessment of bacterial survivors of inactivation procedures because the presence of even one bacterium can cause foodborne disease. The results demonstrate that the variability in the numbers of surviving bacteria was described as a Poisson distribution by use of the model developed by use of the Poisson process. Description of the number of surviving bacteria as a probability distribution rather than as the point estimates used in a deterministic approach can provide a more realistic estimation of risk. The probability model should be useful for estimating the quantitative risk of bacterial survival during inactivation. PMID:27940547
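
    The Poisson behavior reported above has a simple probabilistic core: if initial cell numbers are Poisson(lambda0) and each cell survives independently with probability p(t), the thinning property makes the survivor count exactly Poisson(lambda0 * p(t)). The Weibull-type survival curve and the parameter values in this sketch are illustrative assumptions, not the study's fitted values.

```python
import numpy as np

def simulate_survivors(lam0=2.0, n_wells=10000, delta=5.0, beta=0.8, t=10.0, rng=None):
    """Simulate per-well survivor counts during an inactivation process.

    Initial cell numbers are Poisson(lam0); each cell independently survives to
    time t with probability p = exp(-(t/delta)**beta) (an illustrative
    Weibull-type survival curve). By Poisson thinning, survivor counts are
    exactly Poisson(lam0 * p).
    """
    rng = np.random.default_rng() if rng is None else rng
    p = np.exp(-(t / delta) ** beta)
    initial = rng.poisson(lam0, size=n_wells)
    survivors = rng.binomial(initial, p)
    return survivors, lam0 * p

surv, expected_lambda = simulate_survivors(rng=np.random.default_rng(5))
print(surv.mean(), surv.var(), expected_lambda)   # mean ~ variance ~ lam0 * p
```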

  9. Modeling Stochastic Variability in the Numbers of Surviving Salmonella enterica, Enterohemorrhagic Escherichia coli, and Listeria monocytogenes Cells at the Single-Cell Level in a Desiccated Environment.

    PubMed

    Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso; Koseki, Shigenobu

    2017-02-15

    Despite effective inactivation procedures, small numbers of bacterial cells may still remain in food samples. The risk that bacteria will survive these procedures has not been estimated precisely because deterministic models cannot be used to describe the uncertain behavior of bacterial populations. We used the Poisson distribution as a representative probability distribution to estimate the variability in bacterial numbers during the inactivation process. Strains of four serotypes of Salmonella enterica, three serotypes of enterohemorrhagic Escherichia coli, and one serotype of Listeria monocytogenes were evaluated for survival. We prepared bacterial cell numbers following a Poisson distribution (indicated by the parameter λ, which was equal to 2) and plated the cells in 96-well microplates, which were stored in a desiccated environment at 10% to 20% relative humidity and at 5, 15, and 25°C. The survival or death of the bacterial cells in each well was confirmed by adding tryptic soy broth as an enrichment culture. Changes in the Poisson distribution parameter during the inactivation process, which represent the variability in the numbers of surviving bacteria, were described by nonlinear regression with an exponential function based on a Weibull distribution. We also examined random changes in the number of surviving bacteria using a random number generator and computer simulations to determine whether the number of surviving bacteria followed a Poisson distribution during the bacterial death process by use of the Poisson process. For small initial cell numbers, more than 80% of the simulated distributions (λ = 2 or 10) followed a Poisson distribution. The results demonstrate that variability in the number of surviving bacteria can be described as a Poisson distribution by use of the model developed by use of the Poisson process. We developed a model to enable the quantitative assessment of bacterial survivors of inactivation procedures because the presence of even one bacterium can cause foodborne disease. The results demonstrate that the variability in the numbers of surviving bacteria was described as a Poisson distribution by use of the model developed by use of the Poisson process. Description of the number of surviving bacteria as a probability distribution rather than as the point estimates used in a deterministic approach can provide a more realistic estimation of risk. The probability model should be useful for estimating the quantitative risk of bacterial survival during inactivation. Copyright © 2017 Koyama et al.

  10. A GPU-based large-scale Monte Carlo simulation method for systems with long-range interactions

    NASA Astrophysics Data System (ADS)

    Liang, Yihao; Xing, Xiangjun; Li, Yaohang

    2017-06-01

    In this work we present an efficient implementation of Canonical Monte Carlo simulation for Coulomb many body systems on graphics processing units (GPU). Our method takes advantage of the GPU Single Instruction, Multiple Data (SIMD) architectures, and adopts the sequential updating scheme of Metropolis algorithm. It makes no approximation in the computation of energy, and reaches a remarkable 440-fold speedup, compared with the serial implementation on CPU. We further use this method to simulate primitive model electrolytes, and measure very precisely all ion-ion pair correlation functions at high concentrations. From these data, we extract the renormalized Debye length, renormalized valences of constituent ions, and renormalized dielectric constants. These results demonstrate unequivocally physics beyond the classical Poisson-Boltzmann theory.

  11. A differential equation for the Generalized Born radii.

    PubMed

    Fogolari, Federico; Corazza, Alessandra; Esposito, Gennaro

    2013-06-28

    The Generalized Born (GB) model offers a convenient way of representing electrostatics in complex macromolecules like proteins or nucleic acids. The computation of atomic GB radii is currently performed by different non-local approaches involving volume or surface integrals. Here we obtain a non-linear second-order partial differential equation for the Generalized Born radius, which may be solved using local iterative algorithms. The equation is derived under the assumption that the usual GB approximation to the reaction field obeys Laplace's equation. The equation admits as particular solutions the correct GB radii for the sphere and the plane. The tests performed on a set of 55 different proteins show an overall agreement with other reference GB models and "perfect" Poisson-Boltzmann based values.
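
    For background (the abstract does not restate it), the Generalized Born reaction-field energy is usually written in the standard Still et al. form, with R_i the effective Born radii whose computation is the subject of the paper:

```latex
\Delta G_{\mathrm{GB}}
  = -\frac{1}{2}\left(\frac{1}{\epsilon_{\mathrm{in}}}-\frac{1}{\epsilon_{\mathrm{out}}}\right)
    \sum_{i,j}\frac{q_i q_j}{f_{\mathrm{GB}}(r_{ij})},
\qquad
f_{\mathrm{GB}}(r_{ij})
  = \sqrt{r_{ij}^{2}+R_i R_j\exp\!\left(-\frac{r_{ij}^{2}}{4R_iR_j}\right)}.
```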

  12. Auto covariance computer

    NASA Technical Reports Server (NTRS)

    Hepner, T. E.; Meyers, J. F. (Inventor)

    1985-01-01

    A laser velocimeter covariance processor which calculates the auto covariance and cross covariance functions for a turbulent flow field based on Poisson sampled measurements in time from a laser velocimeter is described. The device will process a block of data that is up to 4096 data points in length and return a 512 point covariance function with 48-bit resolution along with a 512 point histogram of the interarrival times which is used to normalize the covariance function. The device is designed to interface and be controlled by a minicomputer from which the data is received and the results returned. A typical 4096 point computation takes approximately 1.5 seconds to receive the data, compute the covariance function, and return the results to the computer.
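
    A plain software sketch of the same estimator (slotted autocovariance for randomly sampled data) is given below. The hardware processor described above performs this with dedicated logic and an interarrival-time histogram; the O(N^2) loop here is purely illustrative, and the test signal is invented.

```python
import numpy as np

def slotted_autocovariance(t, u, max_lag, n_slots=512):
    """Slotted autocovariance estimate for Poisson-sampled velocity data.

    t: sample arrival times, u: velocity samples. All sample pairs are binned
    by time lag into n_slots slots of width max_lag/n_slots; the product of
    fluctuations is accumulated per slot and normalized by the pair count in
    that slot (the role of the interarrival-time histogram in the processor).
    """
    t = np.asarray(t, float)
    u = np.asarray(u, float) - np.mean(u)
    width = max_lag / n_slots
    acc = np.zeros(n_slots)
    cnt = np.zeros(n_slots, dtype=int)
    for i in range(len(t)):
        lags = t[i:] - t[i]
        keep = lags < max_lag
        slots = (lags[keep] / width).astype(int)
        np.add.at(acc, slots, u[i] * u[i:][keep])
        np.add.at(cnt, slots, 1)
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0), cnt

rng = np.random.default_rng(6)
t = np.cumsum(rng.exponential(1.0, 2000))              # Poisson sampling times
u = np.sin(0.5 * t) + 0.3 * rng.normal(size=t.size)    # toy "turbulent" signal
cov, counts = slotted_autocovariance(t, u, max_lag=20.0)
print(cov[:5], counts[:5])
```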

  13. Reversible Heating in Electric Double Layer Capacitors

    NASA Astrophysics Data System (ADS)

    Janssen, Mathijs; van Roij, René

    2017-03-01

    A detailed comparison is made between different viewpoints on reversible heating in electric double layer capacitors. We show in the limit of slow charging that a combined Poisson-Nernst-Planck and heat equation, first studied by d'Entremont and Pilon [J. Power Sources 246, 887 (2014), 10.1016/j.jpowsour.2013.08.024], recovers the temperature changes as predicted by the thermodynamic identity of Janssen et al. [Phys. Rev. Lett. 113, 268501 (2014), 10.1103/PhysRevLett.113.268501], and disagrees with the approximate model of Schiffer et al. [J. Power Sources 160, 765 (2006), 10.1016/j.jpowsour.2005.12.070], which predominates in the literature. The thermal response to the adiabatic charging of supercapacitors contains information on electric double layer formation that has remained largely unexplored.

  14. Bayesian methods in reliability

    NASA Astrophysics Data System (ADS)

    Sander, P.; Badoux, R.

    1991-11-01

    The present proceedings from a course on Bayesian methods in reliability encompasses Bayesian statistical methods and their computational implementation, models for analyzing censored data from nonrepairable systems, the traits of repairable systems and growth models, the use of expert judgment, and a review of the problem of forecasting software reliability. Specific issues addressed include the use of Bayesian methods to estimate the leak rate of a gas pipeline, approximate analyses under great prior uncertainty, reliability estimation techniques, and a nonhomogeneous Poisson process. Also addressed are the calibration sets and seed variables of expert judgment systems for risk assessment, experimental illustrations of the use of expert judgment for reliability testing, and analyses of the predictive quality of software-reliability growth models such as the Weibull order statistics.

  15. Probabilities for gravitational lensing by point masses in a locally inhomogeneous universe

    NASA Technical Reports Server (NTRS)

    Isaacson, Jeffrey A.; Canizares, Claude R.

    1989-01-01

    Probability functions for gravitational lensing by point masses that incorporate Poisson statistics and flux conservation are formulated in the Dyer-Roeder construction. Optical depths to lensing for distant sources are calculated using both the method of Press and Gunn (1973) which counts lenses in an otherwise empty cone, and the method of Ehlers and Schneider (1986) which projects lensing cross sections onto the source sphere. These are then used as parameters of the probability density for lensing in the case of a critical (q0 = 1/2) Friedmann universe. A comparison of the probability functions indicates that the effects of angle-averaging can be well approximated by adjusting the average magnification along a random line of sight so as to conserve flux.

  16. Analytic drain current model for III-V cylindrical nanowire transistors

    NASA Astrophysics Data System (ADS)

    Marin, E. G.; Ruiz, F. G.; Schmidt, V.; Godoy, A.; Riel, H.; Gámiz, F.

    2015-07-01

    An analytical model is proposed to determine the drain current of III-V cylindrical nanowires (NWs). The model uses the gradual channel approximation and takes into account the complete analytical solution of the Poisson and Schrödinger equations for the Γ-valley and for an arbitrary number of subbands. Fermi-Dirac statistics are considered to describe the 1D electron gas in the NWs, with the resulting recursive Fermi-Dirac integral of order -1/2 successfully integrated under reasonable assumptions. The model has been validated against numerical simulations, showing excellent agreement for different semiconductor materials, diameters up to 40 nm, gate overdrive biases up to 0.7 V, and densities of interface states up to 10^13 eV^-1 cm^-2.

  17. Note on the coefficient of variations of neuronal spike trains.

    PubMed

    Lengler, Johannes; Steger, Angelika

    2017-08-01

    It is known that many neurons in the brain show spike trains with a coefficient of variation (CV) of the interspike times of approximately 1, thus resembling the properties of Poisson spike trains. Computational studies have been able to reproduce this phenomenon. However, the underlying models were too complex to be examined analytically. In this paper, we offer a simple model that shows the same effect but is accessible to an analytic treatment. The model is a random walk model with a reflecting barrier; we give explicit formulas for the CV in the regime of excess inhibition. We also analyze the effect of probabilistic synapses in our model and show that it resembles previous findings that were obtained by simulation.
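
    A toy version of such a model is easy to simulate: a ±1 random walk with a reflecting barrier at zero and a spike-and-reset threshold. The step probability and threshold below are illustrative choices, not the paper's parameters; with net inhibition (p_up < 0.5) the interspike-interval CV comes out close to 1.

```python
import numpy as np

def isi_cv_random_walk(p_up=0.45, threshold=10, n_spikes=1000, rng=None):
    """CV of interspike intervals for a reflected random-walk neuron model.

    The membrane variable takes +1 steps with probability p_up and -1 steps
    otherwise, is reflected at 0, and emits a spike (then resets to 0) when it
    reaches `threshold`. Parameters are illustrative only.
    """
    rng = np.random.default_rng() if rng is None else rng
    isis, v, steps = [], 0, 0
    while len(isis) < n_spikes:
        steps += 1
        v += 1 if rng.random() < p_up else -1
        if v < 0:
            v = 0                      # reflecting barrier
        if v >= threshold:
            isis.append(steps)
            v, steps = 0, 0
    isis = np.asarray(isis, float)
    return isis.std() / isis.mean()

print(isi_cv_random_walk(rng=np.random.default_rng(7)))   # typically close to 1
```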

  18. The Generalized Born solvation model: What is it?

    NASA Astrophysics Data System (ADS)

    Onufriev, Alexey

    2004-03-01

    Implicit solvation models provide, for many applications, an effective way of describing the electrostatic effects of aqueous solvation. Here we outline the main approximations behind the popular Generalized Born solvation model. We show how its accuracy, relative to the Poisson-Boltzmann treatment, can be significantly improved in a computationally inexpensive manner to make the model useful in the studies of large-scale conformational transitions at the atomic level. The improved model is tested in a molecular dynamics simulation of folding of a 46-residue (three helix bundle) protein. Starting from an extended structure at 450 K, the protein folds to the lowest energy conformation within 6 ns of simulation time, and the predicted structure differs from the native one by 2.4 Å (backbone RMSD).

  19. Oxime-Induced Reactivation of Carboxylesterase Inhibited by Organophosphorus Compounds

    DTIC Science & Technology

    1993-05-13

    detoxication enzyme for OP compounds (Maxwell, 1992a), when in the presence of an uncharged oxime, becomes even more effective because it is easily...Wolring, 1984). Therefore, oxime-induced reactivation of OP-inhibited CaE for protection by enhancement of OP detoxication occurs at approximately the

  20. Comparison of two extraction techniques, solid-phase microextraction versus continuous liquid-liquid extraction/solvent-assisted flavor evaporation, for the analysis of flavor compounds in gueuze lambic beer.

    PubMed

    Thompson-Witrick, Katherine A; Rouseff, Russell L; Cadawallader, Keith R; Duncan, Susan E; Eigel, William N; Tanko, James M; O'Keefe, Sean F

    2015-03-01

    Lambic is a beer style that undergoes spontaneous fermentation and is traditionally produced in the Payottenland region of Belgium, a valley on the Senne River west of Brussels. This region appears to have the perfect combination of airborne microorganisms required for lambic's spontaneous fermentation. Gueuze lambic is a substyle of lambic that is made by mixing young (approximately 1 year) and old (approximately 2 to 3 years) lambics with subsequent bottle conditioning. We compared 2 extraction techniques, solid-phase microextraction (SPME) and continuous liquid-liquid extraction/solvent-assisted flavor evaporation (CLLE/SAFE), for the isolation of volatile compounds in commercially produced gueuze lambic beer. Fifty-four volatile compounds were identified and could be divided into acids (14), alcohols (12), aldehydes (3), esters (20), phenols (3), and miscellaneous (2). SPME extracted a total of 40 volatile compounds, whereas CLLE/SAFE extracted 36 volatile compounds. CLLE/SAFE extracted a greater number of acids than SPME, whereas SPME was able to isolate a greater number of esters. Neither extraction technique proved to be clearly superior and both extraction methods can be utilized for the isolation of volatile compounds found in gueuze lambic beer. © 2015 Institute of Food Technologists®
