Science.gov

Sample records for additive poisson models

  1. Modelling of filariasis in East Java with Poisson regression and generalized Poisson regression models

    NASA Astrophysics Data System (ADS)

    Darnah

    2016-04-01

Poisson regression is used when the response variable is count data based on the Poisson distribution, which assumes equidispersion (variance equal to the mean). In practice, count data are often over- or underdispersed, so that Poisson regression is inappropriate: it may underestimate the standard errors and overstate the significance of the regression parameters, giving misleading inferences about them. This paper suggests the generalized Poisson regression model for handling over- and underdispersion in the Poisson regression setting. Both the Poisson regression model and the generalized Poisson regression model are applied to the number of filariasis cases in East Java. According to the Poisson regression model, the factors influencing filariasis are the percentage of families who do not practice clean and healthy living and the percentage of families who do not have a healthy house. The Poisson regression model exhibits overdispersion, so generalized Poisson regression is used instead. In the best generalized Poisson regression model, the factor influencing filariasis is the percentage of families who do not have a healthy house. The model is interpreted as follows: each additional 1 percent of families without a healthy house adds one filariasis patient.
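
The dispersion check at the heart of this abstract is easy to sketch numerically. The snippet below (Python, with purely illustrative parameters, not the East Java data) simulates gamma-mixed Poisson counts and computes the dispersion index that signals when plain Poisson regression is inappropriate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative overdispersed counts: a gamma-mixed Poisson (i.e. negative
# binomial), standing in for case counts whose variance exceeds the mean.
n = 5000
lam = rng.gamma(shape=2.0, scale=2.0, size=n)  # heterogeneous latent rates
y = rng.poisson(lam)

# Under a pure Poisson model the dispersion index (variance / mean) is ~1;
# values well above 1 indicate overdispersion, the situation the paper addresses.
dispersion = y.var() / y.mean()
print(f"dispersion index = {dispersion:.2f}")
```

A dispersion index near 1 supports the Poisson model; values well above 1, as here, motivate a generalized Poisson or similar alternative.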

  2. Impact of Influenza on Outpatient Visits, Hospitalizations, and Deaths by Using a Time Series Poisson Generalized Additive Model

    PubMed Central

    Guo, Ru-ning; Zheng, Hui-zhen; Ou, Chun-quan; Huang, Li-qun; Zhou, Yong; Zhang, Xin; Liang, Can-kun; Lin, Jin-yan; Zhong, Hao-jie; Song, Tie; Luo, Hui-ming

    2016-01-01

    Background The disease burden associated with influenza in developing tropical and subtropical countries is poorly understood owing to the lack of a comprehensive disease surveillance system and information-exchange mechanisms. The impact of influenza on outpatient visits, hospital admissions, and deaths has not been fully demonstrated to date in south China. Methods A time series Poisson generalized additive model was used to quantitatively assess influenza-like illness (ILI) and influenza disease burden by using influenza surveillance data in Zhuhai City from 2007 to 2009, combined with the outpatient, inpatient, and respiratory disease mortality data of the same period. Results The influenza activity in Zhuhai City demonstrated a typical subtropical seasonal pattern; however, each influenza virus subtype showed a specific transmission variation. The weekly ILI case number and virus isolation rate had a very close positive correlation (r = 0.774, P < 0.0001). The impact of ILI and influenza on weekly outpatient visits was statistically significant (P < 0.05). We determined that 10.7% of outpatient visits were associated with ILI and 1.88% were associated with influenza. ILI also had a significant influence on the hospitalization rates (P < 0.05), but mainly in populations <25 years of age. No statistically significant effect of influenza on hospital admissions was found (P > 0.05). The impact of ILI on chronic obstructive pulmonary disease (COPD) was most significant (P < 0.05), with 33.1% of COPD-related deaths being attributable to ILI. The impact of influenza on the mortality rate requires further evaluation. Conclusions ILI is a feasible indicator of influenza activity. Both ILI and influenza have a large impact on outpatient visits. Although ILI affects the number of hospital admissions and deaths, we found no consistent influence of influenza, which requires further assessment. PMID:26894876

  3. Estimation of count data using mixed Poisson, generalized Poisson and finite Poisson mixture regression models

    NASA Astrophysics Data System (ADS)

    Zamani, Hossein; Faroughi, Pouya; Ismail, Noriszura

    2014-06-01

This study relates the Poisson, mixed Poisson (MP), generalized Poisson (GP) and finite Poisson mixture (FPM) regression models through their mean-variance relationships, and suggests the application of these models to overdispersed count data. As an illustration, the regression models are fitted to US skin care count data. The results indicate that the FPM regression model is the best model, since it provides the largest log likelihood and the smallest AIC, followed by the Poisson-inverse Gaussian (PIG), GP and negative binomial (NB) regression models. The results also show that the NB, PIG and GP regression models provide similar results.
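
The log-likelihood/AIC comparison described above can be sketched on synthetic data. The example below (illustrative only; it fits the negative binomial by moments rather than full maximum likelihood, and does not use the skin care data) shows why an overdispersed-count model beats the plain Poisson on AIC:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Overdispersed synthetic counts, standing in for the skin care data.
y = rng.negative_binomial(2, 0.3, size=2000)

# Poisson MLE: the rate estimate is simply the sample mean.
lam = y.mean()
ll_pois = stats.poisson.logpmf(y, lam).sum()

# Negative binomial fitted by moments (a quick stand-in for the MLE).
m, v = y.mean(), y.var()
p = m / v
r = m * p / (1.0 - p)
ll_nb = stats.nbinom.logpmf(y, r, p).sum()

aic_pois = 2 * 1 - 2 * ll_pois   # one free parameter
aic_nb = 2 * 2 - 2 * ll_nb       # two free parameters
print(f"AIC Poisson = {aic_pois:.1f}, AIC NB = {aic_nb:.1f}")
```

The extra parameter of the negative binomial is overwhelmingly justified when the data are overdispersed, which is the same logic the study applies to rank FPM, PIG, GP and NB fits.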

  4. Extensions of Rasch's Multiplicative Poisson Model.

    ERIC Educational Resources Information Center

    Jansen, Margo G. H.; van Duijn, Marijtje A. J.

    1992-01-01

    A model developed by G. Rasch that assumes scores on some attainment tests can be realizations of a Poisson process is explained and expanded by assuming a prior distribution, with fixed but unknown parameters, for the subject parameters. How additional between-subject and within-subject factors can be incorporated is discussed. (SLD)

  5. The Poisson and Exponential Models

    ERIC Educational Resources Information Center

    Richards, Winston A.

    1978-01-01

The students in a basic course on probability and statistics in Trinidad demonstrated that the number of fatal highway accidents appeared to follow a Poisson distribution, while the length of time between deaths followed an exponential distribution. (MN)
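
The duality the students observed is a standard property: if gaps between events are exponential, counts per fixed window are Poisson. A small simulation (with an arbitrary illustrative rate of 3 events per unit time) makes this concrete:

```python
import numpy as np

rng = np.random.default_rng(2)

# Event times built from exponential gaps at 3 events per unit time.
rate = 3.0
gaps = rng.exponential(1.0 / rate, size=60_000)
times = np.cumsum(gaps)

# Counts per unit-length window should then be Poisson(3):
# mean and variance both close to 3.
horizon = int(times[-1])
counts, _ = np.histogram(times, bins=np.arange(horizon + 1))
print(f"mean = {counts.mean():.2f}, variance = {counts.var():.2f}")
```

The matching mean and variance of the window counts, together with the exponential gaps used to build them, is exactly the pairing the Trinidad accident data exhibited.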

  6. On Generalizing the Two-Poisson Model.

    ERIC Educational Resources Information Center

    Srinivasan, Padmini

    1990-01-01

After reviewing the literature on automatic indexing research, an experiment is described that examined term distribution and the effectiveness of the Two Poisson and Three Poisson models in identifying good index terms. The conclusion reached is that these models should be applied with caution in document retrieval. (25 references) (EAM)

  7. Loop coproducts, Gaudin models and Poisson coalgebras

    NASA Astrophysics Data System (ADS)

    Musso, F.

    2010-10-01

In this paper we show that if A is a Poisson algebra equipped with a set of maps Δλ(i): A → A⊗N satisfying suitable conditions, then the images of the Casimir functions of A under the maps Δλ(i) (that we call 'loop coproducts') are in involution. Rational, trigonometric and elliptic Gaudin models can be recovered as particular cases of this construction, and we show that the same happens for the integrable (or partially integrable) models that can be obtained through the so-called coproduct method. On the other hand, we show that the loop coproduct approach provides a natural generalization of the Gaudin algebras from the Lie-Poisson to the generic Poisson algebra context and, hopefully, can lead to the definition of new integrable models.

  8. MODELING PAVEMENT DETERIORATION PROCESSES BY POISSON HIDDEN MARKOV MODELS

    NASA Astrophysics Data System (ADS)

    Nam, Le Thanh; Kaito, Kiyoyuki; Kobayashi, Kiyoshi; Okizuka, Ryosuke

    In pavement management, it is important to estimate lifecycle cost, which is composed of the expenses for repairing local damages, including potholes, and repairing and rehabilitating the surface and base layers of pavements, including overlays. In this study, a model is produced under the assumption that the deterioration process of pavement is a complex one that includes local damages, which occur frequently, and the deterioration of the surface and base layers of pavement, which progresses slowly. The variation in pavement soundness is expressed by the Markov deterioration model and the Poisson hidden Markov deterioration model, in which the frequency of local damage depends on the distribution of pavement soundness, is formulated. In addition, the authors suggest a model estimation method using the Markov Chain Monte Carlo (MCMC) method, and attempt to demonstrate the applicability of the proposed Poisson hidden Markov deterioration model by studying concrete application cases.
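
The structure described above, slow hidden deterioration modulating frequent local damage, is a Poisson hidden Markov model. The sketch below simulates one (all transition probabilities and damage rates are illustrative, not estimates from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hidden soundness states switch slowly via a Markov chain; observed local
# damage counts are Poisson with a state-dependent rate.
P = np.array([[0.95, 0.05],    # sound -> sound / deteriorated
              [0.10, 0.90]])   # deteriorated -> sound / deteriorated
rates = np.array([0.5, 4.0])   # damage events per period in each state

T = 20_000
states = np.zeros(T, dtype=int)
for t in range(1, T):
    states[t] = rng.choice(2, p=P[states[t - 1]])
damage = rng.poisson(rates[states])
```

Estimating P and the rates from the observed `damage` alone is the inference problem the authors attack with MCMC; the simulation shows why the damage counts cluster in time even though each count is conditionally Poisson.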

  9. Modelling Documents with Multiple Poisson Distributions.

    ERIC Educational Resources Information Center

    Margulis, Eugene L.

    1993-01-01

    Reports on the validity of the Multiple Poisson (nP) model of word distribution in full-text document collections. A practical algorithm for determining whether a certain word is distributed according to an nP distribution and the results of a test of this algorithm in three different document collections are described. (14 references) (KRN)

  10. Rasch's Multiplicative Poisson Model with Covariates.

    ERIC Educational Resources Information Center

    Ogasawara, Haruhiko

    1996-01-01

    Rasch's multiplicative Poisson model is extended so that parameters for individuals in the prior gamma distribution have continuous covariates. Parameters for individuals are integrated out, and hyperparameters in the prior distribution are estimated by a numerical method separately from difficulty parameters that are treated as fixed parameters…

  11. Destructive weighted Poisson cure rate models.

    PubMed

    Rodrigues, Josemar; de Castro, Mário; Balakrishnan, N; Cancho, Vicente G

    2011-07-01

    In this paper, we develop a flexible cure rate survival model by assuming the number of competing causes of the event of interest to follow a compound weighted Poisson distribution. This model is more flexible in terms of dispersion than the promotion time cure model. Moreover, it gives an interesting and realistic interpretation of the biological mechanism of the occurrence of event of interest as it includes a destructive process of the initial risk factors in a competitive scenario. In other words, what is recorded is only from the undamaged portion of the original number of risk factors.

  12. Modelling of nonlinear filtering Poisson time series

    NASA Astrophysics Data System (ADS)

    Bochkarev, Vladimir V.; Belashova, Inna A.

    2016-08-01

In this article, algorithms for non-linear filtering of Poisson time series are tested using statistical modelling. The objective is to find a representation of a time series as a wavelet series with a small number of non-zero coefficients, which allows statistically significant details to be distinguished. Well-known efficient algorithms of non-linear wavelet filtering exist for the case when the values of a time series have a normal distribution; if the distribution is not normal, good results can be expected from maximum likelihood estimation. Filtering under the maximum-likelihood criterion is studied using Poisson time series as an example. For direct optimisation of the likelihood function, both stochastic (genetic algorithms, simulated annealing) and deterministic optimisation algorithms are used. Testing of the algorithm on both simulated series and empirical data (series of rare-word frequencies from the Google Books Ngram data) showed that filtering based on the maximum-likelihood criterion has a great advantage over the well-known algorithms in the case of Poisson series. The most promising optimisation methods for this problem were also identified.

  13. Nonlocal Poisson-Fermi model for ionic solvent.

    PubMed

    Xie, Dexuan; Liu, Jinn-Liang; Eisenberg, Bob

    2016-07-01

    We propose a nonlocal Poisson-Fermi model for ionic solvent that includes ion size effects and polarization correlations among water molecules in the calculation of electrostatic potential. It includes the previous Poisson-Fermi models as special cases, and its solution is the convolution of a solution of the corresponding nonlocal Poisson dielectric model with a Yukawa-like kernel function. The Fermi distribution is shown to be a set of optimal ionic concentration functions in the sense of minimizing an electrostatic potential free energy. Numerical results are reported to show the difference between a Poisson-Fermi solution and a corresponding Poisson solution. PMID:27575084

  14. Nonlocal Poisson-Fermi model for ionic solvent

    NASA Astrophysics Data System (ADS)

    Xie, Dexuan; Liu, Jinn-Liang; Eisenberg, Bob

    2016-07-01

    We propose a nonlocal Poisson-Fermi model for ionic solvent that includes ion size effects and polarization correlations among water molecules in the calculation of electrostatic potential. It includes the previous Poisson-Fermi models as special cases, and its solution is the convolution of a solution of the corresponding nonlocal Poisson dielectric model with a Yukawa-like kernel function. The Fermi distribution is shown to be a set of optimal ionic concentration functions in the sense of minimizing an electrostatic potential free energy. Numerical results are reported to show the difference between a Poisson-Fermi solution and a corresponding Poisson solution.

  15. A Poisson model for random multigraphs

    PubMed Central

    Ranola, John M. O.; Ahn, Sangtae; Sehl, Mary; Smith, Desmond J.; Lange, Kenneth

    2010-01-01

Motivation: Biological networks are often modeled by random graphs. A better modeling vehicle is a multigraph where each pair of nodes is connected by a Poisson number of edges. In the current model, the mean number of edges equals the product of two propensities, one for each node. In this context it is possible to construct a simple and effective algorithm for rapid maximum likelihood estimation of all propensities. Given estimated propensities, it is then possible to test statistically for functionally connected nodes that show an excess of observed edges over expected edges. The model extends readily to directed multigraphs. Here, propensities are replaced by outgoing and incoming propensities. Results: The theory is applied to real data on neuronal connections, interacting genes in radiation hybrids, interacting proteins in a literature-curated database, and letter and word pairs in seven Shakespearean plays. Availability: All data used are fully available online from their respective sites. Source code and software are available from http://code.google.com/p/poisson-multigraph/ Contact: klange@ucla.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20554690
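
The product-of-propensities model lends itself to a very simple maximum likelihood scheme, as the abstract notes. The sketch below (synthetic data; the damped fixed-point update is one plausible implementation of the score equation, not necessarily the authors' algorithm) recovers propensities from simulated edge counts:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical multigraph: the edge count between nodes i and j is Poisson
# with mean p_i * p_j, the product-of-propensities model described above.
n = 30
p_true = rng.uniform(0.5, 2.0, size=n)
X = rng.poisson(np.outer(p_true, p_true))
X = np.triu(X, 1)
X = X + X.T                      # symmetric counts, empty diagonal

# Maximum likelihood via a damped fixed-point iteration on the score
# equation d_i = p_i * (sum_j p_j - p_i), where d_i is node i's edge total.
p = np.ones(n)
for _ in range(500):
    p = 0.5 * p + 0.5 * X.sum(axis=1) / (p.sum() - p)
```

The estimated propensities track the true ones closely even for this small graph, which is what makes the excess-edge tests the abstract mentions practical at scale.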

  16. Poisson-Boltzmann-Nernst-Planck model

    NASA Astrophysics Data System (ADS)

    Zheng, Qiong; Wei, Guo-Wei

    2011-05-01

    The Poisson-Nernst-Planck (PNP) model is based on a mean-field approximation of ion interactions and continuum descriptions of concentration and electrostatic potential. It provides qualitative explanation and increasingly quantitative predictions of experimental measurements for the ion transport problems in many areas such as semiconductor devices, nanofluidic systems, and biological systems, despite many limitations. While the PNP model gives a good prediction of the ion transport phenomenon for chemical, physical, and biological systems, the number of equations to be solved and the number of diffusion coefficient profiles to be determined for the calculation directly depend on the number of ion species in the system, since each ion species corresponds to one Nernst-Planck equation and one position-dependent diffusion coefficient profile. In a complex system with multiple ion species, the PNP can be computationally expensive and parameter demanding, as experimental measurements of diffusion coefficient profiles are generally quite limited for most confined regions such as ion channels, nanostructures and nanopores. We propose an alternative model to reduce number of Nernst-Planck equations to be solved in complex chemical and biological systems with multiple ion species by substituting Nernst-Planck equations with Boltzmann distributions of ion concentrations. As such, we solve the coupled Poisson-Boltzmann and Nernst-Planck (PBNP) equations, instead of the PNP equations. The proposed PBNP equations are derived from a total energy functional by using the variational principle. We design a number of computational techniques, including the Dirichlet to Neumann mapping, the matched interface and boundary, and relaxation based iterative procedure, to ensure efficient solution of the proposed PBNP equations. 
Two protein molecules, cytochrome c551 and Gramicidin A, are employed to validate the proposed model under a wide range of bulk ion concentrations and external

  17. Poisson-Boltzmann-Nernst-Planck model

    SciTech Connect

    Zheng Qiong; Wei Guowei

    2011-05-21

    The Poisson-Nernst-Planck (PNP) model is based on a mean-field approximation of ion interactions and continuum descriptions of concentration and electrostatic potential. It provides qualitative explanation and increasingly quantitative predictions of experimental measurements for the ion transport problems in many areas such as semiconductor devices, nanofluidic systems, and biological systems, despite many limitations. While the PNP model gives a good prediction of the ion transport phenomenon for chemical, physical, and biological systems, the number of equations to be solved and the number of diffusion coefficient profiles to be determined for the calculation directly depend on the number of ion species in the system, since each ion species corresponds to one Nernst-Planck equation and one position-dependent diffusion coefficient profile. In a complex system with multiple ion species, the PNP can be computationally expensive and parameter demanding, as experimental measurements of diffusion coefficient profiles are generally quite limited for most confined regions such as ion channels, nanostructures and nanopores. We propose an alternative model to reduce number of Nernst-Planck equations to be solved in complex chemical and biological systems with multiple ion species by substituting Nernst-Planck equations with Boltzmann distributions of ion concentrations. As such, we solve the coupled Poisson-Boltzmann and Nernst-Planck (PBNP) equations, instead of the PNP equations. The proposed PBNP equations are derived from a total energy functional by using the variational principle. We design a number of computational techniques, including the Dirichlet to Neumann mapping, the matched interface and boundary, and relaxation based iterative procedure, to ensure efficient solution of the proposed PBNP equations. 
Two protein molecules, cytochrome c551 and Gramicidin A, are employed to validate the proposed model under a wide range of bulk ion concentrations and external

  18. Analyzing Historical Count Data: Poisson and Negative Binomial Regression Models.

    ERIC Educational Resources Information Center

    Beck, E. M.; Tolnay, Stewart E.

    1995-01-01

    Asserts that traditional approaches to multivariate analysis, including standard linear regression techniques, ignore the special character of count data. Explicates three suitable alternatives to standard regression techniques, a simple Poisson regression, a modified Poisson regression, and a negative binomial model. (MJP)

  19. Collision prediction models using multivariate Poisson-lognormal regression.

    PubMed

    El-Basyouny, Karim; Sayed, Tarek

    2009-07-01

This paper advocates the use of multivariate Poisson-lognormal (MVPLN) regression to develop models for collision count data. The MVPLN approach presents an opportunity to incorporate the correlations across collision severity levels and their influence on safety analyses. The paper introduces a new multivariate hazardous location identification technique, which generalizes the univariate posterior probability of excess that has been commonly proposed and applied in the literature. In addition, the paper presents an alternative approach for quantifying the effect of the multivariate structure on the precision of expected collision frequency. The MVPLN approach is compared with the independent (separate) univariate Poisson-lognormal (PLN) models with respect to model inference, goodness-of-fit, identification of hot spots and precision of expected collision frequency. The MVPLN is modeled using the WinBUGS platform, which facilitates computation of posterior distributions as well as providing a goodness-of-fit measure for model comparisons. The results indicate that the estimates of the extra Poisson variation parameters were considerably smaller under MVPLN, leading to higher precision. The improvement in precision is due mainly to the fact that MVPLN accounts for the correlation between the latent variables representing property damage only (PDO) and injuries plus fatalities (I+F). This correlation was estimated at 0.758, which is highly significant, suggesting that higher PDO rates are associated with higher I+F rates, as the collision likelihood for both types is likely to rise due to similar deficiencies in roadway design and/or other unobserved factors. In terms of goodness-of-fit, the MVPLN model provided a fit superior to that of the independent univariate models. The multivariate hazardous location identification results demonstrated that some hazardous locations could be overlooked if the analysis was restricted to the univariate models. PMID:19540972

  20. Modeling laser velocimeter signals as triply stochastic Poisson processes

    NASA Technical Reports Server (NTRS)

    Mayo, W. T., Jr.

    1976-01-01

    Previous models of laser Doppler velocimeter (LDV) systems have not adequately described dual-scatter signals in a manner useful for analysis and simulation of low-level photon-limited signals. At low photon rates, an LDV signal at the output of a photomultiplier tube is a compound nonhomogeneous filtered Poisson process, whose intensity function is another (slower) Poisson process with the nonstationary rate and frequency parameters controlled by a random flow (slowest) process. In the present paper, generalized Poisson shot noise models are developed for low-level LDV signals. Theoretical results useful in detection error analysis and simulation are presented, along with measurements of burst amplitude statistics. Computer generated simulations illustrate the difference between Gaussian and Poisson models of low-level signals.
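
The "Poisson process whose intensity is itself random" structure can be illustrated one level down: a nonhomogeneous Poisson process with a burst-shaped intensity, simulated by Lewis-Shedler thinning. All parameters below are illustrative stand-ins for an LDV burst, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

# Photon arrivals whose intensity is a Doppler-style "burst" on a low
# background; in the triply stochastic model, burst timing and frequency
# would themselves be random.
peak, bg, t0, width, T = 50.0, 1.0, 5.0, 0.5, 10.0

def intensity(t):
    return bg + peak * np.exp(-0.5 * ((t - t0) / width) ** 2)

# Lewis-Shedler thinning: propose candidates at the maximum rate, then
# accept each with probability intensity(t) / lam_max.
lam_max = bg + peak
n_cand = rng.poisson(lam_max * T)
cand = np.sort(rng.uniform(0.0, T, n_cand))
events = cand[rng.uniform(size=n_cand) < intensity(cand) / lam_max]

in_burst = int(np.sum((events > t0 - 1) & (events < t0 + 1)))
in_quiet = int(np.sum(events < 2.0))
```

At low photon rates the discreteness of `events` is exactly why the Gaussian approximation breaks down, which is the paper's motivation for Poisson shot-noise models.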

  1. Validation of the Poisson Stochastic Radiative Transfer Model

    NASA Technical Reports Server (NTRS)

    Zhuravleva, Tatiana; Marshak, Alexander

    2004-01-01

A new approach to validation of the Poisson stochastic radiative transfer method is proposed. In contrast to other validations of stochastic models, the main parameter of the Poisson model responsible for cloud geometrical structure, the cloud aspect ratio, is determined entirely by matching measurements and calculations of the direct solar radiation. If measurements of the direct solar radiation are unavailable, it is shown that there is a range of aspect ratios that allows the stochastic model to accurately approximate the average measurements of surface downward and cloud-top upward fluxes. Realizations of the fractionally integrated cascade model are taken as a prototype of real measurements.

  2. Markov modulated Poisson process models incorporating covariates for rainfall intensity.

    PubMed

    Thayakaran, R; Ramesh, N I

    2013-01-01

    Time series of rainfall bucket tip times at the Beaufort Park station, Bracknell, in the UK are modelled by a class of Markov modulated Poisson processes (MMPP) which may be thought of as a generalization of the Poisson process. Our main focus in this paper is to investigate the effects of including covariate information into the MMPP model framework on statistical properties. In particular, we look at three types of time-varying covariates namely temperature, sea level pressure, and relative humidity that are thought to be affecting the rainfall arrival process. Maximum likelihood estimation is used to obtain the parameter estimates, and likelihood ratio tests are employed in model comparison. Simulated data from the fitted model are used to make statistical inferences about the accumulated rainfall in the discrete time interval. Variability of the daily Poisson arrival rates is studied.
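
An MMPP can be simulated directly: a continuous-time Markov chain switches the Poisson arrival rate between states. The two-state sketch below (with invented rates, not the Bracknell estimates) shows the over-dispersed, bursty arrival pattern that motivates the model for rainfall bucket tips:

```python
import numpy as np

rng = np.random.default_rng(6)

# Two hidden states ("dry" / "wet") modulate the bucket-tip rate.
tip_rate = np.array([0.2, 5.0])      # tips per minute in each state
leave_rate = np.array([0.01, 0.05])  # rate of leaving each state

t, state, T = 0.0, 0, 50_000.0
arrivals = []
while t < T:
    dwell = rng.exponential(1.0 / leave_rate[state])   # sojourn time
    n_tips = rng.poisson(tip_rate[state] * dwell)      # tips in this sojourn
    arrivals.extend(t + np.sort(rng.uniform(0.0, dwell, n_tips)))
    t += dwell
    state = 1 - state

gaps = np.diff(np.array(arrivals))
cv = gaps.std() / gaps.mean()   # > 1 means burstier than a plain Poisson process
```

A homogeneous Poisson process has interarrival coefficient of variation 1; the MMPP's cv well above 1 is the extra variability that covariates such as temperature and pressure are introduced to explain.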

  3. On supermatrix models, Poisson geometry, and noncommutative supersymmetric gauge theories

    SciTech Connect

    Klimčík, Ctirad

    2015-12-15

    We construct a new supermatrix model which represents a manifestly supersymmetric noncommutative regularisation of the UOSp(2|1) supersymmetric Schwinger model on the supersphere. Our construction is much simpler than those already existing in the literature and it was found by using Poisson geometry in a substantial way.

  4. Poisson and Multinomial Mixture Models for Multivariate SIMS Image Segmentation

    SciTech Connect

    Willse, Alan R.; Tyler, Bonnie

    2002-11-08

    Multivariate statistical methods have been advocated for analysis of spectral images, such as those obtained with imaging time-of-flight secondary ion mass spectrometry (TOF-SIMS). TOF-SIMS images using total secondary ion counts or secondary ion counts at individual masses often fail to reveal all salient chemical patterns on the surface. Multivariate methods simultaneously analyze peak intensities at all masses. We propose multivariate methods based on Poisson and multinomial mixture models to segment SIMS images into chemically homogeneous regions. The Poisson mixture model is derived from the assumption that secondary ion counts at any mass in a chemically homogeneous region vary according to the Poisson distribution. The multinomial model is derived as a standardized Poisson mixture model, which is analogous to standardizing the data by dividing by total secondary ion counts. The methods are adapted for contextual image segmentation, allowing for spatial correlation of neighboring pixels. The methods are applied to 52 mass units of a SIMS image with known chemical components. The spectral profile and relative prevalence for each chemical phase are obtained from estimates of model parameters.
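
The core of the Poisson mixture segmentation can be sketched with a one-dimensional EM fit: pixels from chemically homogeneous regions have Poisson counts with region-specific means, and EM recovers the components. The data and starting values below are synthetic, and this omits the contextual (spatially correlated) extension the paper adds:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)

# Pixel ion counts from two hypothetical chemical phases.
y = np.concatenate([rng.poisson(2.0, 600), rng.poisson(12.0, 400)])

# EM for a two-component Poisson mixture.
w, lam = np.array([0.5, 0.5]), np.array([1.0, 10.0])
for _ in range(100):
    # E-step: responsibilities of each component for each pixel.
    logp = stats.poisson.logpmf(y[:, None], lam) + np.log(w)
    r = np.exp(logp - logp.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)
    # M-step: reweighted mixing proportions and component means.
    w = r.mean(axis=0)
    lam = (r * y[:, None]).sum(axis=0) / r.sum(axis=0)
```

The fitted `lam` values are the per-phase spectral intensities; in the full method the same idea runs jointly over all mass channels, with spatial smoothing of the responsibilities.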

  5. Using the Gamma-Poisson Model to Predict Library Circulations.

    ERIC Educational Resources Information Center

    Burrell, Quentin L.

    1990-01-01

    Argues that the gamma mixture of Poisson processes, for all its perceived defects, can be used to make predictions regarding future library book circulations of a quality adequate for general management requirements. The use of the model is extensively illustrated with data from two academic libraries. (Nine references) (CLB)
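
The gamma-Poisson prediction scheme rests on conjugacy: if a title's latent rate is Gamma(a, b) and we observe x circulations in one year, the posterior rate is Gamma(a + x, b + 1), so next year's predicted circulation is (a + x)/(b + 1). The sketch below uses invented hyperparameters, not the two libraries' data:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical library: each title's yearly circulation rate is gamma
# distributed; observed circulations are Poisson given that rate.
a, b = 1.5, 1.0                          # illustrative gamma shape and rate
lam = rng.gamma(a, 1.0 / b, size=4000)   # latent per-title rates
year1 = rng.poisson(lam)
year2 = rng.poisson(lam)

# Conjugate update: posterior Gamma(a + x, b + 1) gives predictive mean
# (a + x) / (b + 1) for each title's next-year circulation.
pred = (a + year1) / (b + 1.0)

mse_pred = np.mean((year2 - pred) ** 2)
mse_pooled = np.mean((year2 - year1.mean()) ** 2)
```

Shrinking each title's year-1 count toward the collection-wide mean beats using the pooled mean alone, which is the "adequate for general management requirements" quality of prediction the abstract claims.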

  6. The Poisson-Lognormal Model for Bibliometric/Scientometric Distributions.

    ERIC Educational Resources Information Center

    Stewart, John A.

    1994-01-01

    Illustrates that the Poisson-lognormal model provides good fits to a diverse set of distributions commonly studied in bibliometrics and scientometrics. Topics discussed include applications to the empirical data sets related to the laws of Lotka, Bradford, and Zipf; causal processes that could generate lognormal distributions; and implications for…

  7. Wide-area traffic: The failure of Poisson modeling

    SciTech Connect

    Paxson, V.; Floyd, S.

    1994-08-01

Network arrivals are often modeled as Poisson processes for analytic simplicity, even though a number of traffic studies have shown that packet interarrivals are not exponentially distributed. The authors evaluate 21 wide-area traces, investigating a number of wide-area TCP arrival processes (session and connection arrivals, FTPDATA connection arrivals within FTP sessions, and TELNET packet arrivals) to determine the error introduced by modeling them using Poisson processes. The authors find that user-initiated TCP session arrivals, such as remote-login and file-transfer, are well-modeled as Poisson processes with fixed hourly rates, but that other connection arrivals deviate considerably from Poisson; that modeling TELNET packet interarrivals as exponential grievously underestimates the burstiness of TELNET traffic, but using the empirical Tcplib[DJCME92] interarrivals preserves burstiness over many time scales; and that FTPDATA connection arrivals within FTP sessions come bunched into "connection bursts", the largest of which are so large that they completely dominate FTPDATA traffic. Finally, the authors offer some preliminary results regarding how the findings relate to the possible self-similarity of wide-area traffic.
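
The basic test behind these findings, whether interarrival gaps are exponential, is easy to reproduce on synthetic data. Below, exponential gaps (Poisson-like session arrivals) pass a KS test against a fitted exponential, while heavy-tailed Pareto gaps (a rough stand-in for bursty TELNET packets, not the authors' traces) fail decisively:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# Poisson-process-like arrivals: exponential interarrival gaps.
gaps_poisson = rng.exponential(1.0, size=2000)

# Bursty arrivals: heavy-tailed Pareto gaps.
gaps_bursty = (rng.pareto(1.5, size=2000) + 1.0) * 0.33

# KS test against an exponential with the fitted mean. Estimating the scale
# from the same data makes the p-value approximate, but the contrast is stark.
ks_p = stats.kstest(gaps_poisson, "expon", args=(0.0, gaps_poisson.mean())).pvalue
ks_b = stats.kstest(gaps_bursty, "expon", args=(0.0, gaps_bursty.mean())).pvalue
```

The heavy-tailed case also stays bursty under aggregation, unlike an exponential model, which is the multi-time-scale failure mode the paper documents for TELNET traffic.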

  8. A generalized Poisson-gamma model for spatially overdispersed data.

    PubMed

    Neyens, Thomas; Faes, Christel; Molenberghs, Geert

    2012-09-01

    Modern disease mapping commonly uses hierarchical Bayesian methods to model overdispersion and spatial correlation. Classical random-effects based solutions include the Poisson-gamma model, which uses the conjugacy between the Poisson and gamma distributions, but which does not model spatial correlation, on the one hand, and the more advanced CAR model, which also introduces a spatial autocorrelation term but without a closed-form posterior distribution on the other. In this paper, a combined model is proposed: an alternative convolution model accounting for both overdispersion and spatial correlation in the data by combining the Poisson-gamma model with a spatially-structured normal CAR random effect. The Limburg Cancer Registry data on kidney and prostate cancer in Limburg were used to compare the conventional and new models. A simulation study confirmed results and interpretations coming from the real datasets. Relative risk maps showed that the combined model provides an intermediate between the non-patterned negative binomial and the sometimes oversmoothed CAR convolution model. PMID:22749204

  9. Studying Resist Stochastics with the Multivariate Poisson Propagation Model

    DOE PAGES

    Naulleau, Patrick; Anderson, Christopher; Chao, Weilun; Bhattarai, Suchit; Neureuther, Andrew

    2014-01-01

Progress in the ultimate performance of extreme ultraviolet resists has arguably decelerated in recent years, suggesting an approach to stochastic limits both in photon counts and material parameters. Here we report on the performance of a variety of leading extreme ultraviolet resists, both with and without chemical amplification. The measured performance is compared to stochastic modeling results using the Multivariate Poisson Propagation Model. The results show that the best materials are indeed nearing modeled performance limits.

  10. Modeling environmental noise exceedances using non-homogeneous Poisson processes.

    PubMed

    Guarnaccia, Claudio; Quartieri, Joseph; Barrios, Juan M; Rodrigues, Eliane R

    2014-10-01

    In this work a non-homogeneous Poisson model is considered to study noise exposure. The Poisson process, counting the number of times that a sound level surpasses a threshold, is used to estimate the probability that a population is exposed to high levels of noise a certain number of times in a given time interval. The rate function of the Poisson process is assumed to be of a Weibull type. The presented model is applied to community noise data from Messina, Sicily (Italy). Four sets of data are used to estimate the parameters involved in the model. After the estimation and tuning are made, a way of estimating the probability that an environmental noise threshold is exceeded a certain number of times in a given time interval is presented. This estimation can be very useful in the study of noise exposure of a population and also to predict, given the current behavior of the data, the probability of occurrence of high levels of noise in the near future. One of the most important features of the model is that it implicitly takes into account different noise sources, which need to be treated separately when using usual models.
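
For a non-homogeneous Poisson process with a Weibull-type rate, the count of exceedances on [0, T] is Poisson with mean equal to the integrated rate, so exceedance probabilities are Poisson tail probabilities. The sketch below uses invented parameters, not the values fitted to the Messina data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)

# Weibull-type rate lam(t) = (b/a) * (t/a)**(b - 1) integrates to the
# cumulative mean m(T) = (T/a)**b exceedances on [0, T].
a, b, T = 2.0, 1.5, 10.0
m_T = (T / a) ** b

# N(T) ~ Poisson(m(T)), so P(at least k exceedances) is a Poisson tail.
k = 15
p_exact = stats.poisson.sf(k - 1, m_T)

# Monte Carlo check by simulating exceedance counts directly.
p_hat = (rng.poisson(m_T, size=20_000) >= k).mean()
```

This tail probability, the chance that a noise threshold is exceeded at least k times in a given interval, is the quantity the model is used to estimate for exposure assessment and short-term prediction.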

  11. Modeling Repeated Count Data: Some Extensions of the Rasch Poisson Counts Model.

    ERIC Educational Resources Information Center

    Duijn, Marijtje A. J. van; Jansen, Margo G. H.

    1995-01-01

    The Rasch Poisson Counts Model, a unidimensional latent trait model for tests that postulates that intensity parameters are products of test difficulty and subject ability parameters, is expanded into the Dirichlet-Gamma-Poisson model that takes into account variation between subjects and interaction between subjects and tests. (SLD)
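    The multiplicative intensity structure of the Rasch Poisson Counts Model can be sketched as follows (a simulation under assumed parameters, not the paper's estimation procedure; names are illustrative):

```python
import math
import random

def simulate_rpcm(abilities, difficulties, rng=random.Random(0)):
    """Subject-by-test count matrix with Poisson intensities
    lambda_ij = ability_i * difficulty_j, the product form the RPCM postulates."""
    def poisson(lam):
        # Knuth's multiplication sampler, adequate for small rates
        L, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                return k
            k += 1
    return [[poisson(a * d) for d in difficulties] for a in abilities]
```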

  12. Numerical Poisson-Boltzmann Model for Continuum Membrane Systems.

    PubMed

    Botello-Smith, Wesley M; Liu, Xingping; Cai, Qin; Li, Zhilin; Zhao, Hongkai; Luo, Ray

    2013-01-01

    Membrane protein systems are important computational research topics due to their roles in rational drug design. In this study, we developed a continuum membrane model utilizing a level set formulation under the numerical Poisson-Boltzmann framework within the AMBER molecular mechanics suite for applications such as protein-ligand binding affinity and docking pose predictions. Two numerical solvers were adapted for periodic systems to alleviate possible edge effects. Validation on systems ranging from organic molecules to membrane proteins of up to 200 residues demonstrated good numerical properties. This lays the foundation for more sophisticated models with variable dielectric treatments and second-order accurate modeling of solvation interactions.

  14. On population size estimators in the Poisson mixture model.

    PubMed

    Mao, Chang Xuan; Yang, Nan; Zhong, Jinhua

    2013-09-01

    Estimating population sizes via capture-recapture experiments has enormous applications. The Poisson mixture model can be adopted for those applications with a single list in which individuals appear one or more times. We compare several nonparametric estimators, including the Chao estimator, the Zelterman estimator, two jackknife estimators and the bootstrap estimator. The target parameter of the Chao estimator is a lower bound of the population size. Those of the other four estimators are not lower bounds, and they may produce lower confidence limits for the population size with poor coverage probabilities. A simulation study is reported and two examples are investigated. PMID:23865502
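    The Chao lower bound compared in the abstract can be sketched from the frequency-of-frequencies: with n distinct observed individuals, f1 singletons and f2 doubletons, the estimate is n + f1^2/(2*f2). A minimal sketch (the helper name and the f2 = 0 fallback are illustrative conventions, not taken from the paper):

```python
def chao_lower_bound(counts):
    """Chao's lower-bound estimate of population size.

    counts[i] is how many times observed individual i appeared on the
    single list (every value >= 1); unseen individuals contribute nothing.
    """
    n = len(counts)
    f1 = sum(1 for c in counts if c == 1)  # singletons
    f2 = sum(1 for c in counts if c == 2)  # doubletons
    if f2 == 0:
        # a common bias-corrected fallback when no doubletons occur
        return n + f1 * (f1 - 1) / 2.0
    return n + f1 * f1 / (2.0 * f2)
```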

  16. Lindley frailty model for a class of compound Poisson processes

    NASA Astrophysics Data System (ADS)

    Kadilar, Gamze Özel; Ata, Nihal

    2013-10-01

    The Lindley distribution has gained importance in survival analysis because of its similarity to the exponential distribution and its allowance for different shapes of the hazard function. Frailty models provide an alternative to the proportional hazards model when misspecified or omitted covariates are described by an unobservable random variable. Although the frailty distribution is generally assumed to be continuous, in some circumstances it is appropriate to consider discrete frailty distributions. In this paper, frailty models with a discrete compound Poisson process for Lindley-distributed failure times are introduced. Survival functions are derived and maximum likelihood estimation procedures for the parameters are studied. Then, the fit of the models to an earthquake data set from Turkey is examined.
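    The Lindley distribution's increasing hazard, which distinguishes it from the constant-hazard exponential, can be sketched directly from its standard density and survival function (a generic illustration, not the paper's frailty construction):

```python
import math

def lindley_pdf(x, theta):
    """Lindley density f(x) = theta^2/(theta+1) * (1+x) * exp(-theta*x)."""
    return theta**2 / (theta + 1) * (1 + x) * math.exp(-theta * x)

def lindley_survival(x, theta):
    """S(x) = (theta + 1 + theta*x)/(theta + 1) * exp(-theta*x)."""
    return (theta + 1 + theta * x) / (theta + 1) * math.exp(-theta * x)

def lindley_hazard(x, theta):
    """h(x) = f(x)/S(x) = theta^2 * (1+x) / (theta + 1 + theta*x),
    which increases in x, unlike the constant exponential hazard."""
    return theta**2 * (1 + x) / (theta + 1 + theta * x)
```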

  17. Poisson Growth Mixture Modeling of Intensive Longitudinal Data: An Application to Smoking Cessation Behavior

    PubMed Central

    Shiyko, Mariya P.; Li, Yuelin; Rindskopf, David

    2011-01-01

    Intensive longitudinal data (ILD) have become increasingly common in the social and behavioral sciences; count variables, such as the number of daily smoked cigarettes, are frequently-used outcomes in many ILD studies. We demonstrate a generalized extension of growth mixture modeling (GMM) to Poisson-distributed ILD for identifying qualitatively distinct trajectories in the context of developmental heterogeneity in count data. Accounting for the Poisson outcome distribution is essential for correct model identification and estimation. In addition, setting up the model in a way that is conducive to ILD measures helps with data complexities – large data volume, missing observations, and differences in sampling frequency across individuals. We present technical details of model fitting, summarize an empirical example of patterns of smoking behavior change, and describe research questions the generalized GMM helps to address. PMID:22408365

  18. Polyelectrolyte Microcapsules: Ion Distributions from a Poisson-Boltzmann Model

    NASA Astrophysics Data System (ADS)

    Tang, Qiyun; Denton, Alan R.; Rozairo, Damith; Croll, Andrew B.

    2014-03-01

    Recent experiments have shown that polystyrene-polyacrylic-acid-polystyrene (PS-PAA-PS) triblock copolymers in a solvent mixture of water and toluene can self-assemble into spherical microcapsules. Suspended in water, the microcapsules have a toluene core surrounded by an elastomer triblock shell. The longer, hydrophilic PAA blocks remain near the outer surface of the shell, becoming charged through dissociation of OH functional groups in water, while the shorter, hydrophobic PS blocks form a networked (glass or gel) structure. Within a mean-field Poisson-Boltzmann theory, we model these polyelectrolyte microcapsules as spherical charged shells, assuming different dielectric constants inside and outside the capsule. By numerically solving the nonlinear Poisson-Boltzmann equation, we calculate the radial distribution of anions and cations and the osmotic pressure within the shell as a function of salt concentration. Our predictions, which can be tested by comparison with experiments, may guide the design of microcapsules for practical applications, such as drug delivery. This work was supported by the National Science Foundation under Grant No. DMR-1106331.

  19. Identifying Seismicity Levels via Poisson Hidden Markov Models

    NASA Astrophysics Data System (ADS)

    Orfanogiannaki, K.; Karlis, D.; Papadopoulos, G. A.

    2010-08-01

    Poisson Hidden Markov models (PHMMs) are introduced to model temporal seismicity changes. In a PHMM the unobserved sequence of states is a finite-state Markov chain, and the distribution of the observation at any time is Poisson with a rate depending only on the current state of the chain. Thus, PHMMs allow a region to have a varying seismicity rate. We applied the PHMM to model earthquake frequencies in the seismogenic area of Killini, Ionian Sea, Greece, between 1990 and 2006. Simulations of data from the assumed model showed that it describes the true data quite well. The earthquake catalogue is dominated by main shocks occurring in 1993, 1997 and 2002. The time plot of PHMM seismicity states not only reproduces the three seismicity clusters but also quantifies the seismicity level and underlines the strength of the serial dependence of the events at any point in time. Foreshock activity becomes quite evident before the three sequences, with a gradual transition to states of cascade seismicity. Traditional analysis, based on the determination of highly significant changes of seismicity rates, failed to recognize foreshocks before the 1997 main shock due to the low number of events preceding that main shock. Thus, the PHMM performs better than traditional analysis, since the transition from one state to another depends not only on the total number of events involved but also on the current state of the system. Therefore, the PHMM recognizes significant changes of seismicity soon after they start, which is of particular importance for real-time recognition of foreshock activity and other seismicity changes.
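    The likelihood computation in a Poisson HMM can be sketched with the scaled forward algorithm (state rates and transition matrix below are assumed inputs; the Killini model's states and rates are estimated in the paper, not fixed as here):

```python
import math

def phmm_loglik(counts, init, trans, rates):
    """Log-likelihood of an earthquake-count series under a Poisson HMM,
    computed with the forward algorithm and per-step rescaling to avoid
    numerical underflow."""
    def pois(k, lam):
        return math.exp(-lam) * lam**k / math.factorial(k)

    n_states = len(init)
    # initialize with the stationary/initial distribution times the emission
    alpha = [init[s] * pois(counts[0], rates[s]) for s in range(n_states)]
    loglik = 0.0
    for t, k in enumerate(counts):
        if t > 0:
            alpha = [
                sum(alpha[r] * trans[r][s] for r in range(n_states)) * pois(k, rates[s])
                for s in range(n_states)
            ]
        c = sum(alpha)           # scaling constant = P(obs_t | obs_1..t-1)
        loglik += math.log(c)
        alpha = [a / c for a in alpha]
    return loglik
```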

  20. A bivariate survival model with compound Poisson frailty.

    PubMed

    Wienke, A; Ripatti, S; Palmgren, J; Yashin, A

    2010-01-30

    A correlated frailty model is suggested for analysis of bivariate time-to-event data. The model is an extension of the correlated power variance function (PVF) frailty model (correlated three-parameter frailty model) (J. Epidemiol. Biostat. 1999; 4:53-60). It is based on a bivariate extension of the compound Poisson frailty model in univariate survival analysis (Ann. Appl. Probab. 1992; 4:951-972). It allows for a non-susceptible fraction (of zero frailty) in the population, overcoming the common assumption in survival analysis that all individuals are susceptible to the event under study. The model contains the correlated gamma frailty model and the correlated inverse Gaussian frailty model as special cases. A maximum likelihood estimation procedure for the parameters is presented and its properties are studied in a small simulation study. This model is applied to breast cancer incidence data of Swedish twins. The proportion of women susceptible to breast cancer is estimated to be 15 per cent.

  1. Linear-Nonlinear-Poisson models of primate choice dynamics.

    PubMed

    Corrado, Greg S; Sugrue, Leo P; Seung, H Sebastian; Newsome, William T

    2005-11-01

    The equilibrium phenomenon of matching behavior traditionally has been studied in stationary environments. Here we attempt to uncover the local mechanism of choice that gives rise to matching by studying behavior in a highly dynamic foraging environment. In our experiments, 2 rhesus monkeys (Macaca mulatta) foraged for juice rewards by making eye movements to one of two colored icons presented on a computer monitor, each rewarded on dynamic variable-interval schedules. Using a generalization of Wiener kernel analysis, we recover a compact mechanistic description of the impact of past reward on future choice in the form of a Linear-Nonlinear-Poisson model. We validate this model through rigorous predictive and generative testing. Compared to our earlier work with this same data set, this model proves to be a better description of choice behavior and is more tightly correlated with putative neural value signals. Refinements over previous models include hyperbolic (as opposed to exponential) temporal discounting of past rewards, and differential (as opposed to fractional) comparisons of option value. Through numerical simulation we find that within this class of strategies, the model parameters employed by animals are very close to those that maximize reward harvesting efficiency.
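    The Linear-Nonlinear-Poisson structure and the two refinements the abstract names (hyperbolic discounting, differential value comparison) can be caricatured as follows; every kernel, parameter and function name here is an illustrative assumption, not the fitted model:

```python
import math

def hyperbolic_weights(n, tau_h):
    """Hyperbolic discounting kernel w(tau) = 1 / (1 + tau/tau_h),
    tau = 0 being the most recent trial."""
    return [1.0 / (1.0 + t / tau_h) for t in range(n)]

def lnp_choice_prob(rewards1, rewards2, tau_h=10.0, beta=1.0):
    """L: filter each option's reward history (most recent first);
    N: squash the *difference* of filtered values through a sigmoid;
    P: the result is the per-trial probability of choosing option 1."""
    w = hyperbolic_weights(len(rewards1), tau_h)
    v1 = sum(wi * r for wi, r in zip(w, rewards1))
    v2 = sum(wi * r for wi, r in zip(w, rewards2))
    return 1.0 / (1.0 + math.exp(-beta * (v1 - v2)))
```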

  2. A new form of bivariate generalized Poisson regression model

    NASA Astrophysics Data System (ADS)

    Faroughi, Pouya; Ismail, Noriszura

    2014-09-01

    This paper introduces a new form of bivariate generalized Poisson (BGP) regression which can be fitted to bivariate and correlated count data with covariates. The BGP regression suggested in this study can be fitted not only to bivariate count data with positive, zero or negative correlations, but also to underdispersed or overdispersed bivariate count data. Applications of bivariate Poisson (BP) regression and the new BGP regression are illustrated on Malaysian motor insurance data.

  3. The Dependent Poisson Race Model and Modeling Dependence in Conjoint Choice Experiments

    ERIC Educational Resources Information Center

    Ruan, Shiling; MacEachern, Steven N.; Otter, Thomas; Dean, Angela M.

    2008-01-01

    Conjoint choice experiments are used widely in marketing to study consumer preferences amongst alternative products. We develop a class of choice models, belonging to the class of Poisson race models, that describe a "random utility" which lends itself to a process-based description of choice. The models incorporate a dependence structure which…

  4. Poisson-Lie T-duals of the bi-Yang-Baxter models

    NASA Astrophysics Data System (ADS)

    Klimčík, Ctirad

    2016-09-01

    We prove the conjecture of Sfetsos, Siampos and Thompson that suitable analytic continuations of the Poisson-Lie T-duals of the bi-Yang-Baxter sigma models coincide with the recently introduced generalized λ-models. We then generalize this result by showing that the analytic continuation of a generic σ-model of "universal WZW-type" introduced by Tseytlin in 1993 is nothing but the Poisson-Lie T-dual of a generic Poisson-Lie symmetric σ-model introduced by Klimčík and Ševera in 1995.

  5. Modeling animal-vehicle collisions using diagonal inflated bivariate Poisson regression.

    PubMed

    Lao, Yunteng; Wu, Yao-Jan; Corey, Jonathan; Wang, Yinhai

    2011-01-01

    Two types of animal-vehicle collision (AVC) data are commonly adopted for AVC-related risk analysis research: reported AVC data and carcass removal data. One issue with these two data sets is that they were found to have significant discrepancies by previous studies. In order to model these two types of data together and provide a better understanding of highway AVCs, this study adopts a diagonal inflated bivariate Poisson regression method, an inflated version of bivariate Poisson regression model, to fit the reported AVC and carcass removal data sets collected in Washington State during 2002-2006. The diagonal inflated bivariate Poisson model not only can model paired data with correlation, but also handle under- or over-dispersed data sets as well. Compared with three other types of models, double Poisson, bivariate Poisson, and zero-inflated double Poisson, the diagonal inflated bivariate Poisson model demonstrates its capability of fitting two data sets with remarkable overlapping portions resulting from the same stochastic process. Therefore, the diagonal inflated bivariate Poisson model provides researchers a new approach to investigating AVCs from a different perspective involving the three distribution parameters (λ(1), λ(2) and λ(3)). The modeling results show the impacts of traffic elements, geometric design and geographic characteristics on the occurrences of both reported AVC and carcass removal data. It is found that the increase of some associated factors, such as speed limit, annual average daily traffic, and shoulder width, will increase the numbers of reported AVCs and carcass removals. Conversely, the presence of some geometric factors, such as rolling and mountainous terrain, will decrease the number of reported AVCs.
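    The bivariate Poisson core of the model (before diagonal inflation) is commonly built by trivariate reduction: a shared Poisson component with rate lambda3 induces the correlation between the two counts, matching the three parameters (lambda1, lambda2, lambda3) the abstract mentions. A hedged sampling sketch (the sampler and function names are illustrative):

```python
import math
import random

def sample_poisson(lam, rng):
    """Knuth's multiplication sampler; fine for small rates."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def sample_bivariate_poisson(lam1, lam2, lam3, n, rng=random.Random(1)):
    """(X1, X2) = (Y1 + Y3, Y2 + Y3) with independent Poisson Y's;
    the shared component Y3 gives Cov(X1, X2) = lam3."""
    pairs = []
    for _ in range(n):
        y3 = sample_poisson(lam3, rng)
        pairs.append((sample_poisson(lam1, rng) + y3,
                      sample_poisson(lam2, rng) + y3))
    return pairs
```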

  6. Bayesian inference on multiscale models for poisson intensity estimation: applications to photon-limited image denoising.

    PubMed

    Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George

    2009-08-01

    We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.

  7. The Kramers-Kronig relations for usual and anomalous Poisson-Nernst-Planck models.

    PubMed

    Evangelista, Luiz Roberto; Lenzi, Ervin Kaminski; Barbero, Giovanni

    2013-11-20

    The consistency of the frequency response predicted by a class of electrochemical impedance expressions is analytically checked by invoking the Kramers-Kronig (KK) relations. These expressions are obtained in the context of Poisson-Nernst-Planck usual or anomalous diffusional models that satisfy Poisson's equation in a finite length situation. The theoretical results, besides being successful in interpreting experimental data, are also shown to obey the KK relations when these relations are modified accordingly.

  8. Modeling the number of bids received on federal offshore hydrocarbon leases by poisson-type models

    SciTech Connect

    Nachtsheim, C.J.; Bruckner, L.A.

    1980-05-01

    Since 1954, the federal government has held over 40 hydrocarbon lease sales on the Outer Continental Shelf. A maximum of 18 sealed bonus bids was received for each of the tracts offered. A mixed Poisson-type model has been suggested for the relative frequencies of the number of bids received by an offered tract. Here we show that statistically this model may only be supportable for the number of solo bids received in sales after the Joint-Bidding Ban of 1975. A truncated model (excluding tracts receiving no bids) is proposed. While an improvement results from the use of this model, it appears that models of the mixed Poisson-type may not be generally applicable to the number of bids data.

  9. A comparison between Poisson and zero-inflated Poisson regression models with an application to number of black spots in Corriedale sheep

    PubMed Central

    Naya, Hugo; Urioste, Jorge I; Chang, Yu-Mei; Rodrigues-Motta, Mariana; Kremer, Roberto; Gianola, Daniel

    2008-01-01

    Dark spots in the fleece area are often associated with dark fibres in wool, which limits its competitiveness with other textile fibres. Field data from a sheep experiment in Uruguay revealed an excess number of zeros for dark spots. We compared the performance of four Poisson and zero-inflated Poisson (ZIP) models under four simulation scenarios. All models performed reasonably well under the same scenario for which the data were simulated. The deviance information criterion favoured a Poisson model with residual, while the ZIP model with a residual gave estimates closer to their true values under all simulation scenarios. Both Poisson and ZIP models with an error term at the regression level performed better than their counterparts without such an error. Field data from Corriedale sheep were analysed with Poisson and ZIP models with residuals. Parameter estimates were similar for both models. Although the posterior distribution of the sire variance was skewed due to a small number of rams in the dataset, the median of this variance suggested a scope for genetic selection. The main environmental factor was the age of the sheep at shearing. In summary, age related processes seem to drive the number of dark spots in this breed of sheep. PMID:18558072

  10. Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.

    PubMed

    Chatzis, Sotirios P; Andreou, Andreas S

    2015-11-01

    Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made the Bayesian approaches appear unattractive, and thus underdeveloped in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and exhibit its effectiveness using the publicly available benchmark data sets.

  11. Doubly stochastic Poisson process models for precipitation at fine time-scales

    NASA Astrophysics Data System (ADS)

    Ramesh, Nadarajah I.; Onof, Christian; Xie, Dichao

    2012-09-01

    This paper considers a class of stochastic point process models, based on doubly stochastic Poisson processes, in the modelling of rainfall. We examine the application of this class of models, a neglected alternative to the widely known Poisson cluster models, in the analysis of fine time-scale rainfall intensity. These models are mainly used to analyse tipping-bucket raingauge data from a single site, but an extension to multiple sites is illustrated, which reveals the potential of this class of models to study the temporal and spatial variability of precipitation at fine time-scales.

  12. Analysis of Blood Transfusion Data Using Bivariate Zero-Inflated Poisson Model: A Bayesian Approach

    PubMed Central

    Mohammadi, Tayeb; Sedehi, Morteza

    2016-01-01

    Recognizing the factors affecting the numbers of blood donations and blood deferrals has a major impact on blood transfusion services. There is a positive correlation between the variables “number of blood donations” and “number of blood deferrals”: as the number of returns for donation increases, so does the number of deferrals. On the other hand, because many donors never return to donate, there is an excess zero frequency for both of the above-mentioned variables. In this study, in order to account for the correlation and to explain the excess zeros, a bivariate zero-inflated Poisson regression model was used for joint modeling of the number of blood donations and the number of blood deferrals. The data were analyzed using a Bayesian approach, applying noninformative priors in the presence and absence of covariates. The model parameters, that is, the correlation, the zero-inflation parameter, and the regression coefficients, were estimated through MCMC simulation. Finally, the double-Poisson, bivariate Poisson, and bivariate zero-inflated Poisson models were fitted to the data and compared using the deviance information criterion (DIC). The results showed that the bivariate zero-inflated Poisson regression model fitted the data better than the other models. PMID:27703493

  13. Poisson-Fermi model of single ion activities in aqueous solutions

    NASA Astrophysics Data System (ADS)

    Liu, Jinn-Liang; Eisenberg, Bob

    2015-09-01

    A Poisson-Fermi model is proposed for calculating activity coefficients of single ions in strong electrolyte solutions based on the experimental Born radii and hydration shells of ions in aqueous solutions. The steric effect of water molecules and interstitial voids in the first and second hydration shells play an important role in our model. The screening and polarization effects of water are also included in the model that can thus describe spatial variations of dielectric permittivity, water density, void volume, and ionic concentration. The activity coefficients obtained by the Poisson-Fermi model with only one adjustable parameter are shown to agree with experimental data, which vary nonmonotonically with salt concentrations.

  14. On q-deformed symmetries as Poisson-Lie symmetries and application to Yang-Baxter type models

    NASA Astrophysics Data System (ADS)

    Delduc, F.; Lacroix, S.; Magro, M.; Vicedo, B.

    2016-10-01

    Yang-Baxter type models are integrable deformations of integrable field theories, such as the principal chiral model on a Lie group G or σ-models on (semi-)symmetric spaces G/F. The deformation has the effect of breaking the global G-symmetry of the original model, replacing the associated set of conserved charges by ones whose Poisson brackets are those of the q-deformed Poisson-Hopf algebra U_q(g). Working at the Hamiltonian level, we show how this q-deformed Poisson algebra originates from a Poisson-Lie G-symmetry. The theory of Poisson-Lie groups and their actions on Poisson manifolds, in particular the formalism of the non-abelian moment map, is reviewed. For a coboundary Poisson-Lie group G, this non-abelian moment map must obey the Semenov-Tian-Shansky bracket on the dual group G*, up to terms involving central quantities. When the latter vanish, we develop a general procedure linking this Poisson bracket to the defining relations of the Poisson-Hopf algebra U_q(g), including the q-Poisson-Serre relations. We consider reality conditions leading to q being either real or a phase. We determine the non-abelian moment map for Yang-Baxter type models. This enables us to compute the corresponding action of G on the fields parametrising the phase space of these models.

  15. Repairable-conditionally repairable damage model based on dual Poisson processes.

    PubMed

    Lind, B K; Persson, L M; Edgren, M R; Hedlöf, I; Brahme, A

    2003-09-01

    The advent of intensity-modulated radiation therapy makes it increasingly important to model the response accurately when large volumes of normal tissues are irradiated by controlled graded dose distributions aimed at maximizing tumor cure and minimizing normal tissue toxicity. The cell survival model proposed here is very useful and flexible for accurate description of the response of healthy tissues as well as tumors in classical and truly radiobiologically optimized radiation therapy. The repairable-conditionally repairable (RCR) model distinguishes between two different types of damage, namely the potentially repairable, which may also be lethal, i.e. if unrepaired or misrepaired, and the conditionally repairable, which may be repaired or may lead to apoptosis if it has not been repaired correctly. When potentially repairable damage is being repaired, for example by nonhomologous end joining, conditionally repairable damage may require in addition a high-fidelity correction by homologous repair. The induction of both types of damage is assumed to be described by Poisson statistics. The resultant cell survival expression has the unique ability to fit most experimental data well at low doses (the initial hypersensitive range), intermediate doses (on the shoulder of the survival curve), and high doses (on the quasi-exponential region of the survival curve). The complete Poisson expression can be approximated well by a simple bi-exponential cell survival expression, S(D) = e^(-aD) + bD*e^(-cD), where the first term describes the survival of undamaged cells and the last term represents survival after complete repair of sublethal damage. The bi-exponential expression makes it easy to derive D0, Dq, n and alpha, beta values to facilitate comparison with classical cell survival models.
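    The bi-exponential approximation quoted in the abstract is straightforward to evaluate directly; the parameter values in the test below are illustrative, not fitted values from the paper:

```python
import math

def rcr_survival(dose, a, b, c):
    """Bi-exponential RCR approximation S(D) = exp(-a*D) + b*D*exp(-c*D).
    The first term is the surviving fraction of undamaged cells; the
    second, cells surviving after complete repair of sublethal damage."""
    return math.exp(-a * dose) + b * dose * math.exp(-c * dose)
```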

  16. Educational Aspirations: Markov and Poisson Models. Rural Industrial Development Project Working Paper Number 14, August 1971.

    ERIC Educational Resources Information Center

    Kayser, Brian D.

    The fit of educational aspirations of Illinois rural high school youths to 3 related one-parameter mathematical models was investigated. The models used were the continuous-time Markov chain model, the discrete-time Markov chain, and the Poisson distribution. The sample of 635 students responded to questionnaires from 1966 to 1969 as part of an…

  17. Prediction of forest fires occurrences with area-level Poisson mixed models.

    PubMed

    Boubeta, Miguel; Lombardía, María José; Marey-Pérez, Manuel Francisco; Morales, Domingo

    2015-05-01

    The number of fires in forest areas of Galicia (north-west of Spain) during the summer period is quite high. Local authorities are interested in analyzing the factors that explain this phenomenon. Poisson regression models are good tools for describing and predicting the number of fires per forest areas. This work employs area-level Poisson mixed models for treating real data about fires in forest areas. A parametric bootstrap method is applied for estimating the mean squared errors of fires predictors. The developed methodology and software are applied to a real data set of fires in forest areas of Galicia.

  18. Mixtures of compound Poisson processes as models of tick-by-tick financial data

    NASA Astrophysics Data System (ADS)

    Scalas, Enrico

    2007-10-01

    A model for the phenomenological description of tick-by-tick share prices in a stock exchange is introduced. It is based on mixtures of compound Poisson processes. Preliminary results based on Monte Carlo simulation show that this model can reproduce various stylized facts.
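    A single compound Poisson component of such a model can be simulated as below (the paper mixes several such processes; rate, jump law and function names here are illustrative assumptions): exponential waiting times between trades, i.i.d. Gaussian jumps in the log-price.

```python
import math
import random

def simulate_tick_prices(rate, jump_sigma, horizon, s0=100.0, rng=random.Random(42)):
    """Compound Poisson log-price path: trade times from a homogeneous
    Poisson process of intensity `rate`, with an i.i.d. Gaussian
    log-return at each trade."""
    t, log_s = 0.0, math.log(s0)
    ticks = [(t, s0)]
    while True:
        t += rng.expovariate(rate)           # exponential inter-trade wait
        if t > horizon:
            break
        log_s += rng.gauss(0.0, jump_sigma)  # jump in log-price
        ticks.append((t, math.exp(log_s)))
    return ticks
```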

  19. Functional Generalized Additive Models.

    PubMed

    McLean, Mathew W; Hooker, Giles; Staicu, Ana-Maria; Scheipl, Fabian; Ruppert, David

    2014-01-01

    We introduce the functional generalized additive model (FGAM), a novel regression model for association studies between a scalar response and a functional predictor. We model the link-transformed mean response as the integral with respect to t of F{X(t), t} where F(·,·) is an unknown regression function and X(t) is a functional covariate. Rather than having an additive model in a finite number of principal components as in Müller and Yao (2008), our model incorporates the functional predictor directly and thus our model can be viewed as the natural functional extension of generalized additive models. We estimate F(·,·) using tensor-product B-splines with roughness penalties. A pointwise quantile transformation of the functional predictor is also considered to ensure each tensor-product B-spline has observed data on its support. The methods are evaluated using simulated data and their predictive performance is compared with other competing scalar-on-function regression alternatives. We illustrate the usefulness of our approach through an application to brain tractography, where X(t) is a signal from diffusion tensor imaging at position, t, along a tract in the brain. In one example, the response is disease-status (case or control) and in a second example, it is the score on a cognitive test. R code for performing the simulations and fitting the FGAM can be found in supplemental materials available online.

  20. A Conway-Maxwell-Poisson (CMP) model to address data dispersion on positron emission tomography.

    PubMed

    Santarelli, Maria Filomena; Della Latta, Daniele; Scipioni, Michele; Positano, Vincenzo; Landini, Luigi

    2016-10-01

Positron emission tomography (PET) in medicine exploits the properties of positron-emitting unstable nuclei. The pairs of γ-rays emitted after annihilation are detected by coincidence detectors and stored as projections in a sinogram. It is well known that radioactive decay follows a Poisson distribution; however, deviation from Poisson statistics occurs on PET projection data prior to reconstruction due to physical effects, measurement errors, and corrections for dead time, scatter, and random coincidences. A model that describes the statistical behavior of measured and corrected PET data can aid in understanding the statistical nature of the data: it is a prerequisite to develop efficient reconstruction and processing methods and to reduce noise. The deviation from Poisson statistics in PET data could be described by the Conway-Maxwell-Poisson (CMP) distribution model, which is characterized by the centring parameter λ and the dispersion parameter ν, the latter quantifying the deviation from a Poisson distribution model. In particular, the parameter ν allows quantifying over-dispersion (ν<1) or under-dispersion (ν>1) of data. A simple and efficient method for λ and ν parameters estimation is introduced and assessed using Monte Carlo simulation for a wide range of activity values. The application of the method to simulated and experimental PET phantom data demonstrated that the CMP distribution parameters could detect deviation from the Poisson distribution both in raw and corrected PET data. It may be usefully implemented in image reconstruction algorithms and quantitative PET data analysis, especially in low counting emission data, as in dynamic PET data, where the method demonstrated the best accuracy. PMID:27522237
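    The CMP family itself is straightforward to evaluate numerically. Below is a minimal sketch (not the paper's estimation method) that computes the pmf P(k) ∝ λ^k/(k!)^ν via truncated normalization in log space, and confirms the role of ν: ν = 1 recovers the Poisson (variance equals mean), ν < 1 gives over-dispersion, and ν > 1 under-dispersion.

```python
import math

def cmp_pmf(lmbda, nu, kmax=100):
    """Conway-Maxwell-Poisson pmf, normalized over k = 0..kmax (log-space for stability)."""
    logw = [k * math.log(lmbda) - nu * math.lgamma(k + 1) for k in range(kmax + 1)]
    mx = max(logw)
    w = [math.exp(lw - mx) for lw in logw]
    z = sum(w)
    return [wk / z for wk in w]

def mean_var(pmf):
    m = sum(k * p for k, p in enumerate(pmf))
    return m, sum((k - m) ** 2 * p for k, p in enumerate(pmf))

m1, v1 = mean_var(cmp_pmf(4.0, 1.0))  # nu = 1: plain Poisson, var == mean
m2, v2 = mean_var(cmp_pmf(4.0, 0.7))  # nu < 1: over-dispersed, var > mean
m3, v3 = mean_var(cmp_pmf(4.0, 1.5))  # nu > 1: under-dispersed, var < mean
```

For moderate λ the truncation at kmax = 100 is far in the tail, so the normalization error is negligible.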

  1. Applicability of the Poisson distribution to model the data of the German Children's Cancer Registry.

    PubMed

    Westermeier, T; Michaelis, J

    1995-03-01

    Since 1980 the German Children's Cancer Registry has documented all childhood malignancies in the Federal Republic of Germany. Various statistical procedures have been proposed to identify municipalities or other geographic units with increased numbers of malignancies. Usually the Poisson distribution, which requires the malignancies to be distributed homogeneously and uncorrelated, is applied. Other discrete statistical distributions (so-called cluster distributions) like the generalized or compound Poisson distributions are applicable more generally. In this paper we present a first explorative approach to the question of whether it is necessary to use one of these cluster distributions to model the data of the German Children's Cancer Registry. In conclusion, we find no indication that the Poisson approach is insufficient. PMID:7604164
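    A first-pass version of the registry question — is the simple Poisson assumption adequate? — can be screened with the index of dispersion. The sketch below uses simulated, hypothetical per-municipality counts (not registry data); under a homogeneous Poisson model the sample variance-to-mean ratio should be close to 1.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical counts of cases per municipality, simulated here as Poisson
counts = rng.poisson(lam=3.0, size=500)

# Index of dispersion: ~1 under Poisson, >1 suggests clustering/over-dispersion
dispersion = counts.var(ddof=1) / counts.mean()
# Classical chi-square dispersion statistic: (n-1)*dispersion ~ chi2(n-1) under H0
stat = (len(counts) - 1) * dispersion
```

Values of the dispersion index well above 1 would motivate the generalized or compound Poisson ("cluster") distributions the paper discusses.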

  2. Poisson Growth Mixture Modeling of Intensive Longitudinal Data: An Application to Smoking Cessation Behavior

    ERIC Educational Resources Information Center

    Shiyko, Mariya P.; Li, Yuelin; Rindskopf, David

    2012-01-01

    Intensive longitudinal data (ILD) have become increasingly common in the social and behavioral sciences; count variables, such as the number of daily smoked cigarettes, are frequently used outcomes in many ILD studies. We demonstrate a generalized extension of growth mixture modeling (GMM) to Poisson-distributed ILD for identifying qualitatively…

  3. Poisson-Based Inference for Perturbation Models in Adaptive Spelling Training

    ERIC Educational Resources Information Center

    Baschera, Gian-Marco; Gross, Markus

    2010-01-01

    We present an inference algorithm for perturbation models based on Poisson regression. The algorithm is designed to handle unclassified input with multiple errors described by independent mal-rules. This knowledge representation provides an intelligent tutoring system with local and global information about a student, such as error classification…

  4. A marginalized zero-inflated Poisson regression model with overall exposure effects.

    PubMed

    Long, D Leann; Preisser, John S; Herring, Amy H; Golin, Carol E

    2014-12-20

    The zero-inflated Poisson (ZIP) regression model is often employed in public health research to examine the relationships between exposures of interest and a count outcome exhibiting many zeros, in excess of the amount expected under sampling from a Poisson distribution. The regression coefficients of the ZIP model have latent class interpretations, which correspond to a susceptible subpopulation at risk for the condition with counts generated from a Poisson distribution and a non-susceptible subpopulation that provides the extra or excess zeros. The ZIP model parameters, however, are not well suited for inference targeted at marginal means, specifically, in quantifying the effect of an explanatory variable in the overall mixture population. We develop a marginalized ZIP model approach for independent responses to model the population mean count directly, allowing straightforward inference for overall exposure effects and empirical robust variance estimation for overall log-incidence density ratios. Through simulation studies, the performance of maximum likelihood estimation of the marginalized ZIP model is assessed and compared with other methods of estimating overall exposure effects. The marginalized ZIP model is applied to a recent study of a motivational interviewing-based safer sex counseling intervention, designed to reduce unprotected sexual act counts. PMID:25220537
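    The latent-class structure and the marginal mean targeted by the marginalized model can be made concrete with a short simulation (a generic ZIP sketch, not the authors' estimation code): zeros arise both from a non-susceptible class with probability π and from the Poisson component, while the overall population mean is (1 − π)μ.

```python
import math
import numpy as np

def zip_pmf(k, pi, mu):
    """Zero-inflated Poisson pmf: extra zeros with probability pi."""
    pois = math.exp(-mu) * mu ** k / math.factorial(k)
    return pi * (k == 0) + (1 - pi) * pois

pi, mu = 0.3, 2.5
marginal_mean = (1 - pi) * mu  # the quantity a marginalized ZIP models directly

rng = np.random.default_rng(1)
n = 100_000
susceptible = rng.random(n) > pi                     # latent at-risk class
y = np.where(susceptible, rng.poisson(mu, n), 0)     # non-susceptibles contribute zeros
```

The sample mean of y converges to (1 − π)μ, and the zero fraction to π + (1 − π)e^(−μ), illustrating why the latent-class coefficients do not directly describe the overall mean.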

  5. Modeling the number of bids received for outer continental shelf leases by Poisson-type models

    SciTech Connect

    Bruckner, L.A.; Nachtsheim, C.J.

    1983-07-01

    Since 1954, the U.S. federal government has held hydrocarbon lease sales for areas on the Outer Continental Shelf (OCS). The U.S. DOE is charged with developing lease sale policies designed to increase competition on the offered tracts. Increased competition has been assumed synonymous with increased number of bids (NOB). To study the influence of alternative bidding systems on the number of bids received, a mixed Poisson-type model has previously been employed. The authors show why this model is not statistically supportable. A truncated model is proposed and is shown to be statistically justified for the number of solo bids over all sales and marginally supported for the number of joint bids on sales before the joint-bidding ban.
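    One common "truncated model" in this setting is the zero-truncated Poisson, where counts are conditioned on k ≥ 1 (a tract with no bids is excluded). The abstract does not give the exact specification, so the sketch below is only a plausible illustration of the idea:

```python
import math

def ztp_pmf(k, lam):
    """Zero-truncated Poisson: a Poisson(lam) count conditioned on k >= 1."""
    if k < 1:
        return 0.0
    return (math.exp(-lam) * lam ** k / math.factorial(k)) / (1.0 - math.exp(-lam))

lam = 1.8
mean_ztp = lam / (1.0 - math.exp(-lam))  # truncation shifts the mean above lam
total = sum(ztp_pmf(k, lam) for k in range(1, 100))
mean_check = sum(k * ztp_pmf(k, lam) for k in range(1, 100))
```

Note that the truncated mean λ/(1 − e^(−λ)) always exceeds λ, which matters when interpreting fitted bid rates.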

  6. Formulation of the Multi-Hit Model With a Non-Poisson Distribution of Hits

    SciTech Connect

    Vassiliev, Oleg N.

    2012-07-15

Purpose: We proposed a formulation of the multi-hit single-target model in which the Poisson distribution of hits was replaced by a combination of two distributions: one for the number of particles entering the target and one for the number of hits a particle entering the target produces. Such an approach reflects the fact that radiation damage is a result of two different random processes: particle emission by a radiation source and interaction of particles with matter inside the target. Methods and Materials: The Poisson distribution is well justified for the first of the two processes. The second distribution depends on how a hit is defined. To test our approach, we assumed that the second distribution was also a Poisson distribution. The two distributions combined resulted in a non-Poisson distribution. We tested the proposed model by comparing it with previously reported data for DNA single- and double-strand breaks induced by protons and electrons, for survival of a range of cell lines, and variation of the initial slopes of survival curves with radiation quality for heavy-ion beams. Results: Analysis of cell survival equations for this new model showed that they had realistic properties overall, such as the initial and high-dose slopes of survival curves, the shoulder, and relative biological effectiveness (RBE). In most cases tested, a better fit of survival curves was achieved with the new model than with the linear-quadratic model. The results also suggested that the proposed approach may extend the multi-hit model beyond its traditional role in analysis of survival curves to predicting effects of radiation quality and analysis of DNA strand breaks. Conclusions: Our model, although conceptually simple, performed well in all tests. The model was able to consistently fit data for both cell survival and DNA single- and double-strand breaks. It correctly predicted the dependence of radiation effects on parameters of radiation quality.
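    The paper's two-stage construction — a Poisson number of particles entering the target, each producing a Poisson number of hits — can be simulated directly. The resulting compound distribution (sometimes called Neyman Type A) is over-dispersed relative to a plain Poisson, which is the point of the model; the parameter values below are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(7)
a, b = 4.0, 2.0       # mean particles entering the target; mean hits per particle
n = 200_000

particles = rng.poisson(a, n)
# Total hits per target: a sum of `particles[i]` i.i.d. Poisson(b) draws,
# which is distributed as Poisson(b * particles[i]) conditional on particles[i]
hits = rng.poisson(b * particles)

mean_h = hits.mean()  # theory: a*b = 8
var_h = hits.var()    # theory: a*b*(1 + b) = 24  (> mean, i.e. non-Poisson)
```

The variance a·b·(1 + b) exceeds the mean a·b whenever b > 0, so hit counts are never Poisson under this two-stage mechanism.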

  7. Full Bayes Poisson gamma, Poisson lognormal, and zero inflated random effects models: Comparing the precision of crash frequency estimates.

    PubMed

    Aguero-Valverde, Jonathan

    2013-01-01

In recent years, complex statistical modeling approaches have been proposed to handle the unobserved heterogeneity and the excess of zeros frequently found in crash data, including random effects and zero inflated models. This research compares random effects, zero inflated, and zero inflated random effects models using a full Bayes hierarchical approach. The models are compared not just in terms of goodness-of-fit measures but also in terms of precision of posterior crash frequency estimates, since the precision of these estimates is vital for ranking of sites for engineering improvement. Fixed-over-time random effects models are also compared to independent-over-time random effects models. For the crash dataset being analyzed, it was found that once the random effects are included in the zero inflated models, the probability of being in the zero state is drastically reduced, and the zero inflated models degenerate to their non-zero-inflated counterparts. Also, by fixing the random effects over time, the fit of the models and the precision of the crash frequency estimates are significantly increased. It was found that the rankings of the fixed-over-time random effects models are very consistent with one another. In addition, the results show that by fixing the random effects over time, the standard errors of the crash frequency estimates are significantly reduced for the majority of the segments on the top of the ranking.

  8. FAST TRACK COMMUNICATION: Poisson-sigma model for 2D gravity with non-metricity

    NASA Astrophysics Data System (ADS)

    Adak, M.; Grumiller, D.

    2007-10-01

    We present a Poisson-sigma model describing general 2D dilaton gravity with non-metricity, torsion and curvature. It involves three arbitrary functions of the dilaton field, two of which are well known from metric compatible theories, while the third one characterizes the local strength of non-metricity. As an example we show that α' corrections in 2D string theory can generate (target space) non-metricity.

  9. Investigation of time and weather effects on crash types using full Bayesian multivariate Poisson lognormal models.

    PubMed

    El-Basyouny, Karim; Barua, Sudip; Islam, Md Tazul

    2014-12-01

    Previous research shows that various weather elements have significant effects on crash occurrence and risk; however, little is known about how these elements affect different crash types. Consequently, this study investigates the impact of weather elements and sudden extreme snow or rain weather changes on crash type. Multivariate models were used for seven crash types using five years of daily weather and crash data collected for the entire City of Edmonton. In addition, the yearly trend and random variation of parameters across the years were analyzed by using four different modeling formulations. The proposed models were estimated in a full Bayesian context via Markov Chain Monte Carlo simulation. The multivariate Poisson lognormal model with yearly varying coefficients provided the best fit for the data according to Deviance Information Criteria. Overall, results showed that temperature and snowfall were statistically significant with intuitive signs (crashes decrease with increasing temperature; crashes increase as snowfall intensity increases) for all crash types, while rainfall was mostly insignificant. Previous snow showed mixed results, being statistically significant and positively related to certain crash types, while negatively related or insignificant in other cases. Maximum wind gust speed was found mostly insignificant with a few exceptions that were positively related to crash type. Major snow or rain events following a dry weather condition were highly significant and positively related to three crash types: Follow-Too-Close, Stop-Sign-Violation, and Ran-Off-Road crashes. The day-of-the-week dummy variables were statistically significant, indicating a possible weekly variation in exposure. Transportation authorities might use the above results to improve road safety by providing drivers with information regarding the risk of certain crash types for a particular weather condition.
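    The Poisson lognormal building block used in this study is easy to illustrate: conditional on a normal random effect on the log scale, counts are Poisson, and marginally they are over-dispersed. A minimal univariate sketch follows (the paper's models are multivariate with weather covariates):

```python
import numpy as np

rng = np.random.default_rng(11)
n = 200_000
eta, sigma = 1.0, 0.5  # linear predictor and random-effect standard deviation

eps = rng.normal(0.0, sigma, n)
y = rng.poisson(np.exp(eta + eps))  # Poisson conditional on the lognormal rate

mean_y = y.mean()
var_y = y.var()
theory_mean = np.exp(eta + sigma ** 2 / 2)                         # lognormal mean rate
theory_var = theory_mean + theory_mean ** 2 * (np.exp(sigma ** 2) - 1.0)
```

The marginal variance exceeds the marginal mean by m²(e^(σ²) − 1), which is how the lognormal random effect absorbs over-dispersion in crash counts.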

  10. A Bayesian destructive weighted Poisson cure rate model and an application to a cutaneous melanoma data.

    PubMed

    Rodrigues, Josemar; Cancho, Vicente G; de Castro, Mário; Balakrishnan, N

    2012-12-01

    In this article, we propose a new Bayesian flexible cure rate survival model, which generalises the stochastic model of Klebanov et al. [Klebanov LB, Rachev ST and Yakovlev AY. A stochastic-model of radiation carcinogenesis--latent time distributions and their properties. Math Biosci 1993; 113: 51-75], and has much in common with the destructive model formulated by Rodrigues et al. [Rodrigues J, de Castro M, Balakrishnan N and Cancho VG. Destructive weighted Poisson cure rate models. Technical Report, Universidade Federal de São Carlos, São Carlos-SP. Brazil, 2009 (accepted in Lifetime Data Analysis)]. In our approach, the accumulated number of lesions or altered cells follows a compound weighted Poisson distribution. This model is more flexible than the promotion time cure model in terms of dispersion. Moreover, it possesses an interesting and realistic interpretation of the biological mechanism of the occurrence of the event of interest as it includes a destructive process of tumour cells after an initial treatment or the capacity of an individual exposed to irradiation to repair altered cells that results in cancer induction. In other words, what is recorded is only the damaged portion of the original number of altered cells not eliminated by the treatment or repaired by the repair system of an individual. Markov Chain Monte Carlo (MCMC) methods are then used to develop Bayesian inference for the proposed model. Also, some discussions on the model selection and an illustration with a cutaneous melanoma data set analysed by Rodrigues et al. [Rodrigues J, de Castro M, Balakrishnan N and Cancho VG. Destructive weighted Poisson cure rate models. Technical Report, Universidade Federal de São Carlos, São Carlos-SP. Brazil, 2009 (accepted in Lifetime Data Analysis)] are presented.

  11. Kinetic models in n-dimensional Euclidean spaces: From the Maxwellian to the Poisson kernel.

    PubMed

    Zadehgol, Abed

    2015-06-01

    In this work, minimal kinetic theories based on unconventional entropy functions, H∼ln f (Burg entropy) for 2D and H∼f(1-2/n) (Tsallis entropy) for nD with n≥3, are studied. These entropy functions were originally derived by Boghosian et al. [Phys. Rev. E 68, 025103 (2003)] as a basis for discrete-velocity and lattice Boltzmann models for incompressible fluid dynamics. The present paper extends the entropic models of Boghosian et al. and shows that the explicit form of the equilibrium distribution function (EDF) of their models, in the continuous-velocity limit, can be identified with the Poisson kernel of the Poisson integral formula. The conservation and Navier-Stokes equations are recovered at low Mach numbers, and it is shown that rest particles can be used to rectify the speed of sound of the extended models. Fourier series expansion of the EDF is used to evaluate the discretization errors of the model. It is shown that the expansion coefficients of the Fourier series coincide with the velocity moments of the model. Employing two-, three-, and four-dimensional (2D, 3D, and 4D) complex systems, the real velocity space is mapped into the hypercomplex spaces and it is shown that the velocity moments can be evaluated, using the Poisson integral formula, in the hypercomplex space. For the practical applications, a 3D projection of the 4D model is presented, and the existence of an H theorem for the discrete model is investigated. The theoretical results have been verified by simulating the following benchmark problems: (1) the Kelvin-Helmholtz instability of thin shear layers in a doubly periodic domain and (2) the 3D flow of incompressible fluid in a lid-driven cubic cavity. The present results are in agreement with the previous works, while they show better stability of the proposed kinetic model, as compared with the BGK type (with single relaxation time) lattice Boltzmann models. PMID:26172826

  12. Electrostatic component of binding energy: Interpreting predictions from poisson-boltzmann equation and modeling protocols.

    PubMed

    Chakavorty, Arghya; Li, Lin; Alexov, Emil

    2016-10-30

Macromolecular interactions are essential for understanding numerous biological processes and are typically characterized by the binding free energy. An important component of the binding free energy is the electrostatics, which is frequently modeled via solutions of the Poisson-Boltzmann equation (PBE). However, numerous works have shown that the electrostatic component (ΔΔGelec) of binding free energy is very sensitive to the parameters and modeling protocol used. This prompted some researchers to question the robustness of PBE in predicting ΔΔGelec. We argue that the sensitivity of the absolute ΔΔGelec calculated with PBE using different input parameters and definitions does not indicate a deficiency of PBE; rather, this is what should be expected. We show how the apparent sensitivity should be interpreted in terms of the underlying changes in several numerical and physical parameters. We demonstrate that the PBE approach is robust within each considered force field (CHARMM-27, AMBER-94, and OPLS-AA) once the corresponding structures are energy minimized. This observation holds despite the use of two different molecular surface definitions, indicating again that PBE delivers consistent results within a particular force field. The fact that PBE-delivered ΔΔGelec values may differ if calculated with different modeling protocols is not a deficiency of PBE, but a natural result of the differences in force field parameters and in the potential functions used for energy minimization. In addition, while the absolute ΔΔGelec values calculated with different force fields differ, their ordering remains practically the same, allowing for consistent ranking regardless of the force field used. © 2016 Wiley Periodicals, Inc.

  13. Scaling the Poisson Distribution

    ERIC Educational Resources Information Center

    Farnsworth, David L.

    2014-01-01

    We derive the additive property of Poisson random variables directly from the probability mass function. An important application of the additive property to quality testing of computer chips is presented.
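    The additive property can be checked numerically by convolving two Poisson pmfs and comparing against the pmf of the summed rate — a small sketch of the identity Poisson(2) + Poisson(3) = Poisson(5):

```python
import math

def pois_pmf(lam, kmax):
    """Poisson pmf over k = 0..kmax."""
    return [math.exp(-lam) * lam ** k / math.factorial(k) for k in range(kmax + 1)]

kmax = 60
p2, p3, p5 = pois_pmf(2.0, kmax), pois_pmf(3.0, kmax), pois_pmf(5.0, kmax)

# Discrete convolution: P(X + Y = k) = sum_j P(X = j) * P(Y = k - j)
conv = [sum(p2[j] * p3[k - j] for j in range(k + 1)) for k in range(kmax + 1)]
max_err = max(abs(c - p) for c, p in zip(conv, p5))
```

In the chip-testing application mentioned above, this is what justifies pooling defect counts from independent wafers into a single Poisson count with the summed rate.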

  14. Poisson-Fokker-Planck model for biomolecules translocation through nanopore driven by electroosmotic flow

    NASA Astrophysics Data System (ADS)

    Lin, XiaoHui; Zhang, ChiBin; Gu, Jun; Jiang, ShuYun; Yang, JueKuan

    2014-11-01

A non-continuous electroosmotic flow model (PFP model) is built based on the Poisson equation, Fokker-Planck equation, and Navier-Stokes equation, and used to predict DNA molecule translocation through a nanopore. The PFP model discards the continuum assumption of ion translocation and considers ions as discrete particles. In addition, this model includes the contributions of the Coulomb electrostatic potential between ions, Brownian motion of ions, and viscous friction to ion transport. No ionic diffusion coefficient or other phenomenological parameters are needed in the PFP model. It is worth noting that the PFP model can describe non-equilibrium electroosmotic transport of ions in a channel of a size comparable with the mean free path of an ion. A modified clustering method is proposed for the numerical solution of the PFP model, and ion current translocation through a nanopore with a radius of 1 nm is simulated using the modified clustering method. The external electric field, wall charge density of the nanopore, surface charge density of DNA, as well as ion average number density, influence the electroosmotic velocity profile of the electrolyte solution, the velocity of DNA translocation through the nanopore, and the ion current blockade. Results show that the ion average number density of the electrolyte and the surface charge density of the nanopore have a significant effect on the translocation velocity of DNA and the ion current blockade. The translocation velocity of DNA is proportional to the surface charge density of the nanopore, and is inversely proportional to the ion average number density of the electrolyte solution. Thus, the translocation velocity of DNAs can be controlled to improve the accuracy of sequencing by adjusting the external electric field, ion average number density of the electrolyte, and surface charge density of the nanopore. Ion current decreases when the ion average number density is larger than the critical value and increases when the ion average number density is lower than the critical value.

  15. Beyond Poisson-Boltzmann: Modeling Biomolecule-Water and Water-Water Interactions

    PubMed Central

    Koehl, Patrice; Orland, Henri; Delarue, Marc

    2009-01-01

    We present an extension to the Poisson-Boltzmann model in which the solvent is modeled as an assembly of self-orienting dipoles of variable densities. Interactions between these dipoles are included implicitly using a Yukawa potential field. This model leads to a set of equations whose solutions give the dipole densities; we use the latter to study the organization of water around biomolecules. The computed water density profiles resemble those derived from molecular dynamics simulations. We also derive an excess free energy that discriminates correct from incorrect conformations of proteins. PMID:19257790

  16. Linear and Poisson models for genetic evaluation of tick resistance in cross-bred Hereford x Nellore cattle.

    PubMed

    Ayres, D R; Pereira, R J; Boligon, A A; Silva, F F; Schenkel, F S; Roso, V M; Albuquerque, L G

    2013-12-01

    Cattle resistance to ticks is measured by the number of ticks infesting the animal. The model used for the genetic analysis of cattle resistance to ticks frequently requires logarithmic transformation of the observations. The objective of this study was to evaluate the predictive ability and goodness of fit of different models for the analysis of this trait in cross-bred Hereford x Nellore cattle. Three models were tested: a linear model using logarithmic transformation of the observations (MLOG); a linear model without transformation of the observations (MLIN); and a generalized linear Poisson model with residual term (MPOI). All models included the classificatory effects of contemporary group and genetic group and the covariates age of animal at the time of recording and individual heterozygosis, as well as additive genetic effects as random effects. Heritability estimates were 0.08 ± 0.02, 0.10 ± 0.02 and 0.14 ± 0.04 for MLIN, MLOG and MPOI models, respectively. The model fit quality, verified by deviance information criterion (DIC) and residual mean square, indicated fit superiority of MPOI model. The predictive ability of the models was compared by validation test in independent sample. The MPOI model was slightly superior in terms of goodness of fit and predictive ability, whereas the correlations between observed and predicted tick counts were practically the same for all models. A higher rank correlation between breeding values was observed between models MLOG and MPOI. Poisson model can be used for the selection of tick-resistant animals. PMID:24236604

  17. Mixed additive models

    NASA Astrophysics Data System (ADS)

    Carvalho, Francisco; Covas, Ricardo

    2016-06-01

We consider mixed models y = Σ_{i=0}^{w} X_i β_i with V(y) = Σ_{i=1}^{w} θ_i M_i, where M_i = X_i X_i^⊤, i = 1, ..., w, and μ = X_0 β_0. For these we will estimate the variance components θ_1, ..., θ_w, as well as estimable vectors, through the decomposition of the initial model into sub-models y(h), h ∈ Γ, with V(y(h)) = γ(h) I_{g(h)}, h ∈ Γ. Moreover we will consider L extensions of these models, i.e., ẙ = Ly + ε, where L = D(1_{n_1}, ..., 1_{n_w}) and ε, independent of y, has null mean vector and variance-covariance matrix θ_{w+1} I_n, where n = Σ_{i=1}^{w} n_i.

  18. Bringing consistency to simulation of population models--Poisson simulation as a bridge between micro and macro simulation.

    PubMed

    Gustafsson, Leif; Sternad, Mikael

    2007-10-01

    Population models concern collections of discrete entities such as atoms, cells, humans, animals, etc., where the focus is on the number of entities in a population. Because of the complexity of such models, simulation is usually needed to reproduce their complete dynamic and stochastic behaviour. Two main types of simulation models are used for different purposes, namely micro-simulation models, where each individual is described with its particular attributes and behaviour, and macro-simulation models based on stochastic differential equations, where the population is described in aggregated terms by the number of individuals in different states. Consistency between micro- and macro-models is a crucial but often neglected aspect. This paper demonstrates how the Poisson Simulation technique can be used to produce a population macro-model consistent with the corresponding micro-model. This is accomplished by defining Poisson Simulation in strictly mathematical terms as a series of Poisson processes that generate sequences of Poisson distributions with dynamically varying parameters. The method can be applied to any population model. It provides the unique stochastic and dynamic macro-model consistent with a correct micro-model. The paper also presents a general macro form for stochastic and dynamic population models. In an appendix Poisson Simulation is compared with Markov Simulation showing a number of advantages. Especially aggregation into state variables and aggregation of many events per time-step makes Poisson Simulation orders of magnitude faster than Markov Simulation. Furthermore, you can build and execute much larger and more complicated models with Poisson Simulation than is possible with the Markov approach.
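    The core mechanism — replacing a deterministic flow x·μ·Δt by a Poisson-distributed number of events per time step — can be sketched for a simple decay process (an assumed toy model, not taken from the paper). The mean of the stochastic macro-model matches the deterministic difference equation, which is the consistency property the paper emphasizes:

```python
import numpy as np

rng = np.random.default_rng(3)

def poisson_sim_decay(x0, mu, dt, steps, rng):
    """Poisson Simulation of dx/dt = -mu*x: per step, removals ~ Poisson(mu * x * dt)."""
    x = x0
    for _ in range(steps):
        x = max(x - rng.poisson(mu * x * dt), 0)  # integer state, never negative
    return x

runs = np.array([poisson_sim_decay(1000, 0.5, 0.1, 20, rng) for _ in range(2000)])
# Deterministic counterpart of the same difference equation
expected = 1000 * (1 - 0.5 * 0.1) ** 20
```

Because E[Poisson(μxΔt)] = μxΔt, the ensemble mean of the stochastic runs tracks the deterministic macro-model while still producing integer-valued, stochastic trajectories.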

  19. Micromechanical poroelastic finite element and shear-lag models of tendon predict large strain dependent Poisson's ratios and fluid expulsion under tensile loading.

    PubMed

    Ahmadzadeh, Hossein; Freedman, Benjamin R; Connizzo, Brianne K; Soslowsky, Louis J; Shenoy, Vivek B

    2015-08-01

As tendons are loaded, they reduce in volume and exude fluid to the surrounding medium. Experimental studies have shown that tendon stretching results in a Poisson's ratio greater than 0.5, with a maximum value at small strains followed by a nonlinear decay. Here we present a computational model that attributes this macroscopic observation to the microscopic mechanism of load transfer between fibrils under stretch. We develop a finite element model based on the mechanical role of the interfibrillar-linking elements, such as thin fibrils that bridge the aligned fibrils or macromolecules such as glycosaminoglycans (GAGs) in the interfibrillar sliding, and verify it with a theoretical shear-lag model. We showed the existence of a previously unappreciated structure-function mechanism whereby the Poisson's ratio in tendon is affected by the applied strain and the interfibrillar-linker properties, and together these features predict tendon volume shrinkage under tensile loading. During loading, the interfibrillar-linkers pulled fibrils toward each other and squeezed the matrix, leading to a Poisson's ratio larger than 0.5 and fluid expulsion. In addition, the rotation of the interfibrillar-linkers with respect to the fibrils at large strains caused a reduction in the volume shrinkage and an eventual nonlinear decay in Poisson's ratio at large strains. Our model also predicts a fluid flow that has a radial pattern toward the surrounding medium, with larger fluid velocities in proportion to the interfibrillar sliding. PMID:25934322

  20. Poisson-Nernst-Planck model with Chang-Jaffe, diffusion, and ohmic boundary conditions

    NASA Astrophysics Data System (ADS)

    Lelidis, I.; Macdonald, J. Ross; Barbero, G.

    2016-01-01

Using the linear Poisson-Nernst-Planck impedance-response continuum model, we investigate the possible equivalences of three different types of boundary conditions previously proposed to model the electrode behavior of an electrolytic cell in the shape of a slab. We show analytically that the boundary conditions proposed long ago by Chang-Jaffe are fully equivalent to the ohmic boundary conditions only if the positive and negative ions have the same mobility, or when only ions of a single polarity are mobile. In the case where the ions have different and non-zero mobilities, we fit exact impedance spectra created for ohmic boundary conditions by using the Chang-Jaffe Poisson-Nernst-Planck response model, one that is dominated by diffusion effects. These fits yield conditions for essentially exact or approximate numerical correspondence for the complex impedance between the two models even in the unequal mobility case. Finally, diffusion-type boundary conditions are shown to be fully equivalent to the ohmic ones. Some limiting cases of the model parameters are investigated.

  1. A Poisson random field model of pathogen transport in surface water

    NASA Astrophysics Data System (ADS)

    Yeghiazarian, L.; Samorodnitsky, G.; Montemagno, C. D.

    2009-11-01

    To address the uncertainty associated with microbial transport and surface water contamination events, we developed a new comprehensive stochastic framework that combines processes on the microscopic (single microorganism) and macroscopic (ensembles of microorganisms) scales. The spatial and temporal population behavior is modeled as a nonhomogeneous Poisson random field with Markovian field dynamics. The model parameters are based on the actual physical and biological characteristics of the Cryptosporidium parvum transport process and can be extended to cover a variety of other pathogens. Since soil particles have been shown to be a major vehicle in microbial transport, a U.S. Department of Agriculture approved erosion model (Water Erosion Prediction Project) is incorporated into the model. Risk assessment is an integral part of the stochastic model and is conducted using a set of simple calculations. Poisson intensity functions and correlations are computed. The results consistently indicate that surface water contamination events are transient, with traveling high peaks of microorganism concentrations. Correlations between microorganism populations at different points in time and space reach relatively significant levels even at large distances from one another. This information is aimed to assist water resources management teams in the decision-making process to identify the likely timing and locations of high-risk areas and thus to avoid collection of contaminated water.
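    A nonhomogeneous Poisson process like the transient contamination peaks described here is commonly simulated by Lewis-Shedler thinning. The sketch below uses a hypothetical intensity with a single transient peak, not the paper's calibrated C. parvum parameters:

```python
import numpy as np

rng = np.random.default_rng(5)

def thinning(intensity, lam_max, T, rng):
    """Lewis-Shedler thinning: sample a homogeneous PP at rate lam_max,
    keep each candidate time t with probability intensity(t) / lam_max."""
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > T:
            return np.array(events)
        if rng.random() < intensity(t) / lam_max:
            events.append(t)

def intensity(t):
    # Baseline risk plus a transient contamination peak around t = 2
    return 5.0 + 40.0 * np.exp(-((t - 2.0) ** 2))

counts = [len(thinning(intensity, 45.0, 4.0, rng)) for _ in range(500)]
mean_count = sum(counts) / len(counts)  # ~ integral of intensity over [0, 4] ≈ 90.6
```

The requirement is only that lam_max bounds the intensity on [0, T]; the expected event count equals the integral of the intensity, consistent with the traveling-peak behavior described in the abstract.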

  2. Simulation of high tensile Poisson's ratios of articular cartilage with a finite element fibril-reinforced hyperelastic model.

    PubMed

    García, José Jaime

    2008-06-01

    Analyses with a finite element fibril-reinforced hyperelastic model were undertaken in this study to simulate the high tensile Poisson's ratios that have been consistently documented in experimental studies of articular cartilage. The solid phase was represented by an isotropic matrix reinforced with four sets of fibrils, two of them aligned in orthogonal directions and two oblique fibrils in a symmetric configuration with respect to the orthogonal axes. Two distinct hyperelastic functions were used to represent the matrix and the fibrils. Results of the analyses showed that only by considering non-orthogonal fibrils was it possible to represent Poisson's ratios higher than one. Constraints in the grips and finite deformations played a minor role in the calculated Poisson's ratio. This study also showed that the model with oblique fibrils at 45 degrees was able to represent the significant differences in Poisson's ratios near 1 documented in experimental studies. However, even considering constraints in the grips, this model was not capable of simulating the Poisson's ratios near 2 that have been reported in other studies. The study also confirmed that only with a high ratio between the stiffness of the fibers and that of the matrix was it possible to obtain high Poisson's ratios for the tissue. Results suggest that analytical models with a finite number of fibrils are appropriate to represent the main mechanical effects of articular cartilage. PMID:17690001

  4. Application of spatial Poisson process models to air mass thunderstorm rainfall

    NASA Technical Reports Server (NTRS)

    Eagleson, P. S.; Fennessy, N. M.; Wang, Qinliang; Rodriguez-Iturbe, I.

    1987-01-01

    Eight years of summer storm rainfall observations from 93 stations in and around the 154 sq km Walnut Gulch catchment of the Agricultural Research Service, U.S. Department of Agriculture, in Arizona are processed to yield the total station depths of 428 storms. Statistical analysis of these random fields yields the first two moments, the spatial correlation and variance functions, and the spatial distribution of total rainfall for each storm. The absolute and relative worth of three Poisson models is evaluated by comparing their predictions of the spatial distribution of storm rainfall with observations from the second half of the sample. The effect of interstorm parameter variation is examined.

  5. Information transfer with rate-modulated Poisson processes: a simple model for nonstationary stochastic resonance.

    PubMed

    Goychuk, I

    2001-08-01

    Stochastic resonance in a simple model of information transfer is studied for sensory neurons and ensembles of ion channels. An exact expression for the information gain is obtained for the Poisson process with the signal-modulated spiking rate. This result allows one to generalize the conventional stochastic resonance (SR) problem (with periodic input signal) to arbitrary signals of finite duration (nonstationary SR). Moreover, in the case of a periodic signal, the rate of information gain is compared with the conventional signal-to-noise ratio. The paper establishes the general nonequivalence between both measures notwithstanding their apparent similarity in the limit of weak signals.
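
The information carried by a signal-modulated Poisson count can be illustrated with a minimal two-rate channel; the mutual-information sum below is a generic textbook calculation, not the paper's exact expression, and the rates are hypothetical.

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def info_gain_bits(lam0, lam1, p1=0.5, kmax=60):
    """Mutual information I(S; N) in bits between a binary signal S (spike
    rates lam0 / lam1, prior p1 on the high state) and the Poisson spike
    count N it modulates."""
    p0 = 1.0 - p1
    total = 0.0
    for k in range(kmax + 1):
        pk = p0 * poisson_pmf(k, lam0) + p1 * poisson_pmf(k, lam1)
        for p_s, lam in ((p0, lam0), (p1, lam1)):
            pks = poisson_pmf(k, lam)
            if pks > 0.0 and pk > 0.0:
                total += p_s * pks * math.log2(pks / pk)
    return total

print(info_gain_bits(2.0, 2.0))    # identical rates: the count carries no information
print(info_gain_bits(1.0, 10.0))   # well-separated rates: most of the 1-bit ceiling
```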

  7. Nonlinear Poisson-Boltzmann model of charged lipid membranes: Accounting for the presence of zwitterionic lipids

    NASA Astrophysics Data System (ADS)

    Mengistu, Demmelash H.; May, Sylvio

    2008-09-01

    The nonlinear Poisson-Boltzmann model is used to derive analytical expressions for the free energies of both mixed anionic-zwitterionic and mixed cationic-zwitterionic lipid membranes as a function of the mole fraction of charged lipids. Accounting explicitly for the electrostatic properties of the zwitterionic lipid species affects the free energy of anionic and cationic membranes in qualitatively different ways: that of an anionic membrane changes monotonically as a function of the mole fraction of charged lipids, whereas that of a cationic membrane passes through a pronounced minimum.
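
As a rough numerical companion, the classical Gouy-Chapman limit of the nonlinear Poisson-Boltzmann model (a single charged plane in 1:1 salt, ignoring the zwitterionic headgroups that are the paper's focus) relates surface charge to surface potential via the Grahame equation; the area per lipid and salt concentration below are assumed values.

```python
import math

kT = 1.381e-23 * 298.0     # thermal energy at 298 K, J
e0 = 1.602e-19             # elementary charge, C
eps = 78.5 * 8.854e-12     # permittivity of water, F/m
NA = 6.022e23

def surface_potential(x_charged, area_per_lipid=0.65e-18, c_salt=0.1):
    """Gouy-Chapman surface potential (V) when a mole fraction x_charged of
    lipids (assumed area 0.65 nm^2 each) carries one elementary charge,
    bathed in c_salt mol/L of 1:1 electrolyte."""
    sigma = x_charged * e0 / area_per_lipid       # surface charge density, C/m^2
    c0 = c_salt * 1000.0 * NA                     # bulk ion number density, 1/m^3
    grahame = math.sqrt(8.0 * c0 * eps * kT)      # Grahame-equation prefactor
    return (2.0 * kT / e0) * math.asinh(sigma / grahame)

for x in (0.0, 0.1, 0.5):
    print(x, round(surface_potential(x) * 1000.0, 1), "mV")
```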

  8. Clustered mixed nonhomogeneous Poisson process spline models for the analysis of recurrent event panel data.

    PubMed

    Nielsen, J D; Dean, C B

    2008-09-01

    A flexible semiparametric model for analyzing longitudinal panel count data arising from mixtures is presented. Panel count data refers here to count data on recurrent events collected as the number of events that have occurred within specific follow-up periods. The model assumes that the counts for each subject are generated by mixtures of nonhomogeneous Poisson processes with smooth intensity functions modeled with penalized splines. Time-dependent covariate effects are also incorporated into the process intensity using splines. Discrete mixtures of these nonhomogeneous Poisson process spline models extract functional information from underlying clusters representing hidden subpopulations. The motivating application is an experiment to test the effectiveness of pheromones in disrupting the mating pattern of the cherry bark tortrix moth. Mature moths arise from hidden, but distinct, subpopulations and monitoring the subpopulation responses was of interest. Within-cluster random effects are used to account for correlation structures and heterogeneity common to this type of data. An estimating equation approach to inference requiring only low moment assumptions is developed and the finite sample properties of the proposed estimating functions are investigated empirically by simulation.

  9. Evolving Scale-Free Networks by Poisson Process: Modeling and Degree Distribution.

    PubMed

    Feng, Minyu; Qu, Hong; Yi, Zhang; Xie, Xiurui; Kurths, Jurgen

    2016-05-01

    Since the great mathematician Leonhard Euler initiated the study of graph theory, the network has been one of the most significant research subjects across disciplines. In recent years, the proposal of the small-world and scale-free properties of complex networks in statistical physics made network science intriguing again for many researchers. One of the challenges of network science is to propose rational models for complex networks. In this paper, in order to reveal the influence of the vertex-generating mechanism of complex networks, we propose three novel models based on the homogeneous Poisson, nonhomogeneous Poisson and birth-death processes, respectively, which can be regarded as typical scale-free networks and utilized to simulate practical networks. The degree distribution and exponent are analyzed and explained mathematically by different approaches. In the simulations, we display the modeling process, the degree distribution of empirical data obtained by statistical methods, and the reliability of the proposed networks; the results show that our models follow the features of typical complex networks. Finally, some future challenges for complex systems are discussed. PMID:25956002
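
A minimal sketch of scale-free growth, assuming a Barabasi-Albert-style rule with one edge per arriving vertex rather than the paper's Poisson-arrival construction:

```python
import random

def grow_network(n, rng):
    """Preferential-attachment growth: each new vertex attaches one edge to
    an existing vertex chosen with probability proportional to its degree.
    The 'targets' pool lists every vertex once per unit of degree."""
    targets = [0, 1]            # initial graph: a single edge 0-1
    degree = {0: 1, 1: 1}
    for v in range(2, n):
        u = rng.choice(targets)             # degree-proportional choice
        degree[v] = 1
        degree[u] += 1
        targets += [v, u]
    return degree

rng = random.Random(7)
deg = grow_network(5000, rng)
print(max(deg.values()))        # heavy-tailed: hubs far above the mean degree of ~2
```

The degree histogram of `deg` approximates the power-law tail that the abstract's models reproduce analytically.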

  11. A coregionalization model can assist specification of Geographically Weighted Poisson Regression: Application to an ecological study.

    PubMed

    Ribeiro, Manuel Castro; Sousa, António Jorge; Pereira, Maria João

    2016-05-01

    The geographical distribution of health outcomes is influenced by socio-economic and environmental factors operating on different spatial scales. Geographical variations in relationships can be revealed with semi-parametric Geographically Weighted Poisson Regression (sGWPR), a model that can combine both geographically varying and geographically constant parameters. To decide whether a parameter should vary geographically, two models are compared: one in which all parameters are allowed to vary geographically and one in which all except the parameter being evaluated are allowed to vary geographically. The model with the lower corrected Akaike Information Criterion (AICc) is selected. Performing model selection exclusively according to the AICc might hide important details in the spatial variation of associations. We propose assisting the decision by using a Linear Model of Coregionalization (LMC). Here we show how the LMC can refine sGWPR for ecological associations between socio-economic and environmental variables and low birth weight outcomes in the west-north-central region of Portugal.

  12. Kinetic models in n-dimensional Euclidean spaces: From the Maxwellian to the Poisson kernel

    NASA Astrophysics Data System (ADS)

    Zadehgol, Abed

    2015-06-01

    In this work, minimal kinetic theories based on unconventional entropy functions, H ~ ln f (Burg entropy) for 2D and H ~ f^(1-2/n) (Tsallis entropy) for nD with n >= 3, are studied. These entropy functions were originally derived by Boghosian et al. [Phys. Rev. E 68, 025103 (2003), 10.1103/PhysRevE.68.025103] as a basis for discrete-velocity and lattice Boltzmann models for incompressible fluid dynamics. The present paper extends the entropic models of Boghosian et al. and shows that the explicit form of the equilibrium distribution function (EDF) of their models, in the continuous-velocity limit, can be identified with the Poisson kernel of the Poisson integral formula. The conservation and Navier-Stokes equations are recovered at low Mach numbers, and it is shown that rest particles can be used to rectify the speed of sound of the extended models. Fourier series expansion of the EDF is used to evaluate the discretization errors of the model. It is shown that the expansion coefficients of the Fourier series coincide with the velocity moments of the model. Employing two-, three-, and four-dimensional (2D, 3D, and 4D) complex systems, the real velocity space is mapped into the hypercomplex spaces and it is shown that the velocity moments can be evaluated, using the Poisson integral formula, in the hypercomplex space. For the practical applications, a 3D projection of the 4D model is presented, and the existence of an H theorem for the discrete model is investigated. The theoretical results have been verified by simulating the following benchmark problems: (1) the Kelvin-Helmholtz instability of thin shear layers in a doubly periodic domain and (2) the 3D flow of incompressible fluid in a lid-driven cubic cavity. The present results are in agreement with the previous works, while they show better stability of the proposed kinetic model, as compared with the BGK type (with single relaxation time) lattice Boltzmann models.

  13. Integrate-and-fire vs Poisson models of LGN input to V1 cortex: noisier inputs reduce orientation selectivity.

    PubMed

    Lin, I-Chun; Xing, Dajun; Shapley, Robert

    2012-12-01

    One of the reasons the visual cortex has attracted the interest of computational neuroscience is that it has well-defined inputs. The lateral geniculate nucleus (LGN) of the thalamus is the source of visual signals to the primary visual cortex (V1). Most large-scale cortical network models approximate the spike trains of LGN neurons as simple Poisson point processes. However, many studies have shown that neurons in the early visual pathway are capable of spiking with high temporal precision and their discharges are not Poisson-like. To gain an understanding of how response variability in the LGN influences the behavior of V1, we study response properties of model V1 neurons that receive purely feedforward inputs from LGN cells modeled either as noisy leaky integrate-and-fire (NLIF) neurons or as inhomogeneous Poisson processes. We first demonstrate that the NLIF model is capable of reproducing many experimentally observed statistical properties of LGN neurons. Then we show that a V1 model in which the LGN input to a V1 neuron is modeled as a group of NLIF neurons produces higher orientation selectivity than the one with Poisson LGN input. The second result implies that statistical characteristics of LGN spike trains are important for V1's function. We conclude that physiologically motivated models of V1 need to include more realistic LGN spike trains that are less noisy than inhomogeneous Poisson processes.
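
A bare-bones leaky integrate-and-fire neuron (Euler-stepped, deterministic drive, no noise term) illustrates the spiking mechanism underlying the NLIF inputs; all parameter values are illustrative.

```python
def lif_spike_train(i_syn, t_end=1.0, dt=1e-4, tau=0.02, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: tau * dv/dt = -v + i_syn(t); emit a spike
    and reset the membrane potential whenever v crosses v_th."""
    v, t, spikes = 0.0, 0.0, []
    while t < t_end:
        v += dt * (-v + i_syn(t)) / tau
        if v >= v_th:
            spikes.append(t)
            v = v_reset
        t += dt
    return spikes

spikes = lif_spike_train(lambda t: 1.5)   # constant suprathreshold drive
# theory: ISI = tau * ln(I / (I - v_th)) = 0.02 * ln(3), i.e. roughly 45 spikes/s
print(len(spikes))
```

Unlike a Poisson approximation, this deterministic-drive train is perfectly regular; adding membrane noise to the update yields the NLIF statistics the abstract compares against.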

  14. Pareto genealogies arising from a Poisson branching evolution model with selection.

    PubMed

    Huillet, Thierry E

    2014-02-01

    We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects (β < α). Depending on the range of α we derive the large N limit coalescents structure, leading either to a discrete-time Poisson-Dirichlet (α, -β) Ξ-coalescent (α ε[0, 1)), or to a family of continuous-time Beta (2 - α, α - β)Λ-coalescents (α ε[1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward in time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson Point Process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.

  15. A switching poisson process model for high concentrations in short-range atmospheric dispersion

    NASA Astrophysics Data System (ADS)

    Anderson, C. W.; Mole, N.; Nadarajah, S.

    High concentrations of a pollutant dispersing in a turbulent atmosphere may be described in terms of the times at which concentration exceeds a high threshold and the values it reaches at those times. A stochastic model based on a switching Poisson process is proposed to account for both aspects, extending an earlier model of Mole et al. (1995, Environmetrics6, 595-606), which described only the magnitudes of high concentrations. The model is fitted by maximum likelihood and is shown to be capable of capturing the broad features of extreme concentrations in a series of atmospheric dispersion experiments. Evidence is found that in some cases parameters of the model vary with time, and it is argued that this lends support to an explanation of the variability of extreme concentrations based on a meandering plume hypothesis.
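
A switching Poisson process can be simulated directly by exploiting memorylessness: resample the next candidate event and the next regime flip after every step and keep whichever comes first. The rates and sojourn times below are hypothetical, loosely evoking a meandering plume that alternately misses and strikes the sensor.

```python
import random

def switching_poisson(rates, hold, t_end, rng):
    """Two-state switching Poisson process: events arrive at rates[state];
    the state flips after an Exp sojourn of mean hold[state]. Memorylessness
    makes resampling both candidate times after every step valid."""
    t, state, events = 0.0, 0, []
    while t < t_end:
        next_flip = t + rng.expovariate(1.0 / hold[state])
        next_event = t + rng.expovariate(rates[state])
        if next_event < next_flip and next_event < t_end:
            events.append(next_event)
            t = next_event
        else:
            t = next_flip
            state = 1 - state
    return events

rng = random.Random(3)
# hypothetical regimes: quiescent background vs. plume-strike bursts
ev = switching_poisson(rates=(0.5, 20.0), hold=(10.0, 1.0), t_end=200.0, rng=rng)
print(len(ev))
```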

  16. Poisson's ratio and crustal seismology

    SciTech Connect

    Christensen, N.I.

    1996-02-10

    This report discusses the use of Poisson's ratio to place constraints on continental crustal composition. A summary of Poisson's ratios for many common rock formations is also included, with emphasis on igneous and metamorphic rock properties.
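
In seismology the ratio is recovered from the P- and S-wave speeds; a quick sketch (the velocity pairs are illustrative, not taken from the report):

```python
def poisson_ratio(vp, vs):
    """Poisson's ratio from P- and S-wave speeds:
    nu = (vp^2 - 2*vs^2) / (2 * (vp^2 - vs^2))."""
    r2 = (vp / vs) ** 2
    return (r2 - 2.0) / (2.0 * (r2 - 1.0))

print(round(poisson_ratio(3.0 ** 0.5, 1.0), 6))   # vp/vs = sqrt(3) -> 0.25
# illustrative felsic vs. mafic velocity pairs (km/s): mafic rocks run higher
print(round(poisson_ratio(6.0, 3.5), 3), round(poisson_ratio(7.0, 3.9), 3))
```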

  17. Poisson Coordinates.

    PubMed

    Li, Xian-Ying; Hu, Shi-Min

    2013-02-01

    Harmonic functions are the critical points of a Dirichlet energy functional, the linear projections of conformal maps. They play an important role in computer graphics, particularly for gradient-domain image processing and shape-preserving geometric computation. We propose Poisson coordinates, a novel transfinite interpolation scheme based on the Poisson integral formula, as a rapid way to estimate a harmonic function on a certain domain with desired boundary values. Poisson coordinates are an extension of the Mean Value coordinates (MVCs) which inherit their linear precision, smoothness, and kernel positivity. We give explicit formulas for Poisson coordinates in both continuous and 2D discrete forms. Superior to MVCs, Poisson coordinates are proved to be pseudoharmonic (i.e., they reproduce harmonic functions on n-dimensional balls). Our experimental results show that Poisson coordinates have lower Dirichlet energies than MVCs on a number of typical 2D domains (particularly convex domains). As well as presenting a formula, our approach provides useful insights for further studies on coordinates-based interpolation and fast estimation of harmonic functions.
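
The continuous form can be sketched by discretizing the Poisson integral on the unit circle; an equispaced sum converges very rapidly for smooth boundary data.

```python
import math

def poisson_interp(boundary_f, r, phi, n=2000):
    """Estimate a harmonic function at (r, phi) inside the unit disk from
    its boundary values via the Poisson integral formula, using an
    equispaced (trapezoidal) sum over the boundary circle."""
    total = 0.0
    for k in range(n):
        theta = 2.0 * math.pi * k / n
        kern = (1.0 - r * r) / (1.0 - 2.0 * r * math.cos(theta - phi) + r * r)
        total += kern * boundary_f(theta)
    return total / n

print(round(poisson_interp(lambda t: 1.0, 0.7, 0.3), 6))   # -> 1.0 (constants reproduced)
print(round(poisson_interp(math.cos, 0.5, 0.0), 6))        # -> 0.5 (extension of cos is r*cos(phi))
```

This reproduces the pseudoharmonic property claimed in the abstract on the disk: harmonic boundary data is extended exactly.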

  18. WAITING TIME DISTRIBUTION OF SOLAR ENERGETIC PARTICLE EVENTS MODELED WITH A NON-STATIONARY POISSON PROCESS

    SciTech Connect

    Li, C.; Su, W.; Fang, C.; Zhong, S. J.; Wang, L.

    2014-09-10

    We present a study of the waiting time distributions (WTDs) of solar energetic particle (SEP) events observed with the spacecraft WIND and GOES. The WTDs of both solar electron events (SEEs) and solar proton events (SPEs) display a power-law tail of ~Δt^(-γ). The SEEs display a broken power-law WTD. The power-law index is γ1 = 0.99 for the short waiting times (<70 hr) and γ2 = 1.92 for large waiting times (>100 hr). The break of the WTD of SEEs is probably due to the modulation of the corotating interaction regions. The power-law index γ ~ 1.82 is derived for the WTD of the SPEs, which is consistent with the WTD of type II radio bursts, indicating a close relationship between the shock wave and the production of energetic protons. The WTDs of SEP events can be modeled with a non-stationary Poisson process, which was proposed to understand the waiting time statistics of solar flares. We generalize the method and find that, if the SEP event rate λ = 1/Δt varies as the time distribution of event rate f(λ) = Aλ^(-α)exp(-βλ), the time-dependent Poisson distribution can produce a power-law tail WTD of ~Δt^(α-3), where 0 ≤ α < 2.
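
A small numerical check of the quoted result, taking the α = 0 member of the rate-distribution family, for which the rate-weighted waiting-time density has a simple closed form with the expected Δt^(α-3) tail:

```python
import math

def wtd(dt, beta, n=100000, lam_max=50.0):
    """Rate-weighted waiting-time density of a non-stationary Poisson
    process, P(dt) = <lam^2 exp(-lam*dt)> / <lam>, evaluated numerically
    for the exponential rate density f(lam) = beta * exp(-beta*lam),
    i.e. the alpha = 0 case of f(lam) = A*lam^(-alpha)*exp(-beta*lam)."""
    h = lam_max / n
    num = den = 0.0
    for k in range(1, n + 1):
        lam = k * h
        f = beta * math.exp(-beta * lam)
        num += lam * lam * math.exp(-lam * dt) * f * h
        den += lam * f * h
    return num / den

beta = 1.0
# for alpha = 0 the integral has the closed form 2*beta^2 / (beta + dt)^3,
# a dt^(alpha - 3) power-law tail as claimed
ratios = [wtd(dt, beta) / (2.0 * beta**2 / (beta + dt) ** 3) for dt in (1.0, 5.0, 20.0)]
print([round(r, 3) for r in ratios])
```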

  19. Generating fibre orientation maps in human heart models using Poisson interpolation.

    PubMed

    Wong, Jonathan; Kuhl, Ellen

    2014-01-01

    Smoothly varying muscle fibre orientations in the heart are critical to its electrical and mechanical function. From detailed histological studies and diffusion tensor imaging, we now know that fibre orientations in humans vary gradually from approximately -70° in the outer wall to +80° in the inner wall. However, the creation of fibre orientation maps for computational analyses remains one of the most challenging problems in cardiac electrophysiology and cardiac mechanics. Here, we show that Poisson interpolation generates smoothly varying vector fields that satisfy a set of user-defined constraints in arbitrary domains. Specifically, we enforce the Poisson interpolation in the weak sense using a standard linear finite element algorithm for scalar-valued second-order boundary value problems and introduce the feature to be interpolated as a global unknown. User-defined constraints are then simply enforced in the strong sense as Dirichlet boundary conditions. We demonstrate that the proposed concept is capable of generating smoothly varying fibre orientations, quickly, efficiently and robustly, both in a generic bi-ventricular model and in a patient-specific human heart. Sensitivity analyses demonstrate that the underlying algorithm is conceptually able to handle both arbitrarily and uniformly distributed user-defined constraints; however, the quality of the interpolation is best for uniformly distributed constraints. We anticipate our algorithm to be immediately transformative to experimental and clinical settings, in which it will allow us to quickly and reliably create smooth interpolations of arbitrary fields from data-sets, which are sparse but uniformly distributed.
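
In one dimension the idea reduces to harmonic (Laplace) interpolation between two fixed values; a Jacobi-relaxation sketch with the transmural endpoint angles quoted in the abstract:

```python
def harmonic_interp_1d(a, b, n=21, iters=20000):
    """Laplace interpolation (Poisson with zero source) on a 1-D transmural
    line: fix the angle at the two walls as Dirichlet data and relax the
    interior points by Jacobi iteration until they are harmonic."""
    u = [0.0] * n
    u[0], u[-1] = a, b
    for _ in range(iters):
        u = [u[0]] + [(u[i - 1] + u[i + 1]) / 2.0 for i in range(1, n - 1)] + [u[-1]]
    return u

angles = harmonic_interp_1d(-70.0, 80.0)    # outer-wall to inner-wall fibre angles
print(round(angles[10], 3))                 # midwall -> 5.0 (linear profile in 1-D)
```

In 1-D the harmonic solution is just the straight line between the constraints; the abstract's contribution is doing the same relaxation on arbitrary 3-D ventricular geometries with scattered constraints.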

  20. Comparing INLA and OpenBUGS for hierarchical Poisson modeling in disease mapping.

    PubMed

    Carroll, R; Lawson, A B; Faes, C; Kirby, R S; Aregay, M; Watjou, K

    2015-01-01

    The recently developed R package INLA (Integrated Nested Laplace Approximation) is becoming a more widely used package for Bayesian inference. The INLA software has been promoted as a fast alternative to MCMC for disease mapping applications. Here, we compare the INLA package to the MCMC approach by way of the BRugs package in R, which calls OpenBUGS. We focus on the Poisson data model commonly used for disease mapping. Ultimately, INLA is a computationally efficient way of implementing Bayesian methods and returns nearly identical estimates for fixed parameters in comparison to OpenBUGS, but falls short in recovering the true estimates for the random effects, their precisions, and model goodness of fit measures under the default settings. We assumed default settings for ground truth parameters, and through altering these default settings in our simulation study, we were able to recover estimates comparable to those produced in OpenBUGS under the same assumptions. PMID:26530822

  1. Elastic-plastic cube model for ultrasonic friction reduction via Poisson's effect.

    PubMed

    Dong, Sheng; Dapino, Marcelo J

    2014-01-01

    Ultrasonic friction reduction has been studied experimentally and theoretically. This paper presents a new elastic-plastic cube model which can be applied to various ultrasonic lubrication cases. A cube is used to represent all the contacting asperities of two surfaces. Friction force is considered as the product of the tangential contact stiffness and the deformation of the cube. Ultrasonic vibrations are projected onto three orthogonal directions; each projection separately changes the contact parameters and deformations, and hence the overall friction force. Experiments are conducted to examine ultrasonic friction reduction using different materials under normal loads that vary from 40 N to 240 N. Ultrasonic vibrations are generated both in longitudinal and vertical (out-of-plane) directions by way of the Poisson effect. The tests show up to 60% friction reduction; model simulations describe the trends observed experimentally.

  2. Incorporating headgroup structure into the Poisson-Boltzmann model of charged lipid membranes

    NASA Astrophysics Data System (ADS)

    Wang, Muyang; Chen, Er-Qiang; Yang, Shuang; May, Sylvio

    2013-07-01

    Charged lipids often possess a complex headgroup structure with several spatially separated charges and internal conformational degrees of freedom. We propose a headgroup model consisting of two rod-like segments of the same length that form a flexible joint, with three charges of arbitrary sign and valence located at the joint and the two terminal positions. One terminal charge is firmly anchored at the polar-apolar interface of the lipid layer whereas the other two benefit from the orientational degrees of freedom of the two headgroup segments. This headgroup model is incorporated into the mean-field continuum Poisson-Boltzmann formalism of the electric double layer. For sufficiently small lengths of the two rod-like segments a closed-form expression of the charging free energy is calculated. For three specific examples—a zwitterionic headgroup with conformational freedom and two headgroups that carry an excess charge—we analyze and discuss conformational properties and electrostatic free energies.

  3. Bayesian hierarchical Poisson models with a hidden Markov structure for the detection of influenza epidemic outbreaks.

    PubMed

    Conesa, D; Martínez-Beneito, M A; Amorós, R; López-Quílez, A

    2015-04-01

    Considerable effort has been devoted to the development of statistical algorithms for the automated monitoring of influenza surveillance data. In this article, we introduce a framework of models for the early detection of the onset of an influenza epidemic which is applicable to different kinds of surveillance data. In particular, the process of the observed cases is modelled via a Bayesian Hierarchical Poisson model in which the intensity parameter is a function of the incidence rate. The key point is to consider this incidence rate as a normal distribution in which both parameters (mean and variance) are modelled differently, depending on whether the system is in an epidemic or non-epidemic phase. To do so, we propose a hidden Markov model in which the transition between both phases is modelled as a function of the epidemic state of the previous week. Different options for modelling the rates are described, including the option of modelling the mean at each phase as autoregressive processes of order 0, 1 or 2. Bayesian inference is carried out to provide the probability of being in an epidemic state at any given moment. The methodology is applied to various influenza data sets. The results indicate that our methods outperform previous approaches in terms of sensitivity, specificity and timeliness.
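
The epidemic-phase posterior can be sketched with a two-state forward algorithm using Poisson emissions; the transition probabilities and rates below are hypothetical and much simpler than the paper's hierarchical model, but the filtering logic is the same.

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def epidemic_prob(counts, lam=(5.0, 25.0), p_stay=(0.95, 0.90)):
    """Forward algorithm for a 2-state HMM (0 = non-epidemic, 1 = epidemic)
    with Poisson emissions; returns P(epidemic | counts so far) per week."""
    A = [[p_stay[0], 1 - p_stay[0]], [1 - p_stay[1], p_stay[1]]]
    alpha = [0.99, 0.01]                      # assumed initial state probabilities
    out = []
    for y in counts:
        pred = [sum(alpha[i] * A[i][j] for i in range(2)) for j in range(2)]
        alpha = [pred[j] * poisson_pmf(y, lam[j]) for j in range(2)]
        s = sum(alpha)
        alpha = [a / s for a in alpha]        # normalize to a posterior
        out.append(alpha[1])
    return out

probs = epidemic_prob([4, 6, 5, 18, 30, 27])  # hypothetical weekly case counts
print([round(p, 3) for p in probs])
```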

  4. Testing a Poisson Counter Model for Visual Identification of Briefly Presented, Mutually Confusable Single Stimuli in Pure Accuracy Tasks

    ERIC Educational Resources Information Center

    Kyllingsbaek, Soren; Markussen, Bo; Bundesen, Claus

    2012-01-01

    The authors propose and test a simple model of the time course of visual identification of briefly presented, mutually confusable single stimuli in pure accuracy tasks. The model implies that during stimulus analysis, tentative categorizations that stimulus i belongs to category j are made at a constant Poisson rate, v(i, j). The analysis is…
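
A toy race-model reading of the mechanism (the rates v(i, j) below are hypothetical): each category's counter accrues tentative categorizations at a constant Poisson rate, and the largest count at stimulus offset determines the response, so accuracy grows with exposure duration.

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's method: count uniform draws until their product falls below e^-lam."""
    k, p, limit = 0, 1.0, math.exp(-lam)
    while True:
        p *= rng.random()
        if p < limit:
            return k
        k += 1

def identify(rates, t_expose, rng):
    """Poisson counter race: the category with the highest count at
    stimulus offset wins; ties are broken at random among the leaders."""
    counts = [poisson_sample(v * t_expose, rng) for v in rates]
    best = max(counts)
    return rng.choice([j for j, c in enumerate(counts) if c == best])

rng = random.Random(11)
v = [4.0, 1.0, 1.0]    # hypothetical v(i, j): the correct category accrues fastest
acc_short = sum(identify(v, 0.2, rng) == 0 for _ in range(2000)) / 2000
acc_long = sum(identify(v, 1.0, rng) == 0 for _ in range(2000)) / 2000
print(acc_short, acc_long)    # longer exposures yield higher accuracy
```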

  5. Advanced 3D Poisson solvers and particle-in-cell methods for accelerator modeling

    NASA Astrophysics Data System (ADS)

    Serafini, David B.; McCorquodale, Peter; Colella, Phillip

    2005-01-01

    We seek to improve on the conventional FFT-based algorithms for solving the Poisson equation with infinite-domain (open) boundary conditions for large problems in accelerator modeling and related areas. In particular, improvements in both accuracy and performance are possible by combining several technologies: the method of local corrections (MLC); the James algorithm; and adaptive mesh refinement (AMR). The MLC enables the parallelization (by domain decomposition) of problems with large domains and many grid points. This improves on the FFT-based Poisson solvers typically used, as it doesn't require the all-to-all communication pattern that parallel 3D FFT algorithms require, which tends to be a performance bottleneck on current (and foreseeable) parallel computers. In initial tests, good scalability up to 1000 processors has been demonstrated for our new MLC solver. An essential component of our approach is a new version of the James algorithm for infinite-domain boundary conditions for the case of three dimensions. By using a simplified version of the fast multipole method in the boundary-to-boundary potential calculation, we improve on the performance of the Hockney algorithm typically used by reducing the number of grid points by a factor of 8, and the CPU costs by a factor of 3. This is particularly important for large problems where computer memory limits are a consideration. The MLC allows for the use of adaptive mesh refinement, which reduces the number of grid points and increases the accuracy in the Poisson solution. This improves on the uniform grid methods typically used in PIC codes, particularly in beam problems where the halo is large. Also, the number of particles per cell can be controlled more closely with adaptivity than with a uniform grid. To use AMR with particles is more complicated than using uniform grids. It affects depositing particles on the non-uniform grid, reassigning particles when the adaptive grid changes and maintaining the load
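
In one dimension with Dirichlet boundaries, the discretized Poisson equation is tridiagonal and can be solved directly; a toy stand-in for the 3-D solvers discussed above, verified against a manufactured solution:

```python
import math

def poisson_dirichlet_1d(f, n):
    """Solve -u'' = f on (0, 1) with u(0) = u(1) = 0 via the Thomas
    algorithm on the standard second-order stencil (-1, 2, -1)."""
    h = 1.0 / (n + 1)
    rhs = [f((i + 1) * h) * h * h for i in range(n)]
    c, d = [0.0] * n, [0.0] * n
    c[0], d[0] = -0.5, rhs[0] / 2.0            # forward elimination
    for i in range(1, n):
        m = 2.0 + c[i - 1]
        c[i] = -1.0 / m
        d[i] = (rhs[i] + d[i - 1]) / m
    u = [0.0] * n                              # back substitution
    u[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        u[i] = d[i] - c[i] * u[i + 1]
    return u

# manufactured solution u = sin(pi x), so f = pi^2 sin(pi x)
n = 63
u = poisson_dirichlet_1d(lambda x: math.pi**2 * math.sin(math.pi * x), n)
exact = [math.sin(math.pi * (i + 1) / (n + 1)) for i in range(n)]
err = max(abs(a - b) for a, b in zip(u, exact))
print(err < 1e-3)    # second-order accuracy at h = 1/64
```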

  6. Semiparametric bivariate zero-inflated Poisson models with application to studies of abundance for multiple species

    USGS Publications Warehouse

    Arab, Ali; Holan, Scott H.; Wikle, Christopher K.; Wildhaber, Mark L.

    2012-01-01

    Ecological studies involving counts of abundance, presence–absence or occupancy rates often produce data having a substantial proportion of zeros. Furthermore, these types of processes are typically multivariate and only adequately described by complex nonlinear relationships involving externally measured covariates. Ignoring these aspects of the data and implementing standard approaches can lead to models that fail to provide adequate scientific understanding of the underlying ecological processes, possibly resulting in a loss of inferential power. One method of dealing with data having excess zeros is to consider the class of univariate zero-inflated generalized linear models. However, this class of models fails to address the multivariate and nonlinear aspects associated with the data usually encountered in practice. Therefore, we propose a semiparametric bivariate zero-inflated Poisson model that takes into account both of these data attributes. The general modeling framework is hierarchical Bayes and is suitable for a broad range of applications. We demonstrate the effectiveness of our model through a motivating example on modeling catch per unit area for multiple species using data from the Missouri River Benthic Fishes Study, implemented by the United States Geological Survey.
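
The univariate zero-inflated Poisson building block that the paper extends mixes a structural-zero mass with an ordinary Poisson component:

```python
import math

def zip_pmf(k, pi0, lam):
    """Zero-inflated Poisson: a structural zero with probability pi0,
    otherwise an ordinary Poisson(lam) count."""
    base = math.exp(-lam) * lam ** k / math.factorial(k)
    return pi0 * (k == 0) + (1.0 - pi0) * base

print(round(zip_pmf(0, 0.4, 2.0), 4))      # 0.4812: excess zeros vs. plain Poisson
print(round(math.exp(-2.0), 4))            # 0.1353: plain Poisson(2) mass at zero
print(round(sum(zip_pmf(k, 0.4, 2.0) for k in range(50)), 6))   # 1.0: still a pmf
```

The bivariate, semiparametric model in the abstract couples two such components hierarchically and lets the intensities depend nonlinearly on covariates.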

  7. Exact protein distributions for stochastic models of gene expression using partitioning of Poisson processes

    NASA Astrophysics Data System (ADS)

    Pendar, Hodjat; Platini, Thierry; Kulkarni, Rahul V.

    2013-04-01

    Stochasticity in gene expression gives rise to fluctuations in protein levels across a population of genetically identical cells. Such fluctuations can lead to phenotypic variation in clonal populations; hence, there is considerable interest in quantifying noise in gene expression using stochastic models. However, obtaining exact analytical results for protein distributions has been an intractable task for all but the simplest models. Here, we invoke the partitioning property of Poisson processes to develop a mapping that significantly simplifies the analysis of stochastic models of gene expression. The mapping leads to exact protein distributions using results for mRNA distributions in models with promoter-based regulation. Using this approach, we derive exact analytical results for steady-state and time-dependent distributions for the basic two-stage model of gene expression. Furthermore, we show how the mapping leads to exact protein distributions for extensions of the basic model that include the effects of posttranscriptional and posttranslational regulation. The approach developed in this work is widely applicable and can contribute to a quantitative understanding of stochasticity in gene expression and its regulation.
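
    The partitioning (thinning) property invoked here says that independently marking each event of a Poisson stream with probability p splits it into two independent Poisson streams with rates p*lam and (1-p)*lam. A quick numerical check (values illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, p, n = 10.0, 0.3, 100_000

total = rng.poisson(lam, n)      # events per unit interval
kept = rng.binomial(total, p)    # independently mark each event with probability p
rest = total - kept

# By the partitioning property, `kept` and `rest` behave as independent
# Poisson(p*lam) and Poisson((1-p)*lam) samples, so their correlation is ~0.
corr = np.corrcoef(kept, rest)[0, 1]
```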

  8. Exact protein distributions for stochastic models of gene expression using partitioning of Poisson processes.

    PubMed

    Pendar, Hodjat; Platini, Thierry; Kulkarni, Rahul V

    2013-04-01

    Stochasticity in gene expression gives rise to fluctuations in protein levels across a population of genetically identical cells. Such fluctuations can lead to phenotypic variation in clonal populations; hence, there is considerable interest in quantifying noise in gene expression using stochastic models. However, obtaining exact analytical results for protein distributions has been an intractable task for all but the simplest models. Here, we invoke the partitioning property of Poisson processes to develop a mapping that significantly simplifies the analysis of stochastic models of gene expression. The mapping leads to exact protein distributions using results for mRNA distributions in models with promoter-based regulation. Using this approach, we derive exact analytical results for steady-state and time-dependent distributions for the basic two-stage model of gene expression. Furthermore, we show how the mapping leads to exact protein distributions for extensions of the basic model that include the effects of posttranscriptional and posttranslational regulation. The approach developed in this work is widely applicable and can contribute to a quantitative understanding of stochasticity in gene expression and its regulation.

  9. Graded Poisson-sigma models and dilaton-deformed 2D supergravity algebra

    NASA Astrophysics Data System (ADS)

    Bergamin, Luzi; Kummer, Wolfgang

    2003-05-01

    Supergravity extensions of generic 2d gravity theories obtained from the graded Poisson-Sigma model (gPSM) approach show a large degree of ambiguity. On the other hand, obstructions may reduce the allowed range of fields as given by the bosonic theory, or even prohibit any extension in certain cases. In our present work we relate the finite W-algebras inherent in the gPSM algebra of constraints to supergravity algebras (Neveu-Schwarz or Ramond algebras, respectively), deformed by the presence of the dilaton field. With very straightforward and natural assumptions on them - like the one linking the anti-commutator of certain fermionic charges to the Hamiltonian constraint without deformation - we are able to remove not only the ambiguities but, at the same time, also the singularities referred to above. Thus all especially interesting bosonic models (spherically reduced gravity, the Jackiw-Teitelboim model etc.) under these conditions possess a unique fermionic extension and are free from new singularities. The superspace supergravity model of Howe is found as a special case of this supergravity action. For this class of models the relation between bosonic potential and prepotential does not introduce obstructions either.

  10. Deconvolution of Poisson-Limited Data Using a Bayesian Multi-Scale Model

    NASA Astrophysics Data System (ADS)

    Kolaczyk, E. D.; Nowak, R. D.

    1999-04-01

    We present a new approach for producing deconvolved spectra and images, based on a novel non-parametric, multi-scale statistical model designed explicitly for Poisson-limited data. The framework within which we work is completely general, requiring only that the user specify the manner in which the data were ``blurred'' (for example, through an instrument-specific PSF). Therefore, we anticipate our method serving in problems involving data at any of a variety of energies, especially at the X-ray and gamma-ray levels, for deconvolution problems arising in a variety of missions -- particularly when there is not yet a good analytical model for the source(s). The underlying statistical framework explicitly models the process of (dis)aggregating counts across multiple resolutions. The result is a multi-scale (though not wavelet-based) representation of the source object to be recovered through the deconvolution. Furthermore, this framework is built completely within the context of the original Poisson data likelihood, so that we proceed without the use of statistical approximations (such as chi-squared approximations) or data transformations. Adopting a Bayesian paradigm, a flexible prior probability structure is used to regularize the set of possible solutions to what is formally a statistical inverse problem. This prior is both intuitive and interpretable, in that it models the degree to which counts are allowed to be (dis)aggregated at each location-scale combination. Estimates of the ``deblurred'' source object are obtained in our procedure using standard Bayesian statistical techniques (i.e., based on the mode of the posterior distribution of the object given the data). Despite the generality of this method and the potentially complex structures that may be modeled, these estimates may be produced using an efficient iterative algorithm (i.e., the expectation-maximization (EM) algorithm), wherein iterates at each stage are yielded by closed-form solutions to simple
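
    Without the multi-scale prior, the EM algorithm for this same Poisson deconvolution likelihood reduces to the classical Richardson-Lucy iteration; a minimal 1-D sketch on synthetic data (the test signal and PSF are ours, not from the paper):

```python
import numpy as np

def richardson_lucy(y, psf, n_iter=200):
    """EM iteration for the Poisson deconvolution likelihood (Richardson-Lucy):
    multiplicative updates keep the estimate non-negative."""
    x = np.full_like(y, y.mean())
    psf_flip = psf[::-1]                      # adjoint of convolution (correlation)
    for _ in range(n_iter):
        blurred = np.convolve(x, psf, mode="same")
        ratio = y / np.maximum(blurred, 1e-12)
        x = x * np.convolve(ratio, psf_flip, mode="same")
    return x

# Noiseless test: a spike on a flat baseline, blurred by a Gaussian PSF.
t = np.arange(-4, 5)
psf = np.exp(-0.5 * (t / 1.5) ** 2)
psf /= psf.sum()
truth = np.ones(64)
truth[30] += 100.0
y = np.convolve(truth, psf, mode="same")
x = richardson_lucy(y, psf)
```

    On noiseless data the iteration re-concentrates the blurred spike at its true location; with noisy data some regularization (such as the prior above) is what keeps the iteration from amplifying noise.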

  11. Analytical solutions of nonlocal Poisson dielectric models with multiple point charges inside a dielectric sphere

    NASA Astrophysics Data System (ADS)

    Xie, Dexuan; Volkmer, Hans W.; Ying, Jinyong

    2016-04-01

    The nonlocal dielectric approach has led to new models and solvers for predicting electrostatics of proteins (or other biomolecules), but how to validate and compare them remains a challenge. To promote such a study, in this paper, two typical nonlocal dielectric models are revisited. Their analytical solutions are then found as simple series expressions for a dielectric sphere containing any number of point charges. As a special case, the analytical solution of the corresponding Poisson dielectric model is also derived as a simple series, which significantly improves the well-known Kirkwood double series expansion. Furthermore, a convolution of one nonlocal dielectric solution with a commonly used nonlocal kernel function is obtained, along with the reaction parts of these local and nonlocal solutions. To turn these new series solutions into a valuable research tool, they are programmed as a free Fortran software package, which can read point charge data directly from a Protein Data Bank file. Consequently, different validation tests can be quickly done on different proteins. Finally, a test example for a protein with 488 atomic charges is reported to demonstrate the differences between the local and nonlocal models as well as the importance of using the reaction parts to develop local and nonlocal dielectric solvers.

  12. Introduction of effective dielectric constant to the Poisson-Nernst-Planck model.

    PubMed

    Sawada, Atsushi

    2016-05-01

    The Poisson-Nernst-Planck (PNP) model has been widely used for analyzing impedance or dielectric spectra observed for dilute electrolytic cells. In the analysis, the behavior of mobile ions in the cell under an external electric field has been explained by a conductive nature regardless of ionic concentrations. However, if the cell has parallel-plate blocking electrodes, the mobile ions may also play a role as a dielectric medium in the cell by the effect of space-charge polarization when the ionic concentration is sufficiently low. Thus the mobile ions confined between the blocking electrodes can have conductive and dielectric natures simultaneously, and their intensities are affected by the ionic concentration and the adsorption of solvent molecules on the electrodes. The balance of the conductive and dielectric natures is quantitatively determined by introducing an effective dielectric constant to the PNP model in the data analysis. The generalized PNP model with the effective dielectric constant successfully explains the anomalous frequency-dependent dielectric behaviors brought about by the mobile ions in dilute electrolytic cells, for which the conventional PNP model fails in interpretation.

  13. Analytical solutions of nonlocal Poisson dielectric models with multiple point charges inside a dielectric sphere.

    PubMed

    Xie, Dexuan; Volkmer, Hans W; Ying, Jinyong

    2016-04-01

    The nonlocal dielectric approach has led to new models and solvers for predicting electrostatics of proteins (or other biomolecules), but how to validate and compare them remains a challenge. To promote such a study, in this paper, two typical nonlocal dielectric models are revisited. Their analytical solutions are then found as simple series expressions for a dielectric sphere containing any number of point charges. As a special case, the analytical solution of the corresponding Poisson dielectric model is also derived as a simple series, which significantly improves the well-known Kirkwood double series expansion. Furthermore, a convolution of one nonlocal dielectric solution with a commonly used nonlocal kernel function is obtained, along with the reaction parts of these local and nonlocal solutions. To turn these new series solutions into a valuable research tool, they are programmed as a free Fortran software package, which can read point charge data directly from a Protein Data Bank file. Consequently, different validation tests can be quickly done on different proteins. Finally, a test example for a protein with 488 atomic charges is reported to demonstrate the differences between the local and nonlocal models as well as the importance of using the reaction parts to develop local and nonlocal dielectric solvers.

  15. Introduction of effective dielectric constant to the Poisson-Nernst-Planck model

    NASA Astrophysics Data System (ADS)

    Sawada, Atsushi

    2016-05-01

    The Poisson-Nernst-Planck (PNP) model has been widely used for analyzing impedance or dielectric spectra observed for dilute electrolytic cells. In the analysis, the behavior of mobile ions in the cell under an external electric field has been explained by a conductive nature regardless of ionic concentrations. However, if the cell has parallel-plate blocking electrodes, the mobile ions may also play a role as a dielectric medium in the cell by the effect of space-charge polarization when the ionic concentration is sufficiently low. Thus the mobile ions confined between the blocking electrodes can have conductive and dielectric natures simultaneously, and their intensities are affected by the ionic concentration and the adsorption of solvent molecules on the electrodes. The balance of the conductive and dielectric natures is quantitatively determined by introducing an effective dielectric constant to the PNP model in the data analysis. The generalized PNP model with the effective dielectric constant successfully explains the anomalous frequency-dependent dielectric behaviors brought about by the mobile ions in dilute electrolytic cells, for which the conventional PNP model fails in interpretation.

  16. Nonparametric estimation of the heterogeneity of a random medium using compound Poisson process modeling of wave multiple scattering

    NASA Astrophysics Data System (ADS)

    Le Bihan, Nicolas; Margerin, Ludovic

    2009-07-01

    In this paper, we present a nonparametric method to estimate the heterogeneity of a random medium from the angular distribution of intensity of waves transmitted through a slab of random material. Our approach is based on the modeling of forward multiple scattering using compound Poisson processes on compact Lie groups. The estimation technique is validated through numerical simulations based on radiative transfer theory.
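
    A scalar toy version of a compound Poisson model (on the real line rather than on a compact Lie group) sums a Poisson number of i.i.d. per-event contributions; its first two moments, E[S] = lam*E[X] and Var[S] = lam*E[X^2], are easy to verify by simulation (parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
lam, nsamp = 4.0, 50_000

# S = X_1 + ... + X_N with N ~ Poisson(lam) scattering events and i.i.d.
# per-event contributions X_i (here exponential with mean 1).
N = rng.poisson(lam, nsamp)
S = np.array([rng.exponential(1.0, n).sum() for n in N])

# For exponential(1) jumps: E[S] = lam * 1 = 4, Var[S] = lam * E[X^2] = lam * 2 = 8.
```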

  17. Nonparametric estimation of the heterogeneity of a random medium using compound Poisson process modeling of wave multiple scattering.

    PubMed

    Le Bihan, Nicolas; Margerin, Ludovic

    2009-07-01

    In this paper, we present a nonparametric method to estimate the heterogeneity of a random medium from the angular distribution of intensity of waves transmitted through a slab of random material. Our approach is based on the modeling of forward multiple scattering using compound Poisson processes on compact Lie groups. The estimation technique is validated through numerical simulations based on radiative transfer theory.

  18. An efficient parallel sampling technique for Multivariate Poisson-Lognormal model: Analysis with two crash count datasets

    SciTech Connect

    Zhan, Xianyuan; Aziz, H. M. Abdul; Ukkusuri, Satish V.

    2015-11-19

    Our study investigates the Multivariate Poisson-Lognormal (MVPLN) model that jointly models crash frequency and severity, accounting for correlations. Ordinary univariate count models analyze crashes of different severity levels separately, ignoring the correlations among severity levels. The MVPLN model can incorporate a general correlation structure and accounts for the overdispersion in the data, which leads to a superior fit. However, the traditional estimation approach for the MVPLN model is computationally expensive, which often limits its use in practice. In this work, a parallel sampling scheme is introduced to improve the original Markov Chain Monte Carlo (MCMC) estimation approach of the MVPLN model, which significantly reduces the model estimation time. Two MVPLN models are developed using the pedestrian-vehicle crash data collected in New York City from 2002 to 2006 and the highway-injury data from Washington State (5-year data from 1990 to 1994). The Deviance Information Criterion (DIC) is used to evaluate the model fitting. The estimation results show that the MVPLN models provide a superior fit over univariate Poisson-lognormal (PLN), univariate Poisson, and Negative Binomial models. Moreover, the correlations among the latent effects of different severity levels are found significant in both datasets, which justifies the importance of jointly modeling crash frequency and severity while accounting for correlations.
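
    The correlation mechanism of the MVPLN model, Poisson counts whose log-rates share a latent normal effect, can be imitated with a two-severity toy simulation (all parameter values are ours): the shared latent term induces both overdispersion and cross-severity correlation.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# latent effects: a shared normal term plus idiosyncratic terms per severity level
shared = rng.normal(0.0, 0.5, n)
eps1 = rng.normal(0.0, 0.3, n)
eps2 = rng.normal(0.0, 0.3, n)

# Poisson counts with lognormal rate heterogeneity (Poisson-lognormal mixing)
y1 = rng.poisson(np.exp(1.0 + shared + eps1))
y2 = rng.poisson(np.exp(0.5 + shared + eps2))

corr12 = np.corrcoef(y1, y2)[0, 1]   # positive, driven by the shared latent effect
```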

  19. An efficient parallel sampling technique for Multivariate Poisson-Lognormal model: Analysis with two crash count datasets

    DOE PAGES

    Zhan, Xianyuan; Aziz, H. M. Abdul; Ukkusuri, Satish V.

    2015-11-19

    Our study investigates the Multivariate Poisson-Lognormal (MVPLN) model that jointly models crash frequency and severity, accounting for correlations. Ordinary univariate count models analyze crashes of different severity levels separately, ignoring the correlations among severity levels. The MVPLN model can incorporate a general correlation structure and accounts for the overdispersion in the data, which leads to a superior fit. However, the traditional estimation approach for the MVPLN model is computationally expensive, which often limits its use in practice. In this work, a parallel sampling scheme is introduced to improve the original Markov Chain Monte Carlo (MCMC) estimation approach of the MVPLN model, which significantly reduces the model estimation time. Two MVPLN models are developed using the pedestrian-vehicle crash data collected in New York City from 2002 to 2006 and the highway-injury data from Washington State (5-year data from 1990 to 1994). The Deviance Information Criterion (DIC) is used to evaluate the model fitting. The estimation results show that the MVPLN models provide a superior fit over univariate Poisson-lognormal (PLN), univariate Poisson, and Negative Binomial models. Moreover, the correlations among the latent effects of different severity levels are found significant in both datasets, which justifies the importance of jointly modeling crash frequency and severity while accounting for correlations.

  20. Relative risk estimation of Chikungunya disease in Malaysia: An analysis based on Poisson-gamma model

    NASA Astrophysics Data System (ADS)

    Samat, N. A.; Ma'arof, S. H. Mohd Imam

    2015-05-01

    Disease mapping is a method to display the geographical distribution of disease occurrence, which generally involves the use and interpretation of a map to show the incidence of certain diseases. Relative risk (RR) estimation is one of the most important issues in disease mapping. This paper begins by providing a brief overview of Chikungunya disease. This is followed by a review of the classical model used in disease mapping, based on the standardized morbidity ratio (SMR), which we then apply to our Chikungunya data. We then fit an extension of the classical model, which we refer to as a Poisson-Gamma model, in which prior distributions for the relative risks are assumed known. Both results are displayed and compared using maps, and the latter reveals a smoother map with fewer extreme values of estimated relative risk. Extensions of this paper will consider other methods that are relevant to overcoming the drawbacks of the existing methods, in order to inform and direct government strategy for monitoring and controlling Chikungunya disease.
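
    The Poisson-Gamma model has a closed-form posterior that explains the smoother map: with O_i ~ Poisson(E_i * RR_i) and RR_i ~ Gamma(a, b), the posterior is Gamma(a + O_i, b + E_i), so the posterior-mean relative risk shrinks the raw SMR = O_i / E_i toward the prior mean a/b. A sketch with hypothetical district counts (all numbers are ours):

```python
import numpy as np

# observed and expected case counts for a few hypothetical districts
O = np.array([0, 3, 15, 60])
E = np.array([2.0, 2.5, 10.0, 55.0])
smr = O / E                      # classical estimate; unstable when E is small

# Poisson-Gamma model: O_i ~ Poisson(E_i * RR_i), RR_i ~ Gamma(a, b)
a, b = 2.0, 2.0                  # hypothetical prior with mean relative risk a/b = 1
post_mean = (a + O) / (b + E)    # posterior mean relative risk (conjugate update)
```

    Districts with little expected count (small E_i) are pulled strongly toward the prior mean, while data-rich districts barely move, which is exactly the smoothing seen in the maps.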

  1. Self-consistent Modeling of the logN-logS in the Poisson Limit

    NASA Astrophysics Data System (ADS)

    Sourlas, E.; Kashyap, V.; Zezas, A.; van Dyk, D.

    2004-08-01

    logN-logS curves are a fundamental tool in the study of source populations, luminosity functions, and cosmological parameters. However, their determination is hampered by statistical effects such as the Eddington bias, incompleteness due to detection efficiency, faint source flux fluctuations, etc. Here we present a new and powerful method using the full Poisson machinery that allows us to model the logN-logS distribution of X-ray sources in a self-consistent manner. Because we properly account for all the above statistical effects, our modeling is valid over the full range of the data. We use a Bayesian approach, modeling the fluxes with known functional forms such as simple or broken power-laws. The expected photon counts are conditioned on the fluxes, the background contamination, effective area, detector vignetting, and detection probability. The built-in flexibility of the algorithm also allows a simultaneous analysis of multiple datasets. We demonstrate the power of our algorithm by applying it to a set of Chandra observations. This project is part of the California-Harvard/CXC AstroStatistics Collaboration. The authors gratefully acknowledge funding for this project partially provided by NSF grant DMS-01-04129 and by NASA Contract NAS8-39073, and NASA grants NCC2-1350 and NAG5-13056.
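
    The Eddington bias corrected by this machinery is easy to reproduce: when the flux distribution falls steeply, Poisson fluctuations scatter more faint sources above a count threshold than bright sources below it. A toy simulation (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n_src = 200_000
s_min = 5.0

# power-law source population: P(flux > s) = s_min / s  (dN/dS ~ S^-2),
# sampled by inverse-CDF from a uniform in (0, 1]
flux = s_min / (1.0 - rng.random(n_src))
counts = rng.poisson(flux)               # Poisson photon-count fluctuations

thresh = 20
n_true = int((flux >= thresh).sum())     # sources truly above the threshold
n_obs = int((counts >= thresh).sum())    # sources observed above it
sel = counts >= thresh
```

    Because faint sources vastly outnumber bright ones, the net scatter across the threshold is upward: the observed counts above threshold exceed the true number, and the measured fluxes of threshold-selected sources are biased high.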

  2. Modeling both of the number of pausibacillary and multibacillary leprosy patients by using bivariate poisson regression

    NASA Astrophysics Data System (ADS)

    Winahju, W. S.; Mukarromah, A.; Putri, S.

    2015-03-01

    Leprosy is a chronic infectious disease caused by the bacterium Mycobacterium leprae. Leprosy has become an important public health issue in Indonesia because its morbidity is quite high. Based on 2014 WHO data, in 2012 Indonesia had the highest number of new leprosy patients after India and Brazil, contributing 18,994 cases (8.7% of the world total). This number places Indonesia as the ASEAN country with the highest leprosy morbidity. The province that contributes most to the number of leprosy patients in Indonesia is East Java. There are two kinds of leprosy: paucibacillary and multibacillary. The morbidity of multibacillary leprosy is higher than that of paucibacillary leprosy. This paper discusses modeling the numbers of multibacillary and paucibacillary leprosy patients as response variables. These responses are count variables, so modeling is conducted using the bivariate Poisson regression method. The observation units are in East Java, and the predictors involved are environment, demography, and poverty. The model uses data from 2012, and the results indicate that all predictors have a significant influence.
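
    A bivariate Poisson pair of the kind used here is commonly constructed by trivariate reduction: two independent Poisson components plus a shared one, giving Poisson marginals whose covariance equals the shared rate. A quick simulation check (parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000
th0, th1, th2 = 1.0, 2.0, 3.0

z0 = rng.poisson(th0, n)   # shared component: the source of the correlation
z1 = rng.poisson(th1, n)
z2 = rng.poisson(th2, n)

# bivariate Poisson pair: marginals Poisson(th1 + th0) and Poisson(th2 + th0),
# with Cov(x1, x2) = th0
x1, x2 = z1 + z0, z2 + z0
cov12 = np.cov(x1, x2)[0, 1]
```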

  3. ADAPTIVE FINITE ELEMENT MODELING TECHNIQUES FOR THE POISSON-BOLTZMANN EQUATION.

    PubMed

    Holst, Michael; McCammon, James Andrew; Yu, Zeyun; Zhou, Youngcheng; Zhu, Yunrong

    2012-01-01

    We consider the design of an effective and reliable adaptive finite element method (AFEM) for the nonlinear Poisson-Boltzmann equation (PBE). We first examine the two-term regularization technique for the continuous problem recently proposed by Chen, Holst, and Xu based on the removal of the singular electrostatic potential inside biomolecules; this technique made possible the development of the first complete solution and approximation theory for the Poisson-Boltzmann equation, the first provably convergent discretization, and also allowed for the development of a provably convergent AFEM. However, in practical implementation, this two-term regularization exhibits numerical instability. Therefore, we examine a variation of this regularization technique which can be shown to be less susceptible to such instability. We establish a priori estimates and other basic results for the continuous regularized problem, as well as for Galerkin finite element approximations. We show that the new approach produces regularized continuous and discrete problems with the same mathematical advantages of the original regularization. We then design an AFEM scheme for the new regularized problem, and show that the resulting AFEM scheme is accurate and reliable, by proving a contraction result for the error. This result, which is one of the first results of this type for nonlinear elliptic problems, is based on using continuous and discrete a priori L(∞) estimates to establish quasi-orthogonality. To provide a high-quality geometric model as input to the AFEM algorithm, we also describe a class of feature-preserving adaptive mesh generation algorithms designed specifically for constructing meshes of biomolecular structures, based on the intrinsic local structure tensor of the molecular surface. All of the algorithms described in the article are implemented in the Finite Element Toolkit (FETK), developed and maintained at UCSD. The stability advantages of the new regularization scheme

  4. ADAPTIVE FINITE ELEMENT MODELING TECHNIQUES FOR THE POISSON-BOLTZMANN EQUATION

    PubMed Central

    HOLST, MICHAEL; MCCAMMON, JAMES ANDREW; YU, ZEYUN; ZHOU, YOUNGCHENG; ZHU, YUNRONG

    2011-01-01

    We consider the design of an effective and reliable adaptive finite element method (AFEM) for the nonlinear Poisson-Boltzmann equation (PBE). We first examine the two-term regularization technique for the continuous problem recently proposed by Chen, Holst, and Xu based on the removal of the singular electrostatic potential inside biomolecules; this technique made possible the development of the first complete solution and approximation theory for the Poisson-Boltzmann equation, the first provably convergent discretization, and also allowed for the development of a provably convergent AFEM. However, in practical implementation, this two-term regularization exhibits numerical instability. Therefore, we examine a variation of this regularization technique which can be shown to be less susceptible to such instability. We establish a priori estimates and other basic results for the continuous regularized problem, as well as for Galerkin finite element approximations. We show that the new approach produces regularized continuous and discrete problems with the same mathematical advantages of the original regularization. We then design an AFEM scheme for the new regularized problem, and show that the resulting AFEM scheme is accurate and reliable, by proving a contraction result for the error. This result, which is one of the first results of this type for nonlinear elliptic problems, is based on using continuous and discrete a priori L∞ estimates to establish quasi-orthogonality. To provide a high-quality geometric model as input to the AFEM algorithm, we also describe a class of feature-preserving adaptive mesh generation algorithms designed specifically for constructing meshes of biomolecular structures, based on the intrinsic local structure tensor of the molecular surface. All of the algorithms described in the article are implemented in the Finite Element Toolkit (FETK), developed and maintained at UCSD. The stability advantages of the new regularization scheme

  5. Effect of air pollution on lung cancer: a Poisson regression model based on vital statistics.

    PubMed Central

    Tango, T

    1994-01-01

    This article describes a Poisson regression model for time trends of mortality to detect the long-term effects of common levels of air pollution on lung cancer, in which the adjustment for cigarette smoking is not always necessary. The main hypothesis to be tested in the model is that if the long-term and common-level air pollution had an effect on lung cancer, the death rate from lung cancer could be expected to increase gradually at a higher rate in the region with relatively high levels of air pollution than in the region with low levels, and that this trend would not be expected for other control diseases in which cigarette smoking is a risk factor. Using this approach, we analyzed the trend of mortality in females aged 40 to 79, from lung cancer and two control diseases, ischemic heart disease and cerebrovascular disease, based on vital statistics in 23 wards of the Tokyo metropolitan area for 1972 to 1988. Ward-specific mean levels per day of SO2 and NO2 from 1974 through 1976 estimated by Makino (1978) were used as the ward-specific exposure measure of air pollution. No data on tobacco consumption in each ward is available. Our analysis supported the existence of long-term effects of air pollution on lung cancer. PMID:7851329
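
    A Poisson regression of this kind fits E[y] = exp(X beta) by iteratively reweighted least squares (equivalently, Newton's method on the Poisson log-likelihood); a minimal self-contained sketch on synthetic trend data, not the article's vital statistics:

```python
import numpy as np

def poisson_irls(X, y, n_iter=25):
    """Fit E[y] = exp(X @ beta) by iteratively reweighted least squares,
    i.e. Newton/Fisher scoring on the Poisson log-likelihood."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        # Newton step: (X^T W X)^{-1} X^T (y - mu), with W = diag(mu)
        beta += np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (y - mu))
    return beta

rng = np.random.default_rng(6)
t = np.linspace(0.0, 1.0, 500)              # e.g. calendar time
X = np.column_stack([np.ones_like(t), t])
true_beta = np.array([1.0, 0.8])            # baseline log-rate and time trend
y = rng.poisson(np.exp(X @ true_beta))
beta_hat = poisson_irls(X, y)
```

    The fitted trend coefficient plays the role of the region-specific slope whose difference across pollution levels is the quantity of interest in the article's model.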

  6. Identifying Functional Co-activation Patterns in Neuroimaging Studies via Poisson Graphical Models

    PubMed Central

    Xue, Wenqiong; Kang, Jian; Bowman, F. DuBois; Wager, Tor D.; Guo, Jian

    2014-01-01

    Studying the interactions between different brain regions is essential to achieve a more complete understanding of brain function. In this paper, we focus on identifying functional co-activation patterns and undirected functional networks in neuroimaging studies. We build a functional brain network, using a sparse covariance matrix, with elements representing associations between region-level peak activations. We adopt a penalized likelihood approach to impose sparsity on the covariance matrix based on an extended multivariate Poisson model. We obtain penalized maximum likelihood estimates via the expectation-maximization (EM) algorithm and optimize an associated tuning parameter by maximizing the predictive log-likelihood. Permutation tests on the brain co-activation patterns provide region pair and network-level inference. Simulations suggest that the proposed approach has minimal biases and provides a coverage rate close to 95% of covariance estimations. Conducting a meta-analysis of 162 functional neuroimaging studies on emotions, our model identifies a functional network that consists of connected regions within the basal ganglia, limbic system, and other emotion-related brain regions. We characterize this network through statistical inference on region-pair connections as well as by graph measures. PMID:25147001

  7. On the Linear Stability of Crystals in the Schrödinger-Poisson Model

    NASA Astrophysics Data System (ADS)

    Komech, A.; Kopylova, E.

    2016-09-01

    We consider the Schrödinger-Poisson-Newton equations for crystals with one ion per cell. We linearize this dynamics at the periodic minimizers of energy per cell and introduce a novel class of the ion charge densities that ensures the stability of the linearized dynamics. Our main result is the energy positivity for the Bloch generators of the linearized dynamics under a Wiener-type condition on the ion charge density. We also adopt an additional `Jellium' condition which cancels the negative contribution caused by the electrostatic instability and provides the `Jellium' periodic minimizers and the optimality of the lattice: the energy per cell of the periodic minimizer attains the global minimum among all possible lattices. We show that the energy positivity can fail if the Jellium condition is violated, while the Wiener condition holds. The proof of the energy positivity relies on a novel factorization of the corresponding Hamilton functional. The Bloch generators are nonselfadjoint (and even nonsymmetric) Hamilton operators. We diagonalize these generators using our theory of spectral resolution of the Hamilton operators with positive definite energy (Komech and Kopylova in, J Stat Phys 154(1-2):503-521, 2014, J Spectral Theory 5(2):331-361, 2015). The stability of the linearized crystal dynamics is established using this spectral resolution.

  8. On the Linear Stability of Crystals in the Schrödinger-Poisson Model

    NASA Astrophysics Data System (ADS)

    Komech, A.; Kopylova, E.

    2016-10-01

    We consider the Schrödinger-Poisson-Newton equations for crystals with one ion per cell. We linearize this dynamics at the periodic minimizers of energy per cell and introduce a novel class of the ion charge densities that ensures the stability of the linearized dynamics. Our main result is the energy positivity for the Bloch generators of the linearized dynamics under a Wiener-type condition on the ion charge density. We also adopt an additional `Jellium' condition which cancels the negative contribution caused by the electrostatic instability and provides the `Jellium' periodic minimizers and the optimality of the lattice: the energy per cell of the periodic minimizer attains the global minimum among all possible lattices. We show that the energy positivity can fail if the Jellium condition is violated, while the Wiener condition holds. The proof of the energy positivity relies on a novel factorization of the corresponding Hamilton functional. The Bloch generators are nonselfadjoint (and even nonsymmetric) Hamilton operators. We diagonalize these generators using our theory of spectral resolution of the Hamilton operators with positive definite energy (Komech and Kopylova in, J Stat Phys 154(1-2):503-521, 2014, J Spectral Theory 5(2):331-361, 2015). The stability of the linearized crystal dynamics is established using this spectral resolution.

  9. Bayesian semi-parametric analysis of Poisson change-point regression models: application to policy making in Cali, Colombia

    PubMed Central

    Park, Taeyoung; Krafty, Robert T.; Sánchez, Alvaro I.

    2012-01-01

    A Poisson regression model with an offset assumes a constant baseline rate after accounting for measured covariates, which may lead to biased estimates of coefficients in an inhomogeneous Poisson process. To correctly estimate the effect of time-dependent covariates, we propose a Poisson change-point regression model with an offset that allows a time-varying baseline rate. When the nonconstant pattern of a log baseline rate is modeled with a nonparametric step function, the resulting semi-parametric model involves a model component of varying dimension and thus requires a sophisticated varying-dimensional inference to obtain correct estimates of model parameters of fixed dimension. To fit the proposed varying-dimensional model, we devise a state-of-the-art MCMC-type algorithm based on partial collapse. The proposed model and methods are used to investigate an association between daily homicide rates in Cali, Colombia, and policies that restrict the hours during which the legal sale of alcoholic beverages is permitted. While simultaneously identifying the latent changes in the baseline homicide rate which correspond to the incidence of sociopolitical events, we explore the effect of policies governing the sale of alcohol on homicide rates and seek a policy that balances the economic and cultural dependencies on alcohol sales against the health of the public. PMID:23393408
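
    The step-function baseline at the heart of this model is easy to state directly. The sketch below is only the likelihood such a model targets, not the authors' partially collapsed MCMC sampler; the single covariate and the function names are illustrative assumptions.

```python
import math

def poisson_loglik(counts, exposure, log_base_rates, changepoints, beta, x):
    """Log-likelihood of a Poisson change-point regression with an offset.

    The log baseline rate is a step function: segment j (delimited by the
    sorted time indices in `changepoints`) has log rate log_base_rates[j].
    `x` is a single time-dependent covariate with coefficient `beta`, and
    `exposure` enters as the usual offset (log exposure).
    """
    ll = 0.0
    seg = 0
    for t, (y, e, xt) in enumerate(zip(counts, exposure, x)):
        if seg < len(changepoints) and t >= changepoints[seg]:
            seg += 1
        log_mu = math.log(e) + log_base_rates[seg] + beta * xt
        ll += y * log_mu - math.exp(log_mu) - math.lgamma(y + 1)
    return ll

# Counts whose rate jumps from 2 to 8 halfway through (no covariate effect).
counts = [2] * 5 + [8] * 5
ll_step = poisson_loglik(counts, [1.0] * 10, [math.log(2), math.log(8)],
                         [5], 0.0, [0.0] * 10)
ll_flat = poisson_loglik(counts, [1.0] * 10, [math.log(5)],
                         [], 0.0, [0.0] * 10)
```

    On such data the step-function baseline scores a strictly higher likelihood than the best constant rate, which is what a varying-dimensional sampler exploits when it searches over the number and location of change points.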

  10. Poisson type models and descriptive statistics of computer network information flows

    SciTech Connect

    Downing, D.; Fedorov, V.; Dunigan, T.; Batsell, S.

    1997-08-01

    Many contemporary publications on network traffic gravitate to ideas of self-similarity and long-range dependence. The corresponding elegant and parsimonious mathematical techniques proved to be efficient for the description of a wide class of aggregated processes. Sharing the enthusiasm about the above ideas, the authors also believe that, whenever possible, any problem must be considered at the most basic level in an attempt to understand the driving forces of the processes under analysis. Consequently, the authors try to show that some behavioral patterns of descriptive statistics which are typical for long-memory processes (a particular case of long-range dependence) can also be explained in the framework of the traditional Poisson process paradigm. Applying the concepts of inhomogeneity, compoundness, and double stochasticity, they propose a simple and intuitively transparent approach to explaining the expected shape of the observed histograms of counts and the expected behavior of the sample covariance functions. Matching the images of these two descriptive statistics allows them to infer the presence of trends or double stochasticity in analyzed time series. They considered only statistics which are based on counts. A similar approach may be applied to waiting or inter-arrival time sequences and will be discussed in other publications. They hope that combining the reported results with the statistical methods based on aggregated models may lead to computationally affordable on-line techniques of compact and visualized data analysis of network flows.
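
    The report's point that compoundness and double stochasticity alone can mimic long-memory signatures can be illustrated with a toy sketch (not taken from the report): when the Poisson rate is itself random, counts become overdispersed. The gamma mixing distribution below is an arbitrary choice.

```python
import math
import random

def sample_poisson(lam, rng):
    """Knuth's multiplication method; adequate for the modest rates here."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def var_to_mean(xs):
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return v / m

rng = random.Random(7)
n = 20000

# Homogeneous Poisson counts with a fixed rate of 5.
plain = [sample_poisson(5.0, rng) for _ in range(n)]
# Doubly stochastic: the rate itself is gamma-distributed with mean 5
# (shape 2, scale 2.5), as in a Cox process observed over i.i.d. windows.
mixed = [sample_poisson(rng.gammavariate(2.0, 2.5), rng) for _ in range(n)]
```

    The plain sample has a variance-to-mean ratio near 1, while the gamma-mixed one lands near 1 + mean/shape = 3.5, exactly the kind of inflated dispersion in count histograms that can be misread as long-range dependence.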

  11. Bayesian Hierarchical Poisson Regression Models: An Application to a Driving Study with Kinematic Events

    PubMed Central

    Kim, Sungduk; Chen, Zhen; Zhang, Zhiwei; Simons-Morton, Bruce G.; Albert, Paul S.

    2013-01-01

    Although there is evidence that teenagers are at a high risk of crashes in the early months after licensure, the driving behavior of these teenagers is not well understood. The Naturalistic Teenage Driving Study (NTDS) is the first U.S. study to document continuous driving performance of newly-licensed teenagers during their first 18 months of licensure. Counts of kinematic events such as the number of rapid accelerations are available for each trip, and their incidence rates represent different aspects of driving behavior. We propose a hierarchical Poisson regression model incorporating over-dispersion, heterogeneity, and serial correlation as well as a semiparametric mean structure. Analysis of the NTDS data is carried out with a hierarchical Bayesian framework using reversible jump Markov chain Monte Carlo algorithms to accommodate the flexible mean structure. We show that driving with a passenger and night driving decrease kinematic events, while having risky friends increases these events. Further, the within-subject variation in these events is comparable to the between-subject variation. This methodology will be useful for other intensively collected longitudinal count data, where event rates are low and interest focuses on estimating the mean and variance structure of the process. This article has online supplementary materials. PMID:24076760

  12. Poisson-Fermi Modeling of the Ion Exchange Mechanism of the Sodium/Calcium Exchanger.

    PubMed

    Liu, Jinn-Liang; Hsieh, Hann-Jeng; Eisenberg, Bob

    2016-03-17

    The ion exchange mechanism of the sodium/calcium exchanger (NCX) crystallized by Liao et al. in 2012 is studied using the Poisson-Fermi theory developed by Liu and Eisenberg in 2014. A cycle of binding and unbinding is proposed to account for the Na(+)/Ca(2+) exchange function of the NCX molecule. Outputs of the theory include electric and steric fields of ions with different sizes, correlations of ions of different charges, and polarization of water, along with number densities of ions, water molecules, and interstitial voids. We calculate the electrostatic and steric potentials of the four binding sites in NCX, i.e., three Na(+) binding sites and one Ca(2+) binding site, with protein charges provided by the software PDB2PQR. The energy profiles of Na(+) and Ca(2+) ions along their respective Na(+) and Ca(2+) pathways in experimental conditions enable us to explain the fundamental mechanism of NCX that extrudes intracellular Ca(2+) across the cell membrane against its chemical gradient by using the downhill gradient of Na(+). Atomic and numerical details of the binding sites are given to illustrate the 3 Na(+):1 Ca(2+) stoichiometry of NCX. The protein NCX is a catalyst. It does not provide (free) energy for transport. All energy for transport in our model comes from the ions in surrounding baths. PMID:26906748

  13. Ionic screening of charged impurities in electrolytically gated graphene: A partially linearized Poisson-Boltzmann model.

    PubMed

    Sharma, P; Mišković, Z L

    2015-10-01

    We present a model describing the electrostatic interactions across a structure that consists of a single layer of graphene with large area, lying above an oxide substrate of finite thickness, with its surface exposed to a thick layer of liquid electrolyte containing salt ions. Our goal is to analyze the co-operative screening of the potential fluctuation in a doped graphene due to randomness in the positions of fixed charged impurities in the oxide by the charge carriers in graphene and by the mobile ions in the diffuse layer of the electrolyte. In order to account for a possibly large potential drop in the diffuse layer that may arise in an electrolytically gated graphene, we use a partially linearized Poisson-Boltzmann (PB) model of the electrolyte, in which we solve a fully nonlinear PB equation for the surface average of the potential in one dimension, whereas the lateral fluctuations of the potential in graphene are tackled by linearizing the PB equation about the average potential. In this way, we are able to describe the regime of equilibrium doping of graphene to large densities for arbitrary values of the ion concentration without restrictions to the potential drop in the electrolyte. We evaluate the electrostatic Green's function for the partially linearized PB model, which is used to express the screening contributions of the graphene layer and the nearby electrolyte by means of an effective dielectric function. We find that, while the screened potential of a single charged impurity at large in-graphene distances exhibits a strong dependence on the ion concentration in the electrolyte and on the doping density in graphene, in the case of a spatially correlated two-dimensional ensemble of impurities, this dependence is largely suppressed in the autocovariance of the fluctuating potential. PMID:26450303

  14. Ionic screening of charged impurities in electrolytically gated graphene: A partially linearized Poisson-Boltzmann model.

    PubMed

    Sharma, P; Mišković, Z L

    2015-10-01

    We present a model describing the electrostatic interactions across a structure that consists of a single layer of graphene with large area, lying above an oxide substrate of finite thickness, with its surface exposed to a thick layer of liquid electrolyte containing salt ions. Our goal is to analyze the co-operative screening of the potential fluctuation in a doped graphene due to randomness in the positions of fixed charged impurities in the oxide by the charge carriers in graphene and by the mobile ions in the diffuse layer of the electrolyte. In order to account for a possibly large potential drop in the diffuse layer that may arise in an electrolytically gated graphene, we use a partially linearized Poisson-Boltzmann (PB) model of the electrolyte, in which we solve a fully nonlinear PB equation for the surface average of the potential in one dimension, whereas the lateral fluctuations of the potential in graphene are tackled by linearizing the PB equation about the average potential. In this way, we are able to describe the regime of equilibrium doping of graphene to large densities for arbitrary values of the ion concentration without restrictions to the potential drop in the electrolyte. We evaluate the electrostatic Green's function for the partially linearized PB model, which is used to express the screening contributions of the graphene layer and the nearby electrolyte by means of an effective dielectric function. We find that, while the screened potential of a single charged impurity at large in-graphene distances exhibits a strong dependence on the ion concentration in the electrolyte and on the doping density in graphene, in the case of a spatially correlated two-dimensional ensemble of impurities, this dependence is largely suppressed in the autocovariance of the fluctuating potential.

  15. Poisson-Gaussian Noise Reduction Using the Hidden Markov Model in Contourlet Domain for Fluorescence Microscopy Images

    PubMed Central

    Yang, Sejung; Lee, Byung-Uk

    2015-01-01

    In certain image acquisition processes, such as fluorescence microscopy or astronomy, only a limited number of photons can be collected due to various physical constraints. The resulting images suffer from signal dependent noise, which can be modeled as a Poisson distribution, and a low signal-to-noise ratio. However, the majority of research on noise reduction algorithms focuses on signal independent Gaussian noise. In this paper, we model noise as a combination of Poisson and Gaussian probability distributions to construct a more accurate model and adopt the contourlet transform which provides a sparse representation of the directional components in images. We also apply hidden Markov models with a framework that neatly describes the spatial and interscale dependencies which are the properties of transformation coefficients of natural images. In this paper, an effective denoising algorithm for Poisson-Gaussian noise is proposed using the contourlet transform, hidden Markov models and noise estimation in the transform domain. We supplement the algorithm by cycle spinning and Wiener filtering for further improvements. We finally show experimental results with simulations and fluorescence microscopy images which demonstrate the improved performance of the proposed approach. PMID:26352138
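
    The Poisson-Gaussian observation model assumed above is compact enough to sketch. This toy simulation (independent of the paper's contourlet and hidden Markov machinery; the intensity and read-noise values are invented) just confirms that the noise variance is signal dependent, roughly intensity plus read-noise variance rather than a constant.

```python
import math
import random

def sample_poisson(lam, rng):
    """Knuth's multiplication method; fine for modest photon counts."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def noisy_pixel(x, sigma, rng):
    """Poisson-Gaussian observation: photon counts with mean x, plus
    signal-independent Gaussian read noise of standard deviation sigma."""
    return sample_poisson(x, rng) + rng.gauss(0.0, sigma)

rng = random.Random(3)
x, sigma, n = 20.0, 3.0, 20000
samples = [noisy_pixel(x, sigma, rng) for _ in range(n)]
m = sum(samples) / n
v = sum((s - m) ** 2 for s in samples) / (n - 1)
# The empirical variance tracks x + sigma**2 (= 29 here), not sigma**2 alone,
# which is why purely Gaussian denoisers mistreat low-photon images.
```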

  16. Poisson-Gaussian Noise Reduction Using the Hidden Markov Model in Contourlet Domain for Fluorescence Microscopy Images.

    PubMed

    Yang, Sejung; Lee, Byung-Uk

    2015-01-01

    In certain image acquisition processes, such as fluorescence microscopy or astronomy, only a limited number of photons can be collected due to various physical constraints. The resulting images suffer from signal dependent noise, which can be modeled as a Poisson distribution, and a low signal-to-noise ratio. However, the majority of research on noise reduction algorithms focuses on signal independent Gaussian noise. In this paper, we model noise as a combination of Poisson and Gaussian probability distributions to construct a more accurate model and adopt the contourlet transform which provides a sparse representation of the directional components in images. We also apply hidden Markov models with a framework that neatly describes the spatial and interscale dependencies which are the properties of transformation coefficients of natural images. In this paper, an effective denoising algorithm for Poisson-Gaussian noise is proposed using the contourlet transform, hidden Markov models and noise estimation in the transform domain. We supplement the algorithm by cycle spinning and Wiener filtering for further improvements. We finally show experimental results with simulations and fluorescence microscopy images which demonstrate the improved performance of the proposed approach. PMID:26352138

  17. A Family of Poisson Processes for Use in Stochastic Models of Precipitation

    NASA Astrophysics Data System (ADS)

    Penland, C.

    2013-12-01

    Both modified Poisson processes and compound Poisson processes can be relevant to stochastic parameterization of precipitation. This presentation compares the dynamical properties of these systems and discusses the physical situations in which each might be appropriate. If the parameters describing either class of systems originate in hydrodynamics, then proper consideration of stochastic calculus is required during numerical implementation of the parameterization. It is shown here that an improper numerical treatment can have severe implications for estimating rainfall distributions, particularly in the tails of the distributions and, thus, on the frequency of extreme events.
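
    As a minimal sketch of the compound Poisson case (the event rate and the exponential amount distribution below are illustrative assumptions, not the presenter's parameterization), daily rainfall is a Poisson number of events, each contributing an independent random amount:

```python
import math
import random

def sample_poisson(lam, rng):
    """Knuth's multiplication method; adequate for small event rates."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def daily_rain(event_rate, mean_amount, rng):
    """Compound Poisson total: a Poisson(event_rate) number of rain
    events, each with an independent exponential amount."""
    n_events = sample_poisson(event_rate, rng)
    return sum(rng.expovariate(1.0 / mean_amount) for _ in range(n_events))

rng = random.Random(11)
totals = [daily_rain(2.0, 5.0, rng) for _ in range(20000)]
mean_total = sum(totals) / len(totals)
dry_fraction = sum(1 for t in totals if t == 0.0) / len(totals)
```

    Two checks such a parameterization can be held against: the mean total is event_rate * mean_amount (10 here), and dry days occur with probability exp(-event_rate), about 0.135, giving the mixed point mass at zero typical of rainfall records.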

  18. Modeling spiking behavior of neurons with time-dependent Poisson processes.

    PubMed

    Shinomoto, S; Tsubo, Y

    2001-10-01

    Three kinds of interval statistics, as represented by the coefficient of variation, the skewness coefficient, and the correlation coefficient of consecutive intervals, are evaluated for three kinds of time-dependent Poisson processes: pulse regulated, sinusoidally regulated, and doubly stochastic. Among these three processes, the sinusoidally regulated and doubly stochastic Poisson processes, in the case when the spike rate varies slowly compared with the mean interval between spikes, are found to be consistent with the three statistical coefficients exhibited by data recorded from neurons in the prefrontal cortex of monkeys.
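
    A hedged sketch of this setting: a sinusoidally regulated Poisson process can be generated by Lewis-Shedler thinning, and when the rate varies slowly relative to the mean interval (the paper's slow-modulation regime), the coefficient of variation of the intervals rises above the homogeneous value of 1. All parameter values below are illustrative.

```python
import math
import random

def thinned_spikes(rate_fn, rate_max, t_end, rng):
    """Lewis-Shedler thinning: propose events at the constant rate
    rate_max and keep each with probability rate_fn(t) / rate_max."""
    t, spikes = 0.0, []
    while True:
        t += rng.expovariate(rate_max)
        if t > t_end:
            return spikes
        if rng.random() < rate_fn(t) / rate_max:
            spikes.append(t)

def interval_cv(spikes):
    """Coefficient of variation of inter-spike intervals."""
    gaps = [b - a for a, b in zip(spikes, spikes[1:])]
    m = sum(gaps) / len(gaps)
    v = sum((g - m) ** 2 for g in gaps) / (len(gaps) - 1)
    return math.sqrt(v) / m

rng = random.Random(5)
r0, a, period = 5.0, 0.8, 100.0  # modulation period >> mean interval
hom = thinned_spikes(lambda t: r0, r0, 2000.0, rng)
mod = thinned_spikes(lambda t: r0 * (1 + a * math.sin(2 * math.pi * t / period)),
                     r0 * (1 + a), 2000.0, rng)
```

    The homogeneous train gives CV near 1, the slowly modulated one well above it, which is the direction of the effect the three statistical coefficients in the abstract are used to detect.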

  19. Modeling spiking behavior of neurons with time-dependent Poisson processes

    NASA Astrophysics Data System (ADS)

    Shinomoto, Shigeru; Tsubo, Yasuhiro

    2001-10-01

    Three kinds of interval statistics, as represented by the coefficient of variation, the skewness coefficient, and the correlation coefficient of consecutive intervals, are evaluated for three kinds of time-dependent Poisson processes: pulse regulated, sinusoidally regulated, and doubly stochastic. Among these three processes, the sinusoidally regulated and doubly stochastic Poisson processes, in the case when the spike rate varies slowly compared with the mean interval between spikes, are found to be consistent with the three statistical coefficients exhibited by data recorded from neurons in the prefrontal cortex of monkeys.

  20. Poisson-Nernst-Planck-Fermi theory for modeling biological ion channels.

    PubMed

    Liu, Jinn-Liang; Eisenberg, Bob

    2014-12-14

    A Poisson-Nernst-Planck-Fermi (PNPF) theory is developed for studying ionic transport through biological ion channels. Our goal is to deal with the finite size of particles using a Fermi-like distribution without calculating the forces between the particles, because they are both expensive and tricky to compute. We include the steric effect of ions and water molecules with nonuniform sizes and interstitial voids, the correlation effect of crowded ions with different valences, and the screening effect of water molecules in an inhomogeneous aqueous electrolyte. Including the finite volume of water and the voids between particles is an important new part of the theory presented here. Fermi-like distributions of all particle species are derived from the volume exclusion of classical particles. Volume exclusion and the resulting saturation phenomena are especially important to describe the binding and permeation mechanisms of ions in a narrow channel pore. The Gibbs free energy of the Fermi distribution reduces to that of a Boltzmann distribution when these effects are not considered. The classical Gibbs entropy is extended to a new entropy form, called Gibbs-Fermi entropy, that describes mixing configurations of all finite size particles and voids in a thermodynamic system where microstates do not have equal probabilities. The PNPF model describes the dynamic flow of ions, water molecules, as well as voids with electric fields and protein charges. The model also provides a quantitative mean-field description of the charge/space competition mechanism of particles within the highly charged and crowded channel pore. The PNPF results are in good accord with experimental currents recorded in a 10(8)-fold range of Ca(2+) concentrations. The results illustrate the anomalous mole fraction effect, a signature of L-type calcium channels. Moreover, numerical results concerning water density, dielectric permittivity, void volume, and steric energy provide useful details to study

  1. Relative age and birthplace effect in Japanese professional sports: a quantitative evaluation using a Bayesian hierarchical Poisson model.

    PubMed

    Ishigami, Hideaki

    2016-01-01

    Relative age effect (RAE) in sports has been well documented. Recent studies investigate the effect of birthplace in addition to the RAE. The first objective of this study was to show the magnitude of the RAE in two major professional sports in Japan, baseball and soccer. Second, we examined the birthplace effect and compared its magnitude with that of the RAE. The effect sizes were estimated using a Bayesian hierarchical Poisson model with the number of players as the dependent variable. The RAEs were 9.0% and 7.7% per month for soccer and baseball, respectively. These estimates imply that children born in the first month of a school year have about three times greater chance of becoming a professional player than those born in the last month of the year. Over half of the difference in likelihoods of becoming a professional player between birthplaces was accounted for by weather conditions, with the likelihood decreasing by 1% per snow day. An effect of population size was not detected in the data. By investigating different samples, we demonstrated that using quarterly data leads to underestimation and that the age range of sampled athletes should be set carefully. PMID:25917193

  2. Relative age and birthplace effect in Japanese professional sports: a quantitative evaluation using a Bayesian hierarchical Poisson model.

    PubMed

    Ishigami, Hideaki

    2016-01-01

    Relative age effect (RAE) in sports has been well documented. Recent studies investigate the effect of birthplace in addition to the RAE. The first objective of this study was to show the magnitude of the RAE in two major professional sports in Japan, baseball and soccer. Second, we examined the birthplace effect and compared its magnitude with that of the RAE. The effect sizes were estimated using a Bayesian hierarchical Poisson model with the number of players as the dependent variable. The RAEs were 9.0% and 7.7% per month for soccer and baseball, respectively. These estimates imply that children born in the first month of a school year have about three times greater chance of becoming a professional player than those born in the last month of the year. Over half of the difference in likelihoods of becoming a professional player between birthplaces was accounted for by weather conditions, with the likelihood decreasing by 1% per snow day. An effect of population size was not detected in the data. By investigating different samples, we demonstrated that using quarterly data leads to underestimation and that the age range of sampled athletes should be set carefully.

  3. Numerical methods for a Poisson-Nernst-Planck-Fermi model of biological ion channels.

    PubMed

    Liu, Jinn-Liang; Eisenberg, Bob

    2015-07-01

    Numerical methods are proposed for an advanced Poisson-Nernst-Planck-Fermi (PNPF) model for studying ion transport through biological ion channels. PNPF contains many more correlations than most models and simulations of channels, because it includes water and calculates dielectric properties consistently as outputs. This model accounts for the steric effect of ions and water molecules with different sizes and interstitial voids, the correlation effect of crowded ions with different valences, and the screening effect of polarized water molecules in an inhomogeneous aqueous electrolyte. The steric energy is shown to be comparable to the electrical energy under physiological conditions, demonstrating the crucial role of the excluded volume of particles and the voids in the natural function of channel proteins. Water is shown to play a critical role in both correlation and steric effects in the model. We extend the classical Scharfetter-Gummel (SG) method for semiconductor devices to include the steric potential for ion channels, which is a fundamental physical property not present in semiconductors. Together with a simplified matched interface and boundary (SMIB) method for treating molecular surfaces and singular charges of channel proteins, the extended SG method is shown to exhibit important features in flow simulations such as optimal convergence, efficient nonlinear iterations, and physical conservation. The generalized SG stability condition shows why the standard discretization (without SG exponential fitting) of NP equations may fail and that divalent Ca(2+) may cause more unstable discrete Ca(2+) fluxes than those of monovalent Na(+). Two different methods, called the SMIB and multiscale methods, are proposed for two different types of channels, namely, the gramicidin A channel and an L-type calcium channel, depending on whether water is allowed to pass through the channel. Numerical methods are first validated with constructed models whose exact solutions are

  4. Numerical methods for a Poisson-Nernst-Planck-Fermi model of biological ion channels

    NASA Astrophysics Data System (ADS)

    Liu, Jinn-Liang; Eisenberg, Bob

    2015-07-01

    Numerical methods are proposed for an advanced Poisson-Nernst-Planck-Fermi (PNPF) model for studying ion transport through biological ion channels. PNPF contains many more correlations than most models and simulations of channels, because it includes water and calculates dielectric properties consistently as outputs. This model accounts for the steric effect of ions and water molecules with different sizes and interstitial voids, the correlation effect of crowded ions with different valences, and the screening effect of polarized water molecules in an inhomogeneous aqueous electrolyte. The steric energy is shown to be comparable to the electrical energy under physiological conditions, demonstrating the crucial role of the excluded volume of particles and the voids in the natural function of channel proteins. Water is shown to play a critical role in both correlation and steric effects in the model. We extend the classical Scharfetter-Gummel (SG) method for semiconductor devices to include the steric potential for ion channels, which is a fundamental physical property not present in semiconductors. Together with a simplified matched interface and boundary (SMIB) method for treating molecular surfaces and singular charges of channel proteins, the extended SG method is shown to exhibit important features in flow simulations such as optimal convergence, efficient nonlinear iterations, and physical conservation. The generalized SG stability condition shows why the standard discretization (without SG exponential fitting) of NP equations may fail and that divalent Ca2+ may cause more unstable discrete Ca2+ fluxes than those of monovalent Na+. Two different methods, called the SMIB and multiscale methods, are proposed for two different types of channels, namely, the gramicidin A channel and an L-type calcium channel, depending on whether water is allowed to pass through the channel. Numerical methods are first validated with constructed models whose exact solutions are

  5. Birth and Death Process Modeling Leads to the Poisson Distribution: A Journey Worth Taking

    ERIC Educational Resources Information Center

    Rash, Agnes M.; Winkel, Brian J.

    2009-01-01

    This paper describes details of development of the general birth and death process from which we can extract the Poisson process as a special case. This general process is appropriate for a number of courses and units in courses and can enrich the study of mathematics for students as it touches and uses a diverse set of mathematical topics, e.g.,…
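
    The destination of that journey can be checked numerically: a pure birth process with a constant rate λ (and no deaths) yields counts over a window of length T that are Poisson with mean λT, so the sample mean and variance should coincide. This simulation is a sketch of that special case, not material from the article.

```python
import random

def count_births(rate, t_end, rng):
    """Pure birth process with a constant rate: exponential waiting
    times between births, counted over the window [0, t_end]."""
    t, n = 0.0, 0
    while True:
        t += rng.expovariate(rate)
        if t > t_end:
            return n
        n += 1

rng = random.Random(1)
counts = [count_births(3.0, 4.0, rng) for _ in range(10000)]
m = sum(counts) / len(counts)
v = sum((c - m) ** 2 for c in counts) / (len(counts) - 1)
# For a Poisson(rate * t_end) = Poisson(12) count, mean and variance
# should both land near 12, and near each other.
```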

  6. Zero-modified Poisson model: Bayesian approach, influence diagnostics, and an application to a Brazilian leptospirosis notification data.

    PubMed

    Conceição, Katiane S; Andrade, Marinho G; Louzada, Francisco

    2013-09-01

    In this paper, a Bayesian method for inference is developed for the zero-modified Poisson (ZMP) regression model. This model is very flexible for analyzing count data without requiring any information about inflation or deflation of zeros in the sample. A general class of prior densities based on an information matrix is considered for the model parameters. A sensitivity study to detect influential cases that can change the results is performed based on the Kullback-Leibler divergence. Simulation studies are presented in order to illustrate the performance of the developed methodology. Two real datasets on leptospirosis notification in Bahia State (Brazil) are analyzed using the proposed methodology for the ZMP model.
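
    The zero-modified Poisson pmf underlying the model is short enough to write out. In this sketch (the parameter names are ours), a single parameter handles both zero inflation and zero deflation, which is why no prior information about the direction of the modification is needed.

```python
import math

def zmp_pmf(k, lam, pi):
    """Zero-modified Poisson pmf: pi > 0 inflates zeros, pi < 0 deflates
    them (down to -exp(-lam) / (1 - exp(-lam))), and pi = 0 recovers the
    plain Poisson distribution."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    if k == 0:
        return pi + (1 - pi) * poisson
    return (1 - pi) * poisson
```

    For any admissible pi the probabilities still sum to one, since the modification only shifts mass between zero and the positive counts.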

  7. Framework based on Markov modulated Poisson processes for modeling traffic with long-range dependence

    NASA Astrophysics Data System (ADS)

    Ferreira Salvador, Paulo J.; Valadas, Rui J. M. T.

    2001-07-01

    This paper proposes a novel fitting procedure for Markov Modulated Poisson Processes (MMPPs), consisting of the superposition of N 2-MMPPs, that is capable of capturing the long-range characteristics of the traffic. The procedure matches both the autocovariance and marginal distribution functions of the rate process. We start by matching each 2-MMPP to a different component of the autocovariance function. We then map the parameters of the model with N individual 2-MMPPs (termed superposed MMPP) to the parameters of the equivalent MMPP with 2N states that results from the superposition of the N individual 2-MMPPs (termed generic MMPP). Finally, the parameters of the generic MMPP are fitted to the marginal distribution, subject to the constraints imposed by the autocovariance matching. Specifically, the matching of the distribution will be restricted by the fact that it may not be possible to decompose a generic MMPP back into individual 2-MMPPs. Overall, our procedure is motivated by the fact that direct relationships can be established between the autocovariance and the parameters of the superposed MMPP and between the marginal distribution and the parameters of the generic MMPP. We apply the fitting procedure to traffic traces exhibiting LRD including (i) IP traffic measured at our institution and (ii) IP traffic traces available in the Internet such as the well known, publicly available, Bellcore traces. The selected traces are representative of a wide range of services/protocols used in the Internet. We assess the fitting procedure by comparing the measured and fitted traces (traces generated from the fitted models) in terms of (i) Hurst parameter; (ii) degree of approximation between the autocovariance and marginal distribution curves; (iii) range of time scales where LRD is observed using a wavelet based estimator and (iv) packet loss ratio suffered in a single buffer for different values of the buffer capacity. Results are very clear in showing that MMPPs
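
    A discrete-time sketch of the 2-MMPP building block may help (the paper works in continuous time and fits the superposition of N such processes; the rates and switching probability below are invented for illustration): a hidden two-state Markov chain modulates the Poisson rate, and slow switching between very different rates produces the bursty, overdispersed counts the fitting procedure targets.

```python
import math
import random

def sample_poisson(lam, rng):
    """Knuth's multiplication method; fine for the small rates used here."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def mmpp2_counts(rates, switch_prob, n_slots, rng):
    """Discrete-time sketch of a 2-MMPP: each time slot, the hidden
    state flips with probability switch_prob, and the slot's count is
    Poisson with the current state's rate."""
    state, counts = 0, []
    for _ in range(n_slots):
        if rng.random() < switch_prob:
            state = 1 - state
        counts.append(sample_poisson(rates[state], rng))
    return counts

rng = random.Random(9)
counts = mmpp2_counts((1.0, 10.0), 0.02, 20000, rng)
m = sum(counts) / len(counts)
v = sum((c - m) ** 2 for c in counts) / (len(counts) - 1)
# Slow switching adds between-state variance on top of the Poisson
# variance, so var/mean is well above 1 and counts cluster in bursts.
```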

  8. Poisson-Nernst-Planck-Fermi theory for modeling biological ion channels

    SciTech Connect

    Liu, Jinn-Liang; Eisenberg, Bob

    2014-12-14

    A Poisson-Nernst-Planck-Fermi (PNPF) theory is developed for studying ionic transport through biological ion channels. Our goal is to deal with the finite size of particles using a Fermi-like distribution without calculating the forces between the particles, because they are both expensive and tricky to compute. We include the steric effect of ions and water molecules with nonuniform sizes and interstitial voids, the correlation effect of crowded ions with different valences, and the screening effect of water molecules in an inhomogeneous aqueous electrolyte. Including the finite volume of water and the voids between particles is an important new part of the theory presented here. Fermi-like distributions of all particle species are derived from the volume exclusion of classical particles. Volume exclusion and the resulting saturation phenomena are especially important to describe the binding and permeation mechanisms of ions in a narrow channel pore. The Gibbs free energy of the Fermi distribution reduces to that of a Boltzmann distribution when these effects are not considered. The classical Gibbs entropy is extended to a new entropy form, called Gibbs-Fermi entropy, that describes mixing configurations of all finite size particles and voids in a thermodynamic system where microstates do not have equal probabilities. The PNPF model describes the dynamic flow of ions, water molecules, as well as voids with electric fields and protein charges. The model also provides a quantitative mean-field description of the charge/space competition mechanism of particles within the highly charged and crowded channel pore. The PNPF results are in good accord with experimental currents recorded in a 10(8)-fold range of Ca(2+) concentrations. The results illustrate the anomalous mole fraction effect, a signature of L-type calcium channels. Moreover, numerical results concerning water density, dielectric permittivity, void volume, and steric energy provide useful

  9. Mapping species abundance by a spatial zero-inflated Poisson model: a case study in the Wadden Sea, the Netherlands.

    PubMed

    Lyashevska, Olga; Brus, Dick J; van der Meer, Jaap

    2016-01-01

    The objective of the study was to provide a general procedure for mapping species abundance when data are zero-inflated and spatially correlated counts. The bivalve species Macoma balthica was observed on a 500×500 m grid in the Dutch part of the Wadden Sea. In total, 66% of the 3451 counts were zeros. A zero-inflated Poisson mixture model was used to relate counts to environmental covariates. Two models were considered, one with relatively fewer covariates (model "small") than the other (model "large"). The models contained two processes: a Bernoulli (species prevalence) and a Poisson (species intensity, when the Bernoulli process predicts presence). The model was used to make predictions for sites where only environmental data are available. Predicted prevalences and intensities show that the model "small" predicts lower mean prevalence and higher mean intensity than the model "large". Yet, the product of prevalence and intensity, which might be called the unconditional intensity, is very similar. Cross-validation showed that the model "small" performed slightly better, but the difference was small. The proposed methodology might be generally applicable, but is computer intensive. PMID:26843936
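
    The two-process structure described above (a Bernoulli prevalence process and a Poisson intensity process where presence is predicted) is straightforward to sketch; as in the abstract, the unconditional intensity is the product of prevalence and intensity. The parameter values below are invented for illustration, not estimates for Macoma balthica.

```python
import math
import random

def sample_poisson(lam, rng):
    """Knuth's multiplication method; fine for modest intensities."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def zip_count(prevalence, intensity, rng):
    """Zero-inflated Poisson draw: a Bernoulli presence indicator,
    then a Poisson count only where the species is present."""
    if rng.random() < prevalence:
        return sample_poisson(intensity, rng)
    return 0

rng = random.Random(13)
prevalence, intensity, n = 0.4, 6.0, 20000
counts = [zip_count(prevalence, intensity, rng) for _ in range(n)]
m = sum(counts) / n
zero_frac = counts.count(0) / n
# Unconditional mean = prevalence * intensity = 2.4; zeros come from
# absence (1 - prevalence) plus the Poisson's own zeros at sites where
# the species is present.
```

    This also shows why two fitted models can disagree on prevalence and intensity separately yet agree closely on their product, as the abstract reports for the "small" and "large" models.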

  10. A Poisson-lognormal conditional-autoregressive model for multivariate spatial analysis of pedestrian crash counts across neighborhoods.

    PubMed

    Wang, Yiyi; Kockelman, Kara M

    2013-11-01

    This work examines the relationship between 3-year pedestrian crash counts across Census tracts in Austin, Texas, and various land use, network, and demographic attributes, such as land use balance, residents' access to commercial land uses, sidewalk density, lane-mile densities (by roadway class), and population and employment densities (by type). The model specification allows for region-specific heterogeneity, correlation across response types, and spatial autocorrelation via a Poisson-based multivariate conditional auto-regressive (CAR) framework and is estimated using Bayesian Markov chain Monte Carlo methods. Least-squares regression estimates of walk-miles traveled per zone serve as the exposure measure. Here, the Poisson-lognormal multivariate CAR model outperforms an aspatial Poisson-lognormal multivariate model and a spatial model (without cross-severity correlation), both in terms of fit and inference. Positive spatial autocorrelation emerges across neighborhoods, as expected (due to latent heterogeneity or missing variables that trend in space, resulting in spatial clustering of crash counts). In comparison, the positive aspatial, bivariate cross correlation of severe (fatal or incapacitating) and non-severe crash rates reflects latent covariates that have impacts across severity levels but are more local in nature (such as lighting conditions and local sight obstructions), along with spatially lagged cross correlation. Results also suggest greater mixing of residences and commercial land uses is associated with higher pedestrian crash risk across different severity levels, ceteris paribus, presumably since such access produces more potential conflicts between pedestrian and vehicle movements. Interestingly, network densities show variable effects, and sidewalk provision is associated with lower severe-crash rates. PMID:24036167
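    As a toy illustration of the Poisson-lognormal CAR idea (not the paper's Austin model or its Bayesian MCMC estimation), one can simulate spatially autocorrelated counts by placing a proper CAR prior on log-rates over a simple line graph of tracts. All parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho, tau = 20, 0.9, 4.0          # tracts, spatial dependence, precision

# Adjacency of a line graph of census tracts (illustrative geometry).
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
D = np.diag(W.sum(axis=1))

# Proper CAR prior: eta ~ N(0, [tau * (D - rho * W)]^{-1}), PD for |rho| < 1.
Q = tau * (D - rho * W)
eta = rng.multivariate_normal(np.zeros(n), np.linalg.inv(Q))

# Poisson-lognormal counts with a common intercept (exposure term omitted).
y = rng.poisson(np.exp(1.0 + eta))
print(y)   # spatially clustered, overdispersed integer counts
```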

  11. Integrated analysis of transcriptomic and proteomic data of Desulfovibrio vulgaris: Zero-Inflated Poisson regression models to predict abundance of undetected proteins

    SciTech Connect

    Nie, Lei; Wu, Gang; Brockman, Fred J.; Zhang, Weiwen

    2006-05-04

    Advances in DNA microarray and proteomics technologies have enabled high-throughput measurement of mRNA expression and protein abundance. Parallel profiling of mRNA and protein on a global scale and integrative analysis of these two data types could provide additional insight into the metabolic mechanisms underlying complex biological systems. However, because protein abundance and mRNA expression are affected by many cellular and physical processes, there have been conflicting results on the correlation of these two measurements. In addition, as current proteomic methods can detect only a small fraction of proteins present in cells, no correlation study of these two data types has been done thus far at the whole-genome level. In this study, we describe a novel data-driven statistical model to integrate whole-genome microarray and proteomic data collected from Desulfovibrio vulgaris grown under three different conditions. Based on the Poisson distribution pattern of the proteomic data and the fact that a large number of proteins were undetected (excess zeros), zero-inflated Poisson models were used to define the correlation pattern of mRNA and protein abundance. The models assume that there is a probability mass at zero representing some of the undetected proteins because of technical limitations. The models thus use abundance measurements of experimentally detected transcripts and proteins as input to generate predictions of protein abundances as output for all genes in the genome. We demonstrated the statistical models by comparatively analyzing D. vulgaris grown on lactate-based versus formate-based media. Increased expression of the Ech hydrogenase and the alcohol dehydrogenase (Adh)-periplasmic Fe-only hydrogenase (Hyd) pathway for ATP synthesis was predicted for D. vulgaris grown on formate.

  12. A New Poisson-Nernst-Planck Model with Ion-Water Interactions for Charge Transport in Ion Channels.

    PubMed

    Chen, Duan

    2016-08-01

    In this work, we propose a new Poisson-Nernst-Planck (PNP) model with ion-water interactions for biological charge transport in ion channels. Due to the narrow geometries of these membrane proteins, ion-water interaction is critical both for the dielectric properties of water molecules in the channel pore and for the transport dynamics of mobile ions. We model the ion-water interaction energy based on realistic experimental observations in an efficient mean-field approach. Variation of a total energy functional of the biological system yields a new PNP-type continuum model. Numerical simulations show that the proposed model with ion-water interaction energy has new features that quantitatively describe the dielectric properties of water molecules in narrow pores and make it possible to model the selectivity of some ion channels.

  14. A general Poisson-Boltzmann model with position-dependent dielectric permittivity for electric double layer analysis.

    PubMed

    Le, Guigao; Zhang, Junfeng

    2011-05-01

    In this paper, we propose a general Poisson-Boltzmann model for electric double layer (EDL) analysis that takes the position dependence of the dielectric permittivity into account. This model provides physically reasonable property profiles in the EDL region, and it is then utilized to investigate the depletion layer effect on EDL structure and interaction near hydrophobic surfaces. Our results show that both the electric potential and the interaction pressure between surfaces decrease due to the lower permittivity in the depletion layer. The reduction becomes more pronounced as the magnitude and range of the permittivity variation increase. This trend is in general agreement with that observed from the previous stepwise model; however, that model overestimates the influence of the permittivity variation. For a thin depletion layer and a relatively thick EDL, our calculation indicates that the permittivity variation effect on the EDL can usually be neglected. Furthermore, our model can be readily extended to study the permittivity variation in the EDL due to ion accumulation and hydration in the EDL region.
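    A minimal numerical sketch of the position-dependent-permittivity idea, using the linearized (Debye-Hückel) equation rather than the paper's full nonlinear model: a depletion layer of reduced permittivity near the wall, solved by finite differences. The geometry, nondimensional units, and parameter values are illustrative assumptions.

```python
import numpy as np

def solve_linear_pb(n=400, L=10.0, psi0=1.0, kappa=1.0,
                    eps_bulk=1.0, eps_dep=0.5, dep_width=1.0):
    """Solve d/dx( eps(x) * dpsi/dx ) = eps_bulk * kappa**2 * psi on (0, L)
    with psi(0) = psi0 and psi(L) = 0 (nondimensional Debye-Hueckel form).
    eps(x) is reduced inside a depletion layer of width dep_width at the wall."""
    h = L / (n + 1)
    faces = np.linspace(h / 2, L - h / 2, n + 1)   # midpoints between nodes
    eps = np.where(faces < dep_width, eps_dep, eps_bulk)

    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        A[i, i] = -(eps[i] + eps[i + 1]) / h**2 - eps_bulk * kappa**2
        if i > 0:
            A[i, i - 1] = eps[i] / h**2
        if i < n - 1:
            A[i, i + 1] = eps[i + 1] / h**2
    b[0] = -eps[0] / h**2 * psi0                   # left Dirichlet value
    x = np.linspace(h, L - h, n)
    return x, np.linalg.solve(A, b)

x, psi = solve_linear_pb()
print(psi[0], psi[-1])   # near the wall value psi0; near zero in the bulk
```

    Lowering eps_dep makes the potential fall off faster inside the depletion layer, the qualitative effect discussed in the abstract.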

  15. Paretian Poisson Processes

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo; Klafter, Joseph

    2008-05-01

    Many random populations can be modeled as a countable set of points scattered randomly on the positive half-line. The points may represent magnitudes of earthquakes and tornados, masses of stars, market values of public companies, etc. In this article we explore a specific class of such random populations, which we coin 'Paretian Poisson processes'. This class is elemental in statistical physics—connecting together, in a deep and fundamental way, diverse issues including: the Poisson distribution of the Law of Small Numbers; Paretian tail statistics; the Fréchet distribution of Extreme Value Theory; the one-sided Lévy distribution of the Central Limit Theorem; scale-invariance, renormalization and fractality; and resilience to random perturbations.

  16. An asymptotic preserving scheme for the two-fluid Euler-Poisson model in the quasineutral limit

    SciTech Connect

    Crispel, Pierre (crispel@mip.ups-tlse.fr); Degond, Pierre (degond@mip.ups-tlse.fr); Vignal, Marie-Helene (mhvignal@mip.ups-tlse.fr)

    2007-04-10

    This paper deals with the modeling of a plasma in the quasineutral limit using the two-fluid Euler-Poisson system. In this limit, explicit numerical schemes suffer from severe numerical constraints related to the small Debye length and large plasma frequency. Here, we propose an implicit scheme which reduces to a scheme for the quasineutral Euler model in the quasineutral limit. Such a property is referred to as 'asymptotic preservation'. One of the distinctive features of this scheme is that its numerical cost is comparable to that of an explicit scheme: the Poisson equation is simply replaced by a different (but formally equivalent) elliptic problem. We present numerical simulations for two different one-dimensional test-cases. They confirm the expected stability of the scheme in the quasineutral limit. They also show that this scheme has some accuracy problems in reproducing the correct electron velocity in the limit of small electron-to-ion mass ratio; however, this problem is already present in the results of the classical algorithm. Numerical simulations are also performed for a two-dimensional problem of a plasma expansion in vacuum between two electrodes.

  17. Derivation of Poisson and Nernst-Planck equations in a bath and channel from a molecular model.

    PubMed

    Schuss, Z; Nadler, B; Eisenberg, R S

    2001-09-01

    Permeation of ions from one electrolytic solution to another, through a protein channel, is a biological process of considerable importance. Permeation occurs on a time scale of micro- to milliseconds, far longer than the femtosecond time scales of atomic motion. Direct simulations of atomic dynamics are not yet possible for such long-time scales; thus, averaging is unavoidable. The question is what and how to average. In this paper, we average a Langevin model of ionic motion in a bulk solution and protein channel. The main result is a coupled system of averaged Poisson and Nernst-Planck equations (CPNP) involving conditional and unconditional charge densities and conditional potentials. The resulting NP equations contain the averaged force on a single ion, which is the sum of two components. The first component is the gradient of a conditional electric potential that is the solution of Poisson's equation with conditional and permanent charge densities and boundary conditions of the applied voltage. The second component is the self-induced force on an ion due to surface charges induced only by that ion at dielectric interfaces. The ion induces surface polarization charge that exerts a significant force on the ion itself, not present in earlier PNP equations. The proposed CPNP system is not complete, however, because the electric potential satisfies Poisson's equation with conditional charge densities, conditioned on the location of an ion, while the NP equations contain unconditional densities. The conditional densities are closely related to the well-studied pair-correlation functions of equilibrium statistical mechanics. We examine a specific closure relation, which on the one hand replaces the conditional charge densities by the unconditional ones in the Poisson equation, and on the other hand replaces the self-induced force in the NP equation by an effective self-induced force. 
This effective self-induced force is nearly zero in the baths but is approximately

  18. Determination of Diffusion Coefficients in Cement-Based Materials: An Inverse Problem for the Nernst-Planck and Poisson Models

    NASA Astrophysics Data System (ADS)

    Szyszkiewicz-Warzecha, Krzysztof; Jasielec, Jerzy J.; Fausek, Janusz; Filipek, Robert

    2016-08-01

    Transport properties of ions have a significant impact on the possibility of rebar corrosion; thus, knowledge of the diffusion coefficient is important for reinforced concrete durability. Numerous tests for the determination of diffusion coefficients have been proposed, but analysis of some of these tests shows that they are too simplistic or even not valid. Hence, more rigorous models to calculate the coefficients should be employed. Here we propose the Nernst-Planck and Poisson equations, which take into account the concentration and electric potential fields. Based on this model, a special inverse method is presented for the determination of the chloride diffusion coefficient. It requires the measurement of concentration profiles or of the flux on the boundary, and the solution of the NPP model to define the goal function. Finding the global minimum is equivalent to the determination of the diffusion coefficients. Typical examples of the application of the presented method are given.
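    The inverse logic can be shown in miniature with pure Fickian diffusion instead of the full Nernst-Planck-Poisson system: synthesize a chloride profile from a known coefficient, define a sum-of-squared-errors goal function, and recover the coefficient at its minimum (a coarse grid search stands in for the global optimizer). All numbers are illustrative.

```python
import math

def profile(x, t, D, c0=1.0):
    """Semi-infinite diffusion solution c(x,t) = c0 * erfc(x / (2*sqrt(D*t)))."""
    return c0 * math.erfc(x / (2.0 * math.sqrt(D * t)))

# "Measured" concentrations synthesized from a known true coefficient.
D_true, t = 5e-12, 3.15e7                       # m^2/s, ~1 year in seconds
xs = [i * 1e-3 for i in range(1, 31)]           # depths up to 30 mm
measured = [profile(x, t, D_true) for x in xs]

def sse(D):
    """Goal function: sum of squared errors between model and measurement."""
    return sum((profile(x, t, D) - m) ** 2 for x, m in zip(xs, measured))

# Coarse grid search for the minimizer of the goal function.
candidates = [d * 1e-13 for d in range(10, 200)]
D_fit = min(candidates, key=sse)
print(D_fit)   # close to D_true
```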

  19. Climatology of Station Storm Rainfall in the Continental United States: Parameters of the Bartlett-Lewis and Poisson Rectangular Pulses Models

    NASA Technical Reports Server (NTRS)

    Hawk, Kelly Lynn; Eagleson, Peter S.

    1992-01-01

    The parameters of two stochastic models of point rainfall, the Bartlett-Lewis model and the Poisson rectangular pulses model, are estimated for each month of the year from the historical records of hourly precipitation at more than seventy first-order stations in the continental United States. The parameters are presented both in tabular form and as isopleths on maps. The Poisson rectangular pulses parameters are useful in implementing models of the land surface water balance. The Bartlett-Lewis parameters are useful in disaggregating precipitation to a time period shorter than that of existing observations. Information is also included on a floppy disk.
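    A sketch of how the Poisson rectangular pulses model generates a synthetic hourly record: storm origins arrive as a Poisson process, and each storm contributes one rectangular pulse with exponentially distributed duration and intensity. The parameter values are invented for illustration, not the report's station estimates.

```python
import random

random.seed(42)

def simulate_prp(T=720.0, lam=0.02, mean_dur=6.0, mean_int=2.0):
    """Poisson rectangular pulses: storm origins arrive as a Poisson process
    (rate lam per hour); each storm is one rectangular pulse with exponential
    duration (hours) and exponential intensity (mm/h). Returns hourly depths."""
    hourly = [0.0] * int(T)
    t = random.expovariate(lam)              # first storm origin
    while t < T:
        dur = random.expovariate(1.0 / mean_dur)
        inten = random.expovariate(1.0 / mean_int)
        for h in range(int(t), min(int(t + dur) + 1, int(T))):
            # overlap of the pulse [t, t+dur] with the hour [h, h+1]
            overlap = min(t + dur, h + 1) - max(t, h)
            if overlap > 0:
                hourly[h] += inten * overlap
        t += random.expovariate(lam)         # next storm origin
    return hourly

depths = simulate_prp()
print(sum(depths))   # total depth over the simulated month (mm)
```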

  20. Spatio-energetic cross-talks in photon counting detectors: detector model and correlated Poisson data generator

    NASA Astrophysics Data System (ADS)

    Taguchi, Katsuyuki; Polster, Christoph; Lee, Okkyun; Kappler, Steffen

    2016-03-01

    An x-ray photon interacts with photon counting detectors (PCDs) and generates an electron charge cloud or multiple clouds. The clouds (thus, the photon energy) may be split between two adjacent PCD pixels when the interaction occurs near pixel boundaries, producing a count at both of the two pixels. This is called double-counting with charge sharing. The output of an individual PCD pixel is a Poisson-distributed integer count; however, the outputs of adjacent pixels are correlated due to double-counting. The major problems are the lack of a detector noise model for the spatio-energetic crosstalk and the lack of an efficient simulation tool. Monte Carlo simulation can accurately simulate these phenomena and produce noisy data; however, it is not computationally efficient. In this study, we developed a new detector model and implemented it in an efficient software simulator which uses a Poisson random number generator to produce correlated noisy integer counts. The detector model takes the following effects into account: (1) detection efficiency and incomplete charge collection; (2) photoelectric effect with total absorption; (3) photoelectric effect with fluorescence x-ray emission and re-absorption; (4) photoelectric effect with fluorescence x-ray emission which leaves the PCD completely; and (5) electric noise. The model produced a total detector spectrum similar to previous MC simulation data. The model can be used to predict spectra and correlations under various settings. The simulated noisy data demonstrated the expected performance: (a) data were integers; (b) the mean and covariance matrix were close to the target values; (c) noisy data generation was very efficient
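    The double-counting mechanism can be mimicked with a toy correlated-Poisson generator (a drastic simplification of the detector model described above): each true photon registers in its own pixel and, with some probability, also in an adjacent pixel, so the recorded counts stay integer but become positively correlated. Rates and the sharing probability are hypothetical.

```python
import math
import random

random.seed(1)

def poisson(lam):
    """Knuth's method for a Poisson random integer (adequate for small lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def detector_counts(n_pix=8, lam=20.0, p_share=0.15):
    """Recorded counts: each true photon registers in its own pixel and,
    with probability p_share, also in a random adjacent pixel (charge
    sharing / double-counting)."""
    recorded = [0] * n_pix
    for i in range(n_pix):
        for _ in range(poisson(lam)):
            recorded[i] += 1
            if random.random() < p_share:
                j = i + random.choice([-1, 1])
                if 0 <= j < n_pix:
                    recorded[j] += 1
    return recorded

print(detector_counts())
```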

  1. Predictors of the number of under-five malnourished children in Bangladesh: application of the generalized poisson regression model

    PubMed Central

    2013-01-01

    Background Malnutrition is one of the principal causes of child mortality in developing countries including Bangladesh. According to our knowledge, most of the available studies that addressed the issue of malnutrition among under-five children considered categorical (dichotomous/polychotomous) outcome variables and applied logistic regression (binary/multinomial) to find their predictors. In this study the malnutrition variable (i.e. outcome) is defined as the number of under-five malnourished children in a family, which is a non-negative count variable. The purposes of the study are (i) to demonstrate the applicability of the generalized Poisson regression (GPR) model as an alternative to other statistical methods and (ii) to find some predictors of this outcome variable. Methods The data is extracted from the Bangladesh Demographic and Health Survey (BDHS) 2007. Briefly, this survey employs a nationally representative sample which is based on a two-stage stratified sample of households. A total of 4,460 under-five children are analysed using various statistical techniques, namely the Chi-square test and the GPR model. Results The GPR model (as compared to the standard Poisson regression and negative binomial regression) is found to be justified for the above-mentioned outcome variable because of its under-dispersion (variance < mean) property. Our study also identifies several significant predictors of the outcome variable, namely mother’s education, father’s education, wealth index, sanitation status, source of drinking water, and total number of children ever born to a woman. Conclusions Consistency of our findings with many other studies suggests that the GPR model is an ideal alternative to other statistical models for analysing the number of under-five malnourished children in a family. Strategies based on significant predictors may improve the nutritional status of children in Bangladesh. PMID:23297699
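    The under-dispersion property (variance < mean) that justifies the GPR model can be checked numerically from Consul's generalized Poisson pmf; for a negative dispersion parameter the support is truncated, so the moments below are close approximations. Parameter values are illustrative, not fitted to the BDHS data.

```python
import math

def gp_pmf(y, theta, delta):
    """Consul's generalized Poisson pmf P(Y=y); for delta < 0 it is only
    defined while theta + delta*y > 0 (truncated support, an approximation)."""
    t = theta + delta * y
    if t <= 0:
        return 0.0
    return theta * t ** (y - 1) * math.exp(-t) / math.factorial(y)

theta, delta = 2.0, -0.2             # delta < 0  =>  under-dispersion
ys = range(0, 10)                    # support truncated where theta + delta*y > 0
probs = [gp_pmf(y, theta, delta) for y in ys]
Z = sum(probs)                       # renormalize the truncated pmf
mean = sum(y * p for y, p in zip(ys, probs)) / Z
var = sum(y * y * p for y, p in zip(ys, probs)) / Z - mean ** 2
print(mean, var)                     # variance < mean (under-dispersion)
```

    The theoretical moments are mean = θ/(1−δ) and variance = θ/(1−δ)³, so any δ < 0 forces variance below the mean.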

  2. A new set of atomic radii for accurate estimation of solvation free energy by Poisson-Boltzmann solvent model.

    PubMed

    Yamagishi, Junya; Okimoto, Noriaki; Morimoto, Gentaro; Taiji, Makoto

    2014-11-01

    The Poisson-Boltzmann implicit solvent (PB) is widely used to estimate the solvation free energies of biomolecules in molecular simulations. An optimized set of atomic radii (PB radii) is an important parameter for PB calculations, which determines the distribution of dielectric constants around the solute. We here present new PB radii for the AMBER protein force field to accurately reproduce the solvation free energies obtained from explicit solvent simulations. The presented PB radii were optimized using results from explicit solvent simulations of large systems. In addition, we discriminated the PB radii for N- and C-terminal residues from those for nonterminal residues. Our PB radii showed high accuracy in the estimation of solvation free energies at the level of the molecular fragment. The obtained PB radii are effective for the detailed analysis of the solvation effects of biomolecules.

  3. Where the linearized Poisson-Boltzmann cell model fails: Spurious phase separation in charged colloidal suspensions

    NASA Astrophysics Data System (ADS)

    Tamashiro, M. N.; Schiessel, H.

    2003-07-01

    The Poisson-Boltzmann (PB) spherical Wigner-Seitz cell model—introduced to theoretically describe suspensions of spherical charged colloidal particles—is investigated at the nonlinear and linearized levels. The linearization of the mean-field PB functional yields linearized Debye-Hückel-type equations agreeing asymptotically with the nonlinear PB results in the weak-coupling (high-temperature) limit. Both the canonical (fixed number of microions) as well as the semigrand-canonical (in contact with an infinite salt reservoir) cases are considered and discussed in a unified linearized framework. In disagreement with the exact nonlinear PB solution inside a Wigner-Seitz cell, the linearized theory predicts the occurrence of a thermodynamical instability with an associated phase separation of the homogeneous suspension into dilute (gas) and dense (liquid) phases, being thus a spurious result of the linearization. We show that these artifacts, although thermodynamically consistent with quadratic expansions of the nonlinear functional and osmotic pressure, may be traced back to the nonfulfillment of the underlying assumptions of the linearization. This raises questions about the reliability of the prediction of gas/liquid-like phase separation in deionized aqueous suspensions of charged colloids mediated by monovalent counterions obtained by linearized theories.

  4. Poisson structures for the Aristotelian model of three-body motion

    NASA Astrophysics Data System (ADS)

    Abadoğlu, E.; Gümral, H.

    2011-08-01

    We present explicit Poisson structures of a dynamical system with three degrees of freedom introduced and studied by Calogero et al (2005 J. Phys. A: Math. Gen. 38 8873-96). We first show the construction of a formal Hamiltonian structure for a time-dependent Hamiltonian function. We then cast the dynamical equations into the form of a gradient flow by means of a potential function. By reducing the number of equations, we obtain the second time-independent Hamiltonian function which includes all parameters of the system. This extends the result of Calogero et al (2009 J. Phys. A: Math. Theor. 42 015205) for semi-symmetrical motion. We present bi-Hamiltonian structures for two special cases of the cited references. It turns out that the case of three bodies two of which are not interacting with each other but are coupled through the interaction of a third one requires a separate treatment. We conclude with a discussion on the generic form of the second time-independent Hamiltonian function.

  5. Random transitions described by the stochastic Smoluchowski-Poisson system and by the stochastic Keller-Segel model.

    PubMed

    Chavanis, P H; Delfini, L

    2014-03-01

    We study random transitions between two metastable states that appear below a critical temperature in a one-dimensional self-gravitating Brownian gas with a modified Poisson equation experiencing a second order phase transition from a homogeneous phase to an inhomogeneous phase [P. H. Chavanis and L. Delfini, Phys. Rev. E 81, 051103 (2010)]. We numerically solve the N-body Langevin equations and the stochastic Smoluchowski-Poisson system, which takes fluctuations (finite N effects) into account. The system switches back and forth between the two metastable states (bistability) and the particles accumulate successively at the center or at the boundary of the domain. We explicitly show that these random transitions exhibit the phenomenology of the ordinary Kramers problem for a Brownian particle in a double-well potential. The distribution of the residence time is Poissonian and the average lifetime of a metastable state is given by the Arrhenius law; i.e., it is proportional to the exponential of the barrier of free energy ΔF divided by the energy of thermal excitation kBT. Since the free energy is proportional to the number of particles N for a system with long-range interactions, the lifetime of metastable states scales as eN and is considerable for N≫1. As a result, in many applications, metastable states of systems with long-range interactions can be considered as stable states. However, for moderate values of N, or close to a critical point, the lifetime of the metastable states is reduced since the barrier of free energy decreases. In that case, the fluctuations become important and the mean field approximation is no more valid. This is the situation considered in this paper. By an appropriate change of notations, our results also apply to bacterial populations experiencing chemotaxis in biology. Their dynamics can be described by a stochastic Keller-Segel model that takes fluctuations into account and goes beyond the usual mean field approximation.
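    The e^N scaling of metastable lifetimes follows directly from the Arrhenius law once the free-energy barrier is taken proportional to N; a trivial numerical sketch with arbitrary illustrative constants:

```python
import math

def lifetime(N, tau0=1.0, df_per_particle=0.1):
    """Arrhenius law tau = tau0 * exp(dF / kT), with dF proportional to N
    as in long-range interacting systems (kT absorbed; constants illustrative)."""
    return tau0 * math.exp(df_per_particle * N)

for N in (10, 100, 1000):
    print(N, lifetime(N))   # lifetime grows exponentially with N
```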

  7. Similarities and differences among the models proposed for real electrodes in the Poisson-Nernst-Planck theory.

    PubMed

    Barbero, G; Scalerandi, M

    2012-02-28

    The ionic distribution induced by an external field is investigated by means of the Poisson-Nernst-Planck model, taking into account the non-blocking properties of the limiting electrodes. Three types of models proposed for the description of real electrodes are considered. The first two assume an ionic current on the electrodes proportional to the variation of the bulk density of ions and to the surface electric field, respectively. The third model assumes that the sample is limited by perfectly blocking electrodes with a true resistance in parallel to the cell. Here we show that the first two models are equivalent, in the sense that it is possible to find a phenomenological parameter by means of which the two models make the same predictions for the spectra of the real and imaginary parts of the impedance of the cell. By contrast, the third model is equivalent to the others only if the conduction current across the electrodes is small with respect to the displacement current.

  8. Modelling carcinogenesis after radiotherapy using Poisson statistics: implications for IMRT, protons and ions.

    PubMed

    Jones, Bleddyn

    2009-06-01

    Current technical radiotherapy advances aim to (a) better conform the dose contours to cancers and (b) reduce the integral dose exposure and thereby minimise unnecessary dose exposure to normal tissues unaffected by the cancer. Various types of conformal and intensity modulated radiotherapy (IMRT) using x-rays can achieve (a), while charged particle therapy (CPT), using proton and ion beams, can achieve both (a) and (b), but at greater financial cost. Not only is the long term risk of radiation related normal tissue complications important, but so is the risk of carcinogenesis. Physical dose distribution plans can be generated to show the differences between the above techniques. IMRT is associated with a dose bath of low to medium dose due to fluence transfer: dose is effectively transferred from designated organs at risk to other areas; thus dose and risk are transferred. Many clinicians are concerned that there may be additional carcinogenesis many years after IMRT. CPT reduces the total energy deposition in the body and offers many potential advantages in terms of the prospects for better quality of life along with cancer cure. With C ions there is a tail of dose beyond the Bragg peaks, due to nuclear fragmentation; this is not found with protons. CPT generally involves higher linear energy transfer (which varies with particle and energy), which carries a higher relative risk of malignancy induction but also of cell death, quantified by the relative biological effect concept, so at higher dose levels the frank development of malignancy should be reduced. Standard linear radioprotection models have been used to show a reduction in carcinogenesis risk of between two- and 15-fold depending on the CPT location. But the standard risk models make no allowance for fractionation and some have a dose limit at 4 Gy. 
Alternatively, tentative application of the linear quadratic model and Poissonian statistics to chromosome breakage and cell kill simultaneously allows estimation of
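    The linear-quadratic-plus-Poisson idea can be sketched as follows: lethal lesions per cell are Poisson with mean αD + βD², so the surviving fraction is the Poisson zero-class probability. The α and β values below are generic illustrative figures, not the paper's estimates.

```python
import math

def surviving_fraction(D, alpha=0.3, beta=0.03):
    """LQ + Poisson: lethal lesions ~ Poisson(alpha*D + beta*D**2); a cell
    survives iff it has zero lesions, so S = exp(-(alpha*D + beta*D**2)).
    alpha, beta are generic x-ray-like figures chosen for illustration."""
    m = alpha * D + beta * D * D
    return math.exp(-m)

# Surviving fraction falls steeply with dose, which is why high-dose regions
# contribute little to carcinogenesis (transformed cells must also survive).
for D in (1.0, 2.0, 4.0, 8.0):
    print(D, surviving_fraction(D))
```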

  9. Electrostatics of ligand binding: parameterization of the generalized Born model and comparison with the Poisson-Boltzmann approach

    PubMed Central

    Liu, Hao-Yang; Zou, Xiaoqin

    2008-01-01

    An accurate and fast evaluation of the electrostatics in ligand-protein interactions is crucial for computer-aided drug design. The pairwise generalized Born (GB) model, a fast analytical method originally developed for studying solvation of organic molecules, has been widely applied to macromolecular systems, including ligand-protein complexes. Yet, this model involves several empirical scaling parameters, which have been optimized for solvation of organic molecules, peptides and nucleic acids, but not for energetics of ligand binding. Studies have shown that a good solvation energy does not guarantee a correct model of solvent-mediated interactions. Thus in this study, we have used the Poisson-Boltzmann (PB) approach as a reference to optimize the GB model for studies of ligand-protein interactions. Specifically, we have employed the pairwise descreening approximation proposed by Hawkins et al [1] for GB calculations, and DelPhi for PB calculations. The AMBER all-atom force field parameters have been used in this work. Seventeen protein-ligand complexes have been used as a training database, and a set of atomic descreening parameters has been selected with which the pairwise GB model and the PB model yield comparable results on atomic Born radii, the electrostatic component of free energies of ligand binding, and desolvation energies of the ligands and proteins. The energetics of the fifteen test complexes calculated with the GB model using this set of parameters also agrees well with the energetics calculated with the PB method. This is the first time that the GB model is parameterized and thoroughly compared with the PB model for the electrostatics of ligand binding. PMID:16671749
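    For reference, the pairwise GB energy is commonly evaluated with Still's interpolation formula; the sketch below takes the effective Born radii as given (in practice they come from a descreening model such as that of Hawkins et al.) and uses simplified units. It is an illustration of the GB functional form, not the paper's parameterization.

```python
import math

def gb_energy(charges, radii, coords, eps_in=1.0, eps_out=80.0):
    """Pairwise generalized Born solvation energy with Still's interpolation
    f_GB = sqrt(r^2 + Ri*Rj*exp(-r^2 / (4*Ri*Rj))), summed over all pairs
    including self terms. Units: e, Angstrom, kcal/mol (332 conversion)."""
    pref = -166.0 * (1.0 / eps_in - 1.0 / eps_out)   # -0.5 * 332 * (...)
    E = 0.0
    n = len(charges)
    for i in range(n):
        for j in range(n):
            dx = [a - b for a, b in zip(coords[i], coords[j])]
            r2 = sum(d * d for d in dx)
            RiRj = radii[i] * radii[j]
            f = math.sqrt(r2 + RiRj * math.exp(-r2 / (4.0 * RiRj)))
            E += pref * charges[i] * charges[j] / f
    return E

# Single unit charge with Born radius 2 A: reduces to the Born formula.
print(gb_energy([1.0], [2.0], [(0.0, 0.0, 0.0)]))
```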

  10. Marginal regression models for clustered count data based on zero-inflated Conway-Maxwell-Poisson distribution with applications.

    PubMed

    Choo-Wosoba, Hyoyoung; Levy, Steven M; Datta, Somnath

    2016-06-01

    Community water fluoridation is an important public health measure to prevent dental caries, but it continues to be somewhat controversial. The Iowa Fluoride Study (IFS) is a longitudinal study on a cohort of Iowa children that began in 1991. The main purposes of this study (http://www.dentistry.uiowa.edu/preventive-fluoride-study) were to quantify fluoride exposures from both dietary and nondietary sources and to associate longitudinal fluoride exposures with dental fluorosis (spots on teeth) and dental caries (cavities). We analyze a subset of the IFS data using a marginal regression model with a zero-inflated version of the Conway-Maxwell-Poisson distribution for count data exhibiting excessive zeros and a wide range of dispersion patterns. We introduce two estimation methods for fitting the resulting zero-inflated Conway-Maxwell-Poisson (ZICMP) marginal regression model. Finite-sample behaviors of the estimators and the resulting confidence intervals are studied using extensive simulation studies. We apply our methodologies to the dental caries data. Our novel modeling incorporating zero inflation, clustering, and overdispersion sheds some new light on the effect of community water fluoridation and other factors. We also include a second application of our methodology to a genomic (next-generation sequencing) dataset that exhibits underdispersion. PMID:26575079
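    As a simplified illustration of the zero-inflation component only (using a plain Poisson in place of the Conway-Maxwell-Poisson kernel, and ignoring clustering), a zero-inflated Poisson model can be fit by direct maximum likelihood on synthetic counts:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def fit_zip(y):
    """ML fit of a zero-inflated Poisson: extra zero mass pi, else Poisson(lam)."""
    def nll(params):
        pi = 1.0 / (1.0 + np.exp(-params[0]))   # logit-transformed inflation prob
        lam = np.exp(params[1])                  # log-transformed Poisson mean
        ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))        # structural or sampling zero
        ll_pos = np.log(1 - pi) - lam + y * np.log(lam) - gammaln(y + 1)
        return -np.sum(np.where(y == 0, ll_zero, ll_pos))
    res = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
    return 1.0 / (1.0 + np.exp(-res.x[0])), np.exp(res.x[1])

# synthetic counts: 30% structural zeros, otherwise Poisson with mean 2.5
rng = np.random.default_rng(0)
n = 2000
structural = rng.random(n) < 0.3
y = np.where(structural, 0, rng.poisson(2.5, n))
pi_hat, lam_hat = fit_zip(y)
```

    The zero-inflation probability and Poisson mean are recovered jointly, since both contribute to the observed fraction of zeros.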

  11. A multivariate Poisson-lognormal regression model for prediction of crash counts by severity, using Bayesian methods.

    PubMed

    Ma, Jianming; Kockelman, Kara M; Damien, Paul

    2008-05-01

    Numerous efforts have been devoted to investigating crash occurrence as related to roadway design features, environmental factors and traffic conditions. However, most of the research has relied on univariate count models; that is, traffic crash counts at different levels of severity are estimated separately, which may neglect shared information in unobserved error terms, reduce efficiency in parameter estimates, and lead to potential biases in sample databases. This paper offers a multivariate Poisson-lognormal (MVPLN) specification that simultaneously models crash counts by injury severity. The MVPLN specification allows for a more general correlation structure as well as overdispersion. This approach addresses several questions that are difficult to answer when estimating crash counts separately. Thanks to recent advances in crash modeling and Bayesian statistics, parameter estimation is done within the Bayesian paradigm, using a Gibbs Sampler and the Metropolis-Hastings (M-H) algorithms for crashes on Washington State rural two-lane highways. Estimation results from the MVPLN approach show statistically significant correlations between crash counts at different levels of injury severity. The non-zero diagonal elements suggest overdispersion in crash counts at all levels of severity. The results lend themselves to several recommendations for highway safety treatments and design policies. For example, wide lanes and shoulders are key for reducing crash frequencies, as are longer vertical curves. PMID:18460364
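    A minimal simulation sketch of the MVPLN structure (all parameter values are invented for illustration): a shared multivariate normal error on the log scale induces both correlation across severity levels and overdispersion within each, which is exactly the pattern the model is designed to capture.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
mu = np.log(2.0)                        # common baseline log-rate (illustrative)
cov = np.array([[0.5, 0.3],
                [0.3, 0.5]])            # correlated lognormal errors across severities
eps = rng.multivariate_normal([0.0, 0.0], cov, size=n)
counts = rng.poisson(np.exp(mu + eps))  # crash counts at two severity levels

corr = np.corrcoef(counts.T)[0, 1]                       # induced cross-severity correlation
overdisp = counts[:, 0].var() / counts[:, 0].mean()      # > 1 indicates overdispersion
```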

  12. Computational Process Modeling for Additive Manufacturing

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2014-01-01

    Computational Process and Material Modeling of Powder Bed additive manufacturing of IN 718. Optimize material build parameters with reduced time and cost through modeling. Increase understanding of build properties. Increase reliability of builds. Decrease time to adoption of process for critical hardware. Potential to decrease post-build heat treatments. Conduct single-track and coupon builds at various build parameters. Record build parameter information and QM Meltpool data. Refine Applied Optimization powder bed AM process model using data. Report thermal modeling results. Conduct metallography of build samples. Calibrate STK models using metallography findings. Run STK models using AO thermal profiles and report STK modeling results. Validate modeling with additional build. Photodiode Intensity measurements highly linear with power input. Melt Pool Intensity highly correlated to Melt Pool Size. Melt Pool size and intensity increase with power. Applied Optimization will use data to develop powder bed additive manufacturing process model.

  13. Partitioning the aggregation of parasites on hosts into intrinsic and extrinsic components via an extended Poisson-gamma mixture model.

    PubMed

    Calabrese, Justin M; Brunner, Jesse L; Ostfeld, Richard S

    2011-01-01

    It is well known that parasites are often highly aggregated on their hosts such that relatively few individuals host the large majority of parasites. When the parasites are vectors of infectious disease, a key consequence of this aggregation can be increased disease transmission rates. The cause of this aggregation, however, is much less clear, especially for parasites such as arthropod vectors, which generally spend only a short time on their hosts. Regression-based analyses of ticks on various hosts have focused almost exclusively on identifying the intrinsic host characteristics associated with large burdens, but these efforts have had mixed results; most host traits examined have some small influence, but none are key. An alternative approach, the Poisson-gamma mixture distribution, has often been used to describe aggregated parasite distributions in a range of host/macroparasite systems, but lacks a clear mechanistic basis. Here, we extend this framework by linking it to a general model of parasite accumulation. Then, focusing on blacklegged ticks (Ixodes scapularis) on mice (Peromyscus leucopus), we fit the extended model to the best currently available larval tick burden datasets via hierarchical Bayesian methods, and use it to explore the relative contributions of intrinsic and extrinsic factors on observed tick burdens. Our results suggest that simple bad luck (inhabiting a home range with high vector density) may play a much larger role in determining parasite burdens than is currently appreciated. PMID:22216216
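    The Poisson-gamma mixture underlying the model is the classical route to the negative binomial: gamma-distributed intrinsic rates mixed with Poisson sampling yield aggregated (overdispersed) burdens. A quick numerical check with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
shape, scale = 2.0, 1.5                    # gamma "intrinsic susceptibility" parameters
lam = rng.gamma(shape, scale, size=50000)  # per-host mean burden varies gamma-wise
burdens = rng.poisson(lam)                 # observed per-host tick burdens

m, v = burdens.mean(), burdens.var()
# negative binomial moments: mean = shape*scale, var = mean + mean**2/shape
expected_mean = shape * scale                              # 3.0
expected_var = expected_mean + expected_mean**2 / shape    # 7.5
```

    The sample variance exceeds the sample mean, reproducing the aggregation that a pure Poisson cannot.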

  14. Fuzzy classifier based support vector regression framework for Poisson ratio determination

    NASA Astrophysics Data System (ADS)

    Asoodeh, Mojtaba; Bagheripour, Parisa

    2013-09-01

    Poisson ratio is considered as one of the most important rock mechanical properties of hydrocarbon reservoirs. Determination of this parameter through laboratory measurement is time-, cost-, and labor-intensive. Furthermore, laboratory measurements do not provide continuous data along the reservoir intervals. Hence, a fast, accurate, and inexpensive way of determining Poisson ratio which produces continuous data over the whole reservoir interval is desirable. For this purpose, support vector regression (SVR) method based on statistical learning theory (SLT) was employed as a supervised learning algorithm to estimate Poisson ratio from conventional well log data. SVR is capable of accurately extracting the implicit knowledge contained in conventional well logs and converting the gained knowledge into Poisson ratio data. Structural risk minimization (SRM) principle which is embedded in the SVR structure in addition to empirical risk minimization (ERM) principle provides a robust model for finding quantitative formulation between conventional well log data and Poisson ratio. Although satisfying results were obtained from an individual SVR model, it tended to overestimate low Poisson ratios and underestimate high ones. These errors were eliminated through implementation of fuzzy classifier based SVR (FCBSVR). The FCBSVR significantly improved accuracy of the final prediction. This strategy was successfully applied to data from carbonate reservoir rocks of an Iranian Oil Field. Results indicated that SVR predicted Poisson ratio values are in good agreement with measured values.

  15. Random transitions described by the stochastic Smoluchowski-Poisson system and by the stochastic Keller-Segel model

    NASA Astrophysics Data System (ADS)

    Chavanis, P. H.; Delfini, L.

    2014-03-01

    We study random transitions between two metastable states that appear below a critical temperature in a one-dimensional self-gravitating Brownian gas with a modified Poisson equation experiencing a second order phase transition from a homogeneous phase to an inhomogeneous phase [P. H. Chavanis and L. Delfini, Phys. Rev. E 81, 051103 (2010), 10.1103/PhysRevE.81.051103]. We numerically solve the N-body Langevin equations and the stochastic Smoluchowski-Poisson system, which takes fluctuations (finite N effects) into account. The system switches back and forth between the two metastable states (bistability) and the particles accumulate successively at the center or at the boundary of the domain. We explicitly show that these random transitions exhibit the phenomenology of the ordinary Kramers problem for a Brownian particle in a double-well potential. The distribution of the residence time is Poissonian and the average lifetime of a metastable state is given by the Arrhenius law; i.e., it is proportional to the exponential of the barrier of free energy ΔF divided by the energy of thermal excitation kBT. Since the free energy is proportional to the number of particles N for a system with long-range interactions, the lifetime of metastable states scales as e^N and is considerable for N ≫ 1. As a result, in many applications, metastable states of systems with long-range interactions can be considered as stable states. However, for moderate values of N, or close to a critical point, the lifetime of the metastable states is reduced since the barrier of free energy decreases. In that case, the fluctuations become important and the mean field approximation is no longer valid. This is the situation considered in this paper. By an appropriate change of notations, our results also apply to bacterial populations experiencing chemotaxis in biology. Their dynamics can be described by a stochastic Keller-Segel model that takes fluctuations into account and goes beyond the usual mean

  16. Between algorithm and model: different Molecular Surface definitions for the Poisson-Boltzmann based electrostatic characterization of biomolecules in solution.

    PubMed

    Decherchi, Sergio; Colmenares, José; Catalano, Chiara Eva; Spagnuolo, Michela; Alexov, Emil; Rocchia, Walter

    2013-01-01

    The definition of a molecular surface which is physically sound and computationally efficient is a very interesting and long standing problem in the implicit solvent continuum modeling of biomolecular systems as well as in the molecular graphics field. In this work, two molecular surfaces are evaluated with respect to their suitability for electrostatic computation as alternatives to the widely used Connolly-Richards surface: the blobby surface, an implicit Gaussian atom centered surface, and the skin surface. As figures of merit, we considered surface differentiability and surface area continuity with respect to atom positions, and the agreement with explicit solvent simulations. Geometric analysis seems to favor the skin surface over the blobby surface, and points to an unexpected relationship between the non-connectedness of the surface, caused by interstices in the solute volume, and the surface area dependence on atomic centers. In order to assess the ability to reproduce explicit solvent results, specific software tools have been developed to enable the use of the skin surface in Poisson-Boltzmann calculations with the DelPhi solver. Results indicate that the skin and Connolly surfaces have a comparable performance from this last point of view.

  17. How many dinosaur species were there? Fossil bias and true richness estimated using a Poisson sampling model

    PubMed Central

    Starrfelt, Jostein; Liow, Lee Hsiang

    2016-01-01

    The fossil record is a rich source of information about biological diversity in the past. However, the fossil record is not only incomplete but has also inherent biases due to geological, physical, chemical and biological factors. Our knowledge of past life is also biased because of differences in academic and amateur interests and sampling efforts. As a result, not all individuals or species that lived in the past are equally likely to be discovered at any point in time or space. To reconstruct temporal dynamics of diversity using the fossil record, biased sampling must be explicitly taken into account. Here, we introduce an approach that uses the variation in the number of times each species is observed in the fossil record to estimate both sampling bias and true richness. We term our technique TRiPS (True Richness estimated using a Poisson Sampling model) and explore its robustness to violation of its assumptions via simulations. We then venture to estimate sampling bias and absolute species richness of dinosaurs in the geological stages of the Mesozoic. Using TRiPS, we estimate that 1936 (1543–2468) species of dinosaurs roamed the Earth during the Mesozoic. We also present improved estimates of species richness trajectories of the three major dinosaur clades: the sauropodomorphs, ornithischians and theropods, casting doubt on the Jurassic–Cretaceous extinction event and demonstrating that all dinosaur groups are subject to considerable sampling bias throughout the Mesozoic. PMID:26977060
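    A simplified moment-based caricature of the TRiPS idea (not the authors' full Bayesian implementation) can be sketched as follows: fit a zero-truncated Poisson to the per-species occurrence counts, then divide the observed richness by the estimated probability of a species being sampled at least once. All numbers below are synthetic.

```python
import numpy as np
from scipy.optimize import brentq

def estimate_richness(occurrences):
    """occurrences: number of fossil finds per *observed* species (all >= 1)."""
    xbar = occurrences.mean()
    # mean of a zero-truncated Poisson is lam / (1 - exp(-lam)); solve for lam
    lam = brentq(lambda l: l / (1.0 - np.exp(-l)) - xbar, 1e-9, 100.0)
    p_detect = 1.0 - np.exp(-lam)          # chance a species is found at least once
    return len(occurrences) / p_detect

# synthetic check: 200 true species, sampling rate of 1.5 finds per species
rng = np.random.default_rng(3)
finds = rng.poisson(1.5, size=200)
observed = finds[finds > 0]                # unseen species leave no record
richness_hat = estimate_richness(observed)
```

    The estimate lands near the true value of 200 even though only the sampled species are visible.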

  18. How many dinosaur species were there? Fossil bias and true richness estimated using a Poisson sampling model.

    PubMed

    Starrfelt, Jostein; Liow, Lee Hsiang

    2016-04-01

    The fossil record is a rich source of information about biological diversity in the past. However, the fossil record is not only incomplete but has also inherent biases due to geological, physical, chemical and biological factors. Our knowledge of past life is also biased because of differences in academic and amateur interests and sampling efforts. As a result, not all individuals or species that lived in the past are equally likely to be discovered at any point in time or space. To reconstruct temporal dynamics of diversity using the fossil record, biased sampling must be explicitly taken into account. Here, we introduce an approach that uses the variation in the number of times each species is observed in the fossil record to estimate both sampling bias and true richness. We term our technique TRiPS (True Richness estimated using a Poisson Sampling model) and explore its robustness to violation of its assumptions via simulations. We then venture to estimate sampling bias and absolute species richness of dinosaurs in the geological stages of the Mesozoic. Using TRiPS, we estimate that 1936 (1543-2468) species of dinosaurs roamed the Earth during the Mesozoic. We also present improved estimates of species richness trajectories of the three major dinosaur clades: the sauropodomorphs, ornithischians and theropods, casting doubt on the Jurassic-Cretaceous extinction event and demonstrating that all dinosaur groups are subject to considerable sampling bias throughout the Mesozoic.

  20. Detection of Gaussian signals in Poisson-modulated interference.

    PubMed

    Streit, R L

    2000-10-01

    Passive broadband detection of target signals by an array of hydrophones in the presence of multiple discrete interferers is analyzed under Gaussian statistics and low signal-to-noise ratio conditions. A nonhomogeneous Poisson-modulated interference process is used to model the ensemble of possible arrival directions of the discrete interferers. Closed-form expressions are derived for the recognition differential of the passive-sonar equation in the presence of Poisson-modulated interference. The interference-compensated recognition differential differs from the classical recognition differential by an additive positive term that depends on the interference-to-noise ratio, the directionality of the Poisson-modulated interference, and the array beam pattern.

  1. Poisson's ratio model derived from P- and S-wave reflection seismic data at the CO2CRC Otway Project pilot site, Australia

    NASA Astrophysics Data System (ADS)

    Beilecke, Thies; Krawczyk, Charlotte M.; Tanner, David C.; Ziesch, Jennifer; Research Group Protect

    2014-05-01

    Compressional wave (P-wave) reflection seismic field measurements are a standard tool for subsurface exploration. 2-D seismic measurements are often used for overview measurements, but also as a near-surface supplement to fill gaps that often exist in 3-D seismic data sets. Such supplementing 2-D measurements are typically simple with respect to field layout. This is an opportunity for the use of shear waves (S-waves). In recent years, S-waves have become increasingly important. One reason is that P- and S-waves are differently sensitive to fluids and pore fill so that the additional S-wave information can be used to enhance lithological studies. Another reason is that S-waves have the advantage of higher spatial resolution. Within the same signal bandwidth they typically have about half the wavelength of P-waves. In near-surface unconsolidated sediments they can even enhance the structural resolution by one order of magnitude. We make use of these capabilities within the PROTECT project. In addition to already existing 2-D P-wave data, we carried out a near surface 2-D S-wave field survey at the CO2CRC Otway Project pilot site, close to Warrnambool, Australia in November 2013. The combined analysis of P-wave and S-wave data is used to construct a Poisson's Ratio 2-D model down to roughly 600 m depth. The Poisson's ratio values along a 1 km long profile at the site are surprisingly high, ranging from 0.47 in the carbonate-dominated near surface to 0.4 at depth. In the literature, average lab measurements of 0.22 for unfissured carbonates and 0.37 for fissured examples have been reported. The high values that we found may indicate areas of rather unconsolidated or fractured material, or enhanced fluid contents, and will be the subject of further studies. This work is integrated into a larger workflow towards prediction of CO2 leakage and monitoring strategies for subsurface storage in general. Acknowledgement: This work was sponsored in part by the Australian
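    The Poisson's ratio values quoted above follow from the standard elastic relation nu = (Vp^2 - 2 Vs^2) / (2 (Vp^2 - Vs^2)); a small helper illustrates it (the velocity values below are placeholders, not the survey's):

```python
import math

def poisson_ratio(vp, vs):
    """Poisson's ratio from P- and S-wave velocities (any consistent units)."""
    r2 = (vp / vs) ** 2
    return (r2 - 2.0) / (2.0 * (r2 - 1.0))

# Vp/Vs = sqrt(3) gives the textbook value of 0.25
nu_quarter = poisson_ratio(math.sqrt(3.0), 1.0)
# a value near 0.47, as reported at the site, requires Vp/Vs of roughly 4.2
nu_high = poisson_ratio(4.2, 1.0)
```

    High Vp/Vs ratios like these are consistent with unconsolidated or fluid-rich sediments, where Vs is disproportionately low.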

  2. Vlasov-Maxwell and Vlasov-Poisson equations as models of a one-dimensional electron plasma

    NASA Technical Reports Server (NTRS)

    Klimas, A. J.; Cooper, J.

    1983-01-01

    The Vlasov-Maxwell and Vlasov-Poisson systems of equations for a one-dimensional electron plasma are defined and discussed. A method for transforming a solution of one system which is periodic over a bounded or unbounded spatial interval to a similar solution of the other is constructed.

  3. Effect of Nutritional Habits on Dental Caries in Permanent Dentition among Schoolchildren Aged 10–12 Years: A Zero-Inflated Generalized Poisson Regression Model Approach

    PubMed Central

    ALMASI, Afshin; RAHIMIFOROUSHANI, Abbas; ESHRAGHIAN, Mohammad Reza; MOHAMMAD, Kazem; PASDAR, Yahya; TARRAHI, Mohammad Javad; MOGHIMBEIGI, Abbas; AHMADI JOUYBARI, Touraj

    2016-01-01

    Background: The aim of this study was to assess the associations between nutrition and dental caries in permanent dentition among schoolchildren. Methods: A cross-sectional survey was undertaken on 698 schoolchildren aged 10 to 12 yr from a random sample of primary schools in Kermanshah, western Iran, in 2014. The study was based on the data obtained from the questionnaire containing information on nutritional habits and the outcome of decayed/missing/filled teeth (DMFT) index. The association between predictors and dental caries was modeled using the Zero Inflated Generalized Poisson (ZIGP) regression model. Results: Fourteen percent of the children were caries free. The model showed that the odds of girls being in the caries-susceptible subgroup were 1.23 (95% CI: 1.08–1.51) times those of boys (P=0.041). Additionally, the mean caries count in children who consumed fizzy soft beverages and sweet biscuits more than once daily was, respectively, 1.41 (95% CI: 1.19–1.63) and 1.27 (95% CI: 1.18–1.37) times that of children who consumed them less than three times a week or never. Conclusions: Girls were at a higher risk of caries than boys. Since our study showed that nutritional status may have a significant effect on caries in permanent teeth, we recommend that school health promotion activities emphasize healthful eating practices, especially limiting sugar-containing beverages to only occasional consumption between meals. PMID:27141498

  4. A Poisson model for identifying characteristic size effects in frequency data: Application to frequency-size distributions for global earthquakes, "starquakes", and fault lengths

    NASA Astrophysics Data System (ADS)

    Leonard, Thomas; Papasouliotis, Orestis; Main, Ian G.

    2001-01-01

    The standard Gaussian distribution for incremental frequency data requires a constant variance which is independent of the mean. We develop a more general and appropriate method based on the Poisson distribution, which assumes different unknown variances for the frequencies, equal to the means. We explicitly include "empty bins", and our method is quite insensitive to the choice of bin width. We develop a maximum likelihood technique that minimizes bias in the curve fits, and penalizes additional free parameters by objective information criteria. Various data sets are used to test three different physical models that have been suggested for the density distribution: the power law; the double power law; and the "gamma" distribution. For the CMT catalog of global earthquakes, two peaks in the posterior distribution are observed at moment magnitudes m* = 6.4 and 6.9 implying a bimodal distribution of seismogenic depth at around 15 and 30 km, respectively. A similar break at a characteristic length of 60 km or so is observed in moment-length data, but this does not outperform the simpler power law model. For the earthquake frequency-moment data the gamma distribution provides the best overall fit to the data, implying a finite correlation length and a system near but below the critical point. In contrast, data from soft gamma ray repeaters show that the power law is the best fit, implying infinite correlation length and a system that is precisely critical. For the fault break data a significant break of slope is found instead at characteristic scale of 44 km, implying a typical seismogenic thickness of up to 22 km or so in west central Nevada. The exponent changes from 1.5 to -2.1, too large to be accounted for by changes in sampling for an ideal, isotropic fractal set.
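    The bin-wise Poisson likelihood, which keeps empty bins in the fit, can be sketched for a synthetic Gutenberg-Richter-style frequency-magnitude dataset (the rates and parameters below are invented, not the paper's data):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
m = np.arange(4.0, 8.0, 0.1)               # magnitude bin centers
true_b, true_logN = 1.0, np.log(1000.0)    # synthetic Gutenberg-Richter parameters
rates = np.exp(true_logN - true_b * (m - 4.0) * np.log(10.0))
counts = rng.poisson(rates)                # high-magnitude bins are often empty

def nll(params):
    logN, b = params
    mu = np.exp(logN - b * (m - 4.0) * np.log(10.0))
    # Poisson negative log-likelihood (constant terms dropped);
    # empty bins still contribute their expected count mu
    return np.sum(mu - counts * np.log(mu))

fit = minimize(nll, x0=[np.log(500.0), 0.8], method="Nelder-Mead")
b_hat = fit.x[1]
```

    Unlike a least-squares fit of log-counts, the Poisson likelihood handles empty bins naturally and weights each bin by its expected variance.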

  5. Generalized additive modeling with implicit variable selection by likelihood-based boosting.

    PubMed

    Tutz, Gerhard; Binder, Harald

    2006-12-01

    The use of generalized additive models in statistical data analysis suffers from the restriction to few explanatory variables and the problems of selection of smoothing parameters. Generalized additive model boosting circumvents these problems by means of stagewise fitting of weak learners. A fitting procedure is derived which works for all simple exponential family distributions, including binomial, Poisson, and normal response variables. The procedure combines the selection of variables and the determination of the appropriate amount of smoothing. Penalized regression splines and the newly introduced penalized stumps are considered as weak learners. Estimates of standard deviations and stopping criteria, which are notorious problems in iterative procedures, are based on an approximate hat matrix. The method is shown to be a strong competitor to common procedures for the fitting of generalized additive models. In particular, in high-dimensional settings with many nuisance predictor variables it performs very well. PMID:17156269
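    A toy version of the componentwise selection idea, assuming squared-error loss and crude quantile-split stumps in place of the paper's penalized likelihood-based learners: each boosting step fits a stump to the residuals on every candidate variable and keeps only the best one, so variable selection happens implicitly.

```python
import numpy as np
from collections import Counter

def boost_stumps(X, y, n_iter=50, nu=0.1):
    """Componentwise L2 boosting with one-split stumps as weak learners."""
    f = np.full(len(y), y.mean())
    selected = []
    for _ in range(n_iter):
        r = y - f                         # current residuals
        best = None
        for j in range(X.shape[1]):
            for t in np.quantile(X[:, j], [0.25, 0.5, 0.75]):
                left = X[:, j] <= t
                if left.all() or not left.any():
                    continue
                pred = np.where(left, r[left].mean(), r[~left].mean())
                sse = ((r - pred) ** 2).sum()
                if best is None or sse < best[0]:
                    best = (sse, j, t, r[left].mean(), r[~left].mean())
        _, j, t, a, b = best
        f = f + nu * np.where(X[:, j] <= t, a, b)   # stagewise, shrunken update
        selected.append(j)                # record of implicit variable selection
    return f, selected

rng = np.random.default_rng(5)
X = rng.random((200, 5))                  # one signal variable, four nuisance variables
y = 2.0 * (X[:, 0] > 0.5) + 0.1 * rng.standard_normal(200)
fit, selected = boost_stumps(X, y)
top_var = Counter(selected).most_common(1)[0][0]
```

    With one informative variable among several nuisance predictors, the boosting path concentrates its selections on the informative one, mirroring the implicit selection behavior described above.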

  6. Poisson Regression Analysis of Illness and Injury Surveillance Data

    SciTech Connect

    Frome E.L., Watkins J.P., Ellis E.D.

    2012-12-12

    The Department of Energy (DOE) uses illness and injury surveillance to monitor morbidity and assess the overall health of the work force. Data collected from each participating site include health events and a roster file with demographic information. The source data files are maintained in a relational data base, and are used to obtain stratified tables of health event counts and person time at risk that serve as the starting point for Poisson regression analysis. The explanatory variables that define these tables are age, gender, occupational group, and time. Typical response variables of interest are the number of absences due to illness or injury, i.e., the response variable is a count. Poisson regression methods are used to describe the effect of the explanatory variables on the health event rates using a log-linear main effects model. Results of fitting the main effects model are summarized in a tabular and graphical form and interpretation of model parameters is provided. An analysis of deviance table is used to evaluate the importance of each of the explanatory variables on the event rate of interest and to determine if interaction terms should be considered in the analysis. Although Poisson regression methods are widely used in the analysis of count data, there are situations in which over-dispersion occurs. This could be due to lack-of-fit of the regression model, extra-Poisson variation, or both. A score test statistic and regression diagnostics are used to identify over-dispersion. A quasi-likelihood method of moments procedure is used to evaluate and adjust for extra-Poisson variation when necessary. Two examples are presented using respiratory disease absence rates at two DOE sites to illustrate the methods and interpretation of the results. In the first example the Poisson main effects model is adequate. In the second example the score test indicates considerable over-dispersion and a more detailed analysis attributes the over-dispersion to extra-Poisson
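    The core computation can be sketched on synthetic data (the DOE analyses use richer stratification by age, gender, occupation, and time): a log-linear Poisson fit with a log person-time offset, followed by a Pearson-type dispersion check.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
n = 500
x = rng.random(n)                        # a single illustrative covariate (e.g. scaled age)
persontime = rng.uniform(1.0, 5.0, n)    # person-years at risk per stratum
beta_true = np.array([-1.0, 0.8])
mu_true = persontime * np.exp(beta_true[0] + beta_true[1] * x)
y = rng.poisson(mu_true)                 # absence counts per stratum

X = np.column_stack([np.ones(n), x])
offset = np.log(persontime)              # rates, not raw counts, are modeled

def nll(beta):
    eta = X @ beta + offset              # log-linear model with person-time offset
    return np.sum(np.exp(eta) - y * eta)

beta_hat = minimize(nll, np.zeros(2), method="BFGS").x
mu_hat = np.exp(X @ beta_hat + offset)
# Pearson dispersion statistic: near 1 for pure Poisson variation,
# substantially above 1 suggests over-dispersion
dispersion = np.sum((y - mu_hat) ** 2 / mu_hat) / (n - 2)
```

    When the dispersion statistic is well above 1, a quasi-likelihood adjustment of the standard errors, as described above, would be warranted.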

  7. RNA gels with negative Poisson ratio

    NASA Astrophysics Data System (ADS)

    Ahsan, Amir

    2005-03-01

    We present a simple model for the elastic properties of very large single-stranded RNA molecules linked by partial complementary pairing, such as a viral RNA genome in solution. It is shown that the sign of Poisson's Ratio is determined by the convexity of the force-extension curve of single-stranded RNA. The implications of negative Poisson ratios for viral genome encapsidation will be discussed.

  8. New generalized poisson mixture model for bimodal count data with drug effect: An application to rodent brief‐access taste aversion experiments

    PubMed Central

    Soto, J; Orlu Gul, M; Cortina‐Borja, M; Tuleu, C; Standing, JF

    2016-01-01

    Pharmacodynamic (PD) count data can exhibit bimodality and nonequidispersion complicating the inclusion of drug effect. The purpose of this study was to explore four different mixture distribution models for bimodal count data by including both drug effect and distribution truncation. An example dataset, which exhibited bimodal pattern, was from rodent brief‐access taste aversion (BATA) experiments to assess the bitterness of ascending concentrations of an aversive tasting drug. The two generalized Poisson mixture models performed best and were flexible enough to capture both under- and overdispersion. A sigmoid maximum effect (Emax) model with logistic transformation was introduced to link the drug effect to the data partition within each distribution. Predicted density‐histogram plot is suggested as a model evaluation tool due to its capability to directly compare the model predicted density with the histogram from raw data. The modeling approach presented here could form a useful strategy for modeling similar count data types. PMID:27472892
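    For reference, the generalized Poisson pmf is P(x) = theta*(theta + x*delta)^(x-1) * exp(-theta - x*delta) / x!, with mean theta/(1-delta) and variance theta/(1-delta)^3, so delta > 0 gives overdispersion and delta < 0 underdispersion. A numerical check with illustrative parameters:

```python
import numpy as np
from scipy.special import gammaln

def gp_pmf(x, theta, delta):
    """Generalized Poisson pmf; zero outside the support when delta < 0."""
    x = np.asarray(x, dtype=float)
    lam = theta + x * delta
    with np.errstate(invalid="ignore", divide="ignore"):
        logp = np.log(theta) + (x - 1) * np.log(lam) - lam - gammaln(x + 1)
    return np.where(lam > 0, np.exp(logp), 0.0)

x = np.arange(0, 400)
# delta > 0: overdispersed; moments are theta/(1-delta) and theta/(1-delta)**3
p_over = gp_pmf(x, 2.0, 0.3)
mean_over = np.sum(x * p_over)
var_over = np.sum(x**2 * p_over) - mean_over**2
# delta < 0: underdispersed (support is truncated, so mass sums to just under 1)
p_under = gp_pmf(x, 2.0, -0.2)
mean_under = np.sum(x * p_under)
var_under = np.sum(x**2 * p_under) - mean_under**2
```

    The same family therefore covers both dispersion regimes with a single extra parameter, which is what makes it attractive for the bimodal mixtures described above.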

  9. New generalized poisson mixture model for bimodal count data with drug effect: An application to rodent brief-access taste aversion experiments.

    PubMed

    Sheng, Y; Soto, J; Orlu Gul, M; Cortina-Borja, M; Tuleu, C; Standing, J F

    2016-08-01

    Pharmacodynamic (PD) count data can exhibit bimodality and nonequidispersion complicating the inclusion of drug effect. The purpose of this study was to explore four different mixture distribution models for bimodal count data by including both drug effect and distribution truncation. An example dataset, which exhibited bimodal pattern, was from rodent brief-access taste aversion (BATA) experiments to assess the bitterness of ascending concentrations of an aversive tasting drug. The two generalized Poisson mixture models performed best and were flexible enough to capture both under- and overdispersion. A sigmoid maximum effect (Emax) model with logistic transformation was introduced to link the drug effect to the data partition within each distribution. Predicted density-histogram plot is suggested as a model evaluation tool due to its capability to directly compare the model predicted density with the histogram from raw data. The modeling approach presented here could form a useful strategy for modeling similar count data types. PMID:27472892

  10. Multiphase semiclassical approximation of an electron in a one-dimensional crystalline lattice - III. From ab initio models to WKB for Schroedinger-Poisson

    SciTech Connect

    Gosse, Laurent. E-mail: mauser@univie.ac.at

    2006-01-01

    This work is concerned with the semiclassical approximation of the Schroedinger-Poisson equation modeling ballistic transport in a 1D periodic potential by means of WKB techniques. It is derived by considering the mean-field limit of a N-body quantum problem, then K-multivalued solutions are adapted to the treatment of this weakly nonlinear system obtained after homogenization without taking Pauli's exclusion principle into account. Numerical experiments display the behaviour of self-consistent wave packets and screening effects.

  11. Network reconstruction using nonparametric additive ODE models.

    PubMed

    Henderson, James; Michailidis, George

    2014-01-01

    Network representations of biological systems are widespread and reconstructing unknown networks from data is a focal problem for computational biologists. For example, the series of biochemical reactions in a metabolic pathway can be represented as a network, with nodes corresponding to metabolites and edges linking reactants to products. In a different context, regulatory relationships among genes are commonly represented as directed networks with edges pointing from influential genes to their targets. Reconstructing such networks from data is a challenging problem receiving much attention in the literature. There is a particular need for approaches tailored to time-series data and not reliant on direct intervention experiments, as the former are often more readily available. In this paper, we introduce an approach to reconstructing directed networks based on dynamic systems models. Our approach generalizes commonly used ODE models based on linear or nonlinear dynamics by extending the functional class for the functions involved from parametric to nonparametric models. Concomitantly we limit the complexity by imposing an additive structure on the estimated slope functions. Thus the submodel associated with each node is a sum of univariate functions. These univariate component functions form the basis for a novel coupling metric that we define in order to quantify the strength of proposed relationships and hence rank potential edges. We show the utility of the method by reconstructing networks using simulated data from computational models for the glycolytic pathway of Lactococcus lactis and a gene network regulating the pluripotency of mouse embryonic stem cells. For purposes of comparison, we also assess reconstruction performance using gene networks from the DREAM challenges. We compare our method to those that similarly rely on dynamic systems models and use the results to attempt to disentangle the distinct roles of linearity, sparsity, and derivative

  12. Network Reconstruction Using Nonparametric Additive ODE Models

    PubMed Central

    Henderson, James; Michailidis, George

    2014-01-01

    Network representations of biological systems are widespread and reconstructing unknown networks from data is a focal problem for computational biologists. For example, the series of biochemical reactions in a metabolic pathway can be represented as a network, with nodes corresponding to metabolites and edges linking reactants to products. In a different context, regulatory relationships among genes are commonly represented as directed networks with edges pointing from influential genes to their targets. Reconstructing such networks from data is a challenging problem receiving much attention in the literature. There is a particular need for approaches tailored to time-series data and not reliant on direct intervention experiments, as the former are often more readily available. In this paper, we introduce an approach to reconstructing directed networks based on dynamic systems models. Our approach generalizes commonly used ODE models based on linear or nonlinear dynamics by extending the functional class for the functions involved from parametric to nonparametric models. Concomitantly, we limit the complexity by imposing an additive structure on the estimated slope functions. Thus the submodel associated with each node is a sum of univariate functions. These univariate component functions form the basis for a novel coupling metric that we define in order to quantify the strength of proposed relationships and hence rank potential edges. We show the utility of the method by reconstructing networks using simulated data from computational models for the glycolytic pathway of Lactococcus lactis and a gene network regulating the pluripotency of mouse embryonic stem cells. For purposes of comparison, we also assess reconstruction performance using gene networks from the DREAM challenges. We compare our method to those that similarly rely on dynamic systems models and use the results to attempt to disentangle the distinct roles of linearity, sparsity, and derivative

  13. Generalized Poisson distribution: the property of mixture of Poisson and comparison with negative binomial distribution.

    PubMed

    Joe, Harry; Zhu, Rong

    2005-04-01

    We prove that the generalized Poisson distribution GP(θ, η) (η ≥ 0) is a mixture of Poisson distributions; this is a new property for a distribution which is the topic of the book by Consul (1989). Because we find that the fits to count data of the generalized Poisson and negative binomial distributions are often similar, to understand their differences, we compare the probability mass functions and skewnesses of the generalized Poisson and negative binomial distributions with the first two moments fixed. They have slight differences in many situations, but their zero-inflated distributions, with masses at zero, means and variances fixed, can differ more. These probabilistic comparisons are helpful in selecting a better fitting distribution for modelling count data with long right tails. Through a real example of count data with large zero fraction, we illustrate how the generalized Poisson and negative binomial distributions as well as their zero-inflated distributions can be discriminated. PMID:16389919
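
    The generalized Poisson pmf is P(K=k) = θ(θ+ηk)^(k-1) e^(−θ−ηk)/k!, with mean θ/(1−η) and variance θ/(1−η)³. A minimal sketch of the moment-matched comparison the abstract describes (the parameter values θ = 2, η = 0.3 are illustrative, not from the paper):

```python
from math import exp, lgamma, log

def gp_pmf(k, theta, eta):
    """Generalized Poisson pmf, evaluated in log space for numerical stability."""
    logp = log(theta) + (k - 1) * log(theta + eta * k) - theta - eta * k - lgamma(k + 1)
    return exp(logp)

def nb_pmf(k, r, p):
    """Negative binomial pmf: P(K=k) = C(k+r-1, k) p^r (1-p)^k, in log space."""
    logp = lgamma(k + r) - lgamma(r) - lgamma(k + 1) + r * log(p) + k * log(1 - p)
    return exp(logp)

# Match the first two moments: GP mean = theta/(1-eta), variance = theta/(1-eta)^3
theta, eta = 2.0, 0.3
mean = theta / (1 - eta)
var = theta / (1 - eta) ** 3
p = mean / var                 # NB: mean = r(1-p)/p, variance = mean/p
r = mean * p / (1 - p)

for k in range(6):             # compare the two pmfs near the origin
    print(k, round(gp_pmf(k, theta, eta), 4), round(nb_pmf(k, r, p), 4))
```

    With both distributions pinned to the same mean and variance, the printed columns differ only slightly, consistent with the paper's observation that the fits are often similar.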

  14. Computational Process Modeling for Additive Manufacturing (OSU)

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2015-01-01

    Powder-Bed Additive Manufacturing (AM) through Direct Metal Laser Sintering (DMLS) or Selective Laser Melting (SLM) is being used by NASA and the Aerospace industry to "print" parts that traditionally are very complex, high cost, or long schedule lead items. The process spreads a thin layer of metal powder over a build platform, then melts the powder in a series of welds in a desired shape. The next layer of powder is applied, and the process is repeated until, layer by layer, a very complex part can be built. This reduces cost and schedule by eliminating very complex tooling and processes traditionally used in aerospace component manufacturing. To use the process to print end-use items, NASA seeks to understand SLM material well enough to develop a method of qualifying parts for space flight operation. Traditionally, a new material process takes many years and high investment to generate statistical databases and experiential knowledge, but computational modeling can truncate the schedule and cost: many experiments can be run quickly in a model, whereas they would take years and a high material cost to run empirically. This project seeks to optimize material build parameters with reduced time and cost through modeling.

  15. Cascaded Poisson processes

    NASA Astrophysics Data System (ADS)

    Matsuo, Kuniaki; Saleh, Bahaa E. A.; Teich, Malvin Carl

    1982-12-01

    We investigate the counting statistics for stationary and nonstationary cascaded Poisson processes. A simple equation is obtained for the variance-to-mean ratio in the limit of long counting times. Explicit expressions for the forward-recurrence and inter-event-time probability density functions are also obtained. The results are expected to be of use in a number of areas of physics.
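
    The long-counting-time variance-to-mean (Fano) ratio of a cascaded Poisson process can be checked numerically. A minimal simulation sketch (the rates μ and ν are illustrative): for a two-stage cascade in which each primary Poisson event spawns a Poisson(ν)-distributed number of secondaries, the total count has Fano factor 1 + ν.

```python
import numpy as np

rng = np.random.default_rng(42)
mu, nu = 5.0, 3.0            # primary event rate, mean multiplicity per primary

# Each trial: M primary events; given M, the secondary total is Poisson(nu * M),
# since a sum of M iid Poisson(nu) counts is Poisson(nu * M).
M = rng.poisson(mu, size=200_000)
N = rng.poisson(nu * M)

fano = N.var() / N.mean()
print(f"simulated Fano factor: {fano:.3f}  (theory: {1 + nu:.3f})")
```

    The law of total variance gives Var[N] = μν + μν², so Var[N]/E[N] = 1 + ν, matching the simulation.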

  16. Demonstrating Poisson Statistics.

    ERIC Educational Resources Information Center

    Vetterling, William T.

    1980-01-01

    Describes an apparatus that offers a very lucid demonstration of Poisson statistics as applied to electrical currents, and the manner in which such statistics account for shot noise when applied to macroscopic currents. The experiment described is intended for undergraduate physics students. (HM)

  17. CREATION OF THE MODEL ADDITIONAL PROTOCOL

    SciTech Connect

    Houck, F.; Rosenthal, M.; Wulf, N.

    2010-05-25

    In 1991, the international nuclear nonproliferation community was dismayed to discover that the implementation of safeguards by the International Atomic Energy Agency (IAEA) under its NPT INFCIRC/153 safeguards agreement with Iraq had failed to detect Iraq's nuclear weapon program. It was now clear that ensuring that states were fulfilling their obligations under the NPT would require not just detecting diversion but also the ability to detect undeclared materials and activities. To achieve this, the IAEA initiated what would turn out to be a five-year effort to reappraise the NPT safeguards system. The effort engaged the IAEA and its Member States and led to agreement in 1997 on a new safeguards agreement, the Model Protocol Additional to the Agreement(s) between States and the International Atomic Energy Agency for the Application of Safeguards. The Model Protocol makes explicit that one IAEA goal is to provide assurance of the absence of undeclared nuclear material and activities. The Model Protocol requires an expanded declaration that identifies a State's nuclear potential, empowers the IAEA to raise questions about the correctness and completeness of the State's declaration, and, if needed, allows IAEA access to locations. The information required and the locations available for access are much broader than those provided for under INFCIRC/153. The negotiation was completed in quite a short time because it started with a relatively complete draft of an agreement prepared by the IAEA Secretariat. This paper describes how the Model Protocol was constructed and reviews key decisions that were made both during the five-year period and in the actual negotiation.

  18. Investigating the effect of modeling single-vehicle and multi-vehicle crashes separately on confidence intervals of Poisson-gamma models.

    PubMed

    Geedipally, Srinivas Reddy; Lord, Dominique

    2010-07-01

    Crash prediction models still constitute one of the primary tools for estimating traffic safety. These statistical models play a vital role in various types of safety studies. With a few exceptions, they have often been employed to estimate the number of crashes per unit of time for an entire highway segment or intersection, without distinguishing the influence different sub-groups have on crash risk. The two most important sub-groups that have been identified in the literature are single- and multi-vehicle crashes. Recently, some researchers have noted that developing two distinct models for these two categories of crashes provides better predicting performance than developing models combining both crash categories together. Thus, there is a need to determine whether a significant difference exists for the computation of confidence intervals when a single model is applied rather than two distinct models for single- and multi-vehicle crashes. The construction of confidence intervals has many important applications in highway safety. This paper investigates the effect of modeling single- and multi-vehicle (head-on and rear-end only) crashes separately versus modeling them together on the prediction of confidence intervals of Poisson-gamma models. Confidence intervals were calculated for total (all severities) crash models and fatal and severe injury crash models. The data used for the comparison analysis were collected on Texas multilane undivided highways for the years 1997-2001. This study shows that modeling single- and multi-vehicle crashes separately predicts larger confidence intervals than modeling them together as a single model. This difference is much larger for fatal and injury crash models than for models for all severity levels. Furthermore, it is found that the single- and multi-vehicle crashes are not independent. Thus, a joint (bivariate) model which accounts for correlation between single- and multi-vehicle crashes is developed and it predicts wider
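
    Under a Poisson-gamma (NB2) model with Var = μ + αμ², a prediction interval for a site's crash count can be read off the negative binomial quantiles. A hedged sketch (the values of μ and α are hypothetical, not taken from the Texas data):

```python
from scipy.stats import nbinom

def poisson_gamma_interval(mu, alpha, level=0.95):
    """Prediction interval for a crash count under a Poisson-gamma (NB2) model.
    mu: predicted mean crash count; alpha: overdispersion (Var = mu + alpha*mu^2)."""
    n = 1.0 / alpha            # scipy's nbinom 'n' parameter
    p = n / (n + mu)           # then mean = n(1-p)/p = mu, var = mu + alpha*mu^2
    lo = int(nbinom.ppf((1 - level) / 2, n, p))
    hi = int(nbinom.ppf(1 - (1 - level) / 2, n, p))
    return lo, hi

# Hypothetical segment: 4.2 predicted crashes/year, alpha = 0.6
print(poisson_gamma_interval(mu=4.2, alpha=0.6))
```

    A larger α widens the interval at the same mean, which is why the choice between one combined model and two separate sub-group models (each with its own dispersion) changes the resulting confidence intervals.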

  19. Detecting contaminated birthdates using generalized additive models

    PubMed Central

    2014-01-01

    Background Erroneous patient birthdates are common in health databases. Detection of these errors usually involves manual verification, which can be resource intensive and impractical. By identifying a frequent manifestation of birthdate errors, this paper presents a principled and statistically driven procedure to identify erroneous patient birthdates. Results Generalized additive models (GAM) enabled explicit incorporation of known demographic trends and birth patterns. With false positive rates controlled, the method identified birthdate contamination with high accuracy. In the health data set used, of the 58 actual incorrect birthdates manually identified by the domain expert, the GAM-based method identified 51, with 8 false positives (resulting in a positive predictive value of 86.0% (51/59) and a false negative rate of 12.0% (7/58)). These results outperformed linear time-series models. Conclusions The GAM-based method is an effective approach to identify systemic birthdate errors, a common data quality issue in both clinical and administrative databases, with high accuracy. PMID:24923281

  20. 3D solutions of the Poisson-Vlasov equations for a charged plasma and particle-core model in a line of FODO cells

    NASA Astrophysics Data System (ADS)

    Turchetti, G.; Rambaldi, S.; Bazzani, A.; Comunian, M.; Pisent, A.

    2003-09-01

    We consider a charged plasma of positive ions in a periodic focusing channel of quadrupolar magnets in the presence of RF cavities. The ions are bunched into charged triaxial ellipsoids and their description requires the solution of a fully 3D Poisson-Vlasov equation. We also analyze the trajectories of test particles in the exterior of the ion bunches in order to estimate their diffusion rate. This rate is relevant for a high intensity linac (TRASCO project). A numerical PIC scheme to integrate the Poisson-Vlasov equations in a periodic focusing system in 2 and 3 space dimensions is presented. The scheme consists of a single particle symplectic integrator and a Poisson solver based on FFT plus tri-diagonal matrix inversion. In the 2D version arbitrary boundary conditions can be chosen. Since no analytical self-consistent 3D solution is known, we chose an initial Neuffer-KV distribution in phase space, whose electric field is close to the one generated by a uniformly filled ellipsoid. For a matched (periodic) beam the orbits of test particles moving in the field of an ellipsoidal bunch, whose semi-axes satisfy the envelope equations, are similar to the orbits generated by the self-consistent charge distribution obtained from the PIC simulation, even though it relaxes to a Fermi-Dirac-like distribution. After a transient the RMS radii and emittances have small amplitude oscillations. The PIC simulations for a mismatched (quasiperiodic) beam are no longer comparable with the ellipsoidal bunch model even though the qualitative behavior is the same, namely a stronger diffusion due to the increase of resonances.

  1. Testing deviation for a set of serial dilution most probable numbers from a Poisson-binomial model.

    PubMed

    Blodgett, Robert J

    2006-01-01

    A serial dilution experiment estimates the microbial concentration in a broth by inoculating several sets of tubes with various amounts of the broth. The estimation uses the Poisson distribution and the number of tubes in each of these sets that show growth. Several factors, such as interfering microbes, toxins, or disaggregation of adhering microbes, may distort the results of a serial dilution experiment. A mild enough distortion may not raise suspicion with a single outcome. The test introduced here judges whether the entire set of serial dilution outcomes appears unusual. This test forms lists of the possible outcomes. The set of outcomes is declared unusual if any occurrence of an observed outcome is on the first list, or more than one is on the first or second list, etc. A similar test can apply when there are only a finite number of possible outcomes, and each outcome has a calculable probability, and few outcomes have tied probabilities.
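
    In the Poisson-binomial model, a tube inoculated with volume v from a broth of concentration c shows growth with probability 1 − e^(−cv) (i.e., whenever it receives at least one organism), and tubes are independent. A minimal sketch of the outcome probabilities that the proposed test ranks (the 3×3 design and concentration are illustrative):

```python
import math

def outcome_probability(c, volumes, tubes, positives):
    """Probability of one MPN outcome under the Poisson-binomial model.
    c: concentration (organisms/mL); volumes[i]: inoculum per tube at dilution i;
    tubes[i]: number of tubes at dilution i; positives[i]: tubes showing growth."""
    prob = 1.0
    for v, n, k in zip(volumes, tubes, positives):
        p = 1.0 - math.exp(-c * v)   # P(tube receives >= 1 organism)
        prob *= math.comb(n, k) * p**k * (1 - p)**(n - k)
    return prob

# A common 3-tube design with 0.1, 0.01, and 0.001 mL inocula
print(outcome_probability(c=100.0, volumes=[0.1, 0.01, 0.001],
                          tubes=[3, 3, 3], positives=[3, 1, 0]))
```

    Listing every possible outcome with its probability, as above, is the ingredient the test needs to declare an observed set of outcomes unusual.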

  2. Finite element model predictions of static deformation from dislocation sources in a subduction zone: Sensitivities to homogeneous, isotropic, Poisson-solid, and half-space assumptions

    USGS Publications Warehouse

    Masterlark, Timothy

    2003-01-01

    Dislocation models can simulate static deformation caused by slip along a fault. These models usually take the form of a dislocation embedded in a homogeneous, isotropic, Poisson-solid half-space (HIPSHS). However, the widely accepted HIPSHS assumptions poorly approximate subduction zone systems of converging oceanic and continental crust. This study uses three-dimensional finite element models (FEMs) that allow for any combination (including none) of the HIPSHS assumptions to compute synthetic Green's functions for displacement. Using the 1995 Mw = 8.0 Jalisco-Colima, Mexico, subduction zone earthquake and associated measurements from a nearby GPS array as an example, FEM-generated synthetic Green's functions are combined with standard linear inverse methods to estimate dislocation distributions along the subduction interface. Loading a forward HIPSHS model with dislocation distributions, estimated from FEMs that sequentially relax the HIPSHS assumptions, yields the sensitivity of predicted displacements to each of the HIPSHS assumptions. For the subduction zone models tested and the specific field situation considered, sensitivities to the individual Poisson-solid, isotropy, and homogeneity assumptions can be substantially greater than GPS measurement uncertainties. Forward modeling quantifies stress coupling between the Mw = 8.0 earthquake and a nearby Mw = 6.3 earthquake that occurred 63 days later. Coulomb stress changes predicted from static HIPSHS models cannot account for the 63-day lag time between events. Alternatively, an FEM that includes a poroelastic oceanic crust, which allows for postseismic pore fluid pressure recovery, can account for the lag time. The pore fluid pressure recovery rate puts an upper limit of 10^-17 m^2 on the bulk permeability of the oceanic crust. Copyright 2003 by the American Geophysical Union.

  3. A study of the dengue epidemic and meteorological factors in Guangzhou, China, by using a zero-inflated Poisson regression model.

    PubMed

    Wang, Chenggang; Jiang, Baofa; Fan, Jingchun; Wang, Furong; Liu, Qiyong

    2014-01-01

    The aim of this study is to develop a model that correctly identifies and quantifies the relationship between dengue and meteorological factors in Guangzhou, China. By cross-correlation analysis, meteorological variables and their lag effects were determined. According to the epidemic characteristics of dengue in Guangzhou, those statistically significant variables were modeled by a zero-inflated Poisson regression model. The number of dengue cases and minimum temperature at 1-month lag, along with average relative humidity at 0- to 1-month lag were all positively correlated with the prevalence of dengue fever, whereas wind velocity and temperature in the same month along with rainfall at 2 months' lag showed negative association with dengue incidence. Minimum temperature at 1-month lag and wind velocity in the same month had a greater impact on the dengue epidemic than other variables in Guangzhou.
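
    A zero-inflated Poisson mixes a point mass at zero (structural zeros, probability π) with an ordinary Poisson count; in the regression version used in this study, λ and π would be linked to the meteorological covariates and their lags. A minimal sketch with fixed, illustrative parameters:

```python
import math

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson pmf: structural zero with prob pi, else Poisson(lam)."""
    poisson = math.exp(-lam) * lam**k / math.factorial(k)
    return pi * (k == 0) + (1 - pi) * poisson

# In the regression setting one would take lam = exp(x'beta), pi = logistic(z'gamma);
# here the parameters are simply fixed for illustration.
lam, pi = 3.0, 0.4
print([round(zip_pmf(k, lam, pi), 4) for k in range(5)])
```

    The mixture has mean (1 − π)λ and P(0) = π + (1 − π)e^(−λ), which is why it accommodates the excess zeros typical of weekly dengue case counts.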

  4. Generalized poisson 3-D scatterer distributions.

    PubMed

    Laporte, Catherine; Clark, James J; Arbel, Tal

    2009-02-01

    This paper describes a simple, yet powerful ultrasound scatterer distribution model. The model extends a 1-D generalized Poisson process to multiple dimensions using a Hilbert curve. The model is intuitively tuned by spatial density and regularity parameters which reliably predict the first- and second-order statistics of varied synthetic imagery. PMID:19251530

  5. Prediction of accrual closure date in multi-center clinical trials with discrete-time Poisson process models

    PubMed Central

    Tang, Gong; Kong, Yuan; Chang, Chung-Chou Ho; Kong, Lan; Costantino, Joseph P.

    2016-01-01

    In a phase III multi-center cancer clinical trial or large public health studies, sample size is predetermined to achieve desired power and study participants are enrolled from tens or hundreds of participating institutions. As the accrual approaches the target size, the coordinating data center needs to project the accrual closure date based on the observed accrual pattern and notify the participating sites several weeks in advance. In the past, projections were simply based on some crude assessment, and conservative measures were incorporated in order to achieve the target accrual size. This approach often resulted in excessive accrual size and subsequently unnecessary financial burden on the study sponsors. Here we propose a discrete-time Poisson process-based method to estimate the accrual rate at the time of projection and subsequently the trial closure date. To ensure that the target size will be reached with high confidence, we also propose a conservative method for the closure date projection. The proposed method is illustrated through the analysis of the accrual data of NSABP trial B-38. The results show that application of the proposed method could help to save a considerable amount of expenditure in patient management without compromising the accrual goal in multi-center clinical trials. PMID:22411544
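
    A minimal sketch of a discrete-time (weekly) Poisson accrual projection (the enrollment numbers are illustrative, not taken from trial B-38): the point estimate divides the remaining accrual by the estimated weekly rate, while a conservative projection extends the horizon until the target is reached with high probability.

```python
import math
from scipy.stats import poisson

def projected_closure(n_enrolled, weeks_elapsed, target, confidence=0.95):
    """Point and conservative projections of weeks until accrual closure.
    Conservative: smallest horizon T with P(Poisson(rate*T) >= remaining) >= confidence."""
    rate = n_enrolled / weeks_elapsed        # estimated weekly accrual rate
    remaining = target - n_enrolled
    point = remaining / rate
    T = math.ceil(point)
    # poisson.sf(k-1, mu) = P(X >= k); extend T until the target is covered
    while poisson.sf(remaining - 1, rate * T) < confidence:
        T += 1
    return point, T

point, conservative = projected_closure(n_enrolled=4500, weeks_elapsed=150, target=4800)
print(f"point estimate: {point:.1f} weeks; conservative projection: {conservative} weeks")
```

    The gap between the two projections quantifies how much buffer the confidence requirement adds, which is the trade-off the paper addresses against over-accrual costs.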

  6. Filtering with Marked Point Process Observations via Poisson Chaos Expansion

    SciTech Connect

    Sun Wei; Zeng Yong; Zhang Shu

    2013-06-15

    We study a general filtering problem with marked point process observations. The motivation comes from modeling financial ultra-high-frequency data. First, we rigorously derive the unnormalized filtering equation with marked point process observations under mild assumptions, especially relaxing the bounded condition of stochastic intensity. Then, we derive the Poisson chaos expansion for the unnormalized filter. Based on the chaos expansion, we establish the uniqueness of solutions of the unnormalized filtering equation. Moreover, we derive the Poisson chaos expansion for the unnormalized filter density under additional conditions. To explore the computational advantage, we further construct a new consistent recursive numerical scheme based on the truncation of the chaos density expansion for a simple case. The new algorithm divides the computations into those containing solely system coefficients and those including the observations, and assigns the former off-line.

  7. A Poisson-Boltzmann dynamics method with nonperiodic boundary condition

    NASA Astrophysics Data System (ADS)

    Lu, Qiang; Luo, Ray

    2003-12-01

    We have developed a well-behaved and efficient finite difference Poisson-Boltzmann dynamics method with a nonperiodic boundary condition. This is made possible, in part, by a rather fine grid spacing used for the finite difference treatment of the reaction field interaction. The stability is also made possible by a new dielectric model that is smooth both over time and over space, an important issue in the application of implicit solvents. In addition, the electrostatic focusing technique facilitates the use of an accurate yet efficient nonperiodic boundary condition: boundary grid potentials computed by the sum of potentials from individual grid charges. Finally, the particle-particle particle-mesh technique is adopted in the computation of the Coulombic interaction to balance accuracy and efficiency in simulations of large biomolecules. Preliminary testing shows that the nonperiodic Poisson-Boltzmann dynamics method is numerically stable in trajectories at least 4 ns long. The new model is also fairly efficient: it is comparable to that of the pairwise generalized Born solvent model, making it a strong candidate for dynamics simulations of biomolecules in dilute aqueous solutions. Note that the current treatment of total electrostatic interactions is with no cutoff, which is important for simulations of biomolecules. Rigorous treatment of the Debye-Hückel screening is also possible within the Poisson-Boltzmann framework: its importance is demonstrated by a simulation of a highly charged protein.

  8. On the Burgers-Poisson equation

    NASA Astrophysics Data System (ADS)

    Grunert, K.; Nguyen, Khai T.

    2016-09-01

    In this paper, we prove the existence and uniqueness of weak entropy solutions to the Burgers-Poisson equation for initial data in L1 (R). In addition an Oleinik type estimate is established and some criteria on local smoothness and wave breaking for weak entropy solutions are provided.

  9. Evolutionary inference via the Poisson Indel Process.

    PubMed

    Bouchard-Côté, Alexandre; Jordan, Michael I

    2013-01-22

    We address the problem of the joint statistical inference of phylogenetic trees and multiple sequence alignments from unaligned molecular sequences. This problem is generally formulated in terms of string-valued evolutionary processes along the branches of a phylogenetic tree. The classic evolutionary process, the TKF91 model [Thorne JL, Kishino H, Felsenstein J (1991) J Mol Evol 33(2):114-124] is a continuous-time Markov chain model composed of insertion, deletion, and substitution events. Unfortunately, this model gives rise to an intractable computational problem: The computation of the marginal likelihood under the TKF91 model is exponential in the number of taxa. In this work, we present a stochastic process, the Poisson Indel Process (PIP), in which the complexity of this computation is reduced to linear. The Poisson Indel Process is closely related to the TKF91 model, differing only in its treatment of insertions, but it has a global characterization as a Poisson process on the phylogeny. Standard results for Poisson processes allow key computations to be decoupled, which yields the favorable computational profile of inference under the PIP model. We present illustrative experiments in which Bayesian inference under the PIP model is compared with separate inference of phylogenies and alignments.

  10. A generalized Poisson and Poisson-Boltzmann solver for electrostatic environments

    NASA Astrophysics Data System (ADS)

    Fisicaro, G.; Genovese, L.; Andreussi, O.; Marzari, N.; Goedecker, S.

    2016-01-01

    The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, taking into account the non-trivial electrostatic screening coming from the solvent and the electrolytes. As a consequence, the electrostatic potential has to be found by solving the generalized Poisson and the Poisson-Boltzmann equations for neutral and ionic solutions, respectively. In the present work, solvers for both problems have been developed. A preconditioned conjugate gradient method has been implemented for the solution of the generalized Poisson equation and the linear regime of the Poisson-Boltzmann equation, allowing the minimization problem to be solved iteratively within some ten iterations of the ordinary Poisson equation solver. In addition, a self-consistent procedure enables us to solve the non-linear Poisson-Boltzmann problem. Both solvers exhibit very high accuracy and parallel efficiency and allow for the treatment of periodic, free, and slab boundary conditions. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes.

  11. Additives

    NASA Technical Reports Server (NTRS)

    Smalheer, C. V.

    1973-01-01

    The chemistry of lubricant additives is discussed to show what the additives are chemically and what functions they perform in the lubrication of various kinds of equipment. Current theories regarding the mode of action of lubricant additives are presented. The additive groups discussed include the following: (1) detergents and dispersants, (2) corrosion inhibitors, (3) antioxidants, (4) viscosity index improvers, (5) pour point depressants, and (6) antifouling agents.

  12. The oligarchic structure of Paretian Poisson processes

    NASA Astrophysics Data System (ADS)

    Eliazar, I.; Klafter, J.

    2008-08-01

    Paretian Poisson processes are a mathematical model of random fractal populations governed by Paretian power law tail statistics, and connect together and underlie elemental issues in statistical physics. Considering Paretian Poisson processes to represent the wealth of individuals in human populations, we explore their oligarchic structure via the analysis of the following random ratios: the aggregate wealth of the oligarchs ranked from m+1 to n, measured relative to the wealth of the m-th oligarch (n > m). A mean analysis and a stochastic-limit analysis (as n→∞) of these ratios are conducted. We obtain closed-form results which turn out to be highly contingent on the fractal exponent of the Paretian Poisson process considered.

  13. Additive interaction in survival analysis: use of the additive hazards model.

    PubMed

    Rod, Naja Hulvej; Lange, Theis; Andersen, Ingelise; Marott, Jacob Louis; Diderichsen, Finn

    2012-09-01

    It is a widely held belief in public health and clinical decision-making that interventions or preventive strategies should be aimed at patients or population subgroups where most cases could potentially be prevented. To identify such subgroups, deviation from additivity of absolute effects is the relevant measure of interest. Multiplicative survival models, such as the Cox proportional hazards model, are often used to estimate the association between exposure and risk of disease in prospective studies. In Cox models, deviations from additivity have usually been assessed by surrogate measures of additive interaction derived from multiplicative models-an approach that is both counter-intuitive and sometimes invalid. This paper presents a straightforward and intuitive way of assessing deviation from additivity of effects in survival analysis by use of the additive hazards model. The model directly estimates the absolute size of the deviation from additivity and provides confidence intervals. In addition, the model can accommodate both continuous and categorical exposures and models both exposures and potential confounders on the same underlying scale. To illustrate the approach, we present an empirical example of interaction between education and smoking on risk of lung cancer. We argue that deviations from additivity of effects are important for public health interventions and clinical decision-making, and such estimations should be encouraged in prospective studies on health. A detailed implementation guide of the additive hazards model is provided in the appendix.
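
    On the additive hazards scale, deviation from additivity is measured by the interaction contrast IC = (h11 − h00) − [(h10 − h00) + (h01 − h00)]. A toy calculation in the spirit of the education-smoking example (all rates are made up for illustration):

```python
# Hypothetical lung-cancer hazard rates per 1000 person-years (illustrative only)
h00 = 0.2   # high education, never-smoker (reference)
h01 = 0.4   # low education, never-smoker
h10 = 2.0   # high education, smoker
h11 = 3.1   # low education, smoker

# Deviation from additivity of absolute effects (simplifies to h11 - h10 - h01 + h00)
ic = (h11 - h00) - ((h10 - h00) + (h01 - h00))
print(f"interaction contrast: {ic:+.2f} per 1000 person-years")
# ic > 0 means the joint exposure adds more hazard than the sum of the separate effects
```

    The additive hazards model estimates exactly this quantity (with a confidence interval) from survival data, rather than inferring it indirectly from ratio measures.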

  14. Modeling techniques for gaining additional urban space

    NASA Astrophysics Data System (ADS)

    Thunig, Holger; Naumann, Simone; Siegmund, Alexander

    2009-09-01

    One of the major accompaniments of globalization is the rapid growth of urban areas. Urban sprawl is the main environmental problem affecting cities across different characteristics and continents. Various reasons for the increase in urban sprawl in the last 10 to 30 years have been proposed [1], and often depend on the socio-economic situation of cities. The quantitative reduction and the sustainable handling of land should be achieved by inner urban development instead of expanding urban regions. Following the principle "spare the urban fringe, develop the inner suburbs first" requires differentiated tools allowing for quantitative and qualitative appraisals of current building potentials. Using high-spatial-resolution remote sensing data within an object-based approach enables the detection of potential areas, while GIS data provides information for the quantitative valuation. This paper presents techniques for modeling the urban environment and opportunities for utilization of the retrieved information by urban planners for their special needs.

  15. Short-Term Effects of Climatic Variables on Hand, Foot, and Mouth Disease in Mainland China, 2008–2013: A Multilevel Spatial Poisson Regression Model Accounting for Overdispersion

    PubMed Central

    Yang, Fang; Yang, Min; Hu, Yuehua; Zhang, Juying

    2016-01-01

    Background Hand, Foot, and Mouth Disease (HFMD) is a worldwide infectious disease. In China, many provinces have reported HFMD cases, especially the south and southwest provinces. Many studies have found a strong association between the incidence of HFMD and climatic factors such as temperature, rainfall, and relative humidity. However, few studies have analyzed cluster effects between various geographical units. Methods The nonlinear relationships and lag effects between weekly HFMD cases and climatic variables were estimated for the period of 2008–2013 using a polynomial distributed lag model. The extra-Poisson multilevel spatial polynomial model was used to model the exact relationship between weekly HFMD incidence and climatic variables after accounting for cluster effects, the provincial correlated structure of HFMD incidence, and overdispersion. Smoothing spline methods were used to detect threshold effects between climatic factors and HFMD incidence. Results HFMD incidence was spatially heterogeneous among provinces, and the scale measurement of overdispersion was 548.077. After controlling for long-term trends, spatial heterogeneity, and overdispersion, temperature was highly associated with HFMD incidence. Weekly average temperature and weekly temperature difference showed approximately inverse-“V”-shaped and “V”-shaped relationships, respectively, with HFMD incidence. The lag effects for weekly average temperature and weekly temperature difference were 3 weeks and 2 weeks. Highly spatially correlated HFMD incidence was detected in northern, central, and southern provinces. Temperature explained most of the variation in HFMD incidence in southern and northeastern provinces. After adjustment for temperature, eastern and northern provinces still showed high variation in HFMD incidence. Conclusion We found a relatively strong association between weekly HFMD incidence and weekly average temperature. The association between the HFMD incidence and climatic

  16. Fractal Poisson processes

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo; Klafter, Joseph

    2008-09-01

    The Central Limit Theorem (CLT) and Extreme Value Theory (EVT) study, respectively, the stochastic limit-laws of sums and maxima of sequences of independent and identically distributed (i.i.d.) random variables via an affine scaling scheme. In this research we study the stochastic limit-laws of populations of i.i.d. random variables via nonlinear scaling schemes. The stochastic population-limits obtained are fractal Poisson processes which are statistically self-similar with respect to the scaling scheme applied, and which are characterized by two elemental structures: (i) a universal power-law structure common to all limits, and independent of the scaling scheme applied; (ii) a specific structure contingent on the scaling scheme applied. The sum-projection and the maximum-projection of the population-limits obtained are generalizations of the classic CLT and EVT results - extending them from affine to general nonlinear scaling schemes.

  17. Solves Poisson's Equation in Axisymmetric Geometry on a Rectangular Mesh

    1996-09-10

    DATHETA4.0 computes the magnetostatic field produced by multiple point current sources in the presence of perfect conductors in axisymmetric geometry. DATHETA4.0 has an interactive user interface and solves Poisson's equation using the ADI method on a rectangular finite-difference mesh. DATHETA4.0 includes models specific to applied-B ion diodes.

  18. Supervised Gamma Process Poisson Factorization

    SciTech Connect

    Anderson, Dylan Zachary

    2015-05-01

    This thesis develops the supervised gamma process Poisson factorization (S-GPPF) framework, a novel supervised topic model for joint modeling of count matrices and document labels. S-GPPF is fully generative and nonparametric: document labels and count matrices are modeled under a unified probabilistic framework and the number of latent topics is controlled automatically via a gamma process prior. The framework provides for multi-class classification of documents using a generative max-margin classifier. Several recent data augmentation techniques are leveraged to provide for exact inference using a Gibbs sampling scheme. The first portion of this thesis reviews supervised topic modeling and several key mathematical devices used in the formulation of S-GPPF. The thesis then introduces the S-GPPF generative model and derives the conditional posterior distributions of the latent variables for posterior inference via Gibbs sampling. The S-GPPF is shown to exhibit state-of-the-art performance for joint topic modeling and document classification on a dataset of conference abstracts, beating out competing supervised topic models. The unique properties of S-GPPF along with its competitive performance make it a novel contribution to supervised topic modeling.

  19. Algorithm Calculates Cumulative Poisson Distribution

    NASA Technical Reports Server (NTRS)

    Bowerman, Paul N.; Nolty, Robert C.; Scheuer, Ernest M.

    1992-01-01

    Algorithm calculates accurate values of cumulative Poisson distribution under conditions where other algorithms fail because numbers are so small (underflow) or so large (overflow) that computer cannot process them. Factors inserted temporarily to prevent underflow and overflow. Implemented in CUMPOIS computer program described in "Cumulative Poisson Distribution Program" (NPO-17714).
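The CUMPOIS program itself is not reproduced in the record, but the underflow/overflow-avoiding idea can be sketched as a log-space (log-sum-exp) computation; the function name and details below are ours, not NPO-17714's:

```python
import math

def cumpois(k, lam):
    """Cumulative Poisson probability P(X <= k), computed in log space so
    that terms like lam**i, i!, and exp(-lam) never appear as raw numbers
    that could underflow or overflow double precision."""
    # log of each term: i*log(lam) - lam - log(i!)
    log_terms = [i * math.log(lam) - lam - math.lgamma(i + 1) for i in range(k + 1)]
    m = max(log_terms)  # factor out the largest term (log-sum-exp trick)
    return math.exp(m) * sum(math.exp(t - m) for t in log_terms)
```

Directly evaluating, say, exp(-1000) * 1000**i / i! would underflow or overflow long before the terms are combined; in log space each term stays comfortably representable.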

  20. Poisson Spot with Magnetic Levitation

    ERIC Educational Resources Information Center

    Hoover, Matthew; Everhart, Michael; D'Arruda, Jose

    2010-01-01

    In this paper we describe a unique method for obtaining the famous Poisson spot without adding obstacles to the light path that could interfere with the effect. A Poisson spot is the interference effect from parallel rays of light diffracting around a solid spherical object, creating a bright spot in the center of the shadow.

  1. Mean-square state and parameter estimation for stochastic linear systems with Gaussian and Poisson noises

    NASA Astrophysics Data System (ADS)

    Basin, M.; Maldonado, J. J.; Zendejo, O.

    2016-07-01

    This paper proposes new mean-square filter and parameter estimator designs for linear stochastic systems with unknown parameters over linear observations, where the unknown parameters are considered as combinations of Gaussian and Poisson white noises. The problem is treated by reducing the original problem to a filtering problem for an extended state vector that includes the parameters as additional states, modelled as combinations of independent Gaussian and Poisson processes. The solution to this filtering problem is based on the mean-square filtering equations for incompletely measured polynomial states confused with Gaussian and Poisson noises over linear observations. The resulting mean-square filter serves as an identifier for the unknown parameters. Finally, a simulation example shows the effectiveness of the proposed mean-square filter and parameter estimator.

  2. Intertime jump statistics of state-dependent Poisson processes.

    PubMed

    Daly, Edoardo; Porporato, Amilcare

    2007-01-01

    A method to obtain the probability distribution of the interarrival times of jump occurrences in systems driven by state-dependent Poisson noise is proposed. Such a method uses the survivor function obtained by a modified version of the master equation associated to the stochastic process under analysis. A model for the timing of human activities shows the capability of state-dependent Poisson noise to generate power-law distributions. The application of the method to a model for neuron dynamics and to a hydrological model accounting for land-atmosphere interaction elucidates the origin of characteristic recurrence intervals and possible persistence in state-dependent Poisson models.

  3. Intertime jump statistics of state-dependent Poisson processes

    NASA Astrophysics Data System (ADS)

    Daly, Edoardo; Porporato, Amilcare

    2007-01-01

    A method to obtain the probability distribution of the interarrival times of jump occurrences in systems driven by state-dependent Poisson noise is proposed. Such a method uses the survivor function obtained by a modified version of the master equation associated to the stochastic process under analysis. A model for the timing of human activities shows the capability of state-dependent Poisson noise to generate power-law distributions. The application of the method to a model for neuron dynamics and to a hydrological model accounting for land-atmosphere interaction elucidates the origin of characteristic recurrence intervals and possible persistence in state-dependent Poisson models.
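As an illustration of state-dependent Poisson timing (a toy sketch, not the paper's master-equation/survivor-function method), consider the simple case where the state, and hence the jump rate, changes only at jumps; each interarrival time is then exactly exponential at the current rate. The dynamics and names below are ours:

```python
import random

def simulate_interarrivals(rate_fn, update_fn, x0, n, seed=0):
    """Interarrival times of a jump process driven by state-dependent
    Poisson noise, in the special case where the state x changes only at
    jumps: between jumps the rate rate_fn(x) is constant, so each waiting
    time is exponential with that rate."""
    rng = random.Random(seed)
    x, waits = x0, []
    for _ in range(n):
        waits.append(rng.expovariate(rate_fn(x)))  # wait at the current rate
        x = update_fn(x)                           # jump updates the state (and rate)
    return waits

# Toy dynamics: the rate doubles at each jump up to a cap, so early waits
# are long and later waits cluster tightly.
waits = simulate_interarrivals(lambda x: x, lambda x: min(2.0 * x, 64.0), 1.0, 2000)
```

Mixing rates in this way produces interarrival distributions that are no longer simple exponentials, the mechanism by which state-dependent Poisson noise can generate heavy-tailed timing statistics.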

  4. Poisson-Boltzmann model for protein-surface electrostatic interactions and grid-convergence study using the PyGBe code

    NASA Astrophysics Data System (ADS)

    Cooper, Christopher D.; Barba, Lorena A.

    2016-05-01

    Interactions between surfaces and proteins occur in many vital processes and are crucial in biotechnology: the ability to control specific interactions is essential in fields like biomaterials, biomedical implants and biosensors. In the latter case, biosensor sensitivity hinges on ligand proteins adsorbing on bioactive surfaces with a favorable orientation, exposing reaction sites to target molecules. Protein adsorption, being a free-energy-driven process, is difficult to study experimentally. This paper develops and evaluates a computational model to study electrostatic interactions of proteins and charged nanosurfaces, via the Poisson-Boltzmann equation. We extended the implicit-solvent model used in the open-source code PyGBe to include surfaces of imposed charge or potential. This code solves the boundary integral formulation of the Poisson-Boltzmann equation, discretized with surface elements. PyGBe has at its core a treecode-accelerated Krylov iterative solver, resulting in O(N log N) scaling, with further acceleration on hardware via multi-threaded execution on GPUs. It computes solvation and surface free energies, providing a framework for studying the effect of electrostatics on adsorption. We derived an analytical solution for a spherical charged surface interacting with a spherical dielectric cavity, and used it in a grid-convergence study to build evidence on the correctness of our approach. The study showed the error decaying with the average area of the boundary elements, i.e., the method is O(1 / N) , which is consistent with our previous verification studies using PyGBe. We also studied grid-convergence using a real molecular geometry (protein G B1 D4‧), in this case using Richardson extrapolation (in the absence of an analytical solution) and confirmed the O(1 / N) scaling. With this work, we can now access a completely new family of problems, which no other major bioelectrostatics solver, e.g. APBS, is capable of dealing with. PyGBe is open

  5. Salt effects on polyelectrolyte-ligand binding: comparison of Poisson-Boltzmann, and limiting law/counterion binding models.

    PubMed

    Sharp, K A; Friedman, R A; Misra, V; Hecht, J; Honig, B

    1995-08-01

    The theory for salt dependence of the free energy, entropy, and enthalpy of a polyelectrolyte in the Poisson-Boltzmann (PB) model is extended to treat the nonspecific salt dependence of polyelectrolyte-ligand binding reactions. The salt dependence of the binding constant (K) is given by the difference in osmotic pressure terms between the reactants and products. For simple 1-1 salts it is shown that this treatment is equivalent to the general preferential interaction model for the salt dependence of binding [C. Anderson and M. Record (1993) Journal of Physical Chemistry, Vol. 97, pp. 7116-7126]. The salt dependence, entropy, and enthalpy are compared for the PB model and one specific form of the preferential interaction coefficient model that uses counterion condensation/limiting law (LL) behavior. The PB and LL models are applied to three ligand-polyelectrolyte systems with the same net ligand charge: a model sphere-cylinder binding reaction, a drug-DNA binding reaction, and a protein-DNA binding reaction. For the small ligands both the PB and limiting law models give (ln K vs. ln[salt]) slopes close in magnitude to the net ligand charge. However, the enthalpy/entropy breakdown of the salt dependence is quite different. In the PB model there are considerable contributions from electrostatic enthalpy and dielectric (water reorientation) entropy, compared to the predominant ion cratic (release) entropy in the limiting law model. The relative contributions of these three terms in the PB model depend on the ligand: for the protein, ion release entropy is the smallest contribution to the salt dependence of binding. The effect of three approximations made in the LL model is examined: these approximations are (1) the ligand behaves ideally, (2) the preferential interaction coefficient of the polyelectrolyte is unchanged upon ligand binding, and (3) the polyelectrolyte preferential interaction coefficient is given by the limiting law/counterion-condensation value. Analysis of the PB

  6. Resources allocation in healthcare for cancer: a case study using generalised additive mixed models.

    PubMed

    Musio, Monica; Sauleau, Erik A; Augustin, Nicole H

    2012-11-01

    Our aim is to develop a method for helping resource re-allocation in healthcare linked to cancer, in order to replan the allocation of providers. Ageing of the population has a considerable impact on the use of health resources, because aged people require more specialised medical care, due notably to cancer. We propose a method useful for monitoring changes of cancer incidence in space and time, taking into account two age categories, according to the general organisation of healthcare. We use generalised additive mixed models with a Poisson response, according to the methodology presented in Wood, Generalised Additive Models: An Introduction with R (Chapman and Hall/CRC, 2006). Besides one-dimensional smooth functions accounting for non-linear effects of covariates, the space-time interaction can be modelled using scale-invariant smoothers. Incidence data collected by a general cancer registry between 1992 and 2007 in a specific area of France are studied. Our best model exhibits a strong increase of the incidence of cancer over time and an obvious spatial pattern for people older than 70 years, with a higher incidence in the central band of the region. This is a strong argument for re-allocating resources for cancer care of the elderly in this sub-region. PMID:23242683

  7. Resources allocation in healthcare for cancer: a case study using generalised additive mixed models.

    PubMed

    Musio, Monica; Sauleau, Erik A; Augustin, Nicole H

    2012-11-01

    Our aim is to develop a method for helping resource re-allocation in healthcare linked to cancer, in order to replan the allocation of providers. Ageing of the population has a considerable impact on the use of health resources, because aged people require more specialised medical care, due notably to cancer. We propose a method useful for monitoring changes of cancer incidence in space and time, taking into account two age categories, according to the general organisation of healthcare. We use generalised additive mixed models with a Poisson response, according to the methodology presented in Wood, Generalised Additive Models: An Introduction with R (Chapman and Hall/CRC, 2006). Besides one-dimensional smooth functions accounting for non-linear effects of covariates, the space-time interaction can be modelled using scale-invariant smoothers. Incidence data collected by a general cancer registry between 1992 and 2007 in a specific area of France are studied. Our best model exhibits a strong increase of the incidence of cancer over time and an obvious spatial pattern for people older than 70 years, with a higher incidence in the central band of the region. This is a strong argument for re-allocating resources for cancer care of the elderly in this sub-region.

  8. Solution of Poisson's equation in a volume conductor using resistor mesh models: Application to event related potential imaging

    NASA Astrophysics Data System (ADS)

    Franceries, X.; Doyon, B.; Chauveau, N.; Rigaud, B.; Celsis, P.; Morucci, J.-P.

    2003-03-01

    In electroencephalography (EEG) and event related potentials (ERP), localizing the electrical sources at the origin of scalp potentials (inverse problem) imposes, in a first step, the computation of the scalp potential distribution from the simulation of sources (forward problem). This article proposes an alternative method for mimicking both the electrical and geometrical properties of the head, including brain, skull, and scalp tissue, with resistors. Two resistor mesh models have been designed to reproduce the three-sphere reference model (analytical model). The first one (spherical resistor mesh) closely mimics the geometrical and electrical properties of the analytical model. The second one (cubic resistor mesh) is designed to conveniently handle anatomical data from magnetic resonance imaging. Both models have been validated, in reference to the analytical solution calculated on the three-sphere model, by computing the magnification factor and the relative difference measure. Results suggest that the mesh models can be used as robust and user-friendly simulation or exploration tools in EEG/ERP.

  9. Efficient self-consistent Schrödinger-Poisson-rate equation iteration method for the modeling of strained quantum cascade lasers

    NASA Astrophysics Data System (ADS)

    Li, Jian; Ma, Xunpeng; Wei, Xin; Jiang, Yu; Fu, Dong; Wu, Haoyue; Song, Guofeng; Chen, Lianghui

    2016-05-01

    We present an efficient method for the calculation of the transmission characteristics of quantum cascade lasers (QCLs). A full Schrödinger-Poisson-rate-equation iteration with a strain term is used in our calculation. The two-band strain term of the Schrödinger equation is derived from the eight-band Hamiltonian. The equivalent strain energy, which affects the effective mass and raises the energy levels, is introduced to include the biaxial strain in the conduction-band profile. We simplified the model of the electron-electron scattering process and improved the calculation efficiency by about two orders of magnitude. The thermal backfilling effect is optimized by replacing the lattice temperature with the electron temperature. The quasi-subband Fermi level is used to calculate the electron density of the laser subbands. Compared with the experimental results, our method gives a reasonable threshold current (depending on the assumed waveguide loss and scattering processes) and a more accurate wavelength, making the method efficient and practical for QCL simulations.

  10. Speech parts as Poisson processes.

    PubMed

    Badalamenti, A F

    2001-09-01

    This paper presents evidence that six of the seven parts of speech occur in written text as Poisson processes, simple or recurring. The six major parts are nouns, verbs, adjectives, adverbs, prepositions, and conjunctions, with the interjection occurring too infrequently to support a model. The data consist of more than the first 5000 words of works by four major authors, coded to label the parts of speech, as well as periods (sentence terminators). Sentence length is measured via the period and found to be normally distributed, with no stochastic model identified for its occurrence. The models for all six speech parts but the noun significantly distinguish some pairs of authors, and likewise for the joint use of all word types. Any one author is significantly distinguished from any other by at least one word type, and sentence length very significantly distinguishes each from all others. The variety of word-type use, measured by Shannon entropy, builds to about 90% of its maximum possible value. The rate constants for nouns are close to the fractions of maximum entropy achieved. This finding, together with the stochastic models and the relations among them, suggests that the noun may be a primitive organizer of written text.
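The entropy-fraction measure described above, Shannon entropy of the word-type frequencies as a share of its maximum possible value, is straightforward to compute from token counts. This is a generic sketch, not the paper's coding pipeline:

```python
import math
from collections import Counter

def entropy_fraction(tokens):
    """Shannon entropy of the word-type distribution, divided by its maximum
    possible value log(number of distinct types) -- the usage-variety
    measure the abstract reports building to about 90%."""
    counts = Counter(tokens)
    if len(counts) < 2:
        return 0.0  # a single type carries no variety
    n = sum(counts.values())
    h = -sum((c / n) * math.log(c / n) for c in counts.values())
    return h / math.log(len(counts))
```

A uniform mix of types gives a fraction of 1.0, while a heavily skewed mix gives a much smaller value.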

  11. Measurement of the impedance of aqueous solutions of KCl: An analysis using an extension of the Poisson-Nernst-Planck model

    NASA Astrophysics Data System (ADS)

    Duarte, A. R.; Batalioto, F.; Barbero, G.; Figueiredo Neto, A. M.

    2014-07-01

    We investigate the frequency dependence of the real and imaginary parts of the electric impedance of a cell with titanium electrodes, filled with aqueous solutions of KCl in different concentrations. Our experimental data are interpreted by means of an extension of the Poisson-Nernst-Planck model, assuming that the electrodes are not blocking and are well described by Ohmic boundary conditions, and that two groups of ions are responsible for the electric conduction. One group is due to the dissociation of KCl in water (majority carriers), the other to the impurities dissolved in water or present in KCl (minority carriers), whose bulk density is very small with respect to the first group. The agreement between the experimental data and the theoretical predictions is good, taking into account the small number of free parameters entering the model. In particular, the diffusion coefficients for the potassium and chloride ions compare well with those reported in the literature. According to our analysis, the carriers related to the impurities present in the solution play a fundamental role in the fit of the experimental data in the low-frequency region. The presented model, where two groups of ions are present, with the assumption of equal mobilities for positive and negative charges in a group, is motivated by the experimental evidence that in aqueous solutions of KCl, K+ and Cl- have approximately the same mobilities. Since the PNP model for an electrolytic solution of the type considered here predicts an electric response similar to that of an electrolytic solution where the positive and negative ions have different mobilities, a comparison with the results reported recently by Macdonald is presented [J. R. Macdonald, Electrochim. Acta, 123, 535 (2014)]. An alternative interpretation of our experimental results, related to the assumption of non-blocking electrodes, is also discussed.

  12. Irreversible thermodynamics of Poisson processes with reaction.

    PubMed

    Méndez, V; Fort, J

    1999-11-01

    A kinetic model is derived to study the successive movements of particles, described by a Poisson process, as well as their generation. The irreversible thermodynamics of this system is also studied from the kinetic model. This makes it possible to evaluate the differences between thermodynamical quantities computed exactly and up to second-order. Such differences determine the range of validity of the second-order approximation to extended irreversible thermodynamics.

  13. Irreversible thermodynamics of Poisson processes with reaction

    NASA Astrophysics Data System (ADS)

    Méndez, Vicenç; Fort, Joaquim

    1999-11-01

    A kinetic model is derived to study the successive movements of particles, described by a Poisson process, as well as their generation. The irreversible thermodynamics of this system is also studied from the kinetic model. This makes it possible to evaluate the differences between thermodynamical quantities computed exactly and up to second-order. Such differences determine the range of validity of the second-order approximation to extended irreversible thermodynamics.

  14. Simulations of Cyclic Voltammetry for Electric Double Layers in Asymmetric Electrolytes: A Generalized Modified Poisson-Nernst-Planck Model

    SciTech Connect

    Wang, Hainan; Thiele, Alexander; Pilon, Laurent

    2013-11-15

    This paper presents a generalized modified Poisson–Nernst–Planck (MPNP) model derived from first principles based on excess chemical potential and Langmuir activity coefficient to simulate electric double-layer dynamics in asymmetric electrolytes. The model accounts simultaneously for (1) asymmetric electrolytes with (2) multiple ion species, (3) finite ion sizes, and (4) Stern and diffuse layers along with Ohmic potential drop in the electrode. It was used to simulate cyclic voltammetry (CV) measurements for binary asymmetric electrolytes. The results demonstrated that the current density increased significantly with decreasing ion diameter and/or increasing valency |zi| of either ion species. By contrast, the ion diffusion coefficients affected the CV curves and capacitance only at large scan rates. Dimensional analysis was also performed, and 11 dimensionless numbers were identified to govern the CV measurements of the electric double layer in binary asymmetric electrolytes between two identical planar electrodes of finite thickness. A self-similar behavior was identified for the electric double-layer integral capacitance estimated from CV measurement simulations. Two regimes were identified by comparing the half cycle period τCV and the “RC time scale” τRC corresponding to the characteristic time of ions’ electrodiffusion. For τRC ≪ τCV, quasi-equilibrium conditions prevailed and the capacitance was diffusion-independent, while for τRC ≫ τCV, the capacitance was diffusion-limited. The effect of the electrode was captured by the dimensionless electrode electrical conductivity representing the ratio of characteristic times associated with charge transport in the electrolyte and that in the electrode. The model developed here will be useful for simulating and designing various practical electrochemical, colloidal, and biological systems for a wide range of applications.

  15. On classification of discrete, scalar-valued Poisson brackets

    NASA Astrophysics Data System (ADS)

    Parodi, E.

    2012-10-01

    We address the problem of classifying discrete differential-geometric Poisson brackets (dDGPBs) of any fixed order on a target space of dimension 1. We prove that these Poisson brackets (PBs) are in one-to-one correspondence with the intersection points of certain projective hypersurfaces. In addition, they can be reduced to a cubic PB of the standard Volterra lattice by discrete Miura-type transformations. Finally, by improving a lattice consolidation procedure, we obtain new families of non-degenerate, vector-valued and first-order dDGPBs that can be considered in the framework of admissible Lie-Poisson group theory.

  16. Analysis of overdispersed count data by mixtures of Poisson variables and Poisson processes.

    PubMed

    Hougaard, P; Lee, M L; Whitmore, G A

    1997-12-01

    Count data often show overdispersion compared to the Poisson distribution. Overdispersion is typically modeled by a random effect for the mean, based on the gamma distribution, leading to the negative binomial distribution for the count. This paper considers a larger family of mixture distributions, including the inverse Gaussian mixture distribution. It is demonstrated that it gives a significantly better fit for a data set on the frequency of epileptic seizures. The same approach can be used to generate counting processes from Poisson processes, where the rate or the time is random. A random rate corresponds to variation between patients, whereas a random time corresponds to variation within patients.
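The gamma-mixed Poisson construction described above (whose marginal is the negative binomial) is easy to check by simulation; the sampler below is a generic sketch, not the paper's fitting procedure, and the parameter values are illustrative:

```python
import math
import random
import statistics

def poisson_sample(rng, lam):
    """Knuth's multiplicative Poisson sampler (adequate for moderate lam)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def gamma_poisson_counts(n, mean, shape, seed=42):
    """Counts from a gamma-mixed Poisson: each unit's rate is gamma with
    E[rate] = mean and Var[rate] = mean**2/shape, so the marginal counts are
    negative binomial with Var = mean + mean**2/shape > mean."""
    rng = random.Random(seed)
    return [poisson_sample(rng, rng.gammavariate(shape, mean / shape))
            for _ in range(n)]

counts = gamma_poisson_counts(5000, mean=3.0, shape=1.5)
m, v = statistics.mean(counts), statistics.variance(counts)
# For mean=3, shape=1.5 the marginal variance is 3 + 9/1.5 = 9, three times the mean.
```

Replacing the gamma mixing distribution with an inverse Gaussian, as the paper proposes, changes the tail of the mixture while keeping the same overdispersion mechanism.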

  17. Criteria for deviation from predictions by the concentration addition model.

    PubMed

    Takeshita, Jun-Ichi; Seki, Masanori; Kamo, Masashi

    2016-07-01

    Loewe's additivity (concentration addition) is a well-known model for predicting the toxic effects of chemical mixtures under the additivity assumption of toxicity. However, from the perspective of chemical risk assessment and/or management, it is important to identify chemicals whose toxicities are additive when present concurrently; that is, it should be established whether there are chemical mixtures to which the concentration addition predictive model can be applied. The objective of the present study was to develop criteria for judging test results that deviated from the predictions of the concentration addition chemical mixture model. These criteria were based on the confidence interval of the concentration addition model's prediction and on estimation of errors of the predicted concentration-effect curves by toxicity tests after exposure to single chemicals. A log-logit model with 2 parameters was assumed for the concentration-effect curve of each individual chemical. These parameters were determined by the maximum-likelihood method, and the criteria were defined using the variances and the covariance of the parameters. In addition, the criteria were applied to a toxicity test of a binary mixture of p-n-nonylphenol and p-n-octylphenol using the Japanese killifish, medaka (Oryzias latipes). Consequently, the concentration addition model using the confidence interval was capable of predicting the test results at any level, and no reason for rejecting concentration addition was found. Environ Toxicol Chem 2016;35:1806-1814. © 2015 SETAC. PMID:26660330
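A minimal sketch of the concentration addition prediction with two-parameter log-logit curves, as described above: the mixture effect x solves sum_i c_i / EC_x,i = 1. The function names, the bisection solver, and the parameter values are illustrative, not the paper's fitted values:

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def ec(x, a, b):
    """Single-chemical concentration giving effect fraction x under a
    two-parameter log-logit curve p(c) = 1 / (1 + exp(-(a + b*log c)))."""
    return math.exp((logit(x) - a) / b)

def ca_effect(concs, params):
    """Concentration addition prediction for a mixture: solve
    sum_i c_i / EC_x,i = 1 for the effect fraction x by bisection."""
    def f(x):
        return sum(c / ec(x, a, b) for c, (a, b) in zip(concs, params)) - 1.0
    lo, hi = 1e-9, 1.0 - 1e-9  # f is decreasing in x: f(lo) > 0 > f(hi)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Two identical hypothetical chemicals, each alone giving 50% effect at c = 1:
# under CA, a half-dose of each must reproduce that 50% effect.
effect = ca_effect([0.5, 0.5], [(0.0, 1.0), (0.0, 1.0)])
```

The paper's deviation criteria would then compare observed mixture effects against this prediction plus a confidence band derived from the single-chemical parameter variances.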

  18. Doubly stochastic Poisson processes in artificial neural learning.

    PubMed

    Card, H C

    1998-01-01

    This paper investigates neuron activation statistics in artificial neural networks employing stochastic arithmetic. It is shown that a doubly stochastic Poisson process is an appropriate model for the signals in these circuits.

  19. Evolution of Fermionic Systems as AN Expectation Over Poisson Processes

    NASA Astrophysics Data System (ADS)

    Beccaria, M.; Presilla, C.; de Angelis, G. F.; Jona-Lasinio, G.

    We derive an exact probabilistic representation for the evolution of a Hubbard model with site- and spin-dependent hopping coefficients and site-dependent interactions in terms of an associated stochastic dynamics of a collection of Poisson processes.

  20. Background stratified Poisson regression analysis of cohort data

    PubMed Central

    Langholz, Bryan

    2012-01-01

    Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as ‘nuisance’ variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this ‘conditional’ regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models. PMID:22193911

  1. Background stratified Poisson regression analysis of cohort data.

    PubMed

    Richardson, David B; Langholz, Bryan

    2012-03-01

    Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models. PMID:22193911
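The profiling idea described above, eliminating each stratum's intercept at its conditional maximum so that only the exposure coefficient remains to be estimated, can be sketched as follows. The toy data, function names, and scalar search are ours; the actual approach accommodates general relative rate models:

```python
import math

def profile_loglik(beta, strata):
    """Poisson log-likelihood with stratum intercepts profiled out: for each
    stratum, the intercept MLE given beta is log(sum y / sum pt*exp(beta*z)),
    so the stratum-specific parameters never need to be estimated jointly."""
    ll = 0.0
    for cells in strata:  # each cell: (cases y, person-time pt, exposure z)
        ytot = sum(y for y, pt, z in cells)
        mu0 = sum(pt * math.exp(beta * z) for y, pt, z in cells)
        a = math.log(ytot / mu0)  # profiled stratum intercept
        for y, pt, z in cells:
            lam = pt * math.exp(a + beta * z)
            ll += y * math.log(lam) - lam  # drop the log(y!) constant
    return ll

def fit_beta(strata, lo=-5.0, hi=5.0):
    """Golden-section search on the (concave) profile log-likelihood."""
    g = (math.sqrt(5.0) - 1.0) / 2.0
    for _ in range(200):
        m1, m2 = hi - g * (hi - lo), lo + g * (hi - lo)
        if profile_loglik(m1, strata) < profile_loglik(m2, strata):
            lo = m1
        else:
            hi = m2
    return 0.5 * (lo + hi)

# Hypothetical two-stratum cohort with a rate ratio of exactly 2 in each stratum:
strata = [
    [(10, 100.0, 0.0), (20, 100.0, 1.0)],  # stratum 1
    [(5, 50.0, 0.0), (10, 50.0, 1.0)],     # stratum 2: different baseline rate
]
beta_hat = fit_beta(strata)  # should recover log(2)
```

This matches the record's claim that the conditional approach reproduces the unconditional fit with explicit stratum indicator terms, while never materializing those terms.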

  2. Lamb wave propagation in negative Poisson's ratio composites

    NASA Astrophysics Data System (ADS)

    Remillat, Chrystel; Wilcox, Paul; Scarpa, Fabrizio

    2008-03-01

    Lamb wave propagation is evaluated for cross-ply laminate composites exhibiting through-the-thickness negative Poisson's ratio. The laminates are mechanically modeled using Classical Laminate Theory, while the propagation of Lamb waves is investigated using a combination of semi-analytical models and finite element time-stepping techniques. The auxetic laminates exhibit well-spaced bending, shear, and symmetric fundamental modes, while featuring normal stresses for the A0 mode three times lower than in composite laminates with positive Poisson's ratio.

  3. Sparsity-based Poisson denoising with dictionary learning.

    PubMed

    Giryes, Raja; Elad, Michael

    2014-12-01

    The problem of Poisson denoising appears in various imaging applications, such as low-light photography, medical imaging, and microscopy. In cases of high SNR, several transformations exist that convert the Poisson noise into additive, independent and identically distributed Gaussian noise, for which many effective algorithms are available. However, in a low-SNR regime, these transformations are significantly less accurate, and a strategy that relies directly on the true noise statistics is required. Salmon et al. took this route, proposing a patch-based exponential image representation model based on a Gaussian mixture model, leading to state-of-the-art results. In this paper, we propose to harness sparse-representation modeling of the image patches, adopting the same exponential idea. Our scheme uses a greedy pursuit with a bootstrapping-based stopping condition and dictionary learning within the denoising process. The reconstruction performance of the proposed scheme is competitive with leading methods in high SNR and achieves state-of-the-art results in cases of low SNR. PMID:25312930
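
The high-SNR transformations mentioned above are variance-stabilizing maps such as the Anscombe transform; a short illustration (not the authors' algorithm) of both the stabilization and its low-count breakdown, assuming NumPy:

```python
import numpy as np

def anscombe(x):
    # Anscombe transform: Poisson counts -> approximately unit-variance Gaussian
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

rng = np.random.default_rng(1)
variances = {lam: anscombe(rng.poisson(lam, 200_000)).var() for lam in (0.5, 5.0, 50.0)}
# The variance sits near 1 for large means but drifts far from 1 at low counts,
# which is exactly the low-SNR regime where direct Poisson modeling is needed.
```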

  4. Poisson validity for orbital debris: II. Combinatorics and simulation

    NASA Astrophysics Data System (ADS)

    Fudge, Michael L.; Maclay, Timothy D.

    1997-10-01

    The International Space Station (ISS) will be at risk from orbital debris and micrometeorite impact (i.e., an impact that penetrates a critical component, possibly leading to loss of life). In support of ISS, last year the authors examined a fundamental assumption upon which the modeling of risk is based; namely, the assertion that the orbital collision problem can be modeled using a Poisson distribution. The assumption was found to be appropriate based upon the Poisson's general use as an approximation for the binomial distribution and the fact that it is proper to physically model exposure to the orbital debris flux environment using the binomial. This paper examines another fundamental issue in the expression of risk posed to space structures: the methodology by which individual incremental collision probabilities are combined to express an overall collision probability. The specific situation of ISS in this regard is that the determination of the level of safety for ISS is made via a single overall expression of critical component penetration risk. This paper details the combinatorial mathematical methods for calculating and expressing individual component (or incremental) penetration risks, utilizing component risk probabilities to produce an overall station penetration risk probability, and calculating an expected probability of loss from estimates for the loss of life given a penetration. Additionally, the paper will examine whether the statistical Poissonian answer to the orbital collision problem can be favorably compared to the results of a Monte Carlo simulation.
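
The combinatorial step described here reduces, under an independence assumption, to complementary multiplication of the per-component survival probabilities; the Poissonian shortcut replaces the product with an exponential of the summed risks. A sketch with hypothetical component risks:

```python
import math

def overall_risk(incremental_probs):
    # Probability of at least one critical penetration, assuming independent components
    return 1.0 - math.prod(1.0 - p for p in incremental_probs)

def expected_loss(incremental_probs, p_loss_given_pen):
    # Expected probability of loss of life, given a per-penetration loss likelihood
    return overall_risk(incremental_probs) * p_loss_given_pen

component_risks = [0.001, 0.002, 0.0005]                  # hypothetical values
exact = overall_risk(component_risks)
poisson = 1.0 - math.exp(-sum(component_risks))           # Poissonian approximation
# For small risks the two expressions agree to several decimal places.
```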

  5. Rethinking Poisson-based statistics for ground water quality monitoring

    SciTech Connect

    Loftis, J.C.; Iyer, H.K.; Baker, H.J.

    1999-03-01

    Both the US Environmental Protection Agency (EPA) and the American Society for Testing and Materials (ASTM) provide guidance for selecting statistical procedures for ground water detection monitoring at Resource Conservation and Recovery Act (RCRA) solid and hazardous waste facilities. The procedures recommended for dealing with large numbers of nondetects, as may often be found in data for volatile organic compounds (VOCs), include, but are not limited to, Poisson prediction limits and Poisson tolerance limits. However, many of the proposed applications of the Poisson model are inappropriate. The development and application of the Poisson-based methods are explored for two types of data, counts of analytical hits and actual concentration measurements. Each of these two applications is explored along two lines of reasoning, a first-principles argument and a simple empirical fit. The application of Poisson-based methods to counts of analytical hits, including simultaneous consideration of multiple VOCs, appears to have merit from both a first principles and an empirical standpoint. On the other hand, the Poisson distribution is not appropriate for modeling concentration data, primarily because the variance of the distribution does not scale appropriately with changing units of measurement. Tolerance and prediction limits based on the Poisson distribution are not scale invariant. By changing the units of observation in example problems drawn from EPA guidance, use of the Poisson-based tolerance and prediction limits can result in significant errors. In short, neither the Poisson distribution nor associated tolerance or prediction limits should be used with concentration data. EPA guidance does present, however, other, more appropriate, methods for dealing with concentration data in which the number of nondetects is large. These include nonparametric tolerance and prediction limits and a test of proportions based on the binomial distribution.
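
The scale-invariance failure is easy to exhibit: a Poisson-based upper limit computed on the same concentration data in different units does not convert between those units. A minimal sketch using the normal approximation to a 95% Poisson limit (illustrative, not the EPA procedure):

```python
import math

def poisson_upper_limit(mean, z=1.645):
    # Normal-approximation 95% upper limit for a Poisson "count" with the given mean
    return mean + z * math.sqrt(mean)

mean_mg = 4.0                                              # hypothetical mean, mg/L
limit_mg = poisson_upper_limit(mean_mg)                    # limit computed in mg/L
limit_ug = poisson_upper_limit(mean_mg * 1000.0) / 1000.0  # same data in ug/L, converted back
# limit_mg is about 7.29 mg/L while limit_ug is about 4.10 mg/L: because the
# Poisson variance equals its mean, rescaling the units changes the answer.
```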

  6. A new Green's function Monte Carlo algorithm for the solution of the two-dimensional nonlinear Poisson-Boltzmann equation: Application to the modeling of the communication breakdown problem in space vehicles during re-entry

    NASA Astrophysics Data System (ADS)

    Chatterjee, Kausik; Roadcap, John R.; Singh, Surendra

    2014-11-01

    The objective of this paper is the exposition of a recently developed, novel Green's function Monte Carlo (GFMC) algorithm for the solution of nonlinear partial differential equations and its application to the modeling of the plasma sheath region around a cylindrical conducting object, carrying a potential and moving at low speeds through an otherwise neutral medium. The plasma sheath is modeled in equilibrium through the GFMC solution of the nonlinear Poisson-Boltzmann (NPB) equation. The traditional Monte Carlo based approaches for the solution of nonlinear equations are iterative in nature, involving branching stochastic processes which are used to calculate linear functionals of the solution of nonlinear integral equations. Over the last several years, one of the authors of this paper, K. Chatterjee, has been developing a philosophically different approach, where the linearization of the equation of interest is not required and hence there is no need for iteration and the simulation of branching processes. Instead, an approximate expression for the Green's function is obtained using perturbation theory, which is used to formulate the random walk equations within the problem sub-domains where the random walker makes its walks. However, as a trade-off, the dimensions of these sub-domains have to be restricted by the limitations imposed by perturbation theory. The greatest advantage of this approach is the ease and simplicity of parallelization stemming from the lack of the need for iteration, as a result of which the parallelization procedure is identical to the parallelization procedure for the GFMC solution of a linear problem. The application area of interest is in the modeling of the communication breakdown problem during a space vehicle's re-entry into the atmosphere. However, additional application areas are being explored in the modeling of electromagnetic propagation through the atmosphere/ionosphere in UHF/GPS applications.
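
The flavor of a Green's function random-walk solver can be conveyed with a deliberately tiny linear analogue (this is not the nonlinear GFMC/NPB algorithm of the paper): on a 1-D grid, the discrete Poisson relation u_i = (u_{i-1} + u_{i+1})/2 + h^2 f_i / 2 lets a walker accumulate source contributions at each visited node until it is absorbed at the boundary.

```python
import random

def poisson_rw(f, n=50, x_index=25, walkers=5000, seed=2):
    """Monte Carlo estimate of u(x_index * h) for -u'' = f on (0, 1) with
    u(0) = u(1) = 0, via symmetric random walks absorbed at the boundary."""
    rng = random.Random(seed)
    h = 1.0 / n
    total = 0.0
    for _ in range(walkers):
        i, acc = x_index, 0.0
        while 0 < i < n:
            acc += 0.5 * h * h * f(i * h)   # source pickup at each visited node
            i += 1 if rng.random() < 0.5 else -1
        total += acc
    return total / walkers

# -u'' = 2 has the exact solution u(x) = x (1 - x); at x = 0.5, u = 0.25
u_mc = poisson_rw(lambda x: 2.0)
```

No linear system is ever assembled, which is also why such walkers parallelize trivially, the property the authors emphasize.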

  7. Analysis of Time to Event Outcomes in Randomized Controlled Trials by Generalized Additive Models

    PubMed Central

    Argyropoulos, Christos; Unruh, Mark L.

    2015-01-01

    Background Randomized Controlled Trials almost invariably utilize the hazard ratio (HR) calculated with a Cox proportional hazard model as a treatment efficacy measure. Despite the widespread adoption of HRs, these provide a limited understanding of the treatment effect and may even provide a biased estimate when the assumption of proportional hazards in the Cox model is not verified by the trial data. Additional treatment effect measures on the survival probability or the time scale may be used to supplement HRs, but a framework for the simultaneous generation of these measures is lacking. Methods By splitting follow-up time at the nodes of a Gauss-Lobatto numerical quadrature rule, techniques for Poisson Generalized Additive Models (PGAM) can be adopted for flexible hazard modeling. Straightforward simulation post-estimation transforms PGAM estimates for the log hazard into estimates of the survival function. These in turn were used to calculate relative and absolute risks or even differences in restricted mean survival time between treatment arms. We illustrate our approach with extensive simulations and in two trials: IPASS (in which the proportionality of hazards was violated) and HEMO, a long-duration study conducted under evolving standards of care on a heterogeneous patient population. Findings PGAM can generate estimates of the survival function and the hazard ratio that are essentially identical to those obtained by Kaplan-Meier curve analysis and the Cox model. PGAMs can simultaneously provide multiple measures of treatment efficacy after a single data pass. Furthermore, PGAMs supported not only unadjusted (overall treatment effect) but also subgroup and adjusted analyses, while incorporating multiple time scales and accounting for non-proportional hazards in survival data. Conclusions By augmenting the HR conventionally reported, PGAMs have the potential to support the inferential goals of multiple stakeholders involved in the evaluation and appraisal of clinical trials.
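
The key device, splitting follow-up time so that survival likelihoods become Poisson likelihoods with a log-exposure offset, can be checked in a few lines. The sketch below uses a constant hazard and simulated (hypothetical) data rather than the paper's spline-based PGAM at Gauss-Lobatto nodes, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(3)
hazard_true, censor = 0.5, 2.0
t = rng.exponential(1.0 / hazard_true, 1000)
time = np.minimum(t, censor)                 # observed follow-up
event = (t < censor).astype(float)           # 1 = event, 0 = censored

# Closed-form MLE for a constant hazard: events / person-time
lam_direct = event.sum() / time.sum()

# "Poisson trick": split follow-up into intervals, tabulate events and exposure
edges = np.linspace(0.0, censor, 5)
events_k, exposure_k = [], []
for a, b in zip(edges[:-1], edges[1:]):
    exposure_k.append((np.clip(time, a, b) - a).sum())    # person-time in [a, b)
    events_k.append(event[(time > a) & (time <= b)].sum())
# With a single rate parameter, the Poisson MLE is again events over exposure
lam_poisson = sum(events_k) / sum(exposure_k)
```

Replacing the single rate with a spline evaluated at the interval nodes turns this construction into the flexible hazard model the abstract describes.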

  8. Coulombic free energy of polymeric nucleic acid: low- and high-salt analytical approximations for the cylindrical Poisson-Boltzmann model.

    PubMed

    Shkel, Irina A

    2010-08-26

    An accurate analytical expression for the Coulombic free energy of DNA as a function of salt concentration ([salt]) is essential in applications to nucleic acid (NA) processes. The cylindrical model of DNA and the nonlinear Poisson-Boltzmann (NLPB) equation for ions in solution are among the simplest approaches capable of describing Coulombic interactions of NA and salt ions and of providing analytical expressions for thermodynamic quantities. Three approximations for the Coulombic free energy G(u,infinity)(coul) of a polymeric nucleic acid are derived and compared with the numerical solution in a wide experimental range of 1:1 [salt] from 0.01 to 2 M. Two are obtained from the two asymptotic solutions of the cylindrical NLPB equation in the high-[salt] and low-[salt] limits: these are sufficient to determine G(u,infinity)(coul) of double-stranded (ds) DNA with 1% and of single-stranded (ss) DNA with 3% accuracy at any [salt]. The third approximation is an experimentally motivated Taylor series up to the quadratic term in ln[salt] in the vicinity of the reference [salt] 0.15 M. This expression with three numerical coefficients (Coulombic free energy and its first and second derivatives at 0.15 M) predicts the dependence of G(u,infinity)(coul) on [salt] within 2% of the numerical solution from 0.01 to 1 M for ss (a = 7 A, b = 3.4 A) and ds (a = 10 A, b = 1.7 A) DNA. Comparison of cylindrical free energy with that calculated for the all-atom structural model of linear B-DNA shows that the cylindrical model is completely sufficient above 0.01 M of 1:1 [salt]. The choice of the two cylindrical parameters, the distance of closest approach of ion to cylinder axis (radius) a and the average axial charge separation b, is discussed in application to all-atom numerical calculations and analysis of experiment. Further development of the analytical expression for the Coulombic free energy with thermodynamic approaches accounting for ionic correlations and specific effects is suggested.

  9. Poisson's ratio over two centuries: challenging hypotheses

    PubMed Central

    Greaves, G. Neville

    2013-01-01

    This article explores Poisson's ratio, starting with the controversy concerning its magnitude and uniqueness in the context of the molecular and continuum hypotheses competing in the development of elasticity theory in the nineteenth century, moving on to its place in the development of materials science and engineering in the twentieth century, and concluding with its recent re-emergence as a universal metric for the mechanical performance of materials on any length scale. During these episodes France lost its scientific pre-eminence as paradigms switched from mathematical to observational, and accurate experiments became the prerequisite for scientific advance. The emergence of the engineering of metals followed, and subsequently the invention of composites—both somewhat separated from the discovery of quantum mechanics and crystallography, and illustrating the bifurcation of technology and science. Nowadays disciplines are reconnecting in the face of new scientific demands. During the past two centuries, though, the shape versus volume concept embedded in Poisson's ratio has remained invariant, but its application has exploded from its origins in describing the elastic response of solids and liquids, into areas such as materials with negative Poisson's ratio, brittleness, glass formation, and a re-evaluation of traditional materials. Moreover, the two contentious hypotheses have been reconciled in their complementarity within the hierarchical structure of materials and through computational modelling. PMID:24687094

  10. Comprehensive European dietary exposure model (CEDEM) for food additives.

    PubMed

    Tennant, David R

    2016-05-01

    European methods for assessing dietary exposures to nutrients, additives and other substances in food are limited by the availability of detailed food consumption data for all member states. A proposed comprehensive European dietary exposure model (CEDEM) applies summary data published by the European Food Safety Authority (EFSA) in a deterministic model based on an algorithm from the EFSA intake method for food additives. The proposed approach can reproduce the estimates of food additive exposure provided in previous EFSA scientific opinions that were based on the full European food consumption database.

  11. Simulating the Effect of Poisson Ratio on Metallic Glass Properties

    SciTech Connect

    Morris, James R; Aga, Rachel S; Egami, Takeshi; Levashov, Valentin A.

    2009-01-01

    Recent work has shown that many metallic glass properties correlate with the Poisson ratio of the glass. We have developed a new model for simulating the atomistic behavior of liquids and glasses that allows us to change the Poisson ratio, while keeping the crystalline phase cohesive energy, lattice constant, and bulk modulus fixed. A number of liquid and glass properties are shown to be directly affected by the Poisson ratio. An increasing Poisson ratio stabilizes the liquid structure relative to the crystal phase, as indicated by a significantly lower melting temperature and by a lower enthalpy of the liquid phase. The liquids clearly exhibit two changes in behavior: one at low temperatures, associated with the conventional glass transition T{sub g}, and a second, higher temperature change associated with the shear properties of the liquids. This second crossover has a characteristic, measurable change in the liquid structure.

  12. Berezin integrals and Poisson processes

    NASA Astrophysics Data System (ADS)

    DeAngelis, G. F.; Jona-Lasinio, G.; Sidoravicius, V.

    1998-01-01

    We show that the calculation of Berezin integrals over anticommuting variables can be reduced to the evaluation of expectations of functionals of Poisson processes via an appropriate Feynman-Kac formula. In this way the tools of ordinary analysis can be applied to Berezin integrals and, as an example, we prove a simple upper bound. Possible applications of our results are briefly mentioned.

  13. Almost Poisson brackets for nonholonomic systems on Lie groups

    NASA Astrophysics Data System (ADS)

    Garcia-Naranjo, Luis Constantino

    We present a geometric construction of almost Poisson brackets for nonholonomic mechanical systems whose configuration space is a Lie group G. We study the so-called LL and LR systems where the kinetic energy defines a left invariant metric on G and the constraints are invariant with respect to left (respectively right) translation on G. For LL systems, the equations on the momentum phase space, T*G, can be left translated onto g*, the dual space of the Lie algebra g. We show that the reduced equations on g* can be cast in Poisson form with respect to an almost Poisson bracket that is obtained by projecting the standard Lie-Poisson bracket onto the constraint space. For LR systems we use ideas of semidirect product reduction to transfer the equations on T*G into the dual Lie algebra, s*, of a semidirect product. This provides a natural Lie algebraic setting for the equations of motion commonly found in the literature. We show that these equations can also be cast in Poisson form with respect to an almost Poisson bracket that is obtained by projecting the Lie-Poisson structure on s* onto a constraint submanifold. In both cases the constraint functions are Casimirs of the bracket and are satisfied automatically. Our construction is a natural generalization of the classical ideas of Lie-Poisson and semidirect product reduction to the nonholonomic case. It also sets a convenient stage for the study of Hamiltonization of certain nonholonomic systems. Our examples include the Suslov and the Veselova problems of constrained motion of a rigid body, and the Chaplygin sleigh. In addition we study the almost Poisson reduction of the Chaplygin sphere. We show that the bracket given by Borisov and Mamaev in [7] is obtained by reducing a nonstandard almost Poisson bracket that is obtained by projecting a non-canonical bivector onto the constraint submanifold using the Lagrange-D'Alembert principle. The examples that we treat show that it is possible to cast the reduced equations in almost Poisson form.

  14. Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan

    2013-01-01

    The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as nonconstant variance resulting from systematic errors leaking into random errors, and the lack of prediction capability. Therefore, the multiplicative error model is a better choice.
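
The first criterion, separation of systematic and random errors, can be illustrated with a toy simulation (hypothetical numbers, not the satellite data used in the letter): data generated with multiplicative error show strongly truth-dependent residuals under the additive model but roughly homoscedastic residuals in log space, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(4)
truth = rng.gamma(0.5, 10.0, 5000) + 0.1     # heavy-tailed "daily precipitation"
obs = 0.8 * truth**1.1 * np.exp(rng.normal(0.0, 0.3, truth.size))

# Additive model: obs = truth + e; residual magnitude grows with the truth
e_add = obs - truth
# Multiplicative model: log(obs) = a + b*log(truth) + e
b, a = np.polyfit(np.log(truth), np.log(obs), 1)
e_mul = np.log(obs) - (a + b * np.log(truth))

corr_add = np.corrcoef(np.abs(e_add), truth)[0, 1]   # strong heteroscedasticity
corr_mul = np.corrcoef(np.abs(e_mul), truth)[0, 1]   # roughly constant variance
```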

  15. Electroacoustics modeling of piezoelectric welders for ultrasonic additive manufacturing processes

    NASA Astrophysics Data System (ADS)

    Hehr, Adam; Dapino, Marcelo J.

    2016-04-01

    Ultrasonic additive manufacturing (UAM) is a recent 3D metal printing technology which utilizes ultrasonic vibrations from high power piezoelectric transducers to additively weld similar and dissimilar metal foils. CNC machining is used intermittently with welding to create internal channels, to embed temperature-sensitive components, sensors, and materials, and to net shape parts. Structural dynamics of the welder and work piece influence the performance of the welder and part quality. To understand the impact of structural dynamics on UAM, a linear time-invariant (LTI) model is used to relate the system inputs of shear force and electric current to the system outputs of welder velocity and voltage. Frequency response measurements are combined with in-situ operating measurements of the welder to identify model parameters and to verify model assumptions. The proposed LTI model can enhance process consistency and performance, and guide the development of improved quality monitoring and control strategies.

  16. An Additional Symmetry in the Weinberg-Salam Model

    SciTech Connect

    Bakker, B.L.G.; Veselov, A.I.; Zubkov, M.A.

    2005-06-01

    An additional Z{sub 6} symmetry hidden in the fermion and Higgs sectors of the Standard Model has been found recently. It has a singular nature and is connected to the centers of the SU(3) and SU(2) subgroups of the gauge group. A lattice regularization of the Standard Model was constructed that possesses this symmetry. In this paper, we report our results on the numerical simulation of its electroweak sector.

  17. Modeling uranium transport in acidic contaminated groundwater with base addition

    SciTech Connect

    Zhang, Fan; Luo, Wensui; Parker, Jack C.; Brooks, Scott C; Watson, David B; Jardine, Philip; Gu, Baohua

    2011-01-01

    This study investigates reactive transport modeling in a column of uranium(VI)-contaminated sediments with base additions in the circulating influent. The groundwater and sediment exhibit oxic conditions with low pH, high concentrations of NO{sub 3}{sup -}, SO{sub 4}{sup 2-}, U and various metal cations. Preliminary batch experiments indicate that additions of strong base induce rapid immobilization of U for this material. In the column experiment that is the focus of the present study, effluent groundwater was titrated with NaOH solution in an inflow reservoir before reinjection to gradually increase the solution pH in the column. An equilibrium hydrolysis, precipitation and ion exchange reaction model developed through simulation of the preliminary batch titration experiments predicted faster reduction of aqueous Al than observed in the column experiment. The model was therefore modified to consider reaction kinetics for the precipitation and dissolution processes which are the major mechanism for Al immobilization. The combined kinetic and equilibrium reaction model adequately described variations in pH, aqueous concentrations of metal cations (Al, Ca, Mg, Sr, Mn, Ni, Co), sulfate and U(VI). The experimental and modeling results indicate that U(VI) can be effectively sequestered with controlled base addition due to sorption by slowly precipitated Al with pH-dependent surface charge. The model may prove useful to predict field-scale U(VI) sequestration and remediation effectiveness.

  18. Modeling uranium transport in acidic contaminated groundwater with base addition.

    PubMed

    Zhang, Fan; Luo, Wensui; Parker, Jack C; Brooks, Scott C; Watson, David B; Jardine, Philip M; Gu, Baohua

    2011-06-15

    This study investigates reactive transport modeling in a column of uranium(VI)-contaminated sediments with base additions in the circulating influent. The groundwater and sediment exhibit oxic conditions with low pH, high concentrations of NO(3)(-), SO(4)(2-), U and various metal cations. Preliminary batch experiments indicate that additions of strong base induce rapid immobilization of U for this material. In the column experiment that is the focus of the present study, effluent groundwater was titrated with NaOH solution in an inflow reservoir before reinjection to gradually increase the solution pH in the column. An equilibrium hydrolysis, precipitation and ion exchange reaction model developed through simulation of the preliminary batch titration experiments predicted faster reduction of aqueous Al than observed in the column experiment. The model was therefore modified to consider reaction kinetics for the precipitation and dissolution processes which are the major mechanism for Al immobilization. The combined kinetic and equilibrium reaction model adequately described variations in pH, aqueous concentrations of metal cations (Al, Ca, Mg, Sr, Mn, Ni, Co), sulfate and U(VI). The experimental and modeling results indicate that U(VI) can be effectively sequestered with controlled base addition due to sorption by slowly precipitated Al with pH-dependent surface charge. The model may prove useful to predict field-scale U(VI) sequestration and remediation effectiveness.

  19. Calculation of the Poisson cumulative distribution function

    NASA Technical Reports Server (NTRS)

    Bowerman, Paul N.; Nolty, Robert G.; Scheuer, Ernest M.

    1990-01-01

    A method for calculating the Poisson cdf (cumulative distribution function) is presented. The method avoids computer underflow and overflow during the process. The computer program uses this technique to calculate the Poisson cdf for arbitrary inputs. An algorithm that determines the Poisson parameter required to yield a specified value of the cdf is presented.
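
The underflow/overflow issue and the parameter inversion both have compact solutions; a sketch of the general idea (not the authors' program) is to sum the pmf in log space and invert the cdf by bisection:

```python
import math

def poisson_cdf(k, mu):
    """P(X <= k) for X ~ Poisson(mu), summed in log space so that neither
    exp(-mu) nor mu**k under/overflows for large arguments."""
    if k < 0:
        return 0.0
    log_terms = [i * math.log(mu) - mu - math.lgamma(i + 1) for i in range(int(k) + 1)]
    m = max(log_terms)
    return math.exp(m + math.log(sum(math.exp(t - m) for t in log_terms)))

def poisson_mu_for_cdf(k, p, lo=1e-9, hi=1e6):
    """Bisection for the parameter mu with P(X <= k; mu) = p; the cdf is
    strictly decreasing in mu, so this inversion is well defined."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if poisson_cdf(k, mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For mu = 1000 the naive sum of exp(-mu) * mu**k / k! underflows term by term, while the log-space version returns a cdf near 0.5 at k = 1000, as expected.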

  20. From Loss of Memory to Poisson.

    ERIC Educational Resources Information Center

    Johnson, Bruce R.

    1983-01-01

    A way of presenting the Poisson process and deriving the Poisson distribution for upper-division courses in probability or mathematical statistics is presented. The main feature of the approach lies in the formulation of Poisson postulates with immediate intuitive appeal. (MNS)

  1. Using Set Model for Learning Addition of Integers

    ERIC Educational Resources Information Center

    Lestari, Umi Puji; Putri, Ratu Ilma Indra; Hartono, Yusuf

    2015-01-01

    This study aims to investigate how set model can help students' understanding of addition of integers in fourth grade. The study has been carried out to 23 students and a teacher of IVC SD Iba Palembang in January 2015. This study is a design research that also promotes PMRI as the underlying design context and activity. Results showed that the…

  2. Testing Nested Additive, Multiplicative, and General Multitrait-Multimethod Models.

    ERIC Educational Resources Information Center

    Coenders, Germa; Saris, Willem E.

    2000-01-01

    Provides alternatives to the definitions of additive and multiplicative method effects in multitrait-multimethod data given by D. Campbell and E. O'Connell (1967). The alternative definitions can be formulated by means of constraints in the parameters of the correlated uniqueness model (H. Marsh, 1989). (SLD)

  3. Numerical Solution of 3D Poisson-Nernst-Planck Equations Coupled with Classical Density Functional Theory for Modeling Ion and Electron Transport in a Confined Environment

    SciTech Connect

    Meng, Da; Zheng, Bin; Lin, Guang; Sushko, Maria L.

    2014-08-29

    We have developed efficient numerical algorithms for the solution of 3D steady-state Poisson-Nernst-Planck (PNP) equations with excess chemical potentials described by the classical density functional theory (cDFT). The coupled PNP equations are discretized by a finite difference scheme and solved iteratively by the Gummel method with relaxation. The Nernst-Planck equations are transformed into Laplace equations through the Slotboom transformation. An algebraic multigrid method is then applied to efficiently solve the Poisson equation and the transformed Nernst-Planck equations. A novel strategy for calculating excess chemical potentials through fast Fourier transforms is proposed, which reduces the computational complexity from O(N^2) to O(N log N), where N is the number of grid points. Integrals involving the Dirac delta function are evaluated directly by coordinate transformation, which yields more accurate results compared to applying numerical quadrature to an approximated delta function. Numerical results for ion and electron transport in solid electrolyte for Li ion batteries are shown to be in good agreement with the experimental data and the results from previous studies.
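
The O(N^2)-to-O(N log N) reduction comes from evaluating convolution-type sums with the FFT; the generic mechanism (circular case, not the paper's specific cDFT kernels) looks like this, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 256
a, b = rng.normal(size=n), rng.normal(size=n)

# Direct circular convolution: O(N^2) operations
direct = np.array([sum(a[j] * b[(i - j) % n] for j in range(n)) for i in range(n)])

# FFT-based circular convolution: O(N log N) operations
fft_conv = np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))
```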

  4. Electrostatic forces in the Poisson-Boltzmann systems.

    PubMed

    Xiao, Li; Cai, Qin; Ye, Xiang; Wang, Jun; Luo, Ray

    2013-09-01

    Continuum modeling of electrostatic interactions based upon numerical solutions of the Poisson-Boltzmann equation has been widely used in structural and functional analyses of biomolecules. A limitation of the numerical strategies is that it is conceptually difficult to incorporate these types of models into molecular mechanics simulations, mainly because of the issue in assigning atomic forces. In this theoretical study, we first derived the Maxwell stress tensor for molecular systems obeying the full nonlinear Poisson-Boltzmann equation. We further derived formulations of analytical electrostatic forces given the Maxwell stress tensor and discussed the relations of the formulations with those published in the literature. We showed that the formulations derived from the Maxwell stress tensor require a weaker condition for their validity, applicable to nonlinear Poisson-Boltzmann systems with a finite number of singularities, such as atomic point charges, and with discontinuous dielectrics, as in the widely used classical piecewise-constant dielectric models. PMID:24028101

  5. Estimating classification images with generalized linear and additive models.

    PubMed

    Knoblauch, Kenneth; Maloney, Laurence T

    2008-12-22

    Conventional approaches to modeling classification image data can be described in terms of a standard linear model (LM). We show how the problem can be characterized as a Generalized Linear Model (GLM) with a Bernoulli distribution. We demonstrate via simulation that this approach is more accurate in estimating the underlying template in the absence of internal noise. With increasing internal noise, however, the advantage of the GLM over the LM decreases and GLM is no more accurate than LM. We then introduce the Generalized Additive Model (GAM), an extension of GLM that can be used to estimate smooth classification images adaptively. We show that this approach is more robust to the presence of internal noise, and finally, we demonstrate that GAM is readily adapted to estimation of higher order (nonlinear) classification images and to testing their significance.
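
The Bernoulli-GLM formulation can be sketched end to end: simulate an observer whose yes/no responses follow a logistic function of the noise field projected onto a template, then recover that template as the coefficients of a logistic regression fit by Newton's method. Everything below (the 16-dimensional "image", the template) is hypothetical, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(6)
d, n = 16, 4000
template = np.zeros(d)
template[4:8] = 1.0                              # hypothetical observer template
X = rng.normal(size=(n, d))                      # per-trial stimulus noise fields
p = 1.0 / (1.0 + np.exp(-(X @ template)))
y = (rng.random(n) < p).astype(float)            # simulated yes/no responses

# Fit the Bernoulli GLM by Newton's method (IRLS); the fitted coefficients
# are the estimated classification image.
w = np.zeros(d)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-(X @ w)))
    grad = X.T @ (y - mu)
    hess = X.T @ (X * (mu * (1.0 - mu))[:, None])
    w += np.linalg.solve(hess, grad)

corr = np.corrcoef(w, template)[0, 1]            # recovery quality
```

A GAM version would replace the linear predictor with penalized smooth terms, which is what allows the smooth classification images described in the abstract.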

  6. Additions to Mars Global Reference Atmospheric Model (MARS-GRAM)

    NASA Technical Reports Server (NTRS)

    Justus, C. G.; James, Bonnie

    1992-01-01

    Three major additions or modifications were made to the Mars Global Reference Atmospheric Model (Mars-GRAM): (1) in addition to the interactive version, a new batch version is available, which uses NAMELIST input, and is completely modular, so that the main driver program can easily be replaced by any calling program, such as a trajectory simulation program; (2) both the interactive and batch versions now have an option for treating local-scale dust storm effects, rather than just the global-scale dust storms in the original Mars-GRAM; and (3) the Zurek wave perturbation model was added, to simulate the effects of tidal perturbations, in addition to the random (mountain wave) perturbation model of the original Mars-GRAM. A minor modification was also made which allows heights to go 'below' local terrain height and return 'realistic' pressure, density, and temperature, and not the surface values, as returned by the original Mars-GRAM. This feature will allow simulations of Mars rover paths which might go into local 'valley' areas which lie below the average height of the present, rather coarse-resolution, terrain height data used by Mars-GRAM. Sample input and output of both the interactive and batch versions of Mars-GRAM are presented.

  7. Additions to Mars Global Reference Atmospheric Model (Mars-GRAM)

    NASA Technical Reports Server (NTRS)

    Justus, C. G.

    1991-01-01

    Three major additions or modifications were made to the Mars Global Reference Atmospheric Model (Mars-GRAM): (1) in addition to the interactive version, a new batch version is available, which uses NAMELIST input, and is completely modular, so that the main driver program can easily be replaced by any calling program, such as a trajectory simulation program; (2) both the interactive and batch versions now have an option for treating local-scale dust storm effects, rather than just the global-scale dust storms in the original Mars-GRAM; and (3) the Zurek wave perturbation model was added, to simulate the effects of tidal perturbations, in addition to the random (mountain wave) perturbation model of the original Mars-GRAM. A minor modification has also been made which allows heights to go below local terrain height and return realistic pressure, density, and temperature (not the surface values) as returned by the original Mars-GRAM. This feature will allow simulations of Mars rover paths which might go into local valley areas which lie below the average height of the present, rather coarse-resolution, terrain height data used by Mars-GRAM. Sample input and output of both the interactive and batch version of Mars-GRAM are presented.

  8. Backbone Additivity in the Transfer Model of Protein Solvation

    SciTech Connect

    Hu, Char Y.; Kokubo, Hironori; Lynch, Gillian C.; Bolen, D Wayne; Pettitt, Bernard M.

    2010-05-01

    The transfer model implying additivity of the peptide backbone free energy of transfer is computationally tested. Molecular dynamics simulations are used to determine the extent of change in transfer free energy (ΔGtr) with increase in chain length of oligoglycine with capped end groups. Solvation free energies of oligoglycine models of varying lengths in pure water and in the osmolyte solutions, 2M urea and 2M trimethylamine N-oxide (TMAO), were calculated from simulations of all atom models, and ΔGtr values for peptide backbone transfer from water to the osmolyte solutions were determined. The results show that the transfer free energies change linearly with increasing chain length, demonstrating the principle of additivity, and provide values in reasonable agreement with experiment. The peptide backbone transfer free energy contributions arise from van der Waals interactions in the case of transfer to urea, but from electrostatics on transfer to TMAO solution. The simulations used here allow the solvation and transfer free energies of longer oligoglycine models to be evaluated than is currently possible through experiment. The peptide backbone unit computed transfer free energy of –54 cal/mol/M compares quite favorably with –43 cal/mol/M determined experimentally.
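
    The additivity claim amounts to saying that ΔGtr is linear in chain length, with the slope giving the per-backbone-unit contribution. A minimal sketch with made-up numbers (chosen to mimic near-linear behavior; they are not the simulation data) shows how the per-unit value is read off a linear fit:

```python
import numpy as np

# Hypothetical transfer free energies (kcal/mol) for capped oligoglycines
# of 1-5 units; the values are illustrative, not the paper's data.
n_units = np.array([1, 2, 3, 4, 5])
dg_tr = np.array([-0.060, -0.113, -0.167, -0.222, -0.274])

# If transfer is additive, dG_tr is linear in chain length and the slope
# is the per-backbone-unit contribution.
slope, intercept = np.polyfit(n_units, dg_tr, 1)
print(round(float(slope) * 1000))   # per-unit contribution in cal/mol
```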

  9. Multiscale and Multiphysics Modeling of Additive Manufacturing of Advanced Materials

    NASA Technical Reports Server (NTRS)

    Liou, Frank; Newkirk, Joseph; Fan, Zhiqiang; Sparks, Todd; Chen, Xueyang; Fletcher, Kenneth; Zhang, Jingwei; Zhang, Yunlu; Kumar, Kannan Suresh; Karnati, Sreekar

    2015-01-01

    The objective of this proposed project is to research and develop a prediction tool for advanced additive manufacturing (AAM) processes for advanced materials and develop experimental methods to provide fundamental properties and establish validation data. Aircraft structures and engines demand materials that are stronger, useable at much higher temperatures, provide less acoustic transmission, and enable more aeroelastic tailoring than those currently used. Significant improvements in properties can only be achieved by processing the materials under nonequilibrium conditions, such as AAM processes. AAM processes encompass a class of processes that use a focused heat source to create a melt pool on a substrate. Examples include Electron Beam Freeform Fabrication and Direct Metal Deposition. These types of additive processes enable fabrication of parts directly from CAD drawings. To achieve the desired material properties and geometries of the final structure, it is necessary to assess the impact of process parameters and to predict optimized conditions with numerical modeling as an effective prediction tool. The processing targets are multiple and span different spatial scales, and the associated physical phenomena are inherently multiphysics and multiscale. In this project, the research work has been developed to model AAM processes in a multiscale and multiphysics approach. A macroscale model was developed to investigate the residual stresses and distortion in AAM processes. A sequentially coupled, thermomechanical, finite element model was developed and validated experimentally. The results showed the temperature distribution, residual stress, and deformation within the formed deposits and substrates. A mesoscale model was developed to include heat transfer, phase change with mushy zone, incompressible free surface flow, solute redistribution, and surface tension. Because of the excessive computing time needed, a parallel computing approach was also tested.

  10. Electrodiffusion Models of Neurons and Extracellular Space Using the Poisson-Nernst-Planck Equations—Numerical Simulation of the Intra- and Extracellular Potential for an Axon Model

    PubMed Central

    Pods, Jurgis; Schönke, Johannes; Bastian, Peter

    2013-01-01

    In neurophysiology, extracellular signals—as measured by local field potentials (LFP) or electroencephalography—are of great significance. Their exact biophysical basis is, however, still not fully understood. We present a three-dimensional model exploiting the cylinder symmetry of a single axon in extracellular fluid based on the Poisson-Nernst-Planck equations of electrodiffusion. The propagation of an action potential along the axonal membrane is investigated by means of numerical simulations. Special attention is paid to the Debye layer, the region with strong concentration gradients close to the membrane, which is explicitly resolved by the computational mesh. We focus on the evolution of the extracellular electric potential. A characteristic up-down-up LFP waveform in the far-field is found. Close to the membrane, the potential shows a more intricate shape. A comparison with the widely used line source approximation reveals similarities and demonstrates the strong influence of membrane currents. However, the electrodiffusion model shows another signal component stemming directly from the intracellular electric field, called the action potential echo. Depending on the neuronal configuration, this might have a significant effect on the LFP. In these situations, electrodiffusion models should be used for quantitative comparisons with experimental data. PMID:23823244
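
    The Debye layer that the computational mesh must resolve can be illustrated in one dimension. The sketch below is not the paper's 3D Poisson-Nernst-Planck scheme; it solves the linearized Poisson-Boltzmann limit φ'' = κ²φ near a charged plane by finite differences, with illustrative dimensionless parameters, and checks the result against the exponential Debye profile:

```python
import numpy as np

# Minimal 1D sketch of Debye-layer electrostatics (illustrative units):
# linearized Poisson-Boltzmann, phi'' = kappa^2 * phi, whose solution
# near a charged plane is phi(x) = phi0 * exp(-kappa * x).
kappa, phi0 = 2.0, 1.0
L, n = 5.0, 200
x = np.linspace(0.0, L, n)
h = x[1] - x[0]

A = np.zeros((n, n))
b = np.zeros(n)
A[0, 0] = 1.0; b[0] = phi0        # Dirichlet value at the membrane
A[-1, -1] = 1.0                   # potential decays to zero in the bulk
for i in range(1, n - 1):         # central differences in the interior
    A[i, i - 1] = A[i, i + 1] = 1.0 / h**2
    A[i, i] = -2.0 / h**2 - kappa**2

phi = np.linalg.solve(A, b)
err = np.max(np.abs(phi - phi0 * np.exp(-kappa * x)))
print(err < 1e-2)                 # second-order accurate on this grid
```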

  11. Sensitivity analysis of geometric errors in additive manufacturing medical models.

    PubMed

    Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian

    2015-03-01

    Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms.

  12. Addition Table of Colours: Additive and Subtractive Mixtures Described Using a Single Reasoning Model

    ERIC Educational Resources Information Center

    Mota, A. R.; Lopes dos Santos, J. M. B.

    2014-01-01

    Students' misconceptions concerning colour phenomena and the apparent complexity of the underlying concepts--due to the different domains of knowledge involved--make its teaching very difficult. We have developed and tested a teaching device, the addition table of colours (ATC), that encompasses additive and subtractive mixtures in a single…

  13. A Study of Additive Noise Model for Robust Speech Recognition

    NASA Astrophysics Data System (ADS)

    Awatade, Manisha H.

    2011-12-01

    A model of how speech amplitude spectra are affected by additive noise is studied. Acoustic features are extracted based on the noise-robust parts of speech spectra without losing discriminative information. Two existing non-linear processing methods, harmonic demodulation and spectral peak-to-valley ratio locking, are designed to minimize the mismatch between clean and noisy speech features. Previously studied methods, including peak isolation [1], do not require noise estimation and are effective in dealing with both stationary and non-stationary noise.
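
    The additive-noise model rests on the fact that, for uncorrelated signals, power spectra are approximately additive: |X + N|² ≈ |X|² + |N|² on average, since the cross term averages out. A minimal numerical check using Gaussian stand-ins for the speech and noise signals (illustrative only, not speech data):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=16384)    # stand-in for a speech frame
nz = rng.normal(size=16384)   # uncorrelated additive noise

# Power spectra of uncorrelated signals add on average:
# |X + N|^2 ~= |X|^2 + |N|^2, the usual spectral additive-noise model.
Y = np.abs(np.fft.rfft(x + nz)) ** 2
X = np.abs(np.fft.rfft(x)) ** 2
N = np.abs(np.fft.rfft(nz)) ** 2
print(round(float(Y.mean() / (X + N).mean()), 1))   # ≈ 1.0
```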

  14. Additive Manufacturing of Medical Models--Applications in Rhinology.

    PubMed

    Raos, Pero; Klapan, Ivica; Galeta, Tomislav

    2015-09-01

    In this paper we introduce guidelines and suggestions for the use of 3D image-processing software in head pathology diagnostics, along with procedures for obtaining a physical medical model by additive manufacturing/rapid prototyping techniques, with a view to improving surgical performance, maximizing its safety, and speeding postoperative recovery. This approach has been verified in two case reports. In the treatment we used intelligent classifier schemes for abnormal patterns with a computer-based system for 3D-virtual and endoscopic assistance in rhinology, with appropriate visualization of anatomy and pathology within the nose, paranasal sinuses, and skull base area.

  15. Multiscale Modeling of Powder Bed-Based Additive Manufacturing

    NASA Astrophysics Data System (ADS)

    Markl, Matthias; Körner, Carolin

    2016-07-01

    Powder bed fusion processes are additive manufacturing technologies that are expected to induce the third industrial revolution. Components are built up layer by layer in a powder bed by selectively melting confined areas, according to sliced 3D model data. This technique allows for manufacturing of highly complex geometries hardly machinable with conventional technologies. However, the underlying physical phenomena are sparsely understood and difficult to observe during processing. Therefore, an intensive and expensive trial-and-error principle is applied to produce components with the desired dimensional accuracy, material characteristics, and mechanical properties. This review presents numerical modeling approaches on multiple length scales and timescales to describe different aspects of powder bed fusion processes. In combination with tailored experiments, the numerical results enlarge the process understanding of the underlying physical mechanisms and support the development of suitable process strategies and component topologies.

  16. Multiscale Modeling of Powder Bed–Based Additive Manufacturing

    NASA Astrophysics Data System (ADS)

    Markl, Matthias; Körner, Carolin

    2016-07-01

    Powder bed fusion processes are additive manufacturing technologies that are expected to induce the third industrial revolution. Components are built up layer by layer in a powder bed by selectively melting confined areas, according to sliced 3D model data. This technique allows for manufacturing of highly complex geometries hardly machinable with conventional technologies. However, the underlying physical phenomena are sparsely understood and difficult to observe during processing. Therefore, an intensive and expensive trial-and-error principle is applied to produce components with the desired dimensional accuracy, material characteristics, and mechanical properties. This review presents numerical modeling approaches on multiple length scales and timescales to describe different aspects of powder bed fusion processes. In combination with tailored experiments, the numerical results enlarge the process understanding of the underlying physical mechanisms and support the development of suitable process strategies and component topologies.

  17. Tunable negative Poisson's ratio in hydrogenated graphene.

    PubMed

    Jiang, Jin-Wu; Chang, Tienchong; Guo, Xingming

    2016-09-21

    We perform molecular dynamics simulations to investigate the effect of hydrogenation on the Poisson's ratio of graphene. It is found that the value of the Poisson's ratio of graphene can be effectively tuned from positive to negative by varying the percentage of hydrogenation. Specifically, the Poisson's ratio decreases with an increase in the percentage of hydrogenation, and reaches a minimum value of -0.04 when the percentage of hydrogenation is about 50%. The Poisson's ratio starts to increase upon a further increase of the percentage of hydrogenation. The appearance of a minimum negative Poisson's ratio in the hydrogenated graphene is attributed to the suppression of the hydrogenation-induced ripples during the stretching of graphene. Our results demonstrate that hydrogenation is a valuable approach for tuning the Poisson's ratio from positive to negative in graphene. PMID:27536878
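
    The sign convention at work here: the Poisson's ratio is ν = -ε_transverse/ε_axial, so a sheet that widens when stretched has ν < 0. A minimal numeric illustration (the strain values are made up to mirror the reported ν ≈ -0.04, not taken from the simulations):

```python
def poisson_ratio(eps_axial, eps_transverse):
    """nu = -eps_transverse / eps_axial for uniaxial loading."""
    return -eps_transverse / eps_axial

# Conventional sheet: stretch 1% along x, contract 0.3% along y.
print(round(poisson_ratio(0.010, -0.003), 3))   # 0.3
# Auxetic case, as for ~50% hydrogenation: stretching 1% along x also
# widens the sheet by 0.04% along y (illustrative of nu = -0.04).
print(round(poisson_ratio(0.010, 0.0004), 3))   # -0.04
```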

  18. Additive functions in boolean models of gene regulatory network modules.

    PubMed

    Darabos, Christian; Di Cunto, Ferdinando; Tomassini, Marco; Moore, Jason H; Provero, Paolo; Giacobini, Mario

    2011-01-01

    Gene-on-gene regulations are key components of every living organism. Dynamical abstract models of genetic regulatory networks help explain the genome's evolvability and robustness. These properties can be attributed to the structural topology of the graph formed by genes, as vertices, and regulatory interactions, as edges. Moreover, the actual gene interaction of each gene is believed to play a key role in the stability of the structure. With advances in biology, some effort was deployed to develop update functions in boolean models that include recent knowledge. We combine real-life gene interaction networks with novel update functions in a boolean model. We use two sub-networks of biological organisms, the yeast cell-cycle and the mouse embryonic stem cell, as topological support for our system. On these structures, we substitute the original random update functions by a novel threshold-based dynamic function in which the promoting and repressing effect of each interaction is considered. We use a third real-life regulatory network, along with its inferred boolean update functions to validate the proposed update function. Results of this validation hint to increased biological plausibility of the threshold-based function. To investigate the dynamical behavior of this new model, we visualized the phase transition between order and chaos into the critical regime using Derrida plots. We complement the qualitative nature of Derrida plots with an alternative measure, the criticality distance, that also allows to discriminate between regimes in a quantitative way. Simulation on both real-life genetic regulatory networks show that there exists a set of parameters that allows the systems to operate in the critical region. This new model includes experimentally derived biological information and recent discoveries, which makes it potentially useful to guide experimental research. The update function confers additional realism to the model, while reducing the complexity

  19. Additive Functions in Boolean Models of Gene Regulatory Network Modules

    PubMed Central

    Darabos, Christian; Di Cunto, Ferdinando; Tomassini, Marco; Moore, Jason H.; Provero, Paolo; Giacobini, Mario

    2011-01-01

    Gene-on-gene regulations are key components of every living organism. Dynamical abstract models of genetic regulatory networks help explain the genome's evolvability and robustness. These properties can be attributed to the structural topology of the graph formed by genes, as vertices, and regulatory interactions, as edges. Moreover, the actual gene interaction of each gene is believed to play a key role in the stability of the structure. With advances in biology, some effort was deployed to develop update functions in Boolean models that include recent knowledge. We combine real-life gene interaction networks with novel update functions in a Boolean model. We use two sub-networks of biological organisms, the yeast cell-cycle and the mouse embryonic stem cell, as topological support for our system. On these structures, we substitute the original random update functions by a novel threshold-based dynamic function in which the promoting and repressing effect of each interaction is considered. We use a third real-life regulatory network, along with its inferred Boolean update functions to validate the proposed update function. Results of this validation hint to increased biological plausibility of the threshold-based function. To investigate the dynamical behavior of this new model, we visualized the phase transition between order and chaos into the critical regime using Derrida plots. We complement the qualitative nature of Derrida plots with an alternative measure, the criticality distance, that also allows to discriminate between regimes in a quantitative way. Simulation on both real-life genetic regulatory networks show that there exists a set of parameters that allows the systems to operate in the critical region. This new model includes experimentally derived biological information and recent discoveries, which makes it potentially useful to guide experimental research. The update function confers additional realism to the model, while reducing the complexity
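
    One plausible reading of the threshold-based update rule can be sketched directly: a gene switches on when its promoting inputs outweigh its repressing ones, switches off in the opposite case, and holds its state on a tie. The signed interaction matrix below is hypothetical, not taken from the yeast or stem-cell networks used in the paper:

```python
import numpy as np

# Hypothetical 4-gene module: W[i, j] = +1 if gene j promotes gene i,
# -1 if it represses it, 0 for no interaction (signs are illustrative).
W = np.array([
    [ 0,  1,  0, -1],
    [ 1,  0, -1,  0],
    [ 0,  1,  0,  0],
    [-1,  0,  1,  0],
])

def threshold_update(state, W):
    """Synchronous threshold rule: a gene turns on when promoting inputs
    outweigh repressing ones, turns off when outweighed, and keeps its
    current value on a tie (one common convention)."""
    drive = W @ state
    nxt = state.copy()
    nxt[drive > 0] = 1
    nxt[drive < 0] = 0
    return nxt

state = np.array([1, 1, 0, 0])
for _ in range(8):                 # iterate toward an attractor
    state = threshold_update(state, W)
print(state.tolist())              # [1, 1, 1, 0], a fixed point here
```

    Attractors of such synchronous dynamics are what Derrida plots and the criticality distance characterize at the whole-network level.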

  20. WATEQ3 geochemical model: thermodynamic data for several additional solids

    SciTech Connect

    Krupka, K.M.; Jenne, E.A.

    1982-09-01

    Geochemical models such as WATEQ3 can be used to model the concentrations of water-soluble pollutants that may result from the disposal of nuclear waste and retorted oil shale. However, for a model to competently deal with these water-soluble pollutants, an adequate thermodynamic data base must be provided that includes elements identified as important in modeling these pollutants. To this end, several minerals and related solid phases were identified that were absent from the thermodynamic data base of WATEQ3. In this study, the thermodynamic data for the identified solids were compiled and selected from several published tabulations of thermodynamic data. For these solids, an accepted Gibbs free energy of formation, ..delta..G/sup 0//sub f,298/, was selected for each solid phase based on the recentness of the tabulated data and on considerations of internal consistency with respect to both the published tabulations and the existing data in WATEQ3. For those solids not included in these published tabulations, Gibbs free energies of formation were calculated from published solubility data (e.g., lepidocrocite), or were estimated (e.g., nontronite) using a free-energy summation method described by Mattigod and Sposito (1978). The accepted or estimated free energies were then combined with internally consistent, ancillary thermodynamic data to calculate equilibrium constants for the hydrolysis reactions of these minerals and related solid phases. Including these values in the WATEQ3 data base increased the competency of this geochemical model in applications associated with the disposal of nuclear waste and retorted oil shale. Additional minerals and related solid phases that need to be added to the solubility submodel will be identified as modeling applications continue in these two programs.
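
    The conversion from a reaction Gibbs free energy to the equilibrium constant stored in such a data base follows from ΔG° = -RT ln K. A minimal sketch (the ΔG° value is illustrative, not a WATEQ3 entry):

```python
import math

R = 8.314462618e-3          # gas constant, kJ/(mol K)
T = 298.15                  # K

def log_k(delta_g_kj_per_mol):
    """log10 K from delta_G = -R*T*ln K, i.e.
    log10 K = -delta_G / (ln(10) * R * T)."""
    return -delta_g_kj_per_mol / (math.log(10.0) * R * T)

# Illustrative value only: a hydrolysis reaction with delta_G = +30 kJ/mol
# has a small equilibrium constant.
print(round(log_k(30.0), 2))    # -5.26
```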

  1. Surface reconstruction through Poisson disk sampling.

    PubMed

    Hou, Wenguang; Xu, Zekai; Qin, Nannan; Xiong, Dongping; Ding, Mingyue

    2015-01-01

    This paper generates an approximate Voronoi diagram in the geodesic metric for unbiased samples selected from the original points; the mesh model of the seeds is then constructed on the basis of the Voronoi diagram. Rather than constructing the Voronoi diagram for all original points, the proposed strategy works around the obstacle that geodesic distances among neighboring points are sensitive to the nearest-neighbor definition. The reconstructed model is thus a level-of-detail representation of the original points, and our main motivation is to deal with redundant scattered points. In implementation, Poisson disk sampling is used to select seeds and helps to produce the Voronoi diagram. Adaptive reconstructions can be achieved by slightly changing the uniform strategy for selecting seeds. The behavior of this method is investigated and accuracy evaluations are performed. Experimental results show the proposed method is reliable and effective. PMID:25915744
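
    The seed-selection step can be illustrated with naive dart throwing in the plane: keep a point only if it lies at least r from every point already kept. This is a sketch only; the paper's sampling operates in the geodesic metric on the surface, and the point cloud and radius here are arbitrary:

```python
import random

def poisson_disk(points, r):
    """Greedy dart throwing over an existing point set: keep a point only
    if it is at least r away from every point kept so far. A simple way
    to thin redundant scattered samples down to seeds."""
    kept = []
    for p in points:
        if all((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 >= r * r for q in kept):
            kept.append(p)
    return kept

random.seed(1)
cloud = [(random.random(), random.random()) for _ in range(2000)]
seeds = poisson_disk(cloud, r=0.1)
print(len(seeds), "seeds kept out of", len(cloud))
```

    Every kept seed is an original point, so the thinned set remains a level-of-detail subset of the input, as in the abstract.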

  2. [Critique of the additive model of the randomized controlled trial].

    PubMed

    Boussageon, Rémy; Gueyffier, François; Bejan-Angoulvant, Theodora; Felden-Dominiak, Géraldine

    2008-01-01

    Randomized, double-blind, placebo-controlled clinical trials are currently the best way to demonstrate the clinical effectiveness of drugs. Their methodology relies on the method of difference (John Stuart Mill), through which the observed difference between two groups (drug vs placebo) can be attributed to the pharmacological effect of the drug being tested. However, this additive model can be questioned in the event of statistical interactions between the pharmacological and the placebo effects. Evidence in different domains has shown that the placebo effect can influence the effect of the active ingredient. This article evaluates the methodological, clinical and epistemological consequences of this phenomenon. Topics treated include extrapolating results, accounting for heterogeneous results, demonstrating the existence of several factors in the placebo effect, the necessity to take these factors into account for given symptoms or pathologies, as well as the problem of the "specific" effect.
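
    The additivity critique can be put in arithmetic form: if an interaction term is present, the between-arm difference no longer equals the pharmacological effect. All numbers below are purely illustrative:

```python
# Additive model of a placebo-controlled trial:
#   E[outcome] = baseline + placebo_effect (+ drug_effect in the drug arm).
# With a statistical interaction between placebo and drug, the between-arm
# difference no longer isolates the pharmacological effect.
baseline, placebo_effect, drug_effect, interaction = 10.0, 3.0, 4.0, -1.5

placebo_arm = baseline + placebo_effect
drug_arm = baseline + placebo_effect + drug_effect + interaction
print(drug_arm - placebo_arm)   # 2.5, not the pharmacological effect 4.0
```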

  3. Rigid body dynamics on the Poisson torus

    NASA Astrophysics Data System (ADS)

    Richter, Peter H.

    2008-11-01

    The theory of rigid body motion with emphasis on the modifications introduced by a Cardan suspension is outlined. The configuration space is no longer SO(3) but a 3-torus; the equivalent of the Poisson sphere, after separation of an angular variable, is a Poisson torus. Iso-energy surfaces and their bifurcations are discussed. A universal Poincaré section method is proposed.

  4. Alternative Derivations for the Poisson Integral Formula

    ERIC Educational Resources Information Center

    Chen, J. T.; Wu, C. S.

    2006-01-01

    Poisson integral formula is revisited. The kernel in the Poisson integral formula can be derived in a series form through the direct BEM free of the concept of image point by using the null-field integral equation in conjunction with the degenerate kernels. The degenerate kernels for the closed-form Green's function and the series form of Poisson…
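
    The formula itself is easy to verify numerically: with boundary data cos φ on the unit circle, the Poisson integral must return the harmonic function r cos θ. A short check using the trapezoid rule, which is spectrally accurate for periodic integrands:

```python
import math

def poisson_kernel(r, t):
    """Unit-disk Poisson kernel P_r(t) = (1 - r^2)/(1 - 2 r cos t + r^2)."""
    return (1.0 - r * r) / (1.0 - 2.0 * r * math.cos(t) + r * r)

def poisson_integral(f, r, theta, n=4000):
    """u(r, theta) = (1/2pi) * integral over [0, 2pi) of
    P_r(theta - phi) f(phi) dphi, via the periodic trapezoid rule."""
    phis = (2.0 * math.pi * k / n for k in range(n))
    return sum(poisson_kernel(r, theta - p) * f(p) for p in phis) / n

# Boundary data cos(phi) reproduces the harmonic function r*cos(theta).
u = poisson_integral(math.cos, r=0.5, theta=1.0)
print(round(u, 6), round(0.5 * math.cos(1.0), 6))
```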

  5. Universality of Poisson indicator and Fano factor of transport event statistics in ion channels and enzyme kinetics.

    PubMed

    Chaudhury, Srabanti; Cao, Jianshu; Sinitsyn, Nikolai A

    2013-01-17

    We consider a generic stochastic model of ion transport through a single channel with arbitrary internal structure and kinetic rates of transitions between internal states. This model is also applicable to describe the kinetics of a class of enzymes in which turnover events correspond to conversion of substrate into product by a single enzyme molecule. We show that measurement of the statistics of single-molecule transition times through the channel contains only restricted information about the internal structure of the channel. In particular, the most accessible flux fluctuation characteristics, such as the Poisson indicator (P) and the Fano factor (F) as a function of solute concentration, depend only on three parameters in addition to the parameters of the Michaelis-Menten curve that characterizes the average current through the channel. Nevertheless, measurement of the Poisson indicator or Fano factor for such renewal processes can discriminate between reactions with multiple intermediate steps as well as provide valuable information about the internal kinetic rates.
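
    The Fano factor can be estimated directly from turnover counts in fixed time windows. A minimal sketch with simulated Poissonian events (the rate and window count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

def fano(counts):
    """Fano factor F = Var(n)/<n>: F = 1 for a Poisson process, while
    multiple rate-limiting internal steps push F below 1."""
    return counts.var() / counts.mean()

# Turnover counts in fixed time windows for a single-step (Poissonian)
# channel; the rate and number of windows are made-up values.
counts = rng.poisson(lam=20.0, size=50000)
print(round(float(fano(counts)), 1))   # ≈ 1.0 for Poissonian transport
```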

  6. Percolation model with an additional source of disorder

    NASA Astrophysics Data System (ADS)

    Kundu, Sumanta; Manna, S. S.

    2016-06-01

    The ranges of transmission of the mobiles in a mobile ad hoc network are not uniform in reality. They are affected by the temperature fluctuation in air, obstruction due to the solid objects, even the humidity difference in the environment, etc. How the varying range of transmission of the individual active elements affects the global connectivity in the network may be an important practical question to ask. Here a model of percolation phenomena, with an additional source of disorder, is introduced for a theoretical understanding of this problem. As in ordinary percolation, sites of a square lattice are occupied randomly with probability p. Each occupied site is then assigned a circular disk of random value R for its radius. A bond is defined to be occupied if and only if the radii R1 and R2 of the disks centered at the ends satisfy a certain predefined condition. In a very general formulation, one divides the R1-R2 plane into two regions by an arbitrary closed curve. One defines a point within one region as representing an occupied bond; otherwise it is a vacant bond. The study of three different rules under this general formulation indicates that the percolation threshold always varies continuously. This threshold has two limiting values, one is pc(sq), the percolation threshold for the ordinary site percolation on the square lattice, and the other is unity. The approach of the percolation threshold to its limiting values is characterized by two exponents. In a special case, all lattice sites are occupied by disks of random radii R ∈ {0, R0} and a percolation transition is observed with R0 as the control variable, similar to the site occupation probability.

  7. Percolation model with an additional source of disorder.

    PubMed

    Kundu, Sumanta; Manna, S S

    2016-06-01

    The ranges of transmission of the mobiles in a mobile ad hoc network are not uniform in reality. They are affected by the temperature fluctuation in air, obstruction due to the solid objects, even the humidity difference in the environment, etc. How the varying range of transmission of the individual active elements affects the global connectivity in the network may be an important practical question to ask. Here a model of percolation phenomena, with an additional source of disorder, is introduced for a theoretical understanding of this problem. As in ordinary percolation, sites of a square lattice are occupied randomly with probability p. Each occupied site is then assigned a circular disk of random value R for its radius. A bond is defined to be occupied if and only if the radii R1 and R2 of the disks centered at the ends satisfy a certain predefined condition. In a very general formulation, one divides the R1-R2 plane into two regions by an arbitrary closed curve. One defines a point within one region as representing an occupied bond; otherwise it is a vacant bond. The study of three different rules under this general formulation indicates that the percolation threshold always varies continuously. This threshold has two limiting values, one is pc(sq), the percolation threshold for the ordinary site percolation on the square lattice, and the other is unity. The approach of the percolation threshold to its limiting values is characterized by two exponents. In a special case, all lattice sites are occupied by disks of random radii R ∈ {0, R0} and a percolation transition is observed with R0 as the control variable, similar to the site occupation probability.

  8. Percolation model with an additional source of disorder.

    PubMed

    Kundu, Sumanta; Manna, S S

    2016-06-01

    The ranges of transmission of the mobiles in a mobile ad hoc network are not uniform in reality. They are affected by the temperature fluctuation in air, obstruction due to the solid objects, even the humidity difference in the environment, etc. How the varying range of transmission of the individual active elements affects the global connectivity in the network may be an important practical question to ask. Here a model of percolation phenomena, with an additional source of disorder, is introduced for a theoretical understanding of this problem. As in ordinary percolation, sites of a square lattice are occupied randomly with probability p. Each occupied site is then assigned a circular disk of random value R for its radius. A bond is defined to be occupied if and only if the radii R1 and R2 of the disks centered at the ends satisfy a certain predefined condition. In a very general formulation, one divides the R1-R2 plane into two regions by an arbitrary closed curve. One defines a point within one region as representing an occupied bond; otherwise it is a vacant bond. The study of three different rules under this general formulation indicates that the percolation threshold always varies continuously. This threshold has two limiting values, one is pc(sq), the percolation threshold for the ordinary site percolation on the square lattice, and the other is unity. The approach of the percolation threshold to its limiting values is characterized by two exponents. In a special case, all lattice sites are occupied by disks of random radii R ∈ {0, R0} and a percolation transition is observed with R0 as the control variable, similar to the site occupation probability. PMID:27415234
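
    The general formulation can be sketched with a small Monte Carlo: occupy sites with probability p, draw a uniform radius for each, open a bond between occupied neighbors when a chosen rule on (R1, R2) holds, and test for a spanning cluster with union-find. The "disks touch" rule used below (R1 + R2 ≥ 0.5, open with probability 0.875 for uniform radii) is one illustrative choice, not necessarily one of the paper's three rules:

```python
import random

def spans(L, p, rule):
    """Site-diluted square lattice with random disk radii: a bond between
    neighboring occupied sites is open when rule(R1, R2) holds. Returns
    True if an open cluster connects the left and right edges."""
    random.seed(7)                       # fixed seed for reproducibility
    occ = [[random.random() < p for _ in range(L)] for _ in range(L)]
    rad = [[random.random() for _ in range(L)] for _ in range(L)]

    parent = list(range(L * L + 2))      # union-find, plus two edge nodes
    LEFT, RIGHT = L * L, L * L + 1

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for i in range(L):
        for j in range(L):
            if not occ[i][j]:
                continue
            if j == 0:
                union(i * L + j, LEFT)
            if j == L - 1:
                union(i * L + j, RIGHT)
            for di, dj in ((0, 1), (1, 0)):
                ni, nj = i + di, j + dj
                if ni < L and nj < L and occ[ni][nj] and \
                        rule(rad[i][j], rad[ni][nj]):
                    union(i * L + j, ni * L + nj)
    return find(LEFT) == find(RIGHT)

# Illustrative bond rule (an assumption): open when the disks overlap.
touch = lambda r1, r2: r1 + r2 >= 0.5
print(spans(60, 0.95, touch), spans(60, 0.30, touch))
```

    Sweeping p between these two values and locating where spanning first appears gives the rule-dependent percolation threshold discussed in the abstract.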

  9. Hyperbolic value addition and general models of animal choice.

    PubMed

    Mazur, J E

    2001-01-01

    Three mathematical models of choice--the contextual-choice model (R. Grace, 1994), delay-reduction theory (N. Squires & E. Fantino, 1971), and a new model called the hyperbolic value-added model--were compared in their ability to predict the results from a wide variety of experiments with animal subjects. When supplied with 2 or 3 free parameters, all 3 models made fairly accurate predictions for a large set of experiments that used concurrent-chain procedures. One advantage of the hyperbolic value-added model is that it is derived from a simpler model that makes accurate predictions for many experiments using discrete-trial adjusting-delay procedures. Some results favor the hyperbolic value-added model and delay-reduction theory over the contextual-choice model, but more data are needed from choice situations for which the models make distinctly different predictions.
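
    The "simpler model" underlying the hyperbolic value-added account is Mazur's hyperbolic discounting function, V = A/(1 + kD). A short sketch of its signature prediction, preference reversal under a common added delay (the amounts, delays, and k value are illustrative):

```python
def hyperbolic_value(amount, delay, k=1.0):
    """Mazur's hyperbolic discounting: V = A / (1 + k*D)."""
    return amount / (1.0 + k * delay)

# Small-soon reward (2 units at delay 1) vs large-late (5 units at delay 6).
small_soon = lambda t: hyperbolic_value(2.0, 1.0 + t)
large_late = lambda t: hyperbolic_value(5.0, 6.0 + t)

# Up close the small-soon option wins; adding a common 10-unit front-end
# delay reverses the preference, the classic hyperbolic prediction.
print(small_soon(0) > large_late(0))     # True: 1.0 vs ~0.71
print(small_soon(10) > large_late(10))   # False: ~0.17 vs ~0.31
```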

  10. Planetary surface dating from crater size-frequency distribution measurements: Poisson timing analysis

    NASA Astrophysics Data System (ADS)

    Michael, G. G.; Kneissl, T.; Neesemann, A.

    2016-10-01

    The predictions of crater chronology models have customarily been evaluated by dividing a crater population into discrete diameter intervals, plotting the crater density for each, and finding a best-fit model isochron, with the uncertainty in the procedure being assessed using 1/√n estimates, where n is the number of craters in an interval. This approach yields an approximate evaluation of the model predictions. The approximation is good until n becomes small, hence the often-posed question: what is the minimum number of craters for an adequate prediction? This work introduces an approach for exact evaluation of a crater chronology model using Poisson statistics and Bayesian inference, expressing the result as a likelihood function with an intrinsic uncertainty. We demonstrate that even in the case of no craters at all, a meaningful likelihood function can be obtained. Thus there is no required minimum count: there is only varying uncertainty, which can be well described. We recommend that the Poisson timing analysis should be preferred over binning/best-fit approaches. Additionally, we introduce a new notation to make it consistently clear that crater chronology model calibration errors are inseparable from stated crater model ages and their associated statistical errors.
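
    The core of the Poisson timing analysis is that the likelihood of a candidate age is the Poisson probability of the observed crater count given the chronology model's expected count, with no binning required. A minimal sketch with a made-up linear chronology (the rate, area, and count are illustrative):

```python
import math

def poisson_log_likelihood(n, expected):
    """log L for observing n craters when the chronology model predicts
    `expected` craters (Poisson; the n! term is dropped as a constant).
    Works for n = 0 too: log L = -expected, maximized at the youngest age."""
    return n * math.log(expected) - expected

# Hypothetical linear chronology (made-up numbers): expected craters
# above the diameter cutoff = rate * age * area.
rate, area = 0.5, 10.0                      # per km^2 per Gyr; km^2
ages = [a / 100 for a in range(1, 401)]     # candidate ages, 0.01-4.00 Gyr
n_obs = 15
best = max(ages, key=lambda t: poisson_log_likelihood(n_obs, rate * t * area))
print(best)     # 3.0, where the expected count equals the observed 15
```

    Normalizing the likelihood over the age grid yields the full uncertainty description the abstract advocates, rather than a single best-fit value.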

  11. Soft elasticity of RNA gels and negative Poisson ratio

    NASA Astrophysics Data System (ADS)

    Ahsan, Amir; Rudnick, Joseph; Bruinsma, Robijn

    2007-12-01

    We propose a model for the elastic properties of RNA gels. The model predicts anomalous elastic properties in the form of a negative Poisson ratio and shape instabilities. The anomalous elasticity is generated by the non-Gaussian force-deformation relation of single-stranded RNA. The effect is greatly magnified by broken rotational symmetry produced by double-stranded sequences and the concomitant soft modes of uniaxial elastomers.

  12. Numerical methods for the Poisson-Fermi equation in electrolytes

    NASA Astrophysics Data System (ADS)

    Liu, Jinn-Liang

    2013-08-01

    The Poisson-Fermi equation proposed by Bazant, Storey, and Kornyshev [Phys. Rev. Lett. 106 (2011) 046102] for ionic liquids is applied to and numerically studied for electrolytes and biological ion channels in three-dimensional space. This is a fourth-order nonlinear PDE that deals with both steric and correlation effects of all ions and solvent molecules involved in a model system. The Fermi distribution follows from classical lattice models of configurational entropy of finite-size ions and solvent molecules and hence avoids the long-standing problem of unphysical divergence predicted by the Gouy-Chapman model at large potentials due to the Boltzmann distribution of point charges. The equation reduces to Poisson-Boltzmann if the correlation length vanishes. A simplified matched interface and boundary method exhibiting optimal convergence is first developed for this equation by using a gramicidin A channel model that illustrates challenging issues associated with the geometric singularities of molecular surfaces of channel proteins in realistic 3D simulations. Various numerical methods then follow to tackle a range of numerical problems concerning the fourth-order term, nonlinearity, stability, efficiency, and effectiveness. The most significant feature of the Poisson-Fermi equation, namely, its inclusion of steric and correlation effects, is demonstrated by showing good agreement with Monte Carlo simulation data for a charged wall model and an L-type calcium channel model.

  13. [Transformations of parameters in the generalized Poisson distribution for test data analysis].

    PubMed

    Ogasawara, H

    1996-02-01

    The generalized Poisson distribution is a distribution which approximates various forms of mixtures of Poisson distributions. The mean and variance of the generalized Poisson distribution, which are simple functions of the two parameters of the distribution, are more useful than the original parameters in test data analysis. Therefore, we adopted two types of transformations of parameters. The first model has new parameters of mean and standard deviation. The second model has new parameters of mean and the variance-to-mean ratio. An example indicates that the transformed parameters are convenient for understanding the properties of the data. PMID:8935832
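    The two reparameterizations can be sketched directly, assuming Consul's common form of the generalized Poisson distribution with parameters theta > 0 and |lambda| < 1 (an assumption here; the paper may use a different parameterization), for which mean = theta/(1-lambda) and variance = theta/(1-lambda)^3:

```python
def gp_mean_var(theta, lam):
    # Mean and variance of the generalized Poisson distribution
    # (Consul's parameterization, assumed for illustration).
    mean = theta / (1.0 - lam)
    var = theta / (1.0 - lam) ** 3
    return mean, var

def gp_from_mean_ratio(mean, ratio):
    # Inverse map for the second transformation in the abstract:
    # parameters (mean, variance/mean), where ratio = 1 / (1 - lam)**2.
    lam = 1.0 - ratio ** -0.5
    theta = mean * (1.0 - lam)
    return theta, lam

m, v = gp_mean_var(2.0, 0.5)                 # -> (4.0, 16.0)
theta, lam = gp_from_mean_ratio(m, v / m)    # recovers (2.0, 0.5)
```

    The round trip shows that either transformed pair carries the same information as the original parameters.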

  14. Poisson brackets for densities of functionals

    NASA Astrophysics Data System (ADS)

    Dickey, Leonid A.

    In the theory of integrable systems and in other field theories one usually deals with Poisson brackets between functionals. The latter are integrals of densities. Densities are defined up to divergence (boundary) terms. A question arises: is it possible to define a reasonable Poisson bracket for the densities themselves? A general theory was suggested by Barnich, Fulp, Lada, Markl and Stasheff, which led them to the notion of a strong homotopy Lie algebra (sh Lie algebra). We give a few concrete examples.

  15. Using Generalized Additive Models to Analyze Single-Case Designs

    ERIC Educational Resources Information Center

    Shadish, William; Sullivan, Kristynn

    2013-01-01

    Many analyses for single-case designs (SCDs)--including nearly all the effect size indicators-- currently assume no trend in the data. Regression and multilevel models allow for trend, but usually test only linear trend and have no principled way of knowing if higher order trends should be represented in the model. This paper shows how Generalized…

  16. Additive Manufacturing of Anatomical Models from Computed Tomography Scan Data.

    PubMed

    Gür, Y

    2014-12-01

    The purpose of the study presented here was to investigate the manufacturability of human anatomical models from Computed Tomography (CT) scan data via a 3D desktop printer which uses fused deposition modelling (FDM) technology. First, Digital Imaging and Communications in Medicine (DICOM) CT scan data were converted to 3D Standard Triangle Language (STL) format by using the InVesalius digital imaging program. Once this STL file is obtained, a 3D physical version of the anatomical model can be fabricated by a desktop 3D FDM printer. As a case study, a patient's skull CT scan data were considered, and a tangible version of the skull was manufactured by a 3D FDM desktop printer. During the 3D printing process, the skull was built using acrylonitrile-butadiene-styrene (ABS) co-polymer plastic. The printed model showed that 3D FDM printing technology is able to fabricate anatomical models with high accuracy. As a result, the skull model can be used for preoperative surgical planning, medical training activities, and implant design and simulation, demonstrating the potential of FDM technology in the medical field. It will also improve communication between medical staff and patients. The current results indicate that a 3D desktop printer which uses FDM technology can be used to obtain accurate anatomical models. PMID:26336695

  18. How much additional model complexity do the use of catchment hydrological signatures, additional data and expert knowledge warrant?

    NASA Astrophysics Data System (ADS)

    Hrachowitz, M.; Fovet, O.; RUIZ, L.; Gascuel-odoux, C.; Savenije, H.

    2013-12-01

    In the frequent absence of sufficient suitable data to constrain hydrological models, it is not uncommon to represent catchments at a range of scales by lumped model set-ups. Although process heterogeneity can average out at the catchment scale to generate simple catchment-integrated responses whose general flow features can frequently be reproduced by lumped models, these models often fail to capture details of the flow pattern, as well as catchment-internal dynamics such as groundwater level changes, to a sufficient degree, resulting in considerable predictive uncertainty. Traditionally, models are constrained by only one or two objective functions, which does not warrant more than a handful of parameters if elevated predictive uncertainty is to be avoided, thereby preventing more complex model set-ups that account for increased process heterogeneity. In this study we tested how much additional process heterogeneity is warranted in models when the calibration strategy is optimized using additional data and expert knowledge. Long-term time series of flow and groundwater levels for small nested experimental catchments in French Brittany, with considerable differences in geology, topography and flow regime, were used to test which degree of model process heterogeneity is warranted with increased availability of information. In a first step, as a benchmark, the system was treated as one lumped entity and the model was trained based only on its ability to reproduce the hydrograph. Although the overall modelled flow generally reflected the observed flow response quite well, the internal system dynamics could not be reproduced. In further steps the complexity of this model was gradually increased, first by adding a separate riparian reservoir to the lumped set-up and then by a semi-distributed set-up allowing for independent, parallel model structures representing the contrasting nested catchments. Although calibration performance increased

  19. Causal Poisson bracket via deformation quantization

    NASA Astrophysics Data System (ADS)

    Berra-Montiel, Jasel; Molgado, Alberto; Palacios-García, César D.

    2016-06-01

    Starting with the well-defined product of quantum fields at two spacetime points, we explore an associated Poisson structure for classical field theories within the deformation quantization formalism. We realize that the induced star-product is naturally related to the standard Moyal product through an appropriate causal Green’s functions connecting points in the space of classical solutions to the equations of motion. Our results resemble the Peierls-DeWitt bracket that has been analyzed in the multisymplectic context. Once our star-product is defined, we are able to apply the Wigner-Weyl map in order to introduce a generalized version of Wick’s theorem. Finally, we include some examples to explicitly test our method: the real scalar field, the bosonic string and a physically motivated nonlinear particle model. For the field theoretic models, we have encountered causal generalizations of the creation/annihilation relations, and also a causal generalization of the Virasoro algebra for the bosonic string. For the nonlinear particle case, we use the approximate solution in terms of the Green’s function, in order to construct a well-behaved causal bracket.

  20. Addition of Diffusion Model to MELCOR and Comparison with Data

    SciTech Connect

    Brad Merrill; Richard Moore; Chang Oh

    2004-06-01

    A chemical diffusion model was incorporated into the thermal-hydraulics package of the MELCOR Severe Accident code (Reference 1) for analyzing air ingress events for a very high temperature gas-cooled reactor.

  1. Non-additive model for specific heat of electrons

    NASA Astrophysics Data System (ADS)

    Anselmo, D. H. A. L.; Vasconcelos, M. S.; Silva, R.; Mello, V. D.

    2016-10-01

    By using the non-additive Tsallis entropy Sq, we demonstrate numerically that one-dimensional quasicrystals, whose energy spectra are multifractal Cantor sets, are characterized by an entropic parameter, and we calculate the electronic specific heat. In our method we consider energy spectra calculated using the one-dimensional tight-binding Schrödinger equation, with the bands (or levels) scaled onto the [0, 1] interval. The Tsallis formalism is applied to the energy spectra of Fibonacci and double-period one-dimensional quasiperiodic lattices. We analytically obtain an expression for the specific heat that we consider more appropriate for calculating this quantity in these quasiperiodic structures.

  2. Additional Research Needs to Support the GENII Biosphere Models

    SciTech Connect

    Napier, Bruce A.; Snyder, Sandra F.; Arimescu, Carmen

    2013-11-30

    In the course of evaluating the current parameter needs for the GENII Version 2 code (Snyder et al. 2013), areas of possible improvement for both the data and the underlying models have been identified. As the data review was implemented, PNNL staff identified areas where the models can be improved, both to accommodate the locally significant pathways identified and to incorporate newer models. The areas are general data needs for the existing models and improved formulations for the pathway models. It is recommended that priorities be set by NRC staff to guide selection of the most useful improvements in a cost-effective manner. Suggestions are made based on relatively easy and inexpensive changes as well as longer-term, more costly studies. In the short term, there are several improved model formulations that could be applied to the GENII suite of codes to make them more generally useful:
    • Implementation of the separation of the translocation and weathering processes
    • Implementation of an improved model for carbon-14 from non-atmospheric sources
    • Implementation of radon exposure pathway models
    • Development of a KML processor for the output report generator module, so that data calculated on a grid can be superimposed upon digital maps for easier presentation and display
    • Implementation of marine mammal models (manatees, seals, walrus, whales, etc.)
    Data needs in the longer term require extensive (and potentially expensive) research. Before picking any one radionuclide or food type, NRC staff should perform an in-house review of current and anticipated environmental analyses to select “dominant” radionuclides of interest to allow setting of cost-effective priorities for radionuclide- and pathway-specific research. These include:
    • soil-to-plant uptake studies for oranges and other citrus fruits, and
    • development of models for evaluation of radionuclide concentration in highly-processed foods such as oils and sugars.
    Finally, renewed

  3. Piezoelectrically-induced ultrasonic lubrication by way of Poisson effect

    NASA Astrophysics Data System (ADS)

    Dong, Sheng; Dapino, Marcelo J.

    2012-04-01

    It has been shown that the coefficient of dynamic friction between two surfaces decreases when ultrasonic vibrations are superimposed on the macroscopic sliding velocity. Instead of longitudinal vibrations, this paper focuses on the lateral contractions and expansions of an object in and around the half-wavelength node region. This lateral motion is due to the Poisson effect (ratio of lateral strain to longitudinal strain) present in all materials. We numerically and experimentally investigate Poisson-effect ultrasonic lubrication. A motor effect region is identified in which the effective friction force becomes negative as the vibratory waves drive the motion of the interface. Outside of the motor region, friction lubrication is observed with between 30% and 60% friction force reduction. A "stick-slip" contact model associated with horn kinematics is presented for simulation and analysis purposes. The model accurately matches the experiments for normal loads under 120 N.

  4. The mechanical influences of the graded distribution in the cross-sectional shape, the stiffness and Poisson's ratio of palm branches.

    PubMed

    Liu, Wangyu; Wang, Ningling; Jiang, Xiaoyong; Peng, Yujian

    2016-07-01

    The branching system plays an important role in maintaining the survival of palm trees. Due to the nature of monocots, no additional vascular bundles can be added in the palm tree tissue as it ages. Therefore, the changing of the cross-sectional area in the palm branch creates a graded distribution in the mechanical properties of the tissue. In the present work, this graded distribution in the tissue mechanical properties from sheath to petiole was studied with a multi-scale modeling approach. Then, the entire palm branch was reconstructed and analyzed using finite element methods. The variation of the elastic modulus can lower the level of mechanical stress in the sheath and also allow the branch to have smaller values of pressure on the other branches. Under impact loading, the enhanced frictional dissipation at the surfaces of adjacent branches benefits from the large Poisson's ratio of the sheath tissue. These findings can help to link the wind resistance ability of palm trees to their graded materials distribution in the branching system. PMID:26807774

  5. Universal Poisson Statistics of mRNAs with Complex Decay Pathways.

    PubMed

    Thattai, Mukund

    2016-01-19

    Messenger RNA (mRNA) dynamics in single cells are often modeled as a memoryless birth-death process with a constant probability per unit time that an mRNA molecule is synthesized or degraded. This predicts a Poisson steady-state distribution of mRNA number, in close agreement with experiments. This is surprising, since mRNA decay is known to be a complex process. The paradox is resolved by realizing that the Poisson steady state generalizes to arbitrary mRNA lifetime distributions. A mapping between mRNA dynamics and queueing theory highlights an identifiability problem: a measured Poisson steady state is consistent with a large variety of microscopic models. Here, I provide a rigorous and intuitive explanation for the universality of the Poisson steady state. I show that the mRNA birth-death process and its complex decay variants all take the form of the familiar Poisson law of rare events, under a nonlinear rescaling of time. As a corollary, not only steady-states but also transients are Poisson distributed. Deviations from the Poisson form occur only under two conditions, promoter fluctuations leading to transcriptional bursts or nonindependent degradation of mRNA molecules. These results place severe limits on the power of single-cell experiments to probe microscopic mechanisms, and they highlight the need for single-molecule measurements. PMID:26743048
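    The memoryless birth-death baseline discussed above is easy to check with a short stochastic (Gillespie) simulation; the synthesis and degradation rates below are arbitrary illustrative values, not taken from the paper. At steady state the Fano factor (variance/mean) should sit near 1, as the Poisson distribution requires.

```python
import random

def gillespie_ssa(k=10.0, g=1.0, t_end=2000.0, seed=1):
    # Simulate mRNA number n: synthesis at rate k, degradation at rate g*n.
    rng = random.Random(seed)
    t, n = 0.0, 0
    samples, next_sample = [], 100.0   # skip the transient, then sample each unit time
    while t < t_end:
        birth, death = k, g * n
        total = birth + death
        t += rng.expovariate(total)    # exponential waiting time to next reaction
        while next_sample < t and next_sample < t_end:
            samples.append(n)          # record the state before the event fires
            next_sample += 1.0
        if rng.random() < birth / total:
            n += 1
        else:
            n -= 1
    return samples

samples = gillespie_ssa()
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
fano = var / mean   # close to 1 for a Poisson steady state (mean here is k/g = 10)
```

    Deviations of the Fano factor from 1 would signal one of the two exceptions the abstract names: transcriptional bursting or nonindependent degradation.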

  6. On third Poisson structure of KdV equation

    SciTech Connect

    Gorsky, A.; Marshakov, A.; Orlov, A.

    1995-12-01

    The third Poisson structure of the KdV equation in terms of canonical "free fields" and the reduced WZNW model is discussed. We prove that it is "diagonalized" in the Lagrange variables which were used before in the formulation of 2d gravity. We propose a quantum path integral for the KdV equation based on this representation.

  7. Events in time: Basic analysis of Poisson data

    SciTech Connect

    Engelhardt, M.E.

    1994-09-01

    The report presents basic statistical methods for analyzing Poisson data, such as the number of events in some period of time. It gives point estimates, confidence intervals, and Bayesian intervals for the rate of occurrence per unit of time. It shows how to compare subsets of the data, both graphically and by statistical tests, and how to look for trends in time. It presents a compound model for the case in which the rate of occurrence varies randomly. Examples and SAS programs are given.
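    The point estimate and exact two-sided confidence interval for a Poisson rate can be sketched with the standard library alone, finding the interval endpoints by bisection on the Poisson tail probabilities (equivalent to the classical chi-square/Garwood interval). The function names are mine, not the report's.

```python
import math

def poisson_cdf(k, mu):
    # P(X <= k) for X ~ Poisson(mu), by direct summation of the pmf.
    term = total = math.exp(-mu)
    for i in range(1, k + 1):
        term *= mu / i
        total += term
    return total

def exact_rate_ci(n, t, alpha=0.05):
    # Point estimate and exact two-sided CI for a Poisson rate,
    # given n events observed over exposure t.
    def root(f, hi):
        lo = 0.0
        for _ in range(100):          # bisection on a decreasing function f
            mid = 0.5 * (lo + hi)
            if f(mid) > 0.0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)
    span = 10.0 * n + 10.0
    lower = 0.0
    if n > 0:   # lower limit solves P(X >= n; mu) = alpha/2
        lower = root(lambda mu: alpha / 2 - (1.0 - poisson_cdf(n - 1, mu)), span) / t
    # upper limit solves P(X <= n; mu) = alpha/2
    upper = root(lambda mu: poisson_cdf(n, mu) - alpha / 2, span) / t
    return n / t, lower, upper

rate, low, high = exact_rate_ci(5, 10.0)   # e.g. 5 events in 10 time units
```

    For n = 5 and t = 10 this reproduces the familiar chi-square endpoints (about 0.162 and 1.167 per unit time) around the point estimate 0.5.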

  8. A generalized Poisson solver for first-principles device simulations

    NASA Astrophysics Data System (ADS)

    Bani-Hashemian, Mohammad Hossein; Brück, Sascha; Luisier, Mathieu; VandeVondele, Joost

    2016-01-01

    Electronic structure calculations of atomistic systems based on density functional theory involve solving the Poisson equation. In this paper, we present a plane-wave based algorithm for solving the generalized Poisson equation subject to periodic or homogeneous Neumann conditions on the boundaries of the simulation cell and Dirichlet type conditions imposed at arbitrary subdomains. In this way, source, drain, and gate voltages can be imposed across atomistic models of electronic devices. Dirichlet conditions are enforced as constraints in a variational framework giving rise to a saddle point problem. The resulting system of equations is then solved using a stationary iterative method in which the generalized Poisson operator is preconditioned with the standard Laplace operator. The solver can make use of any sufficiently smooth function modelling the dielectric constant, including density dependent dielectric continuum models. For all the boundary conditions, consistent derivatives are available and molecular dynamics simulations can be performed. The convergence behaviour of the scheme is investigated and its capabilities are demonstrated.
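    The solver strategy described above can be conveyed with a 1D finite-difference toy: apply the variable-coefficient (generalized Poisson) operator, solve a constant-coefficient Laplace problem for each residual, and iterate with damping. The grid, the dielectric profile, and the damping factor below are illustrative assumptions, not the paper's plane-wave setting.

```python
def solve_tridiag(sub, diag, sup, rhs):
    # Thomas algorithm for a tridiagonal system (sub[0] and sup[-1] unused).
    n = len(rhs)
    b, d = diag[:], rhs[:]
    for i in range(1, n):
        m = sub[i] / b[i - 1]
        b[i] -= m * sup[i - 1]
        d[i] -= m * d[i - 1]
    x = [0.0] * n
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - sup[i] * x[i + 1]) / b[i]
    return x

def generalized_poisson_1d(eps, f, n=64, iters=200):
    # Solve d/dx(eps(x) du/dx) = f on (0,1), u(0)=u(1)=0, by a stationary
    # iteration preconditioned with the standard Laplace operator.
    h = 1.0 / n
    ep = [eps((j + 0.5) * h) for j in range(n)]   # eps at cell midpoints
    m = n - 1                                     # interior unknowns
    def apply_A(u):
        out = []
        for i in range(1, n):
            ul = u[i - 2] if i > 1 else 0.0
            ur = u[i] if i < m else 0.0
            out.append((ep[i] * (ur - u[i - 1])
                        - ep[i - 1] * (u[i - 1] - ul)) / h ** 2)
        return out
    b = [f(i * h) for i in range(1, n)]
    sub = [1.0 / h ** 2] * m                      # Laplace preconditioner
    diag = [-2.0 / h ** 2] * m
    sup = [1.0 / h ** 2] * m
    omega = 2.0 / (min(ep) + max(ep))             # damping for convergence
    u = [0.0] * m
    for _ in range(iters):
        r = [bi - ai for bi, ai in zip(b, apply_A(u))]
        e = solve_tridiag(sub, diag, sup, r)
        u = [ui + omega * ei for ui, ei in zip(u, e)]
    return u

# Manufactured test: eps = 1 + x, exact solution u = x(1 - x) gives f = -1 - 4x.
u = generalized_poisson_1d(lambda x: 1.0 + x, lambda x: -1.0 - 4.0 * x)
```

    In 1D the operator is itself tridiagonal, so preconditioning is only pedagogical here; in the paper's 3D setting the Laplace preconditioner is what makes the stationary iteration practical.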

  10. The addition of algebraic turbulence modeling to program LAURA

    NASA Astrophysics Data System (ADS)

    Cheatwood, F. Mcneil; Thompson, R. A.

    1993-04-01

    The Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) is modified to allow the calculation of turbulent flows. This is accomplished using the Cebeci-Smith and Baldwin-Lomax eddy-viscosity models in conjunction with the thin-layer Navier-Stokes options of the program. Turbulent calculations can be performed for both perfect-gas and equilibrium flows. However, a requirement of the models is that the flow be attached. It is seen that for slender bodies, adequate resolution of the boundary-layer gradients may require more cells in the normal direction than a laminar solution, even when grid stretching is employed. Results for axisymmetric and three-dimensional flows are presented. Comparison with experimental data and other numerical results reveal generally good agreement, except in the regions of detached flow.

  11. Finite-size effects and percolation properties of Poisson geometries

    NASA Astrophysics Data System (ADS)

    Larmier, C.; Dumonteil, E.; Malvagi, F.; Mazzolo, A.; Zoia, A.

    2016-07-01

    Random tessellations of the space represent a class of prototype models of heterogeneous media, which are central in several applications in physics, engineering, and life sciences. In this work, we investigate the statistical properties of d-dimensional isotropic Poisson geometries by resorting to Monte Carlo simulation, with special emphasis on the case d = 3. We first analyze the behavior of the key features of these stochastic geometries as a function of the dimension d and the linear size L of the domain. Then, we consider the case of Poisson binary mixtures, where the polyhedra are assigned two labels with complementary probabilities. For this latter class of random geometries, we numerically characterize the percolation threshold, the strength of the percolating cluster, and the average cluster size.
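    As rough intuition for the binary-mixture percolation question, here is a much-simplified lattice analogue: sites are colored with complementary probabilities p and 1-p, and a union-find structure checks whether the colored phase spans the domain. This is site percolation on a square lattice, not a Poisson tessellation, so the threshold differs from the paper's; all sizes and probabilities are illustrative.

```python
import random

def percolates(n, p, rng):
    # Does the occupied phase connect the top row to the bottom row?
    grid = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    parent = list(range(n * n + 2))        # two virtual nodes: top, bottom
    TOP, BOT = n * n, n * n + 1
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a
    def union(a, b):
        parent[find(a)] = find(b)
    for y in range(n):
        for x in range(n):
            if not grid[y][x]:
                continue
            i = y * n + x
            if y == 0:
                union(i, TOP)
            if y == n - 1:
                union(i, BOT)
            if y > 0 and grid[y - 1][x]:
                union(i, (y - 1) * n + x)
            if x > 0 and grid[y][x - 1]:
                union(i, y * n + x - 1)
    return find(TOP) == find(BOT)

rng = random.Random(7)
# Spanning probability well below and well above the site threshold (~0.593):
low = sum(percolates(24, 0.35, rng) for _ in range(40)) / 40
high = sum(percolates(24, 0.80, rng) for _ in range(40)) / 40
```

    The sharp rise of the spanning probability between the two regimes is the finite-size signature of the percolation threshold the paper characterizes for Poisson geometries.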

  13. Software reliability: Additional investigations into modeling with replicated experiments

    NASA Technical Reports Server (NTRS)

    Nagel, P. M.; Schotz, F. M.; Skirvan, J. A.

    1984-01-01

    The effects of programmer experience level, different program usage distributions, and programming languages are explored. All these factors affect performance, and some tentative relational hypotheses are presented. An analytic framework for replicated and non-replicated (traditional) software experiments is presented. A method of obtaining an upper bound on the error rate of the next error is proposed. The method was validated empirically by comparing forecasts with actual data. In all 14 cases the bound exceeded the observed parameter, albeit somewhat conservatively. Two other forecasting methods are proposed and compared to observed results. Although it is demonstrated within this framework that stages are neither independent nor exponentially distributed, empirical estimates show that the exponential assumption is nearly valid for all but the extreme tails of the distribution. Except for the dependence in the stage probabilities, Cox's model approximates, to a degree, what is observed.

  14. Matrix decomposition graphics processing unit solver for Poisson image editing

    NASA Astrophysics Data System (ADS)

    Lei, Zhao; Wei, Li

    2012-10-01

    In recent years, gradient-domain methods have been widely discussed in the image processing field, including seamless cloning and image stitching. These algorithms are commonly carried out by solving a large sparse linear system: the Poisson equation. However, solving the Poisson equation is a computationally and memory intensive task, which makes it unsuitable for real-time image editing. A new matrix decomposition graphics processing unit (GPU) solver (MDGS) is proposed to settle the problem. A matrix decomposition method is used to distribute the work among GPU threads, so that MDGS takes full advantage of the computing power of current GPUs. Additionally, MDGS is a hybrid solver (combining direct and iterative techniques) with a two-level architecture. These enable MDGS to generate solutions identical to those of the common Poisson methods and to achieve a high convergence rate in most cases. This approach is advantageous in terms of parallelizability and low memory consumption, enabling real-time image processing across a wide range of applications.
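    The linear system in question is the discrete Poisson equation with the source image's Laplacian as the guidance term. A minimal CPU reference version using Gauss-Seidel iteration is sketched below; MDGS targets the same system on the GPU, and the names and toy images here are illustrative only.

```python
def poisson_clone(dst, src, mask, iters=2000):
    # Seamless cloning: solve lap(u) = lap(src) inside the masked region,
    # with u fixed to dst on the boundary, by Gauss-Seidel sweeps.
    h, w = len(dst), len(dst[0])
    u = [row[:] for row in dst]
    for _ in range(iters):
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                if not mask[y][x]:
                    continue
                # Negative Laplacian of the source acts as the guidance divergence.
                div_g = (4 * src[y][x] - src[y - 1][x] - src[y + 1][x]
                         - src[y][x - 1] - src[y][x + 1])
                u[y][x] = (u[y - 1][x] + u[y + 1][x] + u[y][x - 1]
                           + u[y][x + 1] + div_g) / 4.0
    return u

# Demo: the interior of dst starts at zero, but the solver reconstructs the
# quadratic ramp of src from its gradients plus the dst boundary values.
H = W = 8
src = [[float(x * x) for x in range(W)] for _ in range(H)]
dst = [[src[y][x] if y in (0, H - 1) or x in (0, W - 1) else 0.0
        for x in range(W)] for y in range(H)]
mask = [[1 if 0 < y < H - 1 and 0 < x < W - 1 else 0 for x in range(W)]
        for y in range(H)]
out = poisson_clone(dst, src, mask)
```

    Gauss-Seidel converges slowly on large images, which is exactly the bottleneck that motivates GPU solvers such as MDGS.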

  15. Additional Developments in Atmosphere Revitalization Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Coker, Robert F.; Knox, James C.; Cummings, Ramona; Brooks, Thomas; Schunk, Richard G.; Gomez, Carlos

    2013-01-01

    NASA's Advanced Exploration Systems (AES) program is developing prototype systems, demonstrating key capabilities, and validating operational concepts for future human missions beyond Earth orbit. These forays beyond the confines of earth's gravity will place unprecedented demands on launch systems. They must launch the supplies needed to sustain a crew over longer periods for exploration missions beyond earth's moon. Thus all spacecraft systems, including those for the separation of metabolic carbon dioxide and water from a crewed vehicle, must be minimized with respect to mass, power, and volume. Emphasis is also placed on system robustness both to minimize replacement parts and ensure crew safety when a quick return to earth is not possible. Current efforts are focused on improving the current state-of-the-art systems utilizing fixed beds of sorbent pellets by evaluating structured sorbents, seeking more robust pelletized sorbents, and examining alternate bed configurations to improve system efficiency and reliability. These development efforts combine testing of sub-scale systems and multi-physics computer simulations to evaluate candidate approaches, select the best performing options, and optimize the configuration of the selected approach. This paper describes the continuing development of atmosphere revitalization models and simulations in support of the Atmosphere Revitalization Recovery and Environmental Monitoring (ARREM) project within the AES program.

  17. Time-dependent solutions for a stochastic model of gene expression with molecule production in the form of a compound Poisson process

    NASA Astrophysics Data System (ADS)

    Jedrak, Jakub; Ochab-Marcinek, Anna

    2016-09-01

    We study a stochastic model of gene expression, in which protein production has a form of random bursts whose size distribution is arbitrary, whereas protein decay is a first-order reaction. We find exact analytical expressions for the time evolution of the cumulant-generating function for the most general case when both the burst size probability distribution and the model parameters depend on time in an arbitrary (e.g., oscillatory) manner, and for arbitrary initial conditions. We show that in the case of periodic external activation and constant protein degradation rate, the response of the gene is analogous to the resistor-capacitor low-pass filter, where slow oscillations of the external driving have a greater effect on gene expression than the fast ones. We also demonstrate that the nth cumulant of the protein number distribution depends on the nth moment of the burst size distribution. We use these results to show that different measures of noise (coefficient of variation, Fano factor, fractional change of variance) may vary in time in a different manner. Therefore, any biological hypothesis of evolutionary optimization based on the nonmonotonic dependence of a chosen measure of noise on time must justify why it assumes that biological evolution quantifies noise in that particular way. Finally, we show that not only for exponentially distributed burst sizes but also for a wider class of burst size distributions (e.g., Dirac delta and gamma) the control of gene expression level by burst frequency modulation gives rise to proportional scaling of variance of the protein number distribution to its mean, whereas the control by amplitude modulation implies proportionality of protein number variance to the mean squared.

  18. Locating multiple interacting quantitative trait loci with the zero-inflated generalized Poisson regression.

    PubMed

    Erhardt, Vinzenz; Bogdan, Malgorzata; Czado, Claudia

    2010-01-01

    We consider the problem of locating multiple interacting quantitative trait loci (QTL) influencing traits measured in counts. In many applications the distribution of the count variable has a spike at zero. Zero-inflated generalized Poisson regression (ZIGPR) allows for an additional probability mass at zero and hence an improvement in the detection of significant loci. Classical model selection criteria often overestimate the QTL number. Therefore, modified versions of the Bayesian Information Criterion (mBIC and EBIC) were successfully used for QTL mapping. We apply these criteria based on ZIGPR as well as simpler models. An extensive simulation study shows their good power in detecting QTL while controlling the false discovery rate. We illustrate how the inability of the Poisson distribution to account for over-dispersion leads to an overestimation of the QTL number and hence strongly discourages its application for identifying factors influencing count data. The proposed method is used to analyze the mice gallstone data of Lyons et al. (2003). Our results suggest the existence of a novel QTL on chromosome 4 interacting with another QTL previously identified on chromosome 5. We provide the corresponding code in R.
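The extra probability mass at zero that ZIGPR adds can be made concrete with a short sketch of the zero-inflated generalized Poisson pmf. Consul's parameterization of the generalized Poisson distribution is assumed here (the paper may parameterize differently), and the parameter values are arbitrary.

```python
from math import exp, lgamma, log

def gen_poisson_pmf(y, theta, lam):
    """Generalized Poisson pmf (Consul's form):
    P(Y=y) = theta * (theta + lam*y)^(y-1) * exp(-theta - lam*y) / y!,
    a proper distribution for theta > 0 and 0 <= lam < 1."""
    if theta + lam * y <= 0:
        return 0.0
    logp = (log(theta) + (y - 1) * log(theta + lam * y)
            - (theta + lam * y) - lgamma(y + 1))
    return exp(logp)

def zigp_pmf(y, omega, theta, lam):
    """Zero-inflated generalized Poisson: extra point mass omega at zero."""
    base = gen_poisson_pmf(y, theta, lam)
    return omega + (1 - omega) * base if y == 0 else (1 - omega) * base

# The pmf should (numerically) sum to 1, and the spike at zero is visible:
total = sum(zigp_pmf(y, omega=0.3, theta=2.0, lam=0.4) for y in range(200))
print(total, zigp_pmf(0, 0.3, 2.0, 0.4), gen_poisson_pmf(0, 2.0, 0.4))
```

The dispersion parameter `lam` inflates the variance (to theta/(1-lam)^3 against a mean of theta/(1-lam)), which is how ZIGPR accommodates the over-dispersion that plain Poisson regression cannot.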

  19. Efficient gradient projection methods for edge-preserving removal of Poisson noise

    NASA Astrophysics Data System (ADS)

    Zanella, R.; Boccacci, P.; Zanni, L.; Bertero, M.

    2009-04-01

    Several methods based on different image models have been proposed and developed for image denoising. Some of them, such as total variation (TV) and wavelet thresholding, are based on the assumption of additive Gaussian noise. Recently the TV approach has been extended to the case of Poisson noise, a model describing the effect of photon counting in applications such as emission tomography, microscopy and astronomy. For the removal of this kind of noise we consider an approach based on a constrained optimization problem, with an objective function describing TV and other edge-preserving regularizations of the Kullback-Leibler divergence. We introduce a new discrepancy principle for the choice of the regularization parameter, which is justified by the statistical properties of the Poisson noise. For solving the optimization problem we propose a particular form of a general scaled gradient projection (SGP) method, recently introduced for image deblurring. We derive the form of the scaling from a decomposition of the gradient of the regularization functional into a positive and a negative part. The beneficial effect of the scaling is proved by means of numerical simulations, showing that the performance of the proposed form of SGP is superior to that of the most efficient gradient projection methods. An extended numerical analysis of the dependence of the solution on the regularization parameter is also performed to test the effectiveness of the proposed discrepancy principle.
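The statistical fact behind such a discrepancy principle can be illustrated directly: for y ~ Poisson(x), the generalized Kullback-Leibler divergence between the data and the true intensities satisfies E[2 D(y, x) / n] ≈ 1, which suggests tuning the regularization parameter until the restored image meets this value. A minimal numpy check follows; the intensity range is an arbitrary assumption.

```python
import numpy as np

def gen_kl(y, x):
    """Generalized Kullback-Leibler divergence D(y, x) = sum(y*log(y/x) + x - y),
    with the convention 0*log(0) = 0."""
    y = np.asarray(y, dtype=float)
    term = np.where(y > 0, y * np.log(np.where(y > 0, y / x, 1.0)), 0.0)
    return np.sum(term + x - y)

rng = np.random.default_rng(42)
x_true = rng.uniform(5.0, 50.0, size=100_000)  # "exact" image intensities
y = rng.poisson(x_true)                        # Poisson-corrupted observations

# For Poisson noise, 2*D(y, x_true)/n concentrates near 1.
ratio = 2 * gen_kl(y, x_true) / y.size
print(ratio)
```

Intuitively each pixel contributes roughly (y - x)^2 / (2x), a half chi-squared unit, so the normalized divergence plays the role that the residual norm plays in the Gaussian (Morozov) discrepancy principle.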

  20. Transferability of regional permafrost disturbance susceptibility modelling using generalized linear and generalized additive models

    NASA Astrophysics Data System (ADS)

    Rudy, Ashley C. A.; Lamoureux, Scott F.; Treitz, Paul; van Ewijk, Karin Y.

    2016-07-01

    To effectively assess and mitigate risk of permafrost disturbance, disturbance-prone areas can be predicted through the application of susceptibility models. In this study we developed regional susceptibility models for permafrost disturbances using a field disturbance inventory to test the transferability of the model to a broader region in the Canadian High Arctic. Resulting maps of susceptibility were then used to explore the effect of terrain variables on the occurrence of disturbances within this region. To account for a large range of landscape characteristics, the model was calibrated using two locations: Sabine Peninsula, Melville Island, NU, and Fosheim Peninsula, Ellesmere Island, NU. Spatial patterns of disturbance were predicted with a generalized linear model (GLM) and generalized additive model (GAM), each calibrated using disturbed and randomized undisturbed locations from both sites and GIS-derived terrain predictor variables including slope, potential incoming solar radiation, wetness index, topographic position index, elevation, and distance to water. Each model was validated for the Sabine and Fosheim Peninsulas using independent data sets, while the transferability of the model to an independent site was assessed at Cape Bounty, Melville Island, NU. The regional GLM and GAM validated well for both calibration sites (Sabine and Fosheim), with area under the receiver operating characteristic curve (AUROC) values > 0.79. Both models were applied directly to Cape Bounty without calibration and validated equally with AUROCs of 0.76; however, each model predicted disturbed and undisturbed samples differently. Additionally, the sensitivity of the transferred model was assessed using data sets with different sample sizes. Results indicated that models based on larger sample sizes transferred more consistently and captured the variability within the terrain attributes in the respective study areas. Terrain attributes associated with the initiation of disturbances were

  1. Poisson Noise Removal in Spherical Multichannel Images: Application to Fermi data

    NASA Astrophysics Data System (ADS)

    Schmitt, Jérémy; Starck, Jean-Luc; Fadili, Jalal; Digel, Seth

    2012-03-01

    The Fermi Gamma-ray Space Telescope, which was launched by NASA in June 2008, is a powerful space observatory which studies the high-energy gamma-ray sky [5]. Fermi's main instrument, the Large Area Telescope (LAT), detects photons in an energy range between 20 MeV and >300 GeV. The LAT is much more sensitive than its predecessor, the Energetic Gamma Ray Experiment Telescope (EGRET) on the Compton Gamma Ray Observatory, and is expected to find several thousand gamma-ray point sources, an order of magnitude more than EGRET found [13]. Even with its relatively large acceptance (∼2 m² sr), the number of photons detected by the LAT outside the Galactic plane and away from intense sources is relatively low, and the sky overall has a diffuse glow from cosmic-ray interactions with interstellar gas and low-energy photons that makes a background against which point sources need to be detected. In addition, the per-photon angular resolution of the LAT is relatively poor and strongly energy dependent, ranging from >10° at 20 MeV to ∼0.1° above 100 GeV. Consequently, the spherical photon count images obtained by Fermi are degraded by fluctuations in the number of detected photons. This kind of noise is strongly signal dependent: in the brightest parts of the image, such as the Galactic plane or the brightest sources, there are many photons per pixel, so the photon noise is low. Outside the Galactic plane, the number of photons per pixel is low, which means that the photon noise is high. Such signal-dependent noise cannot be accurately modeled by a Gaussian distribution. The basic photon-imaging model assumes that the number of detected photons at each pixel location is Poisson distributed. More specifically, the image is considered a realization of an inhomogeneous Poisson process. This statistical noise makes source detection more difficult, so it is highly desirable to have an efficient denoising method for spherical
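The signal dependence described here is just the Poisson variance-equals-mean property; a quick numpy illustration (the count levels and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# For Poisson counts the variance equals the mean, so the *relative* noise
# (std/mean) falls off as 1/sqrt(mean): bright regions such as the Galactic
# plane are comparatively clean, while faint high-latitude pixels are noisy.
for mean_counts in (1.0, 10.0, 100.0):
    counts = rng.poisson(mean_counts, size=200_000)
    rel = counts.std() / counts.mean()
    print(f"mean={mean_counts:6.1f}  var={counts.var():7.2f}  rel. noise={rel:.3f}")
```

This is why a single Gaussian noise level cannot describe the whole sky map: the noise amplitude varies pixel by pixel with the underlying intensity.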

  2. Modeling the cardiovascular system using a nonlinear additive autoregressive model with exogenous input

    NASA Astrophysics Data System (ADS)

    Riedl, M.; Suhrbier, A.; Malberg, H.; Penzel, T.; Bretthauer, G.; Kurths, J.; Wessel, N.

    2008-07-01

    The parameters of heart rate variability and blood pressure variability have proved to be useful analytical tools in cardiovascular physics and medicine. Model-based analysis of these variabilities additionally leads to new prognostic information about mechanisms behind regulations in the cardiovascular system. In this paper, we analyze the complex interaction between heart rate, systolic blood pressure, and respiration by nonparametrically fitted nonlinear additive autoregressive models with external inputs. To this end, we consider measurements of healthy persons and patients suffering from obstructive sleep apnea syndrome (OSAS), with and without hypertension. It is shown that the proposed nonlinear models are capable of describing short-term fluctuations in heart rate as well as systolic blood pressure significantly better than similar linear ones, which confirms the assumption of nonlinearly controlled heart rate and blood pressure. Furthermore, the comparison of the nonlinear and linear approaches reveals that the heart rate and blood pressure variability in healthy subjects is caused by a higher level of noise as well as nonlinearity than in patients suffering from OSAS. The residue analysis points to a further source of heart rate and blood pressure variability in healthy subjects, in addition to heart rate, systolic blood pressure, and respiration. Comparison of the nonlinear models within and among the different groups of subjects suggests the ability to discriminate the cohorts, which could lead to a stratification of hypertension risk in OSAS patients.

  3. Computation of confidence intervals for Poisson processes

    NASA Astrophysics Data System (ADS)

    Aguilar-Saavedra, J. A.

    2000-07-01

    We present an algorithm which allows a fast numerical computation of Feldman-Cousins confidence intervals for Poisson processes, even when the number of background events is relatively large. This algorithm incorporates an appropriate treatment of the singularities that arise as a consequence of the discreteness of the variable.
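For readers who want to experiment, the Feldman-Cousins construction for a Poisson signal with known background can be sketched in a few dozen lines. This is a plain grid scan, not the fast algorithm with explicit singularity treatment that the paper describes; the pmf helper, grid range, and step size are assumptions.

```python
import math

def pois_pmf(n, mu):
    """Poisson pmf computed in log space for numerical stability."""
    if mu <= 0:
        return 1.0 if n == 0 else 0.0
    return math.exp(n * math.log(mu) - mu - math.lgamma(n + 1))

def fc_acceptance(mu, b, cl=0.90, n_max=100):
    """Feldman-Cousins acceptance region in n for signal mu over background b:
    rank n by the likelihood ratio P(n|mu+b)/P(n|mu_best+b), mu_best = max(0, n-b),
    and accept in decreasing order of the ratio until the coverage reaches cl."""
    ranked = []
    for n in range(n_max + 1):
        p = pois_pmf(n, mu + b)
        mu_best = max(0.0, n - b)
        ranked.append((p / pois_pmf(n, mu_best + b), p, n))
    ranked.sort(reverse=True)
    accepted, cov = set(), 0.0
    for _, p, n in ranked:
        accepted.add(n)
        cov += p
        if cov >= cl:
            break
    return accepted

def fc_interval(n_obs, b, cl=0.90, mu_max=15.0, step=0.01):
    """Confidence interval: all mu on the grid whose acceptance region contains n_obs."""
    mus = [i * step for i in range(int(mu_max / step) + 1)]
    inside = [mu for mu in mus if n_obs in fc_acceptance(mu, b, cl)]
    return min(inside), max(inside)

# Classic check: n_obs = 0 with expected background b = 3.0 at 90% C.L.
lo, hi = fc_interval(0, 3.0)
print(lo, hi)
```

For this input the published tables give an upper limit near 1.08, one of the well-known features of the unified approach (the interval never becomes empty for small observed counts).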

  4. Easy Demonstration of the Poisson Spot

    ERIC Educational Resources Information Center

    Gluck, Paul

    2010-01-01

    Many physics teachers have a set of slides of single, double and multiple slits to show their students the phenomena of interference and diffraction. Thomas Young's historic experiments with double slits were indeed a milestone in proving the wave nature of light. But another experiment, namely the Poisson spot, was also important historically and…

  5. On the Determination of Poisson Statistics for Haystack Radar Observations of Orbital Debris

    NASA Technical Reports Server (NTRS)

    Stokely, Christopher L.; Benbrook, James R.; Horstman, Matt

    2007-01-01

    A convenient and powerful method is used to determine if radar detections of orbital debris are observed according to Poisson statistics. This is done by analyzing the time interval between detection events. For Poisson statistics, the probability distribution of the time interval between events is shown to be an exponential distribution. This distribution is a special case of the Erlang distribution that is used in estimating traffic loads on telecommunication networks. Poisson statistics form the basis of many orbital debris models but the statistical basis of these models has not been clearly demonstrated empirically until now. Interestingly, during the fiscal year 2003 observations with the Haystack radar in a fixed staring mode, there are no statistically significant deviations observed from that expected with Poisson statistics, either independent or dependent of altitude or inclination. One would potentially expect some significant clustering of events in time as a result of satellite breakups, but the presence of Poisson statistics indicates that such debris disperse rapidly with respect to Haystack's very narrow radar beam. An exception to Poisson statistics is observed in the months following the intentional breakup of the Fengyun satellite in January 2007.
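The exponential inter-arrival property invoked here is easy to demonstrate non-circularly: conditioned on the total count, the event epochs of a homogeneous Poisson process are i.i.d. uniform on the observation window, yet the gaps between the sorted epochs come out exponential. The rate and window length below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(7)
rate, T = 2.0, 50_000.0

n = rng.poisson(rate * T)                  # total number of detections
events = np.sort(rng.uniform(0.0, T, n))   # given n, epochs are i.i.d. uniform
dt = np.diff(events)                       # time intervals between detections

# Exponential(rate) predictions: mean 1/rate, coefficient of variation 1,
# and survival P(dt > 1/rate) = exp(-1) ≈ 0.368.
print(dt.mean(), dt.std() / dt.mean(), np.mean(dt > 1.0 / rate))
```

A coefficient of variation far from 1 (or a heavy tail in the gap histogram) would signal clustering, which is exactly the kind of deviation seen after the Fengyun breakup.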

  6. Theory of multicolor lattice gas - A cellular automaton Poisson solver

    NASA Technical Reports Server (NTRS)

    Chen, H.; Matthaeus, W. H.; Klein, L. W.

    1990-01-01

    In the present class of cellular-automaton models, a quiescent hydrodynamic lattice gas carries multiple-valued passive labels termed 'colors', and lattice collisions change individual particle colors while preserving net color. The rigorous proofs of the multicolor lattice gases' essential features are rendered more tractable by an equivalent subparticle representation in which the color is represented by underlying two-state 'spins'. Schemes for the introduction of Dirichlet and Neumann boundary conditions are described, and two illustrative numerical test cases are used to verify the theory. The lattice gas model is shown to be equivalent to solving a Poisson equation.

  7. Theoretical Analysis of Radiographic Images by Nonstationary Poisson Processes

    NASA Astrophysics Data System (ADS)

    Tanaka, Kazuo; Yamada, Isao; Uchida, Suguru

    1980-12-01

    This paper deals with the noise analysis of radiographic images obtained in the usual fluorescent screen-film system. The theory of nonstationary Poisson processes is applied to the analysis of the radiographic images containing the object information. The ensemble averages, the autocorrelation functions, and the Wiener spectrum densities of the light-energy distribution at the fluorescent screen and of the film optical-density distribution are obtained. The detection characteristics of the system are evaluated theoretically. Numerical examples of the one-dimensional image are shown and the results are compared with those obtained under the assumption that the object image is related to the background noise by the additive process.

  8. The solution of large multi-dimensional Poisson problems

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1974-01-01

    The Buneman algorithm for solving Poisson problems can be adapted to solve large Poisson problems on computers with a rotating drum memory so that the computation is done with very little time lost due to rotational latency of the drum.

  9. Polarizable Atomic Multipole Solutes in a Poisson-Boltzmann Continuum

    PubMed Central

    Schnieders, Michael J.; Baker, Nathan A.; Ren, Pengyu; Ponder, Jay W.

    2008-01-01

    Modeling the change in the electrostatics of organic molecules upon moving from vacuum into solvent, due to polarization, has long been an interesting problem. In vacuum, experimental values for the dipole moments and polarizabilities of small, rigid molecules are known to high accuracy; however, it has generally been difficult to determine these quantities for a polar molecule in water. A theoretical approach introduced by Onsager used vacuum properties of small molecules, including polarizability, dipole moment and size, to predict experimentally known permittivities of neat liquids via the Poisson equation. Since this important advance in understanding the condensed phase, a large number of computational methods have been developed to study solutes embedded in a continuum via numerical solutions to the Poisson-Boltzmann equation (PBE). Only recently have the classical force fields used for studying biomolecules begun to include explicit polarization in their functional forms. Here we describe the theory underlying a newly developed Polarizable Multipole Poisson-Boltzmann (PMPB) continuum electrostatics model, which builds on the Atomic Multipole Optimized Energetics for Biomolecular Applications (AMOEBA) force field. As an application of the PMPB methodology, results are presented for several small folded proteins studied by molecular dynamics in explicit water as well as embedded in the PMPB continuum. The dipole moment of each protein increased on average by a factor of 1.27 in explicit water and 1.26 in continuum solvent. The essentially identical electrostatic response in both models suggests that PMPB electrostatics offers an efficient alternative to sampling explicit solvent molecules for a variety of interesting applications, including binding energies, conformational analysis, and pKa prediction. Introduction of 150 mM salt lowered the electrostatic solvation energy by 2–13 kcal/mol, depending on the formal charge of the protein, but had only a

  10. A Poisson process approximation for generalized K-S confidence regions

    NASA Technical Reports Server (NTRS)

    Arsham, H.; Miller, D. R.

    1982-01-01

    One-sided confidence regions for continuous cumulative distribution functions are constructed using empirical cumulative distribution functions and the generalized Kolmogorov-Smirnov distance. The band width of such regions becomes narrower in the right or left tail of the distribution. To avoid tedious computation of confidence levels and critical values, an approximation based on the Poisson process is introduced. This approximation provides a conservative confidence region; moreover, the approximation error decreases monotonically to 0 as sample size increases. Critical values necessary for implementation are given. Applications are made to the areas of risk analysis, investment modeling, reliability assessment, and analysis of fault tolerant systems.

  11. Poisson filtering of laser ranging data

    NASA Technical Reports Server (NTRS)

    Ricklefs, Randall L.; Shelus, Peter J.

    1993-01-01

    The filtering of data in a high noise, low signal strength environment is a situation encountered routinely in lunar laser ranging (LLR) and, to a lesser extent, in artificial satellite laser ranging (SLR). The use of Poisson statistics as one of the tools for filtering LLR data is described first in a historical context. The more recent application of this statistical technique to noisy SLR data is also described.

  12. Stabilities for nonisentropic Euler-Poisson equations.

    PubMed

    Cheung, Ka Luen; Wong, Sen

    2015-01-01

    We establish stability and blowup results for the nonisentropic Euler-Poisson equations by the energy method. By analysing the second inertia, we show that the classical solutions of the system with attractive forces blow up in finite time in some special dimensions when the energy is negative. Moreover, we obtain stability results for the system in the cases of attractive and repulsive forces.

  13. Computation of solar perturbations with Poisson series

    NASA Technical Reports Server (NTRS)

    Broucke, R.

    1974-01-01

    Description of a project for computing first-order perturbations of natural or artificial satellites by integrating the equations of motion on a computer with automatic Poisson series expansions. A basic feature of the method of solution is that the classical variation-of-parameters formulation is used rather than rectangular coordinates. However, the variation-of-parameters formulation uses the three rectangular components of the disturbing force rather than the classical disturbing function, so that there is no problem in expanding the disturbing function in series. Another characteristic of the variation-of-parameters formulation employed is that six rather unusual variables are used in order to avoid singularities at zero eccentricity and zero (or 90 deg) inclination. The integration process starts by assuming that all the orbit elements present on the right-hand sides of the equations of motion are constants. These right-hand sides are then simple Poisson series which can be obtained with the use of the Bessel expansions of the two-body problem in conjunction with certain iteration methods. These Poisson series can then be integrated term by term, and a first-order solution is obtained.

  14. First- and second-order Poisson spots

    NASA Astrophysics Data System (ADS)

    Kelly, William R.; Shirley, Eric L.; Migdall, Alan L.; Polyakov, Sergey V.; Hendrix, Kurt

    2009-08-01

    Although Thomas Young is generally given credit for being the first to provide evidence against Newton's corpuscular theory of light, it was Augustin Fresnel who first stated the modern theory of diffraction. We review the history surrounding Fresnel's 1818 paper and the role of the Poisson spot in the associated controversy. We next discuss the boundary-diffraction-wave approach to calculating diffraction effects and show how it can reduce the complexity of calculating diffraction patterns. We briefly discuss a generalization of this approach that reduces the dimensionality of integrals needed to calculate the complete diffraction pattern of any order diffraction effect. We repeat earlier demonstrations of the conventional Poisson spot and discuss an experimental setup for demonstrating an analogous phenomenon that we call a "second-order Poisson spot." Several features of the diffraction pattern can be explained simply by considering the path lengths of singly and doubly bent paths and distinguishing between first- and second-order diffraction effects related to such paths, respectively.

  15. Nonparametric Bayesian Segmentation of a Multivariate Inhomogeneous Space-Time Poisson Process.

    PubMed

    Ding, Mingtao; He, Lihan; Dunson, David; Carin, Lawrence

    2012-12-01

    A nonparametric Bayesian model is proposed for segmenting time-evolving multivariate spatial point process data. An inhomogeneous Poisson process is assumed, with a logistic stick-breaking process (LSBP) used to encourage piecewise-constant spatial Poisson intensities. The LSBP explicitly favors spatially contiguous segments, and infers the number of segments based on the observed data. The temporal dynamics of the segmentation and of the Poisson intensities are modeled with exponential correlation in time, implemented in the form of a first-order autoregressive model for uniformly sampled discrete data, and via a Gaussian process with an exponential kernel for general temporal sampling. We consider and compare two different inference techniques: a Markov chain Monte Carlo sampler, which has relatively high computational complexity; and an approximate and efficient variational Bayesian analysis. The model is demonstrated with a simulated example and a real example of space-time crime events in Cincinnati, Ohio, USA. PMID:23741284

  16. Additive Manufacturing Modeling and Simulation A Literature Review for Electron Beam Free Form Fabrication

    NASA Technical Reports Server (NTRS)

    Seufzer, William J.

    2014-01-01

    Additive manufacturing is coming into industrial use and has several desirable attributes. Control of the deposition remains a complex challenge, and so this literature review was initiated to capture current modeling efforts in the field of additive manufacturing. This paper summarizes about 10 years of modeling and simulation related to both welding and additive manufacturing. The goals were to learn who is doing what in modeling and simulation, to summarize various approaches taken to create models, and to identify research gaps. Later sections in the report summarize implications for closed-loop-control of the process, implications for local research efforts, and implications for local modeling efforts.

  17. On the singularity of the Vlasov-Poisson system

    SciTech Connect

    Zheng, Jian; Qin, Hong

    2013-09-15

    The Vlasov-Poisson system can be viewed as the collisionless limit of the corresponding Fokker-Planck-Poisson system. It is reasonable to expect that the result of Landau damping can also be obtained from the Fokker-Planck-Poisson system when the collision frequency ν approaches zero. However, we show that the collisionless Vlasov-Poisson system is a singular limit of the collisional Fokker-Planck-Poisson system, and Landau's result can be recovered only as the ν approaches zero from the positive side.

  18. On the Singularity of the Vlasov-Poisson System

    SciTech Connect

    Zheng, Jian; Qin, Hong

    2013-04-26

    The Vlasov-Poisson system can be viewed as the collisionless limit of the corresponding Fokker-Planck-Poisson system. It is reasonable to expect that the result of Landau damping can also be obtained from the Fokker-Planck-Poisson system when the collision frequency ν approaches zero. However, we show that the collisionless Vlasov-Poisson system is a singular limit of the collisional Fokker-Planck-Poisson system, and Landau's result can be recovered only as ν approaches zero from the positive side.

  19. Compositions, Random Sums and Continued Random Fractions of Poisson and Fractional Poisson Processes

    NASA Astrophysics Data System (ADS)

    Orsingher, Enzo; Polito, Federico

    2012-08-01

    In this paper we consider the relation between random sums and compositions of different processes. In particular, for independent Poisson processes N_α(t), N_β(t), t > 0, we have the equality in distribution N_α(N_β(t)) = Σ_{j=1}^{N_β(t)} X_j, where the X_j are Poisson random variables. We present a series of similar cases, where the outer process is Poisson with different inner processes. We highlight generalisations of these results where the external process is infinitely divisible. A section of the paper concerns compositions of the form N_α(τ_k^ν), ν ∈ (0,1], where τ_k^ν is the inverse of the fractional Poisson process, and we show how these compositions can be represented as random sums. Furthermore we study compositions of the form Θ(N(t)), t > 0, which can be represented as random products. The last section is devoted to studying continued fractions of Cauchy random variables with a Poisson number of levels. We evaluate the exact distribution and derive the scale parameter in terms of ratios of Fibonacci numbers.
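The first identity can be checked by simulation: a small numpy sketch with assumed rates α = 2 and β = 3 compares the composed process N_α(N_β(t)) against the corresponding random sum of i.i.d. Poisson(α) terms.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta, t, n_rep = 2.0, 3.0, 1.0, 100_000

# Left-hand side: N_alpha evaluated at the random "time" N_beta(t);
# conditionally on the inner value m, the count is Poisson(alpha * m).
inner = rng.poisson(beta * t, n_rep)
composed = rng.poisson(alpha * inner)

# Right-hand side: a sum of N_beta(t) i.i.d. Poisson(alpha) variables.
random_sum = np.array([rng.poisson(alpha, k).sum()
                       for k in rng.poisson(beta * t, n_rep)])

# Both sides share mean alpha*beta*t = 6 and variance beta*t*(alpha + alpha^2) = 18.
print(composed.mean(), random_sum.mean())
print(composed.var(), random_sum.var())
```

The variance exceeding the mean (18 vs. 6) shows that the composition is over-dispersed relative to a plain Poisson variable, the hallmark of a compound (random-sum) structure.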

  20. On the fractal characterization of Paretian Poisson processes

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo I.; Sokolov, Igor M.

    2012-06-01

    Paretian Poisson processes are Poisson processes which are defined on the positive half-line, have maximal points, and are quantified by power-law intensities. Paretian Poisson processes are elemental in statistical physics, and are the bedrock of a host of power-law statistics ranging from Pareto's law to anomalous diffusion. In this paper we establish evenness-based fractal characterizations of Paretian Poisson processes. Considering an array of socioeconomic evenness-based measures of statistical heterogeneity, we show that, amongst the realm of Poisson processes which are defined on the positive half-line and have maximal points, Paretian Poisson processes are the unique class of 'fractal processes' exhibiting scale-invariance. The results established in this paper are diametric to previous results asserting that the scale-invariance of Poisson processes, with respect to physical randomness-based measures of statistical heterogeneity, is characterized by exponential Poissonian intensities.

  1. Multiprocessing and Correction Algorithm of 3D-models for Additive Manufacturing

    NASA Astrophysics Data System (ADS)

    Anamova, R. R.; Zelenov, S. V.; Kuprikov, M. U.; Ripetskiy, A. V.

    2016-07-01

    This article addresses matters related to additive manufacturing preparation. A layer-by-layer model presentation was developed on the basis of a routing method. Methods for correction of errors in the layer-by-layer model presentation were developed. A multiprocessing algorithm for forming an additive manufacturing batch file was realized.

  2. Stationary and non-stationary occurrences of miniature end plate potentials are well described as stationary and non-stationary Poisson processes in the mollusc Navanax inermis.

    PubMed

    Cappell, M S; Spray, D C; Bennett, M V

    1988-06-28

    Protractor muscles in the gastropod mollusc Navanax inermis exhibit typical spontaneous miniature end plate potentials with mean amplitude 1.71 +/- 1.19 (standard deviation) mV. The evoked end plate potential is quantized, with a quantum equal to the miniature end plate potential amplitude. When their rate is stationary, occurrence of miniature end plate potentials is a random, Poisson process. When non-stationary, spontaneous miniature end plate potential occurrence is a non-stationary Poisson process, a Poisson process with the mean frequency changing with time. This extends the random Poisson model for miniature end plate potentials to the frequently observed non-stationary occurrence. Reported deviations from a Poisson process can sometimes be accounted for by the non-stationary Poisson process and more complex models, such as clustered release, are not always needed.
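The non-stationary Poisson model described here, a Poisson process whose mean frequency changes with time, can be simulated with the standard Lewis-Shedler thinning technique (not a method from this paper); the sinusoidal rate below is purely illustrative.

```python
import numpy as np

def thinning(lam, lam_max, T, rng):
    """Lewis-Shedler thinning: simulate an inhomogeneous Poisson process
    with intensity lam(t) <= lam_max on [0, T]. Candidates are drawn from a
    homogeneous rate-lam_max process and kept with probability lam(t)/lam_max."""
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > T:
            return np.array(events)
        if rng.uniform() < lam(t) / lam_max:
            events.append(t)

rng = np.random.default_rng(3)
lam = lambda t: 5.0 + 4.0 * np.sin(2 * np.pi * t / 10.0)  # slowly varying MEPP rate
events = thinning(lam, lam_max=9.0, T=2000.0, rng=rng)

# The expected count is the integral of lam over [0, T]; the sine term
# averages out over whole periods, leaving 5 * T = 10000 here.
print(len(events))
```

Locally (over windows short compared to the rate variation) the process still looks like a homogeneous Poisson process, which is why deviations from stationary Poisson statistics need not imply clustered release.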

  3. Hydrodynamic limit of Wigner-Poisson kinetic theory: Revisited

    SciTech Connect

    Akbari-Moghanjoughi, M.

    2015-02-15

    In this paper, we revisit the hydrodynamic limit of the Langmuir wave dispersion relation based on the Wigner-Poisson model in connection with that obtained directly from the original Lindhard dielectric function based on the random-phase approximation. It is observed that the (fourth-order) expansion of the exact Lindhard dielectric constant correctly reduces to the hydrodynamic dispersion relation with an additional fourth-order term, besides that caused by the quantum diffraction effect. It is also revealed that the generalized Lindhard dielectric theory accounts for the recently discovered Shukla-Eliasson attractive potential (SEAP). However, the expansion of the exact Lindhard static dielectric function leads to a k⁴ term of different magnitude than that obtained from the linearized quantum hydrodynamics model. It is shown that a correction factor of 1/9 should be included in the term arising from the quantum Bohm potential of the momentum balance equation in the fluid model in order for a correct plasma dielectric response treatment. Finally, it is observed that the long-range oscillatory screening potential (Friedel oscillations) of type cos(2k_F r)/r³, which is a consequence of the divergence of the dielectric function at the point k = 2k_F in a quantum plasma, arises due to the finiteness of the Fermi wavenumber and is smeared out in the limit of very high electron number densities, typical of white dwarfs and neutron stars. In the very low electron number-density regime, typical of semiconductors and metals, where the Friedel oscillation wavelength becomes much larger than the interparticle distances, the SEAP appears with a much deeper potential valley. It is remarked that the fourth-order approximate Lindhard dielectric constant approaches that of the linearized quantum hydrodynamics in the limit of very high electron number density. By evaluating the imaginary part of the Lindhard dielectric function, it is shown that the

  4. Toward negative Poisson's ratio composites: Investigation of the auxetic behavior of fibrous networks

    NASA Astrophysics Data System (ADS)

    Tatlier, Mehmet Seha

    Random fibrous networks can be found among natural and synthetic materials. Some of these random fibrous networks possess a negative Poisson's ratio and are commonly called auxetic materials. The governing mechanisms behind this counterintuitive property in random networks are yet to be understood, and this kind of auxetic material remains widely under-explored. Moreover, most synthetic auxetic materials suffer from low strength. This shortcoming can be rectified by developing high-strength auxetic composites. Embedding auxetic random fibrous networks in a polymer matrix is an attractive alternative route to the manufacture of auxetic composites; however, before such an approach can be developed, a methodology for designing fibrous networks with the desired negative Poisson's ratios must first be established. This requires an understanding of the factors which bring about negative Poisson's ratios in these materials. In this study, a numerical model is presented in order to investigate the auxetic behavior of compressed random fiber networks. Finite element analyses of three-dimensional stochastic fiber networks were performed to gain insight into the effects of parameters such as network anisotropy, network density, and degree of network compression on the out-of-plane Poisson's ratio and Young's modulus. The simulation results suggest that compression is the critical parameter that gives rise to a negative Poisson's ratio, while anisotropy significantly promotes the auxetic behavior. This model can be utilized to design fibrous auxetic materials and to evaluate the feasibility of developing auxetic composites by using auxetic fibrous networks as the reinforcing layer.

  5. Influence of the Poisson Ratio on the Natural Frequencies of Stepped-Thickness Circular Plate

    NASA Astrophysics Data System (ADS)

    AL-JUMAILY, A. M.; JAMEEL, K.

    2000-07-01

    The natural frequencies of simply supported and clamped, stepped-thickness plates are determined using classical plate solutions with exact continuity conditions at the step. The effect of incorporating the Poisson ratio in the continuity conditions on the natural frequencies for nodal diameters 0, 1 and nodal interior circle numbers 0, 1, 2 is thoroughly investigated. For engineering applications, a design criterion is proposed for simply supported and clamped plates based on an approximate linear model for the natural frequencies. Since the literature lacks experimental results for this type of plate, experimental results are presented for four models with two Poisson's ratios and are shown to be consistent with the proposed criterion.

  6. Reentrant Origami-Based Metamaterials with Negative Poisson's Ratio and Bistability

    NASA Astrophysics Data System (ADS)

    Yasuda, H.; Yang, J.

    2015-05-01

    We investigate the unique mechanical properties of reentrant 3D origami structures based on the Tachi-Miura polyhedron (TMP). We explore their potential usage as mechanical metamaterials that exhibit tunable negative Poisson's ratio and structural bistability simultaneously. We show analytically and experimentally that the Poisson's ratio changes from positive to negative and vice versa during the folding motion. In addition, we verify the bistable mechanism of the reentrant 3D TMP under rigid origami configurations without relying on the buckling motions of planar origami surfaces. This study forms a foundation for designing and constructing TMP-based metamaterials in the form of bellowslike structures for engineering applications.

  7. Ductile Titanium Alloy with Low Poisson's Ratio

    SciTech Connect

    Hao, Y. L.; Li, S. J.; Sun, B. B.; Sui, M. L.; Yang, R.

    2007-05-25

    We report a ductile {beta}-type titanium alloy with body-centered cubic (bcc) crystal structure having a low Poisson's ratio of 0.14. The almost identical ultralow bulk and shear moduli of {approx}24 GPa combined with an ultrahigh strength of {approx}0.9 GPa contribute to easy crystal distortion due to much-weakened chemical bonding of atoms in the crystal, leading to significant elastic softening in tension and elastic hardening in compression. The peculiar elastic and plastic deformation behaviors of the alloy are interpreted as a result of approaching the elastic limit of the bcc crystal under applied stress.

  8. Normal and compound poisson approximations for pattern occurrences in NGS reads.

    PubMed

    Zhai, Zhiyuan; Reinert, Gesine; Song, Kai; Waterman, Michael S; Luan, Yihui; Sun, Fengzhu

    2012-06-01

    Next generation sequencing (NGS) technologies are now widely used in many biological studies. In NGS, sequence reads are randomly sampled from the genome sequence of interest. Most computational approaches for NGS data first map the reads to the genome and then analyze the data based on the mapped reads. Since many organisms have unknown genome sequences and many reads cannot be uniquely mapped to the genomes even if the genome sequences are known, alternative analytical methods are needed for the study of NGS data. Here we suggest using word patterns to analyze NGS data. Word pattern counting (the study of the probabilistic distribution of the number of occurrences of word patterns in one or multiple long sequences) has played an important role in molecular sequence analysis. However, no studies are available on the distribution of the number of occurrences of word patterns in NGS reads. In this article, we build probabilistic models for the background sequence and the sampling process of the sequence reads from the genome. Based on the models, we provide normal and compound Poisson approximations for the number of occurrences of word patterns from the sequence reads, with bounds on the approximation error. The main challenge is to consider the randomness in generating the long background sequence, as well as in the sampling of the reads using NGS. We show the accuracy of these approximations under a variety of conditions for different patterns with various characteristics. Under realistic assumptions, the compound Poisson approximation seems to outperform the normal approximation in most situations. These approximate distributions can be used to evaluate the statistical significance of the occurrence of patterns from NGS data. The theory and the computational algorithm for calculating the approximate distributions are then used to analyze ChIP-Seq data using transcription factor GABP. Software is available online (www-rcf.usc.edu/

  9. Prescription-induced jump distributions in multiplicative Poisson processes.

    PubMed

    Suweis, Samir; Porporato, Amilcare; Rinaldo, Andrea; Maritan, Amos

    2011-06-01

    Generalized Langevin equations (GLE) with multiplicative white Poisson noise pose the usual prescription dilemma leading to different evolution equations (master equations) for the probability distribution. Contrary to the case of multiplicative Gaussian white noise, the Stratonovich prescription does not correspond to the well-known midpoint (or any other intermediate) prescription. By introducing an inertial term in the GLE, we show that the Itô and Stratonovich prescriptions naturally arise depending on two time scales, one induced by the inertial term and the other determined by the jump event. We also show that, when the multiplicative noise is linear in the random variable, one prescription can be made equivalent to the other by a suitable transformation in the jump probability distribution. We apply these results to a recently proposed stochastic model describing the dynamics of primary soil salinization, in which the salt mass balance within the soil root zone requires the analysis of different prescriptions arising from the resulting stochastic differential equation forced by multiplicative white Poisson noise, the features of which are tailored to the characters of the daily precipitation. A method is finally suggested to infer the most appropriate prescription from the data.

  12. A technique for determining the Poisson`s ratio of thin films

    SciTech Connect

    Krulevitch, P.

    1996-04-18

    The theory and experimental approach for a new technique used to determine the Poisson`s ratio of thin films are presented. The method involves taking the ratio of curvatures of cantilever beams and plates micromachined out of the film of interest. Curvature is induced by a through-thickness variation in residual stress, or by depositing a thin film under residual stress onto the beams and plates. This approach is made practical by the fact that the two curvatures are the only required experimental parameters, and small calibration errors cancel when the ratio is taken. To confirm the accuracy of the technique, it was tested on a 2.5 {mu}m thick film of single crystal silicon. Micromachined beams 1 mm long by 100 {mu}m wide and plates 700 {mu}m by 700 {mu}m were coated with 35 nm of gold and the curvatures were measured with a scanning optical profilometer. For the orientation tested ([100] film normal, [011] beam axis, [0{bar 1}1] contraction direction) silicon`s Poisson`s ratio is 0.064, and the measured result was 0.066 {+-} 0.043. The uncertainty in this technique is due primarily to variation in the measured curvatures, and should range from {+-} 0.02 to 0.04 with proper measurement technique.

  13. Lattice sums arising from the Poisson equation

    NASA Astrophysics Data System (ADS)

    Bailey, D. H.; Borwein, J. M.; Crandall, R. E.; Zucker, I. J.

    2013-03-01

    In recent times, attention has been directed to the problem of solving the Poisson equation, either in engineering scenarios (computational) or in regard to crystal structure (theoretical). Herein we study a class of lattice sums that amount to Poisson solutions, namely the n-dimensional forms \phi_n(r_1,\dots,r_n) = \frac{1}{\pi^2} \sum_{m_1,\dots,m_n\ \mathrm{odd}} \frac{e^{i\pi(m_1 r_1 + \cdots + m_n r_n)}}{m_1^2 + \cdots + m_n^2}. By virtue of striking connections with Jacobi ϑ-function values, we are able to develop new closed forms for certain values of the coordinates r_k, and extend such analysis to similar lattice sums. A primary result is that for rational x, y, the natural potential \phi_2(x, y) is \frac{1}{\pi}\log A where A is an algebraic number. Various extensions and explicit evaluations are given. Such work is made possible by number-theoretical analysis, symbolic computation and experimental mathematics, including extensive numerical computations using up to 20,000-digit arithmetic.
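
    The sum φ_n can be checked numerically by brute force. The sketch below truncates the two-dimensional case over a square window of odd indices; convergence is slow, and the paper instead exploits ϑ-function identities and high-precision arithmetic, so this is only a rough numerical sanity check.

```python
import cmath
from math import pi

def phi2(x, y, M=201):
    # Truncation of  phi_2(x, y) = (1/pi^2) * sum over odd m1, m2 of
    #   exp(i*pi*(m1*x + m2*y)) / (m1^2 + m2^2)
    # over the window |m1|, |m2| <= M.  Conjugate index pairs make the
    # full sum real, so only the real part is accumulated.
    odds = range(-M, M + 1, 2)
    s = 0.0
    for m1 in odds:
        for m2 in odds:
            s += (cmath.exp(1j * pi * (m1 * x + m2 * y)) /
                  (m1 * m1 + m2 * m2)).real
    return s / pi ** 2
```

    With the ϑ-function machinery of the paper, values φ₂(x, y) for rational x, y resolve to closed forms (1/π) log A; the truncated sum above recovers only a few digits.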

  14. The influence of dispersing additive on the paraffin crystallization in model systems

    NASA Astrophysics Data System (ADS)

    Gorshkov, A. M.; Tien Thang, Pham; Shishmina, L. V.; Chekantseva, L. V.

    2015-11-01

    This work investigates the influence of a dispersing additive on paraffin crystallization in model systems. A new method to determine the paraffin saturation point of transparent solutions, based on the phenomenon of light scattering, is proposed. A linear relationship between the critical micelle concentration of the additive and the quantity of paraffin in solution is obtained. The influence of the model system composition on paraffin crystallization is also studied.

  15. Effects of additional food in a delayed predator-prey model.

    PubMed

    Sahoo, Banshidhar; Poria, Swarup

    2015-03-01

    We examine the effects of supplying additional food to the predator in a gestation-delay-induced predator-prey system with habitat complexity. Additional food works in favor of predator growth in our model, and its presence reduces the predatory attack rate on prey. By supplying additional food, the predator population can be controlled. Taking the time delay as bifurcation parameter, the stability of the coexisting equilibrium point is analyzed. Hopf bifurcation analysis is done with respect to time delay in the presence of additional food. The direction of the Hopf bifurcations and the stability of the bifurcated periodic solutions are determined by applying the normal form theory and the center manifold theorem. The qualitative dynamical behavior of the model is simulated using experimental parameter values. It is observed that fluctuations of the population size can be controlled either by supplying additional food suitably or by increasing the degree of habitat complexity. It is pointed out that Hopf bifurcation occurs in the system when the delay crosses some critical value, and this critical value depends strongly on the quality and quantity of the supplied additional food. Therefore, the variation of the predator population significantly affects the dynamics of the model. Model results are compared with experimental results, and biological implications of the analytical findings are discussed in the conclusion section.

  16. Method of Poisson's ratio imaging within a material part

    NASA Technical Reports Server (NTRS)

    Roth, Don J. (Inventor)

    1996-01-01

    The present invention is directed to a method of displaying the Poisson's ratio image of a material part. In the present invention longitudinal data is produced using a longitudinal wave transducer and shear wave data is produced using a shear wave transducer. The respective data is then used to calculate the Poisson's ratio for the entire material part. The Poisson's ratio approximations are then used to display the image.

  17. A Method of Poisson's Ratio Imaging Within a Material Part

    NASA Technical Reports Server (NTRS)

    Roth, Don J. (Inventor)

    1994-01-01

    The present invention is directed to a method of displaying the Poisson's ratio image of a material part. In the present invention, longitudinal data is produced using a longitudinal wave transducer and shear wave data is produced using a shear wave transducer. The respective data is then used to calculate the Poisson's ratio for the entire material part. The Poisson's ratio approximations are then used to display the data.

  18. Pointwise estimates of solutions for the multi-dimensional bipolar Euler-Poisson system

    NASA Astrophysics Data System (ADS)

    Wu, Zhigang; Li, Yeping

    2016-06-01

    In the paper, we consider a multi-dimensional bipolar hydrodynamic model for semiconductor devices and plasmas. This system takes the form of Euler-Poisson with electric field and frictional damping added to the momentum equations. By making a new analysis of Green's functions for the Euler system with damping and the Euler-Poisson system with damping, we obtain pointwise estimates of the solution for the multi-dimensional bipolar Euler-Poisson system. As a by-product, we extend the decay rates of the densities ρ_i (i = 1, 2) in the usual L²-norm to the L^p-norm with p ≥ 1, and the time-decay rates of the momenta m_i (i = 1, 2) in the L²-norm to the L^p-norm with p > 1; all of the decay rates here are optimal.

  19. The Vlasov-Poisson System for Stellar Dynamics in Spaces of Constant Curvature

    NASA Astrophysics Data System (ADS)

    Diacu, Florin; Ibrahim, Slim; Lind, Crystal; Shen, Shengyi

    2016-09-01

    We obtain a natural extension of the Vlasov-Poisson system for stellar dynamics to spaces of constant Gaussian curvature κ ≠ 0: the unit sphere S², for κ > 0, and the unit hyperbolic sphere H², for κ < 0. These equations can be easily generalized to higher dimensions. When the particles move on a geodesic, the system reduces to a 1-dimensional problem that is more singular than the classical analogue of the Vlasov-Poisson system. In the analysis of this reduced model, we study the well-posedness of the problem and derive Penrose-type conditions for linear stability around homogeneous solutions in the sense of Landau damping.

  20. On an Additive Semigraphoid Model for Statistical Networks With Application to Pathway Analysis

    PubMed Central

    Li, Bing; Chun, Hyonho; Zhao, Hongyu

    2014-01-01

    We introduce a nonparametric method for estimating non-gaussian graphical models based on a new statistical relation called additive conditional independence, which is a three-way relation among random vectors that resembles the logical structure of conditional independence. Additive conditional independence allows us to use a one-dimensional kernel regardless of the dimension of the graph, which not only avoids the curse of dimensionality but also simplifies computation. It also gives rise to a parallel structure to the gaussian graphical model that replaces the precision matrix by an additive precision operator. The estimators derived from additive conditional independence cover the recently introduced nonparanormal graphical model as a special case, but outperform it when the gaussian copula assumption is violated. We compare the new method with existing ones by simulations and in genetic pathway analysis. PMID:26401064

  1. The non-equilibrium allele frequency spectrum in a Poisson random field framework.

    PubMed

    Kaj, Ingemar; Mugal, Carina F

    2016-10-01

    In population genetic studies, the allele frequency spectrum (AFS) efficiently summarizes genome-wide polymorphism data and shapes a variety of allele frequency-based summary statistics. While existing theory typically features equilibrium conditions, emerging methodology requires an analytical understanding of the build-up of the allele frequencies over time. In this work, we use the framework of Poisson random fields to derive new representations of the non-equilibrium AFS for the case of a Wright-Fisher population model with selection. In our approach, the AFS is a scaling-limit of the expectation of a Poisson stochastic integral and the representation of the non-equilibrium AFS arises in terms of a fixation time probability distribution. The known duality between the Wright-Fisher diffusion process and a birth and death process generalizing Kingman's coalescent yields an additional representation. The results carry over to the setting of a random sample drawn from the population and provide the non-equilibrium behavior of sample statistics. Our findings are consistent with and extend a previous approach where the non-equilibrium AFS solves a partial differential forward equation with a non-traditional boundary condition. Moreover, we provide a bridge to previous coalescent-based work, and hence tie several frameworks together. Since frequency-based summary statistics are widely used in population genetics, for example, to identify candidate loci of adaptive evolution, to infer the demographic history of a population, or to improve our understanding of the underlying mechanics of speciation events, the presented results are potentially useful for a broad range of topics.
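
    For orientation, the equilibrium limit of the PRF expected sample AFS has a classical closed form going back to Sawyer and Hartl's Poisson random field theory; the sketch below evaluates it by quadrature and serves only as a stationary reference point for the non-equilibrium representations derived in the paper. The scaled selection coefficient gamma and mutation parameter theta follow the usual PRF conventions; the neutral limit gives the familiar theta/i expectation.

```python
from math import comb, exp

def site_density(q, gamma):
    # Equilibrium PRF density (per unit theta) of sites at population
    # frequency q under genic selection; the gamma -> 0 limit is 1/q.
    if abs(gamma) < 1e-9:
        return 1.0 / q
    return (1.0 - exp(-2.0 * gamma * (1.0 - q))) / (
        (1.0 - exp(-2.0 * gamma)) * q * (1.0 - q))

def expected_sample_afs(n, theta, gamma, grid=20000):
    # E[# sites with i derived copies in a sample of size n], i = 1..n-1,
    # by midpoint quadrature of binomial sampling against the density.
    h = 1.0 / grid
    afs = [0.0] * (n - 1)
    for k in range(grid):
        q = (k + 0.5) * h
        d = site_density(q, gamma) * h
        for i in range(1, n):
            afs[i - 1] += comb(n, i) * q ** i * (1.0 - q) ** (n - i) * d
    return [theta * a for a in afs]
```

    Purifying selection (gamma < 0) depresses every frequency class and skews the spectrum toward rare variants, which is the qualitative signature the frequency-based summary statistics mentioned above exploit.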

  3. Testing a Gender Additive Model: The Role of Body Image in Adolescent Depression

    ERIC Educational Resources Information Center

    Bearman, Sarah Kate; Stice, Eric

    2008-01-01

    Despite consistent evidence that adolescent girls are at greater risk of developing depression than adolescent boys, risk factor models that account for this difference have been elusive. The objective of this research was to examine risk factors proposed by the "gender additive" model of depression that attempts to partially explain the increased…

  4. Stochastic search with Poisson and deterministic resetting

    NASA Astrophysics Data System (ADS)

    Bhat, Uttam; De Bacco, Caterina; Redner, S.

    2016-08-01

    We investigate a stochastic search process in one, two, and three dimensions in which N diffusing searchers that all start at x 0 seek a target at the origin. Each of the searchers is also reset to its starting point, either with rate r, or deterministically, with a reset time T. In one dimension and for a small number of searchers, the search time and the search cost are minimized at a non-zero optimal reset rate (or time), while for sufficiently large N, resetting always hinders the search. In general, a single searcher leads to the minimum search cost in one, two, and three dimensions. When the resetting is deterministic, several unexpected features arise for N searchers, including the search time being independent of T in the limit 1/T → 0 and the search cost being independent of N over a suitable range of N. Moreover, deterministic resetting typically leads to a lower search cost than Poisson resetting.
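
    The existence of a non-zero optimal reset rate can be reproduced with a minimal Monte Carlo sketch: a single 1-D diffusive searcher with Poissonian resetting, Euler-discretized. Parameter values below are illustrative, not taken from the paper.

```python
import random

def first_passage_time(x0, r, D=1.0, dt=1e-3, rng=None, t_max=1e4):
    # Time for a 1-D diffuser started at x0 > 0 to reach the origin,
    # with Poissonian resetting to x0 at rate r (prob. r*dt per step).
    rng = rng or random.Random()
    sigma = (2.0 * D * dt) ** 0.5
    x, t = x0, 0.0
    while t < t_max:
        if r > 0.0 and rng.random() < r * dt:
            x = x0
        x += rng.gauss(0.0, sigma)
        t += dt
        if x <= 0.0:
            return t
    return t_max  # censored: without resetting the mean FPT diverges

def mean_fpt(x0, r, n_runs=100, seed=0):
    rng = random.Random(seed)
    return sum(first_passage_time(x0, r, rng=rng)
               for _ in range(n_runs)) / n_runs
```

    For a single resetting Brownian searcher the known mean first-passage time is (e^{x₀√(r/D)} − 1)/r, minimized at a finite r; the simulation reproduces this behavior qualitatively, with both very small and very large reset rates performing worse than a moderate one.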

  5. Efficient information transfer by Poisson neurons.

    PubMed

    Kostal, Lubomir; Shinomoto, Shigeru

    2016-06-01

    Recently, it has been suggested that certain neurons with Poissonian spiking statistics may communicate by discontinuously switching between two levels of firing intensity. Such a situation resembles in many ways the optimal information transmission protocol for the continuous-time Poisson channel known from information theory. In this contribution we employ the classical information-theoretic results to analyze the efficiency of such a transmission from different perspectives, emphasising the neurobiological viewpoint. We address both the ultimate limits, in terms of the information capacity under metabolic cost constraints, and the achievable bounds on performance at rates below capacity with fixed decoding error probability. In doing so we discuss optimal values of experimentally measurable quantities that can be compared with the actual neuronal recordings in a future effort. PMID:27106184

  6. Multi-Parameter Linear Least-Squares Fitting to Poisson Data One Count at a Time

    NASA Technical Reports Server (NTRS)

    Wheaton, W.; Dunklee, A.; Jacobson, A.; Ling, J.; Mahoney, W.; Radocinski, R.

    1993-01-01

    A standard problem in gamma-ray astronomy data analysis is the decomposition of a set of observed counts, described by Poisson statistics, according to a given multi-component linear model, with underlying physical count rates or fluxes which are to be estimated from the data.
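
    The decomposition problem described above (observed Poisson counts, a given linear model, non-negative underlying rates to estimate) admits a standard maximum-likelihood treatment. The sketch below uses the classic multiplicative EM update (ML-EM, familiar from emission tomography, also known as Richardson-Lucy), not the authors' one-count-at-a-time algorithm, which the abstract does not spell out.

```python
def mlem_poisson(A, counts, n_iter=500):
    # ML estimate of non-negative rates f with counts[j] ~ Poisson(mu_j),
    # mu_j = sum_i A[j][i] * f[i], via the multiplicative EM update
    #   f_i <- f_i * (sum_j A[j][i] * counts[j] / mu_j) / (sum_j A[j][i]).
    m, n = len(A), len(A[0])
    col_sum = [sum(A[j][i] for j in range(m)) for i in range(n)]
    f = [1.0] * n
    for _ in range(n_iter):
        mu = [sum(A[j][i] * f[i] for i in range(n)) for j in range(m)]
        f = [f[i] / col_sum[i] *
             sum(A[j][i] * counts[j] / mu[j] for j in range(m))
             for i in range(n)]
    return f
```

    Each update increases the Poisson log-likelihood and preserves non-negativity, which is why this iteration is a common default for count-based linear inverse problems.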

  7. Updating a Classic: "The Poisson Distribution and the Supreme Court" Revisited

    ERIC Educational Resources Information Center

    Cole, Julio H.

    2010-01-01

    W. A. Wallis studied vacancies in the US Supreme Court over a 96-year period (1837-1932) and found that the distribution of the number of vacancies per year could be characterized by a Poisson model. This note updates this classic study.
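
    As an illustration of the kind of comparison Wallis made, the snippet below tabulates the expected number of years with k vacancies under a Poisson model. The rate λ = 0.5 vacancies per year used in the test is a hypothetical round figure for a 96-year span, not a value taken from either study.

```python
from math import exp, factorial

def expected_year_counts(lam, n_years, k_max=4):
    # Expected number of years with exactly k vacancies, k = 0..k_max,
    # if yearly vacancy counts are i.i.d. Poisson(lam).
    return [n_years * exp(-lam) * lam ** k / factorial(k)
            for k in range(k_max + 1)]
```

    Comparing such a table against the observed year counts (e.g., with a chi-square statistic) is the standard way to judge the Poisson model's fit.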

  8. Application of the sine-Poisson equation in solar magnetostatics

    NASA Technical Reports Server (NTRS)

    Webb, G. M.; Zank, G. P.

    1990-01-01

    Solutions of the sine-Poisson equations are used to construct a class of isothermal magnetostatic atmospheres, with one ignorable coordinate corresponding to a uniform gravitational field in a plane geometry. The distributed current in the model (j) is directed along the x-axis, where x is the horizontal ignorable coordinate; (j) varies as the sine of the magnetostatic potential and falls off exponentially with vertical distance from the base, with an e-folding distance equal to the gravitational scale height. Solutions for the magnetostatic potential A corresponding to the one-soliton, two-soliton, and breather solutions of the sine-Gordon equation are studied. Depending on the values of the free parameters in the soliton solutions, horizontally periodic magnetostatic structures are obtained possessing either a single X-type neutral point, multiple neutral X-points, or solutions without X-points.

  9. Numerical Solution of the Gyrokinetic Poisson Equation in TEMPEST

    NASA Astrophysics Data System (ADS)

    Dorr, Milo; Cohen, Bruce; Cohen, Ronald; Dimits, Andris; Hittinger, Jeffrey; Kerbel, Gary; Nevins, William; Rognlien, Thomas; Umansky, Maxim; Xiong, Andrew; Xu, Xueqiao

    2006-10-01

    The gyrokinetic Poisson (GKP) model in the TEMPEST continuum gyrokinetic edge plasma code yields the electrostatic potential due to the charge density of electrons and an arbitrary number of ion species, including the effects of gyroaveraging in the limit kρ ≪ 1. The TEMPEST equations are integrated as a differential algebraic system involving a nonlinear system solve via Newton-Krylov iteration. The GKP preconditioner block is inverted using a multigrid preconditioned conjugate gradient (CG) algorithm. Electrons are treated as kinetic or adiabatic. The Boltzmann relation in the adiabatic option employs flux surface averaging to maintain neutrality within field lines and is solved self-consistently with the GKP equation. A decomposition procedure circumvents the near singularity of the GKP Jacobian block that otherwise degrades CG convergence.

  10. Nonstationary elementary-field light randomly triggered by Poisson impulses.

    PubMed

    Fernández-Pousa, Carlos R

    2013-05-01

    A stochastic theory of nonstationary light describing the random emission of elementary pulses is presented. The emission is governed by a nonhomogeneous Poisson point process determined by a time-varying emission rate. The model describes, in the appropriate limits, stationary, cyclostationary, locally stationary, and pulsed radiation, and reduces to a Gaussian theory in the limit of dense emission rate. The first- and second-order coherence theories are solved after the computation of second- and fourth-order correlation functions by use of the characteristic function. The ergodicity of second-order correlations under various types of detectors is explored and a number of observables, including optical spectrum, amplitude, and intensity correlations, are analyzed. PMID:23695325

  11. A Cartesian grid embedded boundary method for Poisson`s equation on irregular domains

    SciTech Connect

    Johansen, H.; Colella, P.

    1997-01-31

    The authors present a numerical method for solving Poisson`s equation, with variable coefficients and Dirichlet boundary conditions, on two-dimensional regions. The approach uses a finite-volume discretization, which embeds the domain in a regular Cartesian grid. They treat the solution as a cell-centered quantity, even when those centers are outside the domain. Cells that contain a portion of the domain boundary use conservation differencing of second-order accurate fluxes, on each cell volume. The calculation of the boundary flux ensures that the conditioning of the matrix is relatively unaffected by small cell volumes. This allows them to use multi-grid iterations with a simple point relaxation strategy. They have combined this with an adaptive mesh refinement (AMR) procedure. They provide evidence that the algorithm is second-order accurate on various exact solutions, and compare the adaptive and non-adaptive calculations.
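
    For orientation, here is the simplest regular-grid relative of the scheme described above: a 5-point Gauss-Seidel solve of Poisson's equation with homogeneous Dirichlet conditions on the unit square. It omits everything that makes the paper's method interesting (embedded boundaries, conservative fluxes on cut cells, multigrid, AMR) and is purely an illustrative baseline with a manufactured exact solution.

```python
def solve_poisson_gs(n=21, sweeps=2500):
    # Solve u_xx + u_yy = f on the unit square, u = 0 on the boundary,
    # with f chosen so the exact solution is u = x(1-x) * y(1-y).
    h = 1.0 / (n - 1)
    x = [i * h for i in range(n)]
    f = [[-2.0 * (x[j] * (1 - x[j]) + x[i] * (1 - x[i]))
          for j in range(n)] for i in range(n)]
    u = [[0.0] * n for _ in range(n)]
    for _ in range(sweeps):          # Gauss-Seidel relaxation sweeps
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                u[i][j] = 0.25 * (u[i + 1][j] + u[i - 1][j] +
                                  u[i][j + 1] + u[i][j - 1] -
                                  h * h * f[i][j])
    return u, x
```

    Point relaxation like this converges slowly on its own, which is precisely why the paper wraps it in multigrid; the embedded-boundary flux construction is what keeps that multigrid behavior intact near irregular domain boundaries.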

  12. Classification and Casimir Invariants of Lie--Poisson Brackets

    NASA Astrophysics Data System (ADS)

    Thiffeault, Jean-Luc; Morrison, P. J.

    1997-11-01

    Several types of fluid and plasma systems admit a Hamiltonian formulation using Lie-Poisson brackets, including Euler's equation for fluids, reduced MHD for plasmas, and others. Lie-Poisson brackets, which are examples of noncanonical Poisson brackets, consist of an inner product, ⟨ , ⟩, and the bracket, [ , ], of a Lie algebra, which we call the inner bracket. The Lie-Poisson bracket is then {F, G} = ⟨Ψ, [F_Ψ, G_Ψ]⟩. Here Ψ is a vector of field variables, and subscripts denote functional differentiation. The algebras corresponding to the inner brackets are algebras by extension: they are defined for multiple field variables from the bracket for a single variable. We derive a classification scheme for all such brackets using cohomology theory for Lie algebras. We then derive the Casimir invariants for the classes of Lie-Poisson brackets where the inner bracket is of canonical type.

  13. The Poisson Gamma distribution for wind speed data

    NASA Astrophysics Data System (ADS)

    Ćakmakyapan, Selen; Özel, Gamze

    2016-04-01

    Wind energy is one of the most significant alternative clean energy sources and among the most rapidly developing renewable energy sources in the world. For the evaluation of wind energy potential, probability density functions (pdfs) are usually used to model wind speed distributions. Selecting the appropriate pdf reduces the wind power estimation error and also allows the characteristics of the wind regime to be captured. Different pdfs have been used in the literature to model wind speed data for wind energy applications. In this study, we propose a new probability distribution to model wind speed data. First, we define the new probability distribution, named the Poisson-Gamma (PG) distribution, and analyze wind speed data sets covering about five pressure levels for the station, obtained from the Turkish State Meteorological Service. We then model the data sets with the Exponential, Weibull, Lomax, three-parameter Burr, Gumbel, Gamma, and Rayleigh distributions, which are commonly used for wind speed data, as well as the PG distribution. Finally, we compare the fits to select the best-fitting model and demonstrate that the PG distribution models the data sets better.
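
    A typical step in such comparisons is maximum-likelihood fitting of each candidate pdf and ranking by log-likelihood (or AIC). The sketch below fits a two-parameter Weibull, one of the benchmark distributions listed, by profiling the shape on a grid; the PG distribution itself is the paper's contribution and is not reproduced here. All names and the synthetic data in the test are illustrative.

```python
import math, random

def weibull_loglik(xs, k, lam):
    # Log-likelihood of wind speeds xs under Weibull(shape=k, scale=lam).
    return sum(math.log(k / lam) + (k - 1.0) * math.log(x / lam)
               - (x / lam) ** k for x in xs)

def fit_weibull_profile(xs, shape_grid=None):
    # For fixed shape k the ML scale is closed-form:
    #   lam(k) = (mean of x**k) ** (1/k);
    # maximize the profiled likelihood over a grid of shapes.
    shape_grid = shape_grid or [0.5 + 0.05 * i for i in range(80)]
    n = len(xs)
    best = None
    for k in shape_grid:
        lam = (sum(x ** k for x in xs) / n) ** (1.0 / k)
        ll = weibull_loglik(xs, k, lam)
        if best is None or ll > best[0]:
            best = (ll, k, lam)
    return best  # (loglik, shape, scale)
```

    Repeating this for each candidate family and comparing the maximized log-likelihoods (penalized for parameter count) is the usual basis for declaring one distribution the better wind speed model.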

  14. Vector generalized additive models for extreme rainfall data analysis (study case rainfall data in Indramayu)

    NASA Astrophysics Data System (ADS)

    Utami, Eka Putri Nur; Wigena, Aji Hamim; Djuraidah, Anik

    2016-02-01

    Rainfall patterns are good indicators of potential disasters. A Global Circulation Model (GCM) contains global scale information that can be used to predict rainfall data. Statistical downscaling (SD) utilizes the global scale information to make inferences at the local scale; essentially, SD can be used to predict local scale variables based on global scale variables. SD requires a method that accommodates nonlinear effects and extreme values. Extreme Value Theory (EVT) can be used to analyze the extreme values. One method to identify extreme events is peaks over threshold, where exceedances follow the Generalized Pareto Distribution (GPD). The vector generalized additive model (VGAM) is an extension of the generalized additive model. It is able to accommodate linear or nonlinear effects by involving more than one additive predictor. The advantage of VGAM is its ability to handle multi-response models. The key ideas of VGAM are iteratively reweighted least squares for maximum likelihood estimation, penalized smoothing, Fisher scoring and additive models. This work aims to analyze extreme rainfall data in Indramayu using VGAM. The results show that the VGAM with GPD is able to predict extreme rainfall data accurately. The prediction for February is very close to the actual value at the 75th quantile.
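
    The peaks-over-threshold step mentioned above can be sketched with SciPy: values above a high threshold are reduced to excesses and fitted with a Generalized Pareto Distribution. This is an illustrative sketch on synthetic data, not the VGAM estimation itself.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
rainfall = rng.gamma(shape=0.8, scale=12.0, size=5000)   # synthetic daily rainfall

threshold = np.quantile(rainfall, 0.95)                  # high threshold
excesses = rainfall[rainfall > threshold] - threshold    # peaks over threshold

# Fit a GPD to the excesses (location fixed at 0 by construction)
shape, loc, scale = stats.genpareto.fit(excesses, floc=0)

# Example return level: the 99th percentile of excesses implied by the fit
q99 = stats.genpareto.ppf(0.99, shape, loc=loc, scale=scale)
print(shape, scale, q99)
```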

  15. Integrated reservoir characterization: Improvement in heterogeneities stochastic modelling by integration of additional external constraints

    SciTech Connect

    Doligez, B.; Eschard, R.; Geffroy, F.

    1997-08-01

    The classical approach to constructing reservoir models is to start with a fine scale geological model which is informed with petrophysical properties. Scaling-up techniques then allow one to obtain a reservoir model which is compatible with the fluid flow simulators. Geostatistical modelling techniques are widely used to build the geological models before scaling-up. These methods provide equiprobable images of the area under investigation, which honor the well data, and whose variability is the same as the variability computed from the data. At an appraisal phase, when few data are available, or when the wells are insufficient to describe all the heterogeneities and the behavior of the field, additional constraints are needed to obtain a more realistic geological model. For example, seismic data or stratigraphic models can provide average reservoir information with an excellent areal coverage, but with a poor vertical resolution. New advances in modelling techniques now allow this type of additional external information to be integrated in order to constrain the simulations. In particular, 2D or 3D seismic-derived information grids, or sand-shale ratio maps coming from stratigraphic models, can be used as external drifts to compute the geological image of the reservoir at the fine scale. Examples are presented to illustrate the use of these new tools, their impact on the final reservoir model, and their sensitivity to some key parameters.

  16. AFMPB: An adaptive fast multipole Poisson-Boltzmann solver for calculating electrostatics in biomolecular systems

    NASA Astrophysics Data System (ADS)

    Lu, Benzhuo; Cheng, Xiaolin; Huang, Jingfang; McCammon, J. Andrew

    2010-06-01

    A Fortran program package is introduced for rapid evaluation of the electrostatic potentials and forces in biomolecular systems modeled by the linearized Poisson-Boltzmann equation. The numerical solver utilizes a well-conditioned boundary integral equation (BIE) formulation, a node-patch discretization scheme, a Krylov subspace iterative solver package with reverse communication protocols, and an adaptive new version of the fast multipole method in which exponential expansions are used to diagonalize the multipole-to-local translations. The program and its full description, as well as several closely related libraries and utility tools, are available at http://lsec.cc.ac.cn/~lubz/afmpb.html and a mirror site at http://mccammon.ucsd.edu/. This paper is a brief summary of the program: the algorithms, the implementation and the usage.
    Program summary
    Program title: AFMPB: Adaptive fast multipole Poisson-Boltzmann solver
    Catalogue identifier: AEGB_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGB_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GPL 2.0
    No. of lines in distributed program, including test data, etc.: 453 649
    No. of bytes in distributed program, including test data, etc.: 8 764 754
    Distribution format: tar.gz
    Programming language: Fortran
    Computer: Any
    Operating system: Any
    RAM: Depends on the size of the discretized biomolecular system
    Classification: 3
    External routines: Pre- and post-processing tools are required for generating the boundary elements and for visualization. Users can use MSMS (http://www.scripps.edu/~sanner/html/msms_home.html) for pre-processing, and VMD (http://www.ks.uiuc.edu/Research/vmd/) for visualization.
    Sub-programs included: An iterative Krylov subspace solvers package from SPARSKIT by Yousef Saad (http://www-users.cs.umn.edu/~saad/software/SPARSKIT/sparskit.html), and the fast multipole methods subroutines from FMMSuite ( http

  17. Experimental model and analytic solution for real-time observation of vehicle's additional steer angle

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaolong; Li, Liang; Pan, Deng; Cao, Chengmao; Song, Jian

    2014-03-01

    Current research on real-time observation of the vehicle roll steer angle and compliance steer angle (both comprehensively referred to as the additional steer angle in this paper) mainly employs the linear vehicle dynamic model, in which only the lateral acceleration of the vehicle body is considered. The observation accuracy of this method cannot meet the requirements of real-time vehicle stability control, especially under extreme driving conditions. This paper explores a solution based on an experimental method. Firstly, a multi-body dynamic model of a passenger car is built with the ADAMS/Car software, and its dynamic accuracy is verified against the same vehicle's roadway test data from steady-state circular tests. Based on this simulation platform, several factors influencing the additional steer angle under different driving conditions are quantitatively analyzed. Then the ε-SVR algorithm is employed to build the additional steer angle prediction model, whose input vectors mainly comprise the sensor information of a standard electronic stability control (ESC) system. Typical slalom tests and FMVSS 126 tests are adopted to run simulations, train the model and test its generalization performance. The test results show that the influence of lateral acceleration on the additional steer angle is maximal (magnitude up to 1°), followed by longitudinal acceleration-deceleration and road wave amplitude (magnitude up to 0.3°). Moreover, both the prediction accuracy and the real-time performance of the model can meet the control requirements of ESC. This research expands the methods for accurate observation of the additional steer angle under extreme driving conditions.
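
    The ε-SVR regression mentioned above can be illustrated in its primal linear form, minimizing ½‖w‖² + C·Σ max(0, |y − w·x − b| − ε). The sketch below uses synthetic data as a stand-in for the ESC sensor features; it is not the paper's trained model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
X = rng.uniform(-1.0, 1.0, size=(200, 3))      # stand-in for ESC sensor features
true_w = np.array([0.8, -0.5, 0.3])
y = X @ true_w + 0.02 * rng.standard_normal(200)

C, eps = 10.0, 0.05

def svr_objective(params):
    """Primal linear epsilon-SVR objective: ridge penalty + eps-insensitive loss."""
    w, b = params[:3], params[3]
    resid = np.abs(y - X @ w - b)
    hinge = np.maximum(0.0, resid - eps)       # zero inside the eps tube
    return 0.5 * w @ w + C * hinge.sum()

res = minimize(svr_objective, np.zeros(4), method="Powell")
w_hat, b_hat = res.x[:3], res.x[3]
print(w_hat, b_hat)
```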

  18. Antimicrobial combinations: Bliss independence and Loewe additivity derived from mechanistic multi-hit models.

    PubMed

    Baeder, Desiree Y; Yu, Guozhi; Hozé, Nathanaël; Rolff, Jens; Regoes, Roland R

    2016-05-26

    Antimicrobial peptides (AMPs) and antibiotics reduce the net growth rate of the bacterial populations they target. It is relevant to understand whether the effects of multiple antimicrobials are synergistic or antagonistic, in particular for AMP responses, because naturally occurring responses involve multiple AMPs. There are several competing proposals describing how multiple types of antimicrobials add up when applied in combination, such as Loewe additivity or Bliss independence. These additivity terms are defined ad hoc from abstract principles explaining the supposed interaction between the antimicrobials. Here, we link these ad hoc combination terms to a mathematical model that represents the dynamics of antimicrobial molecules hitting targets on bacterial cells. In this multi-hit model, bacteria are killed when a certain number of targets are hit by antimicrobials. Using this bottom-up approach reveals that Bliss independence should be the model of choice if no interaction between antimicrobial molecules is expected. Loewe additivity, on the other hand, describes scenarios in which antimicrobials affect the same components of the cell, i.e. are not acting independently. While our approach idealizes the dynamics of antimicrobials, it provides a conceptual underpinning of the additivity terms. The choice of the additivity term is essential to determine synergy or antagonism of antimicrobials. This article is part of the themed issue 'Evolutionary ecology of arthropod antimicrobial peptides'. PMID:27160596
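
    The two additivity references can be stated concretely: for fractional effects in [0, 1], Bliss independence predicts a combined effect E_A + E_B − E_A·E_B, while Loewe additivity holds when the combination index d_A/D_A + d_B/D_B equals 1, where D_X is the dose of X alone that produces the combination's effect. A minimal numeric sketch of these textbook definitions:

```python
def bliss_expected(e_a, e_b):
    """Expected fractional effect of a combination under Bliss independence."""
    return e_a + e_b - e_a * e_b

def loewe_index(dose_a, dose_b, D_a, D_b):
    """Combination index: 1 = Loewe-additive, <1 synergy, >1 antagonism.

    D_a, D_b are the doses of each agent ALONE that produce the same
    effect as the (dose_a, dose_b) combination.
    """
    return dose_a / D_a + dose_b / D_b

# Two antimicrobials, each killing 50% of the population on its own:
print(bliss_expected(0.5, 0.5))         # expected kill if acting independently

# Half of each single-agent equipotent dose, combined:
print(loewe_index(1.0, 1.0, 2.0, 2.0))  # 1.0 -> consistent with Loewe additivity
```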

  19. Sparse Additive Ordinary Differential Equations for Dynamic Gene Regulatory Network Modeling

    PubMed Central

    Wu, Hulin; Lu, Tao; Xue, Hongqi; Liang, Hua

    2014-01-01

    Summary The gene regulation network (GRN) is a high-dimensional complex system, which can be represented by various mathematical or statistical models. The ordinary differential equation (ODE) model is one of the popular dynamic GRN models. High-dimensional linear ODE models have been proposed to identify GRNs, but with a limitation of the linear regulation effect assumption. In this article, we propose a sparse additive ODE (SA-ODE) model, coupled with ODE estimation methods and adaptive group LASSO techniques, to model dynamic GRNs that could flexibly deal with nonlinear regulation effects. The asymptotic properties of the proposed method are established and simulation studies are performed to validate the proposed approach. An application example for identifying the nonlinear dynamic GRN of T-cell activation is used to illustrate the usefulness of the proposed method. PMID:25061254

  20. Sparse Additive Ordinary Differential Equations for Dynamic Gene Regulatory Network Modeling.

    PubMed

    Wu, Hulin; Lu, Tao; Xue, Hongqi; Liang, Hua

    2014-04-01

    The gene regulation network (GRN) is a high-dimensional complex system, which can be represented by various mathematical or statistical models. The ordinary differential equation (ODE) model is one of the popular dynamic GRN models. High-dimensional linear ODE models have been proposed to identify GRNs, but with a limitation of the linear regulation effect assumption. In this article, we propose a sparse additive ODE (SA-ODE) model, coupled with ODE estimation methods and adaptive group LASSO techniques, to model dynamic GRNs that could flexibly deal with nonlinear regulation effects. The asymptotic properties of the proposed method are established and simulation studies are performed to validate the proposed approach. An application example for identifying the nonlinear dynamic GRN of T-cell activation is used to illustrate the usefulness of the proposed method. PMID:25061254

  1. Midrapidity inclusive densities in high energy pp collisions in additive quark model

    NASA Astrophysics Data System (ADS)

    Shabelski, Yu. M.; Shuvaev, A. G.

    2016-08-01

    High energy (CERN SPS and LHC) inelastic pp (p̄p) scattering is treated in the framework of the additive quark model together with Pomeron exchange theory. We extract the midrapidity inclusive density of the charged secondaries produced in a single quark-quark collision and investigate its energy dependence. Predictions for πp collisions are presented.

  2. Parametrically Guided Generalized Additive Models with Application to Mergers and Acquisitions Data

    PubMed Central

    Fan, Jianqing; Maity, Arnab; Wang, Yihui; Wu, Yichao

    2012-01-01

    Generalized nonparametric additive models present a flexible way to evaluate the effects of several covariates on a general outcome of interest via a link function. In this modeling framework, one assumes that the effect of each of the covariates is nonparametric and additive. However, in practice, there is often prior information available about the shape of the regression functions, possibly from pilot studies or exploratory analysis. In this paper, we consider such situations and propose an estimation procedure where the prior information is used as a parametric guide to fit the additive model. Specifically, we first posit a parametric family for each of the regression functions using the prior information (parametric guides). After removing these parametric trends, we then estimate the remainder of the nonparametric functions using a nonparametric generalized additive model, and form the final estimates by adding back the parametric trend. We investigate the asymptotic properties of the estimates and show that when a good guide is chosen, the asymptotic variance of the estimates can be reduced significantly while the asymptotic bias remains the same as that of the unguided estimator. We observe the performance of our method via a simulation study and demonstrate our method by applying it to a real data set on mergers and acquisitions. PMID:23645976
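
    The two-stage idea (fit a parametric guide, smooth the de-trended remainder nonparametrically, then add the trend back) can be sketched in one dimension. The quadratic guide and moving-average smoother below are illustrative stand-ins, not the authors' estimator.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0.0, 2.0 * np.pi, 300))
y = np.sin(x) + 0.1 * x**2 + 0.1 * rng.standard_normal(300)

# Step 1: parametric guide -- here a quadratic, as if chosen from prior information
coef = np.polyfit(x, y, 2)
guide = np.polyval(coef, x)

# Step 2: smooth the de-trended remainder nonparametrically
# (moving average as a stand-in for a kernel or spline smoother)
resid = y - guide
k = 25
smooth_resid = np.convolve(resid, np.ones(k) / k, mode="same")

# Step 3: add the parametric trend back
fitted = guide + smooth_resid
mse = float(np.mean((fitted - (np.sin(x) + 0.1 * x**2)) ** 2))
print(mse)
```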

  3. Modeling Longitudinal Data with Generalized Additive Models: Applications to Single-Case Designs

    ERIC Educational Resources Information Center

    Sullivan, Kristynn J.; Shadish, William R.

    2013-01-01

    Single case designs (SCDs) are short time series that assess intervention effects by measuring units repeatedly over time both in the presence and absence of treatment. For a variety of reasons, interest in the statistical analysis and meta-analysis of these designs has been growing in recent years. This paper proposes modeling SCD data with…

  4. Formation and reduction of carcinogenic furan in various model systems containing food additives.

    PubMed

    Kim, Jin-Sil; Her, Jae-Young; Lee, Kwang-Geun

    2015-12-15

    The aim of this study was to analyse and reduce furan in various model systems. Furan model systems consisting of monosaccharides (0.5M glucose and ribose), amino acids (0.5M alanine and serine) and/or 1.0M ascorbic acid were heated at 121°C for 25 min. The effects of food additives (each 0.1M) such as metal ions (iron sulphate, magnesium sulphate, zinc sulphate and calcium sulphate), antioxidants (BHT and BHA), and sodium sulphite on the formation of furan were measured. The level of furan formed in the model systems was 6.8-527.3 ng/ml. The level of furan in the model systems of glucose/serine and glucose/alanine increased 7-674% when food additives were added. In contrast, the level of furan decreased by 18-51% in the Maillard reaction model systems that included ribose and alanine/serine with food additives except zinc sulphate.

  5. Generalized HPC method for the Poisson equation

    NASA Astrophysics Data System (ADS)

    Bardazzi, A.; Lugni, C.; Antuono, M.; Graziani, G.; Faltinsen, O. M.

    2015-10-01

    An efficient and innovative numerical algorithm based on the use of Harmonic Polynomials on each Cell of the computational domain (the HPC method) was recently proposed by Shao and Faltinsen (2014) [1] to solve boundary value problems governed by the Laplace equation. Here, we extend the HPC method to the solution of non-homogeneous elliptic boundary value problems. The homogeneous solution, i.e. of the Laplace equation, is represented through a polynomial function with harmonic polynomials, while the particular solution of the Poisson equation is provided by a bi-quadratic function. This scheme is called the generalized HPC method. The present algorithm, accurate up to 4th order, proved to be efficient, i.e. easy to implement and with a low computational effort, for the solution of two-dimensional elliptic boundary value problems. Furthermore, it provides an analytical representation of the solution within each computational stencil, which allows its coupling with existing numerical algorithms within an efficient domain-decomposition strategy or within an adaptive mesh refinement algorithm.
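
    The harmonic polynomial ingredient of the HPC method is easy to check in 2D, where harmonic polynomials arise as real (or imaginary) parts of (x + iy)^n. Below is a finite-difference check that their Laplacian vanishes; this is a toy verification of the basis property, not the HPC solver itself.

```python
def harmonic_poly(n, x, y):
    """Real part of (x + iy)^n -- a 2D harmonic polynomial of degree n."""
    return ((x + 1j * y) ** n).real

# Central finite-difference Laplacian at a sample point, for several degrees
h = 1e-4
x0, y0 = 0.7, -0.3
laps = []
for n in range(1, 6):
    lap = (harmonic_poly(n, x0 + h, y0) + harmonic_poly(n, x0 - h, y0)
           + harmonic_poly(n, x0, y0 + h) + harmonic_poly(n, x0, y0 - h)
           - 4.0 * harmonic_poly(n, x0, y0)) / h**2
    laps.append(lap)
print(laps)   # all entries should be ~0 (harmonic)
```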

  6. NB-PLC channel modelling with cyclostationary noise addition & OFDM implementation for smart grid

    NASA Astrophysics Data System (ADS)

    Thomas, Togis; Gupta, K. K.

    2016-03-01

    Power line communication (PLC) technology can be a viable solution for future ubiquitous networks because it provides a cheaper alternative to the other wired technologies currently used for communication. In the smart grid, PLC is used to support low-rate communication on the low voltage (LV) distribution network. In this paper, we propose a channel model for narrowband (NB) PLC in the frequency range 5 kHz to 500 kHz using ABCD parameters with cyclostationary noise addition. The behaviour of the channel was studied by adding an 11 kV/230 V transformer and by varying the load and its location. Bit error rate (BER) versus signal to noise ratio (SNR) was plotted for the proposed model by employing OFDM. Our simulation results based on the proposed channel model show an acceptable performance in terms of bit error rate versus signal to noise ratio, which enables the communication required for smart grid applications.
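
    ABCD-parameter channel modelling chains two-port sections by matrix multiplication; with source impedance Zs and load ZL, the end-to-end voltage transfer is H = ZL / (A·ZL + B + C·Zs·ZL + D·Zs). A sketch with illustrative line-section values (not the paper's measured parameters):

```python
import numpy as np

def line_section_abcd(z_series, y_shunt):
    """ABCD matrix of one lumped line section: series impedance then shunt admittance."""
    return np.array([[1.0 + z_series * y_shunt, z_series],
                     [y_shunt, 1.0]], dtype=complex)

def transfer_function(sections, z_source, z_load):
    """Cascade the sections and return the end-to-end voltage transfer H."""
    abcd = np.eye(2, dtype=complex)
    for sec in sections:
        abcd = abcd @ sec
    A, B = abcd[0]
    C, D = abcd[1]
    return z_load / (A * z_load + B + C * z_source * z_load + D * z_source)

f = 100e3                       # 100 kHz, inside the 5-500 kHz NB-PLC band
w = 2.0 * np.pi * f
# Illustrative per-section values: small series R+L, small shunt C
sec = line_section_abcd(z_series=0.5 + 1j * w * 1e-6, y_shunt=1j * w * 1e-9)
H = transfer_function([sec] * 10, z_source=50.0, z_load=50.0)
print(abs(H))                   # channel attenuation at this frequency
```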

  7. Dynamics of a prey-predator system under Poisson white noise excitation

    NASA Astrophysics Data System (ADS)

    Pan, Shan-Shan; Zhu, Wei-Qiu

    2014-10-01

    The classical Lotka-Volterra (LV) model is a well-known mathematical model for prey-predator ecosystems. In the present paper, the pulse-type version of the stochastic LV model, in which the effect of a random natural environment is modeled as Poisson white noise, is investigated using the stochastic averaging method. The averaged generalized Itô stochastic differential equation and Fokker-Planck-Kolmogorov (FPK) equation are derived for the prey-predator ecosystem driven by Poisson white noise. An approximate stationary solution of the averaged generalized FPK equation is obtained using the perturbation method. The effect of the prey self-competition parameter ε2s on the ecosystem behavior is evaluated. The analytical results are confirmed by corresponding Monte Carlo (MC) simulations.
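
    A direct Monte Carlo companion to the analysis above is to integrate the LV equations with the Poisson white noise represented as compound Poisson jumps in an Euler scheme. All parameter values and the jump-size distribution below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Lotka-Volterra with Poisson-jump forcing on the prey equation:
#   dx = x (a - b y) dt + x dC(t),   dy = y (c x - d) dt
# C(t): compound Poisson process, rate lam, normal jump sizes (illustrative).
a, b, c, d = 1.0, 0.5, 0.5, 1.0
lam, jump_sd = 2.0, 0.05
dt, steps = 1e-3, 20000

x, y = 1.0, 1.0
traj = np.empty((steps, 2))
for i in range(steps):
    n_jumps = rng.poisson(lam * dt)                       # jumps in this step
    dC = rng.normal(0.0, jump_sd, n_jumps).sum() if n_jumps else 0.0
    x += x * (a - b * y) * dt + x * dC
    y += y * (c * x - d) * dt
    traj[i] = x, y

print(traj.mean(axis=0), traj.min())
```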

  8. Generalized neurofuzzy network modeling algorithms using Bézier-Bernstein polynomial functions and additive decomposition.

    PubMed

    Hong, X; Harris, C J

    2000-01-01

    This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon basis functions that are Bézier-Bernstein polynomial functions. The approach is general in that it copes with n-dimensional inputs by utilising an additive decomposition construction to overcome the curse of dimensionality associated with high n. This new construction algorithm also introduces univariate Bézier-Bernstein polynomial functions for the completeness of the generalized procedure. Like B-spline expansion based neurofuzzy systems, Bézier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as nonnegativity of the basis functions, unity of support, and interpretability of the basis functions as fuzzy membership functions, with the additional advantages of structural parsimony and a Delaunay input space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. This new modeling network is based on the additive decomposition approach together with two separate basis function formation approaches for the univariate and bivariate Bézier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least squares methods. Numerical examples are included to demonstrate the effectiveness of this new data-based modeling approach.
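
    The cited basis-function properties (nonnegativity, and unity of support in the sense that the basis forms a partition of unity) are straightforward to verify numerically for the Bernstein polynomials B_{i,n}(x) = C(n,i) x^i (1−x)^{n−i}:

```python
import numpy as np
from math import comb

def bernstein_basis(n, x):
    """All n+1 Bernstein polynomials B_{i,n} evaluated at points x in [0, 1]."""
    x = np.asarray(x)
    return np.array([comb(n, i) * x**i * (1 - x)**(n - i) for i in range(n + 1)])

x = np.linspace(0.0, 1.0, 101)
B = bernstein_basis(5, x)

print(B.min())          # nonnegativity: every basis value >= 0
print(B.sum(axis=0))    # partition of unity: the basis sums to 1 at every x
```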

  9. Poly-symplectic Groupoids and Poly-Poisson Structures

    NASA Astrophysics Data System (ADS)

    Martinez, Nicolas

    2015-05-01

    We introduce poly-symplectic groupoids, which are natural extensions of symplectic groupoids to the context of poly-symplectic geometry, and define poly-Poisson structures as their infinitesimal counterparts. We present equivalent descriptions of poly-Poisson structures, including one related with AV-Dirac structures. We also discuss symmetries and reduction in the setting of poly-symplectic groupoids and poly-Poisson structures, and use our viewpoint to revisit results and develop new aspects of the theory initiated in Iglesias et al. (Lett Math Phys 103:1103-1133, 2013).

  10. Bayesian Inference and Online Learning in Poisson Neuronal Networks.

    PubMed

    Huang, Yanping; Rao, Rajesh P N

    2016-08-01

    Motivated by the growing evidence for Bayesian computation in the brain, we show how a two-layer recurrent network of Poisson neurons can perform both approximate Bayesian inference and learning for any hidden Markov model. The lower-layer sensory neurons receive noisy measurements of hidden world states. The higher-layer neurons infer a posterior distribution over world states via Bayesian inference from inputs generated by sensory neurons. We demonstrate how such a neuronal network with synaptic plasticity can implement a form of Bayesian inference similar to Monte Carlo methods such as particle filtering. Each spike in a higher-layer neuron represents a sample of a particular hidden world state. The spiking activity across the neural population approximates the posterior distribution over hidden states. In this model, variability in spiking is regarded not as a nuisance but as an integral feature that provides the variability necessary for sampling during inference. We demonstrate how the network can learn the likelihood model, as well as the transition probabilities underlying the dynamics, using a Hebbian learning rule. We present results illustrating the ability of the network to perform inference and learning for arbitrary hidden Markov models.

  11. Optimal dispersion with minimized Poisson equations for non-hydrostatic free surface flows

    NASA Astrophysics Data System (ADS)

    Cui, Haiyang; Pietrzak, J. D.; Stelling, G. S.

    2014-09-01

    A non-hydrostatic shallow-water model is proposed to simulate wave propagation in situations where the ratio of the wave length to the water depth is small. It exploits a reduced-size stencil in the Poisson pressure solver to make the model less expensive in terms of memory and CPU time. We refer to this new technique as the minimized Poisson equations formulation. In the simplest case, when the method is applied to a two-layer model, the new model requires the same computational effort as depth-integrated non-hydrostatic models, but can provide a much better description of dispersive waves. To allow an easy implementation of the new method in depth-integrated models, the governing equations are transformed into a depth-integrated system, in which the velocity difference serves as an extra variable. The non-hydrostatic shallow-water model with the minimized Poisson equations formulation produces good results in a series of numerical experiments, including a standing wave in a basin, a non-linear wave test, solitary wave propagation in a channel and wave propagation over a submerged bar.

  12. Use of additive technologies for practical working with complex models for foundry technologies

    NASA Astrophysics Data System (ADS)

    Olkhovik, E.; Butsanets, A. A.; Ageeva, A. A.

    2016-07-01

    The article presents the results of research on the application of additive technology (3D printing) for developing geometrically complex models of cast parts. Investment casting is a well known and widely used technology for the production of complex parts. The work proposes the use of 3D printing technology for manufacturing model parts, which are removed by thermal destruction. Traditional methods of tooling production for investment casting involve manual labor, which has problems with dimensional accuracy, and CNC technology, which is less used. Such a scheme has low productivity and demands considerable time. We have offered an alternative method which consists in printing the main components using a 3D printer (PLA and ABS) with subsequent production of casting models from them. In this article, the main technological methods are considered and their problems are discussed. The dimensional accuracy of the models in comparison with investment casting technology is considered as the main aspect.

  13. Generalized Additive Mixed-Models for Pharmacology Using Integrated Discrete Multiple Organ Co-Culture

    PubMed Central

    Ingersoll, Thomas; Cole, Stephanie; Madren-Whalley, Janna; Booker, Lamont; Dorsey, Russell; Li, Albert; Salem, Harry

    2016-01-01

    Integrated Discrete Multiple Organ Co-culture (IDMOC) is emerging as an in-vitro alternative to in-vivo animal models for pharmacology studies. IDMOC allows dose-response relationships to be investigated at the tissue and organoid levels, yet, these relationships often exhibit responses that are far more complex than the binary responses often measured in whole animals. To accommodate departure from binary endpoints, IDMOC requires an expansion of analytic techniques beyond simple linear probit and logistic models familiar in toxicology. IDMOC dose-responses may be measured at continuous scales, exhibit significant non-linearity such as local maxima or minima, and may include non-independent measures. Generalized additive mixed-modeling (GAMM) provides an alternative description of dose-response that relaxes assumptions of independence and linearity. We compared GAMMs to traditional linear models for describing dose-response in IDMOC pharmacology studies. PMID:27110941

  14. Evidence of thermal additivity during short laser pulses in an in vitro retinal model

    NASA Astrophysics Data System (ADS)

    Denton, Michael L.; Tijerina, Amanda J.; Dyer, Phillip N.; Oian, Chad A.; Noojin, Gary D.; Rickman, John M.; Shingledecker, Aurora D.; Clark, Clifton D.; Castellanos, Cherry C.; Thomas, Robert J.; Rockwell, Benjamin A.

    2015-03-01

    Laser damage thresholds were determined for exposure to 2.5-ms 532-nm pulses in an established in vitro retinal model. Single and multiple pulses (10, 100, 1000) were delivered to the cultured cells at three different pulse repetition frequency (PRF) values, and overt damage (membrane breach) was scored 1 hr post laser exposure. Trends in the damage data within and across the PRF range identified significant thermal additivity as PRF was increased, as evidenced by drastically reduced threshold values (< 40% of single-pulse value). Microthermography data that were collected in real time during each exposure also provided evidence of thermal additivity between successive laser pulses. Using thermal profiles simulated at high temporal resolution, damage threshold values were predicted by an in-house computational model. Our simulated ED50 value for a single 2.5-ms pulse was in very good agreement with experimental results, but ED50 predictions for multiple-pulse trains will require more refinement.

  15. Vlasov-Poisson in 1D: waterbags

    NASA Astrophysics Data System (ADS)

    Colombi, Stéphane; Touma, Jihad

    2014-07-01

    We revisit in one dimension the waterbag method for solving the Vlasov-Poisson equations numerically. In this approach, the phase-space distribution function f (x, v) is initially sampled by an ensemble of patches, the waterbags, where f is assumed to be constant. As a consequence of Liouville's theorem, one only needs to follow the evolution of the borders of these waterbags, which can be done by employing an orientated, self-adaptive polygon tracing the isocontours of f. This method, which is entropy conserving in essence, is very accurate and can trace non-linear instabilities very well, as illustrated by specific examples. As an application of the method, we generate an ensemble of single-waterbag simulations with decreasing thickness to perform a convergence study to the cold case. Our measurements show that the system relaxes to a steady state where the gravitational potential profile is a power law of slowly varying index β, with β close to 3/2 as found in the literature. However, detailed analysis of the properties of the gravitational potential shows that at the centre, β > 1.54. Moreover, our measurements are consistent with the value β = 8/5 = 1.6 that can be analytically derived by assuming that the average of the phase-space density per energy level obtained at crossing times is conserved during the mixing phase. These results are incompatible with the logarithmic slope of the projected density profile β - 2 ≃ -0.47 obtained recently by Schulz et al. using an N-body technique. This sheds again strong doubts on the capability of N-body techniques to converge to the correct steady state expected in the continuous limit.

  16. Model for Assembly Line Re-Balancing Considering Additional Capacity and Outsourcing to Face Demand Fluctuations

    NASA Astrophysics Data System (ADS)

    Samadhi, TMAA; Sumihartati, Atin

    2016-02-01

    The most critical stage in a garment industry is the sewing process, because it generally consists of a number of operations and a large number of sewing machines for each operation. Therefore, it requires a balancing method that can assign tasks to work stations with balanced workloads. Many studies on assembly line balancing assume a new assembly line but, in reality, due to demand fluctuations and increases, re-balancing is needed. To cope with such fluctuating demand changes, additional capacity can be provided by investing in spare sewing machines and by paying for sewing services through outsourcing. This study develops an assembly line balancing (ALB) model for an existing line to cope with fluctuating demand changes. Capacity redesign is decided if the demand fluctuation exceeds the available capacity, through a combination of investment in new machines and outsourcing, while minimizing the cost of idle capacity in the future. The objective of the model is to minimize the total cost of the assembly line, which consists of operating costs, machine costs, capacity addition costs, loss costs due to idle capacity, and outsourcing costs. The model developed is based on an integer programming model. The model is tested on a set of data for one year of demand with an existing number of sewing machines of 41 units. The result shows that additional maximum capacity of up to 76 machine units is required when there is an increase of 60% over the average demand, at equal cost parameters.

  17. A DNA-hairpin model for repeat-addition processivity in telomere synthesis.

    PubMed

    Yang, Wei; Lee, Young-Sam

    2015-11-01

    We propose a DNA-hairpin model for the processivity of telomeric-repeat addition. Concomitantly with template-RNA translocation after each repeat synthesis, the complementary DNA repeat, for example, AGGGTT, loops out in a noncanonical base-paired hairpin, thus freeing the RNA template for the next round of repeat synthesis. The DNA hairpin is temporarily stabilized by telomerase and the incoming dGTP but becomes realigned for processive telomere synthesis.

  18. Rain water transport and storage in a model sandy soil with hydrogel particle additives.

    PubMed

    Wei, Y; Durian, D J

    2014-10-01

    We study rain water infiltration and drainage in a dry model sandy soil with superabsorbent hydrogel particle additives by measuring the mass of retained water for non-ponding rainfall using a self-built 3D laboratory set-up. In the pure model sandy soil, the retained water curve measurements indicate that instead of a stable horizontal wetting front that grows downward uniformly, a narrow fingered flow forms under the top layer of water-saturated soil. This rain water channelization phenomenon not only further reduces the available rain water in the plant root zone, but also affects the efficiency of soil additives, such as superabsorbent hydrogel particles. Our studies show that the shape of the retained water curve for a soil packing with hydrogel particle additives strongly depends on the location and the concentration of the hydrogel particles in the model sandy soil. By carefully choosing the particle size and distribution methods, we may use the swollen hydrogel particles to modify the soil pore structure, to clog or extend the water channels in sandy soils, or to build water reservoirs in the plant root zone.

  19. Estimation of adjusted rate differences using additive negative binomial regression.

    PubMed

    Donoghoe, Mark W; Marschner, Ian C

    2016-08-15

    Rate differences are an important effect measure in biostatistics and provide an alternative perspective to rate ratios. When the data are event counts observed during an exposure period, adjusted rate differences may be estimated using an identity-link Poisson generalised linear model, also known as additive Poisson regression. A problem with this approach is that the assumption of equality of mean and variance rarely holds in real data, which often show overdispersion. An additive negative binomial model is the natural alternative to account for this; however, standard model-fitting methods are often unable to cope with the constrained parameter space arising from the non-negativity restrictions of the additive model. In this paper, we propose a novel solution to this problem using a variant of the expectation-conditional maximisation-either algorithm. Our method provides a reliable way to fit an additive negative binomial regression model and also permits flexible generalisations using semi-parametric regression functions. We illustrate the method using a placebo-controlled clinical trial of fenofibrate treatment in patients with type II diabetes, where the outcome is the number of laser therapy courses administered to treat diabetic retinopathy. An R package is available that implements the proposed method. Copyright © 2016 John Wiley & Sons, Ltd. PMID:27073156
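    The identity-link (additive) Poisson model described above is easy to sketch. The example below is not the authors' ECM-based negative binomial fitter; it is a minimal illustration, with a single hypothetical covariate and simulated data, of fitting mu_i = b0 + b1*x_i by projected gradient ascent on the Poisson log-likelihood, subject to the non-negativity constraint that motivates the paper's algorithm.

```python
import math
import random

def fit_additive_poisson(x, y, steps=5000, lr=5e-4, eps=1e-8):
    """Identity-link Poisson regression mu_i = b0 + b1*x_i with b0, b1 >= 0,
    fitted by projected gradient ascent on sum_i (y_i*log(mu_i) - mu_i)."""
    b0, b1 = 1.0, 1.0
    for _ in range(steps):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            mu = max(b0 + b1 * xi, eps)
            g0 += yi / mu - 1.0          # d/db0 of (y*log(mu) - mu)
            g1 += (yi / mu - 1.0) * xi   # d/db1
        b0 = max(b0 + lr * g0, 0.0)      # projection keeps the rate non-negative
        b1 = max(b1 + lr * g1, 0.0)
    return b0, b1

def rpois(lam):
    """Poisson sampler (Knuth's method; adequate for small lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

# Simulated counts with true rate 2 + 3*x; b1 is the adjusted rate difference.
random.seed(0)
xs = [i / 50.0 for i in range(200)]
ys = [rpois(2.0 + 3.0 * xi) for xi in xs]
b0, b1 = fit_additive_poisson(xs, ys)
print(round(b0, 2), round(b1, 2))  # close to (2, 3), up to sampling noise
```

    Because the log-likelihood is concave in (b0, b1), this projected ascent converges; the paper's ECM-type algorithm handles the same constrained parameter space far more reliably for the overdispersed negative binomial case.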

  20. Negative Poisson's ratios for extreme states of matter

    PubMed

    Baughman; Dantas; Stafstrom; Zakhidov; Mitchell; Dubin

    2000-06-16

    Negative Poisson's ratios are predicted for body-centered-cubic phases that likely exist in white dwarf cores and neutron star outer crusts, as well as those found for vacuumlike ion crystals, plasma dust crystals, and colloidal crystals (including certain virus crystals). The existence of this counterintuitive property, which means that a material laterally expands when stretched, is experimentally demonstrated for very low density crystals of trapped ions. At very high densities, the large predicted negative and positive Poisson's ratios might be important for understanding the asteroseismology of neutron stars and white dwarfs and the effect of stellar stresses on nuclear reaction rates. Giant Poisson's ratios are both predicted and observed for highly strained coulombic photonic crystals, suggesting possible applications of large, tunable Poisson's ratios for photonic crystal devices. PMID:10856209

  2. Generalized linear and generalized additive models in studies of species distributions: Setting the scene

    USGS Publications Warehouse

    Guisan, A.; Edwards, T.C.; Hastie, T.

    2002-01-01

    An important statistical development of the last 30 years has been the advance in regression analysis provided by generalized linear models (GLMs) and generalized additive models (GAMs). Here we introduce a series of papers prepared within the framework of an international workshop entitled: Advances in GLMs/GAMs modeling: from species distribution to environmental management, held in Riederalp, Switzerland, 6-11 August 2001. We first discuss some general uses of statistical models in ecology, as well as provide a short review of several key examples of the use of GLMs and GAMs in ecological modeling efforts. We next present an overview of GLMs and GAMs, and discuss some of their related statistics used for predictor selection, model diagnostics, and evaluation. Included is a discussion of several new approaches applicable to GLMs and GAMs, such as ridge regression, an alternative to stepwise selection of predictors, and methods for the identification of interactions by a combined use of regression trees and several other approaches. We close with an overview of the papers and how we feel they advance our understanding of their application to ecological modeling. © 2002 Elsevier Science B.V. All rights reserved.
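    Of the approaches mentioned, ridge regression is the simplest to illustrate. The sketch below, for the hypothetical single-predictor case with centred data and no intercept, shows how the ridge penalty lambda shrinks the least-squares slope toward zero (lambda = 0 recovers OLS).

```python
def ridge_slope(x, y, lam):
    """Ridge estimate for y ~ beta*x on centred data (no intercept):
    beta(lam) = sum(x*y) / (sum(x^2) + lam); lam = 0 gives ordinary least squares."""
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    return sxy / (sxx + lam)

# Hypothetical centred data with true slope near 2.
x = [-2.0, -1.0, 0.0, 1.0, 2.0]
y = [-4.1, -1.9, 0.1, 2.0, 3.9]
print(round(ridge_slope(x, y, 0.0), 2))  # -> 1.99 (OLS)
print(round(ridge_slope(x, y, 5.0), 2))  # -> 1.33 (shrunk toward zero)
```

    Unlike stepwise selection, which keeps or drops predictors discretely, the penalty shrinks all coefficients continuously, which is why the workshop papers discuss it as an alternative.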

  3. Nonlocal quadratic Poisson algebras, monodromy map, and Bogoyavlensky lattices

    NASA Astrophysics Data System (ADS)

    Suris, Yuri B.

    1997-08-01

    A new Lax representation for the Bogoyavlensky lattice is found and its r-matrix interpretation is elaborated. The r-matrix structure turns out to be related to a highly nonlocal quadratic Poisson structure on a direct sum of associative algebras. The theory of such nonlocal structures is developed and the Poisson property of the monodromy map is worked out in the most general situation. Some problems concerning the duality of Lax representations are raised.

  4. Bicrossed products induced by Poisson vector fields and their integrability

    NASA Astrophysics Data System (ADS)

    Djiba, Samson Apourewagne; Wade, Aïssa

    2016-01-01

    First we show that, associated to any Poisson vector field E on a Poisson manifold (M,π), there is a canonical Lie algebroid structure on the first jet bundle J1M which depends only on the cohomology class of E. We then introduce the notion of a cosymplectic groupoid and we discuss the integrability of the first jet bundle into a cosymplectic groupoid. Finally, we give applications to Atiyah classes and L∞-algebras.

  5. A generalized additive model for the spatial distribution of snowpack in the Spanish Pyrenees

    NASA Astrophysics Data System (ADS)

    López-Moreno, J. I.; Nogués-Bravo, D.

    2005-10-01

    A generalized additive model (GAM) was used to model the spatial distribution of snow depth in the central Spanish Pyrenees. Statistically significant non-linear relationships were found between distinct location and topographical variables and the average depth of the April snowpack at 76 snow poles from 1985 to 2000. The joint effect of the predictor variables explained more than 73% of the variance of the dependent variable. The performance of the model was assessed by applying a number of quantitative approaches to the residuals from a cross-validation test. The relatively low estimated errors and the possibility of understanding the processes that control snow accumulation, through the response curves of each independent variable, indicate that GAMs may be a useful tool for interpolating local snow depth or other climate parameters.

  6. Parity Symmetry and Parity Breaking in the Quantum Rabi Model with Addition of Ising Interaction

    NASA Astrophysics Data System (ADS)

    Wang, Qiong; He, Zhi; Yao, Chun-Mei

    2015-04-01

    We explore the possibility of generating a new parity symmetry in the quantum Rabi model after a bias is introduced. In contrast to the mathematical treatment in a previous publication [J. Phys. A 46 (2013) 265302], we consider a physically realistic method that couples an additional spin to the original spin of the quantum Rabi model through an Ising interaction; introducing a bias then breaks the parity symmetry as well as the scaling behavior of the ground state. The general rule is that the parity symmetry broken by introducing a bias can be restored by adding new degrees of freedom. The experimental feasibility of realizing the models under discussion is investigated. Supported by the National Natural Science Foundation of China under Grant Nos. 61475045 and 11347142, and the Natural Science Foundation of Hunan Province, China under Grant No. 2015JJ3092

  7. Evaporation model for beam based additive manufacturing using free surface lattice Boltzmann methods

    NASA Astrophysics Data System (ADS)

    Klassen, Alexander; Scharowsky, Thorsten; Körner, Carolin

    2014-07-01

    Evaporation plays an important role in many technical applications including beam-based additive manufacturing processes, such as selective electron beam or selective laser melting (SEBM/SLM). In this paper, we describe an evaporation model which we employ within the framework of a two-dimensional free surface lattice Boltzmann method. With this method, we solve the hydrodynamics as well as thermodynamics of the molten material taking into account the mass and energy losses due to evaporation and the recoil pressure acting on the melt pool. Validation of the numerical model is performed by measuring maximum melt depths and evaporative losses in samples of pure titanium and Ti-6Al-4V molten by an electron beam. Finally, the model is applied to create processing maps for an SEBM process. The results predict that the penetration depth of the electron beam, which is a function of the acceleration voltage, has a significant influence on evaporation effects.

  8. Testing Departure from Additivity in Tukey’s Model using Shrinkage: Application to a Longitudinal Setting

    PubMed Central

    Ko, Yi-An; Mukherjee, Bhramar; Smith, Jennifer A.; Park, Sung Kyun; Kardia, Sharon L.R.; Allison, Matthew A.; Vokonas, Pantel S.; Chen, Jinbo; Diez-Roux, Ana V.

    2014-01-01

    While there has been extensive research developing gene-environment interaction (GEI) methods in case-control studies, little attention has been given to sparse and efficient modeling of GEI in longitudinal studies. In a two-way table for GEI with rows and columns as categorical variables, a conventional saturated interaction model involves estimation of a specific parameter for each cell, with constraints ensuring identifiability. The estimates are unbiased but are potentially inefficient because the number of parameters to be estimated can grow quickly with increasing categories of row/column factors. On the other hand, Tukey’s one degree of freedom (df) model for non-additivity treats the interaction term as a scaled product of row and column main effects. Due to the parsimonious form of interaction, the interaction estimate leads to enhanced efficiency and the corresponding test could lead to increased power. Unfortunately, Tukey’s model gives biased estimates and low power if the model is misspecified. When screening multiple GEIs where each genetic and environmental marker may exhibit a distinct interaction pattern, a robust estimator for interaction is important for GEI detection. We propose a shrinkage estimator for interaction effects that combines estimates from both Tukey’s and saturated interaction models and use the corresponding Wald test for testing interaction in a longitudinal setting. The proposed estimator is robust to misspecification of interaction structure. We illustrate the proposed methods using two longitudinal studies — the Normative Aging Study and the Multi-Ethnic Study of Atherosclerosis. PMID:25112650
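    Tukey's one-degree-of-freedom decomposition described above can be made concrete on a small two-way table. The sketch below (hypothetical balanced cell means, no noise or shrinkage step) estimates the main effects, the saturated residual interaction, and the single scale parameter theta of Tukey's model gamma_ij = theta * a_i * b_j.

```python
def tukey_decomposition(table):
    """Two-way table of cell means -> grand mean, row/column main effects,
    saturated interaction residuals, and Tukey's one-df scale theta, where
    the interaction is modelled as gamma_ij = theta * a_i * b_j."""
    R, C = len(table), len(table[0])
    grand = sum(sum(row) for row in table) / (R * C)
    a = [sum(row) / C - grand for row in table]
    b = [sum(table[i][j] for i in range(R)) / R - grand for j in range(C)]
    resid = [[table[i][j] - grand - a[i] - b[j] for j in range(C)] for i in range(R)]
    num = sum(a[i] * b[j] * resid[i][j] for i in range(R) for j in range(C))
    den = sum((a[i] * b[j]) ** 2 for i in range(R) for j in range(C))
    theta = num / den if den else 0.0
    return grand, a, b, theta, resid

# A table built to satisfy Tukey's model exactly, with theta = 0.5:
# mu_ij = m + a_i + b_j + 0.5 * a_i * b_j.
m, A, B = 10.0, [-1.0, 1.0], [-2.0, 0.0, 2.0]
table = [[m + ai + bj + 0.5 * ai * bj for bj in B] for ai in A]
_, a, b, theta, _ = tukey_decomposition(table)
print(round(theta, 3))  # recovers theta = 0.5
```

    When the true interaction really has this product form, the one-df estimate is far more efficient than the saturated model; the paper's shrinkage estimator combines the two to stay robust when the product form is misspecified.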

  9. Wall-models for large eddy simulation based on a generic additive-filter formulation

    NASA Astrophysics Data System (ADS)

    Sanchez Rocha, Martin

    Based on the philosophy of only resolving the large scales of turbulent motion, Large Eddy Simulation (LES) has demonstrated potential to provide high-fidelity turbulence simulations at low computational cost. However, when the scales that control the turbulence in a particular flow are not large, LES has to increase its computational cost significantly to provide accurate predictions. This is the case in wall-bounded flows, where the grid resolution required by LES to resolve the near-wall structures is close to the requirements to resolve the smallest dissipative scales in turbulence. Therefore, to reduce this demanding requirement, it has been proposed to model the near-wall region with Reynolds-Averaged Navier-Stokes (RANS) models, in what is known as the hybrid RANS/LES approach. In this work, the mathematical implications of merging two different turbulence modeling approaches are addressed by deriving the exact hybrid RANS/LES Navier-Stokes equations. These equations are derived by introducing an additive-filter, which linearly combines the RANS and LES operators with a blending function. The equations derived with the additive-filter predict additional hybrid terms, which represent the interactions between RANS and LES formulations. Theoretically, the prediction of the hybrid terms demonstrates that the hybridization of the two approaches cannot be accomplished only by the turbulence model equations, as is claimed in current hybrid RANS/LES models. The importance of the exact hybrid RANS/LES equations is demonstrated by conducting numerical calculations on a turbulent flat-plate boundary layer. Results indicate that the hybrid terms help to maintain an equilibrated model transition when the hybrid formulation switches from RANS to LES. Results also indicate that, when the hybrid terms are not included, the accuracy of the calculations strongly relies on the blending function implemented in the additive-filter. On the other hand, if the exact equations are

  10. Mixed-effects Poisson regression analysis of adverse event reports

    PubMed Central

    Gibbons, Robert D.; Segawa, Eisuke; Karabatsos, George; Amatya, Anup K.; Bhaumik, Dulal K.; Brown, C. Hendricks; Kapur, Kush; Marcus, Sue M.; Hur, Kwan; Mann, J. John

    2008-01-01

    SUMMARY A new statistical methodology is developed for the analysis of spontaneous adverse event (AE) reports from post-marketing drug surveillance data. The method involves both empirical Bayes (EB) and fully Bayes estimation of rate multipliers for each drug within a class of drugs, for a particular AE, based on a mixed-effects Poisson regression model. Both parametric and semiparametric models for the random-effect distribution are examined. The method is applied to data from Food and Drug Administration (FDA)’s Adverse Event Reporting System (AERS) on the relationship between antidepressants and suicide. We obtain point estimates and 95 per cent confidence (posterior) intervals for the rate multiplier for each drug (e.g. antidepressants), which can be used to determine whether a particular drug has an increased risk of association with a particular AE (e.g. suicide). Confidence (posterior) intervals that do not include 1.0 provide evidence for either significant protective or harmful associations of the drug and the adverse effect. We also examine EB, parametric Bayes, and semiparametric Bayes estimators of the rate multipliers and associated confidence (posterior) intervals. Results of our analysis of the FDA AERS data revealed that newer antidepressants are associated with lower rates of suicide adverse event reports compared with older antidepressants. We recommend improvements to the existing AERS system, which are likely to improve its public health value as an early warning system. PMID:18404622
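    The paper's mixed-effects Poisson regression is not reproduced here, but the empirical Bayes idea behind its rate multipliers has a classic conjugate sketch: counts y_d ~ Poisson(E_d * lam_d) with a Gamma prior on the drug-specific multiplier lam_d, giving a closed-form posterior mean that shrinks noisy raw ratios toward the prior mean. The drug counts, expected counts, and prior hyperparameters below are hypothetical and fixed by hand rather than estimated from data, as full EB would do.

```python
def eb_rate_multipliers(counts, expected, a, b):
    """Posterior-mean rate multipliers under a conjugate gamma-Poisson model:
    y_d ~ Poisson(E_d * lam_d), lam_d ~ Gamma(shape=a, rate=b)
    => lam_d | y_d ~ Gamma(a + y_d, b + E_d), with mean (a + y_d)/(b + E_d)."""
    return [(a + y) / (b + e) for y, e in zip(counts, expected)]

# Hypothetical AE report counts and exposure-adjusted expected counts per drug.
counts   = [50, 3, 12, 0, 30]
expected = [40.0, 5.0, 10.0, 2.0, 31.0]

raw    = [y / e for y, e in zip(counts, expected)]
shrunk = eb_rate_multipliers(counts, expected, a=2.0, b=2.0)  # prior mean a/b = 1

for r, s in zip(raw, shrunk):
    print(round(r, 2), "->", round(s, 2))
```

    Note how the drug with zero reports moves from a raw multiplier of 0 up toward 1, while the large-count drug barely moves: sparse drugs borrow strength from the class, which is the point of the EB machinery in the paper.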

  11. Reduction of carcinogenic 4(5)-methylimidazole in a caramel model system: influence of food additives.

    PubMed

    Seo, Seulgi; Ka, Mi-Hyun; Lee, Kwang-Geun

    2014-07-01

    The effect of various food additives on the formation of carcinogenic 4(5)-methylimidazole (4-MI) in a caramel model system was investigated. The relationship between the levels of 4-MI and various pyrazines was studied. When glucose and ammonium hydroxide were heated, the amount of 4-MI was 556 ± 1.3 μg/mL, which increased to 583 ± 2.6 μg/mL by the addition of 0.1 M of sodium sulfite. When various food additives, such as 0.1 M of iron sulfate, magnesium sulfate, zinc sulfate, tryptophan, and cysteine were added, the amount of 4-MI was reduced to 110 ± 0.7, 483 ± 2.0, 460 ± 2.0, 409 ± 4.4, and 397 ± 1.7 μg/mL, respectively. The greatest reduction, 80%, occurred with the addition of iron sulfate. Among the 12 pyrazines, 2-ethyl-6-methylpyrazine with 4-MI showed the highest correlation (r = -0.8239).

  12. Coagulation kinetics beyond mean field theory using an optimised Poisson representation

    NASA Astrophysics Data System (ADS)

    Burnett, James; Ford, Ian J.

    2015-05-01

    Binary particle coagulation can be modelled as the repeated random process of the combination of two particles to form a third. The kinetics may be represented by population rate equations based on a mean field assumption, according to which the rate of aggregation is taken to be proportional to the product of the mean populations of the two participants, but this can be a poor approximation when the mean populations are small. However, using the Poisson representation, it is possible to derive a set of rate equations that go beyond mean field theory, describing pseudo-populations that are continuous, noisy, and complex, but where averaging over the noise and initial conditions gives the mean of the physical population. Such an approach is explored for the simple case of a size-independent rate of coagulation between particles. Analytical results are compared with numerical computations and with results derived by other means. In the numerical work, we encounter instabilities that can be eliminated using a suitable "gauge" transformation of the problem [P. D. Drummond, Eur. Phys. J. B 38, 617 (2004)] which we show to be equivalent to the application of the Cameron-Martin-Girsanov formula describing a shift in a probability measure. The cost of such a procedure is to introduce additional statistical noise into the numerical results, but we identify an optimised gauge transformation where this difficulty is minimal for the main properties of interest. For more complicated systems, such an approach is likely to be computationally cheaper than Monte Carlo simulation.
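    The breakdown of the mean-field assumption at small populations, which motivates the Poisson-representation approach above, can be seen directly by comparing the mean-field rate equation with an exact stochastic simulation. The sketch below uses a Gillespie simulation (not the paper's method) of size-independent coagulation A + A -> A, with hypothetical parameters.

```python
import random

def gillespie_coagulation(n0, k, t_end, rng):
    """One exact realisation of A + A -> A with size-independent rate k:
    with n particles present, the total coagulation rate is k*n*(n-1)/2."""
    n, t = n0, 0.0
    while n > 1:
        t += rng.expovariate(k * n * (n - 1) / 2.0)
        if t > t_end:
            break
        n -= 1
    return n

def mean_field(n0, k, t_end):
    """Mean-field prediction dn/dt = -(k/2)*n^2  =>  n(t) = n0/(1 + n0*k*t/2)."""
    return n0 / (1.0 + n0 * k * t_end / 2.0)

rng = random.Random(42)
trials = 2000
avg = sum(gillespie_coagulation(5, 1.0, 2.0, rng) for _ in range(trials)) / trials
print("stochastic mean:", round(avg, 2), " mean-field:", round(mean_field(5, 1.0, 2.0), 2))
```

    Starting from only 5 particles, the true mean population can never drop below 1, yet the mean-field solution happily predicts a fractional value below 1: the n*(n-1) pairing correlation that mean field replaces by n^2 is exactly what the Poisson-representation rate equations recover.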

  13. AFMPB: An adaptive fast multipole Poisson-Boltzmann solver for calculating electrostatics in biomolecular systems

    NASA Astrophysics Data System (ADS)

    Lu, Benzhuo; Cheng, Xiaolin; Huang, Jingfang; McCammon, J. Andrew

    2013-11-01

    A Fortran program package is introduced for rapid evaluation of the electrostatic potentials and forces in biomolecular systems modeled by the linearized Poisson-Boltzmann equation. The numerical solver utilizes a well-conditioned boundary integral equation (BIE) formulation, a node-patch discretization scheme, a Krylov subspace iterative solver package with reverse communication protocols, and an adaptive new version of the fast multipole method in which the exponential expansions are used to diagonalize the multipole-to-local translations. The program and its full description, as well as several closely related libraries and utility tools are available at http://lsec.cc.ac.cn/~lubz/afmpb.html and a mirror site at http://mccammon.ucsd.edu/. This paper is a brief summary of the program: the algorithms, the implementation and the usage. Restrictions: Only three or six significant digits options are provided in this version. Unusual features: Most of the codes are in Fortran77 style. Memory allocation functions from Fortran90 and above are used in a few subroutines. Additional comments: The current version of the codes is designed and written for single core/processor desktop machines. Check http://lsec.cc.ac.cn/lubz/afmpb.html for updates and changes. Running time: The running time varies with the number of discretized elements (N) in the system and their distributions. In most cases, it scales linearly as a function of N.

  14. Topsoil organic carbon content of Europe, a new map based on a generalised additive model

    NASA Astrophysics Data System (ADS)

    de Brogniez, Delphine; Ballabio, Cristiano; Stevens, Antoine; Jones, Robert J. A.; Montanarella, Luca; van Wesemael, Bas

    2014-05-01

    There is an increasing demand for up-to-date spatially continuous organic carbon (OC) data for global environmental and climatic modeling. Whilst the current map of topsoil organic carbon content for Europe (Jones et al., 2005) was produced by applying expert-knowledge based pedo-transfer rules on large soil mapping units, the aim of this study was to replace it by applying digital soil mapping techniques on the first European harmonised geo-referenced topsoil (0-20 cm) database, which arises from the LUCAS (land use/cover area frame statistical survey) survey. A generalized additive model (GAM) was calibrated on 85% of the dataset (ca. 17 000 soil samples) and a backward stepwise approach selected slope, land cover, temperature, net primary productivity, latitude and longitude as environmental covariates (500 m resolution). The validation of the model (applied on 15% of the dataset) gave an R2 of 0.27. We observed that most organic soils were under-predicted by the model and that soils of Scandinavia were also poorly predicted. The model showed an RMSE of 42 g kg-1 for mineral soils and of 287 g kg-1 for organic soils. The map of predicted OC content showed the lowest values in Mediterranean countries and in croplands across Europe, whereas the highest OC contents were predicted in wetlands, woodlands and in mountainous areas. The map of standard error of the OC model predictions showed high values in northern latitudes, wetlands, moors and heathlands, whereas low uncertainty was mostly found in croplands. A comparison of our results with the map of Jones et al. (2005) showed a general agreement on the prediction of mineral soils' OC content, most probably because the models use some common covariates, namely land cover and temperature. Our model however failed to predict values of OC content greater than 200 g kg-1, which we explain by the imposed unimodal distribution of our model, whose mean is tilted towards the majority of soils, which are mineral. Finally, average

  15. Transmission tomography under Poisson noise using the Anscombe transformation and Wiener filtering of the projections

    NASA Astrophysics Data System (ADS)

    Mascarenhas, Nelson D. A.; Santos, Cid A. N.; Cruvinel, Paulo E.

    1999-03-01

    A minitomograph scanner for soil science was developed by the National Center for Research and Development of Agricultural Instrumentation (EMBRAPA/CNPDIA). The purpose of this paper is twofold. First, a statistical characterization of the noise affecting the projection measurements of this scanner is presented. Second, having determined the Poisson nature of this noise, a new method of filtering the projection data prior to the reconstruction is proposed. It is based on transforming the Poisson noise into Gaussian additive noise, filtering the projections in blocks through the Wiener filter, and performing the inverse transformation. Results with real data indicate that this method gives superior results, as compared to conventional backprojection with the ramp filter, by taking into consideration both resolution and noise, through a mean square error criterion.
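    The variance-stabilising step in the pipeline above is the Anscombe transformation, which maps Poisson counts to approximately Gaussian data with unit variance so that a Gaussian filter such as the Wiener filter applies. A minimal sketch, with a simple algebraic inverse (the paper's block-Wiener filtering itself is not reproduced):

```python
import math
import random

def anscombe(x):
    """Anscombe transform: maps Poisson counts to approximately Gaussian
    values with variance ~1 (accurate for means above roughly 4)."""
    return 2.0 * math.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(t):
    """Simple algebraic inverse (bias-corrected inverses are often preferred)."""
    return (t / 2.0) ** 2 - 3.0 / 8.0

rng = random.Random(1)

def rpois(lam):
    """Poisson sampler (Knuth's method)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

# The raw variance equals lam, but after the transform it is ~1 at every level.
for lam in (10.0, 50.0, 200.0):
    vals = [anscombe(rpois(lam)) for _ in range(5000)]
    m = sum(vals) / len(vals)
    var = sum((v - m) ** 2 for v in vals) / len(vals)
    print(lam, round(var, 2))
```

    After filtering in the transformed domain, the projections are mapped back with the inverse before reconstruction, as in the method described above.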

  16. Improving the predictive accuracy of hurricane power outage forecasts using generalized additive models.

    PubMed

    Han, Seung-Ryong; Guikema, Seth D; Quiring, Steven M

    2009-10-01

    Electric power is a critical infrastructure service after hurricanes, and rapid restoration of electric power is important in order to minimize losses in the impacted areas. However, rapid restoration of electric power after a hurricane depends on obtaining the necessary resources, primarily repair crews and materials, before the hurricane makes landfall and then appropriately deploying these resources as soon as possible after the hurricane. This, in turn, depends on having sound estimates of both the overall severity of the storm and the relative risk of power outages in different areas. Past studies have developed statistical, regression-based approaches for estimating the number of power outages in advance of an approaching hurricane. However, these approaches have either not been applicable for future events or have had lower predictive accuracy than desired. This article shows that a different type of regression model, a generalized additive model (GAM), can outperform the types of models used previously. This is done by developing and validating a GAM based on power outage data during past hurricanes in the Gulf Coast region and comparing the results from this model to the previously used generalized linear models.

  17. Predicting the Survival Time for Bladder Cancer Using an Additive Hazards Model in Microarray Data

    PubMed Central

    TAPAK, Leili; MAHJUB, Hossein; SADEGHIFAR, Majid; SAIDIJAM, Massoud; POOROLAJAL, Jalal

    2016-01-01

    Background: One substantial part of microarray studies is to predict patients’ survival based on their gene expression profile. Variable selection techniques are powerful tools to handle high dimensionality in analysis of microarray data. However, these techniques have not been investigated in the competing risks setting. This study aimed to investigate the performance of four sparse variable selection methods in estimating the survival time. Methods: The data included 1381 gene expression measurements and clinical information from 301 patients with bladder cancer operated on in the years 1987 to 2000 in hospitals in Denmark, Sweden, Spain, France, and England. Four methods, the least absolute shrinkage and selection operator, smoothly clipped absolute deviation, the smooth integration of counting and absolute deviation, and the elastic net, were utilized for simultaneous variable selection and estimation under an additive hazards model. The criteria of area under the ROC curve, Brier score and c-index were used to compare the methods. Results: The median follow-up time for all patients was 47 months. The elastic net approach was indicated to outperform the other methods. The elastic net had the lowest integrated Brier score (0.137±0.07) and the greatest median over-time AUC and C-index (0.803±0.06 and 0.779±0.13, respectively). Five out of 19 genes selected by the elastic net were significant (P<0.05) under an additive hazards model. It was indicated that the expression of RTN4, SON, IGF1R and CDC20 decreases the survival time, while the expression of SMARCAD1 increases it. Conclusion: The elastic net had higher capability than the other methods for the prediction of survival time in patients with bladder cancer in the presence of competing risks, based on an additive hazards model. PMID:27114989

  18. Comparison of prosthetic models produced by traditional and additive manufacturing methods

    PubMed Central

    Park, Jin-Young; Kim, Hae-Young; Kim, Ji-Hwan; Kim, Jae-Hong

    2015-01-01

    PURPOSE The purpose of this study was to verify the clinical-feasibility of additive manufacturing by comparing the accuracy of four different manufacturing methods for metal coping: the conventional lost wax technique (CLWT); subtractive methods with wax blank milling (WBM); and two additive methods, multi jet modeling (MJM), and micro-stereolithography (Micro-SLA). MATERIALS AND METHODS Thirty study models were created using an acrylic model with the maxillary upper right canine, first premolar, and first molar teeth. Based on the scan files from a non-contact blue light scanner (Identica; Medit Co. Ltd., Seoul, Korea), thirty cores were produced using the WBM, MJM, and Micro-SLA methods, respectively, and another thirty frameworks were produced using the CLWT method. To measure the marginal and internal gap, the silicone replica method was adopted, and the silicone images obtained were evaluated using a digital microscope (KH-7700; Hirox, Tokyo, Japan) at 140X magnification. Analyses were performed using two-way analysis of variance (ANOVA) and Tukey post hoc test (α=.05). RESULTS The mean marginal gaps and internal gaps showed significant differences according to tooth type (P<.001 and P<.001, respectively) and manufacturing method (P<.037 and P<.001, respectively). Micro-SLA did not show any significant difference from CLWT regarding mean marginal gap compared to the WBM and MJM methods. CONCLUSION The mean values of gaps resulting from the four different manufacturing methods were within a clinically allowable range, and, thus, the clinical use of additive manufacturing methods is acceptable as an alternative to the traditional lost wax-technique and subtractive manufacturing. PMID:26330976

  19. Thermodynamic network model for predicting effects of substrate addition and other perturbations on subsurface microbial communities

    SciTech Connect

    Jack Istok; Melora Park; James McKinley; Chongxuan Liu; Lee Krumholz; Anne Spain; Aaron Peacock; Brett Baldwin

    2007-04-19

    The overall goal of this project is to develop and test a thermodynamic network model for predicting the effects of substrate additions and environmental perturbations on microbial growth, community composition and system geochemistry. The hypothesis is that a thermodynamic analysis of the energy-yielding growth reactions performed by defined groups of microorganisms can be used to make quantitative and testable predictions of the change in microbial community composition that will occur when a substrate is added to the subsurface or when environmental conditions change.

  20. Generalized additive models and Lucilia sericata growth: assessing confidence intervals and error rates in forensic entomology.

    PubMed

    Tarone, Aaron M; Foran, David R

    2008-07-01

    Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.

  1. Understanding the changes in ductility and Poisson's ratio of metallic glasses during annealing from microscopic dynamics

    SciTech Connect

    Wang, Z.; Ngai, K. L.; Wang, W. H.

    2015-07-21

    In the paper by K. L. Ngai et al. [J. Chem. Phys. 140, 044511 (2014)], the empirical correlation of ductility with the Poisson's ratio, ν_Poisson, found in metallic glasses was theoretically explained by microscopic dynamic processes which link, on the one hand, ductility, and on the other hand the Poisson's ratio. Specifically, the dynamic processes are the primitive relaxation in the Coupling Model, which is the precursor of the Johari-Goldstein β-relaxation, and the caged-atom dynamics characterized by the effective Debye-Waller factor f_0 or, equivalently, the nearly constant loss (NCL) in susceptibility. All these processes and the parameters characterizing them are accessible experimentally except f_0 or the NCL of caged atoms; thus, so far, the experimental verification of the explanation of the correlation between ductility and Poisson's ratio is incomplete. In the experimental part of this paper, we report dynamic mechanical measurements of the NCL of the metallic glass La_60Ni_15Al_25 as-cast, and the changes caused by annealing at temperatures below T_g. The observed monotonic decrease of the NCL with aging time, reflecting the corresponding increase of f_0, correlates with the decrease of ν_Poisson. This is an important observation because such measurements, not made before, provide the missing link in confirming by experiment the explanation of the correlation of ductility with ν_Poisson. On aging the metallic glass, the isochronal loss spectra also show a shift of the β-relaxation to higher temperatures and a reduction of its relaxation strength. These concomitant changes of the β-relaxation and NCL are the root cause of embrittlement by aging the metallic glass. The NCL of caged atoms is terminated by the onset of the primitive relaxation in the Coupling Model, which is generally supported by experiments. From this relation, the monotonic decrease of the NCL with aging time is caused by the slowing down

  2. Guarana provides additional stimulation over caffeine alone in the planarian model.

    PubMed

    Moustakas, Dimitrios; Mezzio, Michael; Rodriguez, Branden R; Constable, Mic Andre; Mulligan, Margaret E; Voura, Evelyn B

    2015-01-01

    The stimulant effect of energy drinks is primarily attributed to the caffeine they contain. Many energy drinks also contain other ingredients that might enhance the tonic effects of these caffeinated beverages. One of these additives is guarana. Guarana is a climbing plant native to the Amazon whose seeds contain approximately four times the amount of caffeine found in coffee beans. The mix of other natural chemicals contained in guarana seeds is thought to heighten the stimulant effects of guarana over caffeine alone. Yet, despite the growing use of guarana as an additive in energy drinks, and a burgeoning market for it as a nutritional supplement, the science examining guarana and how it affects other dietary ingredients is lacking. To appreciate the stimulant effects of guarana and other natural products, a straightforward model to investigate their physiological properties is needed. The planarian provides such a system. The locomotor activity and convulsive response of planarians with substance exposure has been shown to provide an excellent system to measure the effects of drug stimulation, addiction and withdrawal. To gauge the stimulant effects of guarana we studied how it altered the locomotor activity of the planarian species Dugesia tigrina. We report evidence that guarana seeds provide additional stimulation over caffeine alone, and document the changes to this stimulation in the context of both caffeine and glucose. PMID:25880065

  3. Analysis and Modeling of soil hydrology under different soil additives in artificial runoff plots

    NASA Astrophysics Data System (ADS)

    Ruidisch, M.; Arnhold, S.; Kettering, J.; Huwe, B.; Kuzyakov, Y.; Ok, Y.; Tenhunen, J. D.

    2009-12-01

    The impact of monsoon events during June and July in the Korean project region, the Haean Basin, located in the northeastern part of South Korea, plays a key role in erosion, leaching, and groundwater pollution risk from agrochemicals. The project therefore investigates the main hydrological processes in agricultural soils under field and laboratory conditions at different scales (plot, hillslope, and catchment). Soil hydrological parameters were analysed for different soil additives, which are known to prevent soil erosion and nutrient loss as well as to increase water infiltration, aggregate stability, and soil fertility. Hence, synthetic water-soluble polyacrylamide (PAM), biochar (black carbon mixed with organic fertilizer), and a combination of both were applied in runoff plots at three agricultural field sites. Additionally, a subplot without any additives was set up as a control. The field sites were selected in areas with similar hillslope gradients, with emphasis on the dominant land-management form of dryland farming in Haean, which is characterised by row planting and row covering with foil. Hydrological parameters such as saturated hydraulic conductivity, matric potential, and water content were analysed by infiltration experiments, continuous tensiometer measurements, and time-domain reflectometry, as well as by pressure plates to identify the characteristic water retention curve of each horizon. Weather data were recorded by three weather stations next to the runoff plots. The measured data also provide the input data for modeling water transport in the unsaturated zone of the runoff plots with HYDRUS 1D/2D/3D and SWAT (Soil & Water Assessment Tool).

  6. Relaxation-time limit in the multi-dimensional bipolar nonisentropic Euler-Poisson systems

    NASA Astrophysics Data System (ADS)

    Li, Yeping; Zhou, Zhiming

    2015-05-01

    In this paper, we consider the multi-dimensional bipolar nonisentropic Euler-Poisson systems, which model various physical phenomena in semiconductor devices, plasmas, and channel proteins. We mainly study the relaxation-time limit of the initial value problem for the bipolar full Euler-Poisson equations with well-prepared initial data. Inspired by the Maxwell iteration, we construct different approximation states for the cases τσ = 1 and σ = 1, respectively, and show that periodic initial-value problems of certain scaled bipolar nonisentropic Euler-Poisson systems in these cases have unique smooth solutions on the time interval where the classical energy-transport equation and the drift-diffusion equation have smooth solutions. Moreover, the smooth solutions are shown to converge to those of the energy-transport model at the rate of τ2 and to those of the drift-diffusion model at the rate of τ, respectively. The proof of these results is based on the continuation principle and error estimates.

  7. Nonparametric Independence Screening in Sparse Ultra-High Dimensional Additive Models.

    PubMed

    Fan, Jianqing; Feng, Yang; Song, Rui

    2011-06-01

    A variable screening procedure via correlation learning was proposed by Fan and Lv (2008) to reduce dimensionality in sparse ultra-high dimensional models. Even when the true model is linear, the marginal regression can be highly nonlinear. To address this issue, we extend correlation learning to marginal nonparametric learning. Our nonparametric independence screening, called NIS, is a specific member of the sure independence screening family. Several closely related variable screening procedures are proposed. It is shown that, under general nonparametric models and some mild technical conditions, the proposed independence screening methods enjoy a sure screening property. The extent to which the dimensionality can be reduced by independence screening is also explicitly quantified. As a methodological extension, data-driven thresholding and iterative nonparametric independence screening (INIS) are also proposed to enhance the finite-sample performance for fitting sparse additive models. Simulation results and a real-data analysis demonstrate that the proposed procedure works well with moderate sample size and large dimension and performs better than competing methods.
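
    The screening step itself is simple to prototype. Below is a toy sketch of marginal screening, with a low-degree polynomial fit standing in for the spline smoothers of NIS; the data, active features, and cutoff are all invented for illustration.

```python
import numpy as np

def nis_screen(X, y, top_k=5, degree=3):
    """Rank features by marginal nonparametric fit and keep the top_k.

    A low-degree polynomial stands in for the spline smoothers used in NIS."""
    n, p = X.shape
    scores = np.empty(p)
    for j in range(p):
        coeffs = np.polyfit(X[:, j], y, degree)    # fit y on feature j alone
        resid = y - np.polyval(coeffs, X[:, j])
        scores[j] = 1.0 - resid.var() / y.var()    # marginal R^2
    return np.argsort(scores)[::-1][:top_k], scores

# Only features 3 and 7 matter, and both act nonlinearly, so plain
# marginal *linear* correlation would miss much of the signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = np.sin(X[:, 3]) + 0.5 * X[:, 7] ** 2 + 0.1 * rng.normal(size=200)
kept, _ = nis_screen(X, y, top_k=5)
```

    With these settings the nonlinear marginal fit ranks both active features near the top even though 48 of the 50 columns are pure noise.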

  8. Model Scramjet Inlet Unstart Induced by Mass Addition and Heat Release

    NASA Astrophysics Data System (ADS)

    Im, Seong-Kyun; Baccarella, Damiano; McGann, Brendan; Liu, Qili; Wermer, Lydia; Do, Hyungrok

    2015-11-01

    The inlet unstart phenomena in a model scramjet are investigated in an arc-heated hypersonic wind tunnel. Unstart events induced by nitrogen or ethylene jets at low- or high-enthalpy Mach 4.5 freestream flow conditions are compared. The jet injection pressurizes the downstream flow by mass addition and flow blockage. In the case of ethylene jet injection, heat release from combustion increases the backpressure further. Time-resolved schlieren imaging is performed at the jet and at the lip of the model inlet to visualize the flow features during unstart. High-frequency pressure measurements provide information on pressure fluctuations at the scramjet wall. In both the mass-driven and heat-release-driven unstart cases, similar transient and quasi-steady behaviors of the unstart shockwave system are observed during the unstart process. Combustion-driven unstart induces severe oscillatory motions of the jet and of the unstart shock at the lip of the scramjet inlet after the unstart process is complete, while the unstarted flow induced solely by mass addition remains relatively steady. The discrepancies between the mass-driven and heat-release-driven unstart processes are explained by the flow choking mechanism.

  9. Exact solutions for models of evolving networks with addition and deletion of nodes.

    PubMed

    Moore, Cristopher; Ghoshal, Gourab; Newman, M E J

    2006-09-01

    There has been considerable recent interest in the properties of networks, such as citation networks and the worldwide web, that grow by the addition of vertices, and a number of simple solvable models of network growth have been studied. In the real world, however, many networks, including the web, not only add vertices but also lose them. Here we formulate models of the time evolution of such networks and give exact solutions for a number of cases of particular interest. For the case of net growth and so-called preferential attachment--in which newly appearing vertices attach to previously existing ones in proportion to vertex degree--we show that the resulting networks have power-law degree distributions, but with an exponent that diverges as the growth rate vanishes. We conjecture that the low exponent values observed in real-world networks are thus the result of vigorous growth in which the rate of addition of vertices far exceeds the rate of removal. Were growth to slow in the future--for instance, in a more mature future version of the web--we would expect to see exponents increase, potentially without bound.
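
    A toy simulation in this spirit is easy to write; the version below (growth rule, deletion rate, and seed are illustrative choices, not the authors' exact model) adds one vertex per step with degree-proportional attachment and, with probability r, deletes a uniformly random vertex.

```python
import random
from collections import defaultdict

def grow_with_deletion(steps, r=0.25, seed=1):
    """Grow a network by preferential attachment, deleting a random
    vertex (and its edges) with probability r at each step."""
    random.seed(seed)
    edges = [(0, 1)]
    next_id = 2
    for _ in range(steps):
        # A uniform endpoint of a uniform edge is a degree-proportional pick.
        target = random.choice(random.choice(edges))
        edges.append((next_id, target))
        next_id += 1
        if random.random() < r:
            nodes = sorted({v for e in edges for v in e})
            if len(nodes) > 2:
                victim = random.choice(nodes)
                survivors = [e for e in edges if victim not in e]
                if survivors:            # keep the graph non-empty
                    edges = survivors
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return dict(degree)

deg = grow_with_deletion(2000)
# The resulting degree distribution is heavy-tailed: most surviving
# vertices have one or two links while a few accumulate many.
```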

  10. Continental crust composition constrained by measurements of crustal Poisson's ratio

    NASA Astrophysics Data System (ADS)

    Zandt, George; Ammon, Charles J.

    1995-03-01

    Deciphering the geological evolution of the Earth's continental crust requires knowledge of its bulk composition and global variability. The main uncertainties are associated with the composition of the lower crust. Seismic measurements probe the elastic properties of the crust at depth, from which composition can be inferred. Of particular note is Poisson's ratio, σ; this elastic parameter can be determined uniquely from the ratio of P- to S-wave seismic velocity, and provides a better diagnostic of crustal composition than either P- or S-wave velocity alone [1]. Previous attempts to measure σ have been limited by difficulties in obtaining coincident P- and S-wave data sampling the entire crust [2]. Here we report 76 new estimates of crustal σ spanning all of the continents except Antarctica. We find that, on average, σ increases with the age of the crust. Our results strongly support the presence of a mafic lower crust beneath cratons, and suggest either a uniformitarian craton-formation process involving delamination of the lower crust during continental collisions, followed by magmatic underplating, or a model in which crust-formation processes have changed since the Precambrian era.
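
    The conversion behind such estimates is a standard elasticity identity: Poisson's ratio follows directly from the Vp/Vs ratio. A minimal sketch (the velocities below are illustrative, not values from the study):

```python
def poisson_ratio(vp, vs):
    """Poisson's ratio from P- and S-wave velocities:
    sigma = (Vp^2 - 2 Vs^2) / (2 (Vp^2 - Vs^2))."""
    r2 = (vp / vs) ** 2
    return (r2 - 2.0) / (2.0 * (r2 - 1.0))

# A "Poisson solid" (Vp/Vs = sqrt(3)) gives the textbook value 0.25;
# higher values point toward more mafic compositions.
sigma = poisson_ratio(3 ** 0.5, 1.0)
```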

  11. Complexity as aging non-Poisson renewal processes

    NASA Astrophysics Data System (ADS)

    Bianco, Simone

    The search for a satisfactory model of complexity, meant as an intermediate condition between total order and total disorder, is still a subject of debate in the scientific community. In this dissertation the emergence of non-Poisson renewal processes in several complex systems is investigated. After reviewing the basics of renewal theory, another popular approach to complexity, called modulation, is introduced. I show how these two different approaches, given a suitable choice of the parameters involved, can generate the same macroscopic outcome, namely an inverse power-law distribution density of event occurrences. To resolve this ambiguity, a numerical instrument based on the theoretical analysis of the aging properties of renewal systems is introduced. The application of this method, called the renewal aging experiment, allows us to distinguish whether a time series has been generated by a renewal or a modulation process. This method of analysis is then applied to several physical systems, from blinking quantum dots to human brain activity to seismic fluctuations. Theoretical conclusions about the underlying nature of the considered complex systems are drawn.

  12. Poisson process approximation for sequence repeats, and sequencing by hybridization.

    PubMed

    Arratia, R; Martin, D; Reinert, G; Waterman, M S

    1996-01-01

    Sequencing by hybridization is a tool to determine a DNA sequence from the unordered list of all l-tuples contained in this sequence; typical numbers for l are l = 8, 10, 12. For theoretical purposes we assume that the multiset of all l-tuples is known. This multiset determines the DNA sequence uniquely if none of the so-called Ukkonen transformations are possible. These transformations require repeats of (l-1)-tuples in the sequence, with these repeats occurring in certain spatial patterns. We model DNA as an i.i.d. sequence. We first prove Poisson process approximations for the process of indicators of all leftmost long repeats allowing self-overlap and for the process of indicators of all leftmost long repeats without self-overlap. Using the Chen-Stein method, we get bounds on the error of these approximations. As a corollary, we approximate the distribution of longest repeats. In the second step we analyze the spatial patterns of the repeats. Finally we combine these two steps to prove an approximation for the probability that a random sequence is uniquely recoverable from its list of l-tuples. For all our results we give some numerical examples including error bounds. PMID:8891959
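
    The repeat statistics being approximated are easy to probe empirically. The sketch below (sequence length, l, and seed are arbitrary choices) counts the l-tuples occurring more than once in a random i.i.d. DNA sequence; for i.i.d. bases, roughly n^2 / (2 * 4^l) colliding pairs are expected.

```python
import random
from collections import Counter

def repeated_tuples(seq, l):
    """Number of distinct l-tuples occurring more than once in seq."""
    counts = Counter(seq[i:i + l] for i in range(len(seq) - l + 1))
    return sum(1 for c in counts.values() if c > 1)

random.seed(4)
n, l = 5000, 8
seq = "".join(random.choice("ACGT") for _ in range(n))
hits = repeated_tuples(seq, l)   # on the order of n^2 / (2 * 4^l)
```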

  13. Semiclassical Limits of Ore Extensions and a Poisson Generalized Weyl Algebra

    NASA Astrophysics Data System (ADS)

    Cho, Eun-Hee; Oh, Sei-Qwon

    2016-07-01

    We observe [Launois and Lecoutre, Trans. Am. Math. Soc. 368:755-785, 2016, Proposition 4.1] that Poisson polynomial extensions appear as semiclassical limits of a class of Ore extensions. As an application, a Poisson generalized Weyl algebra A_1, considered as a Poisson version of the quantum generalized Weyl algebra, is constructed and its Poisson structures are studied. In particular, a necessary and sufficient condition for A_1 to be Poisson simple is obtained, and it is established that the Poisson endomorphisms of A_1 are Poisson analogues of the endomorphisms of the quantum generalized Weyl algebra.

  14. Determinants of Low Birth Weight in Malawi: Bayesian Geo-Additive Modelling.

    PubMed

    Ngwira, Alfred; Stanley, Christopher C

    2015-01-01

    Studies of the factors behind low birth weight in Malawi have neglected the flexible approach of using smooth functions for some covariates in models. Such a flexible approach reveals the detailed relationship of covariates with the response. This study aimed to investigate risk factors for low birth weight in Malawi by assuming a flexible approach for continuous covariates and a geographical random effect. A Bayesian geo-additive model for birth weight in kilograms and size of the child at birth (less than average, or average and higher), with district as a spatial effect, was adopted using the 2010 Malawi demographic and health survey data. A Gaussian model for birth weight in kilograms and a binary logistic model for the binary outcome (size of child at birth) were fitted. Continuous covariates were modelled by penalized (p) splines, and spatial effects were smoothed by the two-dimensional p-spline. The study found that child birth order and the mother's weight and height are significant predictors of birth weight. Secondary education for the mother, birth-order categories 2-3 and 4-5, a wealth index of richer families, and the mother's height were significant predictors of child size at birth. The area associated with low birth weight was Chitipa, and the areas with increased risk of less-than-average size at birth were Chitipa and Mchinji. The study found support for the flexible modelling of some covariates that clearly have nonlinear influences. Nevertheless, there is no strong support for the inclusion of geographical spatial analysis. The spatial patterns, though, point to the influence of omitted variables with some spatial structure, or possibly epidemiological processes that account for this spatial structure, and the maps generated could be used for targeting development efforts at a glance.

  15. Modeling protein density of states: additive hydrophobic effects are insufficient for calorimetric two-state cooperativity.

    PubMed

    Chan, H S

    2000-09-01

    A well-established experimental criterion for two-state thermodynamic cooperativity in protein folding is that the van't Hoff enthalpy DeltaH(vH) around the transition midpoint is equal, or very nearly so, to the calorimetric enthalpy DeltaH(cal) of the entire transition. This condition is satisfied by many small proteins. We use simple lattice models to provide a statistical mechanical framework to elucidate how this calorimetric two-state picture may be reconciled with the hierarchical multistate scenario emerging from recent hydrogen exchange experiments. We investigate the feasibility of using inverse Laplace transforms to recover the underlying density of states (i.e., enthalpy distribution) from calorimetric data. We find that the constraint imposed by DeltaH(vH)/DeltaH(cal) approximately 1 on densities of states of proteins is often more stringent than other "two-state" criteria proposed in recent theoretical studies. In conjunction with reasonable assumptions, the calorimetric two-state condition implies a narrow distribution of denatured-state enthalpies relative to the overall enthalpy difference between the native and the denatured conformations. This requirement does not always correlate with simple definitions of "sharpness" of a transition and has important ramifications for theoretical modeling. We find that protein models that assume capillarity cooperativity can exhibit overall calorimetric two-state-like behaviors. However, common heteropolymer models based on additive hydrophobic-like interactions, including highly specific two-dimensional Gō models, fail to produce proteinlike DeltaH(vH)/DeltaH(cal) approximately 1. A simple model is constructed to illustrate a proposed scenario in which physically plausible local and nonlocal cooperative terms, which mimic helical cooperativity and environment-dependent hydrogen bonding strength, can lead to thermodynamic behaviors closer to experiment. Our results suggest that proteinlike thermodynamic
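
    The calorimetric criterion can be checked numerically for any model density of states. The sketch below (units with k_B = 1; the two-level spectrum and degeneracies are invented) uses a common van't Hoff estimator, DeltaH(vH) = 2 T_max sqrt(C(T_max)) at the heat-capacity peak, and compares it with the calorimetric enthalpy change across the transition.

```python
import numpy as np

def vant_hoff_ratio(H, g, T):
    """DeltaH(vH) / DeltaH(cal) for a discrete density of states.

    H: enthalpy levels, g: degeneracies, T: temperature grid (k_B = 1).
    Uses DeltaH(vH) = 2 * T_max * sqrt(C(T_max)) at the heat-capacity peak."""
    means, heat_cap = [], []
    for t in T:
        w = g * np.exp(-H / t)
        w = w / w.sum()                     # Boltzmann populations
        m = (w * H).sum()
        means.append(m)
        heat_cap.append(((w * H ** 2).sum() - m ** 2) / t ** 2)
    means, heat_cap = np.array(means), np.array(heat_cap)
    i = heat_cap.argmax()
    dH_vH = 2.0 * T[i] * np.sqrt(heat_cap[i])
    dH_cal = means[-1] - means[0]           # enthalpy gained across the scan
    return dH_vH / dH_cal

# A clean two-state system: a ground state versus 1000 degenerate
# "denatured" conformations at enthalpy 10; the ratio should be near 1.
H = np.array([0.0, 10.0])
g = np.array([1.0, 1000.0])
ratio = vant_hoff_ratio(H, g, np.linspace(0.5, 4.0, 2000))
```

    Densities of states with a broad spread of denatured-state enthalpies push this ratio below 1, which is the quantitative content of the calorimetric two-state constraint discussed above.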

  17. Generalized Additive Models Used to Predict Species Abundance in the Gulf of Mexico: An Ecosystem Modeling Tool

    PubMed Central

    Drexler, Michael; Ainsworth, Cameron H.

    2013-01-01

    Spatially explicit ecosystem models of all types require an initial allocation of biomass, often in areas where fisheries-independent abundance estimates do not exist. A generalized additive modelling (GAM) approach is used to describe the abundance of 40 species groups (i.e., functional groups) across the Gulf of Mexico (GoM) using a large fisheries-independent data set (SEAMAP) and climate-scale oceanographic conditions. Predictor variables included in the model are chlorophyll a, sediment type, dissolved oxygen, temperature, and depth. Despite the presence of a large number of zeros in the data, a single GAM using a negative binomial distribution was suitable for making predictions of abundance for multiple functional groups. We present an example case study using pink shrimp (Farfantepenaeus duorarum) and compare the results to known distributions. The model successfully predicts the known areas of high abundance in the GoM, including areas for which no data were input into the model fitting. Overall, the model reliably captures areas of high and low abundance for the large majority of functional groups observed in SEAMAP. This method allows the objective setting of spatial distributions for numerous functional groups across a modeling domain, even where abundance data may not exist. PMID:23691223

  18. The biobehavioral family model: testing social support as an additional exogenous variable.

    PubMed

    Woods, Sarah B; Priest, Jacob B; Roush, Tara

    2014-12-01

    This study tests the inclusion of social support as a distinct exogenous variable in the Biobehavioral Family Model (BBFM). The BBFM is a biopsychosocial approach to health that proposes that biobehavioral reactivity (anxiety and depression) mediates the relationship between family emotional climate and disease activity. Data for this study included married, English-speaking adult participants (n = 1,321; 55% female; M age = 45.2 years) from the National Comorbidity Survey Replication, a nationally representative epidemiological study of the frequency of mental disorders in the United States. Participants reported their demographics, marital functioning, social support from friends and relatives, anxiety and depression (biobehavioral reactivity), number of chronic health conditions, and number of prescription medications. Confirmatory factor analyses supported the items used in the measures of negative marital interactions, social support, and biobehavioral reactivity, as well as the use of negative marital interactions, friends' social support, and relatives' social support as distinct factors in the model. Structural equation modeling indicated a good fit of the data to the hypothesized model (χ² = 846.04, p = .000, SRMR = .039, CFI = .924, TLI = .914, RMSEA = .043). Negative marital interactions predicted biobehavioral reactivity (β = .38, p < .001), as did relatives' social support, inversely (β = -.16, p < .001). Biobehavioral reactivity predicted disease activity (β = .40, p < .001) and was demonstrated to be a significant mediator through tests of indirect effects. Findings are consistent with previous tests of the BBFM with adult samples, and suggest the important addition of family social support as a predicting factor in the model. PMID:24981970

  19. Modeling particulate matter concentrations measured through mobile monitoring in a deletion/substitution/addition approach

    NASA Astrophysics Data System (ADS)

    Su, Jason G.; Hopke, Philip K.; Tian, Yilin; Baldwin, Nichole; Thurston, Sally W.; Evans, Kristin; Rich, David Q.

    2015-12-01

    Land use regression (LUR) modeling with local-scale circular modeling domains has been used to predict traffic-related air pollution such as nitrogen oxides (NOX). LUR modeling for fine particulate matter (PM), which generally has smaller spatial gradients than NOX, has typically been applied in studies involving multiple study regions. To increase the spatial coverage for fine PM and key constituent concentrations, we designed a mobile monitoring network in Monroe County, New York to measure pollutant concentrations of black carbon (BC, wavelength at 880 nm), ultraviolet black carbon (UVBC, wavelength at 370 nm), and Delta-C (the difference between the UVBC and BC concentrations) using the Clarkson University Mobile Air Pollution Monitoring Laboratory (MAPL). A Deletion/Substitution/Addition (D/S/A) algorithm was applied, using circular buffers as the basis for the statistics. The algorithm maximizes the prediction accuracy for locations without measurements using the V-fold cross-validation technique, and it reduces overfitting compared to other approaches. We found that the D/S/A LUR modeling approach could achieve good results, with prediction powers of 60%, 63%, and 61%, respectively, for BC, UVBC, and Delta-C. The advantage of mobile monitoring is that it can monitor pollutant concentrations at hundreds of spatial points in a region, rather than the typical fewer than 100 points from a fixed-site saturation monitoring network. This research indicates that a mobile saturation sampling network, when combined with proper modeling techniques, can uncover small-area variations (e.g., 10 m) in particulate matter concentrations.
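
    The V-fold cross-validation at the core of D/S/A is easy to illustrate on a toy buffer-selection problem (the radii, the "traffic density" predictor, and the data below are simulated, not from the study):

```python
import numpy as np

def vfold_rss(X, y, V=5):
    """V-fold cross-validated residual sum of squares for OLS on X."""
    folds = np.arange(len(y)) % V
    rss = 0.0
    for v in range(V):
        tr, te = folds != v, folds == v
        beta, *_ = np.linalg.lstsq(X[tr], y[tr], rcond=None)
        rss += ((y[te] - X[te] @ beta) ** 2).sum()
    return rss

rng = np.random.default_rng(3)
n = 120
# Hypothetical predictor: traffic density within a 100/300/500 m circular buffer.
buffers = {r: rng.normal(size=n) for r in (100, 300, 500)}
y = 2.0 * buffers[300] + 0.3 * rng.normal(size=n)    # the 300 m buffer drives y
score = {r: vfold_rss(np.column_stack([np.ones(n), x]), y)
         for r, x in buffers.items()}
best_radius = min(score, key=score.get)
```

    Held-out error, rather than in-sample fit, drives the choice among candidate buffers, which is what lets D/S/A add, delete, and substitute terms without overfitting.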

  20. A habitat suitability model for Chinese sturgeon determined using the generalized additive method

    NASA Astrophysics Data System (ADS)

    Yi, Yujun; Sun, Jie; Zhang, Shanghong

    2016-03-01

    The Chinese sturgeon is a type of large anadromous fish that migrates between the ocean and rivers. Because dam construction has cut off its migration path, this species is currently on the verge of extinction. Simulating suitable environmental conditions for spawning, followed by repairing or rebuilding its spawning grounds, is an effective way to protect this species. Various habitat suitability models based on expert knowledge have been used to evaluate the suitability of spawning habitat. In this study, a two-dimensional hydraulic simulation is used to inform a habitat suitability model based on the generalized additive method (GAM). The GAM is based on real data. The values of water depth and velocity are first calculated via the hydrodynamic model and then applied in the GAM. The final habitat suitability model is validated using catch per unit effort (CPUE) data from 1999 and 2003. The model results show that a velocity of 1.06-1.56 m/s and a depth of 13.33-20.33 m are highly suitable ranges for the Chinese sturgeon to spawn. The hydraulic habitat suitability indexes (HHSI) for seven discharges (4000; 9000; 12,000; 16,000; 20,000; 30,000; and 40,000 m3/s) are calculated to evaluate integrated habitat suitability. The results show that the integrated habitat suitability reaches its highest value at a discharge of 16,000 m3/s. This study is the first to apply a GAM to evaluating the suitability of spawning grounds for the Chinese sturgeon, and it provides a reference for identifying potential spawning grounds in the entire basin.

  2. Poisson image reconstruction with Hessian Schatten-norm regularization.

    PubMed

    Lefkimmiatis, Stamatios; Unser, Michael

    2013-11-01

    Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an l_p norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework. PMID:23846472
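    The measurement model being inverted (a linear operator followed by Poisson noise) can be simulated in a few lines. The sketch below uses a 1-D moving-average blur as a stand-in for the paper's operators and only generates the degraded data; it does not implement the Hessian Schatten-norm reconstruction itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth 1-D "image": a nonnegative intensity profile
x = np.zeros(64)
x[20:30] = 50.0

# Linear degradation operator A: a simple moving-average blur
kernel = np.ones(5) / 5.0
blurred = np.convolve(x, kernel, mode="same")

# Measurements y_i ~ Poisson((Ax)_i): the noise variance equals the local
# mean, which is what distinguishes this setting from additive Gaussian noise
y = rng.poisson(blurred)
print(y[18:32])
```

    The signal-dependent variance is why the reconstruction objective uses the Poisson log-likelihood rather than a plain least-squares data term.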

  4. Sister chromatid exchange data fit with a mixture of Poisson distributions.

    PubMed

    Byers, R H; Shenton, L R

    1999-06-30

    Bowman et al. [K.O. Bowman, Wesley Eddings, Marvin A. Kastenbaum, L.R. Shenton, Sister chromatid exchange data and Gram-Charlier series, Mutat. Res., 403 (1998) 159-169.] have shown that a Gram-Charlier modification of a negative binomial distribution gives a reasonable fit to counts of sister chromatid exchange (SCE) data originally presented by Bender et al. [M.A. Bender, R.J. Preston, R.C. Leonard, B.E. Pyatt, P.C. Gooch, On the distribution of spontaneous SCE in human peripheral blood lymphocytes, Mutat. Res., 281 (1992) 227-232.]. Here we show that a mixture of generalized Poisson distributions also fits the data. Advantages of the generalized Poisson mixture include a simpler model, involving only four parameters, which fits the data more closely according to the chi-squared goodness-of-fit criterion. PMID:10393269
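    As a hedged illustration of the kind of goodness-of-fit comparison discussed above, the snippet below fits a plain (not generalized) Poisson to synthetic SCE-like counts and computes a chi-squared statistic; the four-parameter generalized Poisson mixture itself is not reproduced here, and the data are simulated, not Bender et al.'s.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
counts = rng.poisson(8.0, size=500)   # stand-in for SCE counts per cell

lam = counts.mean()                   # MLE of the plain-Poisson rate

# Observed vs. expected frequencies over the observed support
values, observed = np.unique(counts, return_counts=True)
expected = stats.poisson.pmf(values, lam) * counts.size
expected *= observed.sum() / expected.sum()   # match totals for the statistic

chi2 = ((observed - expected) ** 2 / expected).sum()
dof = len(values) - 1 - 1             # bins - 1 - one estimated parameter
pvalue = stats.chi2.sf(chi2, dof)
print(round(lam, 2), round(pvalue, 3))
```

    With real SCE data the interest is in whether a richer family (mixture or Gram-Charlier) lowers this statistic enough to justify its extra parameters.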

  5. Genetic evaluation of traits distributed as Poisson-binomial with reference to reproductive characters.

    PubMed

    Foulley, J L; Gianola, D; Im, S

    1987-04-01

    A procedure of genetic evaluation of reproductive traits such as litter size and survival in a polytocous species under the assumption of polygenic inheritance is described. Conditional distributions of these traits are assumed to be Poisson and Bernoulli, respectively. Using the concept of generalized linear models, logarithmic (litter size) and probit (survival) functions are described as linear combinations of "nuisance" environmental effects and of transmitting abilities of sires or individual breeding values. The liability of survival is expressed conditionally to the logarithm of litter size. Inferences on location parameters are based on the mode of their joint posterior density assuming a prior multivariate normal distribution. A method of estimation of the dispersion parameters is also presented. The use of a "truncated" Poisson distribution is suggested to account for missing records on litter size. PMID:24241297
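    A "truncated" Poisson of the kind suggested for missing litter-size records can be sketched as a zero-truncated Poisson, which renormalizes the probability mass over k ≥ 1. The snippet below is an illustrative check of the pmf and of its mean λ/(1 − e^(−λ)); it is not the paper's full sire-evaluation procedure.

```python
import numpy as np
from scipy import stats

def zt_poisson_pmf(k, lam):
    """Zero-truncated Poisson: P(K = k | K >= 1), for k = 1, 2, ..."""
    k = np.asarray(k)
    pmf = stats.poisson.pmf(k, lam) / (1.0 - np.exp(-lam))
    return np.where(k >= 1, pmf, 0.0)

lam = 2.0
k = np.arange(1, 30)          # tail mass beyond 29 is negligible for lam = 2
p = zt_poisson_pmf(k, lam)
mean = (k * p).sum()          # should equal lam / (1 - exp(-lam))
print(round(p.sum(), 6), round(mean, 4))
```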

  6. Impact of an additional chronic BDNF reduction on learning performance in an Alzheimer mouse model.

    PubMed

    Psotta, Laura; Rockahr, Carolin; Gruss, Michael; Kirches, Elmar; Braun, Katharina; Lessmann, Volkmar; Bock, Jörg; Endres, Thomas

    2015-01-01

    There is increasing evidence that brain-derived neurotrophic factor (BDNF) plays a crucial role in Alzheimer's disease (AD) pathology. A number of studies demonstrated that AD patients exhibit reduced BDNF levels in the brain and the blood serum, and in addition, several animal-based studies indicated a potential protective effect of BDNF against Aβ-induced neurotoxicity. In order to further investigate the role of BDNF in the etiology of AD, we created a novel mouse model by crossing a well-established AD mouse model (APP/PS1) with a mouse exhibiting a chronic BDNF deficiency (BDNF(+/-)). This new triple transgenic mouse model enabled us to further analyze the role of BDNF in AD in vivo. We reasoned that if BDNF has a protective effect against AD pathology, an AD-like phenotype in our new mouse model should occur earlier and/or with more severity than in the APP/PS1-mice. Indeed, the behavioral analysis revealed that the APP/PS1-BDNF(+/-)-mice show an earlier onset of learning impairments in a two-way active avoidance task in comparison to APP/PS1- and BDNF(+/-)-mice. However, in the Morris water maze (MWM) test we observed no overall aggravated impairment in spatial learning, and short-term memory in an object recognition task remained intact in all tested mouse lines. In addition to the behavioral experiments, we analyzed the amyloid plaque pathology in the APP/PS1 and APP/PS1-BDNF(+/-)-mice and observed a comparable plaque density in the two genotypes. Moreover, our results revealed a higher plaque density in prefrontal cortical compared to hippocampal brain regions. Our data reveal that higher cognitive tasks requiring the recruitment of cortical networks appear to be more severely affected in our new mouse model than learning tasks requiring mainly sub-cortical networks. Furthermore, our observations of an accelerated impairment in active avoidance learning in APP/PS1-BDNF(+/-)-mice further support the hypothesis that BDNF deficiency

  7. Nonlinear feedback in a six-dimensional Lorenz model: impact of an additional heating term

    NASA Astrophysics Data System (ADS)

    Shen, B.-W.

    2015-12-01

    In this study, a six-dimensional Lorenz model (6DLM) is derived, based on a recent study using a five-dimensional (5-D) Lorenz model (LM), in order to examine the impact of an additional mode and its accompanying heating term on solution stability. The new mode added to improve the representation of the streamfunction is referred to as a secondary streamfunction mode, while the two additional modes, which appear in both the 6DLM and 5DLM but not in the original LM, are referred to as secondary temperature modes. Two energy conservation relationships of the 6DLM are first derived in the dissipationless limit. The impact of three additional modes on solution stability is examined by comparing numerical solutions and ensemble Lyapunov exponents of the 6DLM and 5DLM as well as the original LM. For the onset of chaos, the critical value of the normalized Rayleigh number (rc) is determined to be 41.1. The critical value is larger than that in the 3DLM (rc ~ 24.74), but slightly smaller than the one in the 5DLM (rc ~ 42.9). A stability analysis and numerical experiments obtained using generalized LMs, with or without simplifications, suggest the following: (1) negative nonlinear feedback in association with the secondary temperature modes, as first identified using the 5DLM, plays a dominant role in providing feedback for improving the solution's stability of the 6DLM, (2) the additional heating term in association with the secondary streamfunction mode may destabilize the solution, and (3) overall feedback due to the secondary streamfunction mode is much smaller than the feedback due to the secondary temperature modes; therefore, the critical Rayleigh number of the 6DLM is comparable to that of the 5DLM. The 5DLM and 6DLM collectively suggest different roles for small-scale processes (i.e., stabilization vs. destabilization), consistent with the following statement by Lorenz (1972): "If the flap of a butterfly's wings can be instrumental in generating a tornado, it can
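    The 6DLM equations are not reproduced in this record, but the original three-mode Lorenz system (the 3DLM, chaotic for r above rc ≈ 24.74) can be integrated directly. The sketch below, with the standard parameter values, illustrates the sensitive dependence on initial conditions referenced in the closing quotation.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz3d(t, state, sigma=10.0, r=28.0, b=8.0 / 3.0):
    """Classic three-mode Lorenz system; chaotic at r = 28 (> rc ~ 24.74)."""
    x, y, z = state
    return [sigma * (y - x), x * (r - z) - y, x * y - b * z]

sol = solve_ivp(lorenz3d, (0.0, 40.0), [1.0, 1.0, 1.0],
                dense_output=True, rtol=1e-8, atol=1e-10)

# Sensitive dependence: a 1e-8 perturbation of the initial state diverges
sol2 = solve_ivp(lorenz3d, (0.0, 40.0), [1.0 + 1e-8, 1.0, 1.0],
                 dense_output=True, rtol=1e-8, atol=1e-10)
gap = np.abs(sol.sol(40.0) - sol2.sol(40.0)).max()
print(gap)
```

    By t = 40 the two trajectories are macroscopically different, which is the butterfly effect in miniature; the higher-dimensional LMs in the paper modify when (at what r) this behavior sets in.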

  8. Nonlinear feedback in a six-dimensional Lorenz Model: impact of an additional heating term

    NASA Astrophysics Data System (ADS)

    Shen, B.-W.

    2015-03-01

    In this study, a six-dimensional Lorenz model (6DLM) is derived, based on a recent study using a five-dimensional (5-D) Lorenz model (LM), in order to examine the impact of an additional mode and its accompanying heating term on solution stability. The new mode added to improve the representation of the streamfunction is referred to as a secondary streamfunction mode, while the two additional modes, which appear in both the 6DLM and 5DLM but not in the original LM, are referred to as secondary temperature modes. Two energy conservation relationships of the 6DLM are first derived in the dissipationless limit. The impact of three additional modes on solution stability is examined by comparing numerical solutions and ensemble Lyapunov exponents of the 6DLM and 5DLM as well as the original LM. For the onset of chaos, the critical value of the normalized Rayleigh number (rc) is determined to be 41.1. The critical value is larger than that in the 3DLM (rc ~ 24.74), but slightly smaller than the one in the 5DLM (rc ~ 42.9). A stability analysis and numerical experiments obtained using generalized LMs, with or without simplifications, suggest the following: (1) negative nonlinear feedback in association with the secondary temperature modes, as first identified using the 5DLM, plays a dominant role in providing feedback for improving the solution's stability of the 6DLM, (2) the additional heating term in association with the secondary streamfunction mode may destabilize the solution, and (3) overall feedback due to the secondary streamfunction mode is much smaller than the feedback due to the secondary temperature modes; therefore, the critical Rayleigh number of the 6DLM is comparable to that of the 5DLM. The 5DLM and 6DLM collectively suggest different roles for small-scale processes (i.e., stabilization vs. destabilization), consistent with the following statement by Lorenz (1972): "If the flap of a butterfly's wings can be instrumental in generating a tornado, it can

  9. Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates

    SciTech Connect

    Laurence, T; Chromy, B

    2009-11-10

    Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the Levenberg-Marquardt algorithm, commonly used for nonlinear least squares minimization, for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the nonlinear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, and is simple to implement, quick, and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Nonlinear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, this criterion is not easy to satisfy in practice, since it requires a large number of events. It has been well known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper provides an extensive characterization of these biases in exponential fitting. The more appropriate measure based on the maximum likelihood estimator (MLE
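    Up to parameter-independent terms, the Poisson MLE objective described above reduces to minimizing Σ_i [f_i − y_i ln f_i] over the model predictions f_i. The sketch below minimizes it for a single-exponential decay histogram, as in lifetime fitting, but with a general-purpose scipy optimizer rather than the paper's Levenberg-Marquardt extension; the model and data are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 50)                     # histogram bin centers
true_amp, true_tau = 200.0, 2.0
y = rng.poisson(true_amp * np.exp(-t / true_tau))  # counted events per bin

def poisson_nll(p):
    amp, tau = p
    if amp <= 0.0 or tau <= 0.0:                   # keep the model positive
        return np.inf
    f = amp * np.exp(-t / tau)
    # Poisson negative log-likelihood up to a parameter-independent constant
    return np.sum(f - y * np.log(f + 1e-300))

fit = minimize(poisson_nll, x0=[100.0, 1.0], method="Nelder-Mead")
amp_hat, tau_hat = fit.x
print(round(amp_hat, 1), round(tau_hat, 2))
```

    Unlike least squares, this objective weights low-count bins correctly, which is where the bias of least-squares exponential fits comes from.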

  10. Estimation and Inference in Generalized Additive Coefficient Models for Nonlinear Interactions with High-Dimensional Covariates

    PubMed Central

    Ma, Shujie; Carroll, Raymond J.; Liang, Hua; Xu, Shizhong

    2015-01-01

    In the low-dimensional case, the generalized additive coefficient model (GACM) proposed by Xue and Yang [Statist. Sinica 16 (2006) 1423–1446] has been demonstrated to be a powerful tool for studying nonlinear interaction effects of variables. In this paper, we propose estimation and inference procedures for the GACM when the dimension of the variables is high. Specifically, we propose a groupwise penalization based procedure to distinguish significant covariates for the “large p small n” setting. The procedure is shown to be consistent for model structure identification. Further, we construct simultaneous confidence bands for the coefficient functions in the selected model based on a refined two-step spline estimator. We also discuss how to choose the tuning parameters. To estimate the standard deviation of the functional estimator, we adopt the smoothed bootstrap method. We conduct simulation experiments to evaluate the numerical performance of the proposed methods and analyze an obesity data set from a genome-wide association study as an illustration. PMID:26412908

  11. Spectral models of additive and modulation noise in speech and phonatory excitation signals

    NASA Astrophysics Data System (ADS)

    Schoentgen, Jean

    2003-01-01

    The article presents spectral models of additive and modulation noise in speech. The purpose is to learn about the causes of noise in the spectra of normal and disordered voices and to gauge whether the spectral properties of the perturbations of the phonatory excitation signal can be inferred from the spectral properties of the speech signal. The approach to modeling consists of deducing the Fourier series of the perturbed speech, assuming that the Fourier series of the noise and of the clean monocycle-periodic excitation are known. The models explain published data, take into account the effects of supraglottal tremor, demonstrate the modulation distortion owing to vocal tract filtering, establish conditions under which noise cues of different speech signals may be compared, and predict the impossibility of inferring the spectral properties of the frequency modulating noise from the spectral properties of the frequency modulation noise (e.g., phonatory jitter and frequency tremor). The general conclusion is that only phonatory frequency modulation noise is spectrally relevant. Other types of noise in speech are either epiphenomenal, or their spectral effects are masked by the spectral effects of frequency modulation noise.

  12. Shaping the Arago-Poisson spot with incomplete spiral phase modulation.

    PubMed

    Zhang, Yuanying; Zhang, Wuhong; Su, Ming; Chen, Lixiang

    2016-04-01

    The Arago-Poisson spot played an important role in the discovery of the wave nature of light. We demonstrate a novel way to shape the Arago-Poisson spot by partially twisting the phase fronts of the incident light beam. We use a spatial light modulator to generate the holographic gratings both for mimicking the circular opaque objects and for modulating the spiral phase profiles. For incomplete spiral phase of five- and tenfold symmetry, we observe the gradual formation of the on-axis bright spots upon propagation. Our results show that two fundamental but seemingly independent optical phenomena, namely, the Arago-Poisson spot and the orbital angular momentum (OAM) of light, can be well connected by changing the phase height ϑ gradually from 0 to 2π. The experimental results are well interpreted visually by plotting the Poynting vector flows. In addition, based on the decomposed OAM spectra, the observations can also be understood from the controllable mixture of a fundamental Gaussian beam and an OAM beam. Our work is an elegant demonstration that spiral phase modulation can add to the optical tool to effectively shape the diffraction of light and may have potential applications in the field of optical manipulations. PMID:27140766

  13. The charge conserving Poisson-Boltzmann equations: Existence, uniqueness, and maximum principle

    SciTech Connect

    Lee, Chiun-Chang

    2014-05-15

    The present article is concerned with the charge conserving Poisson-Boltzmann (CCPB) equation in high-dimensional bounded smooth domains. The CCPB equation is a Poisson-Boltzmann type of equation with nonlocal coefficients. First, under the Robin boundary condition, we get the existence of weak solutions to this equation. The main approach is variational, based on minimization of a logarithm-type energy functional. To deal with the regularity of weak solutions, we establish a maximum modulus estimate for the standard Poisson-Boltzmann (PB) equation to show that weak solutions of the CCPB equation are essentially bounded. Then the classical solutions follow from the elliptic regularity theorem. Second, a maximum principle for the CCPB equation is established. In particular, we show that in the case of global electroneutrality, the solution achieves both its maximum and minimum values at the boundary. However, in the case of global non-electroneutrality, the solution may attain its maximum value at an interior point. In addition, under certain conditions on the boundary, we show that the global non-electroneutrality implies pointwise non-electroneutrality.
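    As a simplified numerical companion to the analysis above, the sketch below solves the standard (not charge-conserving) Poisson-Boltzmann equation φ'' = sinh φ in 1-D with Dirichlet rather than Robin data, using Newton's method on a finite-difference grid. With equal positive boundary values the computed solution attains its maximum at the boundary, consistent with the maximum principle in the electroneutral case.

```python
import numpy as np

# 1-D Poisson-Boltzmann sketch: phi'' = sinh(phi) on (0, 1),
# Dirichlet data phi(0) = phi(1) = 1 (a simplification of the paper's
# charge-conserving variant with Robin conditions).
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
phi = np.ones(n)            # boundary values phi[0] = phi[-1] = 1
phi[1:-1] = 0.5             # interior initial guess

for _ in range(50):         # Newton iteration on the interior unknowns
    F = (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / h**2 - np.sinh(phi[1:-1])
    J = (np.diag(-2.0 / h**2 - np.cosh(phi[1:-1]))
         + np.diag(np.full(n - 3, 1.0 / h**2), 1)
         + np.diag(np.full(n - 3, 1.0 / h**2), -1))
    step = np.linalg.solve(J, -F)
    phi[1:-1] += step
    if np.abs(step).max() < 1e-10:
        break

# Maximum principle check: the solution peaks at the boundary and decays
# toward the interior
print(round(phi.max(), 6), round(phi[n // 2], 4))
```

    The Jacobian is strictly diagonally dominant (the cosh term only strengthens the diagonal), so each Newton system is well posed.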

  14. Newtonian limit of conformal gravity and the lack of necessity of the second order Poisson equation

    SciTech Connect

    Mannheim, P.D.; Kazanas, D.

    1994-04-01

    In this work, the authors study the interior structure of a locally conformal invariant fourth order theory of gravity in the presence of a static, spherically symmetric gravitational source. It is found, quite remarkably, that the associated dynamics is determined exactly and without any approximation at all by a simple fourth order Poisson equation which thus describes both the strong and weak field limits of the theory in this static case. The authors present the solutions of this fourth order equation and find that they are able to recover all of the standard Newton-Euler gravitational phenomenology in the weak gravity limit, to thus establish the observational viability of the weak field limit of the fourth order theory. Additionally, the authors make a critical analysis of the second order Poisson equation, and find that the currently available experimental evidence for its validity is not as clearcut and definitive as is commonly believed, with there not apparently being any conclusive observational support for it at all either on the very largest distance scales far outside of fundamental sources, or on the very smallest ones within their interiors. This study enables the deduction that even though the familiar second order Poisson gravitational equation may be sufficient to yield Newton's Law of Gravity it is not in fact necessary. 17 refs., 1 fig.

  15. Additive Factors Do Not Imply Discrete Processing Stages: A Worked Example Using Models of the Stroop Task

    PubMed Central

    Stafford, Tom; Gurney, Kevin N.

    2011-01-01

    Previously, it has been shown experimentally that the psychophysical law known as Piéron’s Law holds for color intensity and that the size of the effect is additive with that of Stroop condition (Stafford et al., 2011). According to the additive factors method (Donders, 1868–1869/1969; Sternberg, 1998), additivity is assumed to indicate independent and discrete processing stages. We present computational modeling work, using an existing Parallel Distributed Processing model of the Stroop task (Cohen et al., 1990) and a standard model of decision making (Ratcliff, 1978). This demonstrates that additive factors can be successfully accounted for by existing single stage models of the Stroop effect. Consequently, it is not valid to infer either discrete stages or separate loci of effects from additive factors. Further, our modeling work suggests that information binding may be a more important architectural property for producing additive factors than discrete stages. PMID:22102842

  16. Active Contours Using Additive Local and Global Intensity Fitting Models for Intensity Inhomogeneous Image Segmentation

    PubMed Central

    Soomro, Shafiullah; Kim, Jeong Heon; Soomro, Toufique Ahmed

    2016-01-01

    This paper introduces an improved region based active contour method with a level set formulation. The proposed energy functional integrates both local and global intensity fitting terms in an additive formulation. Local intensity fitting term influences local force to pull the contour and confine it to object boundaries. In turn, the global intensity fitting term drives the movement of contour at a distance from the object boundaries. The global intensity term is based on the global division algorithm, which can better capture intensity information of an image than Chan-Vese (CV) model. Both local and global terms are mutually assimilated to construct an energy function based on a level set formulation to segment images with intensity inhomogeneity. Experimental results show that the proposed method performs better both qualitatively and quantitatively compared to other state-of-the-art-methods. PMID:27800011

  17. Use of generalised additive models to categorise continuous variables in clinical prediction

    PubMed Central

    2013-01-01

    Background In medical practice many, essentially continuous, clinical parameters tend to be categorised by physicians for ease of decision-making. Indeed, categorisation is a common practice both in medical research and in the development of clinical prediction rules, particularly where the ensuing models are to be applied in daily clinical practice to support clinicians in the decision-making process. Since the number of categories into which a continuous predictor must be categorised depends partly on the relationship between the predictor and the outcome, the need for more than two categories must be borne in mind. Methods We propose a categorisation methodology for clinical-prediction models, using Generalised Additive Models (GAMs) with P-spline smoothers to determine the relationship between the continuous predictor and the outcome. The proposed method consists of creating at least one average-risk category along with high- and low-risk categories based on the GAM smooth function. We applied this methodology to a prospective cohort of patients with exacerbated chronic obstructive pulmonary disease. The predictors selected were respiratory rate and partial pressure of carbon dioxide in the blood (PCO2), and the response variable was poor evolution. An additive logistic regression model was used to show the relationship between the covariates and the dichotomous response variable. The proposed categorisation was compared to the continuous predictor as the best option, using the AIC and AUC evaluation parameters. The sample was divided into derivation (60%) and validation (40%) samples. The first was used to obtain the cut points while the second was used to validate the proposed methodology. Results The three-category proposal for the respiratory rate was ≤20; (20, 24]; >24, for which the following values were obtained: AIC=314.5 and AUC=0.638. The respective values for the continuous predictor were AIC=317.1 and AUC=0.634, with no statistically

  18. Generalized Concentration Addition Modeling Predicts Mixture Effects of Environmental PPARγ Agonists.

    PubMed

    Watt, James; Webster, Thomas F; Schlezinger, Jennifer J

    2016-09-01

    The vast array of potential environmental toxicant combinations necessitates the development of efficient strategies for predicting toxic effects of mixtures. Current practices emphasize the use of concentration addition to predict joint effects of endocrine disrupting chemicals in coexposures. Generalized concentration addition (GCA) is one such method for predicting joint effects of coexposures to chemicals and has the advantage of allowing for mixture components to have differences in efficacy (ie, dose-response curve maxima). Peroxisome proliferator-activated receptor gamma (PPARγ) is a nuclear receptor that plays a central role in regulating lipid homeostasis, insulin sensitivity, and bone quality and is the target of an increasing number of environmental toxicants. Here, we tested the applicability of GCA in predicting mixture effects of therapeutic (rosiglitazone and nonthiazolidinedione partial agonist) and environmental PPARγ ligands (phthalate compounds identified using EPA's ToxCast database). Transcriptional activation of human PPARγ1 by individual compounds and mixtures was assessed using a peroxisome proliferator response element-driven luciferase reporter. Using individual dose-response parameters and GCA, we generated predictions of PPARγ activation by the mixtures, and we compared these predictions with the empirical data. At high concentrations, GCA provided a better estimation of the experimental response compared with 3 alternative models: toxic equivalency factor, effect summation and independent action. These alternatives provided reasonable fits to the data at low concentrations in this system. These experiments support the implementation of GCA in mixtures analysis with endocrine disrupting compounds and establish PPARγ as an important target for further studies of chemical mixtures.
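    For Hill-slope-1 agonists E_i(x) = α_i x/(K_i + x), the GCA condition Σ_i x_i / f_i⁻¹(E) = 1 has a closed-form solution that accommodates differing maxima α_i, which is the feature distinguishing GCA from plain concentration addition. The sketch below implements that closed form with illustrative parameter values, not the paper's fitted PPARγ curves.

```python
import numpy as np

def gca_effect(doses, alphas, Ks):
    """Generalized concentration addition for Hill-slope-1 agonists.

    Each component: E_i(x) = alpha_i * x / (K_i + x), with inverse
    f_i^{-1}(E) = K_i * E / (alpha_i - E). Substituting into
    sum_i x_i / f_i^{-1}(E) = 1 and solving for E gives the closed
    form below."""
    doses, alphas, Ks = map(np.asarray, (doses, alphas, Ks))
    w = doses / Ks
    return (alphas * w).sum() / (1.0 + w.sum())

# Full agonist (alpha = 1.0) mixed with a partial agonist (alpha = 0.4)
E = gca_effect(doses=[1.0, 10.0], alphas=[1.0, 0.4], Ks=[1.0, 5.0])
print(round(E, 3))
```

    A single component at its K recovers the familiar half-maximal response, and adding a large dose of a partial agonist pulls the mixture response toward that component's lower maximum.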

  19. A spectral Poisson solver for kinetic plasma simulation

    NASA Astrophysics Data System (ADS)

    Szeremley, Daniel; Obberath, Jens; Brinkmann, Ralf

    2011-10-01

    Plasma resonance spectroscopy is a well established plasma diagnostic method, realized in several designs. One of these designs is the multipole resonance probe (MRP). In its idealized - geometrically simplified - version it consists of two dielectrically shielded, hemispherical electrodes to which an RF signal is applied. A numerical tool is under development which is capable of simulating the dynamics of the plasma surrounding the MRP in electrostatic approximation. In this contribution we concentrate on the specialized Poisson solver for that tool. The plasma is represented by an ensemble of point charges. By expanding both the charge density and the potential into spherical harmonics, a largely analytical solution of the Poisson problem can be employed. For a practical implementation, the expansion must be appropriately truncated. With this spectral solver we are able to efficiently solve the Poisson equation in a kinetic plasma simulation without the need of introducing a spatial discretization.
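    The same spectral idea can be shown in a simpler setting: for a periodic 1-D Poisson problem u'' = −ρ, expanding in Fourier modes turns the differential operator into division by k², analogous to the largely analytical inversion the spherical-harmonic solver exploits. A minimal sketch:

```python
import numpy as np

# Spectral solution of u''(x) = -rho(x) on the periodic domain [0, 2*pi):
# in Fourier space, -k^2 * u_hat = -rho_hat, so u_hat = rho_hat / k^2.
n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
rho = np.sin(3.0 * x)                  # zero-mean source

k = np.fft.fftfreq(n, d=x[1] - x[0]) * 2.0 * np.pi  # angular wavenumbers
rho_hat = np.fft.fft(rho)
u_hat = np.zeros_like(rho_hat)
nonzero = k != 0
u_hat[nonzero] = rho_hat[nonzero] / k[nonzero] ** 2  # zero mode fixed to 0

u = np.fft.ifft(u_hat).real
exact = np.sin(3.0 * x) / 9.0          # since (sin 3x)'' = -9 sin 3x
print(np.abs(u - exact).max())
```

    As in the probe solver, truncating the expansion at a finite number of modes is the only approximation; for band-limited sources the solution is exact to machine precision.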

  20. Blocked Shape Memory Effect in Negative Poisson's Ratio Polymer Metamaterials.

    PubMed

    Boba, Katarzyna; Bianchi, Matteo; McCombe, Greg; Gatt, Ruben; Griffin, Anselm C; Richardson, Robert M; Scarpa, Fabrizio; Hamerton, Ian; Grima, Joseph N

    2016-08-10

    We describe a new class of negative Poisson's ratio (NPR) open cell PU-PE foams produced by blocking the shape memory effect in the polymer. Contrary to classical NPR open cell thermoset and thermoplastic foams that return to their auxetic phase after reheating (and therefore limit their use in technological applications), this new class of cellular solids has a permanent negative Poisson's ratio behavior, generated through multiple shape memory (mSM) treatments that lead to a fixity of the topology of the cell foam. The mSM-NPR foams have Poisson's ratio values similar to the auxetic foams prior their return to the conventional phase, but compressive stress-strain curves similar to the ones of conventional foams. The results show that by manipulating the shape memory effect in polymer microstructures it is possible to obtain new classes of materials with unusual deformation mechanisms. PMID:27377708

  2. Do bacterial cell numbers follow a theoretical Poisson distribution? Comparison of experimentally obtained numbers of single cells with random number generation via computer simulation.

    PubMed

    Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso; Koseki, Shigenobu

    2016-12-01

    We investigated a bacterial sample preparation procedure for single-cell studies. In the present study, we examined whether single bacterial cells obtained via 10-fold dilution followed a theoretical Poisson distribution. Four serotypes of Salmonella enterica, three serotypes of enterohaemorrhagic Escherichia coli and one serotype of Listeria monocytogenes were used as sample bacteria. An inoculum of each serotype was prepared via a 10-fold dilution series to obtain bacterial cell counts with mean values of one or two. To determine whether the experimentally obtained bacterial cell counts follow a theoretical Poisson distribution, a likelihood ratio test between the experimentally obtained cell counts and Poisson distribution which parameter estimated by maximum likelihood estimation (MLE) was conducted. The bacterial cell counts of each serotype sufficiently followed a Poisson distribution. Furthermore, to examine the validity of the parameters of Poisson distribution from experimentally obtained bacterial cell counts, we compared these with the parameters of a Poisson distribution that were estimated using random number generation via computer simulation. The Poisson distribution parameters experimentally obtained from bacterial cell counts were within the range of the parameters estimated using a computer simulation. These results demonstrate that the bacterial cell counts of each serotype obtained via 10-fold dilution followed a Poisson distribution. The fact that the frequency of bacterial cell counts follows a Poisson distribution at low number would be applied to some single-cell studies with a few bacterial cells. In particular, the procedure presented in this study enables us to develop an inactivation model at the single-cell level that can estimate the variability of survival bacterial numbers during the bacterial death process.

  4. Modeling external carbon addition in biological nutrient removal processes with an extension of the international water association activated sludge model.

    PubMed

    Swinarski, M; Makinia, J; Stensel, H D; Czerwionka, K; Drewnowski, J

    2012-08-01

    The aim of this study was to expand the International Water Association Activated Sludge Model No. 2d (ASM2d) to account for a newly defined readily biodegradable substrate that can be consumed by polyphosphate-accumulating organisms (PAOs) under anoxic and aerobic conditions, but not under anaerobic conditions. The model change was to add a new substrate component and process terms for its use by PAOs and other heterotrophic bacteria under anoxic and aerobic conditions. The Gdansk (Poland) wastewater treatment plant (WWTP), which has a modified University of Cape Town (MUCT) process for nutrient removal, provided field data and mixed liquor for batch tests for model evaluation. The original ASM2d was first calibrated under dynamic conditions with the results of batch tests with settled wastewater and mixed liquor, in which nitrate-uptake rates, phosphorus-release rates, and anoxic phosphorus-uptake rates were followed. Model validation was conducted with data from a 96-hour measurement campaign in the full-scale WWTP. The results of similar batch tests with ethanol and fusel oil as the external carbon sources were used to adjust kinetic and stoichiometric coefficients in the expanded ASM2d. Both models were compared based on their predictions of the effect of adding supplemental carbon to the anoxic zone of an MUCT process. In comparison with the ASM2d, the new model better predicted the anoxic behaviors of carbonaceous oxygen demand, nitrate-nitrogen (NO3-N), and phosphorus (PO4-P) in batch experiments with ethanol and fusel oil. However, when simulating ethanol addition to the anoxic zone of a full-scale biological nutrient removal facility, both models predicted similar effluent NO3-N concentrations (6.6 to 6.9 g N/m3). For the particular application, effective enhanced biological phosphorus removal was predicted by both models with external carbon addition but, for the new model, the effluent PO4-P concentration was approximately one-half of that found from

  5. Evaluation of the Performance of Smoothing Functions in Generalized Additive Models for Spatial Variation in Disease

    PubMed Central

    Siangphoe, Umaporn; Wheeler, David C.

    2015-01-01

    Generalized additive models (GAMs) with bivariate smoothing functions have been applied to estimate spatial variation in risk for many types of cancers. Only a handful of studies have evaluated the performance of smoothing functions applied in GAMs with regard to different geographical areas of elevated risk and different risk levels. This study evaluates the ability of different smoothing functions to detect overall spatial variation of risk and elevated risk in diverse geographical areas at various risk levels using a simulation study. We created five scenarios with different true risk area shapes (circle, triangle, linear) in a square study region. We applied four different smoothing functions in the GAMs, including two types of thin plate regression splines (TPRS) and two versions of locally weighted scatterplot smoothing (loess). We tested the null hypothesis of constant risk and detected areas of elevated risk using analysis of deviance with permutation methods and assessed the performance of the smoothing methods based on the spatial detection rate, sensitivity, accuracy, precision, power, and false-positive rate. The results showed that all methods had a higher sensitivity and a consistently moderate-to-high accuracy rate when the true disease risk was higher. The models generally performed better in detecting elevated risk areas than detecting overall spatial variation. One of the loess methods had the highest precision in detecting overall spatial variation across scenarios and outperformed the other methods in detecting a linear elevated risk area. The TPRS methods outperformed loess in detecting elevated risk in two circular areas. PMID:25983545
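
    One of the smoothers the study evaluates, loess, can be sketched in pure Python: around each point, fit a tricube-weighted local line over a span-sized neighbourhood and keep the local intercept as the smoothed value. Production GAM software (e.g. R's `mgcv`, which implements thin plate regression splines) is far more sophisticated; the span and test curve below are arbitrary choices for illustration.

```python
import math

def loess(xs, ys, span=0.5):
    """1-D locally weighted scatterplot smoothing (loess) sketch:
    a tricube-weighted local line is fitted around every x value."""
    n = len(xs)
    k = max(2, int(span * n))                     # neighbourhood size
    fitted = []
    for x0 in xs:
        dists = sorted(abs(x - x0) for x in xs)
        h = dists[k - 1] or 1e-12                 # neighbourhood radius
        w = [(1 - min(abs(x - x0) / h, 1.0) ** 3) ** 3 for x in xs]
        # Weighted least squares for a local line a + b*(x - x0);
        # the intercept a is the smoothed value at x0.
        sw = sum(w)
        swx = sum(wi * (x - x0) for wi, x in zip(w, xs))
        swy = sum(wi * y for wi, y in zip(w, ys))
        swxx = sum(wi * (x - x0) ** 2 for wi, x in zip(w, xs))
        swxy = sum(wi * (x - x0) * y for wi, x, y in zip(w, xs, ys))
        det = sw * swxx - swx ** 2
        fitted.append((swxx * swy - swx * swxy) / det if det else swy / sw)
    return fitted

xs = [i / 19 for i in range(20)]
ys = [math.sin(2 * math.pi * x) for x in xs]      # smooth, noiseless test signal
smooth = loess(xs, ys, span=0.4)
```

    On a noiseless signal the fit tracks the data closely, with the usual loess bias near peaks and boundaries; the bivariate smoothers in the paper extend the same idea to two spatial coordinates.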

  6. Modeling and additive manufacturing of bio-inspired composites with tunable fracture mechanical properties.

    PubMed

    Dimas, Leon S; Buehler, Markus J

    2014-07-01

    Flaws, imperfections and cracks are ubiquitous in material systems and are commonly the catalysts of catastrophic material failure. As stresses and strains tend to concentrate around cracks and imperfections, structures tend to fail far before large regions of material have ever been subjected to significant loading. Therefore, a major challenge in material design is to engineer systems that perform on par with pristine structures despite the presence of imperfections. In this work we integrate knowledge of biological systems with computational modeling and state-of-the-art additive manufacturing to synthesize advanced composites with tunable fracture mechanical properties. Supported by extensive mesoscale computer simulations, we demonstrate the design and manufacturing of composites that exhibit deformation mechanisms characteristic of pristine systems, featuring flaw-tolerant properties. We analyze the results by directly comparing strain fields for the synthesized composites, obtained through digital image correlation (DIC), and the computationally tested composites. Moreover, we plot Ashby diagrams for the range of simulated and experimental composites. Our findings show good agreement between simulation and experiment, confirming that the proposed mechanisms have a significant potential for vastly improving the fracture response of composite materials. We elucidate the role of stiffness ratio variations of composite constituents as an important feature in determining the composite properties. Moreover, our work validates the predictive ability of our models, presenting them as useful tools for guiding further material design. This work enables the tailored design and manufacturing of composites assembled from inferior building blocks that obtain optimal combinations of stiffness and toughness. PMID:24700202

  7. Additive surface complexation modeling of uranium(VI) adsorption onto quartz-sand dominated sediments.

    PubMed

    Dong, Wenming; Wan, Jiamin

    2014-06-17

    Many aquifers contaminated by U(VI)-containing acidic plumes are composed predominantly of quartz-sand sediments. The F-Area of the Savannah River Site (SRS) in South Carolina (USA) is an example. To predict U(VI) mobility and natural attenuation, we conducted U(VI) adsorption experiments using the F-Area plume sediments and reference quartz, goethite, and kaolinite. The sediments are composed of ∼96% quartz-sand and 3-4% fine fractions of kaolinite and goethite. We developed a new humic acid adsorption method for determining the relative surface area abundances of goethite and kaolinite in the fine fractions. This method is expected to be applicable to many other binary mineral pairs, and allows successful application of component additivity (CA)-based surface complexation modeling (SCM) at the SRS F-Area and other similar aquifers. Our experimental results indicate that quartz has stronger U(VI) adsorption ability per unit surface area than goethite and kaolinite at pH ≤ 4.0. Our modeling results indicate that the binary (goethite/kaolinite) CA-SCM under-predicts U(VI) adsorption to the quartz-sand dominated sediments at pH ≤ 4.0. The new ternary (quartz/goethite/kaolinite) CA-SCM provides excellent predictions. The contributions of quartz-sand, kaolinite, and goethite to U(VI) adsorption and the potential influences of dissolved Al, Si, and Fe are also discussed.
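
    The component additivity idea can be illustrated with a toy calculation: each mineral contributes to the sediment's overall partitioning in proportion to the surface area it exposes. All numbers below are hypothetical placeholders, not the paper's fitted surface-complexation constants, and a linear isotherm stands in for the full SCM chemistry.

```python
# Hypothetical per-unit-area distribution coefficients (L/m2) and surface
# areas (m2/L of suspension); illustrative placeholders only.
kd_per_area = {"quartz": 0.12, "goethite": 0.08, "kaolinite": 0.05}
areas = {"quartz": 4.0, "goethite": 0.3, "kaolinite": 0.5}  # quartz dominates

# Component additivity: the sediment's distribution coefficient is the
# area-weighted sum of the single-mineral coefficients.
kd_total = sum(kd_per_area[m] * areas[m] for m in areas)

# Fraction of dissolved U(VI) adsorbed under a simple linear isotherm.
fraction_sorbed = kd_total / (1.0 + kd_total)
```

    Under this additive scheme, omitting the quartz term (as a binary goethite/kaolinite model effectively does here) removes most of the predicted adsorption, mirroring the under-prediction the authors report.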

  9. H∞ filtering for stochastic systems driven by Poisson processes

    NASA Astrophysics Data System (ADS)

    Song, Bo; Wu, Zheng-Guang; Park, Ju H.; Shi, Guodong; Zhang, Ya

    2015-01-01

    This paper investigates the H∞ filtering problem for stochastic systems driven by Poisson processes. By utilising tools from martingale theory, namely the predictable projection operator and the dual predictable projection operator, the paper transforms the expectation of a stochastic integral with respect to the Poisson process into the expectation of a Lebesgue integral. On this basis, an H∞ filter is designed such that the filtering error system is mean-square asymptotically stable and satisfies a prescribed H∞ performance level. Finally, a simulation example is given to illustrate the effectiveness of the proposed filtering scheme.
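
    As background to systems driven by Poisson processes, a short sketch of simulating a Poisson process from exponential inter-arrival times and checking the martingale property of the compensated process N(t) − λt, the property that underlies the projection arguments in such filtering work. The rate, horizon, and seed are arbitrary.

```python
import random

def poisson_jump_times(lam, t_end, rng):
    """Jump times of a rate-lam Poisson process on [0, t_end],
    built from independent exponential inter-arrival times."""
    t, jumps = 0.0, []
    while True:
        t += rng.expovariate(lam)
        if t > t_end:
            return jumps
        jumps.append(t)

rng = random.Random(7)
lam, t_end = 2.0, 10.0

# The compensated process N(t) - lam*t is a martingale, so its terminal
# value averages to roughly zero across many sample paths.
terminal = [len(poisson_jump_times(lam, t_end, rng)) - lam * t_end
            for _ in range(4000)]
mean_compensated = sum(terminal) / len(terminal)
```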

  10. Acoustic Poisson-like effect in periodic structures.

    PubMed

    Titovich, Alexey S; Norris, Andrew N

    2016-06-01

    Redirection of acoustic energy by 90° is shown to be possible in an otherwise acoustically transparent sonic crystal. An unresponsive "deaf" antisymmetric mode is excited by matching Bragg scattering with a quadrupole scatterer resonance. The dynamic effect causes normal unidirectional wave motion to strongly couple to perpendicular motion, analogous to the quasi-static Poisson effect in solids. The Poisson-like effect is demonstrated using the first flexural resonance in cylindrical shells of elastic solids. Simulations for a finite array of acrylic shells that are impedance and index matched to water show dramatic acoustic energy redirection in an otherwise acoustically transparent medium. PMID:27369161

  11. A Study of Poisson's Ratio in the Yield Region

    NASA Technical Reports Server (NTRS)

    Gerard, George; Wildhorn, Sorrel

    1952-01-01

    In the yield region of the stress-strain curve the variation in Poisson's ratio from the elastic to the plastic value is most pronounced. This variation was studied experimentally by a systematic series of tests on several aluminum alloys. The tests were conducted under simple tensile and compressive loading along three orthogonal axes. A theoretical variation of Poisson's ratio for an orthotropic solid was obtained from dilatational considerations. The assumptions used in deriving the theory were examined by use of the test data and were found to be in reasonable agreement with experimental evidence.
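
    The quantity measured in these tests is the ratio of paired strains, ν = −ε_transverse/ε_axial. A small sketch with illustrative strain readings (not the report's data) shows the drift from a typical elastic value for aluminium alloys toward the volume-conserving plastic limit of 0.5.

```python
def poisson_ratio(axial_strain, transverse_strain):
    """nu = -(transverse strain) / (axial strain)."""
    return -transverse_strain / axial_strain

# Elastic region: illustrative strains for an aluminium alloy in tension.
nu_elastic = poisson_ratio(0.0010, -0.00033)   # near the elastic value ~0.33

# Fully developed plastic flow conserves volume, so nu approaches 0.5.
nu_plastic = poisson_ratio(0.0200, -0.0100)
```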

  12. Mechanical properties of additively manufactured octagonal honeycombs.

    PubMed

    Hedayati, R; Sadighi, M; Mohammadi-Aghdam, M; Zadpoor, A A

    2016-12-01

    Honeycomb structures have found numerous applications as structural and biomedical materials due to their favourable properties such as low weight, high stiffness, and porosity. Application of additive manufacturing and 3D printing techniques allows for manufacturing of honeycombs with arbitrary shape and wall thickness, opening the way for optimizing the mechanical and physical properties for specific applications. In this study, the mechanical properties of honeycomb structures with a new geometry, called octagonal honeycomb, were investigated using analytical, numerical, and experimental approaches. An additive manufacturing technique, namely fused deposition modelling, was used to fabricate the honeycomb from polylactic acid (PLA). The honeycomb structures were then mechanically tested under compression and the mechanical properties of the structures were determined. In addition, the Euler-Bernoulli and Timoshenko beam theories were used for deriving analytical relationships for elastic modulus, yield stress, Poisson's ratio, and buckling stress of this new design of honeycomb structures. Finite element models were also created to analyse the mechanical behaviour of the honeycombs computationally. The analytical solutions obtained using Timoshenko beam theory were close to computational results in terms of elastic modulus, Poisson's ratio and yield stress, especially for relative densities smaller than 25%. The analytical solutions based on the Timoshenko analytical solution and the computational results were in good agreement with experimental observations. Finally, the elastic properties of the proposed honeycomb structure were compared to those of other honeycomb structures such as square, triangular, hexagonal, mixed, diamond, and Kagome. The octagonal honeycomb showed yield stress and elastic modulus values very close to those of regular hexagonal honeycombs and lower than the other considered honeycombs. PMID:27612831
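
    The difference between the Euler-Bernoulli and Timoshenko treatments mentioned above comes down to an extra shear term in the beam deflection, which matters for short, stubby cell walls. A sketch using the standard end-loaded cantilever formulas; the PLA-like moduli and wall dimensions are assumptions for illustration, not values from the paper.

```python
def cantilever_tip_deflection(P, L, E, I, G=None, A=None, kappa=5 / 6):
    """End-loaded cantilever: Euler-Bernoulli bending term plus, when the
    shear modulus G and section area A are given, the Timoshenko shear term."""
    delta = P * L**3 / (3 * E * I)          # bending (Euler-Bernoulli)
    if G is not None and A is not None:
        delta += P * L / (kappa * G * A)    # shear correction (Timoshenko)
    return delta

# A short honeycomb wall treated as a beam; PLA-like moduli are assumed.
P, L = 1.0, 5e-3                            # load (N) and wall length (m)
E, G = 3.5e9, 1.3e9                         # Pa, illustrative for PLA
t, b = 1e-3, 5e-3                           # wall thickness and depth (m)
I, A = b * t**3 / 12, b * t

eb = cantilever_tip_deflection(P, L, E, I)
timo = cantilever_tip_deflection(P, L, E, I, G, A)
shear_share = (timo - eb) / timo            # shear's share of the deflection
```

    For this slender-ish wall (L/t = 5) shear contributes a few percent of the deflection; as walls get stubbier, the Timoshenko correction grows, which is consistent with the two theories diverging at higher relative densities.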

  14. Collisional effects on the numerical recurrence in Vlasov-Poisson simulations

    NASA Astrophysics Data System (ADS)

    Pezzi, Oreste; Camporeale, Enrico; Valentini, Francesco

    2016-02-01

    The initial state recurrence in numerical simulations of the Vlasov-Poisson system is a well-known phenomenon. Here, we study the effect on recurrence of artificial collisions modeled through the Lenard-Bernstein operator [A. Lenard and I. B. Bernstein, Phys. Rev. 112, 1456-1459 (1958)]. By decomposing the linear Vlasov-Poisson system in the Fourier-Hermite space, the recurrence problem is investigated in the linear regime of the damping of a Langmuir wave and of the onset of the bump-on-tail instability. The analysis is then confirmed and extended to the nonlinear regime through an Eulerian collisional Vlasov-Poisson code. It is found that, despite being routinely used, an artificial collisionality is not a viable way of preventing recurrence in numerical simulations without compromising the kinetic nature of the solution. Moreover, it is shown how numerical effects associated with the generation of fine velocity scales can modify the physical features of the system evolution even in the nonlinear regime. This means that filamentation-like phenomena, usually associated with low-amplitude fluctuation contexts, can play a role even in the nonlinear regime.
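
    For a uniform velocity grid, the recurrence discussed here has the classical estimate T_R = 2π/(k Δv) (Cheng and Knorr, 1976): free-streaming phase mixing realigns on the grid after that time. A tiny helper makes the scaling explicit; the wavenumber and grid values are illustrative.

```python
import math

def recurrence_time(k, dv):
    """Free-streaming recurrence time on a uniform velocity grid:
    T_R = 2*pi / (k * dv)."""
    return 2 * math.pi / (k * dv)

# Illustrative Langmuir-wave setup: k = 0.5, 256 points covering |v| <= 6.
k, n_v = 0.5, 256
dv = 12.0 / n_v
T_R = recurrence_time(k, dv)
# Doubling the velocity resolution postpones recurrence by a factor of two,
# which is why refining the grid (rather than adding artificial collisions)
# is the clean, if costly, remedy.
```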

  15. Generalized Concentration Addition Modeling Predicts Mixture Effects of Environmental PPARγ Agonists.

    PubMed

    Watt, James; Webster, Thomas F; Schlezinger, Jennifer J

    2016-09-01

    The vast array of potential environmental toxicant combinations necessitates the development of efficient strategies for predicting toxic effects of mixtures. Current practices emphasize the use of concentration addition to predict joint effects of endocrine disrupting chemicals in coexposures. Generalized concentration addition (GCA) is one such method for predicting joint effects of coexposures to chemicals and has the advantage of allowing for mixture components to have differences in efficacy (i.e., dose-response curve maxima). Peroxisome proliferator-activated receptor gamma (PPARγ) is a nuclear receptor that plays a central role in regulating lipid homeostasis, insulin sensitivity, and bone quality and is the target of an increasing number of environmental toxicants. Here, we tested the applicability of GCA in predicting mixture effects of therapeutic (rosiglitazone and nonthiazolidinedione partial agonist) and environmental PPARγ ligands (phthalate compounds identified using EPA's ToxCast database). Transcriptional activation of human PPARγ1 by individual compounds and mixtures was assessed using a peroxisome proliferator response element-driven luciferase reporter. Using individual dose-response parameters and GCA, we generated predictions of PPARγ activation by the mixtures, and we compared these predictions with the empirical data. At high concentrations, GCA provided a better estimation of the experimental response compared with three alternative models: toxic equivalency factor, effect summation, and independent action. These alternatives provided reasonable fits to the data at low concentrations in this system. These experiments support the implementation of GCA in mixtures analysis with endocrine disrupting compounds and establish PPARγ as an important target for further studies of chemical mixtures. PMID:27255385
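
    For unit-slope Hill dose-response curves f_i(x) = α_i·x/(K_i + x), GCA admits a closed form in which partial agonists (α_i < 1) can pull the mixture response below a full agonist's plateau. This is a sketch of that special case only; the ligand efficacies and EC50s below are hypothetical, not the fitted PPARγ values from the study.

```python
def gca_response(doses, alphas, Ks):
    """Generalized concentration addition for unit-slope Hill curves
    f_i(x) = alpha_i * x / (K_i + x); the mixture response reduces to
    sum(alpha_i * x_i / K_i) / (1 + sum(x_i / K_i))."""
    num = sum(a * x / K for x, a, K in zip(doses, alphas, Ks))
    den = 1.0 + sum(x / K for x, K in zip(doses, Ks))
    return num / den

# Hypothetical full agonist (alpha = 1.0) plus partial agonist (alpha = 0.4).
alphas, Ks = [1.0, 0.4], [1e-7, 1e-6]       # efficacies and EC50s (assumed)

r_full_alone = gca_response([1e-5, 0.0], alphas, Ks)
r_mix = gca_response([1e-5, 1e-4], alphas, Ks)
# A large dose of the partial agonist pulls the mixture response below the
# full agonist's own plateau, an effect that models assuming equal maxima
# (such as plain concentration addition) cannot represent.
```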

  16. Cost-Sensitive Boosting: Fitting an Additive Asymmetric Logistic Regression Model

    NASA Astrophysics Data System (ADS)

    Li, Qiu-Jie; Mao, Yao-Bin; Wang, Zhi-Quan; Xiang, Wen-Bo

    Conventional machine learning algorithms such as boosting treat all misclassification errors equally, which is inadequate for certain cost-sensitive classification problems such as object detection. Although many cost-sensitive extensions of boosting, obtained by directly modifying the weighting strategy of the corresponding original algorithms, have been proposed, they are heuristic in nature, proved effective only by empirical results, and lack sound theoretical analysis. This paper develops a framework from a statistical insight that can embody almost all existing cost-sensitive boosting algorithms: fitting an additive asymmetric logistic regression model by stage-wise optimization of certain criteria. Four cost-sensitive versions of boosting algorithms are derived, namely CSDA, CSRA, CSGA and CSLB, which respectively correspond to Discrete AdaBoost, Real AdaBoost, Gentle AdaBoost and LogitBoost. Experimental results on face detection show the effectiveness of the proposed learning framework in reducing the cumulative misclassification cost.
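
    A compact illustration of the general idea: discrete AdaBoost over threshold stumps with example weights initialised in proportion to per-example misclassification costs, so that costly classes dominate the stage-wise fitting. This is a generic heuristic sketch, not the paper's CSDA/CSRA/CSGA/CSLB derivations; the toy data and cost values are arbitrary.

```python
import math

def stump_error(xs, ys, w, t, s):
    """Weighted error of the stump h(x) = s if x >= t else -s."""
    return sum(wi for xi, yi, wi in zip(xs, ys, w)
               if (s if xi >= t else -s) != yi)

def cost_adaboost(xs, ys, costs, rounds=5):
    """Discrete AdaBoost with cost-proportional initial weights."""
    total = sum(costs)
    w = [c / total for c in costs]              # costly examples weigh more
    ensemble = []
    for _ in range(rounds):
        # Pick the stump with minimal weighted error.
        err, t, s = min((stump_error(xs, ys, w, t, s), t, s)
                        for t in sorted(set(xs)) for s in (1, -1))
        err = min(max(err, 1e-10), 1 - 1e-10)   # guard the log
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, s))
        # Exponential reweighting: misclassified examples gain weight.
        w = [wi * math.exp(-alpha * yi * (s if xi >= t else -s))
             for wi, xi, yi in zip(w, xs, ys)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * (s if x >= t else -s) for a, t, s in ensemble)
    return 1 if score >= 0 else -1

xs = [0.1, 0.2, 0.3, 0.6, 0.7, 0.9]
ys = [-1, -1, -1, 1, 1, 1]
costs = [1, 1, 1, 5, 5, 5]                      # missing a positive costs 5x more
model = cost_adaboost(xs, ys, costs)
```

    The paper's contribution is to replace this kind of ad-hoc reweighting with stage-wise fitting of an additive asymmetric logistic regression model, which yields the cost-sensitive updates from a single statistical criterion.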

  17. Influence of the heterogeneous reaction HCL + HOCl on an ozone hole model with hydrocarbon additions

    SciTech Connect

    Elliott, S.; Cicerone, R.J.; Turco, R.P.

    1994-02-20

    Injection of ethane or propane has been suggested as a means for reducing ozone loss within the Antarctic vortex because alkanes can convert active chlorine radicals into hydrochloric acid. In kinetic models of vortex chemistry including as heterogeneous processes only the hydrolysis and HCl reactions of ClONO{sub 2} and N{sub 2}O{sub 5}, parts per billion by volume levels of the light alkanes counteract ozone depletion by sequestering chlorine atoms. Introduction of the surface reaction of HCl with HOCl causes ethane to deepen baseline ozone holes and generally works to impede any mitigation by hydrocarbons. The increased depletion occurs because HCl + HOCl can be driven by HO{sub x} radicals released during organic oxidation. Following initial hydrogen abstraction by chlorine, alkane breakdown leads to a net hydrochloric acid activation as the remaining hydrogen atoms enter the photochemical system. Lowering the rate constant for reactions of organic peroxy radicals with ClO to 10{sup {minus}13} cm{sup 3} molecule{sup {minus}1} s{sup {minus}1} does not alter results, and the major conclusions are insensitive to the timing of the ethane additions. Ignoring the organic peroxy radical plus ClO reactions entirely restores remediation capabilities by allowing HO{sub x} removal independent of HCl. Remediation also returns if early evaporation of polar stratospheric clouds leaves hydrogen atoms trapped in aldehyde intermediates, but real ozone losses are small in such cases. 95 refs., 4 figs., 7 tabs.

  18. In vivo characterization of two additional Leishmania donovani strains using the murine and hamster model.

    PubMed

    Kauffmann, F; Dumetz, F; Hendrickx, S; Muraille, E; Dujardin, J-C; Maes, L; Magez, S; De Trez, C

    2016-05-01

    Leishmania donovani is a protozoan parasite causing the neglected tropical disease visceral leishmaniasis. One difficulty in studying the immunopathology of L. donovani infection is the limited adaptability of the strains to experimental mammalian hosts. Our knowledge about L. donovani infections relies on a restricted number of East African strains (LV9, 1S). Isolated from patients in the 1960s, these strains were described extensively in mice and Syrian hamsters and have consequently become 'reference' laboratory strains. L. donovani strains from the Indian subcontinent display distinct clinical features compared to East African strains, and only a few reports describe their in vivo immunopathology. This study comprises a comprehensive immunopathological characterization upon infection with two additional strains, the Ethiopian L. donovani L82 strain and the Nepalese L. donovani BPK282 strain, in both Syrian hamsters and C57BL/6 mice. Parameters that include parasitaemia levels, weight loss, hepatosplenomegaly and alterations in cellular composition of the spleen and liver showed that the L82 strain generated an overall more virulent infection compared to the BPK282 strain. Altogether, both L. donovani strains are suitable and interesting for subsequent in vivo investigation of visceral leishmaniasis in the Syrian hamster and the C57BL/6 mouse model. PMID:27012562

  19. Enhancement of colour stability of anthocyanins in model beverages by gum arabic addition.

    PubMed

    Chung, Cheryl; Rojanasasithara, Thananunt; Mutilangi, William; McClements, David Julian

    2016-06-15

    This study investigated the potential of gum arabic to improve the stability of anthocyanins that are used in commercial beverages as natural colourants. The degradation of purple carrot anthocyanin in model beverage systems (pH 3.0) containing L-ascorbic acid proceeded with a first-order reaction rate during storage (40 °C for 5 days in light). The addition of gum arabic (0.05-5.0%) significantly enhanced the colour stability of anthocyanin, with the most stable systems observed at intermediate levels (1.5%). A further increase in concentration (>1.5%) reduced its efficacy due to a change in the conformation of the gum arabic molecules that hindered their exposure to the anthocyanins. Fluorescence quenching measurements showed that the anthocyanin could have interacted with the glycoprotein fractions of the gum arabic through hydrogen bonding, resulting in enhanced stability. Overall, this study provides valuable information about enhancing the stability of anthocyanins in beverage systems using natural ingredients.
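
    First-order degradation, as reported for the anthocyanin here, means the retained colour fraction is C/C0 = exp(−kt). A sketch with hypothetical rate constants (the study's fitted values are not reproduced here) shows how a lower rate constant with gum arabic translates into more colour retained over the 5-day storage window.

```python
import math

def first_order_retention(k_per_day, days):
    """Fraction of anthocyanin colour remaining: C/C0 = exp(-k * t)."""
    return math.exp(-k_per_day * days)

# Hypothetical first-order rate constants (1/day) with and without 1.5%
# gum arabic; illustrative values only.
k_control, k_gum = 0.30, 0.18
retained_control = first_order_retention(k_control, 5)   # 5 days at 40 C
retained_gum = first_order_retention(k_gum, 5)
```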

  20. Subsonic Flow for the Multidimensional Euler-Poisson System

    NASA Astrophysics Data System (ADS)

    Bae, Myoungjean; Duan, Ben; Xie, Chunjing

    2016-04-01

    We establish the existence and stability of subsonic potential flow for the steady Euler-Poisson system in a multidimensional nozzle of a finite length when prescribing the electric potential difference on a non-insulated boundary from a fixed point at the exit, and prescribing the pressure at the exit of the nozzle. The Euler-Poisson system for subsonic potential flow can be reduced to a nonlinear elliptic system of second order. In this paper, we develop a technique to achieve a priori C^{1,α} estimates of solutions to a quasi-linear second order elliptic system with mixed boundary conditions in a multidimensional domain enclosed by a Lipschitz continuous boundary. In particular, we discovered a special structure of the Euler-Poisson system which enables us to obtain C^{1,α} estimates of the velocity potential and the electric potential functions, and this leads us to establish structural stability of subsonic flows for the Euler-Poisson system under perturbations of various data.

  1. Poisson processes on groups and Feynman path integrals

    NASA Astrophysics Data System (ADS)

    Combe, Ph.; Høegh-Krohn, R.; Rodriguez, R.; Sirugue, M.; Sirugue-Collin, M.

    1980-10-01

    We give an expression for the perturbed evolution of a free evolution by a gentle, possibly velocity-dependent, potential, in terms of the expectation with respect to a Poisson process on a group. Various applications are given, in particular to usual quantum mechanics but also to Fermi and spin systems.

  2. Vectorized multigrid Poisson solver for the CDC CYBER 205

    NASA Technical Reports Server (NTRS)

    Barkai, D.; Brandt, M. A.

    1984-01-01

    The full multigrid (FMG) method is applied to the two-dimensional Poisson equation with Dirichlet boundary conditions. This has been chosen as a relatively simple test case for examining the efficiency of fully vectorizing the multigrid method. Data structure and programming considerations and techniques are discussed, accompanied by performance details.
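
    A full multigrid cycle is built from simple relaxation sweeps applied on a hierarchy of grids. As a self-contained illustration of the smoother component only (not the FMG algorithm or its CYBER 205 vectorization), here is plain Jacobi relaxation for the 2-D Poisson problem with homogeneous Dirichlet boundary conditions.

```python
def jacobi_poisson(n, f, sweeps):
    """Plain Jacobi sweeps for -(u_xx + u_yy) = f on the unit square with
    u = 0 on the boundary, discretised on a uniform (n+1) x (n+1) grid.
    Jacobi-type relaxation is the kind of smoother a multigrid cycle
    applies on every grid level."""
    h = 1.0 / n
    u = [[0.0] * (n + 1) for _ in range(n + 1)]
    for _ in range(sweeps):
        new = [row[:] for row in u]           # Jacobi: update from old values
        for i in range(1, n):
            for j in range(1, n):
                new[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j]
                                    + u[i][j - 1] + u[i][j + 1]
                                    + h * h * f(i * h, j * h))
        u = new
    return u

# Constant source f = 1: the solution peaks near 0.0737 at the centre.
u = jacobi_poisson(16, lambda x, y: 1.0, sweeps=500)
center = u[8][8]
```

    Used alone, Jacobi needs hundreds of sweeps even on this 16 x 16 grid because it damps smooth error modes slowly; multigrid's point is to shift those smooth modes to coarser grids where they relax cheaply.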

  3. Some applications of the fractional Poisson probability distribution

    SciTech Connect

    Laskin, Nick

    2009-11-15

    Physical and mathematical applications of the recently invented fractional Poisson probability distribution have been presented. As a physical application, a new family of quantum coherent states has been introduced and studied. As mathematical applications, we have developed the fractional generalization of Bell polynomials, Bell numbers, and Stirling numbers of the second kind. The appearance of fractional Bell polynomials is natural if one evaluates the diagonal matrix element of the evolution operator in the basis of newly introduced quantum coherent states. Fractional Stirling numbers of the second kind have been introduced and applied to evaluate the skewness and kurtosis of the fractional Poisson probability distribution function. A representation of the Bernoulli numbers in terms of fractional Stirling numbers of the second kind has been found. In the limit case when the fractional Poisson probability distribution becomes the Poisson probability distribution, all of the above listed developments and implementations turn into the well-known results of the quantum optics and the theory of combinatorial numbers.
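
    The classical objects that the fractional versions above generalize are easy to compute: Stirling numbers of the second kind via their standard recurrence, and Bell numbers as the row sums of the Stirling triangle.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling numbers of the second kind via the recurrence
    S(n, k) = k * S(n-1, k) + S(n-1, k-1)."""
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def bell(n):
    """Bell numbers as row sums of the Stirling triangle: B_n = sum_k S(n, k)."""
    return sum(stirling2(n, k) for k in range(n + 1))

bells = [bell(n) for n in range(6)]   # 1, 1, 2, 5, 15, 52
```

    In the integer-order limit described in the abstract, the fractional Bell and Stirling numbers reduce to exactly these values.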

  4. The Poisson Distribution: An Experimental Approach to Teaching Statistics

    ERIC Educational Resources Information Center

    Lafleur, Mimi S.; And Others

    1972-01-01

    Explains an experimental approach to teaching statistics to students who are essentially either non-science and non-mathematics majors or just beginning the study of science. With everyday examples, the article illustrates the method of teaching the Poisson Distribution. (PS)

  5. On covariant Poisson brackets in classical field theory

    SciTech Connect

    Forger, Michael; Salles, Mário O.

    2015-10-15

    How to give a natural geometric definition of a covariant Poisson bracket in classical field theory has long been an open problem, as testified by the extensive literature on “multisymplectic Poisson brackets,” together with the fact that all these proposals suffer from serious defects. On the other hand, the functional approach does provide a good candidate which has come to be known as the Peierls–De Witt bracket and whose construction in a geometrical setting is now well understood. Here, we show how the basic “multisymplectic Poisson bracket” already proposed in the 1970s can be derived from the Peierls–De Witt bracket, applied to a special class of functionals. This relation allows one to trace back most (if not all) of the problems encountered in the past to ambiguities (the relation between differential forms on multiphase space and the functionals they define is not one-to-one) and to the fact that this class of functionals does not form a Poisson subalgebra.

  6. Void-containing materials with tailored Poisson's ratio

    NASA Astrophysics Data System (ADS)

    Goussev, Olga A.; Richner, Peter; Rozman, Michael G.; Gusev, Andrei A.

    2000-10-01

    Assuming square, hexagonal, and random packed arrays of nonoverlapping identical parallel cylindrical voids dispersed in an aluminum matrix, we have calculated numerically the concentration dependence of the transverse Poisson's ratios. It was shown that the transverse Poisson's ratio of the hexagonal and random packed arrays approached 1 upon increasing the concentration of voids while the ratio of the square packed array along the principal continuation directions approached 0. Experimental measurements were carried out on rectangular aluminum bricks with identical cylindrical holes drilled in square and hexagonal packed arrays. Experimental results were in good agreement with numerical predictions. We then demonstrated, based on the numerical and experimental results, that by varying the spatial arrangement of the holes and their volume fraction, one can design and manufacture voided materials with a tailored Poisson's ratio between 0 and 1. In practice, those with a high Poisson's ratio, i.e., close to 1, can be used to amplify the lateral responses of the structures while those with a low one, i.e., close to 0, can largely attenuate the lateral responses and can therefore be used in situations where stringent lateral stability is needed.

  7. Decomposition reactions as general Poisson processes: Theory and an experimental example

    NASA Astrophysics Data System (ADS)

    Rydén, Tobias; Wernersson, Mikael

    1995-10-01

    The classical theory of decomposition reaction kinetics depends on a "large scale" assumption. In this paper we show how this assumption can be replaced by the assumption that the nucleation process is a space-time Poisson process. This framework is unifying in the sense that it includes many earlier formulas as special cases, and it naturally takes boundary effects into account. We consider the conversion of a sphere in detail, and fit the parameters of this model to experimental data on gypsum decomposition. The model thus obtained shows, for this particular reaction, that the boundary effects decrease with temperature.
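
A drastically simplified version of the nucleation picture is easy to check numerically. The sketch below (our own one-dimensional "site saturation" toy, not the paper's sphere model; all parameter values are arbitrary) places nuclei as a spatial Poisson process at t = 0 and grows them at speed v, so a point is still untransformed at time t exactly when no nucleus lies within v·t of it, giving an untransformed fraction of exp(-2·λ·v·t):

```python
import numpy as np

rng = np.random.default_rng(1)
L, lam, v, t = 10_000.0, 0.5, 1.0, 1.0     # domain length, nucleus density, growth speed, time
n_nuclei = rng.poisson(lam * L)
nuclei = np.sort(rng.uniform(0.0, L, n_nuclei))

# Distance from each probe point to its nearest nucleus.
probes = rng.uniform(0.0, L, 100_000)
idx = np.searchsorted(nuclei, probes)
left = np.where(idx > 0, probes - nuclei[np.maximum(idx - 1, 0)], np.inf)
right = np.where(idx < n_nuclei, nuclei[np.minimum(idx, n_nuclei - 1)] - probes, np.inf)
nearest = np.minimum(left, right)

untransformed = np.mean(nearest > v * t)   # simulated untransformed fraction
theory = np.exp(-2.0 * lam * v * t)        # Poisson-process prediction
```

The paper's treatment handles three dimensions, time-distributed nucleation, and the boundary of the sphere, but the underlying mechanism is the same Poisson "empty region" probability used here.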

  8. Wavelet-based Poisson solver for use in particle-in-cell simulations.

    PubMed

    Terzić, Balsa; Pogorelov, Ilya V

    2005-06-01

    We report on a successful implementation of a wavelet-based Poisson solver for use in three-dimensional particle-in-cell simulations. Our method harnesses advantages afforded by the wavelet formulation, such as sparsity of operators and data sets, the existence of effective preconditioners, and the ability to simultaneously remove numerical noise and further compress relevant data sets. We present and discuss preliminary results relating to the application of the new solver to test problems in accelerator physics and astrophysics. PMID:15980304
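
The sparsity the abstract mentions is a generic wavelet property and can be demonstrated with one level of a plain Haar transform (this is our own illustration, not the paper's basis or solver; the signal is an arbitrary smooth, potential-like profile): almost all of the energy lands in the coarse coefficients, so the detail coefficients can be thresholded away.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 1024)
signal = np.exp(-((x - 0.5) ** 2) / 0.02)   # smooth, potential-like profile

# One level of the (orthonormal) Haar transform: pairwise averages and differences.
coarse = (signal[0::2] + signal[1::2]) / np.sqrt(2.0)
detail = (signal[0::2] - signal[1::2]) / np.sqrt(2.0)

# Fraction of the signal energy carried by the detail (fine-scale) coefficients.
energy_detail = np.sum(detail**2) / np.sum(signal**2)
```

For a smooth field the detail energy is negligible, which is what makes wavelet representations of Poisson solutions compressible and their operators sparse.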

  9. Exact momentum conservation laws for the gyrokinetic Vlasov-Poisson equations

    SciTech Connect

    Brizard, Alain J.; Tronko, Natalia

    2011-08-15

    The exact momentum conservation laws for the nonlinear gyrokinetic Vlasov-Poisson equations are derived by applying the Noether method on the gyrokinetic variational principle [A. J. Brizard, Phys. Plasmas 7, 4816 (2000)]. From the gyrokinetic Noether canonical-momentum equation derived by the Noether method, the gyrokinetic parallel momentum equation and other gyrokinetic Vlasov-moment equations are obtained. In addition, an exact gyrokinetic toroidal angular-momentum conservation law is derived in axisymmetric tokamak geometry, where the transport of parallel-toroidal momentum is related to the radial gyrocenter polarization, which includes contributions from the guiding-center and gyrocenter transformations.

  10. Determination of the Poisson's ratio of the cell: recovery properties of chondrocytes after release from complete micropipette aspiration.

    PubMed

    Trickey, Wendy R; Baaijens, Frank P T; Laursen, Tod A; Alexopoulos, Leonidas G; Guilak, Farshid

    2006-01-01

    Chondrocytes in articular cartilage are regularly subjected to compression and recovery due to dynamic loading of the joint. Previous studies have investigated the elastic and viscoelastic properties of chondrocytes using micropipette aspiration techniques, but in order to calculate cell properties, these studies have generally assumed that cells are incompressible with a Poisson's ratio of 0.5. The goal of this study was to measure the Poisson's ratio and recovery properties of the chondrocyte by combining theoretical modeling with experimental measures of complete cellular aspiration and release from a micropipette. Chondrocytes isolated from non-osteoarthritic and osteoarthritic cartilage were fully aspirated into a micropipette and allowed to reach mechanical equilibrium. Cells were then extruded from the micropipette and cell volume and morphology were measured throughout the experiment. This experimental procedure was simulated with finite element analysis, modeling the chondrocyte as either a compressible two-mode viscoelastic solid, or as a biphasic viscoelastic material. By fitting the experimental data to the theoretically predicted cell response, the Poisson's ratio and the viscoelastic recovery properties of the cell were determined. The Poisson's ratio of chondrocytes from non-osteoarthritic cartilage was found to be 0.38, versus 0.36 for osteoarthritic chondrocytes (no significant difference). Osteoarthritic chondrocytes showed an increased recovery time following full aspiration. In contrast to previous assumptions, these findings suggest that chondrocytes are compressible, consistent with previous studies showing cell volume changes with compression of the extracellular matrix.

  11. Coagulation kinetics beyond mean field theory using an optimised Poisson representation

    SciTech Connect

    Burnett, James; Ford, Ian J.

    2015-05-21

    Binary particle coagulation can be modelled as the repeated random process of the combination of two particles to form a third. The kinetics may be represented by population rate equations based on a mean field assumption, according to which the rate of aggregation is taken to be proportional to the product of the mean populations of the two participants, but this can be a poor approximation when the mean populations are small. However, using the Poisson representation, it is possible to derive a set of rate equations that go beyond mean field theory, describing pseudo-populations that are continuous, noisy, and complex, but where averaging over the noise and initial conditions gives the mean of the physical population. Such an approach is explored for the simple case of a size-independent rate of coagulation between particles. Analytical results are compared with numerical computations and with results derived by other means. In the numerical work, we encounter instabilities that can be eliminated using a suitable “gauge” transformation of the problem [P. D. Drummond, Eur. Phys. J. B 38, 617 (2004)] which we show to be equivalent to the application of the Cameron-Martin-Girsanov formula describing a shift in a probability measure. The cost of such a procedure is to introduce additional statistical noise into the numerical results, but we identify an optimised gauge transformation where this difficulty is minimal for the main properties of interest. For more complicated systems, such an approach is likely to be computationally cheaper than Monte Carlo simulation.
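
The mean-field baseline against which the paper's Poisson-representation corrections are measured is easy to reproduce. For a size-independent coagulation rate K, the total cluster count obeys dN/dt = -K N²/2, with analytic solution N(t) = N₀ / (1 + K N₀ t / 2). The sketch below (our own check; K, N₀, and the forward-Euler scheme are arbitrary choices) integrates the rate equation and compares with that solution:

```python
# Mean-field population equation for size-independent coagulation:
# dN/dt = -0.5 * K * N**2, integrated by forward Euler.
K, N0, dt, steps = 1.0, 100.0, 1e-4, 20_000
N = N0
for _ in range(steps):
    N += dt * (-0.5 * K * N * N)

t = dt * steps                              # final time t = 2
N_exact = N0 / (1.0 + 0.5 * K * N0 * t)     # analytic mean-field solution
```

The Poisson-representation approach of the paper matters precisely where this description fails, i.e., when the populations are small and fluctuations around the mean are no longer negligible.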

  12. Second-order Poisson-Nernst-Planck solver for ion transport

    NASA Astrophysics Data System (ADS)

    Zheng, Qiong; Chen, Duan; Wei, Guo-Wei

    2011-06-01

    The Poisson-Nernst-Planck (PNP) theory is a simplified continuum model for a wide variety of chemical, physical, and biological applications. Its ability to provide quantitative explanations and predictions of experimental measurements has earned it much recognition in the research community. Numerous computational algorithms have been constructed for the solution of the PNP equations. However, in the realistic ion-channel context, no second-order convergent PNP algorithm has ever been reported in the literature, due to many numerical obstacles, including discontinuous coefficients, singular charges, geometric singularities, and nonlinear couplings. The present work introduces a number of numerical algorithms to overcome the above-mentioned numerical challenges and constructs the first second-order convergent PNP solver in the ion-channel context. First, a Dirichlet-to-Neumann mapping (DNM) algorithm is designed to alleviate the charge singularity due to the protein structure. Additionally, the matched interface and boundary (MIB) method is reformulated for solving the PNP equations. The MIB method systematically enforces the interface jump conditions and achieves second-order accuracy in the presence of complex geometry and geometric singularities of molecular surfaces. Moreover, two iterative schemes are utilized to deal with the coupled nonlinear equations. Furthermore, extensive and rigorous numerical validations are carried out over a number of geometries, including a sphere, two proteins, and an ion channel, to examine the numerical accuracy and convergence order of the present numerical algorithms. Finally, the method is applied to a real transmembrane protein, the Gramicidin A channel protein. The performance of the proposed numerical techniques is tested against a number of factors, including mesh sizes, diffusion coefficient profiles, iterative schemes, ion concentrations, and applied voltages.

  13. Two-sample discrimination of Poisson means

    NASA Technical Reports Server (NTRS)

    Lampton, M.

    1994-01-01

    This paper presents a statistical test for detecting significant differences between two random count accumulations. The null hypothesis is that the two samples share a common random arrival process with a mean count proportional to each sample's exposure. The model represents the partition of N total events into two counts, A and B, as a sequence of N independent Bernoulli trials whose partition fraction, f, is determined by the ratio of the exposures of A and B. The detection of a significant difference is claimed when the background (null) hypothesis is rejected, which occurs when the observed sample falls in a critical region of (A, B) space. The critical region depends on f and the desired significance level, alpha. The model correctly takes into account the fluctuations in both the signals and the background data, including the important case of small numbers of counts in the signal, the background, or both. The significance can be exactly determined from the cumulative binomial distribution, which in turn can be inverted to determine the critical A(B) or B(A) contour. This paper gives efficient implementations of these tests, based on lookup tables. Applications include the detection of clustering of astronomical objects, the detection of faint emission or absorption lines in photon-limited spectroscopy, the detection of faint emitters or absorbers in photon-limited imaging, and dosimetry.
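
The test described above conditions on the total count: given N = A + B, the count A is binomial with success probability f fixed by the exposure ratio. A small sketch under that description follows (the function name and the minimum-likelihood two-sided convention are our choices; the paper's lookup-table implementation is not reproduced here):

```python
from math import comb

def poisson_two_sample_pvalue(a, b, exposure_a=1.0, exposure_b=1.0):
    """Conditional test that two Poisson counts share a common rate.

    Under the null, given n = a + b, count a is Binomial(n, f) with
    f = exposure_a / (exposure_a + exposure_b). The two-sided p-value
    sums the probabilities of all outcomes no more likely than the
    observed one (minimum-likelihood convention).
    """
    n = a + b
    f = exposure_a / (exposure_a + exposure_b)
    pmf = [comb(n, k) * f**k * (1.0 - f) ** (n - k) for k in range(n + 1)]
    p_obs = pmf[a]
    # Small slack absorbs floating-point ties at symmetric outcomes.
    return min(1.0, sum(p for p in pmf if p <= p_obs * (1.0 + 1e-12)))

p_same = poisson_two_sample_pvalue(5, 7)    # similar counts, equal exposures
p_diff = poisson_two_sample_pvalue(2, 40)   # strongly discrepant counts
```

Because the binomial null distribution is exact, the procedure remains valid for very small counts in either the signal or the background, which is the regime the paper emphasizes.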

  14. 78 FR 32224 - Availability of Version 3.1.2 of the Connect America Fund Phase II Cost Model; Additional...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-29

    ...; Additional Discussion Topics in Connect America Cost Model Virtual Workshop AGENCY: Federal Communications... issues in the ongoing virtual workshop. DATES: Comments are due on or before June 18, 2013. If you... comments. Virtual Workshop: In addition to the usual methods for filing electronic comments, the...

  15. A Legendre-Fourier spectral method with exact conservation laws for the Vlasov-Poisson system

    NASA Astrophysics Data System (ADS)

    Manzini, G.; Delzanno, G. L.; Vencels, J.; Markidis, S.

    2016-07-01

    We present the design and implementation of an L2-stable spectral method for the discretization of the Vlasov-Poisson model of a collisionless plasma in one space and velocity dimension. The velocity and space dependence of the Vlasov equation are resolved through a truncated spectral expansion based on Legendre and Fourier basis functions, respectively. The Poisson equation, which is coupled to the Vlasov equation, is also resolved through a Fourier expansion. The resulting system of ordinary differential equations is discretized by the implicit second-order accurate Crank-Nicolson time discretization. The non-linear dependence between the Vlasov and Poisson equations is iteratively solved at any time cycle by a Jacobian-Free Newton-Krylov method. In this work we analyze the structure of the main conservation laws of the resulting Legendre-Fourier model, e.g., mass, momentum, and energy, and prove that they are exactly satisfied in the semi-discrete and discrete setting. The L2-stability of the method is ensured by discretizing the boundary conditions of the distribution function at the boundaries of the velocity domain by a suitable penalty term. The impact of the penalty term on the conservation properties is investigated theoretically and numerically. An implementation of the penalty term that does not affect the conservation of mass, momentum, and energy is also proposed and studied. A collisional term is introduced in the discrete model to control the filamentation effect, but does not affect the conservation properties of the system. Numerical results on a set of standard test problems illustrate the performance of the method.
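
The Fourier treatment of the Poisson equation mentioned above amounts to dividing by k² in spectral space. A minimal one-dimensional sketch (our own, with an arbitrary test charge on a 2π-periodic domain; the k = 0 mode is pinned to zero, i.e., charge neutrality is assumed):

```python
import numpy as np

# Solve d^2(phi)/dx^2 = -rho on a 2*pi-periodic grid via FFT.
n, L = 64, 2.0 * np.pi
x = np.arange(n) * (L / n)
rho = np.cos(x)                              # test charge density

k = np.fft.fftfreq(n, d=1.0 / n)             # integer wavenumbers 0, 1, ..., -1
rho_k = np.fft.fft(rho)
phi_k = np.zeros_like(rho_k)
nonzero = k != 0
phi_k[nonzero] = rho_k[nonzero] / k[nonzero] ** 2   # -k^2 phi_k = -rho_k
phi = np.fft.ifft(phi_k).real                # recovers cos(x) to machine precision

err = np.abs(phi - np.cos(x)).max()
```

In the paper this field solve is coupled back into the Vlasov equation, with the Jacobian-Free Newton-Krylov iteration resolving the nonlinearity; here it is isolated to show that the spectral solve itself is exact for resolved modes.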

  16. Comment on "Models of intermediate spectral statistics"

    SciTech Connect

    Gorin, T.; Muller, M.; Seba, P.

    2001-06-01

    In this Comment we point out that the semi-Poisson is well suited only as a reference point for the so-called "intermediate statistics," which cannot be interpreted as a universal ensemble, like the Gaussian orthogonal ensemble or the Poissonian statistics. In Ref. 2 it was proposed that the nearest-neighbor distribution P(s) of the spectrum of a Poissonian distributed matrix perturbed by a rank-one matrix is similar to the semi-Poisson distribution. We show, however, that the P(s) of this model differs considerably in many aspects from the semi-Poisson. In addition, we give an asymptotic formula for P(s) as s → 0, which gives P'(0) = π√3/2 for the slope at s = 0. This is different not only from the GOE case, but also from the semi-Poisson prediction.
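
For reference, the semi-Poisson spacing distribution P(s) = 4s·exp(-2s) can be sampled by a standard construction not taken from the Comment itself: keep every second point of a unit-rate Poisson process, so each retained gap is the sum of two unit exponentials, rescaled to unit mean. The sketch below checks its mean, variance, and the linear level repulsion at small s that distinguishes it from pure Poisson statistics.

```python
import numpy as np

rng = np.random.default_rng(2)
# Sum consecutive pairs of unit exponentials, then divide by 2 for unit mean:
# the resulting gaps follow the semi-Poisson density 4*s*exp(-2*s).
gaps = rng.exponential(size=2_000_000).reshape(-1, 2).sum(axis=1) / 2.0

mean_s = gaps.mean()                  # 1 by construction
var_s = gaps.var()                    # semi-Poisson variance is 1/2
frac_small = np.mean(gaps < 0.05)     # level repulsion: far below the Poisson
                                      # value 1 - exp(-0.05) ≈ 0.049
```

The Comment's point is that the perturbed-matrix model of Ref. 2 only resembles this reference curve; its exact small-s slope π√3/2 differs from both the semi-Poisson and GOE values.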

  17. Adaptation of the pore diffusion model to describe multi-addition batch uptake high-throughput screening experiments.

    PubMed

    Traylor, Steven J; Xu, Xuankuo; Li, Yi; Jin, Mi; Li, Zheng Jian

    2014-11-14

    Equilibrium isotherm and kinetic mass transfer measurements are critical to mechanistic modeling of binding and elution behavior within a chromatographic column. However, traditional methods of measuring these parameters are impractically time- and labor-intensive. While advances in high-throughput robotic liquid handling systems have created time and labor-saving methods of performing kinetic and equilibrium measurements of proteins on chromatographic resins in a 96-well plate format, these techniques continue to be limited by physical constraints on protein addition, incubation and separation times; the available concentration of protein stocks and process pools; and practical constraints on resin and fluid volumes in the 96-well format. In this study, a novel technique for measuring protein uptake kinetics (multi-addition batch uptake) has been developed to address some of these limitations during high-throughput batch uptake kinetic measurements. This technique uses sequential additions of protein stock to chromatographic resin in a 96-well plate and the subsequent removal of each addition by centrifugation or vacuum separation. The pore diffusion model was adapted here to model multi-addition batch uptake and was tested and compared with traditional batch uptake measurements of uptake of an Fc-fusion protein on an anion exchange resin. Acceptable agreement between the two techniques is achieved for the two solution conditions investigated here. In addition, a sensitivity analysis of the model to the physical inputs is presented and the advantages and limitations of the multi-addition batch uptake technique are explored.

  18. Self-regulating genes. Exact steady state solution by using Poisson representation

    NASA Astrophysics Data System (ADS)

    Sugár, István P.; Simon, István

    2014-09-01

    Systems biology studies the structure and behavior of complex gene regulatory networks. One of its aims is to develop a quantitative understanding of the modular components that constitute such networks. The self-regulating gene is a type of autoregulatory genetic module that appears in over 40% of known transcription factors in E. coli. In this work, using the technique of Poisson representation, we are able to provide exact steady-state solutions for this feedback model. By using the methods of synthetic biology (P.E.M. Purnick and R. Weiss, Nature Reviews Molecular Cell Biology, 2009, 10: 410-422), one can build the system itself from modules like this.

  19. Effect of non-Poisson samples on turbulence spectra from laser velocimetry

    NASA Technical Reports Server (NTRS)

    Sree, Dave; Kjelgaard, Scott O.; Sellers, William L., III

    1994-01-01

    Spectral analysis of laser velocimetry (LV) data plays an important role in characterizing a turbulent flow and in estimating the associated turbulence scales, which can be helpful in validating theoretical and numerical turbulence models. The determination of turbulence scales is critically dependent on the accuracy of the spectral estimates. Spectral estimations from 'individual realization' laser velocimetry data are typically based on the assumption of a Poisson sampling process. What this Note has demonstrated is that the sampling distribution must be considered before spectral estimates are used to infer turbulence scales.

  20. Superposition of many independent spike trains is generally not a Poisson process

    NASA Astrophysics Data System (ADS)

    Lindner, Benjamin

    2006-02-01

    We study the sum of many independent spike trains and ask whether the resulting spike train has Poisson statistics or not. It is shown that even for a non-Poissonian statistics of the single spike train, the resulting superposition has an exponential interspike-interval (ISI) distribution and vanishing ISI correlations at any finite lag, yet exhibits exactly the same power spectrum as the original spike train. This paradox is resolved by considering what happens to the ISI correlations in the limit of an infinite number of superposed trains. Implications of our findings for stochastic models in the neurosciences are briefly discussed.
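
The first half of this claim is easy to check numerically. The sketch below (our own illustration; the number of trains, the Erlang-4 ISI model, and the edge-trimming window are arbitrary choices) pools many independent non-Poissonian renewal trains and verifies that the merged ISI statistics look exponential, with a coefficient of variation near 1, even though each single train has CV = 1/2:

```python
import numpy as np

rng = np.random.default_rng(3)
n_trains, n_spikes, shape = 200, 2_000, 4

# Each train: renewal process with Erlang-4 ISIs of unit mean (CV = 1/2).
trains = [np.cumsum(rng.gamma(shape, 1.0 / shape, size=n_spikes))
          for _ in range(n_trains)]
pooled = np.sort(np.concatenate(trains))
pooled = pooled[(pooled > 100.0) & (pooled < n_spikes - 100.0)]  # avoid edge effects
isi = np.diff(pooled)

single_cv = 1.0 / np.sqrt(shape)             # 0.5 for each individual train
pooled_cv = isi.std() / isi.mean()           # close to 1, the Poisson value
```

The non-Poissonian character the paper identifies hides in the power spectrum and in weak long-range ISI correlations, which the first-order statistics computed here cannot detect.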