Sample records for Poisson generalized linear

  1. Application of the Hyper-Poisson Generalized Linear Model for Analyzing Motor Vehicle Crashes.

    PubMed

    Khazraee, S Hadi; Sáez-Castillo, Antonio Jose; Geedipally, Srinivas Reddy; Lord, Dominique

    2015-05-01

    The hyper-Poisson distribution can handle both over- and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation-specific and dependent on model covariates. This study's objective is to examine the potential applicability of a newly proposed generalized linear model framework for the hyper-Poisson distribution in analyzing motor vehicle crash count data. The hyper-Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway-highway crossings in Korea, characterized by underdispersion. The results of this study are promising. When fitted to the Toronto data set, the goodness-of-fit measures indicated that the hyper-Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper-Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway-Maxwell-Poisson model previously developed for the same data set. The advantages of the hyper-Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulties in handling underdispersed data, the hyper-Poisson model can handle both over- and underdispersed crash data. Although not a major issue for the Conway-Maxwell-Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the case of this new model. © 2014 Society for Risk Analysis.
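
    A minimal illustration of the distribution itself (not the paper's GLM, in which the dispersion is further linked to covariates): assuming SciPy is available, the hyper-Poisson pmf P(Y = y) = lambda^y / [(gamma)_y * 1F1(1; gamma; lambda)] can be evaluated directly, and its variance-to-mean ratio shows the over-/underdispersion behaviour described above.

    ```python
    # Minimal sketch of the hyper-Poisson pmf (assumes SciPy; illustrative only,
    # not the fitting procedure used in the paper).
    import numpy as np
    from scipy.special import hyp1f1, poch

    def hyper_poisson_pmf(y, lam, gamma):
        """P(Y=y) = lam**y / (poch(gamma, y) * 1F1(1; gamma; lam))."""
        norm = hyp1f1(1.0, gamma, lam)            # sum_k lam**k / (gamma)_k
        return lam ** np.asarray(y, float) / (poch(gamma, y) * norm)

    def dispersion_ratio(lam, gamma, ymax=100):
        y = np.arange(ymax)
        p = hyper_poisson_pmf(y, lam, gamma)
        mean = np.sum(y * p)
        var = np.sum((y - mean) ** 2 * p)
        return var / mean

    print(dispersion_ratio(5.0, 1.0))   # gamma = 1 recovers the Poisson: ratio ~ 1
    print(dispersion_ratio(5.0, 3.0))   # gamma > 1: overdispersion (ratio > 1)
    print(dispersion_ratio(5.0, 0.5))   # gamma < 1: underdispersion (ratio < 1)
    ```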

  2. Accuracy assessment of the linear Poisson-Boltzmann equation and reparametrization of the OBC generalized Born model for nucleic acids and nucleic acid-protein complexes.

    PubMed

    Fogolari, Federico; Corazza, Alessandra; Esposito, Gennaro

    2015-04-05

    The generalized Born model in the Onufriev, Bashford, and Case (Onufriev et al., Proteins: Struct Funct Genet 2004, 55, 383) implementation has emerged as one of the best compromises between accuracy and speed of computation. For simulations of nucleic acids, however, a number of issues should be addressed: (1) the generalized Born model is based on a linear model and the linearization of the reference Poisson-Boltzmann equation may be questioned for highly charged systems such as nucleic acids; (2) although much attention has been given to potentials, solvation forces could be much less sensitive to linearization than the potentials; and (3) the accuracy of the Onufriev-Bashford-Case (OBC) model for nucleic acids depends on fine tuning of parameters. Here, we show that the linearization of the Poisson-Boltzmann equation has mild effects on computed forces, and that with an optimal choice of the OBC model parameters, solvation forces, essential for molecular dynamics simulations, agree well with those computed using the reference Poisson-Boltzmann model. © 2015 Wiley Periodicals, Inc.
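
    For orientation, the pairwise generalized Born energy that the OBC parametrization feeds into is the standard Still-type expression dG_pol = -1/2 (1/eps_in - 1/eps_out) sum_ij q_i q_j / f_GB(r_ij), with f_GB = sqrt(r_ij^2 + R_i R_j exp(-r_ij^2 / (4 R_i R_j))). The sketch below evaluates it with the effective Born radii taken as given inputs; it is an illustration only, not the authors' code or the OBC radius recipe itself.

    ```python
    # Minimal generalized Born (GB) polar solvation energy sketch (illustrative;
    # effective Born radii are assumed precomputed, e.g. by the OBC recipe).
    import numpy as np

    def gb_energy(coords, charges, born_radii, eps_in=1.0, eps_out=78.5):
        """Still-type pairwise GB energy in units of (charge^2 / length)."""
        coords = np.asarray(coords, float)
        q = np.asarray(charges, float)
        R = np.asarray(born_radii, float)
        diff = coords[:, None, :] - coords[None, :, :]
        r2 = np.sum(diff * diff, axis=-1)                  # squared distances
        RiRj = R[:, None] * R[None, :]
        f_gb = np.sqrt(r2 + RiRj * np.exp(-r2 / (4.0 * RiRj)))
        energy_matrix = q[:, None] * q[None, :] / f_gb     # includes i == j (self) terms
        prefactor = -0.5 * (1.0 / eps_in - 1.0 / eps_out)
        return prefactor * energy_matrix.sum()

    # Toy example: two opposite unit charges 3 length-units apart, Born radii of 1.5.
    print(gb_energy([[0, 0, 0], [3, 0, 0]], [1.0, -1.0], [1.5, 1.5]))
    ```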

  3. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    PubMed

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
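
    The data "explosion" referred to above is the usual piecewise-exponential construction: follow-up is split at the cut points of the baseline hazard, and the per-interval event indicator is modelled with a Poisson likelihood and a log(time-at-risk) offset. A minimal Python sketch of that construction and the corresponding fixed-effects Poisson fit follows (illustrative only; the paper's %PCFrailty macro is SAS, and the log-normal frailty itself would additionally require a mixed-model/random-effects tool).

    ```python
    # Piecewise-exponential ("exploded") data set and Poisson GLM fit sketch.
    # Illustrative only; assumes numpy/pandas/statsmodels. Random effects omitted.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    def explode(times, events, covariate, cuts):
        """Split each subject's follow-up at the cut points of the baseline hazard."""
        rows = []
        for t, d, x in zip(times, events, covariate):
            for j in range(len(cuts) - 1):
                start, stop = cuts[j], cuts[j + 1]
                if t <= start:
                    break
                exposure = min(t, stop) - start            # time at risk in this piece
                event = int(d == 1 and start < t <= stop)  # event falls in this piece?
                rows.append({"piece": j, "x": x, "exposure": exposure, "event": event})
        return pd.DataFrame(rows)

    rng = np.random.default_rng(1)
    n = 500
    x = rng.binomial(1, 0.5, n)                       # a binary covariate
    times = rng.exponential(1.0 / np.exp(0.5 * x))    # true log-hazard ratio = 0.5
    events = np.ones(n, dtype=int)                    # no censoring, for simplicity
    cuts = [0.0, 0.5, 1.0, 2.0, np.inf]               # 4 pieces for the baseline hazard

    df = explode(times, events, x, cuts)
    X = pd.get_dummies(df["piece"], prefix="piece", dtype=float)  # piecewise baseline
    X["x"] = df["x"]
    fit = sm.GLM(df["event"], X, family=sm.families.Poisson(),
                 offset=np.log(df["exposure"])).fit()
    print(fit.params["x"])   # should be close to the true log-hazard ratio 0.5
    ```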

  4. A generalized Poisson and Poisson-Boltzmann solver for electrostatic environments.

    PubMed

    Fisicaro, G; Genovese, L; Andreussi, O; Marzari, N; Goedecker, S

    2016-01-07

    The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, taking into account the non-trivial electrostatic screening coming from the solvent and the electrolytes. As a consequence, the electrostatic potential has to be found by solving the generalized Poisson and the Poisson-Boltzmann equations for neutral and ionic solutions, respectively. In the present work, solvers for both problems have been developed. A preconditioned conjugate gradient method has been implemented for the solution of the generalized Poisson equation and the linear regime of the Poisson-Boltzmann equation, allowing the minimization problem to be solved iteratively within about ten iterations of the ordinary Poisson equation solver. In addition, a self-consistent procedure enables us to solve the non-linear Poisson-Boltzmann problem. Both solvers exhibit very high accuracy and parallel efficiency and allow for the treatment of periodic, free, and slab boundary conditions. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes.
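
    As a generic illustration of the preconditioned conjugate gradient idea (not the BigDFT/Quantum-ESPRESSO implementation), the sketch below solves a small finite-difference Poisson problem -∇²u = ρ on a 2D grid with zero Dirichlet boundaries, using SciPy's conjugate gradient with a Jacobi preconditioner.

    ```python
    # Preconditioned conjugate gradient solution of a discretized Poisson equation.
    # Minimal finite-difference sketch (zero Dirichlet boundaries), not the BigDFT solver.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import cg, LinearOperator

    n, h = 64, 1.0 / 65                       # interior grid points and spacing
    main = 2.0 * np.ones(n)
    T = sp.diags([-np.ones(n - 1), main, -np.ones(n - 1)], [-1, 0, 1]) / h**2
    I = sp.identity(n)
    A = sp.kron(I, T) + sp.kron(T, I)         # 2D negative Laplacian (-∇²), SPD

    x = np.linspace(h, 1 - h, n)
    X, Y = np.meshgrid(x, x, indexing="ij")
    rho = np.sin(np.pi * X) * np.sin(np.pi * Y)           # source term
    b = rho.ravel()

    d = A.diagonal()
    M = LinearOperator(A.shape, matvec=lambda v: v / d)   # Jacobi preconditioner

    u, info = cg(A, b, M=M)
    exact = rho.ravel() / (2.0 * np.pi**2)                # continuum solution of -∇²u = ρ
    print(info, np.max(np.abs(u - exact)))                # info == 0 means converged
    ```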

  5. A generalized Poisson and Poisson-Boltzmann solver for electrostatic environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fisicaro, G., E-mail: giuseppe.fisicaro@unibas.ch; Goedecker, S.; Genovese, L.

    2016-01-07

    The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, taking into account the non-trivial electrostatic screening coming from the solvent and the electrolytes. As a consequence, the electrostatic potential has to be found by solving the generalized Poisson and the Poisson-Boltzmann equations for neutral and ionic solutions, respectively. In the present work, solvers for both problems have been developed. A preconditioned conjugate gradient method has been implemented for the solution of the generalized Poisson equation and the linear regime of the Poisson-Boltzmann equation, allowing the minimization problem to be solved iteratively within about ten iterations of the ordinary Poisson equation solver. In addition, a self-consistent procedure enables us to solve the non-linear Poisson-Boltzmann problem. Both solvers exhibit very high accuracy and parallel efficiency and allow for the treatment of periodic, free, and slab boundary conditions. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes.

  6. Characterizing the performance of the Conway-Maxwell Poisson generalized linear model.

    PubMed

    Francis, Royce A; Geedipally, Srinivas Reddy; Guikema, Seth D; Dhavala, Soma Sekhar; Lord, Dominique; LaRocca, Sarah

    2012-01-01

    Count data are pervasive in many areas of risk analysis; deaths, adverse health outcomes, infrastructure system failures, and traffic accidents are all recorded as count events, for example. Risk analysts often wish to estimate the probability distribution for the number of discrete events as part of doing a risk assessment. Traditional count data regression models of the type often used in risk assessment for this problem suffer from limitations due to the assumed variance structure. A more flexible model based on the Conway-Maxwell Poisson (COM-Poisson) distribution was recently proposed, a model that has the potential to overcome the limitations of the traditional model. However, the statistical performance of this new model has not yet been fully characterized. This article assesses the performance of a maximum likelihood estimation method for fitting the COM-Poisson generalized linear model (GLM). The objectives of this article are to (1) characterize the parameter estimation accuracy of the MLE implementation of the COM-Poisson GLM, and (2) estimate the prediction accuracy of the COM-Poisson GLM using simulated data sets. The results of the study indicate that the COM-Poisson GLM is flexible enough to model under-, equi-, and overdispersed data sets with different sample mean values. The results also show that the COM-Poisson GLM yields accurate parameter estimates. The COM-Poisson GLM provides a promising and flexible approach for performing count data regression. © 2011 Society for Risk Analysis.
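
    For reference, the COM-Poisson pmf is P(Y = y) = λ^y / [(y!)^ν Z(λ, ν)], with Z obtained by (truncated) summation; ν = 1 recovers the Poisson, ν > 1 gives underdispersion and ν < 1 overdispersion. The sketch below (illustrative, not the authors' implementation) fits λ and ν to an i.i.d. sample by maximum likelihood; the GLM version additionally links λ to covariates.

    ```python
    # COM-Poisson maximum likelihood sketch for an i.i.d. sample (illustrative only;
    # a GLM version would further link lambda to covariates).
    import numpy as np
    from scipy.special import gammaln, logsumexp
    from scipy.optimize import minimize

    def com_poisson_loglik(y, log_lam, log_nu, jmax=200):
        lam, nu = np.exp(log_lam), np.exp(log_nu)
        j = np.arange(jmax)
        log_terms = j * np.log(lam) - nu * gammaln(j + 1)
        log_Z = logsumexp(log_terms)                  # normalizing constant Z(lam, nu)
        y = np.asarray(y)
        return np.sum(y * np.log(lam) - nu * gammaln(y + 1) - log_Z)

    rng = np.random.default_rng(0)
    y = rng.poisson(4.0, size=2000)                   # equidispersed test data

    res = minimize(lambda p: -com_poisson_loglik(y, p[0], p[1]),
                   x0=np.array([np.log(4.0), 0.0]), method="Nelder-Mead")
    lam_hat, nu_hat = np.exp(res.x)
    print(lam_hat, nu_hat)   # nu_hat should be close to 1 for Poisson-generated data
    ```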

  7. Application of the Conway-Maxwell-Poisson generalized linear model for analyzing motor vehicle crashes.

    PubMed

    Lord, Dominique; Guikema, Seth D; Geedipally, Srinivas Reddy

    2008-05-01

    This paper documents the application of the Conway-Maxwell-Poisson (COM-Poisson) generalized linear model (GLM) for modeling motor vehicle crashes. The COM-Poisson distribution, originally developed in 1962, has recently been re-introduced by statisticians for analyzing count data subjected to over- and under-dispersion. This innovative distribution is an extension of the Poisson distribution. The objectives of this study were to evaluate the application of the COM-Poisson GLM for analyzing motor vehicle crashes and compare the results with the traditional negative binomial (NB) model. The comparison analysis was carried out using the most common functional forms employed by transportation safety analysts, which link crashes to the entering flows at intersections or on segments. To accomplish the objectives of the study, several NB and COM-Poisson GLMs were developed and compared using two datasets. The first dataset contained crash data collected at signalized four-legged intersections in Toronto, Ont. The second dataset included data collected for rural four-lane divided and undivided highways in Texas. Several methods were used to assess the statistical fit and predictive performance of the models. The results of this study show that COM-Poisson GLMs perform as well as NB models in terms of goodness-of-fit (GOF) statistics and predictive performance. Given that the COM-Poisson distribution can also handle under-dispersed data (while the NB distribution cannot or has difficulties converging), which have sometimes been observed in crash databases, the COM-Poisson GLM offers a better alternative to the NB model for modeling motor vehicle crashes, especially given the important limitations recently documented in the safety literature about the latter type of model.

  8. Evaluating the double Poisson generalized linear model.

    PubMed

    Zou, Yaotian; Geedipally, Srinivas Reddy; Lord, Dominique

    2013-10-01

    The objectives of this study are to: (1) examine the applicability of the double Poisson (DP) generalized linear model (GLM) for analyzing motor vehicle crash data characterized by over- and under-dispersion and (2) compare the performance of the DP GLM with the Conway-Maxwell-Poisson (COM-Poisson) GLM in terms of goodness-of-fit and theoretical soundness. The DP distribution has seldom been investigated and applied since its first introduction two decades ago. The hurdle for applying the DP is related to its normalizing constant (or multiplicative constant) which is not available in closed form. This study proposed a new method to approximate the normalizing constant of the DP with high accuracy and reliability. The DP GLM and COM-Poisson GLM were developed using two observed over-dispersed datasets and one observed under-dispersed dataset. The modeling results indicate that the DP GLM with its normalizing constant approximated by the new method can handle crash data characterized by over- and under-dispersion. Its performance is comparable to the COM-Poisson GLM in terms of goodness-of-fit (GOF), although COM-Poisson GLM provides a slightly better fit. For the over-dispersed data, the DP GLM performs similar to the NB GLM. Considering the fact that the DP GLM can be easily estimated with inexpensive computation and that it is simpler to interpret coefficients, it offers a flexible and efficient alternative for researchers to model count data. Copyright © 2013 Elsevier Ltd. All rights reserved.
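
    To make the normalizing-constant issue concrete, the sketch below evaluates Efron's double Poisson kernel f(y; μ, θ) ∝ θ^(1/2) e^(-θμ) (e^(-y) y^y / y!) (eμ/y)^(θy) and normalizes it by a brute-force truncated sum. This is only an illustration of the problem the paper addresses; it is not the approximation method proposed by the authors.

    ```python
    # Double Poisson pmf with a brute-force (truncated-sum) normalizing constant.
    # Illustrative only; the paper proposes a more refined approximation.
    import numpy as np
    from scipy.special import gammaln

    def dp_log_kernel(y, mu, theta):
        """Unnormalized log density of Efron's double Poisson distribution."""
        y = np.asarray(y, float)
        safe_y = np.maximum(y, 1.0)                       # guard log(0); y = 0 terms vanish
        term = np.where(y > 0, theta * y * (np.log(mu) + 1.0 - np.log(safe_y)), 0.0)
        return (0.5 * np.log(theta) - theta * mu
                + (-y + y * np.log(safe_y) - gammaln(y + 1)) + term)

    def dp_pmf(y, mu, theta, ymax=500):
        grid = np.arange(ymax)
        log_c = -np.log(np.sum(np.exp(dp_log_kernel(grid, mu, theta))))  # normalizer
        return np.exp(log_c + dp_log_kernel(y, mu, theta))

    y = np.arange(30)
    for theta in (0.5, 1.0, 2.0):     # theta < 1: overdispersed, theta > 1: underdispersed
        p = dp_pmf(y, mu=5.0, theta=theta)
        mean = np.sum(y * p)
        var = np.sum((y - mean) ** 2 * p)
        print(theta, round(mean, 3), round(var / mean, 3))
    ```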

  9. A comparison of methods for the analysis of binomial clustered outcomes in behavioral research.

    PubMed

    Ferrari, Alberto; Comelli, Mario

    2016-12-01

    In behavioral research, data consisting of a per-subject proportion of "successes" and "failures" over a finite number of trials often arise. These clustered binary data are usually non-normally distributed, which can distort inference if the usual general linear model is applied and the sample size is small. A number of more advanced methods are available, but they are often technically challenging and a comparative assessment of their performances in behavioral setups has not been performed. We studied the performances of some methods applicable to the analysis of proportions; namely linear regression, Poisson regression, beta-binomial regression and Generalized Linear Mixed Models (GLMMs). We report on a simulation study evaluating the power and Type I error rate of these models in hypothetical scenarios met by behavioral researchers; in addition, we describe results from the application of these methods to data from real experiments. Our results show that, while GLMMs are powerful instruments for the analysis of clustered binary outcomes, beta-binomial regression can outperform them in a range of scenarios. Linear regression gave results consistent with the nominal level of significance, but was overall less powerful. Poisson regression, instead, mostly led to anticonservative inference. GLMMs and beta-binomial regression are generally more powerful than linear regression; yet linear regression is robust to model misspecification in some conditions, whereas Poisson regression suffers heavily from violations of the assumptions when used to model proportion data. We conclude by providing directions to behavioral scientists dealing with clustered binary data and small sample sizes. Copyright © 2016 Elsevier B.V. All rights reserved.
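
    Of the compared methods, beta-binomial regression hinges on the beta-binomial likelihood, log P(K = k | n, α, β) = log C(n, k) + log B(k + α, n - k + β) - log B(α, β). The sketch below (an intercept-only illustration, not the paper's models) fits the two shape parameters to clustered success counts by maximum likelihood.

    ```python
    # Beta-binomial log-likelihood sketch (intercept-only; a regression version would
    # link the mean alpha/(alpha+beta) to covariates). Illustrative only.
    import numpy as np
    from scipy.special import betaln, gammaln
    from scipy.optimize import minimize

    def betabinom_logpmf(k, n, a, b):
        """log P(K = k | n, a, b) for the beta-binomial distribution."""
        binom = gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
        return binom + betaln(k + a, n - k + b) - betaln(a, b)

    rng = np.random.default_rng(42)
    n_trials = 20
    p_subject = rng.beta(2.0, 6.0, size=300)           # subject-level success probabilities
    k = rng.binomial(n_trials, p_subject)               # clustered (overdispersed) counts

    def nll(params):
        a, b = np.exp(params)                            # keep shape parameters positive
        return -np.sum(betabinom_logpmf(k, n_trials, a, b))

    res = minimize(nll, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
    print(np.exp(res.x))    # should recover roughly (2, 6)
    ```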

  10. Structural interactions in ionic liquids linked to higher-order Poisson-Boltzmann equations

    NASA Astrophysics Data System (ADS)

    Blossey, R.; Maggs, A. C.; Podgornik, R.

    2017-06-01

    We present a derivation of generalized Poisson-Boltzmann equations starting from classical theories of binary fluid mixtures, employing an approach based on the Legendre transform as recently applied to the case of local descriptions of the fluid free energy. Under specific symmetry assumptions, and in the linearized regime, the Poisson-Boltzmann equation reduces to a phenomenological equation introduced by Bazant et al. [Phys. Rev. Lett. 106, 046102 (2011)], 10.1103/PhysRevLett.106.046102, whereby the structuring near the surface is determined by bulk coefficients.

  11. Poisson sigma models, reduction and nonlinear gauge theories

    NASA Astrophysics Data System (ADS)

    Signori, Daniele

    This dissertation comprises two main lines of research. Firstly, we study non-linear gauge theories for principal bundles, where the structure group is replaced by a Lie groupoid. We follow the approach of Moerdijk-Mrcun and establish its relation with the existing physics literature. In particular, we derive a new formula for the gauge transformation which closely resembles and generalizes the classical formulas found in Yang-Mills gauge theories. Secondly, we give a field theoretic interpretation of the BRST (Becchi-Rouet-Stora-Tyutin) and BFV (Batalin-Fradkin-Vilkovisky) methods for the reduction of coisotropic submanifolds of Poisson manifolds. The generalized Poisson sigma models that we define are related to the deformation quantization problems of coisotropic submanifolds using homotopical algebras.

  12. Poisson Mixture Regression Models for Heart Disease Prediction.

    PubMed

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary generalized linear Poisson regression model, due to its lower Bayesian Information Criterion value. Furthermore, a Zero-Inflated Poisson mixture regression model turned out to be the best model for heart disease prediction overall, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be done effectively by identifying the major risks componentwise using Poisson mixture regression models.
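
    To make the mixture idea concrete, the sketch below runs a plain EM algorithm for a two-component Poisson mixture without covariates; the concomitant-variable mixture regressions of the paper additionally link the component means and mixing weights to predictors, which is not reproduced here.

    ```python
    # EM for a two-component Poisson mixture (no covariates). Illustrative sketch only;
    # the paper's models are mixture *regressions* with concomitant variables.
    import numpy as np
    from scipy.stats import poisson

    rng = np.random.default_rng(7)
    z = rng.binomial(1, 0.3, size=1000)                # latent "high-risk" indicator
    y = np.where(z == 1, rng.poisson(9.0, 1000), rng.poisson(2.0, 1000))

    w, lam = 0.5, np.array([1.0, 5.0])                 # initial guesses
    for _ in range(200):
        # E-step: responsibilities of component 1 (the "high" component)
        p1 = w * poisson.pmf(y, lam[1])
        p0 = (1 - w) * poisson.pmf(y, lam[0])
        r = p1 / (p0 + p1)
        # M-step: update mixing weight and component means
        w = r.mean()
        lam = np.array([np.sum((1 - r) * y) / np.sum(1 - r),
                        np.sum(r * y) / np.sum(r)])

    print(round(w, 3), lam.round(3))   # roughly 0.3 and component means near (2, 9)
    ```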

  13. Poisson Mixture Regression Models for Heart Disease Prediction

    PubMed Central

    Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary generalized linear Poisson regression model, due to its lower Bayesian Information Criterion value. Furthermore, a Zero-Inflated Poisson mixture regression model turned out to be the best model for heart disease prediction overall, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be done effectively by identifying the major risks componentwise using Poisson mixture regression models. PMID:27999611

  14. Quantization with maximally degenerate Poisson brackets: the harmonic oscillator!

    NASA Astrophysics Data System (ADS)

    Nutku, Yavuz

    2003-07-01

    Nambu's construction of multi-linear brackets for super-integrable systems can be thought of as degenerate Poisson brackets with a maximal set of Casimirs in their kernel. By introducing privileged coordinates in phase space these degenerate Poisson brackets are brought to the form of Heisenberg's equations. We propose a definition for constructing quantum operators for classical functions, which enables us to turn the maximally degenerate Poisson brackets into operators. They pose a set of eigenvalue problems for a new state vector. The requirement of the single-valuedness of this eigenfunction leads to quantization. The example of the harmonic oscillator is used to illustrate this general procedure for quantizing a class of maximally super-integrable systems.

  15. Oscillatory Reduction in Option Pricing Formula Using Shifted Poisson and Linear Approximation

    NASA Astrophysics Data System (ADS)

    Nur Rachmawati, Ro'fah; Irene; Budiharto, Widodo

    2014-03-01

    An option is a derivative instrument that can help investors improve their expected return and minimize risk. However, the Black-Scholes formula generally used to determine the price of an option does not involve a skewness factor, and it is difficult to apply in the computing process because it produces oscillation for skewness values close to zero. In this paper, we construct an option pricing formula that involves skewness by modifying the Black-Scholes formula using a Shifted Poisson model and transforming it into the form of a Linear Approximation in the complete market to reduce the oscillation. The results show that the Linear Approximation formula can predict the price of an option very accurately and successfully reduces the oscillations in the calculation process.
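
    For context, the baseline being modified is the standard Black-Scholes call price C = S·N(d1) - K·e^(-rT)·N(d2), with d1 = [ln(S/K) + (r + σ²/2)T]/(σ√T) and d2 = d1 - σ√T. The sketch below computes only this baseline; the shifted-Poisson skewness correction and the Linear Approximation from the paper are not reproduced.

    ```python
    # Standard (unmodified) Black-Scholes European call price, for reference only;
    # the paper's shifted-Poisson / linear-approximation adjustment is not included.
    import numpy as np
    from scipy.stats import norm

    def black_scholes_call(S, K, T, r, sigma):
        d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
        d2 = d1 - sigma * np.sqrt(T)
        return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

    print(black_scholes_call(S=100.0, K=100.0, T=1.0, r=0.05, sigma=0.2))  # ~10.45
    ```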

  16. Complex wet-environments in electronic-structure calculations

    NASA Astrophysics Data System (ADS)

    Fisicaro, Giuseppe; Genovese, Luigi; Andreussi, Oliviero; Marzari, Nicola; Goedecker, Stefan

    The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, including the complex electrostatic screening coming from the solvent. In the present work we present a solver that handles both the Generalized Poisson and the Poisson-Boltzmann equations. A preconditioned conjugate gradient (PCG) method has been implemented for the Generalized Poisson equation and the linear regime of the Poisson-Boltzmann equation, allowing the minimization problem to be solved iteratively within about ten iterations. On the other hand, a self-consistent procedure enables us to solve the non-linear Poisson-Boltzmann problem. The algorithms take advantage of a preconditioning procedure based on the BigDFT Poisson solver for the standard Poisson equation. They exhibit very high accuracy and parallel efficiency, and allow different boundary conditions, including surfaces. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes. We present test calculations for large proteins to demonstrate efficiency and performance. This work was done within the PASC and NCCR MARVEL projects. Computer resources were provided by the Swiss National Supercomputing Centre (CSCS) under Project ID s499. LG also acknowledges support from the EXTMOS EU project.

  17. Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laurence, T; Chromy, B

    2009-11-10

    Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the use of the Levenberg-Marquardt algorithm commonly used for nonlinear least squares minimization for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the non-linear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Non-linear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice - which requires a large number of events. It has been well-known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper providing extensive characterization of these biases in exponential fitting is given. The more appropriate measure based on the maximum likelihood estimator (MLE) for the Poisson distribution is also well known, but has not become generally used. This is primarily because, in contrast to non-linear least squares fitting, there has been no quick, robust, and general fitting method. In the field of fluorescence lifetime spectroscopy and imaging, there have been some efforts to use this estimator through minimization routines such as Nelder-Mead optimization, exhaustive line searches, and Gauss-Newton minimization. Minimization based on specific one- or multi-exponential models has been used to obtain quick results, but this procedure does not allow the incorporation of the instrument response, and is not generally applicable to models found in other fields. Methods for using the MLE for Poisson-distributed data have been published by the wider spectroscopic community, including iterative minimization schemes based on Gauss-Newton minimization. The slow acceptance of these procedures for fitting event counting histograms may also be explained by the use of the ubiquitous, fast Levenberg-Marquardt (L-M) fitting procedure for fitting non-linear models using least squares fitting (simple searches obtain ~10,000 references - this doesn't include those who use it, but don't know they are using it). The benefits of L-M include a seamless transition between Gauss-Newton minimization and downward gradient minimization through the use of a regularization parameter. This transition is desirable because Gauss-Newton methods converge quickly, but only within a limited domain of convergence; on the other hand the downward gradient methods have a much wider domain of convergence, but converge extremely slowly nearer the minimum. L-M has the advantages of both procedures: relative insensitivity to initial parameters and rapid convergence. Scientists, when wanting an answer quickly, will fit data using L-M, get an answer, and move on. Only those that are aware of the bias issues will bother to fit using the more appropriate MLE for Poisson deviates. However, since there is a simple, analytical formula for the appropriate MLE measure for Poisson deviates, it is inexcusable that least squares estimators are used almost exclusively when fitting event counting histograms. There have been ways found to use successive non-linear least squares fitting to obtain similarly unbiased results, but this procedure is justified by simulation, must be re-tested when conditions change significantly, and requires two successive fits. There is a great need for a fitting routine for the MLE estimator for Poisson deviates that has convergence domains and rates comparable to the non-linear least squares L-M fitting. We show in this report that a simple way to achieve that goal is to use the L-M fitting procedure not to minimize the least squares measure, but the MLE for Poisson deviates.
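
    One simple way to reuse Levenberg-Marquardt machinery for the Poisson MLE measure, in the spirit of this report (though not necessarily its exact algorithm), is to hand an L-M-type least-squares driver the square roots of the per-bin deviance contributions 2[m_i - d_i + d_i ln(d_i/m_i)], so that the sum of squared "residuals" equals the Poisson deviance being minimized. A hedged sketch using SciPy's MINPACK-based 'lm' method for a single-exponential histogram fit follows.

    ```python
    # Fitting a histogram with the Poisson-MLE (deviance) measure using a
    # Levenberg-Marquardt-type least-squares driver. Sketch only, not the report's code.
    import numpy as np
    from scipy.optimize import least_squares

    def model(t, params):
        A, tau, bkg = params
        return A * np.exp(-t / tau) + bkg            # single-exponential decay + background

    def deviance_residuals(params, t, counts):
        m = np.maximum(model(t, params), 1e-12)      # model must stay positive
        with np.errstate(divide="ignore", invalid="ignore"):
            log_term = np.where(counts > 0, counts * np.log(counts / m), 0.0)
        dev = 2.0 * (m - counts + log_term)          # per-bin Poisson deviance
        return np.sqrt(np.maximum(dev, 0.0))         # sum of squares = total deviance

    rng = np.random.default_rng(3)
    t = np.arange(256) * 0.05
    true = (500.0, 2.0, 5.0)
    counts = rng.poisson(model(t, true))             # Poisson-distributed bin counts

    fit = least_squares(deviance_residuals, x0=[400.0, 1.5, 3.0],
                        args=(t, counts), method="lm")
    print(fit.x)   # should be close to (500, 2, 5)
    ```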

  18. Modified Regression Correlation Coefficient for Poisson Regression Model

    NASA Astrophysics Data System (ADS)

    Kaengthong, Nattacha; Domthong, Uthumporn

    2017-09-01

    This study gives attention to indicators of the predictive power of the Generalized Linear Model (GLM), which are widely used but often have some restrictions. We are interested in the regression correlation coefficient for a Poisson regression model. This is a measure of predictive power, defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables [E(Y|X)] for the Poisson regression model. The dependent variable is distributed as Poisson. The purpose of this research was to modify the regression correlation coefficient for the Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables and multicollinearity among the independent variables. The results show that the proposed regression correlation coefficient is better than the traditional regression correlation coefficient in terms of bias and the Root Mean Square Error (RMSE).

  19. State Estimation for Linear Systems Driven Simultaneously by Wiener and Poisson Processes.

    DTIC Science & Technology

    1978-12-01

    The state estimation problem of linear stochastic systems driven simultaneously by Wiener and Poisson processes is considered, especially the case...where the incident intensities of the Poisson processes are low and the system is observed in an additive white Gaussian noise. The minimum mean squared…

  20. Nonlocal and nonlinear electrostatics of a dipolar Coulomb fluid.

    PubMed

    Buyukdagli, Sahin; Blossey, Ralf

    2014-07-16

    We study a model Coulomb fluid consisting of dipolar solvent molecules of finite extent which generalizes the point-like dipolar Poisson-Boltzmann model (DPB) previously introduced by Coalson and Duncan (1996 J. Phys. Chem. 100 2612) and Abrashkin et al (2007 Phys. Rev. Lett. 99 077801). We formulate a nonlocal Poisson-Boltzmann equation (NLPB) and study both linear and nonlinear dielectric response in this model for the case of a single plane geometry. Our results shed light on the relevance of nonlocal versus nonlinear effects in continuum models of material electrostatics.

  1. Image denoising in mixed Poisson-Gaussian noise.

    PubMed

    Luisier, Florian; Blu, Thierry; Unser, Michael

    2011-03-01

    We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.

  2. Effect of Poisson's loss factor of rubbery material on underwater sound absorption of anechoic coatings

    NASA Astrophysics Data System (ADS)

    Zhong, Jie; Zhao, Honggang; Yang, Haibin; Yin, Jianfei; Wen, Jihong

    2018-06-01

    Rubbery coatings embedded with air cavities are commonly used on underwater structures to reduce reflection of incoming sound waves. In this paper, the relationships between Poisson's and modulus loss factors of rubbery materials are theoretically derived, the different effects of the tiny Poisson's loss factor on characterizing the loss factors of shear and longitudinal moduli are revealed. Given complex Young's modulus and dynamic Poisson's ratio, it is found that the shear loss factor has almost invisible variation with the Poisson's loss factor and is very close to the loss factor of Young's modulus, while the longitudinal loss factor almost linearly decreases with the increase of Poisson's loss factor. Then, a finite element (FE) model is used to investigate the effect of the tiny Poisson's loss factor, which is generally neglected in some FE models, on the underwater sound absorption of rubbery coatings. Results show that the tiny Poisson's loss factor has a significant effect on the sound absorption of homogeneous coatings within the concerned frequency range, while it has both frequency- and structure-dependent influence on the sound absorption of inhomogeneous coatings with embedded air cavities. Given the material parameters and cavity dimensions, more obvious effect can be observed for the rubbery coating with a larger lattice constant and/or a thicker cover layer.
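
    The modulus relations invoked here are the standard isotropic ones applied to complex (viscoelastic) quantities: G* = E*/[2(1 + ν*)] for the shear modulus and M* = E*(1 - ν*)/[(1 + ν*)(1 - 2ν*)] for the longitudinal modulus, with each loss factor given by Im/Re of the corresponding modulus. The sketch below evaluates both from a complex Young's modulus and a complex Poisson's ratio; the sign convention adopted for the Poisson loss factor is an assumption and may differ from the paper's.

    ```python
    # Shear and longitudinal loss factors from complex Young's modulus and complex
    # Poisson's ratio (isotropic, correspondence-principle relations). Sketch only;
    # the sign convention assumed for the Poisson loss factor is an assumption.
    import numpy as np

    def loss_factors(E_re, eta_E, nu_re, eta_nu):
        E = E_re * (1.0 + 1j * eta_E)                 # complex Young's modulus
        nu = nu_re * (1.0 - 1j * eta_nu)              # complex Poisson's ratio (assumed sign)
        G = E / (2.0 * (1.0 + nu))                    # complex shear modulus
        M = E * (1.0 - nu) / ((1.0 + nu) * (1.0 - 2.0 * nu))   # complex longitudinal modulus
        return G.imag / G.real, M.imag / M.real

    # eta_G stays close to eta_E, while eta_M drops as the Poisson loss factor grows.
    for eta_nu in (0.0, 0.001, 0.005):
        eta_G, eta_M = loss_factors(E_re=1.0e7, eta_E=0.3, nu_re=0.49, eta_nu=eta_nu)
        print(eta_nu, round(eta_G, 4), round(eta_M, 4))
    ```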

  3. Modelling female fertility traits in beef cattle using linear and non-linear models.

    PubMed

    Naya, H; Peñagaricano, F; Urioste, J I

    2017-06-01

    Female fertility traits are key components of the profitability of beef cattle production. However, these traits are difficult and expensive to measure, particularly under extensive pastoral conditions, and consequently, fertility records are in general scarce and somewhat incomplete. Moreover, fertility traits are usually dominated by the effects of herd-year environment, and it is generally assumed that relatively small margins are kept for genetic improvement. New ways of modelling genetic variation in these traits are needed. Inspired by the methodological developments made by Prof. Daniel Gianola and co-workers, we applied linear (Gaussian), Poisson, probit (threshold), censored Poisson and censored Gaussian models to three different kinds of endpoints, namely calving success (CS), number of days from first calving (CD) and number of failed oestrus (FE). For models involving FE and CS, non-linear models outperformed their linear counterparts. For models derived from CD, linear versions displayed a better fit than the non-linear counterparts. Non-linear models showed consistently higher estimates of heritability and repeatability in all cases (h² < 0.08 and r < 0.13 for linear models; h² > 0.23 and r > 0.24 for non-linear models). While additive and permanent environment effects showed highly favourable correlations between all models (>0.789), consistency in selecting the 10% best sires showed important differences, mainly amongst the considered endpoints (FE, CS and CD). In consequence, endpoints should be considered as modelling different underlying genetic effects, with linear models more appropriate to describe CD and non-linear models better for FE and CS. © 2017 Blackwell Verlag GmbH.

  4. Hydrodynamic representation of the Klein-Gordon-Einstein equations in the weak field limit: General formalism and perturbations analysis

    NASA Astrophysics Data System (ADS)

    Suárez, Abril; Chavanis, Pierre-Henri

    2015-07-01

    Using a generalization of the Madelung transformation, we derive the hydrodynamic representation of the Klein-Gordon-Einstein equations in the weak field limit. We consider a complex self-interacting scalar field with a λ|φ|⁴ potential. We study the evolution of the spatially homogeneous background in the fluid representation and derive the linearized equations describing the evolution of small perturbations in a static and in an expanding Universe. We compare the results with simplified models in which the gravitational potential is introduced by hand in the Klein-Gordon equation, and assumed to satisfy a (generalized) Poisson equation. Nonrelativistic hydrodynamic equations based on the Schrödinger-Poisson equations or on the Gross-Pitaevskii-Poisson equations are recovered in the limit c → +∞. We study the evolution of the perturbations in the matter era using the nonrelativistic limit of our formalism. Perturbations whose wavelength is below the Jeans length oscillate in time while perturbations whose wavelength is above the Jeans length grow linearly with the scale factor as in the cold dark matter model. The growth of perturbations in the scalar field model is substantially faster than in the cold dark matter model. When the wavelength of the perturbations approaches the cosmological horizon (Hubble length), a relativistic treatment is mandatory. In that case, we find that relativistic effects attenuate or even prevent the growth of perturbations. This paper presents the general formalism and provides illustrations in simple cases. Other applications of our formalism will be considered in companion papers.

  5. DL_MG: A Parallel Multigrid Poisson and Poisson-Boltzmann Solver for Electronic Structure Calculations in Vacuum and Solution.

    PubMed

    Womack, James C; Anton, Lucian; Dziedzic, Jacek; Hasnip, Phil J; Probert, Matt I J; Skylaris, Chris-Kriton

    2018-03-13

    The solution of the Poisson equation is a crucial step in electronic structure calculations, yielding the electrostatic potential, a key component of the quantum mechanical Hamiltonian. In recent decades, theoretical advances and increases in computer performance have made it possible to simulate the electronic structure of extended systems in complex environments. This requires the solution of more complicated variants of the Poisson equation, featuring nonhomogeneous dielectric permittivities, ionic concentrations with nonlinear dependencies, and diverse boundary conditions. The analytic solutions generally used to solve the Poisson equation in vacuum (or with homogeneous permittivity) are not applicable in these circumstances, and numerical methods must be used. In this work, we present DL_MG, a flexible, scalable, and accurate solver library, developed specifically to tackle the challenges of solving the Poisson equation in modern large-scale electronic structure calculations on parallel computers. Our solver is based on the multigrid approach and uses an iterative high-order defect correction method to improve the accuracy of solutions. Using two chemically relevant model systems, we tested the accuracy and computational performance of DL_MG when solving the generalized Poisson and Poisson-Boltzmann equations, demonstrating excellent agreement with analytic solutions and efficient scaling to ∼10⁹ unknowns and 100s of CPU cores. We also applied DL_MG in actual large-scale electronic structure calculations, using the ONETEP linear-scaling electronic structure package to study a 2615 atom protein-ligand complex with routinely available computational resources. In these calculations, the overall execution time with DL_MG was not significantly greater than the time required for calculations using a conventional FFT-based solver.

  6. Mean-square state and parameter estimation for stochastic linear systems with Gaussian and Poisson noises

    NASA Astrophysics Data System (ADS)

    Basin, M.; Maldonado, J. J.; Zendejo, O.

    2016-07-01

    This paper proposes new mean-square filter and parameter estimator design for linear stochastic systems with unknown parameters over linear observations, where unknown parameters are considered as combinations of Gaussian and Poisson white noises. The problem is treated by reducing the original problem to a filtering problem for an extended state vector that includes parameters as additional states, modelled as combinations of independent Gaussian and Poisson processes. The solution to this filtering problem is based on the mean-square filtering equations for incompletely polynomial states confused with Gaussian and Poisson noises over linear observations. The resulting mean-square filter serves as an identifier for the unknown parameters. Finally, a simulation example shows effectiveness of the proposed mean-square filter and parameter estimator.

  7. A coarse-grid projection method for accelerating incompressible flow computations

    NASA Astrophysics Data System (ADS)

    San, Omer; Staples, Anne E.

    2013-01-01

    We present a coarse-grid projection (CGP) method for accelerating incompressible flow computations, which is applicable to methods involving Poisson equations as incompressibility constraints. The CGP methodology is a modular approach that facilitates data transfer with simple interpolations and uses black-box solvers for the Poisson and advection-diffusion equations in the flow solver. After solving the Poisson equation on a coarsened grid, an interpolation scheme is used to obtain the fine data for subsequent time stepping on the full grid. A particular version of the method is applied here to the vorticity-stream function, primitive variable, and vorticity-velocity formulations of incompressible Navier-Stokes equations. We compute several benchmark flow problems on two-dimensional Cartesian and non-Cartesian grids, as well as a three-dimensional flow problem. The method is found to accelerate these computations while retaining a level of accuracy close to that of the fine resolution field, which is significantly better than the accuracy obtained for a similar computation performed solely using a coarse grid. A linear acceleration rate is obtained for all the cases we consider due to the linear-cost elliptic Poisson solver used, with reduction factors in computational time between 2 and 42. The computational savings are larger when a suboptimal Poisson solver is used. We also find that the computational savings increase with increasing distortion ratio on non-Cartesian grids, making the CGP method a useful tool for accelerating generalized curvilinear incompressible flow solvers.

  8. Linear frictional forces cause orbits to neither circularize nor precess

    NASA Astrophysics Data System (ADS)

    Hamilton, B.; Crescimanno, M.

    2008-06-01

    For the undamped Kepler potential the lack of precession has historically been understood in terms of the Runge-Lenz symmetry. For the damped Kepler problem this result may be understood in terms of the generalization of Poisson structure to damped systems suggested recently by Tarasov (2005 J. Phys. A: Math. Gen. 38 2145). In this generalized algebraic structure the orbit-averaged Runge-Lenz vector remains a constant in the linearly damped Kepler problem to leading order in the damping coefficient. Beyond Kepler, we prove that, for any potential proportional to a power of the radius, the orbit shape and precession angle remain constant to leading order in the linear friction coefficient.

  9. Simple, Efficient Estimators of Treatment Effects in Randomized Trials Using Generalized Linear Models to Leverage Baseline Variables

    PubMed Central

    Rosenblum, Michael; van der Laan, Mark J.

    2010-01-01

    Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636
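
    The Poisson special case is easy to check by simulation: fit a main-terms Poisson working model to randomized-trial data generated from a different (hence misspecified) model and compare the exponentiated treatment coefficient with the marginal rate ratio. A minimal sketch (not the authors' code) follows.

    ```python
    # Simulation check: treatment coefficient of a misspecified main-terms Poisson
    # working model vs. the marginal rate ratio in a randomized trial. Sketch only.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(11)
    n = 200_000
    w = rng.normal(size=n)                             # baseline covariate
    a = rng.binomial(1, 0.5, size=n)                   # randomized treatment
    # True outcome model is NOT log-linear in (a, w): it includes an interaction.
    mu = np.exp(0.2 + 0.5 * a + 0.8 * w - 0.6 * a * w)
    y = rng.poisson(mu)

    X = sm.add_constant(np.column_stack([a, w]))       # misspecified working model: main terms only
    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    print(np.exp(fit.params[1]))                       # treatment rate ratio from working model
    print(y[a == 1].mean() / y[a == 0].mean())         # empirical marginal rate ratio
    ```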

  10. Background stratified Poisson regression analysis of cohort data.

    PubMed

    Richardson, David B; Langholz, Bryan

    2012-03-01

    Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models.

  11. Functional linear models for zero-inflated count data with application to modeling hospitalizations in patients on dialysis.

    PubMed

    Sentürk, Damla; Dalrymple, Lorien S; Nguyen, Danh V

    2014-11-30

    We propose functional linear models for zero-inflated count data with a focus on the functional hurdle and functional zero-inflated Poisson (ZIP) models. Although the hurdle model assumes the counts come from a mixture of a degenerate distribution at zero and a zero-truncated Poisson distribution, the ZIP model considers a mixture of a degenerate distribution at zero and a standard Poisson distribution. We extend the generalized functional linear model framework with a functional predictor and multiple cross-sectional predictors to model counts generated by a mixture distribution. We propose an estimation procedure for functional hurdle and ZIP models, called penalized reconstruction, geared towards error-prone and sparsely observed longitudinal functional predictors. The approach relies on dimension reduction and pooling of information across subjects involving basis expansions and penalized maximum likelihood techniques. The developed functional hurdle model is applied to modeling hospitalizations within the first 2 years from initiation of dialysis, with a high percentage of zeros, in the Comprehensive Dialysis Study participants. Hospitalization counts are modeled as a function of sparse longitudinal measurements of serum albumin concentrations, patient demographics, and comorbidities. Simulation studies are used to study finite sample properties of the proposed method and include comparisons with an adaptation of standard principal components regression. Copyright © 2014 John Wiley & Sons, Ltd.

  12. Elliptic Euler-Poisson-Darboux equation, critical points and integrable systems

    NASA Astrophysics Data System (ADS)

    Konopelchenko, B. G.; Ortenzi, G.

    2013-12-01

    The structure and properties of families of critical points for classes of functions W(z, z̄) obeying the elliptic Euler-Poisson-Darboux equation E(1/2, 1/2) are studied. General variational and differential equations governing the dependence of critical points on variational (deformation) parameters are found. Explicit examples of the corresponding integrable quasi-linear differential systems and hierarchies are presented. Among them are the extended dispersionless Toda/nonlinear Schrödinger hierarchies, the 'inverse' hierarchy and equations associated with the real-analytic Eisenstein series E(β, β̄; 1/2). The specific bi-Hamiltonian structure of these equations is also discussed.

  13. Evolutionary inference via the Poisson Indel Process.

    PubMed

    Bouchard-Côté, Alexandre; Jordan, Michael I

    2013-01-22

    We address the problem of the joint statistical inference of phylogenetic trees and multiple sequence alignments from unaligned molecular sequences. This problem is generally formulated in terms of string-valued evolutionary processes along the branches of a phylogenetic tree. The classic evolutionary process, the TKF91 model [Thorne JL, Kishino H, Felsenstein J (1991) J Mol Evol 33(2):114-124] is a continuous-time Markov chain model composed of insertion, deletion, and substitution events. Unfortunately, this model gives rise to an intractable computational problem: The computation of the marginal likelihood under the TKF91 model is exponential in the number of taxa. In this work, we present a stochastic process, the Poisson Indel Process (PIP), in which the complexity of this computation is reduced to linear. The Poisson Indel Process is closely related to the TKF91 model, differing only in its treatment of insertions, but it has a global characterization as a Poisson process on the phylogeny. Standard results for Poisson processes allow key computations to be decoupled, which yields the favorable computational profile of inference under the PIP model. We present illustrative experiments in which Bayesian inference under the PIP model is compared with separate inference of phylogenies and alignments.

  14. Evolutionary inference via the Poisson Indel Process

    PubMed Central

    Bouchard-Côté, Alexandre; Jordan, Michael I.

    2013-01-01

    We address the problem of the joint statistical inference of phylogenetic trees and multiple sequence alignments from unaligned molecular sequences. This problem is generally formulated in terms of string-valued evolutionary processes along the branches of a phylogenetic tree. The classic evolutionary process, the TKF91 model [Thorne JL, Kishino H, Felsenstein J (1991) J Mol Evol 33(2):114–124] is a continuous-time Markov chain model composed of insertion, deletion, and substitution events. Unfortunately, this model gives rise to an intractable computational problem: The computation of the marginal likelihood under the TKF91 model is exponential in the number of taxa. In this work, we present a stochastic process, the Poisson Indel Process (PIP), in which the complexity of this computation is reduced to linear. The Poisson Indel Process is closely related to the TKF91 model, differing only in its treatment of insertions, but it has a global characterization as a Poisson process on the phylogeny. Standard results for Poisson processes allow key computations to be decoupled, which yields the favorable computational profile of inference under the PIP model. We present illustrative experiments in which Bayesian inference under the PIP model is compared with separate inference of phylogenies and alignments. PMID:23275296

  15. Poisson Coordinates.

    PubMed

    Li, Xian-Ying; Hu, Shi-Min

    2013-02-01

    Harmonic functions are the critical points of a Dirichlet energy functional, the linear projections of conformal maps. They play an important role in computer graphics, particularly for gradient-domain image processing and shape-preserving geometric computation. We propose Poisson coordinates, a novel transfinite interpolation scheme based on the Poisson integral formula, as a rapid way to estimate a harmonic function on a certain domain with desired boundary values. Poisson coordinates are an extension of the Mean Value coordinates (MVCs) which inherit their linear precision, smoothness, and kernel positivity. We give explicit formulas for Poisson coordinates in both continuous and 2D discrete forms. Superior to MVCs, Poisson coordinates are proved to be pseudoharmonic (i.e., they reproduce harmonic functions on n-dimensional balls). Our experimental results show that Poisson coordinates have lower Dirichlet energies than MVCs on a number of typical 2D domains (particularly convex domains). As well as presenting a formula, our approach provides useful insights for further studies on coordinates-based interpolation and fast estimation of harmonic functions.
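
    The Poisson integral formula underlying these coordinates reconstructs a harmonic function on the unit disk from boundary values, u(re^(iθ)) = (1/2π) ∫ P_r(θ - φ) f(φ) dφ with kernel P_r(ψ) = (1 - r²)/(1 - 2r cos ψ + r²). The short sketch below checks this numerically for f(φ) = cos φ, whose harmonic extension is r cos θ; it is a generic illustration, not the paper's discrete Poisson coordinates.

    ```python
    # Numerical check of the Poisson integral formula on the unit disk.
    # Generic illustration only (not the paper's discrete Poisson coordinates).
    import numpy as np

    def poisson_integral(f_boundary, r, theta, m=2000):
        phi = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
        kernel = (1.0 - r**2) / (1.0 - 2.0 * r * np.cos(theta - phi) + r**2)
        return np.mean(kernel * f_boundary(phi))       # (1/2π)∫ ... dφ as a mean

    r, theta = 0.6, 1.1
    u = poisson_integral(np.cos, r, theta)
    print(u, r * np.cos(theta))    # harmonic extension of cos(φ) is r·cos(θ)
    ```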

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moradi, Afshin, E-mail: a.moradi@kut.ac.ir

    We develop the Maxwell-Garnett theory for the effective medium approximation of composite materials with metallic nanoparticles by taking into account the quantum spatial dispersion effects in the dielectric response of the nanoparticles. We derive a quantum nonlocal generalization of the standard Maxwell-Garnett formula, by means of the linearized quantum hydrodynamic theory in conjunction with the Poisson equation as well as the appropriate additional quantum boundary conditions.
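
    For reference, the classical (local) Maxwell-Garnett mixing rule being generalized is ε_eff = ε_m [ε_p + 2ε_m + 2f(ε_p - ε_m)] / [ε_p + 2ε_m - f(ε_p - ε_m)] for inclusions of permittivity ε_p at volume fraction f in a host ε_m. The sketch below evaluates it with a simple Drude model for the metallic nanoparticles; the quantum spatial-dispersion corrections derived in this record are not included.

    ```python
    # Classical Maxwell-Garnett effective permittivity with Drude metallic inclusions.
    # Sketch only; the quantum nonlocal corrections of the record are not included.
    import numpy as np

    def drude_eps(omega, omega_p=9.0, gamma=0.1):
        """Drude permittivity (frequencies in eV-like units)."""
        return 1.0 - omega_p**2 / (omega * (omega + 1j * gamma))

    def maxwell_garnett(eps_p, eps_m, f):
        num = eps_p + 2.0 * eps_m + 2.0 * f * (eps_p - eps_m)
        den = eps_p + 2.0 * eps_m - f * (eps_p - eps_m)
        return eps_m * num / den

    omega = np.linspace(1.0, 6.0, 6)
    eps_eff = maxwell_garnett(drude_eps(omega), eps_m=2.25, f=0.1)   # particles in glass
    print(np.round(eps_eff, 3))
    ```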

  17. Linear and Poisson models for genetic evaluation of tick resistance in cross-bred Hereford x Nellore cattle.

    PubMed

    Ayres, D R; Pereira, R J; Boligon, A A; Silva, F F; Schenkel, F S; Roso, V M; Albuquerque, L G

    2013-12-01

    Cattle resistance to ticks is measured by the number of ticks infesting the animal. The model used for the genetic analysis of cattle resistance to ticks frequently requires logarithmic transformation of the observations. The objective of this study was to evaluate the predictive ability and goodness of fit of different models for the analysis of this trait in cross-bred Hereford x Nellore cattle. Three models were tested: a linear model using logarithmic transformation of the observations (MLOG); a linear model without transformation of the observations (MLIN); and a generalized linear Poisson model with a residual term (MPOI). All models included the classificatory effects of contemporary group and genetic group and the covariates age of animal at the time of recording and individual heterozygosis, as well as additive genetic effects as random effects. Heritability estimates were 0.08 ± 0.02, 0.10 ± 0.02 and 0.14 ± 0.04 for the MLIN, MLOG and MPOI models, respectively. The model fit quality, verified by the deviance information criterion (DIC) and residual mean square, indicated the superior fit of the MPOI model. The predictive ability of the models was compared by a validation test in an independent sample. The MPOI model was slightly superior in terms of goodness of fit and predictive ability, whereas the correlations between observed and predicted tick counts were practically the same for all models. A higher rank correlation between breeding values was observed between the MLOG and MPOI models. The Poisson model can be used for the selection of tick-resistant animals. © 2013 Blackwell Verlag GmbH.

  18. Assessment of Poisson, probit and linear models for genetic analysis of presence and number of black spots in Corriedale sheep.

    PubMed

    Peñagaricano, F; Urioste, J I; Naya, H; de los Campos, G; Gianola, D

    2011-04-01

    Black skin spots are associated with pigmented fibres in wool, an important quality fault. Our objective was to assess alternative models for genetic analysis of presence (BINBS) and number (NUMBS) of black spots in Corriedale sheep. During 2002-08, 5624 records from 2839 animals in two flocks, aged 1 through 6 years, were taken at shearing. Four models were considered: linear and probit for BINBS and linear and Poisson for NUMBS. All models included flock-year and age as fixed effects and animal and permanent environmental effects as random effects. Models were fitted to the whole data set and were also compared based on their predictive ability in cross-validation. Estimates of heritability ranged from 0.154 to 0.230 for BINBS and 0.269 to 0.474 for NUMBS. For BINBS, the probit model fitted the data slightly better than the linear model. Predictions of random effects from these models were highly correlated, and both models exhibited similar predictive ability. For NUMBS, the Poisson model, with a residual term to account for overdispersion, performed better than the linear model in goodness of fit and predictive ability. Predictions of random effects from the Poisson model were more strongly correlated with those from BINBS models than those from the linear model. Overall, the use of probit or linear models for BINBS and of a Poisson model with a residual for NUMBS seems a reasonable choice for genetic selection purposes in Corriedale sheep. © 2010 Blackwell Verlag GmbH.

  19. The coefficient of determination R2 and intra-class correlation coefficient from generalized linear mixed-effects models revisited and expanded.

    PubMed

    Nakagawa, Shinichi; Johnson, Paul C D; Schielzeth, Holger

    2017-09-01

    The coefficient of determination R2 quantifies the proportion of variance explained by a statistical model and is an important summary statistic of biological interest. However, estimating R2 for generalized linear mixed models (GLMMs) remains challenging. We have previously introduced a version of R2 that we called [Formula: see text] for Poisson and binomial GLMMs, but not for other distributional families. Similarly, we earlier discussed how to estimate intra-class correlation coefficients (ICCs) using Poisson and binomial GLMMs. In this paper, we generalize our methods to all other non-Gaussian distributions, in particular to negative binomial and gamma distributions that are commonly used for modelling biological data. While expanding our approach, we highlight two useful concepts for biologists, Jensen's inequality and the delta method, both of which help us in understanding the properties of GLMMs. Jensen's inequality has important implications for biologically meaningful interpretation of GLMMs, whereas the delta method allows a general derivation of variance associated with non-Gaussian distributions. We also discuss some special considerations for binomial GLMMs with binary or proportion data. We illustrate the implementation of our extension by worked examples from the field of ecology and evolution in the R environment. However, our method can be used across disciplines and regardless of statistical environments. © 2017 The Author(s).
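
    For orientation, the marginal R2 for a Poisson GLMM with a log link is often written in the form sketched below (LaTeX notation); the lognormal and delta-method expressions for the observation-level variance are two of the approximations discussed in this literature, and the exact choice shown here is an assumption of this sketch rather than a statement of the authors' preferred formula.

        R^2_{\mathrm{GLMM}(m)} = \frac{\sigma^2_f}{\sigma^2_f + \sum_l \sigma^2_l + \sigma^2_d},
        \qquad
        \sigma^2_d \approx \ln\!\left(1 + \tfrac{1}{\lambda}\right) \ \text{(lognormal approximation)}
        \quad\text{or}\quad
        \sigma^2_d \approx \tfrac{1}{\lambda} \ \text{(delta method)},

    where \sigma^2_f is the variance explained by the fixed effects on the link scale, the \sigma^2_l are the random-effect variances (including any observation-level dispersion term), and \lambda is the mean count on the observed scale.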

  20. Multi-Parameter Linear Least-Squares Fitting to Poisson Data One Count at a Time

    NASA Technical Reports Server (NTRS)

    Wheaton, W.; Dunklee, A.; Jacobson, A.; Ling, J.; Mahoney, W.; Radocinski, R.

    1993-01-01

    A standard problem in gamma-ray astronomy data analysis is the decomposition of a set of observed counts, described by Poisson statistics, according to a given multi-component linear model, with underlying physical count rates or fluxes which are to be estimated from the data.

  1. Simulation Methods for Poisson Processes in Nonstationary Systems.

    DTIC Science & Technology

    1978-08-01

    A relatively efficient new method for the simulation of one-dimensional and two-dimensional nonhomogeneous Poisson processes is described; the method is stated for a log-linear rate function and is based on an identity relating the ...
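
    As a concrete illustration of this kind of algorithm (a generic thinning scheme, not necessarily the identity-based method of the report), a one-dimensional nonhomogeneous Poisson process with log-linear rate lambda(t) = exp(a + b*t) can be simulated as follows; the parameter values are placeholders.

        import numpy as np

        def simulate_nhpp_loglinear(a, b, T, rng=None):
            # Intensity lambda(t) = exp(a + b*t); on [0, T] its maximum is at
            # t = 0 when b <= 0 and at t = T when b > 0.
            rng = np.random.default_rng(rng)
            lam = lambda t: np.exp(a + b * t)
            lam_max = lam(T) if b > 0 else lam(0.0)
            times, t = [], 0.0
            while True:
                # Candidate points from a homogeneous process at rate lam_max ...
                t += rng.exponential(1.0 / lam_max)
                if t > T:
                    break
                # ... kept with probability lambda(t) / lam_max (thinning).
                if rng.uniform() < lam(t) / lam_max:
                    times.append(t)
            return np.array(times)

        # Example: intensity rising from exp(0.5) to exp(2.0) over [0, 1].
        events = simulate_nhpp_loglinear(a=0.5, b=1.5, T=1.0, rng=42)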

  2. Duality and integrability: Electromagnetism, linearized gravity, and massless higher spin gauge fields as bi-Hamiltonian systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnich, Glenn; Troessaert, Cedric

    2009-04-15

    In the reduced phase space of electromagnetism, the generator of duality rotations in the usual Poisson bracket is shown to generate Maxwell's equations in a second, much simpler Poisson bracket. This gives rise to a hierarchy of bi-Hamiltonian evolution equations in the standard way. The result can be extended to linearized Yang-Mills theory, linearized gravity, and massless higher spin gauge fields.

  3. Normal forms for Poisson maps and symplectic groupoids around Poisson transversals

    NASA Astrophysics Data System (ADS)

    Frejlich, Pedro; Mărcuț, Ioan

    2018-03-01

    Poisson transversals are submanifolds in a Poisson manifold which intersect all symplectic leaves transversally and symplectically. In this communication, we prove a normal form theorem for Poisson maps around Poisson transversals. A Poisson map pulls a Poisson transversal back to a Poisson transversal, and our first main result states that simultaneous normal forms exist around such transversals, for which the Poisson map becomes transversally linear, and intertwines the normal form data of the transversals. Our second result concerns symplectic integrations. We prove that a neighborhood of a Poisson transversal is integrable exactly when the Poisson transversal itself is integrable, and in that case we prove a normal form theorem for the symplectic groupoid around its restriction to the Poisson transversal, which puts all structure maps in normal form. We conclude by illustrating our results with examples arising from Lie algebras.

  4. Normal forms for Poisson maps and symplectic groupoids around Poisson transversals.

    PubMed

    Frejlich, Pedro; Mărcuț, Ioan

    2018-01-01

    Poisson transversals are submanifolds in a Poisson manifold which intersect all symplectic leaves transversally and symplectically. In this communication, we prove a normal form theorem for Poisson maps around Poisson transversals. A Poisson map pulls a Poisson transversal back to a Poisson transversal, and our first main result states that simultaneous normal forms exist around such transversals, for which the Poisson map becomes transversally linear, and intertwines the normal form data of the transversals. Our second result concerns symplectic integrations. We prove that a neighborhood of a Poisson transversal is integrable exactly when the Poisson transversal itself is integrable, and in that case we prove a normal form theorem for the symplectic groupoid around its restriction to the Poisson transversal, which puts all structure maps in normal form. We conclude by illustrating our results with examples arising from Lie algebras.

  5. Research in Stochastic Processes.

    DTIC Science & Technology

    1983-10-01

    ... increases. A more detailed investigation of the exceedances themselves (rather than just the cluster centers) was undertaken, together with J. Hüsler and ... J. Hüsler and M.R. Leadbetter, Compound Poisson limit theorems for high level exceedances by stationary sequences, Center for Stochastic Processes ... stability by a random linear operator. C.D. Hardin, General (asymmetric) stable variables and processes. T. Hsing, J. Hüsler and M.R. Leadbetter, Compound ...

  6. Bluues: a program for the analysis of the electrostatic properties of proteins based on generalized Born radii

    PubMed Central

    2012-01-01

    Background: The Poisson-Boltzmann (PB) equation and its linear approximation have been widely used to describe biomolecular electrostatics. Generalized Born (GB) models offer a convenient computational approximation for the more fundamental approach based on the Poisson-Boltzmann equation, and allow estimation of pairwise contributions to electrostatic effects in the molecular context. Results: We have implemented in a single program the most common analyses of the electrostatic properties of proteins. The program first computes generalized Born radii via a surface integral and then uses these radii (with a finite-radius test particle) to perform electrostatic analyses. Depending on the user's requirements, the output of the program includes: 1) the generalized Born radius of each atom; 2) the electrostatic solvation free energy; 3) the electrostatic forces on each atom (currently at a developmental stage); 4) the pH-dependent properties (total charge and pH-dependent free energy of folding in the pH range -2 to 18); 5) the pKa of all ionizable groups; 6) the electrostatic potential at the surface of the molecule; 7) the electrostatic potential in a volume surrounding the molecule. Conclusions: Although at the expense of limited flexibility, the program provides the most common analyses and requires only a single input file in PQR format. The results obtained are comparable to those obtained using state-of-the-art Poisson-Boltzmann solvers. A Linux executable with example input and output files is provided as supplementary material. PMID:22536964
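
    To make the pairwise character of the generalized Born approximation explicit, a minimal sketch of the standard Still-type solvation energy computed from precomputed Born radii is shown below; this is the generic textbook form, not necessarily the exact expressions implemented in Bluues, and the units and dielectric constants are illustrative.

        import numpy as np

        def gb_solvation_energy(q, xyz, born_radii, eps_in=1.0, eps_out=80.0):
            # Still-type generalized Born energy (Coulomb constant set to 1):
            #   dG = -0.5 * (1/eps_in - 1/eps_out) * sum_ij q_i q_j / f_GB
            # with f_GB = sqrt(r_ij^2 + R_i R_j exp(-r_ij^2 / (4 R_i R_j))).
            # The i == j terms reproduce the Born self-energies.
            dG = 0.0
            n = len(q)
            for i in range(n):
                for j in range(n):
                    r2 = float(np.sum((xyz[i] - xyz[j]) ** 2))
                    RiRj = born_radii[i] * born_radii[j]
                    f_gb = np.sqrt(r2 + RiRj * np.exp(-r2 / (4.0 * RiRj)))
                    dG += q[i] * q[j] / f_gb
            return -0.5 * (1.0 / eps_in - 1.0 / eps_out) * dG

        # Toy example: two opposite unit charges 3 length units apart.
        q = np.array([1.0, -1.0])
        xyz = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
        print(gb_solvation_energy(q, xyz, born_radii=np.array([1.5, 1.5])))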

  7. Identification d’une Classe de Processus de Poisson Filtres (Identification of a Class of Filtered Poisson Processes).

    DTIC Science & Technology

    1983-05-20

    A class of filtered Poisson processes is introduced: the amplitude has a law which is spherically invariant and the filter is real, linear and causal. It is shown how such a model can be identified from experimental data. (Author)

  8. Towards classical spectrum generating algebras for f-deformations

    NASA Astrophysics Data System (ADS)

    Kullock, Ricardo; Latini, Danilo

    2016-01-01

    In this paper we revise the classical analog of f-oscillators, a generalization of q-oscillators given in Man'ko et al. (1997) [8], in the framework of classical spectrum generating algebras (SGA) introduced in Kuru and Negro (2008) [9]. We write down the deformed Poisson algebra characterizing the entire family of non-linear oscillators and construct its general solution algebraically. The latter, covering the full range of f-deformations, shows an energy dependence both in the amplitude and the frequency of the motion.

  9. Solution of the nonlinear Poisson-Boltzmann equation: Application to ionic diffusion in cementitious materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnold, J.; Kosson, D.S., E-mail: david.s.kosson@vanderbilt.edu; Garrabrants, A.

    2013-02-15

    A robust numerical solution of the nonlinear Poisson-Boltzmann equation for asymmetric polyelectrolyte solutions in discrete pore geometries is presented. Comparisons to the linearized approximation of the Poisson-Boltzmann equation reveal that the assumptions leading to linearization may not be appropriate for the electrochemical regime in many cementitious materials. Implications of the electric double layer on both partitioning of species and on diffusive release are discussed. The influence of the electric double layer on anion diffusion relative to cation diffusion is examined.

  10. Modeling the Non-Linear Response of Fiber-Reinforced Laminates Using a Combined Damage/Plasticity Model

    NASA Technical Reports Server (NTRS)

    Schuecker, Clara; Davila, Carlos G.; Pettermann, Heinz E.

    2008-01-01

    The present work is concerned with modeling the non-linear response of fiber reinforced polymer laminates. Recent experimental data suggests that the non-linearity is not only caused by matrix cracking but also by matrix plasticity due to shear stresses. To capture the effects of those two mechanisms, a model combining a plasticity formulation with continuum damage has been developed to simulate the non-linear response of laminates under plane stress states. The model is used to compare the predicted behavior of various laminate lay-ups to experimental data from the literature by looking at the degradation of axial modulus and Poisson's ratio of the laminates. The influence of residual curing stresses and in-situ effect on the predicted response is also investigated. It is shown that predictions of the combined damage/plasticity model, in general, correlate well with the experimental data. The test data shows that there are two different mechanisms that can have opposite effects on the degradation of the laminate Poisson's ratio, which is captured correctly by the damage/plasticity model. Residual curing stresses are found to have a minor influence on the predicted response for the cases considered here. Some open questions remain regarding the prediction of damage onset.

  11. Independence of the effective dielectric constant of an electrolytic solution on the ionic distribution in the linear Poisson-Nernst-Planck model.

    PubMed

    Alexe-Ionescu, A L; Barbero, G; Lelidis, I

    2014-08-28

    We consider the influence of the spatial dependence of the ions distribution on the effective dielectric constant of an electrolytic solution. We show that in the linear version of the Poisson-Nernst-Planck model, the effective dielectric constant of the solution has to be considered independent of any ionic distribution induced by the external field. This result follows from the fact that, in the linear approximation of the Poisson-Nernst-Planck model, the redistribution of the ions in the solvent due to the external field gives rise to a variation of the dielectric constant that is of the first order in the effective potential, and therefore it has to be neglected in the Poisson equation that relates the actual electric potential across the electrolytic cell to the bulk density of ions. The analysis is performed in the case where the electrodes are perfectly blocking and the adsorption at the electrodes is negligible, and in the absence of any ion dissociation-recombination effect.

  12. Mixed effect Poisson log-linear models for clinical and epidemiological sleep hypnogram data

    PubMed Central

    Swihart, Bruce J.; Caffo, Brian S.; Crainiceanu, Ciprian; Punjabi, Naresh M.

    2013-01-01

    Bayesian Poisson log-linear multilevel models scalable to epidemiological studies are proposed to investigate population variability in sleep state transition rates. Hierarchical random effects are used to account for pairings of subjects and repeated measures within those subjects, as comparing diseased to non-diseased subjects while minimizing bias is of importance. Essentially, non-parametric piecewise constant hazards are estimated and smoothed, allowing for time-varying covariates and segment of the night comparisons. The Bayesian Poisson regression is justified through a re-derivation of a classical algebraic likelihood equivalence of Poisson regression with a log(time) offset and survival regression assuming exponentially distributed survival times. Such re-derivation allows synthesis of two methods currently used to analyze sleep transition phenomena: stratified multi-state proportional hazards models and log-linear models with GEE for transition counts. An example data set from the Sleep Heart Health Study is analyzed. Supplementary material includes the analyzed data set as well as the code for a reproducible analysis. PMID:22241689
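
    The likelihood equivalence mentioned above is easy to verify numerically: for exponentially distributed survival times, a Poisson GLM on the event indicator with a log(time) offset recovers the log-hazard regression coefficients. A minimal sketch on simulated data (assuming the statsmodels package is available) follows.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 2000
        x = rng.normal(size=n)
        X = sm.add_constant(x)

        # Exponential survival times with hazard h(x) = exp(-0.5 + 0.8*x),
        # administratively censored at t = 2.
        rate = np.exp(-0.5 + 0.8 * x)
        t_event = rng.exponential(1.0 / rate)
        time = np.minimum(t_event, 2.0)
        event = (t_event <= 2.0).astype(float)

        # Poisson GLM on the event indicator with offset log(time): its
        # coefficients estimate the log-hazard regression of the
        # exponential survival model.
        fit = sm.GLM(event, X, family=sm.families.Poisson(),
                     offset=np.log(time)).fit()
        print(fit.params)   # should be close to (-0.5, 0.8)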

  13. On the origin of dual Lax pairs and their r-matrix structure

    NASA Astrophysics Data System (ADS)

    Avan, Jean; Caudrelier, Vincent

    2017-10-01

    We establish the algebraic origin of the following observations made previously by the authors and coworkers: (i) A given integrable PDE in 1 + 1 dimensions within the Zakharov-Shabat scheme related to a Lax pair can be cast in two distinct, dual Hamiltonian formulations; (ii) Associated to each formulation is a Poisson bracket and a phase space (which are not compatible in the sense of Magri); (iii) Each matrix in the Lax pair satisfies a linear Poisson algebra a la Sklyanin characterized by the same classical r matrix. We develop the general concept of dual Lax pairs and dual Hamiltonian formulation of an integrable field theory. We elucidate the origin of the common r-matrix structure by tracing it back to a single Lie-Poisson bracket on a suitable coadjoint orbit of the loop algebra sl(2 , C) ⊗ C(λ ,λ-1) . The results are illustrated with the examples of the nonlinear Schrödinger and Gerdjikov-Ivanov hierarchies.

  14. Response analysis of a class of quasi-linear systems with fractional derivative excited by Poisson white noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Yongge; Xu, Wei, E-mail: weixu@nwpu.edu.cn; Yang, Guidong

    The Poisson white noise, as a typical non-Gaussian excitation, has attracted much attention recently. However, little work has been devoted to the study of stochastic systems with fractional derivative under Poisson white noise excitation. This paper investigates the stationary response of a class of quasi-linear systems with fractional derivative excited by Poisson white noise. The equivalent stochastic system of the original stochastic system is obtained. Then, approximate stationary solutions are obtained with the help of the perturbation method. Finally, two typical examples are discussed in detail to demonstrate the effectiveness of the proposed method. The analysis also shows that the fractional order and the fractional coefficient significantly affect the responses of the stochastic systems with fractional derivative.

  15. Influence of the nucleus area distribution on the survival fraction after charged particles broad beam irradiation.

    PubMed

    Wéra, A-C; Barazzuol, L; Jeynes, J C G; Merchant, M J; Suzuki, M; Kirkby, K J

    2014-08-07

    It is well known that broad beam irradiation with heavy ions leads to variation in the number of hits received by each cell, as the distribution of particles follows Poisson statistics. Although the nucleus area determines the number of hits received for a given dose, variation among the irradiated cell population is generally not considered. In this work, we investigate the effect of the nucleus area distribution on the survival fraction. More specifically, this work aims to explain the deviation, or tail, which might be observed in the survival fraction at high irradiation doses. For this purpose, the nucleus area distribution was added to the beam Poisson statistics and the Linear-Quadratic model in order to fit the experimental data. As shown in this study, nucleus size variation, and the associated Poisson statistics, can lead to an upward survival trend after broad beam irradiation. The influence of the distribution parameters (mean area and standard deviation) was studied using a normal distribution, along with the Linear-Quadratic model parameters (α and β). Finally, the model proposed here was successfully tested against the survival fraction of LN18 cells irradiated with an 85 keV µm(-1) carbon ion broad beam for which the distribution of the nucleus area had been determined.

  16. Estimating the intensity of a cyclic Poisson process in the presence of additive and multiplicative linear trend

    NASA Astrophysics Data System (ADS)

    Wayan Mangku, I.

    2017-10-01

    In this paper we survey some results on estimation of the intensity function of a cyclic Poisson process in the presence of additive and multiplicative linear trend. We do not assume any parametric form for the cyclic component of the intensity function, except that it is periodic. Moreover, we consider the case when only a single realization of the Poisson process is observed in a bounded interval. The considered estimators are weakly and strongly consistent when the size of the observation interval indefinitely expands. Asymptotic approximations to the bias and variance of those estimators are presented.

  17. PB-AM: An open-source, fully analytical linear poisson-boltzmann solver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Felberg, Lisa E.; Brookes, David H.; Yap, Eng-Hui

    2016-11-02

    We present the open source distributed software package Poisson-Boltzmann Analytical Method (PB-AM), a fully analytical solution to the linearized Poisson-Boltzmann equation. The PB-AM software package includes the generation of output files appropriate for visualization using VMD, a Brownian dynamics scheme that uses periodic boundary conditions to simulate dynamics, the ability to specify docking criteria, and offers two different kinetics schemes to evaluate biomolecular association rate constants. Given that PB-AM defines mutual polarization completely and accurately, it can be refactored as a many-body expansion to explore 2- and 3-body polarization. Additionally, the software has been integrated into the Adaptive Poisson-Boltzmann Solver (APBS) software package to make it more accessible to a larger group of scientists, educators and students that are more familiar with the APBS framework.

  18. Identification of a Class of Filtered Poisson Processes.

    DTIC Science & Technology

    1981-01-01

    AD-A135 371, Identification of a Class of Filtered Poisson Processes, North Carolina Univ. at Chapel Hill, Dept. of Statistics, De Brucq, Denis; Gualtierotti, Antonio, 1981. A class of filtered Poisson processes is introduced: the amplitude has a law which is spherically invariant and the filter is real, linear and causal. It is shown ...

  19. Assessment of Linear Finite-Difference Poisson-Boltzmann Solvers

    PubMed Central

    Wang, Jun; Luo, Ray

    2009-01-01

    CPU time and memory usage are two vital issues that any numerical solvers for the Poisson-Boltzmann equation have to face in biomolecular applications. In this study we systematically analyzed the CPU time and memory usage of five commonly used finite-difference solvers with a large and diversified set of biomolecular structures. Our comparative analysis shows that modified incomplete Cholesky conjugate gradient and geometric multigrid are the most efficient in the diversified test set. For the two efficient solvers, our test shows that their CPU times increase approximately linearly with the numbers of grids. Their CPU times also increase almost linearly with the negative logarithm of the convergence criterion at very similar rate. Our comparison further shows that geometric multigrid performs better in the large set of tested biomolecules. However, modified incomplete Cholesky conjugate gradient is superior to geometric multigrid in molecular dynamics simulations of tested molecules. We also investigated other significant components in numerical solutions of the Poisson-Boltzmann equation. It turns out that the time-limiting step is the free boundary condition setup for the linear systems for the selected proteins if the electrostatic focusing is not used. Thus, development of future numerical solvers for the Poisson-Boltzmann equation should balance all aspects of the numerical procedures in realistic biomolecular applications. PMID:20063271
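
    As a toy illustration of how such grid-based linear systems are assembled and solved iteratively (a plain finite-difference Poisson problem with a Krylov solver, not the full Poisson-Boltzmann solvers benchmarked above), consider the sketch below; the grid size and right-hand side are arbitrary.

        import numpy as np
        from scipy.sparse import diags, identity, kron
        from scipy.sparse.linalg import cg

        def laplacian_3d(n):
            # 7-point finite-difference Laplacian on an n x n x n grid
            # (Dirichlet boundaries), built as a Kronecker sum of 1D
            # second-difference operators.
            L = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
            I = identity(n, format="csr")
            return kron(kron(L, I), I) + kron(kron(I, L), I) + kron(kron(I, I), L)

        n = 32
        A = laplacian_3d(n)                                # n**3 unknowns
        b = np.random.default_rng(1).normal(size=n ** 3)   # stand-in source term
        phi, info = cg(A, b)                               # conjugate-gradient solve
        assert info == 0                                   # 0 means converged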

  20. Poisson structure of dynamical systems with three degrees of freedom

    NASA Astrophysics Data System (ADS)

    Gümral, Hasan; Nutku, Yavuz

    1993-12-01

    It is shown that the Poisson structure of dynamical systems with three degrees of freedom can be defined in terms of an integrable one-form in three dimensions. Advantage is taken of this fact and the theory of foliations is used in discussing the geometrical structure underlying complete and partial integrability. Techniques for finding Poisson structures are presented and applied to various examples such as the Halphen system which has been studied as the two-monopole problem by Atiyah and Hitchin. It is shown that the Halphen system can be formulated in terms of a flat SL(2,R)-valued connection and belongs to a nontrivial Godbillon-Vey class. On the other hand, for the Euler top and a special case of three-species Lotka-Volterra equations which are contained in the Halphen system as limiting cases, this structure degenerates into the form of globally integrable bi-Hamiltonian structures. The globally integrable bi-Hamiltonian case is a linear and the SL(2,R) structure is a quadratic unfolding of an integrable one-form in 3+1 dimensions. It is shown that the existence of a vector field compatible with the flow is a powerful tool in the investigation of Poisson structure and some new techniques for incorporating arbitrary constants into the Poisson one-form are presented herein. This leads to some extensions, analogous to q extensions, of Poisson structure. The Kermack-McKendrick model and some of its generalizations describing the spread of epidemics, as well as the integrable cases of the Lorenz, Lotka-Volterra, May-Leonard, and Maxwell-Bloch systems admit globally integrable bi-Hamiltonian structure.

  1. Nonlinear effective theory of dark energy

    NASA Astrophysics Data System (ADS)

    Cusin, Giulia; Lewandowski, Matthew; Vernizzi, Filippo

    2018-04-01

    We develop an approach to parametrize cosmological perturbations beyond linear order for general dark energy and modified gravity models characterized by a single scalar degree of freedom. We derive the full nonlinear action, focusing on Horndeski theories. In the quasi-static, non-relativistic limit, there are a total of six independent relevant operators, three of which start at nonlinear order. The new nonlinear couplings modify, beyond linear order, the generalized Poisson equation relating the Newtonian potential to the matter density contrast. We derive this equation up to cubic order in perturbations and, in a companion article [1], we apply it to compute the one-loop matter power spectrum. Within this approach, we also discuss the Vainshtein regime around spherical sources and the relation between the Vainshtein scale and the nonlinear scale for structure formation.

  2. Multiparameter linear least-squares fitting to Poisson data one count at a time

    NASA Technical Reports Server (NTRS)

    Wheaton, Wm. A.; Dunklee, Alfred L.; Jacobsen, Allan S.; Ling, James C.; Mahoney, William A.; Radocinski, Robert G.

    1995-01-01

    A standard problem in gamma-ray astronomy data analysis is the decomposition of a set of observed counts, described by Poisson statistics, according to a given multicomponent linear model, with underlying physical count rates or fluxes which are to be estimated from the data. Despite its conceptual simplicity, the linear least-squares (LLSQ) method for solving this problem has generally been limited to situations in which the number n_i of counts in each bin i is not too small, conventionally more than 5-30. It seems to be widely believed that the failure of the LLSQ method for small counts is due to the failure of the Poisson distribution to be even approximately normal for small numbers. The cause is more accurately the strong anticorrelation between the data and the weights w_i in the weighted LLSQ method when the square root of n_i, instead of the square root of its expected value nbar_i = E(n_i), is used to approximate the uncertainties sigma_i in the data. We show in an appendix that, avoiding this approximation, the correct equations for the Poisson LLSQ (PLLSQ) problem are actually identical to those for the maximum likelihood estimate using the exact Poisson distribution. We apply the method to solve a problem in high-resolution gamma-ray spectroscopy for the JPL High-Resolution Gamma-Ray Spectrometer flown on HEAO 3. Systematic error in subtracting the strong, highly variable background encountered in the low-energy gamma-ray region can be significantly reduced by closely pairing source and background data in short segments. Significant results can be built up by weighted averaging of the net fluxes obtained from the subtraction of many individual source/background pairs. Extension of the approach to complex situations, with multiple cosmic sources and realistic background parameterizations, requires a means of efficiently fitting to data from single scans in the narrow (approximately 1.2 keV for HEAO 3) energy channels of a Ge spectrometer, where the expected number of counts obtained per scan may be very low. Such an analysis system is discussed and compared to the method previously used.
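
    The equivalence of the corrected Poisson least-squares equations with maximum likelihood suggests a simple way to fit a multi-component linear model to low-count data: maximize the exact Poisson log-likelihood of mu = A x directly. The sketch below uses a generic bound-constrained optimizer and a made-up response matrix, so it illustrates the estimator rather than the paper's iterative scheme.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(3)

        # Hypothetical two-component linear model: expected counts mu = A @ x_true.
        A = np.abs(rng.normal(size=(200, 2)))   # nonnegative response matrix
        x_true = np.array([3.0, 0.7])
        counts = rng.poisson(A @ x_true)

        def neg_loglike(x):
            mu = A @ x
            # Poisson log-likelihood up to a constant: sum(n_i*log(mu_i) - mu_i).
            return -(counts * np.log(mu) - mu).sum()

        fit = minimize(neg_loglike, x0=np.ones(2),
                       bounds=[(1e-9, None)] * 2, method="L-BFGS-B")
        print(fit.x)   # close to x_true even when many bins have 0 or 1 counts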

  3. Particle trapping: A key requisite of structure formation and stability of Vlasov–Poisson plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schamel, Hans, E-mail: hans.schamel@uni-bayreuth.de

    2015-04-15

    Particle trapping is shown to control the existence of undamped coherent structures in Vlasov–Poisson plasmas and thereby affects the onset of plasma instability beyond the realm of linear Landau theory.

  4. Static behavior and the effects of thermal cycling in hybrid laminates

    NASA Technical Reports Server (NTRS)

    Liber, T. M.; Daniel, I. M.; Chamis, C. C.

    1977-01-01

    Static stiffness, strength and ultimate strain after thermal cycling were investigated for graphite/Kevlar 49/epoxy and graphite/S-glass/epoxy angle-ply laminates. Tensile stress-strain curves to failure and uniaxial tensile properties were determined, and theoretical predictions of modulus, Poisson's ratio and ultimate strain, based on linear lamination theory, constituent ply properties and measured strength, were made. No significant influence of stacking sequence variations on tensile properties was observed. In general, specimens containing two 0-degree Kevlar or S-glass plies were found to behave linearly to failure, while specimens containing four 0-degree Kevlar or S-glass plies showed some nonlinear behavior.

  5. Overdispersion of the Molecular Clock: Temporal Variation of Gene-Specific Substitution Rates in Drosophila

    PubMed Central

    Hartl, Daniel L.

    2008-01-01

    Simple models of molecular evolution assume that sequences evolve by a Poisson process in which nucleotide or amino acid substitutions occur as rare independent events. In these models, the expected ratio of the variance to the mean of substitution counts equals 1, and substitution processes with a ratio greater than 1 are called overdispersed. Comparing the genomes of 10 closely related species of Drosophila, we extend earlier evidence for overdispersion in amino acid replacements as well as in four-fold synonymous substitutions. The observed deviation from the Poisson expectation can be described as a linear function of the rate at which substitutions occur on a phylogeny, which implies that deviations from the Poisson expectation arise from gene-specific temporal variation in substitution rates. Amino acid sequences show greater temporal variation in substitution rates than do four-fold synonymous sequences. Our findings provide a general phenomenological framework for understanding overdispersion in the molecular clock. Also, the presence of substantial variation in gene-specific substitution rates has broad implications for work in phylogeny reconstruction and evolutionary rate estimation. PMID:18480070

  6. Non-linear properties of metallic cellular materials with a negative Poisson's ratio

    NASA Technical Reports Server (NTRS)

    Choi, J. B.; Lakes, R. S.

    1992-01-01

    Negative Poisson's ratio copper foam was prepared and characterized experimentally. The transformation into re-entrant foam was accomplished by applying sequential permanent compressions above the yield point to achieve a triaxial compression. The Poisson's ratio of the re-entrant foam depended on strain and attained a relative minimum at strains near zero. Poisson's ratio as small as -0.8 was achieved. The strain dependence of properties occurred over a narrower range of strain than in the polymer foams studied earlier. Annealing of the foam resulted in a slightly greater magnitude of negative Poisson's ratio and greater toughness at the expense of a decrease in the Young's modulus.

  7. Linear stability analysis of the Vlasov-Poisson equations in high density plasmas in the presence of crossed fields and density gradients

    NASA Technical Reports Server (NTRS)

    Kaup, D. J.; Hansen, P. J.; Choudhury, S. Roy; Thomas, Gary E.

    1986-01-01

    The equations for the single-particle orbits in a nonneutral high density plasma in the presence of inhomogeneous crossed fields are obtained. Using these orbits, the linearized Vlasov equation is solved as an expansion in the orbital radii in the presence of inhomogeneities and density gradients. A model distribution function is introduced whose cold-fluid limit is exactly the same as that used in many previous studies of the cold-fluid equations. This model function is used to reduce the linearized Vlasov-Poisson equations to a second-order ordinary differential equation for the linearized electrostatic potential whose eigenvalue is the perturbation frequency.

  8. Ergodicity-breaking bifurcations and tunneling in hyperbolic transport models

    NASA Astrophysics Data System (ADS)

    Giona, M.; Brasiello, A.; Crescitelli, S.

    2015-11-01

    One of the main differences between parabolic transport, associated with Langevin equations driven by Wiener processes, and hyperbolic models related to generalized Kac equations driven by Poisson processes, is the occurrence in the latter of multiple stable invariant densities (Frobenius multiplicity) in certain regions of the parameter space. This phenomenon is associated with the occurrence in linear hyperbolic balance equations of a typical bifurcation, referred to as the ergodicity-breaking bifurcation, the properties of which are thoroughly analyzed.

  9. Analyzing Seasonal Variations in Suicide With Fourier Poisson Time-Series Regression: A Registry-Based Study From Norway, 1969-2007.

    PubMed

    Bramness, Jørgen G; Walby, Fredrik A; Morken, Gunnar; Røislien, Jo

    2015-08-01

    Seasonal variation in the number of suicides has long been acknowledged. It has been suggested that this seasonality has declined in recent years, but studies have generally used statistical methods incapable of confirming this. We examined all suicides occurring in Norway during 1969-2007 (more than 20,000 suicides in total) to establish whether seasonality decreased over time. Fitting of additive Fourier Poisson time-series regression models allowed for formal testing of a possible linear decrease in seasonality, or a reduction at a specific point in time, while adjusting for a possible smooth nonlinear long-term change without having to categorize time into discrete yearly units. The models were compared using Akaike's Information Criterion and analysis of variance. A model with a seasonal pattern was significantly superior to a model without one. There was a reduction in seasonality during the period. The model assuming a linear decrease in seasonality and the model assuming a change at a specific point in time were both superior to a model assuming constant seasonality, thus confirming by formal statistical testing that the magnitude of the seasonality in suicides has diminished. The additive Fourier Poisson time-series regression model would also be useful for studying other temporal phenomena with seasonal components. © The Author 2015. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
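
    A minimal version of such a harmonic Poisson regression can be written as a GLM whose linear predictor contains a trend plus sine/cosine terms of the month; the simulated data and the single-harmonic choice below are assumptions for illustration (the statsmodels package is assumed available).

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(7)
        months = np.arange(12 * 30)                       # 30 years of monthly counts
        t = months / 12.0
        season = 0.3 * np.cos(2 * np.pi * months / 12)    # true seasonal signal
        counts = rng.poisson(np.exp(3.0 + 0.01 * t + season))

        X = np.column_stack([
            np.ones_like(t),                  # intercept
            t,                                # long-term (here: linear) trend
            np.sin(2 * np.pi * months / 12),  # first Fourier harmonic
            np.cos(2 * np.pi * months / 12),
        ])
        fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()

        # The amplitude of the fitted harmonic, sqrt(b_sin**2 + b_cos**2),
        # summarizes the magnitude of seasonality.
        b_sin, b_cos = fit.params[2], fit.params[3]
        print(np.hypot(b_sin, b_cos))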

  10. Factors Associated with Post-traumatic Stress Symptoms in Students Who Survived 20 Months after the Sewol Ferry Disaster in Korea

    PubMed Central

    2018-01-01

    Background The Sewol ferry disaster caused national shock and grief in Korea. The present study examined the prevalence and associated factors of post-traumatic stress disorder (PTSD) symptoms among the surviving students 20 months after that disaster. Methods This study was conducted using a cross-sectional design and a sample of 57 students (29 boys and 28 girls) who survived the Sewol ferry disaster. Data were collected using a questionnaire, including instruments that assessed psychological status. A generalized linear model using a log link and Poisson distribution was performed to identify factors associated with PTSD symptoms. Results The results showed that 26.3% of participants were classified in the clinical group by the Child Report of Post-traumatic Symptoms score. Based on a generalized linear model, Poisson distribution, and log link analyses, PTSD symptoms were positively correlated with the number of exposed traumatic events, peers and social support, peri-traumatic dissociation and post-traumatic negative beliefs, and emotional difficulties. On the other hand, PTSD symptoms were negatively correlated with psychological well-being, family cohesion, post-traumatic social support, receiving care at a psychiatry clinic, and female gender. Conclusion This study uncovered risk and protective factors of PTSD in disaster-exposed adolescents. The implications of these findings are considered in relation to determining assessment and interventional strategies aimed at helping survivors following similar traumatic experiences. PMID:29495137

  11. A Fast Solver for Implicit Integration of the Vlasov--Poisson System in the Eulerian Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garrett, C. Kristopher; Hauck, Cory D.

    In this paper, we present a domain decomposition algorithm to accelerate the solution of Eulerian-type discretizations of the linear, steady-state Vlasov equation. The steady-state solver then forms a key component in the implementation of fully implicit or nearly fully implicit temporal integrators for the nonlinear Vlasov--Poisson system. The solver relies on a particular decomposition of phase space that enables the use of sweeping techniques commonly used in radiation transport applications. The original linear system for the phase space unknowns is then replaced by a smaller linear system involving only unknowns on the boundary between subdomains, which can then be solved efficiently with Krylov methods such as GMRES. Steady-state solves are combined to form an implicit Runge--Kutta time integrator, and the Vlasov equation is coupled self-consistently to the Poisson equation via a linearized procedure or a nonlinear fixed-point method for the electric field. Finally, numerical results for standard test problems demonstrate the efficiency of the domain decomposition approach when compared to the direct application of an iterative solver to the original linear system.

  12. A Fast Solver for Implicit Integration of the Vlasov--Poisson System in the Eulerian Framework

    DOE PAGES

    Garrett, C. Kristopher; Hauck, Cory D.

    2018-04-05

    In this paper, we present a domain decomposition algorithm to accelerate the solution of Eulerian-type discretizations of the linear, steady-state Vlasov equation. The steady-state solver then forms a key component in the implementation of fully implicit or nearly fully implicit temporal integrators for the nonlinear Vlasov--Poisson system. The solver relies on a particular decomposition of phase space that enables the use of sweeping techniques commonly used in radiation transport applications. The original linear system for the phase space unknowns is then replaced by a smaller linear system involving only unknowns on the boundary between subdomains, which can then be solved efficiently with Krylov methods such as GMRES. Steady-state solves are combined to form an implicit Runge--Kutta time integrator, and the Vlasov equation is coupled self-consistently to the Poisson equation via a linearized procedure or a nonlinear fixed-point method for the electric field. Finally, numerical results for standard test problems demonstrate the efficiency of the domain decomposition approach when compared to the direct application of an iterative solver to the original linear system.

  13. Nonlinear and anisotropic tensile properties of graft materials used in soft tissue applications.

    PubMed

    Yoder, Jonathon H; Elliott, Dawn M

    2010-05-01

    The mechanical properties of extracellular matrix grafts that are intended to augment or replace soft tissues should be comparable to the native tissue. Such grafts are often used in fiber-reinforced tissue applications that undergo multi-axial loading, and therefore knowledge of the anisotropic and nonlinear properties is needed, including the moduli and Poisson's ratio in two orthogonal directions within the plane of the graft. The objective of this study was to measure the tensile mechanical properties of several marketed grafts: Alloderm, Restore, CuffPatch, and OrthADAPT. The degree of anisotropy and non-linearity within each graft was evaluated from uniaxial tensile tests and compared to the native tissue. The Alloderm graft was anisotropic in both the toe- and linear-region of the stress-strain response, was highly nonlinear, and generally had low properties. The Restore and CuffPatch grafts had similar stress-strain responses, were largely isotropic, had a linear-region modulus of 18 MPa, and were nonlinear. OrthADAPT was anisotropic in the linear region (131 MPa, vs 47 MPa in the toe region) and was highly nonlinear. The Poisson's ratio for all grafts was between 0.4 and 0.7, except for the parallel orientation of Restore, which was greater than 1.0. An informed understanding of how the available grafts perform mechanically will allow the physician to better assess which graft to apply for a given application. Copyright 2010 Elsevier Ltd. All rights reserved.

  14. Low-Rank Correction Methods for Algebraic Domain Decomposition Preconditioners

    DOE PAGES

    Li, Ruipeng; Saad, Yousef

    2017-08-01

    This study presents a parallel preconditioning method for distributed sparse linear systems, based on an approximate inverse of the original matrix, that adopts a general framework of distributed sparse matrices and exploits domain decomposition (DD) and low-rank corrections. The DD approach decouples the matrix and, once inverted, a low-rank approximation is applied by exploiting the Sherman--Morrison--Woodbury formula, which yields two variants of the preconditioning methods. The low-rank expansion is computed by the Lanczos procedure with reorthogonalizations. Numerical experiments indicate that, when combined with Krylov subspace accelerators, this preconditioner can be efficient and robust for solving symmetric sparse linear systems. Comparisons with pARMS, a DD-based parallel incomplete LU (ILU) preconditioning method, are presented for solving Poisson's equation and linear elasticity problems.
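
    For reference, the Sherman-Morrison-Woodbury identity exploited by such low-rank corrections is the standard one below (generic notation: A for the decoupled matrix, U, C, V for the low-rank factors; this is not the paper's own notation):

        (A + U C V)^{-1} = A^{-1} - A^{-1} U \left(C^{-1} + V A^{-1} U\right)^{-1} V A^{-1},

    so applying the corrected inverse requires only solves with A plus a small dense solve whose size equals the rank of the correction.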

  15. Low-Rank Correction Methods for Algebraic Domain Decomposition Preconditioners

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Ruipeng; Saad, Yousef

    This study presents a parallel preconditioning method for distributed sparse linear systems, based on an approximate inverse of the original matrix, that adopts a general framework of distributed sparse matrices and exploits domain decomposition (DD) and low-rank corrections. The DD approach decouples the matrix and, once inverted, a low-rank approximation is applied by exploiting the Sherman--Morrison--Woodbury formula, which yields two variants of the preconditioning methods. The low-rank expansion is computed by the Lanczos procedure with reorthogonalizations. Numerical experiments indicate that, when combined with Krylov subspace accelerators, this preconditioner can be efficient and robust for solving symmetric sparse linear systems. Comparisons with pARMS, a DD-based parallel incomplete LU (ILU) preconditioning method, are presented for solving Poisson's equation and linear elasticity problems.

  16. Performance of the modified Poisson regression approach for estimating relative risks from clustered prospective data.

    PubMed

    Yelland, Lisa N; Salter, Amy B; Ryan, Philip

    2011-10-15

    Modified Poisson regression, which combines a log Poisson regression model with robust variance estimation, is a useful alternative to log binomial regression for estimating relative risks. Previous studies have shown both analytically and by simulation that modified Poisson regression is appropriate for independent prospective data. This method is often applied to clustered prospective data, despite a lack of evidence to support its use in this setting. The purpose of this article is to evaluate the performance of the modified Poisson regression approach for estimating relative risks from clustered prospective data, by using generalized estimating equations to account for clustering. A simulation study is conducted to compare log binomial regression and modified Poisson regression for analyzing clustered data from intervention and observational studies. Both methods generally perform well in terms of bias, type I error, and coverage. Unlike log binomial regression, modified Poisson regression is not prone to convergence problems. The methods are contrasted by using example data sets from 2 large studies. The results presented in this article support the use of modified Poisson regression as an alternative to log binomial regression for analyzing clustered prospective data when clustering is taken into account by using generalized estimating equations.
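
    A minimal sketch of the approach being evaluated, i.e. a log-link Poisson GEE with an exchangeable working correlation and robust (sandwich) standard errors applied to a binary outcome, is given below on simulated clustered data (assuming the statsmodels package); exponentiated coefficients are read as relative risks.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(11)
        n_clusters, m = 200, 5
        groups = np.repeat(np.arange(n_clusters), m)
        treat = np.repeat(rng.integers(0, 2, size=n_clusters), m)   # cluster-level exposure
        u = np.repeat(rng.normal(scale=0.2, size=n_clusters), m)    # shared cluster effect
        risk = np.clip(np.exp(np.log(0.2) + np.log(1.5) * treat + u), 0, 1)
        y = rng.binomial(1, risk)                                   # binary outcome

        # Modified Poisson regression via GEE: Poisson working likelihood,
        # log link, exchangeable correlation, robust variance.
        X = sm.add_constant(treat)
        fit = sm.GEE(y, X, groups=groups,
                     family=sm.families.Poisson(),
                     cov_struct=sm.cov_struct.Exchangeable()).fit()
        print(np.exp(fit.params[1]))   # estimated relative risk (true value ~1.5)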

  17. Center of Excellence in Theoretical Geoplasma Research

    DTIC Science & Technology

    1993-08-31

    ... of the Balescu-Lenard-Poisson equations for collisional plasmas were reported by J.R. Jasperse of the Geophysics Directorate. Discussions at the ... Chairperson: W. Burke (AFGL), 15:00-16:30. 1. "Solutions of the Linearized Balescu-Lenard-Poisson Equations for a Weakly-Collisional Plasma: Some ..."

  18. Center of Excellence in Theoretical Geoplasma Research

    DTIC Science & Technology

    1989-11-10

    iii) First results of closed-form solutions of the Balescu-Lenard-Poisson equations for collisional plasmas were reported. REPORT, November 10, 1989 ... Basu, "Solutions of the Linearized Balescu-Lenard-Poisson Equations for a Weakly-Collisional Plasma: Some New Results". [51] American Geophysical Union ...

  19. Continuum description of ionic and dielectric shielding for molecular-dynamics simulations of proteins in solution

    NASA Astrophysics Data System (ADS)

    Egwolf, Bernhard; Tavan, Paul

    2004-01-01

    We extend our continuum description of solvent dielectrics in molecular-dynamics (MD) simulations [B. Egwolf and P. Tavan, J. Chem. Phys. 118, 2039 (2003)], which has provided an efficient and accurate solution of the Poisson equation, to ionic solvents as described by the linearized Poisson-Boltzmann (LPB) equation. We start with the formulation of a general theory for the electrostatics of an arbitrarily shaped molecular system, which consists of partially charged atoms and is embedded in a LPB continuum. This theory represents the reaction field induced by the continuum in terms of charge and dipole densities localized within the molecular system. Because these densities cannot be calculated analytically for systems of arbitrary shape, we introduce an atom-based discretization and a set of carefully designed approximations. This allows us to represent the densities by charges and dipoles located at the atoms. Coupled systems of linear equations determine these multipoles and can be rapidly solved by iteration during a MD simulation. The multipoles yield the reaction field forces and energies. Finally, we scrutinize the quality of our approach by comparisons with an analytical solution restricted to perfectly spherical systems and with results of a finite difference method.

  20. Beta-Poisson model for single-cell RNA-seq data analyses.

    PubMed

    Vu, Trung Nghia; Wills, Quin F; Kalari, Krishna R; Niu, Nifang; Wang, Liewei; Rantalainen, Mattias; Pawitan, Yudi

    2016-07-15

    Single-cell RNA-sequencing technology allows detection of gene expression at the single-cell level. One typical feature of the data is a bimodality in the cellular distribution even for highly expressed genes, primarily caused by a proportion of non-expressing cells. The standard and the over-dispersed gamma-Poisson models that are commonly used in bulk-cell RNA-sequencing are not able to capture this property. We introduce a beta-Poisson mixture model that can capture the bimodality of the single-cell gene expression distribution. We further integrate the model into the generalized linear model framework in order to perform differential expression analyses. The whole analytical procedure is called BPSC. The results from several real single-cell RNA-seq datasets indicate that ∼90% of the transcripts are well characterized by the beta-Poisson model; the model-fit from BPSC is better than the fit of the standard gamma-Poisson model in > 80% of the transcripts. Moreover, in differential expression analyses of simulated and real datasets, BPSC performs well against edgeR, a conventional method widely used in bulk-cell RNA-sequencing data, and against scde and MAST, two recent methods specifically designed for single-cell RNA-seq data. An R package BPSC for model fitting and differential expression analyses of single-cell RNA-seq data is available under GPL-3 license at https://github.com/nghiavtr/BPSC CONTACT: yudi.pawitan@ki.se or mattias.rantalainen@ki.se Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
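
    To convey the shape of the model rather than the BPSC implementation itself, a minimal generative sketch of a beta-Poisson count is given below: the Poisson rate is a Beta-distributed fraction scaled by a maximal expression level, which reproduces the zero-rich, bimodal pattern described above. All parameter names and values are illustrative only.

        import numpy as np

        def rbetapois(n, alpha, beta, lam, rng=None):
            # Beta-Poisson draw: p ~ Beta(alpha, beta), count ~ Poisson(lam * p).
            # Small alpha pushes mass towards zero, giving the bimodal
            # single-cell expression pattern (many dropouts plus an expressed mode).
            rng = np.random.default_rng(rng)
            p = rng.beta(alpha, beta, size=n)
            return rng.poisson(lam * p)

        counts = rbetapois(5000, alpha=0.3, beta=1.5, lam=50.0, rng=0)
        print((counts == 0).mean(), counts.mean())   # large zero fraction plus expressed cells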

  1. Questionable Validity of Poisson Assumptions in a Combined Loglinear/MDS Mapping Model.

    ERIC Educational Resources Information Center

    Gleason, John M.

    1993-01-01

    This response to an earlier article on a combined log-linear/MDS model for mapping journals by citation analysis discusses the underlying assumptions of the Poisson model with respect to characteristics of the citation process. The importance of empirical data analysis is also addressed. (nine references) (LRW)

  2. Transport of Multivalent Electrolyte Mixtures in Micro- and Nanochannels

    DTIC Science & Technology

    2013-11-08

    ... equations for this process are the unsteady Navier-Stokes equations along with continuity and the Poisson-Nernst-Planck system for the electrostatic part ... about five times the Debye screening length D (the 1/e length scale for the potential from the solution of the linearized Poisson-Boltzmann equation).

  3. Prescription-induced jump distributions in multiplicative Poisson processes.

    PubMed

    Suweis, Samir; Porporato, Amilcare; Rinaldo, Andrea; Maritan, Amos

    2011-06-01

    Generalized Langevin equations (GLE) with multiplicative white Poisson noise pose the usual prescription dilemma leading to different evolution equations (master equations) for the probability distribution. Contrary to the case of multiplicative Gaussian white noise, the Stratonovich prescription does not correspond to the well-known midpoint (or any other intermediate) prescription. By introducing an inertial term in the GLE, we show that the Itô and Stratonovich prescriptions naturally arise depending on two time scales, one induced by the inertial term and the other determined by the jump event. We also show that, when the multiplicative noise is linear in the random variable, one prescription can be made equivalent to the other by a suitable transformation in the jump probability distribution. We apply these results to a recently proposed stochastic model describing the dynamics of primary soil salinization, in which the salt mass balance within the soil root zone requires the analysis of different prescriptions arising from the resulting stochastic differential equation forced by multiplicative white Poisson noise, the features of which are tailored to the characters of the daily precipitation. A method is finally suggested to infer the most appropriate prescription from the data.

  4. Prescription-induced jump distributions in multiplicative Poisson processes

    NASA Astrophysics Data System (ADS)

    Suweis, Samir; Porporato, Amilcare; Rinaldo, Andrea; Maritan, Amos

    2011-06-01

    Generalized Langevin equations (GLE) with multiplicative white Poisson noise pose the usual prescription dilemma leading to different evolution equations (master equations) for the probability distribution. Contrary to the case of multiplicative Gaussian white noise, the Stratonovich prescription does not correspond to the well-known midpoint (or any other intermediate) prescription. By introducing an inertial term in the GLE, we show that the Itô and Stratonovich prescriptions naturally arise depending on two time scales, one induced by the inertial term and the other determined by the jump event. We also show that, when the multiplicative noise is linear in the random variable, one prescription can be made equivalent to the other by a suitable transformation in the jump probability distribution. We apply these results to a recently proposed stochastic model describing the dynamics of primary soil salinization, in which the salt mass balance within the soil root zone requires the analysis of different prescriptions arising from the resulting stochastic differential equation forced by multiplicative white Poisson noise, the features of which are tailored to the characters of the daily precipitation. A method is finally suggested to infer the most appropriate prescription from the data.

  5. Effect of Temperature on Mechanical Properties of Nanoclay Reinforced Polymeric Nanocomposites. Part 1. Experimental Results

    DTIC Science & Technology

    2012-04-23

    Temperature and nanoclay reinforcement also affect the Poisson's ratio, but this effect is less significant. In general, as the temperature increases, the Poisson's ratio also increases. However, an increase in nanoclay reinforcement generally reduces the Poisson's ratio. It is also noted that the type of resin used may have a significant effect on the ...

  6. Nambu-Poisson gauge theory

    NASA Astrophysics Data System (ADS)

    Jurčo, Branislav; Schupp, Peter; Vysoký, Jan

    2014-06-01

    We generalize noncommutative gauge theory using Nambu-Poisson structures to obtain a new type of gauge theory with higher brackets and gauge fields. The approach is based on covariant coordinates and higher versions of the Seiberg-Witten map. We construct a covariant Nambu-Poisson gauge theory action, give its first order expansion in the Nambu-Poisson tensor and relate it to a Nambu-Poisson matrix model.

  7. Use of instrumental variables in the analysis of generalized linear models in the presence of unmeasured confounding with applications to epidemiological research.

    PubMed

    Johnston, K M; Gustafson, P; Levy, A R; Grootendorst, P

    2008-04-30

    A major, often unstated, concern of researchers carrying out epidemiological studies of medical therapy is the potential impact on validity if estimates of treatment are biased due to unmeasured confounders. One technique for obtaining consistent estimates of treatment effects in the presence of unmeasured confounders is instrumental variables analysis (IVA). This technique has been well developed in the econometrics literature and is being increasingly used in epidemiological studies. However, the approach to IVA that is most commonly used in such studies is based on linear models, while many epidemiological applications make use of non-linear models, specifically generalized linear models (GLMs) such as logistic or Poisson regression. Here we present a simple method for applying IVA within the class of GLMs using the generalized method of moments approach. We explore some of the theoretical properties of the method and illustrate its use within both a simulation example and an epidemiological study where unmeasured confounding is suspected to be present. We estimate the effects of beta-blocker therapy on one-year all-cause mortality after an incident hospitalization for heart failure, in the absence of data describing disease severity, which is believed to be a confounder. 2008 John Wiley & Sons, Ltd
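
    One compact way to see the generalized method of moments idea for an exponential-mean (Poisson-type) model with an instrument is the multiplicative-residual moment condition E[z(y*exp(-x*beta) - 1)] = 0, used here as a stand-in for the moment conditions of the paper rather than a reproduction of them. The just-identified sketch below solves the sample version with a root finder on simulated data containing an unmeasured confounder.

        import numpy as np
        from scipy.optimize import fsolve

        rng = np.random.default_rng(21)
        n = 5000
        z = rng.normal(size=n)                       # instrument
        u = rng.normal(size=n)                       # unmeasured confounder
        x = 0.8 * z + u + rng.normal(size=n)         # exposure, confounded by u
        y = rng.poisson(np.exp(0.2 + 0.5 * x + u))   # outcome also depends on u

        Z = np.column_stack([np.ones(n), z])         # instruments (constant + z)
        X = np.column_stack([np.ones(n), x])         # regressors (constant + x)

        def moments(beta):
            # Sample analogue of E[Z' (y * exp(-X beta) - 1)] = 0 (just identified).
            resid = y * np.exp(-(X @ beta)) - 1.0
            return Z.T @ resid / n

        beta_gmm = fsolve(moments, x0=np.array([np.log(y.mean()), 0.0]))
        print(beta_gmm[1])   # should be close to the true exposure effect 0.5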

  8. Strongly nonlinear composite dielectrics: A perturbation method for finding the potential field and bulk effective properties

    NASA Astrophysics Data System (ADS)

    Blumenfeld, Raphael; Bergman, David J.

    1991-10-01

    A class of strongly nonlinear composite dielectrics is studied. We develop a general method to reduce the scalar-potential-field problem to the solution of a set of linear Poisson-type equations in rescaled coordinates. The method is applicable for a large variety of nonlinear materials. For a power-law relation between the displacement and the electric fields, it is used to solve explicitly for the value of the bulk effective dielectric constant ɛe to second order in the fluctuations of its local value. A similar procedure for the vector potential, whose curl is the displacement field, yields a quantity analogous to the inverse dielectric constant in linear dielectrics. The bulk effective dielectric constant is given by a set of linear integral expressions in the rescaled coordinates and exact bounds for it are derived.

  9. Factors Associated with Post-traumatic Stress Symptoms in Students Who Survived 20 Months after the Sewol Ferry Disaster in Korea.

    PubMed

    Lee, So Hee; Kim, Eun Ji; Noh, Jin Won; Chae, Jeong Ho

    2018-03-12

    The Sewol ferry disaster caused national shock and grief in Korea. The present study examined the prevalence and associated factors of post-traumatic stress disorder (PTSD) symptoms among the surviving students 20 months after that disaster. This study was conducted using a cross-sectional design and a sample of 57 students (29 boys and 28 girls) who survived the Sewol ferry disaster. Data were collected using a questionnaire, including instruments that assessed psychological status. A generalized linear model using a log link and Poisson distribution was performed to identify factors associated with PTSD symptoms. The results showed that 26.3% of participants were classified in the clinical group by the Child Report of Post-traumatic Symptoms score. Based on a generalized linear model, Poisson distribution, and log link analyses, PTSD symptoms were positively correlated with the number of exposed traumatic events, peers and social support, peri-traumatic dissociation and post-traumatic negative beliefs, and emotional difficulties. On the other hand, PTSD symptoms were negatively correlated with psychological well-being, family cohesion, post-traumatic social support, receiving care at a psychiatry clinic, and female gender. This study uncovered risk and protective factors of PTSD in disaster-exposed adolescents. The implications of these findings are considered in relation to determining assessment and interventional strategies aimed at helping survivors following similar traumatic experiences. © 2018 The Korean Academy of Medical Sciences.

  10. Super-stable Poissonian structures

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo

    2012-10-01

    In this paper we characterize classes of Poisson processes whose statistical structures are super-stable. We consider a flow generated by a one-dimensional ordinary differential equation, and an ensemble of particles ‘surfing’ the flow. The particles start from random initial positions, and are propagated along the flow by stochastic ‘wave processes’ with general statistics and general cross correlations. Setting the initial positions to be Poisson processes, we characterize the classes of Poisson processes that render the particles’ positions—at all times, and invariantly with respect to the wave processes—statistically identical to their initial positions. These Poisson processes are termed ‘super-stable’ and facilitate the generalization of the notion of stationary distributions far beyond the realm of Markov dynamics.

  11. Impact of a New Law to Reduce the Legal Blood Alcohol Concentration Limit - A Poisson Regression Analysis and Descriptive Approach.

    PubMed

    Nistal-Nuño, Beatriz

    2017-03-31

    In Chile, a new law introduced in March 2012 lowered the blood alcohol concentration (BAC) limit for impaired drivers from 0.1% to 0.08% and the BAC limit for driving under the influence of alcohol from 0.05% to 0.03%, but its effectiveness remains uncertain. The goal of this investigation was to evaluate the effects of this enactment on road traffic injuries and fatalities in Chile. A retrospective cohort study was conducted. Data were analyzed using a descriptive approach and generalized linear models, specifically Poisson regression, modelling deaths and injuries in a series of additive log-linear models accounting for the effects of law implementation, month influence, a linear time trend and population exposure. A review of national databases in Chile was conducted from 2003 to 2014 to evaluate the monthly rates of traffic fatalities and injuries associated with alcohol and in total. A 28.1 percent decrease was observed in the monthly rate of traffic fatalities related to alcohol as compared to before the law (P<0.001). Adding a linear time trend as a predictor, the decrease was 20.9 percent (P<0.001). There was a reduction of 10.5 percent in the monthly rate of traffic injuries related to alcohol as compared to before the law (P<0.001). Adding a linear time trend as a predictor, the decrease was 24.8 percent (P<0.001). Positive results followed from this new 'zero-tolerance' law implemented in 2012 in Chile. Chile experienced a significant reduction in alcohol-related traffic fatalities and injuries, making this a successful public health intervention.
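
    The additive log-linear Poisson structure described above can be sketched as follows: monthly fatality counts regressed on a post-law indicator, month-of-year effects and a linear trend, with log(population) as an offset. The file name and columns (deaths, population, law, month, t) are hypothetical stand-ins, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("monthly.csv")     # hypothetical monthly time series

model = smf.glm(
    "deaths ~ law + C(month) + t",          # law indicator, month effects, trend
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["population"]),        # population exposure
)
res = model.fit()
rate_change = 100 * (np.exp(res.params["law"]) - 1)
print(f"change in monthly fatality rate after the law: {rate_change:.1f}%")
```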

  12. Fractional Poisson Fields and Martingales

    NASA Astrophysics Data System (ADS)

    Aletti, Giacomo; Leonenko, Nikolai; Merzbach, Ely

    2018-02-01

    We present new properties for the Fractional Poisson process (FPP) and the Fractional Poisson field on the plane. A martingale characterization for FPPs is given. We extend this result to Fractional Poisson fields, obtaining some other characterizations. The fractional differential equations are studied. We consider a more general Mixed-Fractional Poisson process and show that this process is the stochastic solution of a system of fractional differential-difference equations. Finally, we give some simulations of the Fractional Poisson field on the plane.

  13. A differential equation for the Generalized Born radii.

    PubMed

    Fogolari, Federico; Corazza, Alessandra; Esposito, Gennaro

    2013-06-28

    The Generalized Born (GB) model offers a convenient way of representing electrostatics in complex macromolecules like proteins or nucleic acids. The computation of atomic GB radii is currently performed by different non-local approaches involving volume or surface integrals. Here we obtain a non-linear second-order partial differential equation for the Generalized Born radius, which may be solved using local iterative algorithms. The equation is derived under the assumption that the usual GB approximation to the reaction field obeys Laplace's equation. The equation admits as particular solutions the correct GB radii for the sphere and the plane. The tests performed on a set of 55 different proteins show an overall agreement with other reference GB models and "perfect" Poisson-Boltzmann based values.

  14. Uncertainty based pressure reconstruction from velocity measurement with generalized least squares

    NASA Astrophysics Data System (ADS)

    Zhang, Jiacheng; Scalo, Carlo; Vlachos, Pavlos

    2017-11-01

    A method using generalized least squares reconstruction of the instantaneous pressure field from velocity measurements and velocity uncertainty is introduced and applied to both planar and volumetric flow data. Pressure gradients are computed on a staggered grid from the flow acceleration. The variance-covariance matrix of the pressure gradients is evaluated from the velocity uncertainty by approximating the pressure gradient error as a linear combination of velocity errors. An overdetermined system of linear equations which relates the pressure and the computed pressure gradients is formulated and then solved using generalized least squares with the variance-covariance matrix of the pressure gradients. By comparing the reconstructed pressure field against other methods, such as solving the pressure Poisson equation, omni-directional integration, and ordinary least squares reconstruction, the generalized least squares method is found to be more robust to noise in the velocity measurement. The improvement in the reconstructed pressure becomes more pronounced as the velocity measurement becomes less accurate and more heteroscedastic. The uncertainty of the reconstructed pressure field is also quantified and compared across the different methods.
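
    The generalized least squares step itself can be sketched in a few lines: given a linear system A p ≈ g relating unknown pressures p to computed pressure gradients g, and a covariance matrix Sigma of the gradient errors propagated from the velocity uncertainty, each equation is weighted by the inverse covariance. Names and shapes below are illustrative, not the paper's implementation.

```python
import numpy as np

def gls_solve(A, g, Sigma):
    """Solve the overdetermined system A p = g in the generalized least squares sense."""
    W = np.linalg.inv(Sigma)        # inverse variance-covariance of the gradients
    lhs = A.T @ W @ A
    rhs = A.T @ W @ g
    return np.linalg.solve(lhs, rhs)

# ordinary least squares is recovered as the special case Sigma = identity:
# p_ols = gls_solve(A, g, np.eye(len(g)))
```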

  15. Collisional effects on the numerical recurrence in Vlasov-Poisson simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pezzi, Oreste; Valentini, Francesco; Camporeale, Enrico

    The initial state recurrence in numerical simulations of the Vlasov-Poisson system is a well-known phenomenon. Here, we study the effect on recurrence of artificial collisions modeled through the Lenard-Bernstein operator [A. Lenard and I. B. Bernstein, Phys. Rev. 112, 1456–1459 (1958)]. By decomposing the linear Vlasov-Poisson system in the Fourier-Hermite space, the recurrence problem is investigated in the linear regime of the damping of a Langmuir wave and of the onset of the bump-on-tail instability. The analysis is then confirmed and extended to the nonlinear regime through an Eulerian collisional Vlasov-Poisson code. It is found that, despite being routinely used, an artificial collisionality is not a viable way of preventing recurrence in numerical simulations without compromising the kinetic nature of the solution. Moreover, it is shown how numerical effects associated with the generation of fine velocity scales can modify the physical features of the system evolution even in the nonlinear regime. This means that filamentation-like phenomena, usually associated with low-amplitude fluctuation contexts, can play a role even in the nonlinear regime.

  16. Genetic parameters and signatures of selection in two divergent laying hen lines selected for feather pecking behaviour.

    PubMed

    Grams, Vanessa; Wellmann, Robin; Preuß, Siegfried; Grashorn, Michael A; Kjaer, Jörgen B; Bessei, Werner; Bennewitz, Jörn

    2015-09-30

    Feather pecking (FP) in laying hens is a well-known and multi-factorial behaviour with a genetic background. In a selection experiment, two lines were developed for 11 generations for high (HFP) and low (LFP) feather pecking, respectively. Starting with the second generation of selection, there was a constant difference in mean number of FP bouts between both lines. We used the data from this experiment to perform a quantitative genetic analysis and to map selection signatures. Pedigree and phenotypic data were available for the last six generations of both lines. Univariate quantitative genetic analyses were conducted using mixed linear and generalized mixed linear models assuming a Poisson distribution. Selection signatures were mapped using 33,228 single nucleotide polymorphisms (SNPs) genotyped on 41 HFP and 34 LFP individuals of generation 11. For each SNP, we estimated Wright's fixation index (FST). We tested the null hypothesis that FST is driven purely by genetic drift against the alternative hypothesis that it is driven by genetic drift and selection. The mixed linear model failed to analyze the LFP data because of the large number of 0s in the observation vector. The Poisson model fitted the data well and revealed a small but continuous genetic trend in both lines. Most of the 17 genome-wide significant SNPs were located on chromosomes 3 and 4. Thirteen clusters with at least two significant SNPs within an interval of 3 Mb maximum were identified. Two clusters were mapped on chromosomes 3, 4, 8 and 19. Of the 17 genome-wide significant SNPs, 12 were located within the identified clusters. This indicates a non-random distribution of significant SNPs and points to the presence of selection sweeps. Data on FP should be analysed using generalised linear mixed models assuming a Poisson distribution, especially if the number of FP bouts is small and the distribution is heavily peaked at 0. The FST-based approach was suitable to map selection signatures that need to be confirmed by linkage or association mapping.
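
    As an illustration of the per-SNP differentiation measure mentioned above, the basic Wright's F_ST computation from the allele frequencies in the two lines is sketched below (the drift-vs-selection significance test itself is not reproduced, and the authors' estimator may differ in detail).

```python
def fst(p_hfp, p_lfp):
    """Basic F_ST for one SNP from allele frequencies in the HFP and LFP lines."""
    p_bar = (p_hfp + p_lfp) / 2.0
    h_t = 2.0 * p_bar * (1.0 - p_bar)                 # expected total heterozygosity
    h_s = (2.0 * p_hfp * (1.0 - p_hfp)
           + 2.0 * p_lfp * (1.0 - p_lfp)) / 2.0       # mean within-line heterozygosity
    return (h_t - h_s) / h_t if h_t > 0 else 0.0

print(fst(0.9, 0.2))   # strongly differentiated SNP -> high F_ST
```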

  17. [Use of multiple regression models in observational studies (1970-2013) and requirements of the STROBE guidelines in Spanish scientific journals].

    PubMed

    Real, J; Cleries, R; Forné, C; Roso-Llorach, A; Martínez-Sánchez, J M

    In medicine and biomedical research, statistical techniques like logistic, linear, Cox and Poisson regression are widely known. The main objective is to describe the evolution of multivariate techniques used in observational studies indexed in PubMed (1970-2013), and to check the requirements of the STROBE guidelines in the author guidelines of Spanish journals indexed in PubMed. A targeted PubMed search was performed to identify papers that used logistic, linear, Cox and Poisson models. Furthermore, a review was also made of the author guidelines of journals published in Spain and indexed in PubMed and Web of Science. Only 6.1% of the indexed manuscripts included a term related to multivariate analysis, increasing from 0.14% in 1980 to 12.3% in 2013. In 2013, 6.7%, 2.5%, 3.5%, and 0.31% of the manuscripts contained terms related to logistic, linear, Cox and Poisson regression, respectively. On the other hand, 12.8% of the journals' author guidelines explicitly recommend following the STROBE guidelines, and 35.9% recommend the CONSORT guidelines. A low percentage of Spanish scientific journals indexed in PubMed include the STROBE statement requirement in their author guidelines. Multivariate regression models, such as logistic, linear, Cox and Poisson regression, are increasingly used in published observational studies, both internationally and in journals published in Spanish. Copyright © 2015 Sociedad Española de Médicos de Atención Primaria (SEMERGEN). Publicado por Elsevier España, S.L.U. All rights reserved.

  18. Modelling infant mortality rate in Central Java, Indonesia use generalized poisson regression method

    NASA Astrophysics Data System (ADS)

    Prahutama, Alan; Sudarno

    2018-05-01

    The infant mortality rate is the number of deaths under one year of age occurring among the live births in a given geographical area during a given year, per 1,000 live births occurring among the population of the given geographical area during the same year. This problem needs to be addressed because it is an important element of a country's economic development. A high infant mortality rate will disrupt the stability of a country as it relates to the sustainability of the population in the country. One regression model that can be used to analyze the relationship between a dependent variable Y in the form of discrete data and an independent variable X is the Poisson regression model. Regression models commonly used when the dependent variable is discrete include Poisson regression, negative binomial regression and generalized Poisson regression. In this research, generalized Poisson regression modelling gives a better AIC value than Poisson regression. The most significant variable is the number of health facilities (X1), while the variable with the greatest influence on the infant mortality rate is average breastfeeding (X9).
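
    The comparison described above can be sketched with the Poisson and generalized Poisson count models available in statsmodels, selecting between them by AIC. The data file and column names (imr, X1, X9) are hypothetical stand-ins for the study's district-level variables.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.discrete.discrete_model import Poisson, GeneralizedPoisson

df = pd.read_csv("central_java.csv")           # hypothetical input
y = df["imr"]                                  # infant mortality count
X = sm.add_constant(df[["X1", "X9"]])          # health facilities, breastfeeding

poisson_fit = Poisson(y, X).fit(disp=False)
genpoisson_fit = GeneralizedPoisson(y, X).fit(disp=False)

print("Poisson AIC:            ", poisson_fit.aic)
print("Generalized Poisson AIC:", genpoisson_fit.aic)   # smaller is better
```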

  19. Nonlinear Poisson Equation for Heterogeneous Media

    PubMed Central

    Hu, Langhua; Wei, Guo-Wei

    2012-01-01

    The Poisson equation is a widely accepted model for electrostatic analysis. However, the Poisson equation is derived based on electric polarizations in a linear, isotropic, and homogeneous dielectric medium. This article introduces a nonlinear Poisson equation to take into consideration of hyperpolarization effects due to intensive charges and possible nonlinear, anisotropic, and heterogeneous media. Variational principle is utilized to derive the nonlinear Poisson model from an electrostatic energy functional. To apply the proposed nonlinear Poisson equation for the solvation analysis, we also construct a nonpolar solvation energy functional based on the nonlinear Poisson equation by using the geometric measure theory. At a fixed temperature, the proposed nonlinear Poisson theory is extensively validated by the electrostatic analysis of the Kirkwood model and a set of 20 proteins, and the solvation analysis of a set of 17 small molecules whose experimental measurements are also available for a comparison. Moreover, the nonlinear Poisson equation is further applied to the solvation analysis of 21 compounds at different temperatures. Numerical results are compared to theoretical prediction, experimental measurements, and those obtained from other theoretical methods in the literature. A good agreement between our results and experimental data as well as theoretical results suggests that the proposed nonlinear Poisson model is a potentially useful model for electrostatic analysis involving hyperpolarization effects. PMID:22947937

  20. On some Aitken-like acceleration of the Schwarz method

    NASA Astrophysics Data System (ADS)

    Garbey, M.; Tromeur-Dervout, D.

    2002-12-01

    In this paper we present a family of domain decomposition methods based on Aitken-like acceleration of the Schwarz method seen as an iterative procedure with a linear rate of convergence. We first present the so-called Aitken-Schwarz procedure for linear differential operators. The solver can be a direct solver when applied to the Helmholtz problem with a five-point finite difference scheme on regular grids. We then introduce the Steffensen-Schwarz variant, which is an iterative domain decomposition solver that can be applied to linear and nonlinear problems. We show that these solvers have reasonable numerical efficiency compared to classical fast solvers for the Poisson problem or multigrid for more general linear and nonlinear elliptic problems. However, the salient feature of our method is that our algorithm has high tolerance to slow networks in the context of distributed parallel computing and is attractive, generally speaking, for use with computer architectures whose performance is limited by memory bandwidth rather than by the floating-point performance of the CPU. This is nowadays the case for most parallel computers using the RISC processor architecture. We will illustrate this highly desirable property of our algorithm with large-scale computing experiments.
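
    The core idea behind Aitken-type acceleration can be illustrated on any fixed-point iteration with a linear convergence rate: Aitken's delta-squared formula extrapolates three successive iterates to approximately remove the dominant geometric error term. The scalar toy example below illustrates the principle only; it is not the domain-decomposition solver of the paper.

```python
import math

def aitken(x0, x1, x2):
    """Aitken delta-squared extrapolation of three successive iterates."""
    denom = x2 - 2.0 * x1 + x0
    return x2 - (x2 - x1) ** 2 / denom if denom != 0.0 else x2

g = math.cos                      # fixed-point map with linear convergence
x0 = 1.0
x1 = g(x0)
x2 = g(x1)
print("plain third iterate:", x2)                  # ~0.858
print("Aitken accelerated: ", aitken(x0, x1, x2))  # ~0.728; fixed point is ~0.739
```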

  1. Drag Minimization for Wings and Bodies in Supersonic Flow

    NASA Technical Reports Server (NTRS)

    Heaslet, Max A; Fuller, Franklyn B

    1958-01-01

    The minimization of inviscid fluid drag is studied for aerodynamic shapes satisfying the conditions of linearized theory, and subject to imposed constraints on lift, pitching moment, base area, or volume. The problem is transformed to one of determining two-dimensional potential flows satisfying either Laplace's or Poisson's equations with boundary values fixed by the imposed conditions. A general method for determining integral relations between perturbation velocity components is developed. This analysis is not restricted in application to optimum cases; it may be used for any supersonic wing problem.

  2. PB-AM: An open-source, fully analytical linear poisson-boltzmann solver.

    PubMed

    Felberg, Lisa E; Brookes, David H; Yap, Eng-Hui; Jurrus, Elizabeth; Baker, Nathan A; Head-Gordon, Teresa

    2017-06-05

    We present the open source distributed software package Poisson-Boltzmann Analytical Method (PB-AM), a fully analytical solution to the linearized PB equation, for molecules represented as non-overlapping spherical cavities. The PB-AM software package includes the generation of output files appropriate for visualization using visual molecular dynamics, a Brownian dynamics scheme that uses periodic boundary conditions to simulate dynamics, the ability to specify docking criteria, and offers two different kinetics schemes to evaluate biomolecular association rate constants. Given that PB-AM defines mutual polarization completely and accurately, it can be refactored as a many-body expansion to explore 2- and 3-body polarization. Additionally, the software has been integrated into the Adaptive Poisson-Boltzmann Solver (APBS) software package to make it more accessible to a larger group of scientists, educators, and students that are more familiar with the APBS framework. © 2016 Wiley Periodicals, Inc.

  3. Nonlinear and Anisotropic Tensile Properties of Graft Materials used in Soft Tissue Applications

    PubMed Central

    Yoder, Jonathon H; Elliott, Dawn M

    2010-01-01

    Background: The mechanical properties of extracellular matrix grafts that are intended to augment or replace soft tissues should be comparable to the native tissue. Such grafts are often used in fiber-reinforced tissue applications that undergo multi-axial loading and therefore knowledge of the anisotropic and nonlinear properties is needed, including the moduli and Poisson's ratio in two orthogonal directions within the plane of the graft. The objective of this study was to measure the tensile mechanical properties of several marketed grafts: Alloderm, Restore, CuffPatch, and OrthADAPT. Methods: The degree of anisotropy and nonlinearity within each graft was evaluated from uniaxial tensile tests and compared to their native tissue. Results: The Alloderm graft was anisotropic in both the toe and linear-region of the stress-strain response, was highly nonlinear, and generally had low properties. The Restore and CuffPatch grafts had similar stress-strain responses, were largely isotropic, had a linear-region modulus of 18 MPa, and were nonlinear. OrthADAPT was anisotropic in the linear region (131 vs 47 MPa) and was highly nonlinear. Poisson's ratio for all grafts was between 0.4 and 0.7, except for the parallel orientation of Restore, which was greater than 1.0. Interpretation: Having an informed understanding of how the available grafts perform mechanically will allow for better assessment by the physician for which graft to apply depending upon its application. PMID:20129728

  4. Semiparametric bivariate zero-inflated Poisson models with application to studies of abundance for multiple species

    USGS Publications Warehouse

    Arab, Ali; Holan, Scott H.; Wikle, Christopher K.; Wildhaber, Mark L.

    2012-01-01

    Ecological studies involving counts of abundance, presence–absence or occupancy rates often produce data having a substantial proportion of zeros. Furthermore, these types of processes are typically multivariate and only adequately described by complex nonlinear relationships involving externally measured covariates. Ignoring these aspects of the data and implementing standard approaches can lead to models that fail to provide adequate scientific understanding of the underlying ecological processes, possibly resulting in a loss of inferential power. One method of dealing with data having excess zeros is to consider the class of univariate zero-inflated generalized linear models. However, this class of models fails to address the multivariate and nonlinear aspects associated with the data usually encountered in practice. Therefore, we propose a semiparametric bivariate zero-inflated Poisson model that takes into account both of these data attributes. The general modeling framework is hierarchical Bayes and is suitable for a broad range of applications. We demonstrate the effectiveness of our model through a motivating example on modeling catch per unit area for multiple species using data from the Missouri River Benthic Fishes Study, implemented by the United States Geological Survey.

  5. Adiabatic reduction of a model of stochastic gene expression with jump Markov process.

    PubMed

    Yvinec, Romain; Zhuge, Changjing; Lei, Jinzhi; Mackey, Michael C

    2014-04-01

    This paper considers adiabatic reduction in a model of stochastic gene expression with bursting transcription considered as a jump Markov process. In this model, the process of gene expression with auto-regulation is described by fast/slow dynamics. The production of mRNA is assumed to follow a compound Poisson process occurring at a rate depending on protein levels (the phenomena called bursting in molecular biology) and the production of protein is a linear function of mRNA numbers. When the dynamics of mRNA is assumed to be a fast process (due to faster mRNA degradation than that of protein) we prove that, with appropriate scalings in the burst rate, jump size or translational rate, the bursting phenomena can be transmitted to the slow variable. We show that, depending on the scaling, the reduced equation is either a stochastic differential equation with a jump Poisson process or a deterministic ordinary differential equation. These results are significant because adiabatic reduction techniques seem to have not been rigorously justified for a stochastic differential system containing a jump Markov process. We expect that the results can be generalized to adiabatic methods in more general stochastic hybrid systems.

  6. General solution of the chemical master equation and modality of marginal distributions for hierarchic first-order reaction networks.

    PubMed

    Reis, Matthias; Kromer, Justus A; Klipp, Edda

    2018-01-20

    Multimodality is a phenomenon which complicates the analysis of statistical data based exclusively on mean and variance. Here, we present criteria for multimodality in hierarchic first-order reaction networks, consisting of catalytic and splitting reactions. Those networks are characterized by independent and dependent subnetworks. First, we prove the general solvability of the Chemical Master Equation (CME) for this type of reaction network and thereby extend the class of solvable CME's. Our general solution is analytical in the sense that it allows for a detailed analysis of its statistical properties. Given Poisson/deterministic initial conditions, we then prove the independent species to be Poisson/binomially distributed, while the dependent species exhibit generalized Poisson/Khatri Type B distributions. Generalized Poisson/Khatri Type B distributions are multimodal for an appropriate choice of parameters. We illustrate our criteria for multimodality by several basic models, as well as the well-known two-stage transcription-translation network and Bateman's model from nuclear physics. For both examples, multimodality was previously not reported.

  7. Modeling salt-mediated electrostatics of macromolecules: the discrete surface charge optimization algorithm and its application to the nucleosome.

    PubMed

    Beard, D A; Schlick, T

    2001-01-01

    Much progress has been achieved on quantitative assessment of electrostatic interactions on the all-atom level by molecular mechanics and dynamics, as well as on the macroscopic level by models of continuum solvation. Bridging of the two representations-an area of active research-is necessary for studying integrated functions of large systems of biological importance. Following perspectives of both discrete (N-body) interaction and continuum solvation, we present a new algorithm, DiSCO (Discrete Surface Charge Optimization), for economically describing the electrostatic field predicted by Poisson-Boltzmann theory using a discrete set of Debye-Hückel charges distributed on a virtual surface enclosing the macromolecule. The procedure in DiSCO relies on the linear behavior of the Poisson-Boltzmann equation in the far zone; thus contributions from a number of molecules may be superimposed, and the electrostatic potential, or equivalently the electrostatic field, may be quickly and efficiently approximated by the summation of contributions from the set of charges. The desired accuracy of this approximation is achieved by minimizing the difference between the Poisson-Boltzmann electrostatic field and that produced by the linearized Debye-Hückel approximation using our truncated Newton optimization package. DiSCO is applied here to describe the salt-dependent electrostatic environment of the nucleosome core particle in terms of several hundred surface charges. This representation forms the basis for modeling-by dynamic simulations (or Monte Carlo)-the folding of chromatin. DiSCO can be applied more generally to many macromolecular systems whose size and complexity warrant a model resolution between the all-atom and macroscopic levels. Copyright 2000 John Wiley & Sons, Inc.

  8. A generalized right truncated bivariate Poisson regression model with applications to health data.

    PubMed

    Islam, M Ataharul; Chowdhury, Rafiqul I

    2017-01-01

    A generalized right truncated bivariate Poisson regression model is proposed in this paper. Estimation and tests for goodness of fit and over or under dispersion are illustrated for both untruncated and right truncated bivariate Poisson regression models using marginal-conditional approach. Estimation and test procedures are illustrated for bivariate Poisson regression models with applications to Health and Retirement Study data on number of health conditions and the number of health care services utilized. The proposed test statistics are easy to compute and it is evident from the results that the models fit the data very well. A comparison between the right truncated and untruncated bivariate Poisson regression models using the test for nonnested models clearly shows that the truncated model performs significantly better than the untruncated model.

  10. A more general system for Poisson series manipulation.

    NASA Technical Reports Server (NTRS)

    Cherniack, J. R.

    1973-01-01

    The design of a working Poisson series processor system is described that is more general than those currently in use. This system is the result of a series of compromises among efficiency, generality, ease of programing, and ease of use. The most general form of coefficients that can be multiplied efficiently is pointed out, and the place of general-purpose algebraic systems in celestial mechanics is discussed.

  11. Bayesian analysis of volcanic eruptions

    NASA Astrophysics Data System (ADS)

    Ho, Chih-Hsiang

    1990-10-01

    The simple Poisson model generally gives a good fit to many volcanoes for volcanic eruption forecasting. Nonetheless, empirical evidence suggests that volcanic activity in successive equal time-periods tends to be more variable than a simple Poisson with constant eruptive rate. An alternative model is therefore examined in which the eruptive rate (λ) for a given volcano or cluster(s) of volcanoes is described by a gamma distribution (prior) rather than treated as a constant value as in the assumptions of a simple Poisson model. Bayesian analysis is performed to link the two distributions together to give the aggregate behavior of the volcanic activity. When the Poisson process is expanded to accommodate a gamma mixing distribution on λ, a consequence of this mixed (or compound) Poisson model is that the frequency distribution of eruptions in any given time-period of equal length follows the negative binomial distribution (NBD). Applications of the proposed model and comparisons between the generalized model and simple Poisson model are discussed based on the historical eruptive count data of volcanoes Mauna Loa (Hawaii) and Etna (Italy). Several relevant facts lead to the conclusion that the generalized model is preferable for practical use both in space and time.
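
    A quick numerical illustration (not from the paper) of the key fact used above: if the eruption rate λ is itself gamma-distributed, the resulting counts are negative binomial and hence overdispersed relative to a simple Poisson (variance exceeding the mean). The parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, theta = 2.0, 1.5                       # gamma shape and scale (illustrative)
lam = rng.gamma(alpha, theta, size=100_000)   # random eruptive rates
counts = rng.poisson(lam)                     # gamma-mixed Poisson counts

print(f"simulated: mean = {counts.mean():.3f}, variance = {counts.var():.3f}")
print(f"NB theory: mean = {alpha * theta:.3f}, "
      f"variance = {alpha * theta * (1 + theta):.3f}")   # variance > mean
```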

  12. Nonlinear Poisson equation for heterogeneous media.

    PubMed

    Hu, Langhua; Wei, Guo-Wei

    2012-08-22

    The Poisson equation is a widely accepted model for electrostatic analysis. However, the Poisson equation is derived based on electric polarizations in a linear, isotropic, and homogeneous dielectric medium. This article introduces a nonlinear Poisson equation to take into consideration of hyperpolarization effects due to intensive charges and possible nonlinear, anisotropic, and heterogeneous media. Variational principle is utilized to derive the nonlinear Poisson model from an electrostatic energy functional. To apply the proposed nonlinear Poisson equation for the solvation analysis, we also construct a nonpolar solvation energy functional based on the nonlinear Poisson equation by using the geometric measure theory. At a fixed temperature, the proposed nonlinear Poisson theory is extensively validated by the electrostatic analysis of the Kirkwood model and a set of 20 proteins, and the solvation analysis of a set of 17 small molecules whose experimental measurements are also available for a comparison. Moreover, the nonlinear Poisson equation is further applied to the solvation analysis of 21 compounds at different temperatures. Numerical results are compared to theoretical prediction, experimental measurements, and those obtained from other theoretical methods in the literature. A good agreement between our results and experimental data as well as theoretical results suggests that the proposed nonlinear Poisson model is a potentially useful model for electrostatic analysis involving hyperpolarization effects. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  13. Finite-dimensional integrable systems: A collection of research problems

    NASA Astrophysics Data System (ADS)

    Bolsinov, A. V.; Izosimov, A. M.; Tsonev, D. M.

    2017-05-01

    This article suggests a series of problems related to various algebraic and geometric aspects of integrability. They reflect some recent developments in the theory of finite-dimensional integrable systems such as bi-Poisson linear algebra, Jordan-Kronecker invariants of finite dimensional Lie algebras, the interplay between singularities of Lagrangian fibrations and compatible Poisson brackets, and new techniques in projective geometry.

  14. Function-Space-Based Solution Scheme for the Size-Modified Poisson-Boltzmann Equation in Full-Potential DFT.

    PubMed

    Ringe, Stefan; Oberhofer, Harald; Hille, Christoph; Matera, Sebastian; Reuter, Karsten

    2016-08-09

    The size-modified Poisson-Boltzmann (MPB) equation is an efficient implicit solvation model which also captures electrolytic solvent effects. It combines an account of the dielectric solvent response with a mean-field description of solvated finite-sized ions. We present a general solution scheme for the MPB equation based on a fast function-space-oriented Newton method and a Green's function preconditioned iterative linear solver. In contrast to popular multigrid solvers, this approach allows us to fully exploit specialized integration grids and optimized integration schemes. We describe a corresponding numerically efficient implementation for the full-potential density-functional theory (DFT) code FHI-aims. We show that together with an additional Stern layer correction the DFT+MPB approach can describe the mean activity coefficient of a KCl aqueous solution over a wide range of concentrations. The high sensitivity of the calculated activity coefficient to the employed ionic parameters thereby suggests using extensively tabulated experimental activity coefficients of salt solutions for a systematic parametrization protocol.

  15. Super-integrable Calogero-type systems admit maximal number of Poisson structures

    NASA Astrophysics Data System (ADS)

    Gonera, C.; Nutku, Y.

    2001-07-01

    We present a general scheme for constructing the Poisson structure of super-integrable dynamical systems of which the rational Calogero-Moser system is the most interesting one. This dynamical system is 2N-dimensional with 2N-1 first integrals and our construction yields 2N-1 degenerate Poisson tensors that each admit 2(N-1) Casimirs. Our results are quite generally applicable to all super-integrable systems and form an alternative to the traditional bi-Hamiltonian approach.

  16. Renewal processes based on generalized Mittag-Leffler waiting times

    NASA Astrophysics Data System (ADS)

    Cahoy, Dexter O.; Polito, Federico

    2013-03-01

    The fractional Poisson process has recently attracted experts from several fields of study. Its natural generalization of the ordinary Poisson process made the model more appealing for real-world applications. In this paper, we generalized the standard and fractional Poisson processes through the waiting time distribution, and showed their relations to an integral operator with a generalized Mittag-Leffler function in the kernel. The waiting times of the proposed renewal processes have the generalized Mittag-Leffler and stretched-squashed Mittag-Leffler distributions. Note that the generalizations naturally provide greater flexibility in modeling real-life renewal processes. Algorithms to simulate sample paths and to estimate the model parameters are derived. Note also that these procedures are necessary to make these models more usable in practice. State probabilities and other qualitative or quantitative features of the models are also discussed.
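
    Sample paths of a fractional Poisson process can be simulated from Mittag-Leffler waiting times. The inversion formula used below is the one commonly quoted in the Monte Carlo literature on fractional Poisson processes (it reduces to exponential waiting times for β = 1); it is offered as an illustration under that assumption, not as the algorithm derived in this paper.

```python
import numpy as np

def mittag_leffler_waiting_times(beta, gamma, size, rng):
    """Draw Mittag-Leffler distributed waiting times of order 0 < beta <= 1."""
    u = rng.random(size)
    v = rng.random(size)
    factor = (np.sin(beta * np.pi) / np.tan(beta * np.pi * v)
              - np.cos(beta * np.pi))
    return -gamma * np.log(u) * factor ** (1.0 / beta)

rng = np.random.default_rng(2)
tau = mittag_leffler_waiting_times(beta=0.8, gamma=1.0, size=1000, rng=rng)
arrival_times = np.cumsum(tau)                     # event times of the process
counts = np.searchsorted(arrival_times, np.linspace(0, 50, 11))
print(counts)                                      # N(t) sampled on a coarse grid
```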

  17. Continuum analogues of contragredient Lie algebras (Lie algebras with a Cartan operator and nonlinear dynamical systems)

    NASA Astrophysics Data System (ADS)

    Saveliev, M. V.; Vershik, A. M.

    1989-12-01

    We present an axiomatic formulation of a new class of infinite-dimensional Lie algebras - the generalizations of Z-graded Lie algebras with, generally speaking, an infinite-dimensional Cartan subalgebra and a contiguous set of roots. We call such algebras “continuum Lie algebras.” The simple Lie algebras of constant growth are encapsulated in our formulation. We pay particular attention to the case when the local algebra is parametrized by a commutative algebra while the Cartan operator (the generalization of the Cartan matrix) is a linear operator. Special examples of these algebras are the Kac-Moody algebras, algebras of Poisson brackets, algebras of vector fields on a manifold, current algebras, and algebras with differential or integro-differential Cartan operator. The nonlinear dynamical systems associated with the continuum contragredient Lie algebras are also considered.

  18. Combined analysis of magnetic and gravity anomalies using normalized source strength (NSS)

    NASA Astrophysics Data System (ADS)

    Li, L.; Wu, Y.

    2017-12-01

    Gravity and magnetic fields are potential fields, which leads to inherent non-uniqueness in their interpretation. Combined analysis of magnetic and gravity anomalies based on Poisson's relation is used to determine homologous gravity and magnetic anomalies and decrease this ambiguity. The traditional combined analysis uses the linear regression of the reduction-to-pole (RTP) magnetic anomaly against the first-order vertical derivative of the gravity anomaly, and provides a quantitative or semi-quantitative interpretation by calculating the correlation coefficient, slope and intercept. In this calculation, owing to the effect of remanent magnetization, the RTP anomaly still contains the effect of oblique magnetization. In this case homologous gravity and magnetic anomalies can appear uncorrelated in the linear regression. The normalized source strength (NSS), which is insensitive to remanence, can be computed from the magnetic tensor matrix. Here we present a new combined analysis using the NSS. Based on Poisson's relation, the gravity tensor matrix can be transformed into the pseudomagnetic tensor matrix for the direction of geomagnetic-field magnetization under the homologous condition. The NSS of the pseudomagnetic tensor matrix and of the original magnetic tensor matrix are calculated and a linear regression analysis is carried out. The calculated correlation coefficient, slope and intercept indicate the homology level, the Poisson's ratio and the distribution of remanent magnetization, respectively. We test the approach using a synthetic model under complex magnetization; the results show that it can still identify a common source under strong remanence and establish the Poisson's ratio. Finally, the approach is applied to data from China. The results demonstrate that our approach is feasible.

  19. AN EFFICIENT HIGHER-ORDER FAST MULTIPOLE BOUNDARY ELEMENT SOLUTION FOR POISSON-BOLTZMANN BASED MOLECULAR ELECTROSTATICS

    PubMed Central

    Bajaj, Chandrajit; Chen, Shun-Chuan; Rand, Alexander

    2011-01-01

    In order to compute polarization energy of biomolecules, we describe a boundary element approach to solving the linearized Poisson-Boltzmann equation. Our approach combines several important features including the derivative boundary formulation of the problem and a smooth approximation of the molecular surface based on the algebraic spline molecular surface. State of the art software for numerical linear algebra and the kernel independent fast multipole method is used for both simplicity and efficiency of our implementation. We perform a variety of computational experiments, testing our method on a number of actual proteins involved in molecular docking and demonstrating the effectiveness of our solver for computing molecular polarization energy. PMID:21660123

  20. Generalized derivation extensions of 3-Lie algebras and corresponding Nambu-Poisson structures

    NASA Astrophysics Data System (ADS)

    Song, Lina; Jiang, Jun

    2018-01-01

    In this paper, we introduce the notion of a generalized derivation on a 3-Lie algebra. We construct a new 3-Lie algebra using a generalized derivation and call it the generalized derivation extension. We show that the corresponding Leibniz algebra on the space of fundamental objects is the double of a matched pair of Leibniz algebras. We also determine the corresponding Nambu-Poisson structures under some conditions.

  1. The Use of Crow-AMSAA Plots to Assess Mishap Trends

    NASA Technical Reports Server (NTRS)

    Dawson, Jeffrey W.

    2011-01-01

    Crow-AMSAA (CA) plots are used to model reliability growth. Use of CA plots has expanded into other areas, such as tracking events of interest to management, maintenance problems, and safety mishaps. Safety mishaps can often be successfully modeled using a Poisson probability distribution. CA plots show a Poisson process in log-log space. If the safety mishaps are a stable homogenous Poisson process, a linear fit to the points in a CA plot will have a slope of one. Slopes of greater than one indicate a nonhomogenous Poisson process, with increasing occurrence. Slopes of less than one indicate a nonhomogenous Poisson process, with decreasing occurrence. Changes in slope, known as "cusps," indicate a change in process, which could be an improvement or a degradation. After presenting the CA conceptual framework, examples are given of trending slips, trips and falls, and ergonomic incidents at NASA (from Agency-level data). Crow-AMSAA plotting is a robust tool for trending safety mishaps that can provide insight into safety performance over time.
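
    A minimal illustration (not NASA's code) of the Crow-AMSAA idea described above: plot cumulative mishap count against cumulative time on log-log axes and fit a straight line; a slope near 1 indicates a homogeneous Poisson process, while slopes above or below 1 indicate increasing or decreasing occurrence. The event times below are invented.

```python
import numpy as np

# hypothetical cumulative times (e.g. days) at which successive mishaps occurred
event_times = np.array([12, 30, 41, 65, 80, 110, 150, 170, 220, 260], float)
cum_count = np.arange(1, len(event_times) + 1)

# straight-line fit in log-log space; the slope is the Crow-AMSAA beta
slope, intercept = np.polyfit(np.log(event_times), np.log(cum_count), 1)
print(f"Crow-AMSAA slope: {slope:.2f}")
if slope > 1:
    print("occurrence rate appears to be increasing")
elif slope < 1:
    print("occurrence rate appears to be decreasing")
```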

  2. Unimodularity criteria for Poisson structures on foliated manifolds

    NASA Astrophysics Data System (ADS)

    Pedroza, Andrés; Velasco-Barreras, Eduardo; Vorobiev, Yury

    2018-03-01

    We study the behavior of the modular class of an orientable Poisson manifold and formulate some unimodularity criteria in the semilocal context, around a (singular) symplectic leaf. Our results generalize some known unimodularity criteria for regular Poisson manifolds related to the notion of the Reeb class. In particular, we show that the unimodularity of the transverse Poisson structure of the leaf is a necessary condition for the semilocal unimodular property. Our main tool is an explicit formula for a bigraded decomposition of modular vector fields of a coupling Poisson structure on a foliated manifold. Moreover, we also exploit the notion of the modular class of a Poisson foliation and its relationship with the Reeb class.

  3. Poisson's ratio of fiber-reinforced composites

    NASA Astrophysics Data System (ADS)

    Christiansson, Henrik; Helsing, Johan

    1996-05-01

    Poisson's ratio flow diagrams, that is, the Poisson's ratio versus the fiber fraction, are obtained numerically for hexagonal arrays of elastic circular fibers in an elastic matrix. High numerical accuracy is achieved through the use of an interface integral equation method. Questions concerning fixed point theorems and the validity of existing asymptotic relations are investigated and partially resolved. Our findings for the transverse effective Poisson's ratio, together with earlier results for random systems by other authors, make it possible to formulate a general statement for Poisson's ratio flow diagrams: For composites with circular fibers and where the phase Poisson's ratios are equal to 1/3, the system with the lowest stiffness ratio has the highest Poisson's ratio. For other choices of the elastic moduli for the phases, no simple statement can be made.

  4. A generalized Poisson solver for first-principles device simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bani-Hashemian, Mohammad Hossein; VandeVondele, Joost, E-mail: joost.vandevondele@mat.ethz.ch; Brück, Sascha

    2016-01-28

    Electronic structure calculations of atomistic systems based on density functional theory involve solving the Poisson equation. In this paper, we present a plane-wave based algorithm for solving the generalized Poisson equation subject to periodic or homogeneous Neumann conditions on the boundaries of the simulation cell and Dirichlet type conditions imposed at arbitrary subdomains. In this way, source, drain, and gate voltages can be imposed across atomistic models of electronic devices. Dirichlet conditions are enforced as constraints in a variational framework giving rise to a saddle point problem. The resulting system of equations is then solved using a stationary iterative method in which the generalized Poisson operator is preconditioned with the standard Laplace operator. The solver can make use of any sufficiently smooth function modelling the dielectric constant, including density dependent dielectric continuum models. For all the boundary conditions, consistent derivatives are available and molecular dynamics simulations can be performed. The convergence behaviour of the scheme is investigated and its capabilities are demonstrated.
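
    A drastically simplified one-dimensional sketch of the preconditioning idea described above is given below: the variable-coefficient operator d/dx(ε(x) dφ/dx) is solved iteratively while the constant-coefficient Laplacian supplies the preconditioner. Here a conjugate-gradient iteration and a finite-difference grid stand in for the stationary scheme and plane-wave machinery of the paper; the dielectric profile and source are invented.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)                      # interior grid points
eps = 1.0 + 4.0 * np.exp(-((x - 0.5) / 0.1) ** 2)   # smooth dielectric profile
rho = np.sin(2.0 * np.pi * x)                       # source term

# face-centred dielectric values eps_{i-1/2} and eps_{i+1/2}
eps_lo = 0.5 * (np.r_[eps[0], eps[:-1]] + eps)
eps_up = 0.5 * (eps + np.r_[eps[1:], eps[-1]])

# generalized Poisson operator -d/dx(eps d/dx) with zero Dirichlet boundaries
main = (eps_lo + eps_up) / h**2
A = sp.diags([-eps_lo[1:] / h**2, main, -eps_up[:-1] / h**2],
             offsets=[-1, 0, 1], format="csc")

# preconditioner: exact solve with the constant-coefficient Laplacian
L = sp.diags([-np.ones(n - 1), 2.0 * np.ones(n), -np.ones(n - 1)],
             offsets=[-1, 0, 1], format="csc") / h**2
M = spla.LinearOperator((n, n), matvec=spla.factorized(L))

phi, info = spla.cg(A, rho, M=M)
print("CG converged" if info == 0 else f"CG returned info = {info}")
```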

  5. Deformation mechanisms in negative Poisson's ratio materials - Structural aspects

    NASA Technical Reports Server (NTRS)

    Lakes, R.

    1991-01-01

    Poisson's ratio in materials is governed by the following aspects of the microstructure: the presence of rotational degrees of freedom, non-affine deformation kinematics, or anisotropic structure. Several structural models are examined. The non-affine kinematics are seen to be essential for the production of negative Poisson's ratios for isotropic materials containing central force linkages of positive stiffness. Non-central forces combined with pre-load can also give rise to a negative Poisson's ratio in isotropic materials. A chiral microstructure with non-central force interaction or non-affine deformation can also exhibit a negative Poisson's ratio. Toughness and damage resistance in these materials may be affected by the Poisson's ratio itself, as well as by generalized continuum aspects associated with the microstructure.

  6. Poisson image reconstruction with Hessian Schatten-norm regularization.

    PubMed

    Lefkimmiatis, Stamatios; Unser, Michael

    2013-11-01

    Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an l(p) norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework.

  7. Local existence of solutions to the Euler-Poisson system, including densities without compact support

    NASA Astrophysics Data System (ADS)

    Brauer, Uwe; Karp, Lavi

    2018-01-01

    Local existence and well-posedness for a class of solutions of the Euler-Poisson system is shown. These solutions have a density ρ which either falls off at infinity or has compact support. The solutions have finite mass, finite energy functional and include the static spherical solutions for γ = 6/5. The result is achieved by using weighted Sobolev spaces of fractional order and a new non-linear estimate which allows the physical density to be estimated by the regularised non-linear matter variable. Gamblin has also studied this setting, but using very different functional spaces. However, we believe that the functional setting we use is more appropriate to describe a physically isolated body and more suitable for studying the Newtonian limit.

  8. Theoretical investigations on structural, elastic and electronic properties of thallium halides

    NASA Astrophysics Data System (ADS)

    Singh, Rishi Pal; Singh, Rajendra Kumar; Rajagopalan, Mathrubutham

    2011-04-01

    Theoretical investigations on structural, elastic and electronic properties, viz. ground state lattice parameter, elastic moduli and density of states, of thallium halides (viz. TlCl and TlBr) have been made using the full potential linearized augmented plane wave method within the generalized gradient approximation (GGA). The ground state lattice parameter and bulk modulus and its pressure derivative have been obtained using optimization method. Young's modulus, shear modulus, Poisson ratio, sound velocities for longitudinal and shear waves, Debye average velocity, Debye temperature and Grüneisen parameter have also been calculated for these compounds. Calculated structural, elastic and other parameters are in good agreement with the available data.

  9. Nearly associative deformation quantization

    NASA Astrophysics Data System (ADS)

    Vassilevich, Dmitri; Oliveira, Fernando Martins Costa

    2018-04-01

    We study several classes of non-associative algebras as possible candidates for deformation quantization in the direction of a Poisson bracket that does not satisfy Jacobi identities. We show that in fact alternative deformation quantization algebras require the Jacobi identities on the Poisson bracket and, under very general assumptions, are associative. At the same time, flexible deformation quantization algebras exist for any Poisson bracket.

  10. Methodological quality and reporting of generalized linear mixed models in clinical medicine (2000-2012): a systematic review.

    PubMed

    Casals, Martí; Girabent-Farrés, Montserrat; Carrasco, Josep L

    2014-01-01

    Modeling count and binary data collected in hierarchical designs has increased the use of Generalized Linear Mixed Models (GLMMs) in medicine. This article presents a systematic review of the application and quality of results and information reported from GLMMs in the field of clinical medicine. A search using the Web of Science database was performed for published original articles in medical journals from 2000 to 2012. The search strategy included the topics "generalized linear mixed models", "hierarchical generalized linear models" and "multilevel generalized linear model", and the research domain was refined to science technology. Papers reporting methodological considerations without application, and those that were not involved in clinical medicine or not written in English, were excluded. A total of 443 articles were detected, with an increase over time in the number of articles. In total, 108 articles fit the inclusion criteria. Of these, 54.6% were declared to be longitudinal studies, whereas 58.3% and 26.9% were defined as repeated measurements and multilevel design, respectively. Twenty-two articles belonged to environmental and occupational public health, 10 articles to clinical neurology, 8 to oncology, and 7 to infectious diseases and pediatrics. The distribution of the response variable was reported in 88% of the articles, predominantly Binomial (n = 64) or Poisson (n = 22). Most of the useful information about GLMMs was not reported in most cases. Variance estimates of random effects were described in only 8 articles (9.2%). The model validation, the method of covariate selection and the method of goodness of fit were only reported in 8.0%, 36.8% and 14.9% of the articles, respectively. During recent years, the use of GLMMs in medical literature has increased to take into account the correlation of data when modeling qualitative data or counts. According to the current recommendations, the quality of reporting has room for improvement regarding the characteristics of the analysis, estimation method, validation, and selection of the model.

  11. Marginal Stability of Ion-Acoustic Waves in a Weakly Collisional Two-Temperature Plasma without a Current.

    DTIC Science & Technology

    1987-08-06

    The linearized Balescu-Lenard-Poisson equations are solved in the weakly...free plasma is unresolved. The purpose of this report is to present a resolution based upon the Balescu-Lenard-Poisson equations. The Balescu-Lenard...acoustic waves become marginally stable. Our results are based on the closed form solution for the dielectric function for the linearized Balescu-Lenard

  12. Extracting real-crack properties from non-linear elastic behaviour of rocks: abundance of cracks with dominating normal compliance and rocks with negative Poisson ratios

    NASA Astrophysics Data System (ADS)

    Zaitsev, Vladimir Y.; Radostin, Andrey V.; Pasternak, Elena; Dyskin, Arcady

    2017-09-01

    Results of examination of experimental data on non-linear elasticity of rocks using experimentally determined pressure dependences of P- and S-wave velocities from various literature sources are presented. Overall, over 90 rock samples are considered. Interpretation of the data is performed using an effective-medium description in which cracks are considered as compliant defects with explicitly introduced shear and normal compliances without specifying a particular crack model with an a priori given ratio of the compliances. Comparison with the experimental data indicated abundance (˜ 80 %) of cracks with the normal-to-shear compliance ratios that significantly exceed the values typical of conventionally used crack models (such as penny-shaped cuts or thin ellipsoidal cracks). Correspondingly, rocks with such cracks demonstrate a strongly decreased Poisson ratio including a significant (˜ 45 %) portion of rocks exhibiting negative Poisson ratios at lower pressures, for which the concentration of not yet closed cracks is maximal. The obtained results indicate the necessity for further development of crack models to account for the revealed numerous examples of cracks with strong domination of normal compliance. Discovering such a significant number of naturally auxetic rocks is in contrast to the conventional viewpoint that occurrence of a negative Poisson ratio is an exotic fact that is mostly discussed for artificial structures.

  13. Zero-inflated Poisson regression models for QTL mapping applied to tick-resistance in a Gyr × Holstein F2 population

    PubMed Central

    Silva, Fabyano Fonseca; Tunin, Karen P.; Rosa, Guilherme J.M.; da Silva, Marcos V.B.; Azevedo, Ana Luisa Souza; da Silva Verneque, Rui; Machado, Marco Antonio; Packer, Irineu Umberto

    2011-01-01

    Nowadays, an important and interesting alternative in the control of tick-infestation in cattle is to select resistant animals, and identify the respective quantitative trait loci (QTLs) and DNA markers, for posterior use in breeding programs. The number of ticks/animal is characterized as a discrete counting trait, which could potentially follow a Poisson distribution. However, in the case of an excess of zeros, due to the occurrence of several noninfected animals, the zero-inflated Poisson (ZIP) and generalized zero-inflated Poisson (GZIP) distributions may provide a better description of the data. Thus, the objective here was to compare, through simulation, Poisson and ZIP models (simple and generalized) with classical approaches for QTL mapping with counting phenotypes under different scenarios, and to apply these approaches to a QTL study of tick resistance in an F2 cattle (Gyr × Holstein) population. It was concluded that, when working with zero-inflated data, it is recommendable to use the generalized and simple ZIP models for analysis. On the other hand, when working with data with zeros, but not zero-inflated, the Poisson model or a data-transformation approach, such as a square-root or Box-Cox transformation, is applicable. PMID:22215960
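
    In the same spirit as the comparison above, a single-marker zero-inflated Poisson regression of tick counts on genotype can be sketched with statsmodels and contrasted with an ordinary Poisson fit. This is a stand-in for the full interval-mapping model of the paper, and the data file and column names are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

df = pd.read_csv("f2_ticks.csv")               # hypothetical F2 phenotype/genotype file
y = df["tick_count"]
X = sm.add_constant(df[["marker_genotype"]])   # genotype coded 0/1/2

zip_fit = ZeroInflatedPoisson(y, X, exog_infl=X, inflation="logit").fit(disp=False)
pois_fit = sm.Poisson(y, X).fit(disp=False)

print(zip_fit.summary())
print("ZIP AIC:    ", zip_fit.aic)
print("Poisson AIC:", pois_fit.aic)            # compare handling of excess zeros
```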

  14. Zero-inflated Conway-Maxwell Poisson Distribution to Analyze Discrete Data.

    PubMed

    Sim, Shin Zhu; Gupta, Ramesh C; Ong, Seng Huat

    2018-01-09

    In this paper, we study the zero-inflated Conway-Maxwell Poisson (ZICMP) distribution and develop a regression model. Score and likelihood ratio tests are also implemented for testing the inflation/deflation parameter. Simulation studies are carried out to examine the performance of these tests. A data example is presented to illustrate the concepts. In this example, the proposed model is compared to the well-known zero-inflated Poisson (ZIP) and the zero-inflated generalized Poisson (ZIGP) regression models. It is shown that the fit by ZICMP is comparable to or better than these models.

  15. p-brane actions and higher Roytenberg brackets

    NASA Astrophysics Data System (ADS)

    Jurčo, Branislav; Schupp, Peter; Vysoký, Jan

    2013-02-01

    Motivated by the quest to understand the analog of non-geometric flux compactification in the context of M-theory, we study higher dimensional analogs of generalized Poisson sigma models and corresponding dual string and p-brane models. We find that higher generalizations of the algebraic structures due to Dorfman, Roytenberg and Courant play an important role and establish their relation to Nambu-Poisson structures.

  16. Fast and Accurate Poisson Denoising With Trainable Nonlinear Diffusion.

    PubMed

    Feng, Wensen; Qiao, Peng; Chen, Yunjin

    2018-06-01

    The degradation of the acquired signal by Poisson noise is a common problem for various imaging applications, such as medical imaging, night vision, and microscopy. Up to now, many state-of-the-art Poisson denoising techniques have mainly concentrated on achieving utmost performance, with little consideration for computational efficiency. Therefore, in this paper we aim to propose an efficient Poisson denoising model with both high computational efficiency and high recovery quality. To this end, we exploit the newly developed trainable nonlinear reaction diffusion (TNRD) model, which has proven to be an extremely fast image restoration approach with performance surpassing recent state-of-the-art methods. However, the straightforward direct gradient descent employed in the original TNRD-based denoising task is not applicable in this paper. To solve this problem, we resort to the proximal gradient descent method. We retrain the model parameters, including the linear filters and influence functions, by taking into account the Poisson noise statistics, and end up with a well-trained nonlinear diffusion model specialized for Poisson denoising. The trained model provides strongly competitive results against state-of-the-art approaches, while bearing the properties of simple structure and high efficiency. Furthermore, our proposed model comes with an additional advantage: the diffusion process is well suited for parallel computation on graphics processing units (GPUs). For images of size , our GPU implementation takes less than 0.1 s to produce state-of-the-art Poisson denoising performance.

  17. Topological T-duality via Lie algebroids and Q-flux in Poisson-generalized geometry

    NASA Astrophysics Data System (ADS)

    Asakawa, Tsuguhiko; Muraki, Hisayoshi; Watamura, Satoshi

    2015-10-01

    It is known that the topological T-duality exchanges H- and F-fluxes. In this paper, we reformulate the topological T-duality as an exchange of two Lie algebroids in the generalized tangent bundle. Then, we apply the same formulation to the Poisson-generalized geometry, which is introduced in [T. Asakawa, H. Muraki, S. Sasa and S. Watamura, Int. J. Mod. Phys. A 30, 1550097 (2015), arXiv:1408.2649 [hep-th]].

  18. Hamiltonian structure and Darboux theorem for families of generalized Lotka-Volterra systems

    NASA Astrophysics Data System (ADS)

    Hernández-Bermejo, Benito; Fairén, Víctor

    1998-11-01

    This work is devoted to the establishment of a Poisson structure for a format of equations known as generalized Lotka-Volterra systems. These equations, which include the classical Lotka-Volterra systems as a particular case, have been deeply studied in the literature. They have been shown to constitute a whole hierarchy of systems, the characterization of which is made in the context of simple algebra. Our main result is to show that this algebraic structure is completely translatable into the Poisson domain. Important features of Poisson structures, such as the symplectic foliation and the Darboux canonical representation, arise as a result of rather simple matrix manipulations.

  19. Fractional Relativistic Yamaleev Oscillator Model and Its Dynamical Behaviors

    NASA Astrophysics Data System (ADS)

    Luo, Shao-Kai; He, Jin-Man; Xu, Yan-Li; Zhang, Xiao-Tian

    2016-07-01

    In the paper we construct a new kind of fractional dynamical model, i.e. the fractional relativistic Yamaleev oscillator model, and explore its dynamical behaviors. We will find that the fractional relativistic Yamaleev oscillator model possesses Lie algebraic structure and satisfies generalized Poisson conservation law. We will also give the Poisson conserved quantities of the model. Further, the relation between conserved quantities and integral invariants of the model is studied and it is proved that, by using the Poisson conserved quantities, we can construct integral invariants of the model. Finally, the stability of the manifold of equilibrium states of the fractional relativistic Yamaleev oscillator model is studied. The paper provides a general method, i.e. fractional generalized Hamiltonian method, for constructing a family of fractional dynamical models of an actual dynamical system.

  20. How to characterize a nonlinear elastic material? A review on nonlinear constitutive parameters in isotropic finite elasticity

    PubMed Central

    2017-01-01

    The mechanical response of a homogeneous isotropic linearly elastic material can be fully characterized by two physical constants, the Young’s modulus and the Poisson’s ratio, which can be derived by simple tensile experiments. Any other linear elastic parameter can be obtained from these two constants. By contrast, the physical responses of nonlinear elastic materials are generally described by parameters which are scalar functions of the deformation, and their particular choice is not always clear. Here, we review in a unified theoretical framework several nonlinear constitutive parameters, including the stretch modulus, the shear modulus and the Poisson function, that are defined for homogeneous isotropic hyperelastic materials and are measurable under axial or shear experimental tests. These parameters represent changes in the material properties as the deformation progresses, and can be identified with their linear equivalent when the deformations are small. Universal relations between certain of these parameters are further established, and then used to quantify nonlinear elastic responses in several hyperelastic models for rubber, soft tissue and foams. The general parameters identified here can also be viewed as a flexible basis for coupling elastic responses in multi-scale processes, where an open challenge is the transfer of meaningful information between scales. PMID:29225507

  1. Morphology and linear-elastic moduli of random network solids.

    PubMed

    Nachtrab, Susan; Kapfer, Sebastian C; Arns, Christoph H; Madadi, Mahyar; Mecke, Klaus; Schröder-Turk, Gerd E

    2011-06-17

    The effective linear-elastic moduli of disordered network solids are analyzed by voxel-based finite element calculations. We analyze network solids given by Poisson-Voronoi processes and by the structure of collagen fiber networks imaged by confocal microscopy. The solid volume fraction ϕ is varied by adjusting the fiber radius, while keeping the structural mesh or pore size of the underlying network fixed. For intermediate ϕ, the bulk and shear modulus are approximated by empirical power-laws K(ϕ) ∝ ϕ^n and G(ϕ) ∝ ϕ^m with n ≈ 1.4 and m ≈ 1.7. The exponents for the collagen and the Poisson-Voronoi network solids are similar, and are close to the values n = 1.22 and m = 2.11 found in a previous voxel-based finite element study of Poisson-Voronoi systems with different boundary conditions. However, the exponents of these empirical power-laws are at odds with the analytic values of n = 1 and m = 2, valid for low-density cellular structures in the limit of thin beams. We propose a functional form for K(ϕ) that models the cross-over from a power-law at low densities to a porous solid at high densities; a fit of the data to this functional form yields the asymptotic exponent n ≈ 1.00, as expected. Further, both the intensity of the Poisson-Voronoi process and the collagen concentration in the samples, both of which alter the typical pore or mesh size, affect the effective moduli only by the resulting change of the solid volume fraction. These findings suggest that a network solid with the structure of the collagen networks can be modeled in quantitative agreement by a Poisson-Voronoi process. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
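
    A minimal illustration of estimating such power-law exponents by linear regression in log-log space; the (ϕ, K) values below are invented for demonstration and are not the study's data.

      import numpy as np

      # Hypothetical (phi, K) pairs: solid volume fraction vs. effective bulk modulus
      phi = np.array([0.05, 0.10, 0.15, 0.20, 0.30])
      K = np.array([0.8, 1.9, 3.4, 5.2, 9.7])   # arbitrary units

      # Fit K = A * phi**n, i.e. log K = log A + n * log phi
      n, logA = np.polyfit(np.log(phi), np.log(K), deg=1)
      print(f"estimated exponent n = {n:.2f}, prefactor A = {np.exp(logA):.2f}")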

  2. The Kontsevich tetrahedral flow revisited

    NASA Astrophysics Data System (ADS)

    Bouisaghouane, A.; Buring, R.; Kiselev, A.

    2017-09-01

    We prove that the Kontsevich tetrahedral flow P ˙ =Qa:b(P) , the right-hand side of which is a linear combination of two differential monomials of degree four in a bi-vector P on an affine real Poisson manifold Nn, does infinitesimally preserve the space of Poisson bi-vectors on Nn if and only if the two monomials in Qa:b(P) are balanced by the ratio a : b = 1 : 6. The proof is explicit; it is written in the language of Kontsevich graphs.

  3. Convex reformulation of biologically-based multi-criteria intensity-modulated radiation therapy optimization including fractionation effects

    NASA Astrophysics Data System (ADS)

    Hoffmann, Aswin L.; den Hertog, Dick; Siem, Alex Y. D.; Kaanders, Johannes H. A. M.; Huizenga, Henk

    2008-11-01

    Finding fluence maps for intensity-modulated radiation therapy (IMRT) can be formulated as a multi-criteria optimization problem for which Pareto optimal treatment plans exist. To account for the dose-per-fraction effect of fractionated IMRT, it is desirable to exploit radiobiological treatment plan evaluation criteria based on the linear-quadratic (LQ) cell survival model as a means to balance the radiation benefits and risks in terms of biologic response. Unfortunately, the LQ-model-based radiobiological criteria are nonconvex functions, which make the optimization problem hard to solve. We apply the framework proposed by Romeijn et al (2004 Phys. Med. Biol. 49 1991-2013) to find transformations of LQ-model-based radiobiological functions and establish conditions under which transformed functions result in equivalent convex criteria that do not change the set of Pareto optimal treatment plans. The functions analysed are: the LQ-Poisson-based model for tumour control probability (TCP) with and without inter-patient heterogeneity in radiation sensitivity, the LQ-Poisson-based relative seriality s-model for normal tissue complication probability (NTCP), the equivalent uniform dose (EUD) under the LQ-Poisson model and the fractionation-corrected Probit-based model for NTCP according to Lyman, Kutcher and Burman. These functions differ from those analysed before in that they cannot be decomposed into elementary EUD or generalized-EUD functions. In addition, we show that applying increasing and concave transformations to the convexified functions is beneficial for the piecewise approximation of the Pareto efficient frontier.
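
    For orientation, the LQ-Poisson TCP referred to above has the standard textbook form TCP = exp(-N0 · SF(D)), with SF(D) the linear-quadratic surviving fraction after n fractions of dose d. The sketch below evaluates this textbook form only; it is not the paper's convexified formulation, and the parameter values are illustrative.

      import numpy as np

      def lq_poisson_tcp(n_fractions, dose_per_fraction, alpha, beta, n0):
          """Poisson TCP with linear-quadratic cell survival (no proliferation term)."""
          total_dose = n_fractions * dose_per_fraction
          # LQ surviving fraction: exp(-alpha*D - beta*d*D) for D = n*d
          sf = np.exp(-alpha * total_dose - beta * dose_per_fraction * total_dose)
          return np.exp(-n0 * sf)

      # Example: 35 x 2 Gy, alpha = 0.3 /Gy, alpha/beta = 10 Gy, 1e7 clonogens
      print(lq_poisson_tcp(35, 2.0, alpha=0.3, beta=0.03, n0=1e7))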

  4. Equivalent Theories and Changing Hamiltonian Observables in General Relativity

    NASA Astrophysics Data System (ADS)

    Pitts, J. Brian

    2018-03-01

    Change and local spatial variation are missing in Hamiltonian general relativity according to the most common definition of observables as having 0 Poisson bracket with all first-class constraints. But other definitions of observables have been proposed. In pursuit of Hamiltonian-Lagrangian equivalence, Pons, Salisbury and Sundermeyer use the Anderson-Bergmann-Castellani gauge generator G, a tuned sum of first-class constraints. Kuchař waived the 0 Poisson bracket condition for the Hamiltonian constraint to achieve changing observables. A systematic combination of the two reforms might use the gauge generator but permit non-zero Lie derivative Poisson brackets for the external gauge symmetry of General Relativity. Fortunately one can test definitions of observables by calculation using two formulations of a theory, one without gauge freedom and one with gauge freedom. The formulations, being empirically equivalent, must have equivalent observables. For de Broglie-Proca non-gauge massive electromagnetism, all constraints are second-class, so everything is observable. Demanding equivalent observables from gauge Stueckelberg-Utiyama electromagnetism, one finds that the usual definition fails while the Pons-Salisbury-Sundermeyer definition with G succeeds. This definition does not readily yield change in GR, however. Should GR's external gauge freedom of general relativity share with internal gauge symmetries the 0 Poisson bracket (invariance), or is covariance (a transformation rule) sufficient? A graviton mass breaks the gauge symmetry (general covariance), but it can be restored by parametrization with clock fields. By requiring equivalent observables, one can test whether observables should have 0 or the Lie derivative as the Poisson bracket with the gauge generator G. The latter definition is vindicated by calculation. While this conclusion has been reported previously, here the calculation is given in some detail.

  6. Modeling laser velocimeter signals as triply stochastic Poisson processes

    NASA Technical Reports Server (NTRS)

    Mayo, W. T., Jr.

    1976-01-01

    Previous models of laser Doppler velocimeter (LDV) systems have not adequately described dual-scatter signals in a manner useful for analysis and simulation of low-level photon-limited signals. At low photon rates, an LDV signal at the output of a photomultiplier tube is a compound nonhomogeneous filtered Poisson process, whose intensity function is another (slower) Poisson process with the nonstationary rate and frequency parameters controlled by a random flow (slowest) process. In the present paper, generalized Poisson shot noise models are developed for low-level LDV signals. Theoretical results useful in detection error analysis and simulation are presented, along with measurements of burst amplitude statistics. Computer generated simulations illustrate the difference between Gaussian and Poisson models of low-level signals.

  7. Monitoring Poisson observations using combined applications of Shewhart and EWMA charts

    NASA Astrophysics Data System (ADS)

    Abujiya, Mu'azu Ramat

    2017-11-01

    The Shewhart and exponentially weighted moving average (EWMA) charts for nonconformities are the most widely used procedures of choice for monitoring Poisson observations in modern industries. Individually, the Shewhart and EWMA charts are only sensitive to large and small shifts, respectively. To enhance the detection abilities of the two schemes in monitoring all kinds of shifts in Poisson count data, this study examines the performance of combined applications of the Shewhart and EWMA Poisson control charts. Furthermore, the study proposes modifications based on a well-structured statistical data collection technique, ranked set sampling (RSS), to detect shifts in the mean of a Poisson process more quickly. The relative performance of the proposed Shewhart-EWMA Poisson location charts is evaluated in terms of the average run length (ARL), standard deviation of the run length (SDRL), median run length (MRL), average ratio ARL (ARARL), average extra quadratic loss (AEQL) and performance comparison index (PCI). Consequently, all the new Poisson control charts based on the RSS method are generally superior to most of the existing schemes for monitoring Poisson processes. The use of these combined Shewhart-EWMA Poisson charts is illustrated with an example to demonstrate the practical implementation of the design procedure.
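
    To illustrate the two building blocks of the combined scheme (without the ranked-set-sampling modification proposed in the study), a minimal sketch of classical c-chart limits and a Poisson EWMA statistic follows; the in-control mean, smoothing constant, and counts are illustrative.

      import numpy as np

      def shewhart_c_limits(c0, k=3.0):
          """Classical 3-sigma c-chart limits for Poisson counts with in-control mean c0."""
          sigma = np.sqrt(c0)
          return max(0.0, c0 - k * sigma), c0 + k * sigma

      def poisson_ewma(counts, c0, lam=0.2):
          """EWMA statistic z_t = lam*x_t + (1-lam)*z_{t-1}, started at the in-control mean."""
          z, path = c0, []
          for x in counts:
              z = lam * x + (1.0 - lam) * z
              path.append(z)
          return np.array(path)

      counts = np.array([4, 6, 5, 7, 9, 12, 11, 15])   # hypothetical nonconformity counts
      lcl, ucl = shewhart_c_limits(c0=5.0)
      print(lcl, ucl)
      print(poisson_ewma(counts, c0=5.0).round(2))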

  8. A comparative study of generalized linear mixed modelling and artificial neural network approach for the joint modelling of survival and incidence of Dengue patients in Sri Lanka

    NASA Astrophysics Data System (ADS)

    Hapugoda, J. C.; Sooriyarachchi, M. R.

    2017-09-01

    The survival time of patients with a disease and the incidence of that particular disease (count) are frequently observed in medical studies with data of a clustered nature. In many cases, though, the survival times and the count can be correlated in such a way that diseases that occur rarely could have shorter survival times, or vice versa. Due to this fact, joint modelling of these two variables will provide interesting and certainly improved results compared with modelling them separately. The authors have previously proposed a methodology using Generalized Linear Mixed Models (GLMM), joining the Discrete Time Hazard model with the Poisson Regression model, to jointly model survival and count data. As the Artificial Neural Network (ANN) has become one of the most powerful computational tools for modelling complex non-linear systems, it was proposed to develop a new joint model of survival and count of Dengue patients in Sri Lanka using that approach. Thus, the objective of this study is to develop a model using the ANN approach and compare the results with the previously developed GLMM model. As the response variables are continuous in nature, the Generalized Regression Neural Network (GRNN) approach was adopted to model the data. To compare the model fit, measures such as root mean square error (RMSE), absolute mean error (AME) and correlation coefficient (R) were used. The measures indicate that the GRNN model fits the data better than the GLMM model.

  9. Birth and Death Process Modeling Leads to the Poisson Distribution: A Journey Worth Taking

    ERIC Educational Resources Information Center

    Rash, Agnes M.; Winkel, Brian J.

    2009-01-01

    This paper describes details of development of the general birth and death process from which we can extract the Poisson process as a special case. This general process is appropriate for a number of courses and units in courses and can enrich the study of mathematics for students as it touches and uses a diverse set of mathematical topics, e.g.,…

  10. AKSZ Construction of Topological Open p-Brane Action and Nambu Brackets

    NASA Astrophysics Data System (ADS)

    Bouwknegt, Peter; Jurčo, Branislav

    2013-04-01

    We review the AKSZ construction as applied to the topological open membranes and Poisson sigma models. We describe a generalization to open topological p-branes. Also, we propose a related (not necessarily BV) Nambu-Poisson sigma model.

  11. Cell model and elastic moduli of disordered solids - Low temperature limit

    NASA Technical Reports Server (NTRS)

    Peng, S. T. J.; Landel, R. F.; Moacanin, J.; Simha, Robert; Papazoglou, Elisabeth

    1987-01-01

    The cell theory has been previously employed to compute the equation of state of a disordered condensed system. It is now generalized to include anisotropic stresses. The condition of affine deformation is adopted, transforming an originally spherical into an ellipsoidal cell. With a Lennard-Jones n-m potential between nonbonded centers, the formal expression for the deformational free energy is derived. It is to be evaluated in the limit of the linear elastic range. Since the bulk modulus in this limit is already known, it is convenient to consider a uniaxial deformation. To begin with, restrictions are made to the low-temperature limit in the absence of entropy contributions. Young's modulus and Poisson's ratio then follow.

  12. Robust small area prediction for counts.

    PubMed

    Tzavidis, Nikos; Ranalli, M Giovanna; Salvati, Nicola; Dreassi, Emanuela; Chambers, Ray

    2015-06-01

    A new semiparametric approach to model-based small area prediction for counts is proposed and used for estimating the average number of visits to physicians for Health Districts in Central Italy. The proposed small area predictor can be viewed as an outlier robust alternative to the more commonly used empirical plug-in predictor that is based on a Poisson generalized linear mixed model with Gaussian random effects. Results from the real data application and from a simulation experiment confirm that the proposed small area predictor has good robustness properties and in some cases can be more efficient than alternative small area approaches. © The Author(s) 2014.

  13. Matrix decomposition graphics processing unit solver for Poisson image editing

    NASA Astrophysics Data System (ADS)

    Lei, Zhao; Wei, Li

    2012-10-01

    In recent years, gradient-domain methods have been widely discussed in the image processing field, including seamless cloning and image stitching. These algorithms are commonly carried out by solving a large sparse linear system: the Poisson equation. However, solving the Poisson equation is a computationally and memory-intensive task, which makes it unsuitable for real-time image editing. A new matrix decomposition graphics processing unit (GPU) solver (MDGS) is proposed to address this problem. A matrix decomposition method is used to distribute the work among GPU threads, so that MDGS takes full advantage of the computing power of current GPUs. Additionally, MDGS is a hybrid solver (combining both direct and iterative techniques) and has a two-level architecture. These features enable MDGS to generate solutions identical to those of the common Poisson methods and to achieve a high convergence rate in most cases. This approach is advantageous in terms of parallelizability, enabling real-time image processing, low memory consumption and a wide range of applications.

  14. Quantification of integrated HIV DNA by repetitive-sampling Alu-HIV PCR on the basis of Poisson statistics.

    PubMed

    De Spiegelaere, Ward; Malatinkova, Eva; Lynch, Lindsay; Van Nieuwerburgh, Filip; Messiaen, Peter; O'Doherty, Una; Vandekerckhove, Linos

    2014-06-01

    Quantification of integrated proviral HIV DNA by repetitive-sampling Alu-HIV PCR is a candidate virological tool to monitor the HIV reservoir in patients. However, the experimental procedures and data analysis of the assay are complex and hinder its widespread use. Here, we provide an improved and simplified data analysis method by adopting binomial and Poisson statistics. A modified analysis method on the basis of Poisson statistics was used to analyze the binomial data of positive and negative reactions from a 42-replicate Alu-HIV PCR by use of dilutions of an integration standard and on samples of 57 HIV-infected patients. Results were compared with the quantitative output of the previously described Alu-HIV PCR method. Poisson-based quantification of the Alu-HIV PCR was linearly correlated with the standard dilution series, indicating that absolute quantification with the Poisson method is a valid alternative for data analysis of repetitive-sampling Alu-HIV PCR data. Quantitative outputs of patient samples assessed by the Poisson method correlated with the previously described Alu-HIV PCR analysis, indicating that this method is a valid alternative for quantifying integrated HIV DNA. Poisson-based analysis of the Alu-HIV PCR data enables absolute quantification without the need of a standard dilution curve. Implementation of the CI estimation permits improved qualitative analysis of the data and provides a statistical basis for the required minimal number of technical replicates. © 2014 The American Association for Clinical Chemistry.
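
    As a generic illustration of Poisson-based quantification from replicate positive/negative reactions, the classical limiting-dilution estimator is sketched below; it conveys the idea of the Poisson analysis but is not necessarily the exact pipeline used in the paper.

      import numpy as np

      def poisson_copies_per_reaction(n_negative, n_replicates):
          """If targets are Poisson-distributed over replicates, P(negative) = exp(-lambda),
          so the mean copy number per reaction is lambda = -ln(fraction negative)."""
          frac_neg = n_negative / n_replicates
          if frac_neg <= 0.0:
              raise ValueError("all replicates positive: lambda is not identifiable")
          return -np.log(frac_neg)

      # Example: 42 replicates, 30 of which are negative
      print(poisson_copies_per_reaction(30, 42))   # ~0.34 copies per reaction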

  15. Summer Proceedings 2016: The Center for Computing Research at Sandia National Laboratories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carleton, James Brian; Parks, Michael L.

    Solving sparse linear systems from the discretization of elliptic partial differential equations (PDEs) is an important building block in many engineering applications. Sparse direct solvers can solve general linear systems, but are usually slower and use much more memory than effective iterative solvers. To overcome these two disadvantages, a hierarchical solver (LoRaSp) based on H2-matrices was introduced in [22]. Here, we have developed a parallel version of the algorithm in LoRaSp to solve large sparse matrices on distributed memory machines. On a single processor, the factorization time of our parallel solver scales almost linearly with the problem size for three-dimensional problems, as opposed to the quadratic scalability of many existing sparse direct solvers. Moreover, our solver leads to almost constant numbers of iterations, when used as a preconditioner for Poisson problems. On more than one processor, our algorithm has significant speedups compared to sequential runs. With this parallel algorithm, we are able to solve large problems much faster than many existing packages as demonstrated by the numerical experiments.

  16. Filling of a Poisson trap by a population of random intermittent searchers.

    PubMed

    Bressloff, Paul C; Newby, Jay M

    2012-03-01

    We extend the continuum theory of random intermittent search processes to the case of N independent searchers looking to deliver cargo to a single hidden target located somewhere on a semi-infinite track. Each searcher randomly switches between a stationary state and either a leftward or rightward constant velocity state. We assume that all of the particles start at one end of the track and realize sample trajectories independently generated from the same underlying stochastic process. The hidden target is treated as a partially absorbing trap in which a particle can only detect the target and deliver its cargo if it is stationary and within range of the target; the particle is removed from the system after delivering its cargo. As a further generalization of previous models, we assume that up to n successive particles can find the target and deliver its cargo. Assuming that the rate of target detection scales as 1/N, we show that there exists a well-defined mean-field limit N → ∞, in which the stochastic model reduces to a deterministic system of linear reaction-hyperbolic equations for the concentrations of particles in each of the internal states. These equations decouple from the stochastic process associated with filling the target with cargo. The latter can be modeled as a Poisson process in which the time-dependent rate of filling λ(t) depends on the concentration of stationary particles within the target domain. Hence, we refer to the target as a Poisson trap. We analyze the efficiency of filling the Poisson trap with n particles in terms of the waiting time density f_n(t). The latter is determined by the integrated Poisson rate μ(t) = ∫_0^t λ(s) ds, which in turn depends on the solution to the reaction-hyperbolic equations. We obtain an approximate solution for the particle concentrations by reducing the system of reaction-hyperbolic equations to a scalar advection-diffusion equation using a quasisteady-state analysis. We compare our analytical results for the mean-field model with Monte Carlo simulations for finite N. We thus determine how the mean first passage time (MFPT) for filling the target depends on N and n.
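
    For reference, if the filling of the trap is a Poisson process with rate λ(t) and integrated rate μ(t) = ∫_0^t λ(s) ds, the waiting-time density for the n-th delivery has the closed form f_n(t) = λ(t) μ(t)^(n-1) e^(-μ(t)) / (n-1)!. A minimal numerical sketch with a made-up rate function (not the rate obtained from the reaction-hyperbolic equations) is given below.

      import numpy as np
      from math import factorial
      from scipy.integrate import cumulative_trapezoid, trapezoid

      def nth_arrival_density(t, rate, n):
          """Density of the n-th event time of an inhomogeneous Poisson process:
          f_n(t) = rate(t) * mu(t)**(n-1) * exp(-mu(t)) / (n-1)!, mu(t) = int_0^t rate."""
          lam = rate(t)
          mu = cumulative_trapezoid(lam, t, initial=0.0)
          return lam * mu ** (n - 1) * np.exp(-mu) / factorial(n - 1)

      # Hypothetical saturating filling rate, arbitrary units
      t = np.linspace(0.0, 10.0, 1001)
      rate = lambda t: 0.5 * (1.0 - np.exp(-t))
      f3 = nth_arrival_density(t, rate, n=3)
      print(trapezoid(f3, t))   # probability that the third delivery occurs before t = 10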

  17. A coarse-grid projection method for accelerating incompressible flow computations

    NASA Astrophysics Data System (ADS)

    San, Omer; Staples, Anne

    2011-11-01

    We present a coarse-grid projection (CGP) algorithm for accelerating incompressible flow computations, which is applicable to methods involving Poisson equations as incompressibility constraints. CGP methodology is a modular approach that facilitates data transfer with simple interpolations and uses black-box solvers for the Poisson and advection-diffusion equations in the flow solver. Here, we investigate a particular CGP method for the vorticity-stream function formulation that uses the full weighting operation for mapping from fine to coarse grids, the third-order Runge-Kutta method for time stepping, and finite differences for the spatial discretization. After solving the Poisson equation on a coarsened grid, bilinear interpolation is used to obtain the fine data for consequent time stepping on the full grid. We compute several benchmark flows: the Taylor-Green vortex, a vortex pair merging, a double shear layer, decaying turbulence and the Taylor-Green vortex on a distorted grid. In all cases we use either FFT-based or V-cycle multigrid linear-cost Poisson solvers. Reducing the number of degrees of freedom of the Poisson solver by powers of two accelerates these computations while, for the first level of coarsening, retaining the same level of accuracy in the fine resolution vorticity field.

  18. Unobtrusive Detection of Mild Cognitive Impairment in Older Adults Through Home Monitoring.

    PubMed

    Akl, Ahmad; Snoek, Jasper; Mihailidis, Alex

    2017-03-01

    The early detection of dementias such as Alzheimer's disease can in some cases reverse, stop, or slow cognitive decline and in general greatly reduce the burden of care. This is of increasing significance as demographic studies are warning of an aging population in North America and worldwide. Various smart homes and systems have been developed to detect cognitive decline through continuous monitoring of high risk individuals. However, the majority of these smart homes and systems use a number of predefined heuristics to detect changes in cognition, which has been demonstrated to focus on the idiosyncratic nuances of the individual subjects, and thus, does not generalize. In this paper, we address this problem by building generalized linear models of home activity of older adults monitored using unobtrusive sensing technologies. We use inhomogeneous Poisson processes to model the presence of the recruited older adults within different rooms throughout the day. We employ an information theoretic approach to compare the generalized linear models learned, and we observe significant statistical differences between the cognitively intact and impaired older adults. Using a simple thresholding approach, we were able to detect mild cognitive impairment in older adults with an average area under the ROC curve of 0.716 and an average area under the precision-recall curve of 0.706 using activity models estimated over a time window of 12 weeks.
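
    As a simple illustration of an inhomogeneous Poisson model of room presence, the sketch below estimates a piecewise-constant rate over the hours of the day from hypothetical event times; the binning and data are illustrative and not the authors' pipeline.

      import numpy as np

      # Hypothetical event times (hours since the start of monitoring) of kitchen presence
      event_times = np.array([7.2, 7.9, 12.4, 13.1, 18.6, 31.0, 36.5, 37.2, 42.8, 55.1])
      n_days = 3

      # Piecewise-constant rate per hour of day: lambda_h = (events in hour h) / (days observed)
      hour_of_day = (event_times % 24).astype(int)
      counts = np.bincount(hour_of_day, minlength=24)
      rate_per_hour = counts / n_days
      print(rate_per_hour)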

  19. Acceleration of Linear Finite-Difference Poisson-Boltzmann Methods on Graphics Processing Units.

    PubMed

    Qi, Ruxi; Botello-Smith, Wesley M; Luo, Ray

    2017-07-11

    Electrostatic interactions play crucial roles in biophysical processes such as protein folding and molecular recognition. Poisson-Boltzmann equation (PBE)-based models have become widely used in modeling these important processes. Though great efforts have been put into developing efficient PBE numerical models, challenges still remain due to the high dimensionality of typical biomolecular systems. In this study, we implemented and analyzed commonly used linear PBE solvers for the ever-improving graphics processing units (GPU) for biomolecular simulations, including both standard and preconditioned conjugate gradient (CG) solvers with several alternative preconditioners. Our implementation utilizes the standard Nvidia CUDA libraries cuSPARSE, cuBLAS, and CUSP. Extensive tests show that good numerical accuracy can be achieved given that single precision is often used for numerical applications on GPU platforms. The optimal GPU performance was observed with the Jacobi-preconditioned CG solver, with a significant speedup over the standard CG solver on CPU in our diversified test cases. Our analysis further shows that different matrix storage formats also considerably affect the efficiency of different linear PBE solvers on GPU, with the diagonal format best suited for our standard finite-difference linear systems. Further efficiency may be possible with matrix-free operations and an integrated grid stencil setup specifically tailored for the banded matrices in PBE-specific linear systems.

  20. Maslov indices, Poisson brackets, and singular differential forms

    NASA Astrophysics Data System (ADS)

    Esterlis, I.; Haggard, H. M.; Hedeman, A.; Littlejohn, R. G.

    2014-06-01

    Maslov indices are integers that appear in semiclassical wave functions and quantization conditions. They are often notoriously difficult to compute. We present methods of computing the Maslov index that rely only on typically elementary Poisson brackets and simple linear algebra. We also present a singular differential form, whose integral along a curve gives the Maslov index of that curve. The form is closed but not exact, and transforms by an exact differential under canonical transformations. We illustrate the method with the 6j-symbol, which is important in angular-momentum theory and in quantum gravity.

  1. A multiscale filter for noise reduction of low-dose cone beam projections.

    PubMed

    Yao, Weiguang; Farr, Jonathan B

    2015-08-21

    The Poisson or compound Poisson process governs the randomness of photon fluence in cone beam computed tomography (CBCT) imaging systems. The probability density function depends on the mean (noiseless) fluence at a given detector. This dependence indicates the natural requirement of multiscale filters to smooth noise while preserving structures of the imaged object on the low-dose cone beam projection. In this work, we used a Gaussian filter, exp(-x²/(2σ_f²)), as the multiscale filter to de-noise the low-dose cone beam projections. We analytically obtained the expression for σ_f, which represents the scale of the filter, by minimizing the local noise-to-signal ratio. We analytically derived the variance of the residual noise from the Poisson or compound Poisson processes after Gaussian filtering. From the derived analytical form of the variance of the residual noise, the optimal σ_f² is proved to be proportional to the noiseless fluence and modulated by the local structure strength, expressed as the linear fitting error of the structure. A strategy was used to obtain a reliable linear fitting error: smoothing the projection along the longitudinal direction to calculate the linear fitting error along the lateral direction, and vice versa. The performance of our multiscale filter was examined on low-dose cone beam projections of a Catphan phantom and a head-and-neck patient. After applying the filter to the Catphan phantom projections scanned with a pulse time of 4 ms, the number of visible line pairs was similar to that scanned with 16 ms, and the contrast-to-noise ratio of the inserts was on average about 64% higher than that scanned with 16 ms. For the simulated head-and-neck patient projections with a pulse time of 4 ms, the visibility of soft tissue structures in the patient was comparable to that scanned with 20 ms. The image processing took less than 0.5 s per projection with 1024 × 768 pixels.

  2. Performance of Nonlinear Finite-Difference Poisson-Boltzmann Solvers

    PubMed Central

    Cai, Qin; Hsieh, Meng-Juei; Wang, Jun; Luo, Ray

    2014-01-01

    We implemented and optimized seven finite-difference solvers for the full nonlinear Poisson-Boltzmann equation in biomolecular applications, including four relaxation methods, one conjugate gradient method, and two inexact Newton methods. The performance of the seven solvers was extensively evaluated with a large number of nucleic acids and proteins. Worth noting is the inexact Newton method in our analysis. We investigated the role of linear solvers in its performance by incorporating the incomplete Cholesky conjugate gradient and the geometric multigrid into its inner linear loop. We tailored and optimized both linear solvers for faster convergence rate. In addition, we explored strategies to optimize the successive over-relaxation method to reduce its convergence failures without too much sacrifice in its convergence rate. Specifically we attempted to adaptively change the relaxation parameter and to utilize the damping strategy from the inexact Newton method to improve the successive over-relaxation method. Our analysis shows that the nonlinear methods accompanied with a functional-assisted strategy, such as the conjugate gradient method and the inexact Newton method, can guarantee convergence in the tested molecules. Especially the inexact Newton method exhibits impressive performance when it is combined with highly efficient linear solvers that are tailored for its special requirement. PMID:24723843

  3. Estimating relative risks in multicenter studies with a small number of centers - which methods to use? A simulation study.

    PubMed

    Pedroza, Claudia; Truong, Van Thi Thanh

    2017-11-02

    Analyses of multicenter studies often need to account for center clustering to ensure valid inference. For binary outcomes, it is particularly challenging to properly adjust for center when the number of centers or total sample size is small, or when there are few events per center. Our objective was to evaluate the performance of generalized estimating equation (GEE) log-binomial and Poisson models, generalized linear mixed models (GLMMs) assuming binomial and Poisson distributions, and a Bayesian binomial GLMM to account for center effect in these scenarios. We conducted a simulation study with few centers (≤30) and 50 or fewer subjects per center, using both a randomized controlled trial and an observational study design to estimate relative risk. We compared the GEE and GLMM models with a log-binomial model without adjustment for clustering in terms of bias, root mean square error (RMSE), and coverage. For the Bayesian GLMM, we used informative neutral priors that are skeptical of large treatment effects that are almost never observed in studies of medical interventions. All frequentist methods exhibited little bias, and the RMSE was very similar across the models. The binomial GLMM had poor convergence rates, ranging from 27% to 85%, but performed well otherwise. The results show that both GEE models need to use small sample corrections for robust SEs to achieve proper coverage of 95% CIs. The Bayesian GLMM had similar convergence rates but resulted in slightly more biased estimates for the smallest sample sizes. However, it had the smallest RMSE and good coverage across all scenarios. These results were very similar for both study designs. For the analyses of multicenter studies with a binary outcome and few centers, we recommend adjustment for center with either a GEE log-binomial or Poisson model with appropriate small sample corrections or a Bayesian binomial GLMM with informative priors.

  4. Comparison of robustness to outliers between robust poisson models and log-binomial models when estimating relative risks for common binary outcomes: a simulation study.

    PubMed

    Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P

    2014-06-26

    To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson and the log-binomial regression. Of the two methods, it is believed that the log-binomial regression yields more efficient estimators because it is maximum likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g. log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination was low. The robust Poisson models are more robust (or less sensitive) to outliers compared to the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of the limitations when choosing appropriate models to estimate relative risks or risk ratios.
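
    A minimal sketch of the robust (modified) Poisson approach discussed above, i.e. a Poisson GLM with a log link and a heteroskedasticity-robust covariance, using statsmodels; the simulated data and variable names are illustrative.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 500
      exposure = rng.integers(0, 2, n)                 # binary covariate
      p = np.where(exposure == 1, 0.30, 0.15)          # true risks: relative risk = 2
      y = rng.binomial(1, p)                           # common binary outcome

      X = sm.add_constant(exposure)
      # Poisson working model with robust (HC0) covariance gives consistent RR estimates
      fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")
      print(np.exp(fit.params[1]))                     # estimated relative risk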

  5. Dynamics of a prey-predator system under Poisson white noise excitation

    NASA Astrophysics Data System (ADS)

    Pan, Shan-Shan; Zhu, Wei-Qiu

    2014-10-01

    The classical Lotka-Volterra (LV) model is a well-known mathematical model for prey-predator ecosystems. In the present paper, the pulse-type version of the stochastic LV model, in which the effect of a random natural environment has been modeled as Poisson white noise, is investigated by using the stochastic averaging method. The averaged generalized Itô stochastic differential equation and Fokker-Planck-Kolmogorov (FPK) equation are derived for the prey-predator ecosystem driven by Poisson white noise. An approximate stationary solution for the averaged generalized FPK equation is obtained by using the perturbation method. The effect of the prey self-competition parameter ε2s on the ecosystem behavior is evaluated. The analytical result is confirmed by corresponding Monte Carlo (MC) simulation.

  6. A Review of Multivariate Distributions for Count Data Derived from the Poisson Distribution.

    PubMed

    Inouye, David; Yang, Eunho; Allen, Genevera; Ravikumar, Pradeep

    2017-01-01

    The Poisson distribution has been widely studied and used for modeling univariate count-valued data. Multivariate generalizations of the Poisson distribution that permit dependencies, however, have been far less popular. Yet, real-world high-dimensional count-valued data found in word counts, genomics, and crime statistics, for example, exhibit rich dependencies, and motivate the need for multivariate distributions that can appropriately model this data. We review multivariate distributions derived from the univariate Poisson, categorizing these models into three main classes: 1) where the marginal distributions are Poisson, 2) where the joint distribution is a mixture of independent multivariate Poisson distributions, and 3) where the node-conditional distributions are derived from the Poisson. We discuss the development of multiple instances of these classes and compare the models in terms of interpretability and theory. Then, we empirically compare multiple models from each class on three real-world datasets that have varying data characteristics from different domains, namely traffic accident data, biological next generation sequencing data, and text data. These empirical experiments develop intuition about the comparative advantages and disadvantages of each class of multivariate distribution that was derived from the Poisson. Finally, we suggest new research directions as explored in the subsequent discussion section.

  7. Disformal invariance of curvature perturbation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Motohashi, Hayato; White, Jonathan, E-mail: motohashi@kicp.uchicago.edu, E-mail: jwhite@post.kek.jp

    2016-02-01

    We show that under a general disformal transformation the linear comoving curvature perturbation is not identically invariant, but is invariant on superhorizon scales for any theory that is disformally related to Horndeski's theory. The difference between disformally related curvature perturbations is found to be given in terms of the comoving density perturbation associated with a single canonical scalar field. In General Relativity it is well-known that this quantity vanishes on superhorizon scales through the Poisson equation that is obtained on combining the Hamiltonian and momentum constraints, and we confirm that a similar result holds for any theory that is disformally related to Horndeski's scalar-tensor theory so long as the invertibility condition for the disformal transformation is satisfied. We also consider the curvature perturbation at full nonlinear order in the unitary gauge, and find that it is invariant under a general disformal transformation if we assume that an attractor regime has been reached. Finally, we also discuss the counting of degrees of freedom in theories disformally related to Horndeski's.

  8. Multivariate Autoregressive Modeling and Granger Causality Analysis of Multiple Spike Trains

    PubMed Central

    Krumin, Michael; Shoham, Shy

    2010-01-01

    Recent years have seen the emergence of microelectrode arrays and optical methods allowing simultaneous recording of spiking activity from populations of neurons in various parts of the nervous system. The analysis of multiple neural spike train data could benefit significantly from existing methods for multivariate time-series analysis which have proven to be very powerful in the modeling and analysis of continuous neural signals like EEG signals. However, those methods have not generally been well adapted to point processes. Here, we use our recent results on correlation distortions in multivariate Linear-Nonlinear-Poisson spiking neuron models to derive generalized Yule-Walker-type equations for fitting "hidden" Multivariate Autoregressive models. We use this new framework to perform Granger causality analysis in order to extract the directed information flow pattern in networks of simulated spiking neurons. We discuss the relative merits and limitations of the new method. PMID:20454705

  9. Interface stresses in fiber-reinforced materials with regular fiber arrangements

    NASA Astrophysics Data System (ADS)

    Mueller, W. H.; Schmauder, S.

    The theory of linear elasticity is used here to analyze the stresses inside and at the surface of fiber-reinforced composites. Plane strain, plane stress, and generalized plane strain are analyzed using the shell model and the BHE model and are numerically studied using finite element analysis. Interface stresses are shown to depend weakly on Poisson's ratio. For equal values of the ratio, generalized plane strain and plane strain results are identical. For small volume fractions up to 40 vol pct of fibers, the shell and the BHE models predict the interface stresses very well over a wide range of elastic mismatches and for different fiber arrangements. At higher volume fractions the stresses are influenced by interactions with neighboring fibers. Introducing an external pressure into the shell model allows the prediction of interface stresses in real composites with isolated or regularly arranged fibers.

  10. A function space framework for structural total variation regularization with applications in inverse problems

    NASA Astrophysics Data System (ADS)

    Hintermüller, Michael; Holler, Martin; Papafitsoros, Kostas

    2018-06-01

    In this work, we introduce a function space setting for a wide class of structural/weighted total variation (TV) regularization methods motivated by their applications in inverse problems. In particular, we consider a regularizer that is the appropriate lower semi-continuous envelope (relaxation) of a suitable TV type functional initially defined for sufficiently smooth functions. We study examples where this relaxation can be expressed explicitly, and we also provide refinements for weighted TV for a wide range of weights. Since an integral characterization of the relaxation in function space is, in general, not always available, we show that, for a rather general linear inverse problems setting, instead of the classical Tikhonov regularization problem, one can equivalently solve a saddle-point problem where no a priori knowledge of an explicit formulation of the structural TV functional is needed. In particular, motivated by concrete applications, we deduce corresponding results for linear inverse problems with norm and Poisson log-likelihood data discrepancy terms. Finally, we provide proof-of-concept numerical examples where we solve the saddle-point problem for weighted TV denoising as well as for MR guided PET image reconstruction.

  11. A flexible count data regression model for risk analysis.

    PubMed

    Guikema, Seth D; Coffelt, Jeremy P

    2008-02-01

    In many cases, risk and reliability analyses involve estimating the probabilities of discrete events such as hardware failures and occurrences of disease or death. There is often additional information in the form of explanatory variables that can be used to help estimate the likelihood of different numbers of events in the future through the use of an appropriate regression model, such as a generalized linear model. However, existing generalized linear models (GLM) are limited in their ability to handle the types of variance structures often encountered in using count data in risk and reliability analysis. In particular, standard models cannot handle both underdispersed data (variance less than the mean) and overdispersed data (variance greater than the mean) in a single coherent modeling framework. This article presents a new GLM based on a reformulation of the Conway-Maxwell Poisson (COM) distribution that is useful for both underdispersed and overdispersed count data and demonstrates this model by applying it to the assessment of electric power system reliability. The results show that the proposed COM GLM can provide fits to data as good as those of the commonly used existing models for overdispersed data sets, while outperforming these commonly used models for underdispersed data sets.
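
    For reference, the Conway-Maxwell-Poisson pmf underlying the proposed GLM is P(Y = y) = λ^y / (y!)^ν / Z(λ, ν), where ν > 1 corresponds to underdispersion and ν < 1 to overdispersion. A minimal sketch of its evaluation, with the normalizing constant approximated by truncating the series (the truncation point is an illustrative choice), follows.

      import numpy as np
      from scipy.special import gammaln

      def com_poisson_pmf(y, lam, nu, y_max=200):
          """COM-Poisson pmf lam**y / (y!)**nu / Z(lam, nu); Z is approximated by
          truncating the infinite series at y_max."""
          ys = np.arange(0, y_max + 1)
          log_terms = ys * np.log(lam) - nu * gammaln(ys + 1)
          log_z = np.logaddexp.reduce(log_terms)
          return np.exp(y * np.log(lam) - nu * gammaln(y + 1) - log_z)

      # nu = 1 recovers the ordinary Poisson; nu > 1 gives underdispersed counts
      print(com_poisson_pmf(np.arange(6), lam=3.0, nu=1.5).round(4))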

  12. In the linear quadratic model, the Poisson approximation and the Zaider-Minerbo formula agree on the ranking of tumor control probabilities, up to a critical cell birth rate.

    PubMed

    Ballhausen, Hendrik; Belka, Claus

    2017-03-01

    To provide a rule for the agreement or disagreement of the Poisson approximation (PA) and the Zaider-Minerbo formula (ZM) on the ranking of treatment alternatives in terms of tumor control probability (TCP) in the linear quadratic model. A general criterion involving a critical cell birth rate was formally derived. For demonstration, the criterion was applied to a distinct radiobiological model of fast growing head and neck tumors and a respective range of 22 conventional and nonconventional head and neck schedules. There is a critical cell birth rate b_crit below which PA and ZM agree on which one out of two alternative treatment schemes with single-cell survival curves S'(t) and S''(t) offers better TCP: [Formula: see text] For cell birth rates b above this critical cell birth rate, PA and ZM disagree if and only if b > b_crit > 0. In case of the exemplary head and neck schedules, out of 231 possible combinations, only 16 or 7% were found where PA and ZM disagreed. In all 231 cases the prediction of the criterion was numerically confirmed, and cell birth rates at crossovers between schedules matched the calculated critical cell birth rates. TCP estimated by PA and ZM almost never numerically coincide. Still, in many cases both formulas at least agree about which one out of two alternative fractionation schemes offers better TCP. In case of fast growing tumors featuring a high cell birth rate, however, ZM may suggest a re-evaluation of treatment options.

  13. A Generalized QMRA Beta-Poisson Dose-Response Model.

    PubMed

    Xie, Gang; Roiko, Anne; Stratton, Helen; Lemckert, Charles; Dunn, Peter K; Mengersen, Kerrie

    2016-10-01

    Quantitative microbial risk assessment (QMRA) is widely accepted for characterizing the microbial risks associated with food, water, and wastewater. Single-hit dose-response models are the most commonly used dose-response models in QMRA. Denoting P_I(d) as the probability of infection at a given mean dose d, a three-parameter generalized QMRA beta-Poisson dose-response model, P_I(d|α,β,r*), is proposed in which the minimum number of organisms required for causing infection, K_min, is not fixed, but a random variable following a geometric distribution with parameter 0 < r* ≤ 1.
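
    For orientation, the classical two-parameter beta-Poisson approximation that the generalized model extends is P_I(d) ≈ 1 - (1 + d/β)^(-α); a minimal sketch with illustrative parameter values is shown below (the three-parameter generalization with r* is not reproduced here).

      import numpy as np

      def beta_poisson(dose, alpha, beta):
          """Approximate beta-Poisson dose-response: probability of infection at mean dose d."""
          return 1.0 - (1.0 + np.asarray(dose) / beta) ** (-alpha)

      doses = np.array([1.0, 10.0, 100.0, 1000.0])
      print(beta_poisson(doses, alpha=0.25, beta=40.0).round(3))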

  14. Beyond the continuum: how molecular solvent structure affects electrostatics and hydrodynamics at solid-electrolyte interfaces.

    PubMed

    Bonthuis, Douwe Jan; Netz, Roland R

    2013-10-03

    Standard continuum theory fails to predict several key experimental results of electrostatic and electrokinetic measurements at aqueous electrolyte interfaces. In order to extend the continuum theory to include the effects of molecular solvent structure, we generalize the equations for electrokinetic transport to incorporate a space dependent dielectric profile, viscosity profile, and non-electrostatic interaction potential. All necessary profiles are extracted from atomistic molecular dynamics (MD) simulations. We show that the MD results for the ion-specific distribution of counterions at charged hydrophilic and hydrophobic interfaces are accurately reproduced using the dielectric profile of pure water and a non-electrostatic repulsion in an extended Poisson-Boltzmann equation. The distributions of Na(+) at both surface types and Cl(-) at hydrophilic surfaces can be modeled using linear dielectric response theory, whereas for Cl(-) at hydrophobic surfaces it is necessary to apply nonlinear response theory. The extended Poisson-Boltzmann equation reproduces the experimental values of the double-layer capacitance for many different carbon-based surfaces. In conjunction with a generalized hydrodynamic theory that accounts for a space dependent viscosity, the model captures the experimentally observed saturation of the electrokinetic mobility as a function of the bare surface charge density and the so-called anomalous double-layer conductivity. The two-scale approach employed here, combining MD simulations and continuum theory, constitutes a successful modeling scheme, providing basic insight into the molecular origins of the static and kinetic properties of charged surfaces, and allowing quantitative modeling at low computational cost.

  15. Symmetries of the Space of Linear Symplectic Connections

    NASA Astrophysics Data System (ADS)

    Fox, Daniel J. F.

    2017-01-01

    There is constructed a family of Lie algebras that act in a Hamiltonian way on the symplectic affine space of linear symplectic connections on a symplectic manifold. The associated equivariant moment map is a formal sum of the Cahen-Gutt moment map, the Ricci tensor, and a translational term. The critical points of a functional constructed from it interpolate between the equations for preferred symplectic connections and the equations for critical symplectic connections. The commutative algebra of formal sums of symmetric tensors on a symplectic manifold carries a pair of compatible Poisson structures, one induced from the canonical Poisson bracket on the space of functions on the cotangent bundle polynomial in the fibers, and the other induced from the algebraic fiberwise Schouten bracket on the symmetric algebra of each fiber of the cotangent bundle. These structures are shown to be compatible, and the required Lie algebras are constructed as central extensions of their linear combinations restricted to formal sums of symmetric tensors whose first order term is a multiple of the differential of its zeroth order term.

  16. Using the Gamma-Poisson Model to Predict Library Circulations.

    ERIC Educational Resources Information Center

    Burrell, Quentin L.

    1990-01-01

    Argues that the gamma mixture of Poisson processes, for all its perceived defects, can be used to make predictions regarding future library book circulations of a quality adequate for general management requirements. The use of the model is extensively illustrated with data from two academic libraries. (Nine references) (CLB)
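
    A minimal sketch of the gamma-Poisson (negative binomial) idea, assuming a simple method-of-moments fit and made-up circulation counts rather than the library data used in the article:

```python
import numpy as np
from scipy import stats

# Made-up circulation counts for a collection of items over one period.
counts = np.array([0] * 120 + [1] * 60 + [2] * 30 + [3] * 15 + [4] * 8 + [5] * 4)
m, v = counts.mean(), counts.var(ddof=1)

# If item demand rates follow a gamma(shape=a, rate=b) mixing distribution and
# counts are Poisson given the rate, the marginal distribution is negative binomial.
b = m / (v - m)            # method of moments; requires overdispersion (v > m)
a = m * b
k = np.arange(6)
pred = stats.nbinom.pmf(k, a, b / (b + 1.0))   # predicted circulation distribution
print(dict(zip(k.tolist(), np.round(pred, 3))))
```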

  17. A stochastic model for stationary dynamics of prices in real estate markets. A case of random intensity for Poisson moments of prices changes

    NASA Astrophysics Data System (ADS)

    Rusakov, Oleg; Laskin, Michael

    2017-06-01

    We consider a stochastic model of price changes in real estate markets. We suppose that in a book of prices the changes occur at the jump points of a Poisson process with a random intensity, i.e., the moments of change follow a point process of the Cox type. We calculate cumulative mathematical expectations and variances for the random intensity of this point process. When the random intensity process is a martingale, the cumulative variance grows linearly. We statistically process a number of observations of real estate prices and accept the hypothesis of linear growth for the estimates of both the cumulative average and the cumulative variance, for both the input and output prices recorded in the book of prices.
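
    A small simulation sketch of the kind of model described above: a Poisson process whose intensity is itself a positive martingale (a Cox process). All numerical values are illustrative assumptions, not estimates from real estate data.

```python
import numpy as np

rng = np.random.default_rng(1)

T, lam0 = 1000, 5.0
factors = rng.lognormal(mean=-0.02, sigma=0.2, size=T)   # i.i.d. factors with mean 1
lam = lam0 * np.cumprod(factors)                          # martingale intensity path
counts = rng.poisson(lam)                                 # price changes recorded per day
cum_counts = np.cumsum(counts)
print(cum_counts[::200])   # cumulative number of recorded price changes over time
```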

  18. Statistical analysis of excitation energies in actinide and rare-earth nuclei

    NASA Astrophysics Data System (ADS)

    Levon, A. I.; Magner, A. G.; Radionov, S. V.

    2018-04-01

    Statistical analysis of distributions of the collective states in actinide and rare-earth nuclei is performed in terms of the nearest-neighbor spacing distribution (NNSD). Several approximations, such as the linear approach to the level repulsion density and that suggested by Brody to the NNSDs were applied for the analysis. We found an intermediate character of the experimental spectra between the order and the chaos for a number of rare-earth and actinide nuclei. The spectra are closer to the Wigner distribution for energies limited by 3 MeV, and to the Poisson distribution for data including higher excitation energies and higher spins. The latter result is in agreement with the theoretical calculations. These features are confirmed by the cumulative distributions, where the Wigner contribution dominates at smaller spacings while the Poisson one is more important at larger spacings, and our linear approach improves the comparison with experimental data at all desired spacings.
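
    To illustrate the comparison used above, the following sketch computes an empirical nearest-neighbor spacing distribution from a synthetic level sequence and measures its distance to the Poisson and Wigner forms; the levels are random stand-ins, not the experimental spectra analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
levels = np.sort(rng.uniform(0.0, 3.0, size=300))   # toy spectrum (MeV)
s = np.diff(levels)
s = s / s.mean()                                    # unfolded spacings, <s> = 1

grid = np.linspace(0.0, 4.0, 400)
emp_cdf = np.searchsorted(np.sort(s), grid, side="right") / s.size
poisson_cdf = 1.0 - np.exp(-grid)                   # Poisson (uncorrelated) spacings
wigner_cdf = 1.0 - np.exp(-np.pi * grid**2 / 4.0)   # Wigner surmise

print("L1 distance to Poisson:", np.trapz(np.abs(emp_cdf - poisson_cdf), grid))
print("L1 distance to Wigner :", np.trapz(np.abs(emp_cdf - wigner_cdf), grid))
```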

  19. A quasi-Monte-Carlo comparison of parametric and semiparametric regression methods for heavy-tailed and non-normal data: an application to healthcare costs.

    PubMed

    Jones, Andrew M; Lomas, James; Moore, Peter T; Rice, Nigel

    2016-10-01

    We conduct a quasi-Monte-Carlo comparison of the recent developments in parametric and semiparametric regression methods for healthcare costs, both against each other and against standard practice. The population of English National Health Service hospital in-patient episodes for the financial year 2007-2008 (summed for each patient) is randomly divided into two equally sized subpopulations to form an estimation set and a validation set. Evaluating out-of-sample using the validation set, a conditional density approximation estimator shows considerable promise in forecasting conditional means, performing best for accuracy of forecasting and among the best four for bias and goodness of fit. The best performing model for bias is linear regression with square-root-transformed dependent variables, whereas a generalized linear model with square-root link function and Poisson distribution performs best in terms of goodness of fit. Commonly used models utilizing a log-link are shown to perform badly relative to other models considered in our comparison.
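
    A minimal sketch, using simulated stand-in cost data, of the specification singled out above for goodness of fit: a GLM with Poisson variance function and square-root link. statsmodels is assumed (on versions before 0.14 the link class is links.sqrt), and the covariates and gamma-distributed costs are invented for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 2000
age = rng.uniform(20, 90, n)
comorbid = rng.integers(0, 2, n)
mu = (5 + 0.3 * age + 8 * comorbid) ** 2         # true mean on the cost scale
cost = rng.gamma(shape=2.0, scale=mu / 2.0)       # heavy-tailed positive outcome

X = sm.add_constant(np.column_stack([age, comorbid]))
model = sm.GLM(cost, X, family=sm.families.Poisson(link=sm.families.links.Sqrt()))
res = model.fit(scale="X2")                       # quasi-Poisson (Pearson) scale
print(res.params)
```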

  20. The Poisson model limits in NBA basketball: Complexity in team sports

    NASA Astrophysics Data System (ADS)

    Martín-González, Juan Manuel; de Saá Guerra, Yves; García-Manso, Juan Manuel; Arriaza, Enrique; Valverde-Estévez, Teresa

    2016-12-01

    Team sports are frequently studied by researchers. There is a presumption that scoring in basketball is a random process that can be described using the Poisson model. Basketball is a collaboration-opposition sport, where the non-linear local interactions among players are reflected in the evolution of the score that ultimately determines the winner. In the NBA, the outcomes of close games are often decided in the last minute, where fouls play a main role. We examined 6130 NBA games in order to analyze the time intervals between baskets and the scoring dynamics. Most numbers of baskets (n) over a time interval (ΔT) follow a Poisson distribution, but some (e.g., ΔT = 10 s, n > 3) behave as a power law. The Poisson distribution includes most baskets in any game, in most game situations, but in close games in the last minute the numbers of events are distributed following a power law. The number of events can be adjusted by a mixture of two distributions. In close games, both teams try to maintain their advantage solely in order to reach the last minute: a completely different game. For this reason, we propose to use the Poisson model as a reference. The complex dynamics will emerge from the limits of this model.

  1. Online sequential Monte Carlo smoother for partially observed diffusion processes

    NASA Astrophysics Data System (ADS)

    Gloaguen, Pierre; Étienne, Marie-Pierre; Le Corff, Sylvain

    2018-12-01

    This paper introduces a new algorithm to approximate smoothed additive functionals of partially observed diffusion processes. This method relies on a new sequential Monte Carlo method which allows such approximations to be computed online, i.e., as the observations are received, and with a computational complexity growing linearly with the number of Monte Carlo samples. The original algorithm cannot be used in the case of partially observed stochastic differential equations since the transition density of the latent data is usually unknown. We prove that it may be extended to partially observed continuous processes by replacing this unknown quantity with an unbiased estimator obtained for instance using general Poisson estimators. This estimator is proved to be consistent and its performance is illustrated using data from two models.

  2. Dynamic fluctuations in single-molecule biophysics experiments. Comment on "Extracting physics of life at the molecular level: A review of single-molecule data analyses" by W. Colomb and S.K. Sarkar

    NASA Astrophysics Data System (ADS)

    Krapf, Diego

    2015-06-01

    Single-molecule biophysics includes the study of isolated molecules and that of individual molecules within living cells. In both cases, dynamic fluctuations at the nanoscale play a critical role. Colomb and Sarkar emphasize how different noise sources affect the analysis of single molecule data [1]. Fluctuations in biomolecular systems arise from two very different mechanisms. On one hand thermal fluctuations are a predominant feature in the behavior of individual molecules. On the other hand, non-Gaussian fluctuations can arise from inter- and intramolecular interactions [2], spatial heterogeneities [3], non-Poisson external perturbations [4] and complex non-linear dynamics in general [5,6].

  3. BFV-BRST analysis of equivalence between noncommutative and ordinary gauge theories

    NASA Astrophysics Data System (ADS)

    Dayi, O. F.

    2000-05-01

    The constrained Hamiltonian structure of noncommutative gauge theory for the gauge group U(1) is discussed. Constraints are shown to be first class, although they do not give an Abelian algebra in terms of Poisson brackets. The related BFV-BRST charge gives a vanishing generalized Poisson bracket by itself due to the associativity of the *-product. Equivalence of noncommutative and ordinary gauge theories is formulated in generalized phase space by using the BFV-BRST charge, and a solution is obtained. Gauge fixing is discussed.

  4. A Review of Multivariate Distributions for Count Data Derived from the Poisson Distribution

    PubMed Central

    Inouye, David; Yang, Eunho; Allen, Genevera; Ravikumar, Pradeep

    2017-01-01

    The Poisson distribution has been widely studied and used for modeling univariate count-valued data. Multivariate generalizations of the Poisson distribution that permit dependencies, however, have been far less popular. Yet, real-world high-dimensional count-valued data found in word counts, genomics, and crime statistics, for example, exhibit rich dependencies, and motivate the need for multivariate distributions that can appropriately model this data. We review multivariate distributions derived from the univariate Poisson, categorizing these models into three main classes: 1) where the marginal distributions are Poisson, 2) where the joint distribution is a mixture of independent multivariate Poisson distributions, and 3) where the node-conditional distributions are derived from the Poisson. We discuss the development of multiple instances of these classes and compare the models in terms of interpretability and theory. Then, we empirically compare multiple models from each class on three real-world datasets that have varying data characteristics from different domains, namely traffic accident data, biological next generation sequencing data, and text data. These empirical experiments develop intuition about the comparative advantages and disadvantages of each class of multivariate distribution that was derived from the Poisson. Finally, we suggest new research directions as explored in the subsequent discussion section. PMID:28983398

  5. Poisson regression models outperform the geometrical model in estimating the peak-to-trough ratio of seasonal variation: a simulation study.

    PubMed

    Christensen, A L; Lundbye-Christensen, S; Dethlefsen, C

    2011-12-01

    Several statistical methods of assessing seasonal variation are available. Brookhart and Rothman [3] proposed a second-order moment-based estimator based on the geometrical model derived by Edwards [1], and reported that this estimator is superior in estimating the peak-to-trough ratio of seasonal variation compared with Edwards' estimator with respect to bias and mean squared error. Alternatively, seasonal variation may be modelled using a Poisson regression model, which provides flexibility in modelling the pattern of seasonal variation and adjustment for covariates. Based on a Monte Carlo simulation study, three estimators, one based on the geometrical model and two based on log-linear Poisson regression models, were evaluated with regard to bias and standard deviation (SD). We evaluated the estimators on data simulated according to schemes varying in seasonal variation and the presence of a secular trend. All methods and analyses in this paper are available in the R package Peak2Trough [13]. Applying a Poisson regression model resulted in lower absolute bias and SD for data simulated according to the corresponding model assumptions. Poisson regression models had lower bias and SD than the geometrical model for data simulated to deviate from the corresponding model assumptions. This simulation study encourages the use of Poisson regression models in estimating the peak-to-trough ratio of seasonal variation as opposed to the geometrical model. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
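
    A minimal sketch of the log-linear Poisson approach to seasonal variation, using simulated monthly counts: sine and cosine terms capture the seasonal pattern, and the peak-to-trough ratio is recovered as exp(2A), where A is the amplitude of the fitted harmonic. statsmodels is assumed; this illustrates the general idea only and is not the estimators or the Peak2Trough code from the paper.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
month = np.arange(120)                          # ten years of monthly counts
angle = 2 * np.pi * (month % 12) / 12
true_rate = np.exp(3.0 + 0.3 * np.cos(angle))   # true peak-to-trough ratio exp(0.6)
y = rng.poisson(true_rate)

X = sm.add_constant(np.column_stack([np.sin(angle), np.cos(angle)]))
res = sm.GLM(y, X, family=sm.families.Poisson()).fit()
amplitude = np.hypot(res.params[1], res.params[2])
print("estimated peak-to-trough ratio:", np.exp(2 * amplitude))
```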

  6. Stochastic modeling of soil salinity

    NASA Astrophysics Data System (ADS)

    Suweis, S.; Porporato, A. M.; Daly, E.; van der Zee, S.; Maritan, A.; Rinaldo, A.

    2010-12-01

    A minimalist stochastic model of primary soil salinity is proposed, in which the rate of soil salinization is determined by the balance between dry and wet salt deposition and the intermittent leaching events caused by rainfall events. The equations for the probability density functions of salt mass and concentration are found by reducing the coupled soil moisture and salt mass balance equations to a single stochastic differential equation (generalized Langevin equation) driven by multiplicative Poisson noise. Generalized Langevin equations with multiplicative white Poisson noise pose the usual Ito (I) or Stratonovich (S) prescription dilemma. Different interpretations lead to different results, and choosing between the I and S prescriptions is crucial to describe correctly the dynamics of the model systems. We show how this choice can be determined by physical information about the timescales involved in the process. We also show that when the multiplicative noise is at most linear in the random variable one prescription can be made equivalent to the other by a suitable transformation in the jump probability distribution. We then apply these results to the generalized Langevin equation that drives the salt mass dynamics. The stationary analytical solutions for the probability density functions of salt mass and concentration provide insight into the interplay of the main soil, plant and climate parameters responsible for long-term soil salinization. In particular, they show the existence of two distinct regimes, one where the mean salt mass remains nearly constant (or decreases) with increasing rainfall frequency, and another where mean salt content increases markedly with increasing rainfall frequency. As a result, relatively small reductions of rainfall in drier climates may entail dramatic shifts in long-term soil salinization trends, with significant consequences, e.g., for climate change impacts on rain-fed agriculture.

  7. Extended generalized geometry and a DBI-type effective action for branes ending on branes

    NASA Astrophysics Data System (ADS)

    Jurčo, Branislav; Schupp, Peter; Vysoký, Jan

    2014-08-01

    Starting from the Nambu-Goto bosonic membrane action, we develop a geometric description suitable for p-brane backgrounds. With tools of generalized geometry we derive the pertinent generalization of the string open-closed relations to the p-brane case. Nambu-Poisson structures are used in this context to generalize the concept of semi-classical noncommutativity of D-branes governed by a Poisson tensor. We find a natural description of the correspondence of recently proposed commutative and noncommutative versions of an effective action for p-branes ending on a p '-brane. We calculate the power series expansion of the action in background independent gauge. Leading terms in the double scaling limit are given by a generalization of a (semi-classical) matrix model.

  8. Filtrations on Springer fiber cohomology and Kostka polynomials

    NASA Astrophysics Data System (ADS)

    Bellamy, Gwyn; Schedler, Travis

    2018-03-01

    We prove a conjecture which expresses the bigraded Poisson-de Rham homology of the nilpotent cone of a semisimple Lie algebra in terms of the generalized (one-variable) Kostka polynomials, via a formula suggested by Lusztig. This allows us to construct a canonical family of filtrations on the flag variety cohomology, and hence on irreducible representations of the Weyl group, whose Hilbert series are given by the generalized Kostka polynomials. We deduce consequences for the cohomology of all Springer fibers. In particular, this computes the grading on the zeroth Poisson homology of all classical finite W-algebras, as well as the filtration on the zeroth Hochschild homology of all quantum finite W-algebras, and we generalize to all homology degrees. As a consequence, we deduce a conjecture of Proudfoot on symplectic duality, relating in type A the Poisson homology of Slodowy slices to the intersection cohomology of nilpotent orbit closures. In the last section, we give an analogue of our main theorem in the setting of mirabolic D-modules.

  9. Poisson Growth Mixture Modeling of Intensive Longitudinal Data: An Application to Smoking Cessation Behavior

    ERIC Educational Resources Information Center

    Shiyko, Mariya P.; Li, Yuelin; Rindskopf, David

    2012-01-01

    Intensive longitudinal data (ILD) have become increasingly common in the social and behavioral sciences; count variables, such as the number of daily smoked cigarettes, are frequently used outcomes in many ILD studies. We demonstrate a generalized extension of growth mixture modeling (GMM) to Poisson-distributed ILD for identifying qualitatively…

  10. Susceptibility to Heat-Related Fluid and Electrolyte Imbalance Emergency Department Visits in Atlanta, Georgia, USA.

    PubMed

    Heidari, Leila; Winquist, Andrea; Klein, Mitchel; O'Lenick, Cassandra; Grundstein, Andrew; Ebelt Sarnat, Stefanie

    2016-10-02

    Identification of populations susceptible to heat effects is critical for targeted prevention and more accurate risk assessment. Fluid and electrolyte imbalance (FEI) may provide an objective indicator of heat morbidity. Data on daily ambient temperature and FEI emergency department (ED) visits were collected in Atlanta, Georgia, USA during 1993-2012. Associations of warm-season same-day temperatures and FEI ED visits were estimated using Poisson generalized linear models. Analyses explored associations between FEI ED visits and various temperature metrics (maximum, minimum, average, and diurnal change in ambient temperature, apparent temperature, and heat index) modeled using linear, quadratic, and cubic terms to allow for non-linear associations. Effect modification by potential determinants of heat susceptibility (sex; race; comorbid congestive heart failure, kidney disease, and diabetes; and neighborhood poverty and education levels) was assessed via stratification. Higher warm-season ambient temperature was significantly associated with FEI ED visits, regardless of temperature metric used. Stratified analyses suggested heat-related risks for all populations, but particularly for males. This work highlights the utility of FEI as an indicator of heat morbidity, the health threat posed by warm-season temperatures, and the importance of considering susceptible populations in heat-health research.

  12. Highly efficient and exact method for parallelization of grid-based algorithms and its implementation in DelPhi

    PubMed Central

    Li, Chuan; Li, Lin; Zhang, Jie; Alexov, Emil

    2012-01-01

    The Gauss-Seidel method is a standard iterative numerical method widely used to solve a system of equations and, in general, is more efficient compared to other iterative methods, such as the Jacobi method. However, standard implementation of the Gauss-Seidel method restricts its utilization in parallel computing due to its requirement of using updated neighboring values (i.e., in the current iteration) as soon as they are available. Here we report an efficient and exact (not requiring assumptions) method to parallelize iterations and to reduce the computational time as a linear/nearly linear function of the number of CPUs. In contrast to other existing solutions, our method does not require any assumptions and is equally applicable for solving linear and nonlinear equations. This approach is implemented in the DelPhi program, which is a finite difference Poisson-Boltzmann equation solver to model electrostatics in molecular biology. This development makes the iterative procedure for obtaining the electrostatic potential distribution in the parallelized DelPhi severalfold faster than that in the serial code. Further, we demonstrate the advantages of the new parallelized DelPhi by computing the electrostatic potential and the corresponding energies of large supramolecular structures. PMID:22674480
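
    For orientation, the sketch below shows a standard red-black (checkerboard) Gauss-Seidel sweep for a 2-D Poisson problem, a textbook way of exposing parallelism in Gauss-Seidel updates. It only illustrates the general idea of updating independent subsets of grid points together and is not the specific DelPhi parallelization scheme reported above.

```python
import numpy as np

n, h = 64, 1.0 / 65
u = np.zeros((n + 2, n + 2))      # solution including zero Dirichlet boundary
f = np.ones((n + 2, n + 2))       # unit source term for -Laplace(u) = f

ii, jj = np.meshgrid(np.arange(1, n + 1), np.arange(1, n + 1), indexing="ij")
for sweep in range(200):
    for color in (0, 1):          # points of one color do not depend on each other
        mask = ((ii + jj) % 2 == color)
        u[1:-1, 1:-1][mask] = 0.25 * (
            u[:-2, 1:-1][mask] + u[2:, 1:-1][mask]
            + u[1:-1, :-2][mask] + u[1:-1, 2:][mask]
            + h * h * f[1:-1, 1:-1][mask]
        )
print(u[n // 2, n // 2])          # center value after the sweeps
```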

  13. A novel method to predict current voltage characteristics of positive corona discharges based on a perturbation technique. I. Local analysis

    NASA Astrophysics Data System (ADS)

    Shibata, Hisaichi; Takaki, Ryoji

    2017-11-01

    A novel method to compute current-voltage characteristics (CVCs) of direct current positive corona discharges is formulated based on a perturbation technique. We use linearized fluid equations coupled with the linearized Poisson's equation. The Townsend relation is assumed in order to predict CVCs away from the linearization point. We choose coaxial cylinders as a test problem, and we have successfully predicted parameters which determine CVCs for arbitrary inner and outer radii. It is also confirmed that the proposed method essentially does not induce numerical instabilities.

  14. Adjusting Expected Mortality Rates Using Information From a Control Population: An Example Using Socioeconomic Status.

    PubMed

    Bower, Hannah; Andersson, Therese M-L; Crowther, Michael J; Dickman, Paul W; Lambe, Mats; Lambert, Paul C

    2018-04-01

    Expected or reference mortality rates are commonly used in the calculation of measures such as relative survival in population-based cancer survival studies and standardized mortality ratios. These expected rates are usually presented according to age, sex, and calendar year. In certain situations, stratification of expected rates by other factors is required to avoid potential bias if interest lies in quantifying measures according to such factors as, for example, socioeconomic status. If data are not available on a population level, information from a control population could be used to adjust expected rates. We have presented two approaches for adjusting expected mortality rates using information from a control population: a Poisson generalized linear model and a flexible parametric survival model. We used a control group from BCBaSe-a register-based, matched breast cancer cohort in Sweden with diagnoses between 1992 and 2012-to illustrate the two methods using socioeconomic status as a risk factor of interest. Results showed that Poisson and flexible parametric survival approaches estimate similar adjusted mortality rates according to socioeconomic status. Additional uncertainty involved in the methods to estimate stratified, expected mortality rates described in this study can be accounted for using a parametric bootstrap, but this might make little difference if using a large control population.

  15. A nonlinear equation for ionic diffusion in a strong binary electrolyte

    PubMed Central

    Ghosal, Sandip; Chen, Zhen

    2010-01-01

    The problem of the one-dimensional electro-diffusion of ions in a strong binary electrolyte is considered. The mathematical description, known as the Poisson–Nernst–Planck (PNP) system, consists of a diffusion equation for each species augmented by transport owing to a self-consistent electrostatic field determined by the Poisson equation. This description is also relevant to other important problems in physics, such as electron and hole diffusion across semiconductor junctions and the diffusion of ions in plasmas. If concentrations do not vary appreciably over distances of the order of the Debye length, the Poisson equation can be replaced by the condition of local charge neutrality first introduced by Planck. It can then be shown that both species diffuse at the same rate with a common diffusivity that is intermediate between that of the slow and fast species (ambipolar diffusion). Here, we derive a more general theory by exploiting the ratio of the Debye length to a characteristic length scale as a small asymptotic parameter. It is shown that the concentration of either species may be described by a nonlinear partial differential equation that provides a better approximation than the classical linear equation for ambipolar diffusion, but reduces to it in the appropriate limit. PMID:21818176

  16. A multiscale filter for noise reduction of low-dose cone beam projections

    NASA Astrophysics Data System (ADS)

    Yao, Weiguang; Farr, Jonathan B.

    2015-08-01

    The Poisson or compound Poisson process governs the randomness of photon fluence in cone beam computed tomography (CBCT) imaging systems. The probability density function depends on the mean (noiseless) fluence at a given detector. This dependence indicates the natural requirement of multiscale filters to smooth noise while preserving structures of the imaged object on the low-dose cone beam projection. In this work, we used a Gaussian filter, exp(-x^2 / 2σ_f^2), as the multiscale filter to de-noise the low-dose cone beam projections. We analytically obtained the expression for σ_f, which represents the scale of the filter, by minimizing the local noise-to-signal ratio. We analytically derived the variance of the residual noise from the Poisson or compound Poisson processes after Gaussian filtering. From the derived analytical form of the variance of residual noise, the optimal σ_f^2 is proved to be proportional to the noiseless fluence and modulated by the local structure strength, expressed as the linear fitting error of the structure. A strategy was used to obtain a reliable linear fitting error: smoothing the projection along the longitudinal direction to calculate the linear fitting error along the lateral direction, and vice versa. The performance of our multiscale filter was examined on low-dose cone beam projections of a Catphan phantom and a head-and-neck patient. After applying the filter to the Catphan phantom projections scanned with pulse time 4 ms, the number of visible line pairs was similar to that scanned with 16 ms, and the contrast-to-noise ratio of the inserts was higher than that scanned with 16 ms by about 64% on average. For the simulated head-and-neck patient projections with pulse time 4 ms, the visibility of soft tissue structures in the patient was comparable to that scanned with 20 ms. The image processing took less than 0.5 s per projection with 1024 × 768 pixels.
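
    The sketch below illustrates the general idea of an adaptive Gaussian filter whose per-pixel scale grows with the estimated fluence and shrinks where the local linear-fit error is large. The specific modulation used here is a guess for illustration and does not reproduce the analytical expression derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(8)
signal = 200 + 150 * (np.arange(512) > 256)        # a step edge in the fluence
noisy = rng.poisson(signal).astype(float)

half = 5                                           # half-width of the local window
filtered = np.empty_like(noisy)
for i in range(noisy.size):
    lo, hi = max(0, i - half), min(noisy.size, i + half + 1)
    win = noisy[lo:hi]
    xs = np.arange(lo, hi)
    resid = win - np.polyval(np.polyfit(xs, win, 1), xs)   # local linear-fit error
    sigma2 = win.mean() / (1.0 + resid.var())               # assumed scale modulation
    w = np.exp(-0.5 * (xs - i) ** 2 / max(sigma2, 1e-6))
    filtered[i] = np.sum(w * win) / np.sum(w)
print(filtered[250:260].round(1))                  # values around the edge
```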

  17. Universal Poisson Statistics of mRNAs with Complex Decay Pathways.

    PubMed

    Thattai, Mukund

    2016-01-19

    Messenger RNA (mRNA) dynamics in single cells are often modeled as a memoryless birth-death process with a constant probability per unit time that an mRNA molecule is synthesized or degraded. This predicts a Poisson steady-state distribution of mRNA number, in close agreement with experiments. This is surprising, since mRNA decay is known to be a complex process. The paradox is resolved by realizing that the Poisson steady state generalizes to arbitrary mRNA lifetime distributions. A mapping between mRNA dynamics and queueing theory highlights an identifiability problem: a measured Poisson steady state is consistent with a large variety of microscopic models. Here, I provide a rigorous and intuitive explanation for the universality of the Poisson steady state. I show that the mRNA birth-death process and its complex decay variants all take the form of the familiar Poisson law of rare events, under a nonlinear rescaling of time. As a corollary, not only steady-states but also transients are Poisson distributed. Deviations from the Poisson form occur only under two conditions, promoter fluctuations leading to transcriptional bursts or nonindependent degradation of mRNA molecules. These results place severe limits on the power of single-cell experiments to probe microscopic mechanisms, and they highlight the need for single-molecule measurements. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
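
    A small Gillespie-style simulation of the memoryless birth-death model discussed above, with illustrative rates: the sampled steady-state mean and variance both approach k/gamma, as expected for a Poisson law.

```python
import numpy as np

rng = np.random.default_rng(5)

k, gamma = 10.0, 1.0                   # synthesis and degradation rates (illustrative)
t, m = 0.0, 0
t_end, next_sample, dt_sample = 5000.0, 50.0, 1.0
samples = []
while t < t_end:
    rate = k + gamma * m
    t_next = t + rng.exponential(1.0 / rate)
    while next_sample < t_next and next_sample < t_end:
        samples.append(m)              # copy number is constant between events
        next_sample += dt_sample
    if rng.random() < k / rate:
        m += 1                         # synthesis event
    else:
        m -= 1                         # degradation event
    t = t_next

samples = np.asarray(samples)
print(samples.mean(), samples.var())   # both close to k/gamma = 10 for a Poisson law
```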

  18. Bayesian dynamic modeling of time series of dengue disease case counts.

    PubMed

    Martínez-Bello, Daniel Adyro; López-Quílez, Antonio; Torres-Prieto, Alexander

    2017-07-01

    The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables, in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the model's short-term performance for predicting dengue cases. The methodology uses dynamic Poisson log-link models including constant or time-varying coefficients. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov Chain Monte Carlo simulations for parameter estimation, and the deviance information criterion statistic (DIC) for model selection. We assessed the short-term predictive performance of the selected final model at several time points within the study period using the mean absolute percentage error. The results showed that the best model included first-order random walk time-varying coefficients for the calendar trend and first-order random walk time-varying coefficients for the meteorological variables. Besides the computational challenges, interpreting the results implies a complete analysis of the time series of dengue with respect to the parameter estimates of the meteorological effects. We found small values of the mean absolute percentage error for one- or two-week out-of-sample predictions at most prediction points, associated with low-volatility periods in the dengue counts. We discuss the advantages and limitations of the dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables. The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful models for decision-making in public health.

  19. A Matlab-based finite-difference solver for the Poisson problem with mixed Dirichlet-Neumann boundary conditions

    NASA Astrophysics Data System (ADS)

    Reimer, Ashton S.; Cheviakov, Alexei F.

    2013-03-01

    A Matlab-based finite-difference numerical solver for the Poisson equation for a rectangle and a disk in two dimensions, and a spherical domain in three dimensions, is presented. The solver is optimized for handling an arbitrary combination of Dirichlet and Neumann boundary conditions, and allows for full user control of mesh refinement. The solver routines utilize effective and parallelized sparse vector and matrix operations. Computations exhibit high speeds, numerical stability with respect to mesh size and mesh refinement, and acceptable error values even on desktop computers. Catalogue identifier: AENQ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENQ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License v3.0 No. of lines in distributed program, including test data, etc.: 102793 No. of bytes in distributed program, including test data, etc.: 369378 Distribution format: tar.gz Programming language: Matlab 2010a. Computer: PC, Macintosh. Operating system: Windows, OSX, Linux. RAM: 8 GB (8,589,934,592 bytes) Classification: 4.3. Nature of problem: To solve the Poisson problem in a standard domain with “patchy surface”-type (strongly heterogeneous) Neumann/Dirichlet boundary conditions. Solution method: Finite difference with mesh refinement. Restrictions: Spherical domain in 3D; rectangular domain or a disk in 2D. Unusual features: Choice between mldivide/iterative solver for the solution of the large system of linear algebraic equations that arises. Full user control of Neumann/Dirichlet boundary conditions and mesh refinement. Running time: Depending on the number of points taken and the geometry of the domain, the routine may take from less than a second to several hours to execute.

  20. Extending the Solvation-Layer Interface Condition Continuum Electrostatic Model to a Linearized Poisson-Boltzmann Solvent.

    PubMed

    Molavi Tabrizi, Amirhossein; Goossens, Spencer; Mehdizadeh Rahimi, Ali; Cooper, Christopher D; Knepley, Matthew G; Bardhan, Jaydeep P

    2017-06-13

    We extend the linearized Poisson-Boltzmann (LPB) continuum electrostatic model for molecular solvation to address charge-hydration asymmetry. Our new solvation-layer interface condition (SLIC)/LPB corrects for first-shell response by perturbing the traditional continuum-theory interface conditions at the protein-solvent and the Stern-layer interfaces. We also present a GPU-accelerated treecode implementation capable of simulating large proteins, and our results demonstrate that the new model exhibits significant accuracy improvements over traditional LPB models, while reducing the number of fitting parameters from dozens (atomic radii) to just five parameters, which have physical meanings related to first-shell water behavior at an uncharged interface. In particular, atom radii in the SLIC model are not optimized but uniformly scaled from their Lennard-Jones radii. Compared to explicit-solvent free-energy calculations of individual atoms in small molecules, SLIC/LPB is significantly more accurate than standard parametrizations (RMS error 0.55 kcal/mol for SLIC, compared to RMS error of 3.05 kcal/mol for standard LPB). On parametrizing the electrostatic model with a simple nonpolar component for total molecular solvation free energies, our model predicts octanol/water transfer free energies with an RMS error 1.07 kcal/mol. A more detailed assessment illustrates that standard continuum electrostatic models reproduce total charging free energies via a compensation of significant errors in atomic self-energies; this finding offers a window into improving the accuracy of Generalized-Born theories and other coarse-grained models. Most remarkably, the SLIC model also reproduces positive charging free energies for atoms in hydrophobic groups, whereas standard PB models are unable to generate positive charging free energies regardless of the parametrized radii. The GPU-accelerated solver is freely available online, as is a MATLAB implementation.

  1. Some case studies of skewed (and other ab-normal) data distributions arising in low-level environmental research.

    PubMed

    Currie, L A

    2001-07-01

    Three general classes of skewed data distributions have been encountered in research on background radiation, chemical and radiochemical blanks, and low levels of 85Kr and 14C in the atmosphere and the cryosphere. The first class of skewed data can be considered theoretically, or fundamentally, skewed. It is typified by the exponential distribution of inter-arrival times for nuclear counting events for a Poisson process. As part of a study of the nature of low-level (anti-coincidence) Geiger-Muller counter background radiation, tests were performed on the Poisson distribution of counts, the uniform distribution of arrival times, and the exponential distribution of inter-arrival times. The real laboratory system, of course, failed the (inter-arrival time) test, for very interesting reasons linked to the physics of the measurement process. The second, computationally skewed, class relates to skewness induced by non-linear transformations. It is illustrated by non-linear concentration estimates from inverse calibration, and bivariate blank corrections for low-level 14C-12C aerosol data that led to highly asymmetric uncertainty intervals for the biomass carbon contribution to urban "soot". The third, environmentally skewed, data class relates to a universal problem for the detection of excursions above blank or baseline levels: namely, the widespread occurrence of ab-normal distributions of environmental and laboratory blanks. This is illustrated by the search for fundamental factors that lurk behind skewed frequency distributions of sulfur laboratory blanks and 85Kr environmental baselines, and the application of robust statistical procedures for reliable detection decisions in the face of skewed isotopic carbon procedural blanks with few degrees of freedom.

  2. Study on longitudinal dispersion relation in one-dimensional relativistic plasma: Linear theory and Vlasov simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, H.; Wu, S. Z.; Zhou, C. T.

    2013-09-15

    The dispersion relation of one-dimensional longitudinal plasma waves in relativistic homogeneous plasmas is investigated with both linear theory and Vlasov simulation in this paper. From the Vlasov-Poisson equations, the linear dispersion relation is derived for the proper one-dimensional Jüttner distribution. A numerically obtained linear dispersion relation as well as an approximate formula for the plasma wave frequency in the long wavelength limit is given. The dispersion of the longitudinal wave is also simulated with a relativistic Vlasov code. The real and imaginary parts of the dispersion relation are well studied by varying wave number and plasma temperature. Simulation results are in agreement with established linear theory.

  3. Protein-ion binding process on finite macromolecular concentration. A Poisson-Boltzmann and Monte Carlo study.

    PubMed

    de Carvalho, Sidney Jurado; Fenley, Márcia O; da Silva, Fernando Luís Barroso

    2008-12-25

    Electrostatic interactions are one of the key driving forces for protein-ligand complexation. Different levels of theoretical modeling for such processes are available in the literature. Most studies in the molecular biology field are performed within numerical solutions of the Poisson-Boltzmann equation and the dielectric continuum model framework. In such dielectric continuum models, there are two pivotal questions: (a) how the protein dielectric medium should be modeled, and (b) what protocol should be used when solving this effective Hamiltonian. By means of Monte Carlo (MC) and Poisson-Boltzmann (PB) calculations, we define the applicability of the PB approach with linear and nonlinear responses for macromolecular electrostatic interactions in electrolyte solution, revealing some physical mechanisms and limitations behind it, especially due to the increase of both macromolecular charge and concentration outside the strong coupling regime. A discrepancy between PB and MC for binding constant shifts is shown and explained in terms of the manner in which PB approximates the excess chemical potentials of the ligand, and not as a consequence of the nonlinear thermal treatment and/or explicit ion-ion interactions as could be argued. Our findings also show that the nonlinear PB predictions with a low dielectric response reproduce well the pK shift calculations carried out with a uniform dielectric model. This confirms and completes previous results obtained by both MC and linear PB calculations.

  4. Non-Linear Concentration-Response Relationships between Ambient Ozone and Daily Mortality.

    PubMed

    Bae, Sanghyuk; Lim, Youn-Hee; Kashima, Saori; Yorifuji, Takashi; Honda, Yasushi; Kim, Ho; Hong, Yun-Chul

    2015-01-01

    Ambient ozone (O3) concentration has been reported to be significantly associated with mortality. However, the linearity of the relationships and the presence of a threshold have been controversial. The aim of the present study was to examine the concentration-response relationship and threshold of the association between ambient O3 concentration and non-accidental mortality in 13 Japanese and Korean cities from 2000 to 2009. We selected Japanese and Korean cities which have populations of over 1 million. We constructed Poisson regression models adjusting for daily mean temperature, daily mean PM10, humidity, time trend, season, year, day of the week, holidays and yearly population. The association between O3 concentration and mortality was examined using linear, spline and linear-threshold models. The thresholds were estimated for each city by constructing linear-threshold models. We also examined the city-combined association using a generalized additive mixed model. The mean O3 concentration did not differ greatly between Korea and Japan, at 26.2 ppb and 24.2 ppb, respectively. Seven out of 13 cities showed better fits for the spline model compared with the linear model, supporting non-linear relationships between O3 concentration and mortality. All of the 7 cities showed J- or U-shaped associations suggesting the existence of thresholds. The range of city-specific thresholds was from 11 to 34 ppb. The city-combined analysis also showed a non-linear association with a threshold around 30-40 ppb. We observed non-linear concentration-response relationships with thresholds between daily mean ambient O3 concentration and the daily number of non-accidental deaths in Japanese and Korean cities.
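
    As an illustration of the linear-threshold specification mentioned above, the sketch below profiles a Poisson GLM with a hinge term max(O3 - threshold, 0) over a grid of candidate thresholds and keeps the one with the smallest deviance. statsmodels is assumed, the daily data are simulated, and the covariate set is deliberately minimal.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 1500
o3 = rng.uniform(0, 80, n)                        # daily mean O3 (ppb)
temp = rng.normal(20, 8, n)
true_mu = np.exp(3.0 + 0.01 * temp + 0.004 * np.clip(o3 - 30, 0, None))
deaths = rng.poisson(true_mu)

best = None
for thr in range(5, 60, 5):
    X = sm.add_constant(np.column_stack([temp, np.clip(o3 - thr, 0, None)]))
    res = sm.GLM(deaths, X, family=sm.families.Poisson()).fit()
    if best is None or res.deviance < best[1]:
        best = (thr, res.deviance)
print("estimated threshold (ppb):", best[0])
```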

  5. Numerical Analysis of 2-D and 3-D MHD Flows Relevant to Fusion Applications

    DOE PAGES

    Khodak, Andrei

    2017-08-21

    Here, the analysis of many fusion applications such as liquid-metal blankets requires application of computational fluid dynamics (CFD) methods for electrically conductive liquids in geometrically complex regions and in the presence of a strong magnetic field. A current state of the art general purpose CFD code allows modeling of the flow in complex geometric regions, with simultaneous conjugated heat transfer analysis in liquid and surrounding solid parts. Together with a magnetohydrodynamics (MHD) capability, the general purpose CFD code will be a valuable tool for the design and optimization of fusion devices. This paper describes an introduction of MHD capability into the general purpose CFD code CFX, part of the ANSYS Workbench. The code was adapted for MHD problems using a magnetic induction approach. CFX allows introduction of user-defined variables using transport or Poisson equations. For MHD adaptation of the code three additional transport equations were introduced for the components of the magnetic field, in addition to the Poisson equation for electric potential. The Lorentz force is included in the momentum transport equation as a source term. Fusion applications usually involve very strong magnetic fields, with values of the Hartmann number of up to tens of thousands. In this situation a system of MHD equations become very rigid with very large source terms and very strong variable gradients. To increase system robustness, special measures were introduced during the iterative convergence process, such as linearization using source coefficient for momentum equations. The MHD implementation in general purpose CFD code was tested against benchmarks, specifically selected for liquid-metal blanket applications. Results of numerical simulations using present implementation closely match analytical solutions for a Hartmann number of up to 1500 for a 2-D laminar flow in the duct of square cross section, with conducting and nonconducting walls. Results for a 3-D test case are also included.

  7. Blind beam-hardening correction from Poisson measurements

    NASA Astrophysics Data System (ADS)

    Gu, Renliang; Dogandžić, Aleksandar

    2016-02-01

    We develop a sparse image reconstruction method for Poisson-distributed polychromatic X-ray computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. We employ our mass-attenuation spectrum parameterization of the noiseless measurements and express the mass-attenuation spectrum as a linear combination of B-spline basis functions of order one. A block coordinate-descent algorithm is developed for constrained minimization of a penalized Poisson negative log-likelihood (NLL) cost function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and nonnegativity and sparsity of the density map image; the image sparsity is imposed using a convex total-variation (TV) norm penalty term. This algorithm alternates between a Nesterov's proximal-gradient (NPG) step for estimating the density map image and a limited-memory Broyden-Fletcher-Goldfarb-Shanno with box constraints (L-BFGS-B) step for estimating the incident-spectrum parameters. To accelerate convergence of the density-map NPG steps, we apply function restart and a step-size selection scheme that accounts for varying local Lipschitz constants of the Poisson NLL. Real X-ray CT reconstruction examples demonstrate the performance of the proposed scheme.

  8. Predicting spatio-temporal failure in large scale observational and micro scale experimental systems

    NASA Astrophysics Data System (ADS)

    de las Heras, Alejandro; Hu, Yong

    2006-10-01

    Forecasting has become an essential part of modern thought, but the practical limitations are still manifold. We addressed future rates of change by comparing models that take time into account and models that focus more on space. Cox regression confirmed that linear change can be safely assumed in the short term. Spatially explicit Poisson regression provided a ceiling value for the number of deforestation spots. With several observed and estimated rates, it was decided to forecast using the more robust assumptions. A Markov-chain cellular automaton thus projected 5-year deforestation in the Amazonian Arc of Deforestation, showing that even a stable rate of change would largely deplete the forest area. More generally, the resolution and implementation of the existing models could explain many of the modelling difficulties still affecting forecasting.

  9. Enhanced charging kinetics of porous electrodes: surface conduction as a short-circuit mechanism.

    PubMed

    Mirzadeh, Mohammad; Gibou, Frederic; Squires, Todd M

    2014-08-29

    We use direct numerical simulations of the Poisson-Nernst-Planck equations to study the charging kinetics of porous electrodes and to evaluate the predictive capabilities of effective circuit models, both linear and nonlinear. The classic transmission line theory of de Levie holds for general electrode morphologies, but only at low applied potentials. Charging dynamics are slowed appreciably at high potentials, yet not as significantly as predicted by the nonlinear transmission line model of Biesheuvel and Bazant. We identify surface conduction as a mechanism which can effectively "short circuit" the high-resistance electrolyte in the bulk of the pores, thus accelerating the charging dynamics and boosting power densities. Notably, the boost in power density holds only for electrode morphologies with continuous conducting surfaces in the charging direction.

  10. Structural, elastic and electronic properties of transition metal carbides ZnC, NbC and their ternary alloys ZnxNb1-xC

    NASA Astrophysics Data System (ADS)

    Zidi, Y.; Méçabih, S.; Abbar, B.; Amari, S.

    2018-02-01

    We have investigated the structural, electronic and elastic properties of transition-metal carbides ZnxNb1-xC alloys in the range of 0 ≤ x ≤ 1 using the density functional theory (DFT). The full potential linearized augmented plane wave (FP-LAPW) method within a framework of the generalized gradient approximation (GGA) and GGA + U (where U is the Hubbard correlation terms) approach is used to perform the calculations presented here. The lattice parameters, the bulk modulus, its pressure derivative and the elastic constants were determined. We have obtained Young's modulus, shear modulus, Poisson's ratio, anisotropy factor by the aid of the calculated elastic constants. We discuss the total and partial densities of states and charge densities.
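
    For reference, polycrystalline Young's modulus, shear modulus, Poisson's ratio, and the anisotropy factor are commonly obtained from the cubic elastic constants via Voigt-Reuss-Hill averaging. The sketch below implements those standard formulas, assuming cubic symmetry and using placeholder values of C11, C12, and C44 rather than the constants computed in the paper.

```python
def vrh_cubic(c11, c12, c44):
    """Voigt-Reuss-Hill moduli for a cubic crystal (all inputs/outputs in GPa)."""
    bulk = (c11 + 2 * c12) / 3.0
    g_voigt = (c11 - c12 + 3 * c44) / 5.0
    g_reuss = 5 * (c11 - c12) * c44 / (4 * c44 + 3 * (c11 - c12))
    shear = 0.5 * (g_voigt + g_reuss)
    young = 9 * bulk * shear / (3 * bulk + shear)
    poisson = (3 * bulk - 2 * shear) / (2 * (3 * bulk + shear))
    anisotropy = 2 * c44 / (c11 - c12)          # Zener anisotropy factor
    return bulk, shear, young, poisson, anisotropy

print(vrh_cubic(c11=600.0, c12=130.0, c44=160.0))   # placeholder constants
```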

  11. An unbiased risk estimator for image denoising in the presence of mixed poisson-gaussian noise.

    PubMed

    Le Montagner, Yoann; Angelini, Elsa D; Olivo-Marin, Jean-Christophe

    2014-03-01

    The behavior and performance of denoising algorithms are governed by one or several parameters, whose optimal settings depend on the content of the processed image and the characteristics of the noise, and are generally designed to minimize the mean squared error (MSE) between the denoised image returned by the algorithm and a virtual ground truth. In this paper, we introduce a new Poisson-Gaussian unbiased risk estimator (PG-URE) of the MSE applicable to a mixed Poisson-Gaussian noise model that unifies the widely used Gaussian and Poisson noise models in fluorescence bioimaging applications. We propose a stochastic methodology to evaluate this estimator in the case when little is known about the internal machinery of the considered denoising algorithm, and we analyze both theoretically and empirically the characteristics of the PG-URE estimator. Finally, we evaluate the PG-URE-driven parametrization for three standard denoising algorithms, with and without variance stabilizing transforms, and different characteristics of the Poisson-Gaussian noise mixture.

  12. Estimation of parameters in Shot-Noise-Driven Doubly Stochastic Poisson processes using the EM algorithm--modeling of pre- and postsynaptic spike trains.

    PubMed

    Mino, H

    2007-01-01

    The aim is to estimate the parameters, namely the impulse response (IR) functions of the linear time-invariant systems generating the intensity processes, in Shot-Noise-Driven Doubly Stochastic Poisson Processes (SND-DSPPs), under the assumption that multivariate presynaptic spike trains and postsynaptic spike trains can be modeled by SND-DSPPs. An explicit formula for estimating the IR functions from observations of the multivariate input processes of the linear systems and the corresponding counting process (output process) is derived utilizing the expectation-maximization (EM) algorithm. The validity of the estimation formula was verified through Monte Carlo simulations in which two presynaptic spike trains and one postsynaptic spike train were assumed to be observable. The IR functions estimated on the basis of the proposed identification method were close to the true IR functions. The proposed method will play an important role in identifying the input-output relationship of pre- and postsynaptic neural spike trains in practical situations.

  13. Design studies of the Ku-band, wide-band Gyro-TWT amplifier

    NASA Astrophysics Data System (ADS)

    Jung, Sang Wook; Lee, Han Seul; Jang, Kwong Ho; Choi, Jin Joo; Hong, Yong Jun; Shin, Jin Woo; So, Jun Ho; Won, Jong Hyo

    2014-02-01

    This paper reports a Ku-band, wide-band gyrotron traveling-wave tube (Gyro-TWT) that is currently being developed at Kwangwoon University. The Gyro-TWT has a two-stage, linearly tapered interaction circuit; together with a nonlinearly tapered magnetic field, this gives the Gyro-TWT a wide operating bandwidth of 23%. The 2-D particle-in-cell (PIC) and MAGIC2d code simulation results are 17.3 dB and 24.34 kW, respectively, for the maximum saturated output power. A double-anode MIG was simulated with the E-Gun code. The results were 0.7 for the transverse-to-axial beam velocity ratio (alpha) and a 2.3% axial velocity spread at 50 kV and 4 A. A magnetic field profile simulation was performed using the Poisson code to obtain the grazing magnetic field over the entire interaction circuit.

  14. Estimating False Positive Contamination in Crater Annotations from Citizen Science Data

    NASA Astrophysics Data System (ADS)

    Tar, P. D.; Bugiolacchi, R.; Thacker, N. A.; Gilmour, J. D.

    2017-01-01

    Web-based citizen science often involves the classification of image features by large numbers of minimally trained volunteers, such as the identification of lunar impact craters under the Moon Zoo project. Whilst such approaches facilitate the analysis of large image data sets, the inexperience of users and ambiguity in image content can lead to contamination from false positive identifications. We give an approach, using Linear Poisson Models and image template matching, that can quantify levels of false positive contamination in citizen science Moon Zoo crater annotations. Linear Poisson Models are a form of machine learning which supports predictive error modelling and goodness-of-fit assessment, unlike most alternative machine learning methods. The proposed supervised learning system can reduce the variability in crater counts whilst providing predictive error assessments of the estimated quantities of remaining true versus false annotations. In an area of research influenced by human subjectivity, the proposed method provides a level of objectivity through the utilisation of image evidence, guided by candidate crater identifications.
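
    A highly simplified sketch of the template idea: the observed histogram of candidate annotations is decomposed into non-negative contributions from a "true crater" template and a "false positive" template. The templates and counts below are fabricated for illustration, and the fit is ordinary non-negative least squares rather than the full Linear Poisson Model with its error propagation.

```python
import numpy as np
from scipy.optimize import nnls

bins = 20
x = np.arange(bins)
true_tmpl = np.exp(-0.5 * ((x - 7) / 2.0) ** 2)   # assumed "true crater" template
true_tmpl /= true_tmpl.sum()
false_tmpl = np.exp(-x / 6.0)                     # assumed "false positive" template
false_tmpl /= false_tmpl.sum()

rng = np.random.default_rng(7)
observed = rng.poisson(800 * true_tmpl + 300 * false_tmpl)   # simulated annotations

A = np.column_stack([true_tmpl, false_tmpl])
weights, _ = nnls(A, observed.astype(float))      # non-negative template weights
print("estimated true / false counts:", np.round(weights, 1))
```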

  15. Modeling Linguistic Variables With Regression Models: Addressing Non-Gaussian Distributions, Non-independent Observations, and Non-linear Predictors With Random Effects and Generalized Additive Models for Location, Scale, and Shape

    PubMed Central

    Coupé, Christophe

    2018-01-01

    As statistical approaches are getting increasingly used in linguistics, attention must be paid to the choice of methods and algorithms used. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is being, however, made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM), which address grouping of observations, and generalized linear mixed-effects models (GLMM), which offer a family of distributions for the dependent variable. Generalized additive models (GAM) are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS). We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for ‘difficult’ variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimation of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships. Relying on GAMLSS, we assess a range of candidate distributions, including the Sichel, Delaporte, Box-Cox Green and Cole, and Box-Cox t distributions. We find that the Box-Cox t distribution, with appropriate modeling of its parameters, best fits the conditional distribution of phonemic inventory size. We finally discuss the specificities of phoneme counts, weak effects, and how GAMLSS should be considered for other linguistic variables. PMID:29713298

  16. Modeling Linguistic Variables With Regression Models: Addressing Non-Gaussian Distributions, Non-independent Observations, and Non-linear Predictors With Random Effects and Generalized Additive Models for Location, Scale, and Shape.

    PubMed

    Coupé, Christophe

    2018-01-01

    As statistical approaches are increasingly used in linguistics, attention must be paid to the choice of methods and algorithms used. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is, however, being made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM), which address grouping of observations, and generalized linear mixed-effects models (GLMM), which offer a family of distributions for the dependent variable. Generalized additive models (GAM) are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS). We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for 'difficult' variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimation of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships. Relying on GAMLSS, we assess a range of candidate distributions, including the Sichel, Delaporte, Box-Cox Green and Cole, and Box-Cox t distributions. We find that the Box-Cox t distribution, with appropriate modeling of its parameters, best fits the conditional distribution of phonemic inventory size. We finally discuss the specificities of phoneme counts, weak effects, and how GAMLSS should be considered for other linguistic variables.

  17. Generalized Poisson-Kac Processes: Basic Properties and Implications in Extended Thermodynamics and Transport

    NASA Astrophysics Data System (ADS)

    Giona, Massimiliano; Brasiello, Antonio; Crescitelli, Silvestro

    2016-04-01

    We introduce a new class of stochastic processes in R^n, referred to as generalized Poisson-Kac (GPK) processes, which generalize the Poisson-Kac telegrapher's random motion to higher dimensions. These stochastic processes possess finite propagation velocity, almost everywhere smooth trajectories, and converge in the Kac limit to Brownian motion. GPK processes are defined by coupling the selection of a bounded velocity vector from a family of N distinct ones with a Markovian dynamics that probabilistically controls this selection. This model can be used as a probabilistic tool for a stochastically consistent formulation of extended thermodynamic theories far from equilibrium.

  18. WAKES: Wavelet Adaptive Kinetic Evolution Solvers

    NASA Astrophysics Data System (ADS)

    Mardirian, Marine; Afeyan, Bedros; Larson, David

    2016-10-01

    We are developing a general capability to solve phase space evolution equations by mixing particle and continuum techniques in an adaptive manner. The multi-scale approach is achieved using wavelet decompositions, which allow phase space density estimation to occur with scale-dependent increased accuracy and variable time stepping. Possible improvements on the SFK method of Larson are discussed, including the use of multiresolution-analysis-based Richardson-Lucy iteration and adaptive step size control in explicit versus implicit approaches. Examples will be shown with KEEN waves and KEEPN (Kinetic Electrostatic Electron Positron Nonlinear) waves, which are the pair plasma generalization of the former and have a much richer span of dynamical behavior. WAKES techniques are well suited for the study of driven and released nonlinear, non-stationary, self-organized structures in phase space which have no fluid limit nor a linear limit, and yet remain undamped and coherent well past the drive period. The work reported here is based on the Vlasov-Poisson model of plasma dynamics. Work supported by a grant from the AFOSR.

  19. Markov modulated Poisson process models incorporating covariates for rainfall intensity.

    PubMed

    Thayakaran, R; Ramesh, N I

    2013-01-01

    Time series of rainfall bucket tip times at the Beaufort Park station, Bracknell, in the UK are modelled by a class of Markov modulated Poisson processes (MMPP) which may be thought of as a generalization of the Poisson process. Our main focus in this paper is to investigate the effects of including covariate information into the MMPP model framework on statistical properties. In particular, we look at three types of time-varying covariates namely temperature, sea level pressure, and relative humidity that are thought to be affecting the rainfall arrival process. Maximum likelihood estimation is used to obtain the parameter estimates, and likelihood ratio tests are employed in model comparison. Simulated data from the fitted model are used to make statistical inferences about the accumulated rainfall in the discrete time interval. Variability of the daily Poisson arrival rates is studied.
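
    As a rough illustration of the MMPP idea (with made-up switching and arrival rates, not the fitted Beaufort Park parameters), a two-state Markov modulated Poisson process can be simulated by letting a hidden Markov chain switch between a low-rate and a high-rate regime, with bucket-tip events arriving as a Poisson process at the rate of the current regime:

        import numpy as np

        def simulate_mmpp(q01, q10, lam, t_end, rng):
            """Simulate a 2-state MMPP: q01/q10 are the switching rates out of
            states 0 and 1, lam[i] is the Poisson event rate while in state i."""
            t, state = 0.0, 0
            events = []
            leave_rate = [q01, q10]
            while t < t_end:
                dwell = rng.exponential(1.0 / leave_rate[state])    # time spent in current state
                seg_end = min(t + dwell, t_end)
                n = rng.poisson(lam[state] * (seg_end - t))         # events in this segment
                events.extend(np.sort(rng.uniform(t, seg_end, n)))  # uniform given the count
                t += dwell
                state = 1 - state
            return np.array(events)

        rng = np.random.default_rng(0)
        tips = simulate_mmpp(q01=0.2, q10=0.5, lam=[0.05, 2.0], t_end=500.0, rng=rng)
        print(len(tips), "simulated bucket tips")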

  20. Poisson-Boltzmann versus Size-Modified Poisson-Boltzmann Electrostatics Applied to Lipid Bilayers.

    PubMed

    Wang, Nuo; Zhou, Shenggao; Kekenes-Huskey, Peter M; Li, Bo; McCammon, J Andrew

    2014-12-26

    Mean-field methods, such as the Poisson-Boltzmann equation (PBE), are often used to calculate the electrostatic properties of molecular systems. In the past two decades, an enhancement of the PBE, the size-modified Poisson-Boltzmann equation (SMPBE), has been reported. Here, the PBE and the SMPBE are reevaluated for realistic molecular systems, namely, lipid bilayers, under eight different sets of input parameters. The SMPBE appears to reproduce the molecular dynamics simulation results better than the PBE only under specific parameter sets, but in general, it performs no better than the Stern layer correction of the PBE. These results emphasize the need for careful discussions of the accuracy of mean-field calculations on realistic systems with respect to the choice of parameters and call for reconsideration of the cost-efficiency and the significance of the current SMPBE formulation.

  1. Is high relative humidity associated with childhood hand, foot, and mouth disease in rural and urban areas?

    PubMed

    Yang, H; Wu, J; Cheng, J; Wang, X; Wen, L; Li, K; Su, H

    2017-01-01

    To examine the relationship between relative humidity and childhood hand, foot and mouth disease (HFMD) in Hefei, China, and to explore whether the effect is different between urban and rural areas. Retrospective ecological study. A Poisson generalized linear model combined with a distributed lag non-linear model was used to examine the relationship between relative humidity and childhood HFMD in a temperate Chinese city during 2010-2012. The effect of relative humidity on childhood HFMD increased above a humidity of 84%, with a 0.34% (95% CI: 0.23%-0.45%) increase in childhood HFMD per 1% increment in relative humidity. Notably, urban children, male children, and children aged 0-4 years appeared to be more vulnerable to the effect of relative humidity on HFMD. This study indicates that high relative humidity may trigger childhood HFMD in a temperate area, Hefei, particularly for those who are young and from urban areas. Copyright © 2015 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

  2. Lattice dynamic properties of Rh2XAl (X=Fe and Y) alloys

    NASA Astrophysics Data System (ADS)

    Al, Selgin; Arikan, Nihat; Demir, Süleyman; Iyigör, Ahmet

    2018-02-01

    The electronic band structure, elastic and vibrational spectra of Rh2FeAl and Rh2YAl alloys were computed in detail by employing an ab-initio pseudopotential method and a linear-response technique based on the density-functional theory (DFT) scheme within a generalized gradient approximation (GGA). Computed lattice constants, bulk modulus and elastic constants were compared. Rh2YAl exhibited a higher ability to resist volume change than Rh2FeAl. The elastic constants, shear modulus, Young's modulus, Poisson's ratio, B/G ratio, electronic band structure, total and partial density of states, and total magnetic moment of the alloys were also presented. Rh2FeAl showed spin up and spin down states whereas Rh2YAl showed none due to being non-magnetic. The calculated total densities of states for both materials suggest that both alloys are metallic in nature. Full phonon spectra of Rh2FeAl and Rh2YAl alloys in the L21 phase were obtained using the ab-initio linear response method. The obtained phonon frequencies were in the positive region, indicating that both alloys are dynamically stable.

  3. [Influence of humidex on incidence of bacillary dysentery in Hefei: a time-series study].

    PubMed

    Zhang, H; Zhao, K F; He, R X; Zhao, D S; Xie, M Y; Wang, S S; Bai, L J; Cheng, Q; Zhang, Y W; Su, H

    2017-11-10

    Objective: To investigate the effect of humidex, which combines mean temperature and relative humidity, on the incidence of bacillary dysentery in Hefei. Methods: Daily counts of bacillary dysentery cases and weather data in Hefei were collected from January 1, 2006 to December 31, 2013. Then, the humidex was calculated from temperature and relative humidity. A Poisson generalized linear regression combined with a distributed lag non-linear model was applied to analyze the relationship between humidex and the incidence of bacillary dysentery, after adjusting for long-term and seasonal trends, day of week and other weather confounders. Stratified analyses by gender, age and address were also conducted. Results: The risk of bacillary dysentery increased with the rise of humidex. The adverse effect of high humidex (90th percentile of humidex) appeared at a lag of 2 days and was largest at a lag of 4 days (RR=1.063, 95%CI: 1.037-1.090). Subgroup analyses indicated that all groups were affected by high humidex at lags of 2-5 days. Conclusion: High humidex could significantly increase the risk of bacillary dysentery, and lagged effects were observed.

  4. Effective description of higher-order scalar-tensor theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langlois, David; Mancarella, Michele; Vernizzi, Filippo

    Most existing theories of dark energy and/or modified gravity, involving a scalar degree of freedom, can be conveniently described within the framework of the Effective Theory of Dark Energy, based on the unitary gauge where the scalar field is uniform. We extend this effective approach by allowing the Lagrangian in unitary gauge to depend on the time derivative of the lapse function. Although this dependence generically signals the presence of an extra scalar degree of freedom, theories that contain only one propagating scalar degree of freedom, in addition to the usual tensor modes, can be constructed by requiring the initial Lagrangian to be degenerate. Starting from a general quadratic action, we derive the dispersion relations for the linear perturbations around Minkowski and a cosmological background. Our analysis directly applies to the recently introduced Degenerate Higher-Order Scalar-Tensor (DHOST) theories. For these theories, we find that one cannot recover a Poisson-like equation in the static linear regime except for the subclass that includes the Horndeski and so-called 'beyond Horndeski' theories. We also discuss Lorentz-breaking models inspired by Horava gravity.

  5. Nonlinear dynamics of electromagnetic turbulence in a nonuniform magnetized plasma

    NASA Astrophysics Data System (ADS)

    Shukla, P. K.; Mirza, Arshad M.; Faria, R. T.

    1998-03-01

    By using the hydrodynamic electron response with fixed (kinetic) ions along with Poisson's equation as well as Ampère's law, a system of nonlinear equations for low-frequency (in comparison with the electron gyrofrequency) long-(short-) wavelength electromagnetic waves in a nonuniform resistive magnetoplasma has been derived. The plasma contains equilibrium density gradient and sheared equilibrium plasma flows. In the linear limit, local dispersion relations are obtained and analyzed. It is found that sheared equilibrium flows can cause instability of Alfvén-like electromagnetic waves even in the absence of a density gradient. Furthermore, it is shown that possible stationary solutions of the nonlinear equations without dissipation can be represented in the form of various types of vortices. On the other hand, the temporal behavior of our nonlinear dissipative systems without the equilibrium density inhomogeneity can be described by the generalized Lorenz equations which admit chaotic trajectories. The density inhomogeneity may lead to even qualitative changes in the chaotic dynamics. The results of our investigation should be useful in understanding the linear and nonlinear properties of nonthermal electromagnetic waves in space and laboratory plasmas.

  6. The Poisson-Boltzmann theory for the two-plates problem: some exact results.

    PubMed

    Xing, Xiang-Jun

    2011-12-01

    The general solution to the nonlinear Poisson-Boltzmann equation for two parallel charged plates, either inside a symmetric electrolyte, or inside a 2q:-q asymmetric electrolyte, is found in terms of Weierstrass elliptic functions. From this we derive some exact asymptotic results for the interaction between charged plates, as well as the exact form of the renormalized surface charge density.

  7. Linear and non-linear Modified Gravity forecasts with future surveys

    NASA Astrophysics Data System (ADS)

    Casas, Santiago; Kunz, Martin; Martinelli, Matteo; Pettorino, Valeria

    2017-12-01

    Modified Gravity theories generally affect the Poisson equation and the gravitational slip in an observable way, that can be parameterized by two generic functions (η and μ) of time and space. We bin their time dependence in redshift and present forecasts on each bin for future surveys like Euclid. We consider both Galaxy Clustering and Weak Lensing surveys, showing the impact of the non-linear regime, with two different semi-analytical approximations. In addition to these future observables, we use a prior covariance matrix derived from the Planck observations of the Cosmic Microwave Background. In this work we neglect the information from the cross correlation of these observables, and treat them as independent. Our results show that η and μ in different redshift bins are significantly correlated, but including non-linear scales reduces or even eliminates the correlation, breaking the degeneracy between Modified Gravity parameters and the overall amplitude of the matter power spectrum. We further apply a Zero-phase Component Analysis and identify which combinations of the Modified Gravity parameter amplitudes, in different redshift bins, are best constrained by future surveys. We extend the analysis to two particular parameterizations of μ and η and consider, in addition to Euclid, also SKA1, SKA2, DESI: we find in this case that future surveys will be able to constrain the current values of η and μ at the 2-5% level when using only linear scales (wavevector k < 0.15 h/Mpc), depending on the specific time parameterization; sensitivity improves to about 1% when non-linearities are included.

  8. Poisson structure on a space with linear SU(2) fuzziness

    NASA Astrophysics Data System (ADS)

    Khorrami, Mohammad; Fatollahi, Amir H.; Shariati, Ahmad

    2009-07-01

    The Poisson structure is constructed for a model in which spatial coordinates of configuration space are noncommutative and satisfy the commutation relations of a Lie algebra. The case is specialized to that of the group SU(2), for which the counterpart of the angular momentum as well as the Euler parametrization of the phase space are introduced. SU(2)-invariant classical systems are discussed, and it is observed that the path of a particle can be obtained by the solution of a first-order equation, as is the case with such models on commutative spaces. The examples of a free particle, rotationally invariant potentials, and especially the isotropic harmonic oscillator are investigated in more detail.

  9. Linear-Nonlinear-Poisson Models of Primate Choice Dynamics

    ERIC Educational Resources Information Center

    Corrado, Greg S.; Sugrue, Leo P.; Seung, H. Sebastian; Newsome, William T.

    2005-01-01

    The equilibrium phenomenon of matching behavior traditionally has been studied in stationary environments. Here we attempt to uncover the local mechanism of choice that gives rise to matching by studying behavior in a highly dynamic foraging environment. In our experiments, 2 rhesus monkeys ("Macacca mulatta") foraged for juice rewards by making…

  10. AFMPB: An adaptive fast multipole Poisson-Boltzmann solver for calculating electrostatics in biomolecular systems

    NASA Astrophysics Data System (ADS)

    Lu, Benzhuo; Cheng, Xiaolin; Huang, Jingfang; McCammon, J. Andrew

    2010-06-01

    A Fortran program package is introduced for rapid evaluation of the electrostatic potentials and forces in biomolecular systems modeled by the linearized Poisson-Boltzmann equation. The numerical solver utilizes a well-conditioned boundary integral equation (BIE) formulation, a node-patch discretization scheme, a Krylov subspace iterative solver package with reverse communication protocols, and an adaptive new version of fast multipole method in which the exponential expansions are used to diagonalize the multipole-to-local translations. The program and its full description, as well as several closely related libraries and utility tools are available at http://lsec.cc.ac.cn/~lubz/afmpb.html and a mirror site at http://mccammon.ucsd.edu/. This paper is a brief summary of the program: the algorithms, the implementation and the usage.
    Program summary
    Program title: AFMPB: Adaptive fast multipole Poisson-Boltzmann solver
    Catalogue identifier: AEGB_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGB_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GPL 2.0
    No. of lines in distributed program, including test data, etc.: 453 649
    No. of bytes in distributed program, including test data, etc.: 8 764 754
    Distribution format: tar.gz
    Programming language: Fortran
    Computer: Any
    Operating system: Any
    RAM: Depends on the size of the discretized biomolecular system
    Classification: 3
    External routines: Pre- and post-processing tools are required for generating the boundary elements and for visualization. Users can use MSMS (http://www.scripps.edu/~sanner/html/msms_home.html) for pre-processing, and VMD (http://www.ks.uiuc.edu/Research/vmd/) for visualization.
    Sub-programs included: An iterative Krylov subspace solvers package from SPARSKIT by Yousef Saad (http://www-users.cs.umn.edu/~saad/software/SPARSKIT/sparskit.html), and the fast multipole methods subroutines from FMMSuite (http://www.fastmultipole.org/).
    Nature of problem: Numerical solution of the linearized Poisson-Boltzmann equation that describes electrostatic interactions of molecular systems in ionic solutions.
    Solution method: A novel node-patch scheme is used to discretize the well-conditioned boundary integral equation formulation of the linearized Poisson-Boltzmann equation. Various Krylov subspace solvers can be subsequently applied to solve the resulting linear system, with a bounded number of iterations independent of the number of discretized unknowns. The matrix-vector multiplication at each iteration is accelerated by the adaptive new versions of fast multipole methods. The AFMPB solver requires other stand-alone pre-processing tools for boundary mesh generation, post-processing tools for data analysis and visualization, and can be conveniently coupled with different time stepping methods for dynamics simulation.
    Restrictions: Only three or six significant digits options are provided in this version.
    Unusual features: Most of the codes are in Fortran77 style. Memory allocation functions from Fortran90 and above are used in a few subroutines.
    Additional comments: The current version of the codes is designed and written for single core/processor desktop machines. Check http://lsec.cc.ac.cn/~lubz/afmpb.html and http://mccammon.ucsd.edu/ for updates and changes.
    Running time: The running time varies with the number of discretized elements (N) in the system and their distributions. In most cases, it scales linearly as a function of N.

  11. Enhanced Night Vision Via a Combination of Poisson Interpolation and Machine Learning

    DTIC Science & Technology

    2006-02-01

    [The abstract text of this record is not fully recoverable; only fragments of a figure description survive. They describe a plot of intensity-mapping curves m(x, ψ) over the 0-255 range, from ψ=2 (the most linear) through ψ=1024 (the most curved), in the context of low-light imaging, and note that Nayar and Branzoi [04] later suggested a second variant using a DLP micromirror array to modulate the exposure.]

  12. Nonparametric Bayesian Segmentation of a Multivariate Inhomogeneous Space-Time Poisson Process.

    PubMed

    Ding, Mingtao; He, Lihan; Dunson, David; Carin, Lawrence

    2012-12-01

    A nonparametric Bayesian model is proposed for segmenting time-evolving multivariate spatial point process data. An inhomogeneous Poisson process is assumed, with a logistic stick-breaking process (LSBP) used to encourage piecewise-constant spatial Poisson intensities. The LSBP explicitly favors spatially contiguous segments, and infers the number of segments based on the observed data. The temporal dynamics of the segmentation and of the Poisson intensities are modeled with exponential correlation in time, implemented in the form of a first-order autoregressive model for uniformly sampled discrete data, and via a Gaussian process with an exponential kernel for general temporal sampling. We consider and compare two different inference techniques: a Markov chain Monte Carlo sampler, which has relatively high computational complexity; and an approximate and efficient variational Bayesian analysis. The model is demonstrated with a simulated example and a real example of space-time crime events in Cincinnati, Ohio, USA.

  13. Four-dimensional gravity as an almost-Poisson system

    NASA Astrophysics Data System (ADS)

    Ita, Eyo Eyo

    2015-04-01

    In this paper, we examine the phase space structure of a noncanonical formulation of four-dimensional gravity referred to as the Instanton representation of Plebanski gravity (IRPG). The typical Hamiltonian (symplectic) approach leads to an obstruction to the definition of a symplectic structure on the full phase space of the IRPG. We circumvent this obstruction, using the Lagrange equations of motion, to find the appropriate generalization of the Poisson bracket. It is shown that the IRPG does not support a Poisson bracket except on the vector constraint surface. Yet there exists a fundamental bilinear operation on its phase space which produces the correct equations of motion and induces the correct transformation properties of the basic fields. This bilinear operation is known as the almost-Poisson bracket, which fails to satisfy the Jacobi identity and in this case also the condition of antisymmetry. We place these results into the overall context of nonsymplectic systems.

  14. Deterministic multidimensional nonuniform gap sampling.

    PubMed

    Worley, Bradley; Powers, Robert

    2015-12-01

    Born from empirical observations in nonuniformly sampled multidimensional NMR data relating to gaps between sampled points, the Poisson-gap sampling method has enjoyed widespread use in biomolecular NMR. While the majority of nonuniform sampling schemes are fully randomly drawn from probability densities that vary over a Nyquist grid, the Poisson-gap scheme employs constrained random deviates to minimize the gaps between sampled grid points. We describe a deterministic gap sampling method, based on the average behavior of Poisson-gap sampling, which performs comparably to its random counterpart with the additional benefit of completely deterministic behavior. We also introduce a general algorithm for multidimensional nonuniform sampling based on a gap equation, and apply it to yield a deterministic sampling scheme that combines burst-mode sampling features with those of Poisson-gap schemes. Finally, we derive a relationship between stochastic gap equations and the expectation value of their sampling probability densities. Copyright © 2015 Elsevier Inc. All rights reserved.
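
    A minimal sketch of the random Poisson-gap idea that the deterministic scheme averages over, assuming a one-dimensional Nyquist grid and a sinusoidal gap weighting; the grid size, target sample count, and adjustment loop below are illustrative choices, not taken from the paper:

        import numpy as np

        def poisson_gap_schedule(grid_size, n_samples, rng, max_iter=200):
            """Draw a 1D Poisson-gap schedule: gaps between kept grid points are
            Poisson deviates whose mean grows sinusoidally towards the end of the grid."""
            adj = grid_size / n_samples          # initial guess for the average gap scale
            best = None
            for _ in range(max_iter):
                points, i = [], 0
                while i < grid_size:
                    points.append(i)
                    w = np.sin(0.5 * np.pi * i / grid_size)   # small gaps early, large gaps late
                    i += 1 + rng.poisson(adj * w)
                if best is None or abs(len(points) - n_samples) < abs(len(best) - n_samples):
                    best = points
                if abs(len(points) - n_samples) <= 2:
                    break
                adj *= len(points) / n_samples   # too many points kept -> enlarge the gaps
            return np.array(best)

        rng = np.random.default_rng(1)
        sched = poisson_gap_schedule(grid_size=256, n_samples=64, rng=rng)
        print("kept", len(sched), "of 256 grid points:", sched[:10], "...")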

  15. Error Propagation Dynamics of PIV-based Pressure Field Calculations: How well does the pressure Poisson solver perform inherently?

    PubMed

    Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd

    2016-08-01

    Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type.

  16. Bringing consistency to simulation of population models--Poisson simulation as a bridge between micro and macro simulation.

    PubMed

    Gustafsson, Leif; Sternad, Mikael

    2007-10-01

    Population models concern collections of discrete entities such as atoms, cells, humans, animals, etc., where the focus is on the number of entities in a population. Because of the complexity of such models, simulation is usually needed to reproduce their complete dynamic and stochastic behaviour. Two main types of simulation models are used for different purposes, namely micro-simulation models, where each individual is described with its particular attributes and behaviour, and macro-simulation models based on stochastic differential equations, where the population is described in aggregated terms by the number of individuals in different states. Consistency between micro- and macro-models is a crucial but often neglected aspect. This paper demonstrates how the Poisson Simulation technique can be used to produce a population macro-model consistent with the corresponding micro-model. This is accomplished by defining Poisson Simulation in strictly mathematical terms as a series of Poisson processes that generate sequences of Poisson distributions with dynamically varying parameters. The method can be applied to any population model. It provides the unique stochastic and dynamic macro-model consistent with a correct micro-model. The paper also presents a general macro form for stochastic and dynamic population models. In an appendix Poisson Simulation is compared with Markov Simulation showing a number of advantages. Especially aggregation into state variables and aggregation of many events per time-step makes Poisson Simulation orders of magnitude faster than Markov Simulation. Furthermore, you can build and execute much larger and more complicated models with Poisson Simulation than is possible with the Markov approach.
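
    The core of Poisson Simulation can be sketched with a simple birth-death population model (the rates and step size are arbitrary illustrative choices, not taken from the paper): at each time step the numbers of births and deaths are drawn from Poisson distributions whose parameters depend on the current state, yielding a macro-model that retains the stochasticity of the corresponding micro-model:

        import numpy as np

        def poisson_simulation(x0, birth_rate, death_rate, dt, n_steps, rng):
            """Macro-level population model: event counts per time step are Poisson
            distributed with dynamically varying parameters set by the current state."""
            x, traj = x0, [x0]
            for _ in range(n_steps):
                births = rng.poisson(birth_rate * x * dt)
                deaths = rng.poisson(death_rate * x * dt)
                x = max(x + births - deaths, 0)
                traj.append(x)
            return np.array(traj)

        rng = np.random.default_rng(2)
        runs = [poisson_simulation(100, 0.11, 0.10, dt=0.1, n_steps=500, rng=rng) for _ in range(20)]
        print("mean final population size:", np.mean([r[-1] for r in runs]))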

  17. The Equivalence of Information-Theoretic and Likelihood-Based Methods for Neural Dimensionality Reduction

    PubMed Central

    Williamson, Ross S.; Sahani, Maneesh; Pillow, Jonathan W.

    2015-01-01

    Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron’s probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as “single-spike information” to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex. PMID:25831448
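
    The LNP/Poisson-likelihood connection can be illustrated with a small synthetic example: for an LNP model with an exponential nonlinearity, maximizing the Poisson log-likelihood over the stimulus filter is the model-based counterpart of the MID objective when spiking is Poisson. The data, filter dimension, and nonlinearity below are illustrative assumptions, not the paper's recordings:

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(3)
        n, d = 5000, 8
        X = rng.normal(size=(n, d))            # stimuli
        w_true = rng.normal(size=d)
        rate = np.exp(0.3 * (X @ w_true))      # LNP: linear filter, exponential nonlinearity
        y = rng.poisson(rate)                  # Poisson spike counts

        def neg_poisson_ll(w):
            """Negative Poisson log-likelihood (constant log y! terms dropped)."""
            eta = np.clip(X @ w, -30, 30)      # clip to avoid overflow during the search
            return -(y @ eta - np.exp(eta).sum())

        w_hat = minimize(neg_poisson_ll, np.zeros(d), method="L-BFGS-B").x
        print("correlation with true filter:", np.corrcoef(w_hat, 0.3 * w_true)[0, 1])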

  18. Numerical simulations of microwave heating of liquids: enhancements using Krylov subspace methods

    NASA Astrophysics Data System (ADS)

    Lollchund, M. R.; Dookhitram, K.; Sunhaloo, M. S.; Boojhawon, R.

    2013-04-01

    In this paper, we compare the performances of three iterative solvers for large sparse linear systems arising in the numerical computations of incompressible Navier-Stokes (NS) equations. These equations are employed mainly in the simulation of microwave heating of liquids. The emphasis of this work is on the application of Krylov projection techniques such as Generalized Minimal Residual (GMRES) to solve the Pressure Poisson Equations that result from discretisation of the NS equations. The performance of the GMRES method is compared with the traditional Gauss-Seidel (GS) and point successive over relaxation (PSOR) techniques through their application to simulate the dynamics of water housed inside a vertical cylindrical vessel which is subjected to microwave radiation. It is found that as the mesh size increases, GMRES gives the fastest convergence rate in terms of computational times and number of iterations.
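
    A minimal sketch of the kind of linear solve being compared, assuming a standard five-point Laplacian on a uniform grid rather than the paper's Navier-Stokes discretisation: the sparse pressure-Poisson system is assembled and handed to SciPy's GMRES implementation:

        import numpy as np
        from scipy.sparse import diags, identity, kron
        from scipy.sparse.linalg import gmres

        n = 64                                    # interior grid points per direction
        h = 1.0 / (n + 1)
        T = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
        A = (kron(identity(n), T) + kron(T, identity(n))) / h**2   # 2D Laplacian, Dirichlet BCs
        b = np.ones(n * n)                        # stand-in for the pressure source term

        x, info = gmres(A, b, restart=50, maxiter=2000)            # default relative tolerance
        print("converged" if info == 0 else f"info={info}",
              "residual norm:", np.linalg.norm(A @ x - b))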

  19. Technical report. The application of probability-generating functions to linear-quadratic radiation survival curves.

    PubMed

    Kendal, W S

    2000-04-01

    To illustrate how probability-generating functions (PGFs) can be employed to derive a simple probabilistic model for clonogenic survival after exposure to ionizing irradiation. Both repairable and irreparable radiation damage to DNA were assumed to occur by independent (Poisson) processes, at intensities proportional to the irradiation dose. Also, repairable damage was assumed to be either repaired or further (lethally) injured according to a third (Bernoulli) process, with the probability of lethal conversion being directly proportional to dose. Using the algebra of PGFs, these three processes were combined to yield a composite PGF that described the distribution of lethal DNA lesions in irradiated cells. The composite PGF characterized a Poisson distribution with mean αD + βD², where D was dose and α and β were radiobiological constants. This distribution yielded the conventional linear-quadratic survival equation. To test the composite model, the derived distribution was used to predict the frequencies of multiple chromosomal aberrations in irradiated human lymphocytes. The predictions agreed well with observation. This probabilistic model was consistent with single-hit mechanisms, but it was not consistent with binary misrepair mechanisms. A stochastic model for radiation survival has been constructed from elementary PGFs that exactly yields the linear-quadratic relationship. This approach can be used to investigate other simple probabilistic survival models.
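
    The key step can be made concrete in a few lines: if the number of lethal lesions per cell is Poisson with mean αD + βD², then clonogenic survival is the probability of zero lesions, which is exactly the linear-quadratic form. The α and β values below are hypothetical, chosen only for illustration:

        import numpy as np

        alpha, beta = 0.15, 0.05        # hypothetical radiobiological constants (Gy^-1, Gy^-2)
        rng = np.random.default_rng(4)

        for D in np.linspace(0.0, 8.0, 5):
            mean_lesions = alpha * D + beta * D**2
            analytic = np.exp(-mean_lesions)    # P(0 lethal lesions) = linear-quadratic survival
            simulated = np.mean(rng.poisson(mean_lesions, size=200_000) == 0)
            print(f"D={D:4.1f} Gy   S_LQ={analytic:.4f}   simulated={simulated:.4f}")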

  20. Atmospheric pollutants and hospital admissions due to pneumonia in children

    PubMed Central

    Negrisoli, Juliana; Nascimento, Luiz Fernando C.

    2013-01-01

    OBJECTIVE: To analyze the relationship between exposure to air pollutants and hospitalizations due to pneumonia in children in Sorocaba, São Paulo, Brazil. METHODS: Time series ecological study, from 2007 to 2008. Daily data were obtained from the State Environmental Agency for Pollution Control for particulate matter, nitric oxide, nitrogen dioxide and ozone, as well as air temperature and relative humidity. The data concerning pneumonia admissions were collected in the public health system of Sorocaba. Correlations between the variables of interest were calculated using the Pearson coefficient. Models with lags from zero to five days after exposure to pollutants were fitted to analyze the association between exposure to environmental pollutants and hospital admissions. The analysis used a Poisson regression generalized linear model, with significance set at p<0.05. RESULTS: There were 1,825 admissions for pneumonia, with a daily mean of 2.5±2.1. There was a strong correlation between pollutants and hospital admissions, except for ozone. Regarding the Poisson regression analysis with the multi-pollutant model, only nitrogen dioxide was statistically significant on the same day (relative risk - RR=1.016), as well as particulate matter with a lag of four days (RR=1.009) after exposure to pollutants. CONCLUSIONS: There was an acute effect of exposure to nitrogen dioxide and a later effect of exposure to particulate matter on children's hospitalizations for pneumonia in Sorocaba. PMID:24473956

  1. Predicting rates of inbreeding in populations undergoing selection.

    PubMed Central

    Woolliams, J A; Bijma, P

    2000-01-01

    Tractable forms of predicting rates of inbreeding (ΔF) in selected populations with general indices, nonrandom mating, and overlapping generations were developed, with the principal results assuming a period of equilibrium in the selection process. An existing theorem concerning the relationship between squared long-term genetic contributions and rates of inbreeding was extended to nonrandom mating and to overlapping generations. ΔF was shown to be approximately (1/4)(1 − ω) times the expected sum of squared lifetime contributions, where ω is the deviation from Hardy-Weinberg proportions. This relationship cannot be used for prediction since it is based upon observed quantities. Therefore, the relationship was further developed to express ΔF in terms of expected long-term contributions that are conditional on a set of selective advantages that relate the selection processes in two consecutive generations and are predictable quantities. With random mating, if selected family sizes are assumed to be independent Poisson variables then the expected long-term contribution could be substituted for the observed, provided the factor (1/4) (since ω = 0) was increased to (1/2). Established theory was used to provide a correction term to account for deviations from the Poisson assumptions. The equations were successfully applied, using simple linear models, to the problem of predicting ΔF with sib indices in discrete generations since previously published solutions had proved complex. PMID:10747074

  2. Generalized master equations for non-Poisson dynamics on networks.

    PubMed

    Hoffmann, Till; Porter, Mason A; Lambiotte, Renaud

    2012-10-01

    The traditional way of studying temporal networks is to aggregate the dynamics of the edges to create a static weighted network. This implicitly assumes that the edges are governed by Poisson processes, which is not typically the case in empirical temporal networks. Accordingly, we examine the effects of non-Poisson inter-event statistics on the dynamics of edges, and we apply the concept of a generalized master equation to the study of continuous-time random walks on networks. We show that this equation reduces to the standard rate equations when the underlying process is Poissonian and that its stationary solution is determined by an effective transition matrix whose leading eigenvector is easy to calculate. We conduct numerical simulations and also derive analytical results for the stationary solution under the assumption that all edges have the same waiting-time distribution. We discuss the implications of our work for dynamical processes on temporal networks and for the construction of network diagnostics that take into account their nontrivial stochastic nature.

  3. Generalized master equations for non-Poisson dynamics on networks

    NASA Astrophysics Data System (ADS)

    Hoffmann, Till; Porter, Mason A.; Lambiotte, Renaud

    2012-10-01

    The traditional way of studying temporal networks is to aggregate the dynamics of the edges to create a static weighted network. This implicitly assumes that the edges are governed by Poisson processes, which is not typically the case in empirical temporal networks. Accordingly, we examine the effects of non-Poisson inter-event statistics on the dynamics of edges, and we apply the concept of a generalized master equation to the study of continuous-time random walks on networks. We show that this equation reduces to the standard rate equations when the underlying process is Poissonian and that its stationary solution is determined by an effective transition matrix whose leading eigenvector is easy to calculate. We conduct numerical simulations and also derive analytical results for the stationary solution under the assumption that all edges have the same waiting-time distribution. We discuss the implications of our work for dynamical processes on temporal networks and for the construction of network diagnostics that take into account their nontrivial stochastic nature.

  4. Poisson process stimulation of an excitable membrane cable model.

    PubMed Central

    Goldfinger, M D

    1986-01-01

    The convergence of multiple inputs within a single-neuronal substrate is a common design feature of both peripheral and central nervous systems. Typically, the result of such convergence impinges upon an intracellularly contiguous axon, where it is encoded into a train of action potentials. The simplest representation of the result of convergence of multiple inputs is a Poisson process; a general representation of axonal excitability is the Hodgkin-Huxley/cable theory formalism. The present work addressed multiple input convergence upon an axon by applying Poisson process stimulation to the Hodgkin-Huxley axonal cable. The results showed that both absolute and relative refractory periods yielded in the axonal output a random but non-Poisson process. While smaller amplitude stimuli elicited a type of short-interval conditioning, larger amplitude stimuli elicited impulse trains approaching Poisson criteria except for the effects of refractoriness. These results were obtained for stimulus trains consisting of pulses of constant amplitude and constant or variable durations. By contrast, with or without stimulus pulse shape variability, the post-impulse conditional probability for impulse initiation in the steady-state was a Poisson-like process. For stimulus variability consisting of randomly smaller amplitudes or randomly longer durations, mean impulse frequency was attenuated or potentiated, respectively. Limitations and implications of these computations are discussed. PMID:3730505

  5. Experimental micro mechanics methods for conventional and negative Poisson's ratio cellular solids as Cosserat continua

    NASA Technical Reports Server (NTRS)

    Lakes, R.

    1991-01-01

    Continuum representations of micromechanical phenomena in structured materials are described, with emphasis on cellular solids. These phenomena are interpreted in light of Cosserat elasticity, a generalized continuum theory which admits degrees of freedom not present in classical elasticity. These are the rotation of points in the material, and a couple per unit area or couple stress. Experimental work in this area is reviewed, and other interpretation schemes are discussed. The applicability of Cosserat elasticity to cellular solids and fibrous composite materials is considered as is the application of related generalized continuum theories. New experimental results are presented for foam materials with negative Poisson's ratios.

  6. Information transfer with rate-modulated Poisson processes: a simple model for nonstationary stochastic resonance.

    PubMed

    Goychuk, I

    2001-08-01

    Stochastic resonance in a simple model of information transfer is studied for sensory neurons and ensembles of ion channels. An exact expression for the information gain is obtained for the Poisson process with the signal-modulated spiking rate. This result allows one to generalize the conventional stochastic resonance (SR) problem (with periodic input signal) to the arbitrary signals of finite duration (nonstationary SR). Moreover, in the case of a periodic signal, the rate of information gain is compared with the conventional signal-to-noise ratio. The paper establishes the general nonequivalence between both measures notwithstanding their apparent similarity in the limit of weak signals.

  7. A Poisson process approximation for generalized K-S confidence regions

    NASA Technical Reports Server (NTRS)

    Arsham, H.; Miller, D. R.

    1982-01-01

    One-sided confidence regions for continuous cumulative distribution functions are constructed using empirical cumulative distribution functions and the generalized Kolmogorov-Smirnov distance. The bandwidth of such regions becomes narrower in the right or left tail of the distribution. To avoid tedious computation of confidence levels and critical values, an approximation based on the Poisson process is introduced. This approximation provides a conservative confidence region; moreover, the approximation error decreases monotonically to 0 as sample size increases. Critical values necessary for implementation are given. Applications are made to the areas of risk analysis, investment modeling, reliability assessment, and analysis of fault tolerant systems.

  8. Developing a methodology to predict PM10 concentrations in urban areas using generalized linear models.

    PubMed

    Garcia, J M; Teodoro, F; Cerdeira, R; Coelho, L M R; Kumar, Prashant; Carvalho, M G

    2016-09-01

    A methodology to predict PM10 concentrations in urban outdoor environments is developed based on generalized linear models (GLMs). The methodology is based on the relationship developed between atmospheric concentrations of air pollutants (i.e. CO, NO2, NOx, VOCs, SO2) and meteorological variables (i.e. ambient temperature, relative humidity (RH) and wind speed) for a city (Barreiro) in Portugal. The model uses air pollution and meteorological data from the Portuguese air quality monitoring station networks. The developed GLM considers PM10 concentrations as the dependent variable, and both the gaseous pollutants and meteorological variables as explanatory independent variables. A logarithmic link function was considered with a Poisson probability distribution. Particular attention was given to cases with air temperatures both below and above 25°C. The best performance for modelled results against the measured data was achieved for the model with values of air temperature above 25°C, compared with the model considering all ranges of air temperatures and with the model considering only temperatures below 25°C. The model was also tested with similar data from another Portuguese city, Oporto, and the results were found to behave similarly. It is concluded that this model and the methodology could be adopted for other cities to predict PM10 concentrations when such data are not available from measurements at air quality monitoring stations or other acquisition means.
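
    A sketch of the type of model described, fitted with statsmodels on synthetic stand-in data (the column names, units, and simulated values are hypothetical, not the Portuguese monitoring data): a GLM with a Poisson distribution and logarithmic link, PM10 as the dependent variable, and pollutant and meteorological covariates as predictors:

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(5)
        n = 1000
        df = pd.DataFrame({
            "no2": rng.gamma(4, 5, n), "co": rng.gamma(2, 0.2, n), "so2": rng.gamma(3, 2, n),
            "temp": rng.normal(18, 6, n), "rh": rng.uniform(30, 95, n), "wind": rng.gamma(2, 1.5, n),
        })
        df["pm10"] = rng.poisson(np.exp(1.5 + 0.02 * df["no2"] - 0.03 * df["wind"]))

        # Poisson GLM with a logarithmic link, PM10 as the dependent variable
        model = smf.glm("pm10 ~ no2 + co + so2 + temp + rh + wind",
                        data=df, family=sm.families.Poisson()).fit()
        print(model.summary())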

  9. Parallel SOR methods with a parabolic-diffusion acceleration technique for solving an unstructured-grid Poisson equation on 3D arbitrary geometries

    NASA Astrophysics Data System (ADS)

    Zapata, M. A. Uh; Van Bang, D. Pham; Nguyen, K. D.

    2016-05-01

    This paper presents a parallel algorithm for the finite-volume discretisation of the Poisson equation on three-dimensional arbitrary geometries. The proposed method is formulated by using a 2D horizontal block domain decomposition and interprocessor data communication techniques with the message passing interface. The horizontal unstructured-grid cells are reordered according to the neighbouring relations and decomposed into blocks using a load-balanced distribution to give all processors an equal number of elements. In this algorithm, two parallel successive over-relaxation methods are presented: a multi-colour ordering technique for unstructured grids based on distributed memory, and a block method using a reordering index following ideas similar to the partitioning for structured grids. In all cases, the parallel algorithms are combined with an iterative acceleration solver. This solver is based on a parabolic-diffusion equation introduced to obtain faster solutions of the linear systems arising from the discretisation. Numerical results are given to evaluate the performance of the methods, showing speedups better than linear.

  10. Single-Specimen Technique to Establish the J-Resistance of Linear Viscoelastic Solids with Constant Poisson's Ratio

    NASA Technical Reports Server (NTRS)

    Gutierrez-Lemini, Danton; McCool, Alex (Technical Monitor)

    2001-01-01

    A method is developed to establish the J-resistance function for an isotropic linear viscoelastic solid of constant Poisson's ratio using the single-specimen technique with constant-rate test data. The method is based on the fact that, for a test specimen of fixed crack size under constant rate, the initiation J-integral may be established from the crack size itself, the actual external load and load-point displacement at growth initiation, and the relaxation modulus of the viscoelastic solid, without knowledge of the complete test record. Since crack size alone, of the required data, would be unknown at each point of the load-vs-load-point displacement curve of a single-specimen test, an expression is derived to estimate it. With it, the physical J-integral at each point of the test record may be established. Because of its basis on single-specimen testing, not only does the method not require the use of multiple specimens with differing initial crack sizes, but avoids the need for tracking crack growth as well.

  11. STIR: Improved Electrolyte Surface Exchange via Atomically Strained Surfaces

    DTIC Science & Technology

    2015-09-03

    [The abstract text of this record is only partially recoverable. The surviving fragments describe experiments at the University of Delaware accompanied by numerical simulations of those experiments, based on finite element solutions of generalized Poisson-Nernst-Planck equations [9-13], and note that an oxygen ion lattice site results in a reaction volume and an associated Vex·ΔP term in the Arrhenius rate equation, with tensile strain also discussed.]

  12. Concurrent generation of multivariate mixed data with variables of dissimilar types.

    PubMed

    Amatya, Anup; Demirtas, Hakan

    2016-01-01

    Data sets originating from wide range of research studies are composed of multiple variables that are correlated and of dissimilar types, primarily of count, binary/ordinal and continuous attributes. The present paper builds on the previous works on multivariate data generation and develops a framework for generating multivariate mixed data with a pre-specified correlation matrix. The generated data consist of components that are marginally count, binary, ordinal and continuous, where the count and continuous variables follow the generalized Poisson and normal distributions, respectively. The use of the generalized Poisson distribution provides a flexible mechanism which allows under- and over-dispersed count variables generally encountered in practice. A step-by-step algorithm is provided and its performance is evaluated using simulated and real-data scenarios.
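
    For the count component, Consul's generalized Poisson distribution allows both over- and underdispersion through its second parameter. A simple, purely illustrative way to draw from it (not the algorithm of the paper) is to tabulate the probability mass function on a truncated support and sample by inversion:

        import numpy as np
        from scipy.special import gammaln

        def gpoisson_pmf(k, theta, lam):
            """Consul's generalized Poisson PMF, P(k) = theta*(theta+k*lam)**(k-1)
            * exp(-theta-k*lam)/k!; lam in [0,1) gives overdispersion, lam < 0
            underdispersion (support truncated where theta + k*lam <= 0)."""
            k = np.asarray(k, dtype=float)
            arg = theta + k * lam
            with np.errstate(divide="ignore", invalid="ignore"):
                logp = np.log(theta) + (k - 1) * np.log(arg) - arg - gammaln(k + 1)
            p = np.where(arg > 0, np.exp(logp), 0.0)
            return p / p.sum()                   # renormalise over the truncated support

        def gpoisson_sample(theta, lam, size, rng, kmax=200):
            ks = np.arange(kmax + 1)
            return rng.choice(ks, size=size, p=gpoisson_pmf(ks, theta, lam))

        rng = np.random.default_rng(6)
        over = gpoisson_sample(theta=3.0, lam=0.4, size=50_000, rng=rng)    # variance > mean
        under = gpoisson_sample(theta=3.0, lam=-0.3, size=50_000, rng=rng)  # variance < mean
        print(over.mean(), over.var(), under.mean(), under.var())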

  13. Localization of intense electromagnetic waves in a relativistically hot plasma.

    PubMed

    Shukla, P K; Eliasson, B

    2005-02-18

    We consider nonlinear interactions between intense short electromagnetic waves (EMWs) and a relativistically hot electron plasma that supports relativistic electron holes (REHs). It is shown that such EMW-REH interactions are governed by a coupled nonlinear system of equations composed of a nonlinear Schrödinger equation describing the dynamics of the EMWs and the Poisson-relativistic Vlasov system describing the dynamics of driven REHs. The present nonlinear system of equations admits both a linearly trapped discrete number of eigenmodes of the EMWs in a quasistationary REH and a modification of the REH by large-amplitude trapped EMWs. Computer simulations of the relativistic Vlasov and Maxwell-Poisson system of equations show complex interactions between REHs loaded with localized EMWs.

  14. Long-wavelength Magnetic and Gravity Anomaly Correlations of Africa and Europe

    NASA Technical Reports Server (NTRS)

    Vonfrese, R. R. B.; Hinze, W. J. (Principal Investigator); Olivier, R.

    1984-01-01

    Preliminary MAGSAT scalar magnetic anomaly data were compiled for comparison with long-wavelength-pass filtered free-air gravity anomalies and regional heat-flow and tectonic data. To facilitate the correlation analysis at satellite elevations over a spherical Earth, equivalent point source inversion was used to differentially reduce the magnetic satellite anomalies to the radial pole at 350 km elevation, and to upward continue the first radial derivative of the free-air gravity anomalies. Correlation patterns between these regional geopotential anomaly fields are quantitatively established by moving window linear regression based on Poisson's theorem. Prominent correlations include direct correspondences for the Baltic Shield, where both anomalies are negative, and the central Mediterranean and Zaire Basin where both anomalies are positive. Inverse relationships are generally common over the Precambrian Shield in northwest Africa, the Basins and Shields in southern Africa, and the Alpine Orogenic Belt. Inverse correlations also persist over the North Sea Rifts, the Benue Rift, and more generally over the East African Rifts. The results of this quantitative correlation analysis support the general inverse relationships of gravity and magnetic anomalies observed for North American continental terrain which may be broadly related to magnetic crustal thickness variations.

  15. Long-wavelength magnetic and gravity anomaly correlations on Africa and Europe

    NASA Technical Reports Server (NTRS)

    Vonfrese, R. R. B.; Olivier, R.; Hinze, W. J.

    1985-01-01

    Preliminary MAGSAT scalar magnetic anomaly data were compiled for comparison with long-wavelength-pass filtered free-air gravity anomalies and regional heat-flow and tectonic data. To facilitate the correlation analysis at satellite elevations over a spherical Earth, equivalent point source inversion was used to differentially reduce the magnetic satellite anomalies to the radial pole at 350 km elevation, and to upward continue the first radial derivative of the free-air gravity anomalies. Correlation patterns between these regional geopotential anomaly fields are quantitatively established by moving window linear regression based on Poisson's theorem. Prominent correlations include direct correspondences for the Baltic shield, where both anomalies are negative, and the central Mediterranean and Zaire Basin where both anomalies are positive. Inverse relationships are generally common over the Precambrian Shield in northwest Africa, the Basins and Shields in southern Africa, and the Alpine Orogenic Belt. Inverse correlations also persist over the North Sea Rifts, the Benue Rift, and more generally over the East African Rifts. The results of this quantitative correlation analysis support the general inverse relationships of gravity and magnetic anomalies observed for North American continental terrain which may be broadly related to magnetic crustal thickness variations.

  16. Multiple domains of social support are associated with diabetes self-management among Veterans.

    PubMed

    Gray, Kristen E; Hoerster, Katherine D; Reiber, Gayle E; Bastian, Lori A; Nelson, Karin M

    2018-01-01

    Objectives: To examine, among Veterans, relationships of general social support and diabetes-specific social support for physical activity and healthy eating with diabetes self-management behaviors. Methods: Patients from VA Puget Sound, Seattle completed a cross-sectional survey in 2012-2013 (N = 717). We measured (a) general social support and (b) diabetes-specific social support for healthy eating and physical activity with domains reflecting support person participation, encouragement, and sharing ideas. Among 189 self-reporting diabetes patients, we fit linear and modified Poisson regression models estimating associations of social support with diabetes self-management behaviors: adherence to general and diabetes-specific diets and blood glucose monitoring (days/week); physical activity (< vs. ≥150 min/week); and smoking status (smoker/non-smoker). Results: General social support was not associated with diabetes self-management. For diabetes-specific social support, higher healthy eating support scores across all domains were associated with better adherence to general and diabetes-specific diets. Higher physical activity support scores were positively associated with ≥150 min/week of physical activity only for the participation domain. Discussion: Diabetes-specific social support was a stronger and more consistent correlate of improved self-management than general social support, particularly for lifestyle behaviors. Incorporating family/friends into Veterans' diabetes self-management routines may lead to better self-management and improvements in disease control and outcomes.

  17. Different responses of weather factors on hand, foot and mouth disease in three different climate areas of Gansu, China.

    PubMed

    Gou, Faxiang; Liu, Xinfeng; He, Jian; Liu, Dongpeng; Cheng, Yao; Liu, Haixia; Yang, Xiaoting; Wei, Kongfu; Zheng, Yunhe; Jiang, Xiaojuan; Meng, Lei; Hu, Wenbiao

    2018-01-08

    To determine the linear and non-linear interacting relationships between weather factors and hand, foot and mouth disease (HFMD) in children in Gansu, China, and to provide an early warning signal for HFMD transmission based on weather variability. Weekly counts of HFMD cases in children aged less than 15 years and meteorological information from 2010 to 2014 in Jiuquan, Lanzhou and Tianshui, Gansu, China were collected. Generalized linear regression models (GLM) with a Poisson link and classification and regression trees (CART) were employed to determine the combined and interactive relationships of weather factors and HFMD in both linear and non-linear ways. The GLM suggested increases in weekly HFMD of 5.9% [95% confidence interval (CI): 5.4%, 6.5%] in Tianshui, 2.8% [2.5%, 3.1%] in Lanzhou and 1.8% [1.4%, 2.2%] in Jiuquan in association with a 1 °C increase in average temperature. A 1% increase in relative humidity was associated with increases in weekly HFMD of 2.47% [2.23%, 2.71%] in Lanzhou and 1.11% [0.72%, 1.51%] in Tianshui. CART revealed that average temperature and relative humidity were the two most important determinants; threshold values for average temperature decreased from 20 °C in Jiuquan to 16 °C in Tianshui, whereas threshold values for relative humidity increased from 38% in Jiuquan to 65% in Tianshui. Average temperature was the primary weather factor in all three areas, with greater sensitivity in southeastern Tianshui than in northwestern Jiuquan; relative humidity's effect on HFMD showed a non-linear interacting relationship with average temperature.

  18. A linear model of population dynamics

    NASA Astrophysics Data System (ADS)

    Lushnikov, A. A.; Kagan, A. I.

    2016-08-01

    The Malthus process of population growth is reformulated in terms of the probability w(n,t) to find exactly n individuals at time t assuming that both the birth and the death rates are linear functions of the population size. The master equation for w(n,t) is solved exactly. It is shown that w(n,t) strongly deviates from the Poisson distribution and is expressed in terms either of Laguerre’s polynomials or a modified Bessel function. The latter expression allows for considerable simplifications of the asymptotic analysis of w(n,t).
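
    For orientation, the linear birth-death master equation in its simplest form (per-capita birth rate λ and death rate μ, so total rates λn and μn; this is the generic form consistent with the abstract, not necessarily the paper's exact notation) reads

        \frac{\partial w(n,t)}{\partial t}
            = \lambda (n-1)\, w(n-1,t) + \mu (n+1)\, w(n+1,t) - (\lambda + \mu)\, n\, w(n,t).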

  19. Analysis and control of hourglass instabilities in underintegrated linear and nonlinear elasticity

    NASA Technical Reports Server (NTRS)

    Jacquotte, Olivier P.; Oden, J. Tinsley

    1994-01-01

    Methods are described to identify and correct a bad finite element approximation of the governing operator obtained when under-integration is used in numerical code for several model problems: the Poisson problem, the linear elasticity problem, and for problems in the nonlinear theory of elasticity. For each of these problems, the reason for the occurrence of instabilities is given, a way to control or eliminate them is presented, and theorems of existence, uniqueness, and convergence for the given methods are established. Finally, numerical results are included which illustrate the theory.

  20. Beyond Poisson-Boltzmann: Fluctuation effects and correlation functions

    NASA Astrophysics Data System (ADS)

    Netz, R. R.; Orland, H.

    2000-02-01

    We formulate the exact non-linear field theory for a fluctuating counter-ion distribution in the presence of a fixed, arbitrary charge distribution. The Poisson-Boltzmann equation is obtained as the saddle-point of the field-theoretic action, and the effects of counter-ion fluctuations are included by a loop-wise expansion around this saddle point. The Poisson equation is obeyed at each order in this loop expansion. We explicitly give the expansion of the Gibbs potential up to two loops. We then apply our field-theoretic formalism to the case of a single impenetrable wall with counter ions only (in the absence of salt ions). We obtain the fluctuation corrections to the electrostatic potential and the counter-ion density to one-loop order without further approximations. The relative importance of fluctuation corrections is controlled by a single parameter, which is proportional to the cube of the counter-ion valency and to the surface charge density. The effective interactions and correlation functions between charged particles close to the charged wall are obtained on the one-loop level.

  1. A Hierarchical Poisson Log-Normal Model for Network Inference from RNA Sequencing Data

    PubMed Central

    Gallopin, Mélina; Rau, Andrea; Jaffrézic, Florence

    2013-01-01

    Gene network inference from transcriptomic data is an important methodological challenge and a key aspect of systems biology. Although several methods have been proposed to infer networks from microarray data, there is a need for inference methods able to model RNA-seq data, which are count-based and highly variable. In this work we propose a hierarchical Poisson log-normal model with a Lasso penalty to infer gene networks from RNA-seq data; this model has the advantage of directly modelling discrete data and accounting for inter-sample variance larger than the sample mean. Using real microRNA-seq data from breast cancer tumors and simulations, we compare this method to a regularized Gaussian graphical model on log-transformed data, and a Poisson log-linear graphical model with a Lasso penalty on power-transformed data. For data simulated with large inter-sample dispersion, the proposed model performs better than the other methods in terms of sensitivity, specificity and area under the ROC curve. These results show the necessity of methods specifically designed for gene network inference from RNA-seq data. PMID:24147011

  2. Reference manual for the POISSON/SUPERFISH Group of Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1987-01-01

    The POISSON/SUPERFISH Group codes were set up to solve two separate problems: the design of magnets and the design of rf cavities in a two-dimensional geometry. The first stage of either problem is to describe the layout of the magnet or cavity in a way that can be used as input to solve the generalized Poisson equation for magnets or the Helmholtz equations for cavities. The computer codes require that the problems be discretized by replacing the differentials (dx, dy) by finite differences (ΔX, ΔY). Instead of defining the function everywhere in a plane, the function is defined only at a finite number of points on a mesh in the plane.
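
    A minimal sketch of the finite-difference idea described above (a five-point stencil for the Poisson equation on a uniform mesh, solved here by plain Jacobi iteration); this is illustrative only and unrelated to the actual POISSON/SUPERFISH implementation:

        # Five-point finite-difference Poisson solver on a uniform mesh (Jacobi
        # iteration). Illustrative only; not the POISSON/SUPERFISH code.
        import numpy as np

        n = 33
        h = 1.0 / (n - 1)
        x = np.linspace(0.0, 1.0, n)
        X, Y = np.meshgrid(x, x, indexing="ij")
        f = 2 * np.pi**2 * np.sin(np.pi * X) * np.sin(np.pi * Y)   # source term

        u = np.zeros((n, n))              # Dirichlet boundary: u = 0 on the edges
        for _ in range(5000):             # Jacobi sweep: u_ij = (4 neighbours + h^2 f_ij) / 4
            u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                    u[1:-1, :-2] + u[1:-1, 2:] +
                                    h**2 * f[1:-1, 1:-1])

        exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
        print(np.abs(u - exact).max())    # discretization + iteration error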

  3. A theorem about Hamiltonian systems.

    PubMed

    Case, K M

    1984-09-01

    A simple theorem in Hamiltonian mechanics is pointed out. One consequence is a generalization of the classical result that symmetries are generated by Poisson brackets of conserved functionals. General applications are discussed. Special emphasis is given to the Kadomtsev-Petviashvili equation.

  4. Bayesian dynamic modeling of time series of dengue disease case counts

    PubMed Central

    López-Quílez, Antonio; Torres-Prieto, Alexander

    2017-01-01

    The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables, in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the model’s short-term performance for predicting dengue cases. The methodology uses dynamic Poisson log-link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov chain Monte Carlo simulation for parameter estimation and the deviance information criterion (DIC) for model selection. We assessed the short-term predictive performance of the selected final model at several time points within the study period using the mean absolute percentage error. The best model included first-order random walk time-varying coefficients for both the calendar trend and the meteorological variables. Besides the computational challenges, interpreting the results implies a complete analysis of the time series of dengue with respect to the parameter estimates of the meteorological effects. We found small mean absolute percentage errors for one- and two-week out-of-sample predictions at most prediction points, associated with low volatility periods in the dengue counts. We discuss the advantages and limitations of the dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables. The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful models for decision-making in public health. PMID:28671941

  5. Poisson Regression Analysis of Illness and Injury Surveillance Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frome E.L., Watkins J.P., Ellis E.D.

    2012-12-12

    The Department of Energy (DOE) uses illness and injury surveillance to monitor morbidity and assess the overall health of the work force. Data collected from each participating site include health events and a roster file with demographic information. The source data files are maintained in a relational data base, and are used to obtain stratified tables of health event counts and person time at risk that serve as the starting point for Poisson regression analysis. The explanatory variables that define these tables are age, gender, occupational group, and time. Typical response variables of interest are the number of absences due to illness or injury, i.e., the response variable is a count. Poisson regression methods are used to describe the effect of the explanatory variables on the health event rates using a log-linear main effects model. Results of fitting the main effects model are summarized in a tabular and graphical form and interpretation of model parameters is provided. An analysis of deviance table is used to evaluate the importance of each of the explanatory variables on the event rate of interest and to determine if interaction terms should be considered in the analysis. Although Poisson regression methods are widely used in the analysis of count data, there are situations in which over-dispersion occurs. This could be due to lack-of-fit of the regression model, extra-Poisson variation, or both. A score test statistic and regression diagnostics are used to identify over-dispersion. A quasi-likelihood method of moments procedure is used to evaluate and adjust for extra-Poisson variation when necessary. Two examples are presented using respiratory disease absence rates at two DOE sites to illustrate the methods and interpretation of the results. In the first example the Poisson main effects model is adequate. In the second example the score test indicates considerable over-dispersion and a more detailed analysis attributes the over-dispersion to extra-Poisson variation. The R open source software environment for statistical computing and graphics is used for analysis. Additional details about R and the data that were used in this report are provided in an Appendix. Information on how to obtain R and utility functions that can be used to duplicate results in this report are provided.
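
    A minimal sketch of the kind of rate model described above (a log-linear Poisson GLM with log person-time as an offset and a crude Pearson-based check for over-dispersion); the data are simulated and this is not the report's actual R analysis:

        # Log-linear Poisson rate model with a person-time offset and a crude
        # over-dispersion check (Pearson chi-square / residual df).
        # Simulated strata; not the report's actual analysis or R code.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        strata = pd.DataFrame({
            "age_group": rng.choice(["<40", "40-54", "55+"], 200),
            "gender": rng.choice(["F", "M"], 200),
            "person_years": rng.uniform(50, 500, 200),
        })
        rate = 0.02 * np.where(strata["age_group"] == "55+", 2.0, 1.0)
        strata["events"] = rng.poisson(rate * strata["person_years"])

        model = smf.glm("events ~ age_group + gender", data=strata,
                        family=sm.families.Poisson(),
                        offset=np.log(strata["person_years"]))
        res = model.fit()
        print(np.exp(res.params))              # rate ratios
        phi = res.pearson_chi2 / res.df_resid  # approx. 1 if Poisson variation holds
        print("dispersion estimate:", round(phi, 2))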

  6. Treecode-based generalized Born method

    NASA Astrophysics Data System (ADS)

    Xu, Zhenli; Cheng, Xiaolin; Yang, Haizhao

    2011-02-01

    We have developed a treecode-based O(Nlog N) algorithm for the generalized Born (GB) implicit solvation model. Our treecode-based GB (tGB) is based on the GBr6 [J. Phys. Chem. B 111, 3055 (2007)], an analytical GB method with a pairwise descreening approximation for the R6 volume integral expression. The algorithm is composed of a cutoff scheme for the effective Born radii calculation, and a treecode implementation of the GB charge-charge pair interactions. Test results demonstrate that the tGB algorithm can reproduce the vdW surface based Poisson solvation energy with an average relative error less than 0.6% while providing an almost linear-scaling calculation for a representative set of 25 proteins with different sizes (from 2815 atoms to 65456 atoms). For a typical system of 10k atoms, the tGB calculation is three times faster than the direct summation as implemented in the original GBr6 model. Thus, our tGB method provides an efficient way for performing implicit solvent GB simulations of larger biomolecular systems at longer time scales.

  7. Stagnation in Mortality Decline among Elders in the Netherlands

    ERIC Educational Resources Information Center

    Janssen, Fanny; Nusselder, Wilma J.; Looman, Caspar W. N.; Mackenbach, Johan P.; Kunst, Anton E.

    2003-01-01

    Purpose: This study assesses whether the stagnation of old-age (80+) mortality decline observed in The Netherlands in the 1980s continued in the 1990s and determines which factors contributed to this stagnation. Emphasis is on the role of smoking. Design and Methods: Poisson regression analysis with linear splines was applied to total and…

  8. Generalized Linear Models of Home Activity for Automatic Detection of Mild Cognitive Impairment in Older Adults*

    PubMed Central

    Akl, Ahmad; Snoek, Jasper; Mihailidis, Alex

    2015-01-01

    With a globally aging population, the burden of care of cognitively impaired older adults is becoming increasingly concerning. Instances of Alzheimer’s disease and other forms of dementia are becoming ever more frequent. Earlier detection of cognitive impairment offers significant benefits, but remains difficult to do in practice. In this paper, we develop statistical models of the behavior of older adults within their homes using sensor data in order to detect the early onset of cognitive decline. Specifically, we use inhomogeneous Poisson processes to model the presence of subjects within different rooms throughout the day in the home using unobtrusive sensing technologies. We compare the distributions learned from cognitively intact and impaired subjects using information theoretic tools and observe statistical differences between the two populations which we believe can be used to help detect the onset of cognitive decline. PMID:25570050
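
    A minimal sketch (not the authors' implementation) of the inhomogeneous-Poisson idea: estimate a piecewise-constant, hour-of-day intensity of room-presence events from timestamps and compare two subjects' normalized intensities with a symmetrized Kullback-Leibler divergence:

        # Piecewise-constant (hour-of-day) intensity estimate for an inhomogeneous
        # Poisson process of room-presence events, plus a symmetrized KL divergence
        # between two subjects. Illustrative only; not the authors' implementation.
        import numpy as np

        def hourly_intensity(event_hours, n_days):
            """Events per hour-of-day bin, averaged over the observation days."""
            counts, _ = np.histogram(event_hours, bins=24, range=(0, 24))
            return counts / n_days

        def sym_kl(p, q, eps=1e-9):
            p = (p + eps) / (p + eps).sum()
            q = (q + eps) / (q + eps).sum()
            return 0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

        rng = np.random.default_rng(2)
        subj_a = rng.normal(9, 2, 300) % 24     # fake kitchen-presence times (hours)
        subj_b = rng.normal(14, 4, 300) % 24

        lam_a = hourly_intensity(subj_a, n_days=30)
        lam_b = hourly_intensity(subj_b, n_days=30)
        print(sym_kl(lam_a, lam_b))             # larger = more dissimilar daily patterns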

  9. Generalized Linear Models of home activity for automatic detection of mild cognitive impairment in older adults.

    PubMed

    Akl, Ahmad; Snoek, Jasper; Mihailidis, Alex

    2014-01-01

    With a globally aging population, the burden of care of cognitively impaired older adults is becoming increasingly concerning. Instances of Alzheimer's disease and other forms of dementia are becoming ever more frequent. Earlier detection of cognitive impairment offers significant benefits, but remains difficult to do in practice. In this paper, we develop statistical models of the behavior of older adults within their homes using sensor data in order to detect the early onset of cognitive decline. Specifically, we use inhomogeneous Poisson processes to model the presence of subjects within different rooms throughout the day in the home using unobtrusive sensing technologies. We compare the distributions learned from cognitively intact and impaired subjects using information theoretic tools and observe statistical differences between the two populations which we believe can be used to help detect the onset of cognitive decline.

  10. Calculations of the binding affinities of protein-protein complexes with the fast multipole method

    NASA Astrophysics Data System (ADS)

    Kim, Bongkeun; Song, Jiming; Song, Xueyu

    2010-09-01

    In this paper, we used a coarse-grained model at the residue level to calculate the binding free energies of three protein-protein complexes. General formulations to calculate the electrostatic binding free energy and the van der Waals free energy are presented by solving linearized Poisson-Boltzmann equations using the boundary element method in combination with the fast multipole method. The residue level model with the fast multipole method allows us to efficiently investigate how the mutations on the active site of the protein-protein interface affect the changes in binding affinities of protein complexes. Good correlations between the calculated results and the experimental ones indicate that our model can capture the dominant contributions to the protein-protein interactions. At the same time, additional effects on protein binding due to atomic details are also discussed in the context of the limitations of such a coarse-grained model.

  11. On the theory of Lorentz gases with long range interactions

    NASA Astrophysics Data System (ADS)

    Nota, Alessia; Simonella, Sergio; Velázquez, Juan J. L.

    We construct and study the stochastic force field generated by a Poisson distribution of sources at finite density, x₁, x₂, …, in ℝ³, each of them yielding a long-range potential Qᵢ Φ(x − xᵢ) with possibly different charges Qᵢ ∈ ℝ. The potential Φ is assumed to behave typically as |x|⁻ˢ for large |x|, with s > 1/2. We will denote the resulting random field as “generalized Holtsmark field”. We then consider the dynamics of one tagged particle in such random force fields, in several scaling limits where the mean free path is much larger than the average distance between the scatterers. We estimate the diffusive time scale and identify conditions for the vanishing of correlations. These results are used to obtain appropriate kinetic descriptions in terms of a linear Boltzmann or Landau evolution equation depending on the specific choices of the interaction potential.

  12. Outcomes of a pilot hand hygiene randomized cluster trial to reduce communicable infections among US office-based employees.

    PubMed

    Stedman-Smith, Maggie; DuBois, Cathy L Z; Grey, Scott F; Kingsbury, Diana M; Shakya, Sunita; Scofield, Jennifer; Slenkovich, Ken

    2015-04-01

    To determine the effectiveness of an office-based multimodal hand hygiene improvement intervention in reducing self-reported communicable infections and work-related absence. A randomized cluster trial including an electronic training video, hand sanitizer, and educational posters (n = 131, intervention; n = 193, control). Primary outcomes include (1) self-reported acute respiratory infections (ARIs)/influenza-like illness (ILI) and/or gastrointestinal (GI) infections during the prior 30 days; and (2) related lost work days. Incidence rate ratios calculated using generalized linear mixed models with a Poisson distribution, adjusted for confounders and random cluster effects. A 31% relative reduction in self-reported combined ARI-ILI/GI infections (incidence rate ratio: 0.69; 95% confidence interval, 0.49 to 0.98). A 21% nonsignificant relative reduction in lost work days. An office-based multimodal hand hygiene improvement intervention demonstrated a substantive reduction in self-reported combined ARI-ILI/GI infections.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Asaba, Shinsuke; Hikage, Chiaki; Koyama, Kazuya

    We perform a principal component analysis to assess the ability of future observations to measure departures from General Relativity in predictions of the Poisson and anisotropy equations on linear scales. In particular, we focus on how the measurements of redshift-space distortions (RSD) observed from spectroscopic galaxy redshift surveys will improve the constraints when combined with lensing tomographic surveys. Assuming a Euclid-like galaxy imaging and redshift survey, we find that adding the 3D information decreases the statistical uncertainty by a factor between 3 and 7 compared to the case when only observables from lensing tomographic surveys are used. We also find that the number of well-constrained modes increases by a factor between 3 and 6. Our study indicates the importance of joint galaxy imaging and redshift surveys such as SuMIRe and Euclid to give more stringent tests of the ΛCDM model and to distinguish between various modified gravity and dark energy models.

  14. On buffer overflow duration in a finite-capacity queueing system with multiple vacation policy

    NASA Astrophysics Data System (ADS)

    Kempa, Wojciech M.

    2017-12-01

    A finite-buffer queueing system with Poisson arrivals and generally distributed processing times, operating under a multiple vacation policy, is considered. Each time the system becomes empty, the service station takes successive independent and identically distributed vacation periods until, at the completion epoch of one of them, at least one job waiting for service is detected in the buffer. Applying an analytical approach based on the idea of the embedded Markov chain, integral equations and linear algebra, a compact-form representation for the cumulative distribution function (CDF) of the first buffer overflow duration is found. Hence, the formula for the CDF of subsequent such periods is obtained. Moreover, the probability distributions of the number of job losses in successive buffer overflow periods are found. The considered queueing system can be efficiently applied in modelling energy-saving mechanisms in wireless network communication.

  15. Impact of temperature on childhood pneumonia estimated from satellite remote sensing.

    PubMed

    Xu, Zhiwei; Liu, Yang; Ma, Zongwei; Li, Shenghui; Hu, Wenbiao; Tong, Shilu

    2014-07-01

    The effect of temperature on childhood pneumonia in subtropical regions remains largely unknown. This study examined the impact of temperature on childhood pneumonia in Brisbane, Australia. A quasi-Poisson generalized linear model combined with a distributed lag non-linear model was used to quantify the main effect of temperature on emergency department visits (EDVs) for childhood pneumonia in Brisbane from 2001 to 2010. The model residuals were checked to identify added effects due to heat waves or cold spells. Both high and low temperatures were associated with an increase in EDVs for childhood pneumonia. Children aged 2-5 years and female children were particularly vulnerable to the impacts of heat and cold, and Indigenous children were sensitive to heat. Heat waves and cold spells had significant added effects on childhood pneumonia, and the magnitude of these effects increased with intensity and duration. There were changes over time in both the main and added effects of temperature on childhood pneumonia. Children, especially female and Indigenous children, should be particularly protected from extreme temperatures. Future development of early warning systems should take the change over time in the impact of temperature on children's health into account. Copyright © 2014 Elsevier Inc. All rights reserved.
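
    A minimal sketch of the quasi-Poisson idea used in such time-series studies: regress daily counts on current and lagged temperature with a Poisson GLM, estimate the dispersion from the Pearson statistic, and inflate the standard errors accordingly. This simplification, on simulated data, stands in for the full distributed lag non-linear model used in the study:

        # Quasi-Poisson time-series regression with simple distributed lags of
        # temperature (a crude stand-in for a DLNM). Simulated data only.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        days = 1000
        temp = 20 + 8 * np.sin(2 * np.pi * np.arange(days) / 365) + rng.normal(0, 2, days)
        counts = rng.poisson(np.exp(1.5 + 0.02 * temp))

        df = pd.DataFrame({"y": counts, "temp": temp})
        for lag in (1, 2, 3):                      # lagged temperature terms
            df[f"temp_lag{lag}"] = df["temp"].shift(lag)
        df = df.dropna()

        X = sm.add_constant(df[["temp", "temp_lag1", "temp_lag2", "temp_lag3"]])
        res = sm.GLM(df["y"], X, family=sm.families.Poisson()).fit()
        phi = res.pearson_chi2 / res.df_resid      # quasi-Poisson dispersion estimate
        quasi_se = res.bse * np.sqrt(phi)          # inflated (quasi-Poisson) standard errors
        print(round(phi, 2))
        print(pd.DataFrame({"coef": res.params, "quasi_se": quasi_se}))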

  16. Impact of temperature on mortality in Hubei, China: a multi-county time series analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Yunquan; Yu, Chuanhua; Bao, Junzhe; Li, Xudong

    2017-03-01

    We examined the impact of extreme temperatures on mortality in 12 counties across Hubei Province, central China, during 2009-2012. Quasi-Poisson generalized linear regression combined with a distributed lag non-linear model was first applied to estimate the county-specific relationship between temperature and mortality. A multivariable meta-analysis was then used to pool the estimates of county-specific mortality effects of extreme cold temperature (1st percentile) and hot temperature (99th percentile). An inverse J-shaped relationship was observed between temperature and mortality at the provincial level. The heat effect occurred immediately and persisted for 2-3 days, whereas the cold effect was delayed by 1-2 days and lasted much longer. Higher mortality risks were observed among females, the elderly aged over 75 years, persons dying outside the hospital and those with high education attainment, especially for cold effects. Our data revealed some slight differences in heat- and cold-related mortality effects on urban and rural residents. These findings may have important implications for developing locally-based preventive and intervention strategies to reduce temperature-related mortality, especially for those susceptible subpopulations. Also, urbanization should be considered as a potential influencing factor when evaluating the temperature-mortality association in future research.

  17. A theorem about Hamiltonian systems

    PubMed Central

    Case, K. M.

    1984-01-01

    A simple theorem in Hamiltonian mechanics is pointed out. One consequence is a generalization of the classical result that symmetries are generated by Poisson brackets of conserved functionals. General applications are discussed. Special emphasis is given to the Kadomtsev-Petviashvili equation. PMID:16593515

  18. An implicit boundary integral method for computing electric potential of macromolecules in solvent

    NASA Astrophysics Data System (ADS)

    Zhong, Yimin; Ren, Kui; Tsai, Richard

    2018-04-01

    A numerical method using implicit surface representations is proposed to solve the linearized Poisson-Boltzmann equation that arises in mathematical models for the electrostatics of molecules in solvent. The proposed method uses an implicit boundary integral formulation to derive a linear system defined on Cartesian nodes in a narrowband surrounding the closed surface that separates the molecule and the solvent. The needed implicit surface is constructed from the given atomic description of the molecules, by a sequence of standard level set algorithms. A fast multipole method is applied to accelerate the solution of the linear system. A few numerical studies involving some standard test cases are presented and compared to other existing results.

  19. Equivalent theories redefine Hamiltonian observables to exhibit change in general relativity

    NASA Astrophysics Data System (ADS)

    Pitts, J. Brian

    2017-03-01

    Change and local spatial variation are missing in canonical General Relativity’s observables as usually defined, an aspect of the problem of time. Definitions can be tested using equivalent formulations of a theory, non-gauge and gauge, because they must have equivalent observables and everything is observable in the non-gauge formulation. Taking an observable from the non-gauge formulation and finding the equivalent in the gauge formulation, one requires that the equivalent be an observable, thus constraining definitions. For massive photons, the de Broglie-Proca non-gauge formulation observable A_μ is equivalent to the Stueckelberg-Utiyama gauge formulation quantity A_μ + ∂_μ φ, which must therefore be an observable. To achieve that result, observables must have 0 Poisson bracket not with each first-class constraint, but with the Rosenfeld-Anderson-Bergmann-Castellani gauge generator G, a tuned sum of first-class constraints, in accord with the Pons-Salisbury-Sundermeyer definition of observables. The definition for external gauge symmetries can be tested using massive gravity, where one can install gauge freedom by parametrization with clock fields X^A. The non-gauge observable g_μν has the gauge equivalent X^A_{,μ} g^{μν} X^B_{,ν}. The Poisson bracket of X^A_{,μ} g^{μν} X^B_{,ν} with G turns out to be not 0 but a Lie derivative. This non-zero Poisson bracket refines and systematizes Kuchař’s proposal to relax the 0 Poisson bracket condition with the Hamiltonian constraint. Thus observables need covariance, not invariance, in relation to external gauge symmetries. The Lagrangian and Hamiltonian for massive gravity are those of General Relativity + Λ + 4 scalars, so the same definition of observables applies to General Relativity. Local fields such as g_μν are observables. Thus observables change. Requiring equivalent observables for equivalent theories also recovers Hamiltonian-Lagrangian equivalence.

  20. Statistical error in simulations of Poisson processes: Example of diffusion in solids

    NASA Astrophysics Data System (ADS)

    Nilsson, Johan O.; Leetmaa, Mikael; Vekilova, Olga Yu.; Simak, Sergei I.; Skorodumova, Natalia V.

    2016-08-01

    Simulations of diffusion in solids often produce poor statistics of diffusion events. We present an analytical expression for the statistical error in ion conductivity obtained in such simulations. The error expression is not restricted to any particular computational method but is valid for simulations of Poisson processes in general. This analytical error expression is verified numerically for the case of Gd-doped ceria by running a large number of kinetic Monte Carlo calculations.

  1. Probabilistic reasoning in data analysis.

    PubMed

    Sirovich, Lawrence

    2011-09-20

    This Teaching Resource provides lecture notes, slides, and a student assignment for a lecture on probabilistic reasoning in the analysis of biological data. General probabilistic frameworks are introduced, and a number of standard probability distributions are described using simple intuitive ideas. Particular attention is focused on random arrivals that are independent of prior history (Markovian events), with an emphasis on waiting times, Poisson processes, and Poisson probability distributions. The use of these various probability distributions is applied to biomedical problems, including several classic experimental studies.

  2. Preconditioner and convergence study for the Quantum Computer Aided Design (QCAD) nonlinear poisson problem posed on the Ottawa Flat 270 design geometry.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalashnikova, Irina

    2012-05-01

    A numerical study aimed to evaluate different preconditioners within the Trilinos Ifpack and ML packages for the Quantum Computer Aided Design (QCAD) non-linear Poisson problem implemented within the Albany code base and posed on the Ottawa Flat 270 design geometry is performed. This study led to some new development of Albany that allows the user to select an ML preconditioner with Zoltan repartitioning based on nodal coordinates, which is summarized. Convergence of the numerical solutions computed within the QCAD computational suite with successive mesh refinement is examined in two metrics, the mean value of the solution (an L¹ norm) and the field integral of the solution (an L² norm).

  3. Evaluation of Shiryaev-Roberts procedure for on-line environmental radiation monitoring.

    PubMed

    Watson, Mara M; Seliman, Ayman F; Bliznyuk, Valery N; DeVol, Timothy A

    2018-04-30

    Water can become contaminated as a result of a leak from a nuclear facility, such as a waste facility, or from clandestine nuclear activity. Low-level on-line radiation monitoring is needed to detect these events in real time. A Bayesian control chart method, the Shiryaev-Roberts (SR) procedure, was compared with classical methods, 3-σ and cumulative sum (CUSUM), for quantifying an accumulating signal from an extractive scintillating resin flow-cell detection system. Solutions containing 0.10-5.0 Bq/L of ⁹⁹Tc, as ⁹⁹TcO₄⁻, were pumped through a flow cell packed with extractive scintillating resin used in conjunction with a Beta-RAM Model 5 HPLC detector. While ⁹⁹TcO₄⁻ accumulated on the resin, time series data were collected. Control chart methods were applied to the data using statistical algorithms developed in MATLAB. SR charts were constructed using Poisson (Poisson SR) and Gaussian (Gaussian SR) probability distributions of count data to estimate the likelihood ratio. In most cases, the Poisson and Gaussian SR charts required a smaller volume of radioactive solution at a fixed concentration to exceed the control limit than the 3-σ and CUSUM control charts, particularly for solutions with lower activity. SR is thus the ideal control chart for low-level on-line radiation monitoring. Once the control limit was exceeded, activity concentrations were estimated from the SR control chart using the control chart slope on a semi-logarithmic plot. A linear regression fit was applied to averaged slope data for five activity concentration groupings for Poisson and Gaussian SR control charts. Correlation coefficients (R²) of 0.77 for Poisson SR and 0.90 for Gaussian SR suggest this method will adequately estimate the activity concentration of an unknown solution. Copyright © 2018 Elsevier Ltd. All rights reserved.
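
    A minimal sketch (not the authors' MATLAB code) of the Shiryaev-Roberts recursion for Poisson counts: R_n = (1 + R_{n-1}) Λ_n, where Λ_n is the likelihood ratio of a postulated out-of-control rate λ1 against the in-control rate λ0, and an alarm is raised when R_n exceeds a control limit A. The rates, limit and counting intervals below are illustrative, not the paper's calibrated values:

        # Shiryaev-Roberts (SR) procedure for Poisson count data:
        # R_n = (1 + R_{n-1}) * LR_n, LR_n = exp(-(lam1 - lam0)) * (lam1/lam0)**x_n.
        # The limit A roughly sets the in-control average run length.
        import numpy as np

        def poisson_sr(counts, lam0, lam1, limit):
            """Return (first index at which the chart signals, final statistic)."""
            r = 0.0
            for n, x in enumerate(counts):
                lr = np.exp(-(lam1 - lam0)) * (lam1 / lam0) ** x
                r = (1.0 + r) * lr                # SR recursion
                if r > limit:
                    return n, r
            return None, r

        rng = np.random.default_rng(4)
        background = rng.poisson(2.0, 25)         # in-control counts per interval
        elevated = rng.poisson(4.0, 25)           # counts after activity accumulates
        signal_at, _ = poisson_sr(np.concatenate([background, elevated]),
                                  lam0=2.0, lam1=4.0, limit=500.0)
        print(signal_at)                          # index at which the SR chart signals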

  4. Influenza vaccine coverage, influenza-associated morbidity and all-cause mortality in Catalonia (Spain).

    PubMed

    Muñoz, M Pilar; Soldevila, Núria; Martínez, Anna; Carmona, Glòria; Batalla, Joan; Acosta, Lesly M; Domínguez, Angela

    2011-07-12

    The objective of this work was to study the behaviour of influenza with respect to morbidity and all-cause mortality in Catalonia, and their association with influenza vaccination coverage. The study was carried out over 13 influenza seasons, from epidemiological week 40 of 1994 to week 20 of 2007, and included confirmed cases of influenza and all-cause mortality. Two generalized linear models were fitted: influenza-associated morbidity was modelled by Poisson regression and all-cause mortality by negative binomial regression. The seasonal component was modelled with a periodic function formed by the sum of sine and cosine terms. Expected influenza mortality during periods of influenza virus circulation was estimated by Poisson regression and its confidence intervals using a bootstrap approach. Vaccination coverage was associated with a reduction in influenza-associated morbidity (p<0.001), but not with a reduction in all-cause mortality (p=0.149). In the case of influenza-associated morbidity, an increase of 5% in vaccination coverage represented a reduction of 3% in the incidence rate of influenza. There was a positive association between influenza-associated morbidity and all-cause mortality. Excess mortality attributable to influenza epidemics was estimated as 34.4 (95% CI: 28.4-40.8) weekly deaths. In conclusion, all-cause mortality is a good indicator of influenza surveillance and vaccination coverage is associated with a reduction in influenza-associated morbidity but not with all-cause mortality. Copyright © 2011 Elsevier Ltd. All rights reserved.
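
    A minimal sketch of modelling the seasonal component with a sum of sine and cosine terms in a Poisson log-linear model, together with a vaccination-coverage term, as described above; the data and variable names are illustrative, not the study's:

        # Poisson regression for weekly counts with a harmonic (sine/cosine)
        # seasonal component and a vaccination-coverage term. Simulated data.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(5)
        weeks = np.arange(13 * 52)
        coverage = 20 + 0.01 * weeks                 # slowly rising coverage (%)
        mu = np.exp(2 + 0.8 * np.sin(2 * np.pi * weeks / 52)
                      + 0.3 * np.cos(2 * np.pi * weeks / 52) - 0.006 * coverage)
        df = pd.DataFrame({
            "cases": rng.poisson(mu),
            "coverage": coverage,
            "sin52": np.sin(2 * np.pi * weeks / 52),
            "cos52": np.cos(2 * np.pi * weeks / 52),
        })

        res = smf.glm("cases ~ sin52 + cos52 + coverage", data=df,
                      family=sm.families.Poisson()).fit()
        # Percent change in weekly incidence per 5-point increase in coverage:
        print((np.exp(5 * res.params["coverage"]) - 1) * 100)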

  5. A new approach for handling longitudinal count data with zero-inflation and overdispersion: poisson geometric process model.

    PubMed

    Wan, Wai-Yin; Chan, Jennifer S K

    2009-08-01

    For time series of count data, correlated measurements, clustering as well as excessive zeros occur simultaneously in biomedical applications. Ignoring such effects might contribute to misleading treatment outcomes. A generalized mixture Poisson geometric process (GMPGP) model and a zero-altered mixture Poisson geometric process (ZMPGP) model are developed from the geometric process model, which was originally developed for modelling positive continuous data and was extended to handle count data. These models are motivated by evaluating the trend development of new tumour counts for bladder cancer patients as well as by identifying useful covariates which affect the count level. The models are implemented using Bayesian method with Markov chain Monte Carlo (MCMC) algorithms and are assessed using deviance information criterion (DIC).

  6. Variational tricomplex of a local gauge system, Lagrange structure and weak Poisson bracket

    NASA Astrophysics Data System (ADS)

    Sharapov, A. A.

    2015-09-01

    We introduce the concept of a variational tricomplex, which is applicable both to variational and nonvariational gauge systems. Assigning this tricomplex with an appropriate symplectic structure and a Cauchy foliation, we establish a general correspondence between the Lagrangian and Hamiltonian pictures of one and the same (not necessarily variational) dynamics. In practical terms, this correspondence allows one to construct the generating functional of a weak Poisson structure starting from that of a Lagrange structure. As a byproduct, a covariant procedure is proposed for deriving the classical BRST charge of the BFV formalism by a given BV master action. The general approach is illustrated by the examples of Maxwell’s electrodynamics and chiral bosons in two dimensions.

  7. A Poisson hierarchical modelling approach to detecting copy number variation in sequence coverage data.

    PubMed

    Sepúlveda, Nuno; Campino, Susana G; Assefa, Samuel A; Sutherland, Colin J; Pain, Arnab; Clark, Taane G

    2013-02-26

    The advent of next generation sequencing technology has accelerated efforts to map and catalogue copy number variation (CNV) in genomes of important micro-organisms for public health. A typical analysis of the sequence data involves mapping reads onto a reference genome, calculating the respective coverage, and detecting regions with too-low or too-high coverage (deletions and amplifications, respectively). Current CNV detection methods rely on statistical assumptions (e.g., a Poisson model) that may not hold in general, or require fine-tuning the underlying algorithms to detect known hits. We propose a new CNV detection methodology based on two Poisson hierarchical models, the Poisson-Gamma and Poisson-Lognormal, with the advantage of being sufficiently flexible to describe different data patterns, whilst robust against deviations from the often assumed Poisson model. Using sequence coverage data of 7 Plasmodium falciparum malaria genomes (3D7 reference strain, HB3, DD2, 7G8, GB4, OX005, and OX006), we showed that empirical coverage distributions are intrinsically asymmetric and overdispersed in relation to the Poisson model. We also demonstrated a low baseline false positive rate for the proposed methodology using 3D7 resequencing data and simulation. When applied to the non-reference isolate data, our approach detected known CNV hits, including an amplification of the PfMDR1 locus in DD2 and a large deletion in the CLAG3.2 gene in GB4, and putative novel CNV regions. When compared to the recently available FREEC and cn.MOPS approaches, our findings were more concordant with putative hits from the highest quality array data for the 7G8 and GB4 isolates. In summary, the proposed methodology brings an increase in flexibility, robustness, accuracy and statistical rigour to CNV detection using sequence coverage data.

  8. A Model Comparison for Count Data with a Positively Skewed Distribution with an Application to the Number of University Mathematics Courses Completed

    ERIC Educational Resources Information Center

    Liou, Pey-Yan

    2009-01-01

    The current study examines three regression models: OLS (ordinary least squares) linear regression, Poisson regression, and negative binomial regression for analyzing count data. Simulation results show that the OLS regression model performed better than the others, since it did not produce more false statistically significant relationships than…

  9. Generic Schemes for Single-Molecule Kinetics. 2: Information Content of the Poisson Indicator.

    PubMed

    Avila, Thomas R; Piephoff, D Evan; Cao, Jianshu

    2017-08-24

    Recently, we described a pathway analysis technique (paper 1) for analyzing generic schemes for single-molecule kinetics based upon the first-passage time distribution. Here, we employ this method to derive expressions for the Poisson indicator, a normalized measure of stochastic variation (essentially equivalent to the Fano factor and Mandel's Q parameter), for various renewal (i.e., memoryless) enzymatic reactions. We examine its dependence on substrate concentration, without assuming all steps follow Poissonian kinetics. Based upon fitting to the functional forms of the first two waiting time moments, we show that, to second order, the non-Poissonian kinetics are generally underdetermined but can be specified in certain scenarios. For an enzymatic reaction with an arbitrary intermediate topology, we identify a generic minimum of the Poisson indicator as a function of substrate concentration, which can be used to tune substrate concentration to the stochastic fluctuations and to estimate the largest number of underlying consecutive links in a turnover cycle. We identify a local maximum of the Poisson indicator (with respect to substrate concentration) for a renewal process as a signature of competitive binding, either between a substrate and an inhibitor or between multiple substrates. Our analysis explores the rich connections between Poisson indicator measurements and microscopic kinetic mechanisms.

  10. Technical and biological variance structure in mRNA-Seq data: life in the real world

    PubMed Central

    2012-01-01

    Background: mRNA expression data from next generation sequencing platforms is obtained in the form of counts per gene or exon. Counts have classically been assumed to follow a Poisson distribution in which the variance is equal to the mean. The Negative Binomial distribution which allows for over-dispersion, i.e., for the variance to be greater than the mean, is commonly used to model count data as well. Results: In mRNA-Seq data from 25 subjects, we found technical variation to generally follow a Poisson distribution as has been reported previously and biological variability was over-dispersed relative to the Poisson model. The mean-variance relationship across all genes was quadratic, in keeping with a Negative Binomial (NB) distribution. Over-dispersed Poisson and NB distributional assumptions demonstrated marked improvements in goodness-of-fit (GOF) over the standard Poisson model assumptions, but with evidence of over-fitting in some genes. Modeling of experimental effects improved GOF for high variance genes but increased the over-fitting problem. Conclusions: These conclusions will guide development of analytical strategies for accurate modeling of variance structure in these data and sample size determination which in turn will aid in the identification of true biological signals that inform our understanding of biological systems. PMID:22769017
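
    A small sketch of the quadratic mean-variance relationship mentioned above: under a Negative Binomial model Var(Y) = μ + φμ², so a crude moment estimate of a common dispersion φ follows from regressing (variance − mean) on mean² across genes. The counts here are simulated, not the study's data:

        # Quadratic (Negative Binomial) mean-variance relation Var(Y) = mu + phi*mu**2,
        # with a moment estimate of a common dispersion phi from gene-wise sample
        # means and variances. Simulated counts only.
        import numpy as np

        rng = np.random.default_rng(6)
        n_genes, n_samples, phi_true = 2000, 25, 0.15
        mu = rng.gamma(2.0, 50.0, n_genes)                   # gene-wise expected counts
        # NB draws via a Poisson-Gamma mixture: lambda ~ Gamma(1/phi, scale=phi*mu)
        lam = rng.gamma(1 / phi_true, phi_true * mu[:, None], (n_genes, n_samples))
        counts = rng.poisson(lam)

        m = counts.mean(axis=1)
        v = counts.var(axis=1, ddof=1)
        phi_hat = np.sum((v - m) * m**2) / np.sum(m**4)      # least squares through the origin
        print(round(phi_hat, 3))                             # should be near 0.15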

  11. A novel multitarget model of radiation-induced cell killing based on the Gaussian distribution.

    PubMed

    Zhao, Lei; Mi, Dong; Sun, Yeqing

    2017-05-07

    The multitarget version of the traditional target theory based on the Poisson distribution is still used to describe the dose-survival curves of cells after ionizing radiation in radiobiology and radiotherapy. However, noting that the usual ionizing radiation damage is the result of two sequential stochastic processes, the probability distribution of the damage number per cell should follow a compound Poisson distribution, like e.g. Neyman's distribution of type A (N. A.). In consideration of that the Gaussian distribution can be considered as the approximation of the N. A. in the case of high flux, a multitarget model based on the Gaussian distribution is proposed to describe the cell inactivation effects in low linear energy transfer (LET) radiation with high dose-rate. Theoretical analysis and experimental data fitting indicate that the present theory is superior to the traditional multitarget model and similar to the Linear - Quadratic (LQ) model in describing the biological effects of low-LET radiation with high dose-rate, and the parameter ratio in the present model can be used as an alternative indicator to reflect the radiation damage and radiosensitivity of the cells. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. A coarse-grid-projection acceleration method for finite-element incompressible flow computations

    NASA Astrophysics Data System (ADS)

    Kashefi, Ali; Staples, Anne; FiN Lab Team

    2015-11-01

    Coarse grid projection (CGP) methodology provides a framework for accelerating computations by performing some part of the computation on a coarsened grid. We apply the CGP to pressure projection methods for finite element-based incompressible flow simulations. In this approach, the predicted velocity field is restricted to a coarsened grid, the pressure is determined by solving the Poisson equation on the coarse grid, and the resulting data are prolonged to the preset fine grid. The contributions of the CGP method to the pressure correction technique are twofold: first, it substantially lessens the computational cost devoted to the Poisson equation, which is the most time-consuming part of the simulation process. Second, it preserves the accuracy of the velocity field. The velocity and pressure spaces are approximated by a Galerkin spectral element method using piecewise linear basis functions. A restriction operator is designed so that fine data are directly injected into the coarse grid. The Laplacian and divergence matrices are derived by taking inner products of coarse grid shape functions. Linear interpolation is implemented to construct a prolongation operator. A study of the data accuracy and the CPU time for the CGP-based versus non-CGP computations is presented.

  13. Curvature and gravity actions for matrix models: II. The case of general Poisson structures

    NASA Astrophysics Data System (ADS)

    Blaschke, Daniel N.; Steinacker, Harold

    2010-12-01

    We study the geometrical meaning of higher order terms in matrix models of Yang-Mills type in the semi-classical limit, generalizing recent results (Blaschke and Steinacker 2010 Class. Quantum Grav. 27 165010 (arXiv:1003.4132)) to the case of four-dimensional spacetime geometries with general Poisson structure. Such terms are expected to arise e.g. upon quantization of the IKKT-type models. We identify terms which depend only on the intrinsic geometry and curvature, including modified versions of the Einstein-Hilbert action as well as terms which depend on the extrinsic curvature. Furthermore, a mechanism is found which implies that the effective metric G on the spacetime brane M ⊂ ℝ^D 'almost' coincides with the induced metric g. Deviations from G = g are suppressed, and characterized by the would-be U(1) gauge field.

  14. Generic buckling curves for specially orthotropic rectangular plates

    NASA Technical Reports Server (NTRS)

    Brunnelle, E. J.; Oyibo, G. A.

    1983-01-01

    Using a double affine transformation, the classical buckling equation for specially orthotropic plates and the corresponding virtual work theorem are presented in a particularly simple fashion. These dual representations are characterized by a single material constant, called the generalized rigidity ratio, whose range is predicted to be the closed interval from 0 to 1 (if this prediction is correct then the numerical results using a ratio greater than 1 in the specially orthotropic plate literature are incorrect); when natural boundary conditions are considered a generalized Poisson's ratio is introduced. Thus the buckling results are valid for any specially orthotropic material; hence the curves presented in the text are generic rather than specific. The solution trends are twofold; the buckling coefficients decrease with decreasing generalized rigidity ratio and, when applicable, they decrease with increasing generalized Poisson's ratio. Since the isotropic plate is one limiting case of the above analysis, it is also true that isotropic buckling coefficients decrease with increasing Poisson's ratio.

  15. Segmentation algorithm for non-stationary compound Poisson processes. With an application to inventory time series of market members in a financial market

    NASA Astrophysics Data System (ADS)

    Tóth, B.; Lillo, F.; Farmer, J. D.

    2010-11-01

    We introduce an algorithm for the segmentation of a class of regime switching processes. The segmentation algorithm is a nonparametric statistical method able to identify the regimes (patches) of a time series. The process is composed of consecutive patches of variable length. In each patch the process is described by a stationary compound Poisson process, i.e. a Poisson process where each count is associated with a fluctuating signal. The parameters of the process are different in each patch and therefore the time series is non-stationary. Our method is a generalization of the algorithm introduced by Bernaola-Galván et al. [Phys. Rev. Lett. 87, 168105 (2001)]. We show that the new algorithm outperforms the original one for regime switching models of compound Poisson processes. As an application we use the algorithm to segment the time series of the inventory of market members of the London Stock Exchange and we observe that our method finds almost three times more patches than the original one.

  16. Multiscale modeling of a rectifying bipolar nanopore: Comparing Poisson-Nernst-Planck to Monte Carlo

    NASA Astrophysics Data System (ADS)

    Matejczyk, Bartłomiej; Valiskó, Mónika; Wolfram, Marie-Therese; Pietschmann, Jan-Frederik; Boda, Dezső

    2017-03-01

    In the framework of a multiscale modeling approach, we present a systematic study of a bipolar rectifying nanopore using a continuum and a particle simulation method. The common ground in the two methods is the application of the Nernst-Planck (NP) equation to compute ion transport in the framework of the implicit-water electrolyte model. The difference is that the Poisson-Boltzmann theory is used in the Poisson-Nernst-Planck (PNP) approach, while the Local Equilibrium Monte Carlo (LEMC) method is used in the particle simulation approach (NP+LEMC) to relate the concentration profile to the electrochemical potential profile. Since we consider a bipolar pore which is short and narrow, we perform simulations using two-dimensional PNP. In addition, results of a non-linear version of PNP that takes crowding of ions into account are shown. We observe that the mean field approximation applied in PNP is appropriate to reproduce the basic behavior of the bipolar nanopore (e.g., rectification) for varying parameters of the system (voltage, surface charge, electrolyte concentration, and pore radius). We present current data that characterize the nanopore's behavior as a device, as well as concentration, electrical potential, and electrochemical potential profiles.

  17. Multiscale modeling of a rectifying bipolar nanopore: Comparing Poisson-Nernst-Planck to Monte Carlo.

    PubMed

    Matejczyk, Bartłomiej; Valiskó, Mónika; Wolfram, Marie-Therese; Pietschmann, Jan-Frederik; Boda, Dezső

    2017-03-28

    In the framework of a multiscale modeling approach, we present a systematic study of a bipolar rectifying nanopore using a continuum and a particle simulation method. The common ground in the two methods is the application of the Nernst-Planck (NP) equation to compute ion transport in the framework of the implicit-water electrolyte model. The difference is that the Poisson-Boltzmann theory is used in the Poisson-Nernst-Planck (PNP) approach, while the Local Equilibrium Monte Carlo (LEMC) method is used in the particle simulation approach (NP+LEMC) to relate the concentration profile to the electrochemical potential profile. Since we consider a bipolar pore which is short and narrow, we perform simulations using two-dimensional PNP. In addition, results of a non-linear version of PNP that takes crowding of ions into account are shown. We observe that the mean field approximation applied in PNP is appropriate to reproduce the basic behavior of the bipolar nanopore (e.g., rectification) for varying parameters of the system (voltage, surface charge, electrolyte concentration, and pore radius). We present current data that characterize the nanopore's behavior as a device, as well as concentration, electrical potential, and electrochemical potential profiles.

  18. Square Root Graphical Models: Multivariate Generalizations of Univariate Exponential Families that Permit Positive Dependencies

    PubMed Central

    Inouye, David I.; Ravikumar, Pradeep; Dhillon, Inderjit S.

    2016-01-01

    We develop Square Root Graphical Models (SQR), a novel class of parametric graphical models that provides multivariate generalizations of univariate exponential family distributions. Previous multivariate graphical models (Yang et al., 2015) did not allow positive dependencies for the exponential and Poisson generalizations. However, in many real-world datasets, variables clearly have positive dependencies. For example, the airport delay time in New York—modeled as an exponential distribution—is positively related to the delay time in Boston. With this motivation, we give an example of our model class derived from the univariate exponential distribution that allows for almost arbitrary positive and negative dependencies with only a mild condition on the parameter matrix—a condition akin to the positive definiteness of the Gaussian covariance matrix. Our Poisson generalization allows for both positive and negative dependencies without any constraints on the parameter values. We also develop parameter estimation methods using node-wise regressions with ℓ1 regularization and likelihood approximation methods using sampling. Finally, we demonstrate our exponential generalization on a synthetic dataset and a real-world dataset of airport delay times. PMID:27563373

  19. An empirical Bayesian and Buhlmann approach with non-homogenous Poisson process

    NASA Astrophysics Data System (ADS)

    Noviyanti, Lienda

    2015-12-01

    All general insurance companies in Indonesia have to adjust their current premium rates according to the maximum and minimum limit rates in the new regulation established by the Financial Services Authority (Otoritas Jasa Keuangan / OJK). In this research, we estimated premium rates by means of the Bayesian and the Buhlmann approaches using historical claim frequency and claim severity in five risk groups. We assumed a Poisson-distributed claim frequency and a normally distributed claim severity. In particular, we used a non-homogeneous Poisson process for estimating the parameters of the claim frequency. We found that the estimated premium rates are higher than the actual current rates. With regard to the OJK upper and lower limit rates, the estimates across the five risk groups vary; some fall within the interval and some fall outside it.

  20. A dictionary learning approach for Poisson image deblurring.

    PubMed

    Ma, Liyan; Moisan, Lionel; Yu, Jian; Zeng, Tieyong

    2013-07-01

    The restoration of images corrupted by blur and Poisson noise is a key issue in medical and biological image processing. While most existing methods are based on variational models, generally derived from a maximum a posteriori (MAP) formulation, sparse representations of images have recently been shown to be efficient approaches for image recovery. Following this idea, we propose in this paper a model containing three terms: a patch-based sparse representation prior over a learned dictionary, the pixel-based total variation regularization term and a data-fidelity term capturing the statistics of Poisson noise. The resulting optimization problem can be solved by an alternating minimization technique combined with variable splitting. Extensive experimental results suggest that in terms of visual quality, peak signal-to-noise ratio value and the method noise, the proposed algorithm outperforms state-of-the-art methods.
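
    For reference, the Poisson negative log-likelihood that typically serves as the data-fidelity term in such restoration models (written generically for a blur operator H, image x and observed counts y, dropping constants; this is the standard form, not necessarily the paper's exact formulation) is

        D(x) \;=\; \sum_i \left[ (Hx)_i - y_i \log (Hx)_i \right],

    to which the total-variation term and the patch-based dictionary sparsity prior are then added.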

  1. Convergence of Spectral Discretizations of the Vlasov--Poisson System

    DOE PAGES

    Manzini, G.; Funaro, D.; Delzanno, G. L.

    2017-09-26

    Here we prove the convergence of a spectral discretization of the Vlasov-Poisson system. The velocity term of the Vlasov equation is discretized using either Hermite functions on the infinite domain or Legendre polynomials on a bounded domain. The spatial term of the Vlasov and Poisson equations is discretized using periodic Fourier expansions. Boundary conditions are treated in weak form through a penalty type term that can be applied also in the Hermite case. As a matter of fact, stability properties of the approximated scheme descend from this added term. The convergence analysis is carried out in detail for the 1D-1V case, but results can be generalized to multidimensional domains, obtained as Cartesian product, in both space and velocity. The error estimates show the spectral convergence under suitable regularity assumptions on the exact solution.

  2. Unique Zigzag-Shaped Buckling Zn2C Monolayer with Strain-Tunable Band Gap and Negative Poisson Ratio.

    PubMed

    Meng, Lingbiao; Zhang, Yingjuan; Zhou, Minjie; Zhang, Jicheng; Zhou, Xiuwen; Ni, Shuang; Wu, Weidong

    2018-02-19

    Designing new materials with reduced dimensionality and distinguished properties has continuously attracted intense interest for materials innovation. Here we report a novel two-dimensional (2D) Zn₂C monolayer nanomaterial with exceptional structure and properties by means of first-principles calculations. This new Zn₂C monolayer is composed of quasi-tetrahedral tetracoordinate carbon and quasi-linear bicoordinate zinc, featuring a peculiar zigzag-shaped buckling configuration. The unique coordinate topology endows this natural 2D semiconducting monolayer with a strongly strain-tunable band gap and unusual negative Poisson ratios. The monolayer has good dynamic and thermal stabilities and is also the lowest-energy 2D structure identified by the particle-swarm optimization (PSO) method, implying its synthetic feasibility. With these intriguing properties the material may find applications in nanoelectronics and micromechanics.

  3. Vitamin D and health care costs: Results from two independent population-based cohort studies.

    PubMed

    Hannemann, A; Wallaschofski, H; Nauck, M; Marschall, P; Flessa, S; Grabe, H J; Schmidt, C O; Baumeister, S E

    2017-10-31

    Vitamin D deficiency is associated with higher morbidity. However, there are few data regarding the effect of vitamin D deficiency on health care costs. This study examined the cross-sectional and longitudinal associations between the serum 25-hydroxy vitamin D concentration (25OHD) and direct health care costs and hospitalization in two independent samples of the general population in North-Eastern Germany. We studied 7217 healthy individuals from the 'Study of Health in Pomerania' (SHIP n = 3203) and the 'Study of Health in Pomerania-Trend' (SHIP-Trend n = 4014) who had valid 25OHD measurements and provided data on annual total costs, outpatient costs, hospital stays, and inpatient costs. The associations between 25OHD concentrations (modelled continuously using fractional polynomials) and health care costs were examined using a generalized linear model with a gamma distribution and a log link. Poisson regression models were used to estimate relative risks of hospitalization. In cross-sectional analysis of SHIP-Trend, non-linear associations between the 25OHD concentration and inpatient costs and hospitalization were detected: participants with 25OHD concentrations of 5, 10 and 15 ng/ml had 226.1%, 51.5% and 14.1%, respectively, higher inpatient costs than those with 25OHD concentrations of 20 ng/ml (overall p-value = 0.001) in multivariable models. We found a relation between lower 25OHD concentrations and increased inpatient health care costs and hospitalization. Our results thus indicate an influence of vitamin D deficiency on health care costs in the general population. Copyright © 2017 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
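
    A minimal sketch (not the study's analysis) of a gamma GLM with a log link for right-skewed cost data with a 25OHD term; the data are simulated, and on older statsmodels versions the link class is spelled links.log() rather than links.Log():

        # Gamma GLM with log link for skewed annual cost data and a 25(OH)D term.
        # Simulated data; not the study's analysis.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(7)
        n = 3000
        vitd = rng.uniform(5, 50, n)                    # 25OHD in ng/ml
        mu = np.exp(7.5 - 0.02 * vitd)                  # expected annual costs
        costs = rng.gamma(shape=1.2, scale=mu / 1.2)    # skewed positive outcomes

        df = pd.DataFrame({"costs": costs, "vitd": vitd})
        res = smf.glm("costs ~ vitd", data=df,
                      family=sm.families.Gamma(link=sm.families.links.Log())).fit()
        # Multiplicative change in expected costs per 5 ng/ml higher 25OHD:
        print(np.exp(5 * res.params["vitd"]))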

  4. General methodology for nonlinear modeling of neural systems with Poisson point-process inputs.

    PubMed

    Marmarelis, V Z; Berger, T W

    2005-07-01

    This paper presents a general methodological framework for the practical modeling of neural systems with point-process inputs (sequences of action potentials or, more broadly, identical events) based on the Volterra and Wiener theories of functional expansions and system identification. The paper clarifies the distinctions between Volterra and Wiener kernels obtained from Poisson point-process inputs. It shows that only the Wiener kernels can be estimated via cross-correlation, but must be defined as zero along the diagonals. The Volterra kernels can be estimated far more accurately (and from shorter data-records) by use of the Laguerre expansion technique adapted to point-process inputs, and they are independent of the mean rate of stimulation (unlike their P-W counterparts that depend on it). The Volterra kernels can also be estimated for broadband point-process inputs that are not Poisson. Useful applications of this modeling approach include cases where we seek to determine (model) the transfer characteristics between one neuronal axon (a point-process 'input') and another axon (a point-process 'output') or some other measure of neuronal activity (a continuous 'output', such as population activity) with which a causal link exists.

  5. Mapping species abundance by a spatial zero-inflated Poisson model: a case study in the Wadden Sea, the Netherlands.

    PubMed

    Lyashevska, Olga; Brus, Dick J; van der Meer, Jaap

    2016-01-01

    The objective of the study was to provide a general procedure for mapping species abundance when data are zero-inflated and spatially correlated counts. The bivalve species Macoma balthica was observed on a 500×500 m grid in the Dutch part of the Wadden Sea. In total, 66% of the 3451 counts were zeros. A zero-inflated Poisson mixture model was used to relate counts to environmental covariates. Two models were considered, one with relatively fewer covariates (model "small") than the other (model "large"). The models contained two processes: a Bernoulli (species prevalence) and a Poisson (species intensity, when the Bernoulli process predicts presence). The model was used to make predictions for sites where only environmental data are available. Predicted prevalences and intensities show that the model "small" predicts lower mean prevalence and higher mean intensity than the model "large". Yet, the product of prevalence and intensity, which might be called the unconditional intensity, is very similar. Cross-validation showed that the model "small" performed slightly better, but the difference was small. The proposed methodology might be generally applicable, but is computer intensive.
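
    A hedged sketch of the non-spatial core of such a model, using the ZeroInflatedPoisson class from statsmodels; the spatial correlation handled in the paper is omitted, and the file and covariate names are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("macoma_grid.csv")                   # hypothetical grid data
y = df["count"]                                        # zero-inflated counts (66% zeros in the study)
X = sm.add_constant(df[["mud_content", "median_grain_size", "depth"]])

# the same covariates drive both the Bernoulli (logit) part and the Poisson part in this sketch
zip_fit = sm.ZeroInflatedPoisson(y, X, exog_infl=X, inflation="logit").fit(maxiter=200)
print(zip_fit.summary())

# unconditional intensity per site: prevalence times the conditional Poisson mean
print(zip_fit.predict(X, exog_infl=X)[:5])
```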

  6. Spatiotemporal hurdle models for zero-inflated count data: Exploring trends in emergency department visits.

    PubMed

    Neelon, Brian; Chang, Howard H; Ling, Qiang; Hastings, Nicole S

    2016-12-01

    Motivated by a study exploring spatiotemporal trends in emergency department use, we develop a class of two-part hurdle models for the analysis of zero-inflated areal count data. The models consist of two components-one for the probability of any emergency department use and one for the number of emergency department visits given use. Through a hierarchical structure, the models incorporate both patient- and region-level predictors, as well as spatially and temporally correlated random effects for each model component. The random effects are assigned multivariate conditionally autoregressive priors, which induce dependence between the components and provide spatial and temporal smoothing across adjacent spatial units and time periods, resulting in improved inferences. To accommodate potential overdispersion, we consider a range of parametric specifications for the positive counts, including truncated negative binomial and generalized Poisson distributions. We adopt a Bayesian inferential approach, and posterior computation is handled conveniently within standard Bayesian software. Our results indicate that the negative binomial and generalized Poisson hurdle models vastly outperform the Poisson hurdle model, demonstrating that overdispersed hurdle models provide a useful approach to analyzing zero-inflated spatiotemporal data. © The Author(s) 2014.

  7. Research in computer science

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.

    1986-01-01

    Various graduate research activities in the field of computer science are reported. Among the topics discussed are: (1) failure probabilities in multi-version software; (2) Gaussian Elimination on parallel computers; (3) three dimensional Poisson solvers on parallel/vector computers; (4) automated task decomposition for multiple robot arms; (5) multi-color incomplete cholesky conjugate gradient methods on the Cyber 205; and (6) parallel implementation of iterative methods for solving linear equations.

  8. Poisson traces, D-modules, and symplectic resolutions

    NASA Astrophysics Data System (ADS)

    Etingof, Pavel; Schedler, Travis

    2018-03-01

    We survey the theory of Poisson traces (or zeroth Poisson homology) developed by the authors in a series of recent papers. The goal is to understand this subtle invariant of (singular) Poisson varieties, conditions for it to be finite-dimensional, its relationship to the geometry and topology of symplectic resolutions, and its applications to quantizations. The main technique is the study of a canonical D-module on the variety. In the case the variety has finitely many symplectic leaves (such as for symplectic singularities and Hamiltonian reductions of symplectic vector spaces by reductive groups), the D-module is holonomic, and hence, the space of Poisson traces is finite-dimensional. As an application, there are finitely many irreducible finite-dimensional representations of every quantization of the variety. Conjecturally, the D-module is the pushforward of the canonical D-module under every symplectic resolution of singularities, which implies that the space of Poisson traces is dual to the top cohomology of the resolution. We explain many examples where the conjecture is proved, such as symmetric powers of du Val singularities and symplectic surfaces and Slodowy slices in the nilpotent cone of a semisimple Lie algebra. We compute the D-module in the case of surfaces with isolated singularities and show it is not always semisimple. We also explain generalizations to arbitrary Lie algebras of vector fields, connections to the Bernstein-Sato polynomial, relations to two-variable special polynomials such as Kostka polynomials and Tutte polynomials, and a conjectural relationship with deformations of symplectic resolutions. In the appendix we give a brief recollection of the theory of D-modules on singular varieties that we require.

  9. QMRA for Drinking Water: 2. The Effect of Pathogen Clustering in Single-Hit Dose-Response Models.

    PubMed

    Nilsen, Vegard; Wyller, John

    2016-01-01

    Spatial and/or temporal clustering of pathogens will invalidate the commonly used assumption of Poisson-distributed pathogen counts (doses) in quantitative microbial risk assessment. In this work, the theoretically predicted effect of spatial clustering in conventional "single-hit" dose-response models is investigated by employing the stuttering Poisson distribution, a very general family of count distributions that naturally models pathogen clustering and contains the Poisson and negative binomial distributions as special cases. The analysis is facilitated by formulating the dose-response models in terms of probability generating functions. It is shown formally that the theoretical single-hit risk obtained with a stuttering Poisson distribution is lower than that obtained with a Poisson distribution, assuming identical mean doses. A similar result holds for mixed Poisson distributions. Numerical examples indicate that the theoretical single-hit risk is fairly insensitive to moderate clustering, though the effect tends to be more pronounced for low mean doses. Furthermore, using Jensen's inequality, an upper bound on risk is derived that tends to better approximate the exact theoretical single-hit risk for highly overdispersed dose distributions. The bound holds with any dose distribution (characterized by its mean and zero inflation index) and any conditional dose-response model that is concave in the dose variable. Its application is exemplified with published data from Norovirus feeding trials, for which some of the administered doses were prepared from an inoculum of aggregated viruses. The potential implications of clustering for dose-response assessment as well as practical risk characterization are discussed. © 2016 Society for Risk Analysis.
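
    A small worked example (mine, not the paper's code) of the probability-generating-function argument above: with per-pathogen hit probability r, the single-hit risk is 1 - G(1 - r), where G is the PGF of the dose distribution, and a clustered (negative binomial) dose distribution yields a lower risk than a Poisson one with the same mean.

```python
import numpy as np

r, mean_dose = 0.1, 5.0        # per-pathogen hit probability and mean dose (arbitrary)

def risk_poisson(mu, r):
    # Poisson doses: G(s) = exp(mu*(s - 1)), so risk = 1 - exp(-mu*r)
    return 1.0 - np.exp(-mu * r)

def risk_negbin(mu, k, r):
    # negative binomial doses with dispersion k: G(s) = (1 + mu*(1 - s)/k)**(-k)
    return 1.0 - (1.0 + mu * r / k) ** (-k)

print("Poisson doses:          ", risk_poisson(mean_dose, r))        # ~0.39
print("clustered (NB) doses:   ", risk_negbin(mean_dose, k=0.5, r=r))  # lower, ~0.29
```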

  10. Poisson traces, D-modules, and symplectic resolutions.

    PubMed

    Etingof, Pavel; Schedler, Travis

    2018-01-01

    We survey the theory of Poisson traces (or zeroth Poisson homology) developed by the authors in a series of recent papers. The goal is to understand this subtle invariant of (singular) Poisson varieties, conditions for it to be finite-dimensional, its relationship to the geometry and topology of symplectic resolutions, and its applications to quantizations. The main technique is the study of a canonical D-module on the variety. In the case the variety has finitely many symplectic leaves (such as for symplectic singularities and Hamiltonian reductions of symplectic vector spaces by reductive groups), the D-module is holonomic, and hence, the space of Poisson traces is finite-dimensional. As an application, there are finitely many irreducible finite-dimensional representations of every quantization of the variety. Conjecturally, the D-module is the pushforward of the canonical D-module under every symplectic resolution of singularities, which implies that the space of Poisson traces is dual to the top cohomology of the resolution. We explain many examples where the conjecture is proved, such as symmetric powers of du Val singularities and symplectic surfaces and Slodowy slices in the nilpotent cone of a semisimple Lie algebra. We compute the D-module in the case of surfaces with isolated singularities and show it is not always semisimple. We also explain generalizations to arbitrary Lie algebras of vector fields, connections to the Bernstein-Sato polynomial, relations to two-variable special polynomials such as Kostka polynomials and Tutte polynomials, and a conjectural relationship with deformations of symplectic resolutions. In the appendix we give a brief recollection of the theory of D-modules on singular varieties that we require.

  11. WE-H-BRA-01: BEST IN PHYSICS (THERAPY): Nano-Dosimetric Kinetic Model for Variable Relative Biological Effectiveness of Proton and Ion Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abolfath, R; Bronk, L; Titt, U.

    2016-06-15

    Purpose: Recent clonogenic cell survival and γH2AX studies suggest proton relative biological effectiveness (RBE) may be a non-linear function of linear energy transfer (LET) in the distal edge of the Bragg peak and beyond. We sought to develop a multiscale model to account for non-linear response phenomena to aid in the optimization of intensity-modulated proton therapy. Methods: The model is based on first-principle simulations of proton track structures, including secondary ions, and an analytical derivation of the dependence on particle LET of the linear-quadratic (LQ) model parameters α and β. The derived formulas are an extension of the microdosimetric kinetic (MK) model that captures dissipative track structures and the non-Poissonian distribution of DNA damage at the distal edge of the Bragg peak and beyond. Monte Carlo simulations were performed to confirm the non-linear dose-response characteristics arising from the non-Poisson distribution of initial DNA damage. Results: In contrast to the low-LET segments of the proton depth dose, from the beam entrance to the Bragg peak, strong deviations from non-dissipative track structures and from the Poisson distribution of ionization events in the Bragg peak distal edge govern the non-linear cell response and result in the transformation α = (1 + c_1 L) α_x + 2(c_0 L + c_2 L^2)(1 + c_1 L) β_x and β = (1 + c_1 L)^2 β_x. Here L is the charged-particle LET, and c_0, c_1, and c_2 are functions of microscopic parameters and can serve as fitting parameters to the cell-survival data. In the low-LET limit c_1 and c_2 are negligible, hence the linear model proposed and used by Wilkins-Oelfke for the proton treatment planning system can be retrieved. The present model fits well the clonogenic survival data recently measured by our group at MDACC. Conclusion: The present hybrid method provides higher accuracy in calculating the RBE-weighted dose in the target and normal tissues.
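
    For illustration, the LET-dependent LQ parameters quoted above can be evaluated directly; the reference values and coefficients below are placeholders, not the authors' fitted values.

```python
# illustrative evaluation of the abstract's transformation
#   alpha(L) = (1 + c1*L)*alpha_x + 2*(c0*L + c2*L**2)*(1 + c1*L)*beta_x
#   beta(L)  = (1 + c1*L)**2 * beta_x
alpha_x, beta_x = 0.1, 0.05          # photon (reference) LQ parameters, assumed
c0, c1, c2 = 0.005, 0.02, 1e-4       # placeholder fitting constants

def lq_params(L):
    """LET-dependent alpha and beta for LET L (keV/um)."""
    alpha = (1 + c1 * L) * alpha_x + 2 * (c0 * L + c2 * L**2) * (1 + c1 * L) * beta_x
    beta = (1 + c1 * L)**2 * beta_x
    return alpha, beta

for L in (2.0, 10.0, 20.0):
    print(L, lq_params(L))
```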

  12. Statistical properties of superimposed stationary spike trains.

    PubMed

    Deger, Moritz; Helias, Moritz; Boucsein, Clemens; Rotter, Stefan

    2012-06-01

    The Poisson process is an often employed model for the activity of neuronal populations. It is known, though, that superpositions of realistic, non-Poisson spike trains are not in general Poisson processes, not even for large numbers of superimposed processes. Here we construct superimposed spike trains from intracellular in vivo recordings from rat neocortex neurons and compare their statistics to specific point process models. The constructed superimposed spike trains reveal strong deviations from the Poisson model. We find that superpositions of model spike trains that take the effective refractoriness of the neurons into account yield a much better description. A minimal model of this kind is the Poisson process with dead-time (PPD). For this process, and for superpositions thereof, we obtain analytical expressions for some second-order statistical quantities, such as the count variability, inter-spike interval (ISI) variability, and ISI correlations, and demonstrate the match with the in vivo data. We conclude that effective refractoriness is the key property that shapes the statistical properties of the superposition spike trains. We present new, efficient algorithms to generate superpositions of PPDs and of gamma processes that can be used to provide more realistic background input in simulations of networks of spiking neurons. Using these generators, we show in simulations that neurons which receive superimposed spike trains as input are highly sensitive to the statistical effects induced by neuronal refractoriness.
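
    A minimal sketch (not the paper's generators) of a Poisson process with dead time and of the superposition of several such trains; the rate, dead time, and duration are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def ppd_train(rate, dead_time, t_max):
    """Spike times of a Poisson process with dead time on [0, t_max)."""
    # choose the exponential rate so that the mean ISI equals 1/rate = dead_time + 1/lam
    lam = rate / (1.0 - rate * dead_time)
    spikes, t = [], 0.0
    while True:
        t += dead_time + rng.exponential(1.0 / lam)
        if t >= t_max:
            return np.array(spikes)
        spikes.append(t)

# superimpose 50 PPD trains and inspect the ISI variability of the pooled train
trains = [ppd_train(rate=10.0, dead_time=0.003, t_max=10.0) for _ in range(50)]
superposition = np.sort(np.concatenate(trains))
isi = np.diff(superposition)
print("CV of superposed ISIs:", isi.std() / isi.mean())   # close to, but not exactly, 1
```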

  13. Scaling, elasticity, and CLPT

    NASA Technical Reports Server (NTRS)

    Brunelle, Eugene J.

    1994-01-01

    The first few viewgraphs describe the general solution properties of linear elasticity theory which are given by the following two statements: (1) for stress B.C. on S(sub sigma) and zero displacement B.C. on S(sub u) the altered displacements u(sub i)(*) and the actual stresses tau(sub ij) are elastically dependent on Poisson's ratio nu alone: thus the actual displacements are given by u(sub i) = mu(exp -1)u(sub i)(*); and (2) for zero stress B.C. on S(sub sigma) and displacement B.C. on S(sub u) the actual displacements u(sub i) and the altered stresses tau(sub ij)(*) are elastically dependent on Poisson's ratio nu alone: thus the actual stresses are given by tau(sub ij) = E tau(sub ij)(*). The remaining viewgraphs describe the minimum parameter formulation of the general classical laminate theory plate problem as follows: The general CLT plate problem is expressed as a 3 x 3 system of differential equations in the displacements u, v, and w. The eighteen (six each) A(sub ij), B(sub ij), and D(sub ij) system coefficients are ply-weighted sums of the transformed reduced stiffnesses (bar-Q(sub ij))(sub k); the (bar-Q(sub ij))(sub k) in turn depend on six reduced stiffnesses (Q(sub ij))(sub k) and the material and geometry properties of the k(sup th) layer. This paper develops a method for redefining the system coefficients, the displacement components (u,v,w), and the position components (x,y) such that a minimum parameter formulation is possible. The pivotal steps in this method are (1) the reduction of (bar-Q(sub ij))(sub k) dependencies to just two constants Q(*) = (Q(12) + 2Q(66))/(Q(11)Q(22))(exp 1/2) and F(*) = (Q(22)/Q(11))(exp 1/2) in terms of ply-independent reference values Q(sub ij); (2) the reduction of the remaining portions of the A, B, and D coefficients to nondimensional ply-weighted sums (with 0 to 1 ranges) that are independent of Q(*) and F(*); and (3) the introduction of simple coordinate stretchings for u, v, w and x,y such that the process is neatly completed.

  14. The Electric Potential of a Macromolecule in a Solvent: A Fundamental Approach

    NASA Astrophysics Data System (ADS)

    Juffer, André H.; Botta, Eugen F. F.; van Keulen, Bert A. M.; van der Ploeg, Auke; Berendsen, Herman J. C.

    1991-11-01

    A general numerical method is presented to compute the electric potential for a macromolecule of arbitrary shape in a solvent with nonzero ionic strength. The model is based on a continuum description of the dielectric and screening properties of the system, which consists of a bounded internal region with discrete charges and an infinite external region. The potential obeys the Poisson equation in the internal region and the linearized Poisson-Boltzmann equation in the external region, coupled through appropriate boundary conditions. It is shown how this three-dimensional problem can be presented as a pair of coupled integral equations for the potential and the normal component of the electric field at the dielectric interface. These equations can be solved by a straightforward application of boundary element techniques. The solution involves the decomposition of a matrix that depends only on the geometry of the surface and not on the positions of the charges. With this approach the number of unknowns is reduced by an order of magnitude with respect to the usual finite difference methods. Special attention is given to the numerical inaccuracies resulting from charges which are located close to the interface; an adapted formulation is given for that case. The method is tested both for a spherical geometry, for which an exact solution is available, and for a realistic problem, for which a finite difference solution and experimental verification is available. The latter concerns the shift in acid strength (pK-values) of histidines in the copper-containing protein azurin on oxidation of the copper, for various values of the ionic strength. A general method is given to triangulate a macromolecular surface. The possibility is discussed to use the method presented here for a correct treatment of long-range electrostatic interactions in simulations of solvated macromolecules, which form an essential part of correct potentials of mean force.

  15. Species abundance distributions in neutral models with immigration or mutation and general lifetimes.

    PubMed

    Lambert, Amaury

    2011-07-01

    We consider a general, neutral, dynamical model of biodiversity. Individuals have i.i.d. lifetime durations, which are not necessarily exponentially distributed, and each individual gives birth independently at constant rate λ. Thus, the population size is a homogeneous, binary Crump-Mode-Jagers process (which is not necessarily a Markov process). We assume that types are clonally inherited. We consider two classes of speciation models in this setting. In the immigration model, new individuals of an entirely new species singly enter the population at constant rate μ (e.g., from the mainland into the island). In the mutation model, each individual independently experiences point mutations in its germ line, at constant rate θ. We are interested in the species abundance distribution, i.e., in the numbers, denoted I(n)(k) in the immigration model and A(n)(k) in the mutation model, of species represented by k individuals, k = 1, 2, . . . , n, when there are n individuals in the total population. In the immigration model, we prove that the numbers (I(t)(k); k ≥ 1) of species represented by k individuals at time t, are independent Poisson variables with parameters as in Fisher's log-series. When conditioning on the total size of the population to equal n, this results in species abundance distributions given by Ewens' sampling formula. In particular, I(n)(k) converges as n → ∞ to a Poisson r.v. with mean γ/k, where γ : = μ/λ. In the mutation model, as n → ∞, we obtain the almost sure convergence of n (-1) A(n)(k) to a nonrandom explicit constant. In the case of a critical, linear birth-death process, this constant is given by Fisher's log-series, namely n(-1) A(n)(k) converges to α(k)/k, where α : = λ/(λ + θ). In both models, the abundances of the most abundant species are briefly discussed.

  16. A regularization corrected score method for nonlinear regression models with covariate error.

    PubMed

    Zucker, David M; Gorfine, Malka; Li, Yi; Tadesse, Mahlet G; Spiegelman, Donna

    2013-03-01

    Many regression analyses involve explanatory variables that are measured with error, and failing to account for this error is well known to lead to biased point and interval estimates of the regression coefficients. We present here a new general method for adjusting for covariate error. Our method consists of an approximate version of the Stefanski-Nakamura corrected score approach, using the method of regularization to obtain an approximate solution of the relevant integral equation. We develop the theory in the setting of classical likelihood models; this setting covers, for example, linear regression, nonlinear regression, logistic regression, and Poisson regression. The method is extremely general in terms of the types of measurement error models covered, and is a functional method in the sense of not involving assumptions on the distribution of the true covariate. We discuss the theoretical properties of the method and present simulation results in the logistic regression setting (univariate and multivariate). For illustration, we apply the method to data from the Harvard Nurses' Health Study concerning the relationship between physical activity and breast cancer mortality in the period following a diagnosis of breast cancer. Copyright © 2013, The International Biometric Society.

  17. New method for blowup of the Euler-Poisson system

    NASA Astrophysics Data System (ADS)

    Kwong, Man Kam; Yuen, Manwai

    2016-08-01

    In this paper, we provide a new method for establishing the blowup of C2 solutions for the pressureless Euler-Poisson system with attractive forces for RN (N ≥ 2) with ρ(0, x0) > 0 and Ω0_ij(x0) = (1/2)[∂_i u_j(0, x0) - ∂_j u_i(0, x0)] = 0 at some point x0 ∈ RN. By applying the generalized Hubble transformation div u(t, x0(t)) = N ȧ(t)/a(t) to a reduced Riccati differential inequality derived from the system, we simplify the inequality into the Emden equation ä(t) = -λ/a(t)^(N-1), a(0) = 1, ȧ(0) = div u(0, x0)/N. Known results on its blowup set allow us to easily obtain the blowup conditions of the Euler-Poisson system.
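
    As a quick numerical check of the reduction above, the Emden equation can be integrated until a(t) reaches zero, which is the blowup criterion; the parameter values below are arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

N, lam, div_u0 = 3, 1.0, -0.5          # dimension, lambda, and initial div u(0, x0) (assumed)

def emden(t, y):
    # y = [a, a'], with a''(t) = -lambda / a(t)**(N - 1)
    a, adot = y
    return [adot, -lam / a**(N - 1)]

def hit_zero(t, y):
    return y[0] - 1e-6                  # event: a(t) reaches (numerically) zero
hit_zero.terminal = True

sol = solve_ivp(emden, (0.0, 50.0), [1.0, div_u0 / N], events=hit_zero, max_step=0.01)
print("a(t) reaches 0 at t ≈", sol.t_events[0])   # a finite time indicates blowup
```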

  18. Extended Poisson process modelling and analysis of grouped binary data.

    PubMed

    Faddy, Malcolm J; Smith, David M

    2012-05-01

    A simple extension of the Poisson process results in binomially distributed counts of events in a time interval. A further extension generalises this to probability distributions under- or over-dispersed relative to the binomial distribution. Substantial levels of under-dispersion are possible with this modelling, but only modest levels of over-dispersion - up to Poisson-like variation. Although simple analytical expressions for the moments of these probability distributions are not available, approximate expressions for the mean and variance are derived, and used to re-parameterise the models. The modelling is applied in the analysis of two published data sets, one showing under-dispersion and the other over-dispersion. More appropriate assessment of the precision of estimated parameters and reliable model checking diagnostics follow from this more general modelling of these data sets. © 2012 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Filtering with Marked Point Process Observations via Poisson Chaos Expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun Wei, E-mail: wsun@mathstat.concordia.ca; Zeng Yong, E-mail: zengy@umkc.edu; Zhang Shu, E-mail: zhangshuisme@hotmail.com

    2013-06-15

    We study a general filtering problem with marked point process observations. The motivation comes from modeling financial ultra-high frequency data. First, we rigorously derive the unnormalized filtering equation with marked point process observations under mild assumptions, especially relaxing the boundedness condition on the stochastic intensity. Then, we derive the Poisson chaos expansion for the unnormalized filter. Based on the chaos expansion, we establish the uniqueness of solutions of the unnormalized filtering equation. Moreover, we derive the Poisson chaos expansion for the unnormalized filter density under additional conditions. To explore the computational advantage, we further construct a new consistent recursive numerical scheme based on the truncation of the chaos density expansion for a simple case. The new algorithm divides the computations into those containing solely system coefficients and those including the observations, and assigns the former off-line.

  20. Daily temperature change in relation to the risk of childhood bacillary dysentery among different age groups and sexes in a temperate city in China.

    PubMed

    Li, K; Zhao, K; Shi, L; Wen, L; Yang, H; Cheng, J; Wang, X; Su, H

    2016-02-01

    In recent years, many studies have found that ambient temperature is significantly associated with bacillary dysentery (BD). However, there is limited evidence on the relationship between temperature and childhood BD in temperate areas. To investigate the relationship between daily mean temperature (MT) and childhood BD in China. Data on daily MT and childhood BD between 2006 and 2012 were collected from the Bureau of Meteorology and the Centre for Disease Control and Prevention in Hefei, Anhui Province, China. A Poisson generalized linear regression model combined with a distributed lag non-linear model was used to analyse the effects of temperature on childhood BD across different age and sex subgroups. An increase in temperature was significantly associated with childhood BD, and each 1 °C increase corresponded to an increase of 1.58% [95% confidence interval (CI) 0.46-2.71%] in the number of cases of BD. Children aged 0-5 years and girls were particularly sensitive to the effects of temperature. High temperatures may increase the risk of childhood BD in Hefei. Children aged 0-5 years and girls appear to be particularly sensitive to the effects of high temperature. Copyright © 2015 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
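
    A simplified sketch of such a time-series regression: the paper combines a Poisson GLM with a full distributed lag non-linear model, whereas the sketch below uses plain lag terms only, and the data file and column names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("hefei_bd_daily.csv", parse_dates=["date"])    # hypothetical daily series
for lag in range(1, 8):                                          # temperature at lags 1-7 days
    df[f"temp_lag{lag}"] = df["mean_temp"].shift(lag)
df["dow"] = df["date"].dt.dayofweek
df = df.dropna()

lags = " + ".join(f"temp_lag{l}" for l in range(1, 8))
fit = smf.glm(f"cases ~ mean_temp + {lags} + C(dow) + humidity",
              data=df, family=sm.families.Poisson()).fit()

# % change in cases per 1 degree C increase at lag 0, with 95% CI
b = fit.params["mean_temp"]
lo, hi = fit.conf_int().loc["mean_temp"]
print(f"{100*(np.exp(b)-1):.2f}% ({100*(np.exp(lo)-1):.2f}, {100*(np.exp(hi)-1):.2f})")
```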

  1. Coupling of a structural analysis and flow simulation for short-fiber-reinforced polymers: property prediction and transfer of results

    NASA Astrophysics Data System (ADS)

    Kröner, C.; Altenbach, H.; Naumenko, K.

    2009-05-01

    The aim of this paper is to discuss the basic theories of interfaces able to transfer the results of an injection molding analysis of fiber-reinforced polymers, performed using the commercial computer code Moldflow, to the structural analysis program ABAQUS. The elastic constants of the materials, such as Young's modulus, shear modulus, and Poisson's ratio, which depend on both the fiber content and the degree of fiber orientation, were calculated not by the usual method of "orientation averaging," but with the help of linear functions fitted to experimental data. The calculation and transfer of all needed data, such as material properties, geometry, directions of anisotropy, and so on, is performed by an interface developed for this purpose. The interface is suitable for midplane elements in Moldflow. It calculates and transfers to ABAQUS all data necessary for the use of shell elements. In addition, a method is described for modeling nonlinear orthotropic behavior starting from the generalized Hooke's law. It is also shown how such a model can be implemented in ABAQUS by means of a material subroutine. The results obtained with this subroutine are compared with those based on an orthotropic, linear elastic simulation.

  2. Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.

    PubMed

    Faul, Franz; Erdfelder, Edgar; Buchner, Axel; Lang, Albert-Georg

    2009-11-01

    G*Power is a free power analysis program for a variety of statistical tests. We present extensions and improvements of the version introduced by Faul, Erdfelder, Lang, and Buchner (2007) in the domain of correlation and regression analyses. In the new version, we have added procedures to analyze the power of tests based on (1) single-sample tetrachoric correlations, (2) comparisons of dependent correlations, (3) bivariate linear regression, (4) multiple linear regression based on the random predictor model, (5) logistic regression, and (6) Poisson regression. We describe these new features and provide a brief introduction to their scope and handling.

  3. The interplay between screening properties and colloid anisotropy: towards a reliable pair potential for disc-like charged particles.

    PubMed

    Agra, R; Trizac, E; Bocquet, L

    2004-12-01

    The electrostatic potential of a highly charged disc (clay platelet) in an electrolyte is investigated in detail. The corresponding non-linear Poisson-Boltzmann (PB) equation is solved numerically, and we show that the far-field behaviour (relevant for colloidal interactions in dilute suspensions) is exactly that obtained within linearized PB theory, with the surface boundary condition of a uniform potential. The latter linear problem is solved by a new semi-analytical procedure, and both the potential amplitude (quantified by an effective charge) and the potential anisotropy coincide closely between PB and linearized PB, provided the disc bare charge is high enough. This anisotropy remains at all scales; it is encoded in a function that may vary over several orders of magnitude depending on the azimuthal angle under which the disc is seen. The results allow the construction of a pair potential for disc interactions that is strongly orientation dependent.

  4. Iterative algorithms for a non-linear inverse problem in atmospheric lidar

    NASA Astrophysics Data System (ADS)

    Denevi, Giulia; Garbarino, Sara; Sorrentino, Alberto

    2017-08-01

    We consider the inverse problem of retrieving aerosol extinction coefficients from Raman lidar measurements. In this problem the unknown and the data are related through the exponential of a linear operator, the unknown is non-negative and the data follow the Poisson distribution. Standard methods work on the log-transformed data and solve the resulting linear inverse problem, but neglect to take into account the noise statistics. In this study we show that proper modelling of the noise distribution can improve substantially the quality of the reconstructed extinction profiles. To achieve this goal, we consider the non-linear inverse problem with non-negativity constraint, and propose two iterative algorithms derived using the Karush-Kuhn-Tucker conditions. We validate the algorithms with synthetic and experimental data. As expected, the proposed algorithms out-perform standard methods in terms of sensitivity to noise and reliability of the estimated profile.

  5. Pressure fluctuations and time scales in turbulent channel flow

    NASA Astrophysics Data System (ADS)

    Septham, Kamthon; Morrison, Jonathan; Diwan, Sourabh

    2015-11-01

    Pressure fluctuations in turbulent channel flow subjected to globally stabilising linear feedback control are investigated at Reτ = 400. The passivity-based control is adopted and explained by the conservative characteristics of the nonlinear terms contributing to the Reynolds-Orr equation (Sharma et al., Phys. Fluids 2011). The linear control operates via vU'; the maximum forcing is located at y+ ~ 20, corresponding to the location of the maximum in the mean-square pressure gradient. The responses of the rapid (linear) and slow (nonlinear) pressure fluctuations to the linear control are investigated using Green's function representations. This analysis demonstrates that the linear control operates via the linear source terms of the Poisson equation for pressure fluctuations. Landahl's timescales of the minimal flow unit (MFU) in turbulent channel flow are examined at y+ = 20; the MFU timescales agree well with the theoretical values proposed by Landahl (1993). Therefore, the effectiveness of the linear control in attenuating wall turbulence is explained by Landahl's theory for timescales, in that the control proceeds via the shear interaction timescale, which is significantly shorter than both the nonlinear and viscous timescales.

  6. Datamining approaches for modeling tumor control probability.

    PubMed

    Naqa, Issam El; Deasy, Joseph O; Mu, Yi; Huang, Ellen; Hope, Andrew J; Lindsay, Patricia E; Apte, Aditya; Alaly, James; Bradley, Jeffrey D

    2010-11-01

    Tumor control probability (TCP) to radiotherapy is determined by complex interactions between tumor biology, tumor microenvironment, radiation dosimetry, and patient-related variables. The complexity of these heterogeneous variable interactions constitutes a challenge for building predictive models for routine clinical practice. We describe a datamining framework that can unravel the higher order relationships among dosimetric dose-volume prognostic variables, interrogate various radiobiological processes, and generalize to unseen data when applied prospectively. Several datamining approaches are discussed that include dose-volume metrics, equivalent uniform dose, mechanistic Poisson model, and model building methods using statistical regression and machine learning techniques. Institutional datasets of non-small cell lung cancer (NSCLC) patients are used to demonstrate these methods. The performance of the different methods was evaluated using bivariate Spearman rank correlations (rs). Over-fitting was controlled via resampling methods. Using a dataset of 56 patients with primary NSCLC tumors and 23 candidate variables, we estimated GTV volume and V75 to be the best model parameters for predicting TCP using statistical resampling and a logistic model. Using these variables, the support vector machine (SVM) kernel method provided superior performance for TCP prediction with an rs=0.68 on leave-one-out testing compared to logistic regression (rs=0.4), Poisson-based TCP (rs=0.33), and the cell kill equivalent uniform dose model (rs=0.17). The prediction of treatment response can be improved by utilizing datamining approaches, which are able to unravel important non-linear complex interactions among model variables and have the capacity to predict on unseen data for prospective clinical applications.
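
    A sketch (not the authors' pipeline) of the comparison described above: leave-one-out predictions from logistic regression and an RBF-kernel SVM, scored by Spearman rank correlation; the two features stand in for GTV volume and V75 and are simulated here.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(56, 2))                                    # simulated stand-ins for GTV volume and V75
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=56) > 0).astype(int)   # simulated tumor control outcome

for name, clf in [("logistic", LogisticRegression()),
                  ("svm-rbf", SVC(kernel="rbf", probability=True))]:
    model = make_pipeline(StandardScaler(), clf)
    # leave-one-out predicted probabilities of tumor control
    p = cross_val_predict(model, X, y, cv=LeaveOneOut(), method="predict_proba")[:, 1]
    print(name, "Spearman rs = %.2f" % spearmanr(p, y)[0])
```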

  7. A Poisson hierarchical modelling approach to detecting copy number variation in sequence coverage data

    PubMed Central

    2013-01-01

    Background The advent of next generation sequencing technology has accelerated efforts to map and catalogue copy number variation (CNV) in genomes of important micro-organisms for public health. A typical analysis of the sequence data involves mapping reads onto a reference genome, calculating the respective coverage, and detecting regions with too-low or too-high coverage (deletions and amplifications, respectively). Current CNV detection methods rely on statistical assumptions (e.g., a Poisson model) that may not hold in general, or require fine-tuning the underlying algorithms to detect known hits. We propose a new CNV detection methodology based on two Poisson hierarchical models, the Poisson-Gamma and Poisson-Lognormal, with the advantage of being sufficiently flexible to describe different data patterns, whilst robust against deviations from the often assumed Poisson model. Results Using sequence coverage data of 7 Plasmodium falciparum malaria genomes (3D7 reference strain, HB3, DD2, 7G8, GB4, OX005, and OX006), we showed that empirical coverage distributions are intrinsically asymmetric and overdispersed in relation to the Poisson model. We also demonstrated a low baseline false positive rate for the proposed methodology using 3D7 resequencing data and simulation. When applied to the non-reference isolate data, our approach detected known CNV hits, including an amplification of the PfMDR1 locus in DD2 and a large deletion in the CLAG3.2 gene in GB4, and putative novel CNV regions. When compared to the recently available FREEC and cn.MOPS approaches, our findings were more concordant with putative hits from the highest quality array data for the 7G8 and GB4 isolates. Conclusions In summary, the proposed methodology brings an increase in flexibility, robustness, accuracy and statistical rigour to CNV detection using sequence coverage data. PMID:23442253
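
    A hedged sketch of the idea behind the Poisson-Gamma component: marginally, the per-window coverage is negative binomial, so windows whose counts are extreme under a genome-wide NB fit become candidate deletions or amplifications. This flattens the hierarchical model of the paper into a single NB fit, and the input file is hypothetical.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

coverage = np.loadtxt("window_coverage.txt", dtype=int)   # hypothetical read counts per window

def nb_negloglik(params):
    mu, k = np.exp(params)                 # mean and dispersion, kept positive via log-parametrization
    p = k / (k + mu)
    return -stats.nbinom.logpmf(coverage, k, p).sum()

fit = minimize(nb_negloglik, x0=np.log([coverage.mean(), 1.0]), method="Nelder-Mead")
mu, k = np.exp(fit.x)
p = k / (k + mu)

lower = stats.nbinom.ppf(0.001, k, p)      # below this: candidate deletions
upper = stats.nbinom.ppf(0.999, k, p)      # above this: candidate amplifications
flags = (coverage < lower) | (coverage > upper)
print(f"flagged {flags.sum()} of {coverage.size} windows")
```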

  8. Gauge Momenta as Casimir Functions of Nonholonomic Systems

    NASA Astrophysics Data System (ADS)

    García-Naranjo, Luis C.; Montaldi, James

    2018-05-01

    We consider nonholonomic systems with symmetry possessing a certain type of first integral which is linear in the velocities. We develop a systematic method for modifying the standard nonholonomic almost Poisson structure that describes the dynamics so that these integrals become Casimir functions after reduction. This explains a number of recent results on Hamiltonization of nonholonomic systems, and has consequences for the study of relative equilibria in such systems.

  9. POSTPROCESSING MIXED FINITE ELEMENT METHODS FOR SOLVING CAHN-HILLIARD EQUATION: METHODS AND ERROR ANALYSIS

    PubMed Central

    Wang, Wansheng; Chen, Long; Zhou, Jie

    2015-01-01

    A postprocessing technique for mixed finite element methods for the Cahn-Hilliard equation is developed and analyzed. Once the mixed finite element approximations have been computed at a fixed time on the coarser mesh, the approximations are postprocessed by solving two decoupled Poisson equations in an enriched finite element space (either on a finer grid or a higher-order space), for which many fast Poisson solvers can be applied. The nonlinear iteration is applied only to a much smaller problem, and the computational cost using Newton and direct solvers is negligible compared with the cost of the linear problem. The analysis presented here shows that this technique retains the optimal rate of convergence for both the concentration and the chemical potential approximations. The corresponding error estimates obtained in our paper, especially the negative-norm error estimates, are non-trivial and differ from existing results in the literature. PMID:27110063

  10. Does attitude matter in computer use in Australian general practice? A zero-inflated Poisson regression analysis.

    PubMed

    Khan, Asaduzzaman; Western, Mark

    The purpose of this study was to explore factors that facilitate or hinder effective use of computers in Australian general medical practice. This study is based on data extracted from a national telephone survey of 480 general practitioners (GPs) across Australia. Clinical functions performed by GPs using computers were examined using zero-inflated Poisson (ZIP) regression modelling. About 17% of GPs were not using a computer for any clinical function, while 18% reported using computers for all clinical functions. The ZIP model showed that computer anxiety was negatively associated with effective computer use, while practitioners' belief about the usefulness of computers was positively associated with effective computer use. Being a female GP or working in a partnership or group practice increased the odds of effectively using computers for clinical functions. To fully capitalise on the benefits of computer technology, GPs need to be convinced that this technology is useful and can make a difference.

  11. Lognormal Distribution of Cellular Uptake of Radioactivity: Statistical Analysis of α-Particle Track Autoradiography

    PubMed Central

    Neti, Prasad V.S.V.; Howell, Roger W.

    2010-01-01

    Recently, the distribution of radioactivity among a population of cells labeled with 210Po was shown to be well described by a log-normal (LN) distribution function (J Nucl Med. 2006;47:1049–1058) with the aid of autoradiography. To ascertain the influence of Poisson statistics on the interpretation of the autoradiographic data, the present work reports on a detailed statistical analysis of these earlier data. Methods The measured distributions of α-particle tracks per cell were subjected to statistical tests with Poisson, LN, and Poisson-lognormal (P-LN) models. Results The LN distribution function best describes the distribution of radioactivity among cell populations exposed to 0.52 and 3.8 kBq/mL of 210Po-citrate. When cells were exposed to 67 kBq/mL, the P-LN distribution function gave a better fit; however, the underlying activity distribution remained log-normal. Conclusion The present analysis generally provides further support for the use of LN distributions to describe the cellular uptake of radioactivity. Care should be exercised when analyzing autoradiographic data on activity distributions to ensure that Poisson processes do not distort the underlying LN distribution. PMID:18483086

  12. Log Normal Distribution of Cellular Uptake of Radioactivity: Statistical Analysis of Alpha Particle Track Autoradiography

    PubMed Central

    Neti, Prasad V.S.V.; Howell, Roger W.

    2008-01-01

    Recently, the distribution of radioactivity among a population of cells labeled with 210Po was shown to be well described by a log normal distribution function (J Nucl Med 47, 6 (2006) 1049-1058) with the aid of an autoradiographic approach. To ascertain the influence of Poisson statistics on the interpretation of the autoradiographic data, the present work reports on a detailed statistical analysis of these data. Methods The measured distributions of alpha particle tracks per cell were subjected to statistical tests with Poisson (P), log normal (LN), and Poisson – log normal (P – LN) models. Results The LN distribution function best describes the distribution of radioactivity among cell populations exposed to 0.52 and 3.8 kBq/mL 210Po-citrate. When cells were exposed to 67 kBq/mL, the P – LN distribution function gave a better fit; however, the underlying activity distribution remained log normal. Conclusions The present analysis generally provides further support for the use of LN distributions to describe the cellular uptake of radioactivity. Care should be exercised when analyzing autoradiographic data on activity distributions to ensure that Poisson processes do not distort the underlying LN distribution. PMID:16741316

  13. A numerical investigation into the ability of the Poisson PDE to extract the mass-density from land-based gravity data: A case study of salt diapirs in the north coast of the Persian Gulf

    NASA Astrophysics Data System (ADS)

    AllahTavakoli, Yahya; Safari, Abdolreza

    2017-08-01

    This paper presents a numerical investigation into the capability of Poisson's partial differential equation (PDE) at the Earth's surface to extract the near-surface mass-density from land-based gravity data. For this purpose, it first focuses on approximating the gradient tensor of the Earth's gravitational potential by means of land-based gravity data. Then, based on the concepts of both the gradient tensor and Poisson's PDE at the Earth's surface, certain formulae are proposed for the mass-density determination. Furthermore, this paper shows how the generalized Tikhonov regularization strategy can be used to enhance the efficiency of the proposed approach. Finally, in a real case study, the formulae are applied to 6350 gravity stations located within a part of the north coast of the Persian Gulf. The case study numerically indicates that the proposed formulae, provided by Poisson's PDE, have the ability to convert land-based gravity data into a terrain mass-density, which was used to depict areas of salt diapirs in the region of the case study.
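
    A toy finite-difference illustration (not the paper's formulation) of the underlying relation, div g = -4πGρ, applied to a synthetic gridded gravity field; the grid files and spacing are hypothetical.

```python
import numpy as np

G = 6.674e-11                                 # gravitational constant, SI units
dx = 100.0                                    # grid spacing in metres (assumed)

# hypothetical gridded gravity components gx, gy, gz stored as 3D arrays
gx, gy, gz = (np.load(f"g{c}.npy") for c in "xyz")

# divergence of g by central differences; Poisson's equation gives rho = -div g / (4*pi*G)
div_g = (np.gradient(gx, dx, axis=0)
         + np.gradient(gy, dx, axis=1)
         + np.gradient(gz, dx, axis=2))
rho = -div_g / (4.0 * np.pi * G)

print("estimated density range:", rho.min(), rho.max())
```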

  14. Auxetic textiles.

    PubMed

    Rant, Darja; Rijavec, Tatjana; Pavko-Čuden, Alenka

    2013-01-01

    Common materials have Poisson's ratio values ranging from 0.0 to 0.5. Auxetic materials exhibit a negative Poisson's ratio. They expand laterally when stretched longitudinally and contract laterally when compressed. In recent years, the use of textile technology to fabricate auxetic materials has attracted more and more attention. This is reflected in the extent of available research exploring the auxetic potential of various textile structures and the subsequent increase in the number of research papers published. Generally, there are two approaches to producing auxetic textiles. The first includes the use of auxetic fibers to produce an auxetic textile structure, whereas the other utilizes conventional fibers to produce a textile structure with auxetic properties. This review deals with auxetic materials in general and in the specific context of auxetic polymers, auxetic fibers, auxetic textile structures made from conventional fibers, and knitted structures with auxetic potential.

  15. Formulation of the Multi-Hit Model With a Non-Poisson Distribution of Hits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vassiliev, Oleg N., E-mail: Oleg.Vassiliev@albertahealthservices.ca

    2012-07-15

    Purpose: We proposed a formulation of the multi-hit single-target model in which the Poisson distribution of hits was replaced by a combination of two distributions: one for the number of particles entering the target and one for the number of hits a particle entering the target produces. Such an approach reflects the fact that radiation damage is a result of two different random processes: particle emission by a radiation source and interaction of particles with matter inside the target. Methods and Materials: The Poisson distribution is well justified for the first of the two processes. The second distribution depends on how a hit is defined. To test our approach, we assumed that the second distribution was also a Poisson distribution. The two distributions combined resulted in a non-Poisson distribution. We tested the proposed model by comparing it with previously reported data for DNA single- and double-strand breaks induced by protons and electrons, for survival of a range of cell lines, and for the variation of the initial slopes of survival curves with radiation quality for heavy-ion beams. Results: Analysis of cell survival equations for this new model showed that they had realistic properties overall, such as the initial and high-dose slopes of survival curves, the shoulder, and relative biological effectiveness (RBE). In most cases tested, a better fit of survival curves was achieved with the new model than with the linear-quadratic model. The results also suggested that the proposed approach may extend the multi-hit model beyond its traditional role in analysis of survival curves to predicting effects of radiation quality and analysis of DNA strand breaks. Conclusions: Our model, although conceptually simple, performed well in all tests. The model was able to consistently fit data for both cell survival and DNA single- and double-strand breaks. It correctly predicted the dependence of radiation effects on parameters of radiation quality.
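
    A Monte Carlo illustration (mine, not the authors') of the central idea: in a multi-hit, single-target model the survival probability S = P(hits < n) changes when the Poisson hit distribution is replaced by a compound "Poisson number of particles, Poisson hits per particle" distribution with the same mean; all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_hits_to_kill, mean_hits, hits_per_particle = 3, 2.0, 0.8
n_cells = 200_000

# plain Poisson hits with the given mean
hits_poisson = rng.poisson(mean_hits, n_cells)

# compound model: M ~ Poisson(mean_hits / hits_per_particle) particles enter the target,
# each producing Poisson(hits_per_particle) hits, so the overall mean is unchanged
particles = rng.poisson(mean_hits / hits_per_particle, n_cells)
hits_compound = rng.poisson(hits_per_particle * particles)

print("survival, Poisson hits:  ", np.mean(hits_poisson < n_hits_to_kill))
print("survival, compound hits: ", np.mean(hits_compound < n_hits_to_kill))
```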

  16. MO-G-17A-05: PET Image Deblurring Using Adaptive Dictionary Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valiollahzadeh, S; Clark, J; Mawlawi, O

    2014-06-15

    Purpose: The aim of this work is to deblur PET images while suppressing Poisson noise effects using adaptive dictionary learning (DL) techniques. Methods: The model that relates a blurred and noisy PET image to the desired image is described as a linear transform y = Hm + n, where m is the desired image, H is a blur kernel, n is Poisson noise, and y is the blurred image. The approach we follow to recover m involves the sparse representation of y over a learned dictionary, since the image contains many repeated patterns, edges, textures, and smooth regions. The recovery is based on an optimization of a cost function having four major terms: an adaptive dictionary learning term, a sparsity term, a regularization term, and an MLEM Poisson noise estimation term. The optimization is solved by a variable splitting method that introduces additional variables. We simulated a 128×128 Hoffman brain PET image (baseline) with varying kernel types and sizes (Gaussian 9×9, σ=5.4mm; Uniform 5×5, σ=2.9mm) with additive Poisson noise (Blurred). Image recovery was performed once when the kernel type was included in the model optimization and once with the model blinded to kernel type. The recovered image was compared to the baseline as well as to another recovery algorithm, PIDSPLIT+ (Setzer et al.), by calculating PSNR (Peak SNR) and normalized average differences in pixel intensities (NADPI) of line profiles across the images. Results: For known kernel types, the PSNR of the Gaussian (Uniform) was 28.73 (25.1) and 25.18 (23.4) for DL and PIDSPLIT+ respectively. For blinded deblurring the PSNRs were 25.32 and 22.86 for DL and PIDSPLIT+ respectively. NADPI between baseline and DL, and between baseline and blurred, for the Gaussian kernel was 2.5 and 10.8 respectively. Conclusion: PET image deblurring using dictionary learning seems to be a good approach to restore image resolution in the presence of Poisson noise. GE Health Care.

  17. A network thermodynamic method for numerical solution of the Nernst-Planck and Poisson equation system with application to ionic transport through membranes.

    PubMed

    Horno, J; González-Caballero, F; González-Fernández, C F

    1990-01-01

    Simple techniques of network thermodynamics are used to obtain the numerical solution of the Nernst-Planck and Poisson equation system. A network model for a particular physical situation, namely ionic transport through a thin membrane with simultaneous diffusion, convection and electric current, is proposed. Concentration and electric field profiles across the membrane, as well as diffusion potential, have been simulated using the electric circuit simulation program, SPICE. The method is quite general and extremely efficient, permitting treatments of multi-ion systems whatever the boundary and experimental conditions may be.

  18. An exterior Poisson solver using fast direct methods and boundary integral equations with applications to nonlinear potential flow

    NASA Technical Reports Server (NTRS)

    Young, D. P.; Woo, A. C.; Bussoletti, J. E.; Johnson, F. T.

    1986-01-01

    A general method is developed combining fast direct methods and boundary integral equation methods to solve Poisson's equation on irregular exterior regions. The method requires O(N log N) operations where N is the number of grid points. Error estimates are given that hold for regions with corners and other boundary irregularities. Computational results are given in the context of computational aerodynamics for a two-dimensional lifting airfoil. Solutions of boundary integral equations for lifting and nonlifting aerodynamic configurations using preconditioned conjugate gradient are examined for varying degrees of thinness.

  19. Minimizing the stochasticity of halos in large-scale structure surveys

    NASA Astrophysics Data System (ADS)

    Hamaus, Nico; Seljak, Uroš; Desjacques, Vincent; Smith, Robert E.; Baldauf, Tobias

    2010-08-01

    In recent work (Seljak, Hamaus, and Desjacques 2009) it was found that weighting central halo galaxies by halo mass can significantly suppress their stochasticity relative to the dark matter, well below the Poisson model expectation. This is useful for constraining relations between galaxies and the dark matter, such as the galaxy bias, especially in situations where sampling variance errors can be eliminated. In this paper we extend this study with the goal of finding the optimal mass-dependent halo weighting. We use N-body simulations to perform a general analysis of halo stochasticity and its dependence on halo mass. We investigate the stochasticity matrix, defined as Cij≡⟨(δi-biδm)(δj-bjδm)⟩, where δm is the dark matter overdensity in Fourier space, δi the halo overdensity of the i-th halo mass bin, and bi the corresponding halo bias. In contrast to the Poisson model predictions we detect nonvanishing correlations between different mass bins. We also find the diagonal terms to be sub-Poissonian for the highest-mass halos. The diagonalization of this matrix results in one large and one low eigenvalue, with the remaining eigenvalues close to the Poisson prediction 1/n¯, where n¯ is the mean halo number density. The eigenmode with the lowest eigenvalue contains most of the information and the corresponding eigenvector provides an optimal weighting function to minimize the stochasticity between halos and dark matter. We find this optimal weighting function to match linear mass weighting at high masses, while at the low-mass end the weights approach a constant whose value depends on the low-mass cut in the halo mass function. This weighting further suppresses the stochasticity as compared to the previously explored mass weighting. Finally, we employ the halo model to derive the stochasticity matrix and the scale-dependent bias from an analytical perspective. It is remarkably successful in reproducing our numerical results and predicts that the stochasticity between halos and the dark matter can be reduced further when going to halo masses lower than we can resolve in current simulations.

  20. Risk assessment for cardiovascular and respiratory mortality due to air pollution and synoptic meteorology in 10 Canadian cities.

    PubMed

    Vanos, Jennifer K; Hebbern, Christopher; Cakmak, Sabit

    2014-02-01

    Synoptic weather and ambient air quality synergistically influence human health. We report the relative risk of mortality from all non-accidental, respiratory-, and cardiovascular-related causes, associated with exposure to four air pollutants, by weather type and season, in 10 major Canadian cities for 1981 through 1999. We conducted this multi-city time-series study using Poisson generalized linear models stratified by season and each of six distinctive synoptic weather types. Statistically significant relationships of mortality due to short-term exposure to carbon monoxide, nitrogen dioxide, sulphur dioxide, and ozone were found, with significant modifications of risk by weather type, season, and mortality cause. In total, 61% of the respiratory-related mortality relative risk estimates were significantly higher than for cardiovascular-related mortality. The combined effect of weather and air pollution is greatest when tropical-type weather is present in the spring or summer. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  1. Nonlinear decoding of a complex movie from the mammalian retina

    PubMed Central

    Deny, Stéphane; Martius, Georg

    2018-01-01

    The retina is a paradigmatic system for studying sensory encoding: the transformation of light into the spiking activity of ganglion cells. The inverse problem, where the stimulus is reconstructed from spikes, has received less attention, especially for complex stimuli that should be reconstructed "pixel-by-pixel". We recorded around a hundred neurons from a dense patch in a rat retina and decoded movies of multiple small randomly-moving discs. We constructed nonlinear (kernelized and neural network) decoders that improved significantly over linear results. An important contribution to this was the ability of nonlinear decoders to reliably separate between neural responses driven by locally fluctuating light signals, and responses at locally constant light driven by spontaneous-like activity. This improvement crucially depended on the precise, non-Poisson temporal structure of individual spike trains, which originated in the spike-history dependence of neural responses. We propose a general principle by which downstream circuitry could discriminate between spontaneous and stimulus-driven activity based solely on higher-order statistical structure in the incoming spike trains. PMID:29746463

  2. The Constitutive Modeling of Thin Films with Random Material Wrinkles

    NASA Technical Reports Server (NTRS)

    Murphey, Thomas W.; Mikulas, Martin M.

    2001-01-01

    Material wrinkles drastically alter the structural constitutive properties of thin films. Normally linear elastic materials, when wrinkled, become highly nonlinear and initially inelastic. Stiffness reductions of 99% and negative Poisson's ratios are typically observed. This paper presents an effective continuum constitutive model for the elastic effects of material wrinkles in thin films. The model considers general two-dimensional stress and strain states (simultaneous bi-axial and shear stress/strain) and neglects out-of-plane bending. The constitutive model is derived from a traditional mechanics analysis of an idealized physical model of random material wrinkles. Model parameters are the directly measurable wrinkle characteristics of amplitude and wavelength. For these reasons, the equations are mechanistic and deterministic. The model is compared with bi-axial tensile test data for wrinkled Kapton(Registered Trademark) HN and is shown to deterministically predict strain as a function of stress with an average RMS error of 22%. On average, fitting the model to test data yields an RMS error of 1.2%.

  3. Testing healthy immigrant effects among late life immigrants in the United States: using multiple indicators.

    PubMed

    Choi, Sunha H

    2012-04-01

    This study tested a healthy immigrant effect (HIE) and postimmigration health status changes among late life immigrants. Using three waves of the Second Longitudinal Study of Aging (1994-2000) and the linked mortality file through 2006, this study compared (a) chronic health conditions, (b) longitudinal trajectories of self-rated health, (c) longitudinal trajectories of functional impairments, and (d) mortality between three groups (age 70+): (i) late life immigrants with less than 15 years in the United States (n = 133), (ii) longer term immigrants (n = 672), and (iii) U.S.-born individuals (n = 8,642). Logistic and Poisson regression, hierarchical generalized linear modeling, and survival analyses were conducted. Late life immigrants were less likely to suffer from cancer, had lower numbers of chronic conditions at baseline, and displayed lower hazards of mortality during the 12-year follow-up. However, their self-rated health and functional status were worse than those of their counterparts over time. A HIE was only partially supported among older adults.

  4. Outcomes of a Pilot Hand Hygiene Randomized Cluster Trial to Reduce Communicable Infections Among US Office-Based Employees

    PubMed Central

    DuBois, Cathy L.Z.; Grey, Scott F.; Kingsbury, Diana M.; Shakya, Sunita; Scofield, Jennifer; Slenkovich, Ken

    2015-01-01

    Objective: To determine the effectiveness of an office-based multimodal hand hygiene improvement intervention in reducing self-reported communicable infections and work-related absence. Methods: A randomized cluster trial including an electronic training video, hand sanitizer, and educational posters (n = 131, intervention; n = 193, control). Primary outcomes include (1) self-reported acute respiratory infections (ARIs)/influenza-like illness (ILI) and/or gastrointestinal (GI) infections during the prior 30 days; and (2) related lost work days. Incidence rate ratios calculated using generalized linear mixed models with a Poisson distribution, adjusted for confounders and random cluster effects. Results: A 31% relative reduction in self-reported combined ARI-ILI/GI infections (incidence rate ratio: 0.69; 95% confidence interval, 0.49 to 0.98). A 21% nonsignificant relative reduction in lost work days. Conclusions: An office-based multimodal hand hygiene improvement intervention demonstrated a substantive reduction in self-reported combined ARI-ILI/GI infections. PMID:25719534

  5. Potential impacts of climate variability on respiratory morbidity in children, infants, and adults.

    PubMed

    Souza, Amaury de; Fernandes, Widinei Alves; Pavão, Hamilton Germano; Lastoria, Giancarlo; Albrez, Edilce do Amaral

    2012-01-01

    To determine whether climate variability influences the number of hospitalizations for respiratory diseases in infants, children, and adults in the city of Campo Grande, Brazil. We used daily data on admissions for respiratory diseases, precipitation, air temperature, humidity, and wind speed for the 2004-2008 period. We calculated the thermal comfort index, effective temperature, and effective temperature with wind speed (wind-chill or heat index) using the meteorological data obtained. Generalized linear models, with Poisson multiple regression, were used in order to predict hospitalizations for respiratory disease. The variables studied were (collectively) found to show relatively high correlation coefficients in relation to hospital admission for pneumonia in children (R² = 68.4%), infants (R² = 71.8%), and adults (R² = 81.8%). Our results indicate a quantitative risk for an increase in the number of hospitalizations of children, infants, and adults, according to the increase or decrease in temperature, humidity, precipitation, wind speed, and thermal comfort index in the city under study.

  6. Dissipative N-point-vortex Models in the Plane

    NASA Astrophysics Data System (ADS)

    Shashikanth, Banavara N.

    2010-02-01

    A method is presented for constructing point vortex models in the plane that dissipate the Hamiltonian function at any prescribed rate and yet conserve the level sets of the invariants of the Hamiltonian model arising from the SE(2) symmetries. The method is purely geometric in that it uses the level sets of the Hamiltonian and the invariants to construct the dissipative field and is based on elementary classical geometry in ℝ³. Extension to higher-dimensional spaces, such as the point vortex phase space, is done using exterior algebra. The method is in fact general enough to apply to any smooth finite-dimensional system with conserved quantities, and, for certain special cases, the dissipative vector field constructed can be associated with an appropriately defined double Nambu-Poisson bracket. The most interesting feature of this method is that it allows for an infinite sequence of such dissipative vector fields to be constructed by repeated application of a symmetric linear operator (matrix) at each point of the intersection of the level sets.

  7. Spatial Distribution of Large Cloud Drops

    NASA Technical Reports Server (NTRS)

    Marshak, A.; Knyazikhin, Y.; Larsen, M.; Wiscombe, W.

    2004-01-01

    By analyzing aircraft measurements of individual drop sizes in clouds, we have shown in a companion paper (Knyazikhin et al., 2004) that the probability of finding a drop of radius r at a linear scale l decreases as l^D(r), where 0 ≤ D(r) ≤ 1. This paper shows striking examples of the spatial distribution of large cloud drops using models that simulate the observed power laws. In contrast to currently used models that assume homogeneity and therefore a Poisson distribution of cloud drops, these models show strong drop clustering, the more so the larger the drops. The degree of clustering is determined by the observed exponents D(r). The strong clustering of large drops arises naturally from the observed power-law statistics. This clustering has vital consequences for rain physics, explaining how rain can form so fast. It also helps explain why remotely sensed cloud drop size is generally biased and why clouds absorb more sunlight than conventional radiative transfer models predict.

  8. Theory and simulations of covariance mapping in multiple dimensions for data analysis in high-event-rate experiments

    NASA Astrophysics Data System (ADS)

    Zhaunerchyk, V.; Frasinski, L. J.; Eland, J. H. D.; Feifel, R.

    2014-05-01

    Multidimensional covariance analysis and its validity for correlation of processes leading to multiple products are investigated from a theoretical point of view. The need to correct for false correlations induced by experimental parameters which fluctuate from shot to shot, such as the intensity of self-amplified spontaneous emission x-ray free-electron laser pulses, is emphasized. Threefold covariance analysis based on simple extension of the two-variable formulation is shown to be valid for variables exhibiting Poisson statistics. In this case, false correlations arising from fluctuations in an unstable experimental parameter that scale linearly with signals can be eliminated by threefold partial covariance analysis, as defined here. Fourfold covariance based on the same simple extension is found to be invalid in general. Where fluctuations in an unstable parameter induce nonlinear signal variations, a technique of contingent covariance analysis is proposed here to suppress false correlations. In this paper we also show a method to eliminate false correlations associated with fluctuations of several unstable experimental parameters.
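
    The correction for false correlations induced by a fluctuating machine parameter can be illustrated with the ordinary two-variable partial covariance, the building block that the paper extends to three- and four-fold maps; in the sketch below (Python/NumPy) the "pulse intensity" and the two Poisson signals that scale linearly with it are simulated toys, not experimental data.

```python
# Sketch of plain vs. partial covariance: two signals that both scale with a
# fluctuating intensity I look correlated, but the partial covariance removes
# the false correlation. All numbers are fabricated for illustration.
import numpy as np

rng = np.random.default_rng(2)
n_shots = 20000
I = rng.gamma(5.0, 1.0, n_shots)           # fluctuating pulse intensity (toy)
X = rng.poisson(0.5 * I)                   # two signals that both scale with I
Y = rng.poisson(0.8 * I)                   # but are otherwise uncorrelated

def cov(a, b):
    return np.mean(a * b) - np.mean(a) * np.mean(b)

plain = cov(X, Y)                                        # contaminated by I
partial = cov(X, Y) - cov(X, I) * cov(Y, I) / cov(I, I)  # false correlation removed
print(f"covariance {plain:.3f} vs partial covariance {partial:.3f}")
```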

  9. Firm profitability and the network of organizational capabilities

    NASA Astrophysics Data System (ADS)

    Wagner, Friedrich; Milaković, Mishael; Alfarano, Simone

    2010-11-01

    A Laplace distribution for firm profit rates (or returns on assets) can be obtained through the sum of many independent shocks if the number of shocks is Poisson distributed. Interpreting this as a linear chain of events, we generalize the process to a hierarchical network structure. The hierarchical model reproduces the observed distributional patterns of firm profitability, which crucially depend on the life span of firms. While the profit rates of long-lived firms obey a symmetric Laplacian, short-lived firms display a different behavior depending on whether they are capable of generating positive profits or not. Successful short-lived firms exhibit a symmetric yet more leptokurtic pdf than long-lived firms. Our model suggests that these firms are more dynamic in their organizational capabilities, but on average also face more risk than long-lived firms. Finally, short-lived firms that fail to generate positive profits have the most leptokurtic distribution among the three classes, and on average lose slightly more than their total assets within a year.
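
    A crude numerical illustration of the compounding mechanism mentioned above (not the paper's hierarchical network construction): drawing a Poisson-distributed number of independent shocks per firm yields a leptokurtic, heavier-than-Gaussian aggregate. The shock scale and Poisson mean below are arbitrary choices.

```python
# Toy check: sum of N independent shocks with N ~ Poisson gives excess kurtosis
# of roughly 3/lambda, i.e. heavier tails than a Gaussian. Illustration only.
import numpy as np

rng = np.random.default_rng(3)
n_firms, lam = 200000, 2.0
n_shocks = rng.poisson(lam, n_firms)

# Conditional on N, the sum of N standard-normal shocks is Normal(0, N),
# so it can be drawn as sqrt(N) times a single standard normal.
profit = rng.normal(size=n_firms) * np.sqrt(n_shocks)

excess_kurtosis = np.mean((profit - profit.mean()) ** 4) / profit.var() ** 2 - 3
print("excess kurtosis:", excess_kurtosis)   # positive: heavier tails than Gaussian
```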

  10. Perturbation theory for cosmologies with nonlinear structure

    NASA Astrophysics Data System (ADS)

    Goldberg, Sophia R.; Gallagher, Christopher S.; Clifton, Timothy

    2017-11-01

    The next generation of cosmological surveys will operate over unprecedented scales, and will therefore provide exciting new opportunities for testing general relativity. The standard method for modelling the structures that these surveys will observe is to use cosmological perturbation theory for linear structures on horizon-sized scales, and Newtonian gravity for nonlinear structures on much smaller scales. We propose a two-parameter formalism that generalizes this approach, thereby allowing interactions between large and small scales to be studied in a self-consistent and well-defined way. This uses both post-Newtonian gravity and cosmological perturbation theory, and can be used to model realistic cosmological scenarios including matter, radiation and a cosmological constant. We find that the resulting field equations can be written as a hierarchical set of perturbation equations. At leading-order, these equations allow us to recover a standard set of Friedmann equations, as well as a Newton-Poisson equation for the inhomogeneous part of the Newtonian energy density in an expanding background. For the perturbations in the large-scale cosmology, however, we find that the field equations are sourced by both nonlinear and mode-mixing terms, due to the existence of small-scale structures. These extra terms should be expected to give rise to new gravitational effects, through the mixing of gravitational modes on small and large scales—effects that are beyond the scope of standard linear cosmological perturbation theory. We expect our formalism to be useful for accurately modeling gravitational physics in universes that contain nonlinear structures, and for investigating the effects of nonlinear gravity in the era of ultra-large-scale surveys.

  11. Characterizing the effect of summer temperature on heatstroke-related emergency ambulance dispatches in the Kanto area of Japan

    NASA Astrophysics Data System (ADS)

    Ng, Chris Fook Sheng; Ueda, Kayo; Ono, Masaji; Nitta, Hiroshi; Takami, Akinori

    2014-07-01

    Despite rising concern on the impact of heat on human health, the risk of high summer temperature on heatstroke-related emergency dispatches is not well understood in Japan. A time-series study was conducted to examine the association between apparent temperature and daily heatstroke-related ambulance dispatches (HSAD) within the Kanto area of Japan. A total of 12,907 HSAD occurring from 2000 to 2009 in five major cities—Saitama, Chiba, Tokyo, Kawasaki, and Yokohama—were analyzed. Generalized additive models and zero-inflated Poisson regressions were used to estimate the effects of daily maximum three-hour apparent temperature (AT) on dispatch frequency from May to September, with adjustment for seasonality, long-term trend, weekends, and public holidays. Linear and non-linear exposure effects were considered. Effects on days when AT first exceeded its summer median were also investigated. City-specific estimates were combined using random effects meta-analyses. Exposure-response relationship was found to be fairly linear. Significant risk increase began from 21 °C with a combined relative risk (RR) of 1.22 (95 % confidence interval, 1.03-1.44), increasing to 1.49 (1.42-1.57) at peak AT. When linear exposure was assumed, combined RR was 1.43 (1.37-1.50) per degree Celsius increment. Overall association was significant the first few times when median AT was initially exceeded in a particular warm season. More than two-thirds of these initial hot days were in June, implying the harmful effect of initial warming as the season changed. Risk increase that began early at the fairly mild perceived temperature implies the need for early precaution.
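
    A hedged sketch of the zero-inflated Poisson piece of such an analysis is given below (Python/statsmodels); the apparent-temperature series and dispatch counts are simulated, and the smooth seasonal terms of the generalized additive models and the meta-analytic pooling used in the study are omitted.

```python
# Minimal zero-inflated Poisson fit for daily counts vs. apparent temperature;
# all data and coefficients are fabricated toys, not the study's values.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(4)
n_days = 600
at = rng.uniform(10, 38, n_days)              # toy apparent temperature (deg C)
lam = np.exp(-4.0 + 0.18 * at)                # assumed log-linear effect on counts
y = rng.poisson(lam)
y[rng.random(n_days) < 0.3] = 0               # extra structural zeros (no-dispatch days)

X = sm.add_constant(at)                       # count model: intercept + temperature
fit = ZeroInflatedPoisson(y, X, exog_infl=np.ones((n_days, 1)),
                          inflation="logit").fit(disp=0)
print(fit.summary())
# The last parameter is the temperature slope of the count component
print("relative risk per +1 deg C:", np.exp(fit.params[-1]))
```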

  12. Characterizing the effect of summer temperature on heatstroke-related emergency ambulance dispatches in the Kanto area of Japan.

    PubMed

    Ng, Chris Fook Sheng; Ueda, Kayo; Ono, Masaji; Nitta, Hiroshi; Takami, Akinori

    2014-07-01

    Despite rising concern on the impact of heat on human health, the risk of high summer temperature on heatstroke-related emergency dispatches is not well understood in Japan. A time-series study was conducted to examine the association between apparent temperature and daily heatstroke-related ambulance dispatches (HSAD) within the Kanto area of Japan. A total of 12,907 HSAD occurring from 2000 to 2009 in five major cities-Saitama, Chiba, Tokyo, Kawasaki, and Yokohama-were analyzed. Generalized additive models and zero-inflated Poisson regressions were used to estimate the effects of daily maximum three-hour apparent temperature (AT) on dispatch frequency from May to September, with adjustment for seasonality, long-term trend, weekends, and public holidays. Linear and non-linear exposure effects were considered. Effects on days when AT first exceeded its summer median were also investigated. City-specific estimates were combined using random effects meta-analyses. Exposure-response relationship was found to be fairly linear. Significant risk increase began from 21 °C with a combined relative risk (RR) of 1.22 (95% confidence interval, 1.03-1.44), increasing to 1.49 (1.42-1.57) at peak AT. When linear exposure was assumed, combined RR was 1.43 (1.37-1.50) per degree Celsius increment. Overall association was significant the first few times when median AT was initially exceeded in a particular warm season. More than two-thirds of these initial hot days were in June, implying the harmful effect of initial warming as the season changed. Risk increase that began early at the fairly mild perceived temperature implies the need for early precaution.

  13. Quantifying biological samples using Linear Poisson Independent Component Analysis for MALDI-ToF mass spectra

    PubMed Central

    Deepaisarn, S; Tar, P D; Thacker, N A; Seepujak, A; McMahon, A W

    2018-01-01

    Motivation: Matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI) facilitates the analysis of large organic molecules. However, the complexity of biological samples and MALDI data acquisition leads to high levels of variation, making reliable quantification of samples difficult. We present a new analysis approach that we believe is well-suited to the properties of MALDI mass spectra, based upon an Independent Component Analysis derived for Poisson sampled data. Simple analyses have been limited to studying small numbers of mass peaks, via peak ratios, which is known to be inefficient. Conventional PCA and ICA methods have also been applied, which extract correlations between any number of peaks, but which we argue make inappropriate assumptions regarding data noise, i.e. uniform and Gaussian. Results: We provide evidence that the Gaussian assumption is incorrect, motivating the need for our Poisson approach. The method is demonstrated by making proportion measurements from lipid-rich binary mixtures of lamb brain and liver, and also goat and cow milk. These allow our measurements and error predictions to be compared to ground truth. Availability and implementation: Software is available via the open source image analysis system TINA Vision, www.tina-vision.net. Contact: paul.tar@manchester.ac.uk. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:29091994

  14. Development of a fractional-step method for the unsteady incompressible Navier-Stokes equations in generalized coordinate systems

    NASA Technical Reports Server (NTRS)

    Rosenfeld, Moshe; Kwak, Dochan; Vinokur, Marcel

    1992-01-01

    A fractional step method is developed for solving the time-dependent three-dimensional incompressible Navier-Stokes equations in generalized coordinate systems. The primitive variable formulation uses the pressure, defined at the center of the computational cell, and the volume fluxes across the faces of the cells as the dependent variables, instead of the Cartesian components of the velocity. This choice is equivalent to using the contravariant velocity components in a staggered grid multiplied by the volume of the computational cell. The governing equations are discretized by finite volumes using a staggered mesh system. The solution of the continuity equation is decoupled from the momentum equations by a fractional step method which enforces mass conservation by solving a Poisson equation. This procedure, combined with the consistent approximations of the geometric quantities, is done to satisfy the discretized mass conservation equation to machine accuracy, as well as to gain the favorable convergence properties of the Poisson solver. The momentum equations are solved by an approximate factorization method, and a novel ZEBRA scheme with four-color ordering is devised for the efficient solution of the Poisson equation. Several two- and three-dimensional laminar test cases are computed and compared with other numerical and experimental results to validate the solution method. Good agreement is obtained in all cases.

  15. Hydrodynamic limit of Wigner-Poisson kinetic theory: Revisited

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akbari-Moghanjoughi, M.; International Centre for Advanced Studies in Physical Sciences and Institute for Theoretical Physics, Ruhr University Bochum, D-44780 Bochum

    2015-02-15

    In this paper, we revisit the hydrodynamic limit of the Langmuir wave dispersion relation based on the Wigner-Poisson model in connection with that obtained directly from the original Lindhard dielectric function based on the random-phase approximation. It is observed that the (fourth-order) expansion of the exact Lindhard dielectric constant correctly reduces to the hydrodynamic dispersion relation with an additional term of fourth order, besides that caused by the quantum diffraction effect. It is also revealed that the generalized Lindhard dielectric theory accounts for the recently discovered Shukla-Eliasson attractive potential (SEAP). However, the expansion of the exact Lindhard static dielectric function leads to a k^4 term of different magnitude than that obtained from the linearized quantum hydrodynamics model. It is shown that a correction factor of 1/9 should be included in the term arising from the quantum Bohm potential of the momentum balance equation in the fluid model in order for a correct plasma dielectric response treatment. Finally, it is observed that the long-range oscillatory screening potential (Friedel oscillations) of type cos(2k_F r)/r^3, which is a consequence of the divergence of the dielectric function at the point k = 2k_F in a quantum plasma, arises due to the finiteness of the Fermi wavenumber and is smeared out in the limit of very high electron number densities, typical of white dwarfs and neutron stars. In the very low electron number-density regime, typical of semiconductors and metals, where the Friedel oscillation wavelength becomes much larger compared to the interparticle distances, the SEAP appears with a much deeper potential valley. It is remarked that the fourth-order approximate Lindhard dielectric constant approaches that of the linearized quantum hydrodynamics in the limit of very high electron number density. By evaluation of the imaginary part of the Lindhard dielectric function, it is shown that the Landau-damping region in the ω-k plane increases dramatically with increasing electron number density.

  16. Temperature dependence of elastic and strength properties of T300/5208 graphite-epoxy

    NASA Technical Reports Server (NTRS)

    Milkovich, S. M.; Herakovich, C. T.

    1984-01-01

    Experimental results are presented for the elastic and strength properties of T300/5208 graphite-epoxy at room temperature, 116K (-250 F), and 394K (+250 F). Results are presented for unidirectional 0, 90, and 45 degree laminates, and ±30, ±45, and ±60 degree angle-ply laminates. The stress-strain behavior of the 0 and 90 degree laminates is essentially linear at all three temperatures, and the stress-strain behavior of all other laminates is linear at 116K. A second-order curve provides the best fit for the temperature dependence of the elastic modulus of all laminates and for the principal shear modulus. Poisson's ratio appears to vary linearly with temperature. All moduli decrease with increasing temperature except for E_1, which exhibits a small increase. The temperature dependence of strength is also quadratic for all laminates except the 0 degree laminate, which exhibits linear temperature dependence. In many cases the temperature dependence of properties is nearly linear.

  17. Electrokinetics Models for Micro and Nano Fluidic Impedance Sensors

    DTIC Science & Technology

    2010-11-01

    primitive Differential-Algebraic Equations (DAEs), used to process and interpret the experimentally measured electrical impedance data (Sun and Morgan...field, and species respectively. A second-order scheme was used to calculate the ionic species distribution. The linearized algebraic equations were...is governed by the Poisson equation ε_0 ε_r ∇²φ + Σ_i F z_i c_i = 0, where ε_0 and ε_r are, respectively, the electrical permittivity in the vacuum

  18. Fourier analysis of the SOR iteration

    NASA Technical Reports Server (NTRS)

    Leveque, R. J.; Trefethen, L. N.

    1986-01-01

    The SOR iteration for solving linear systems of equations depends upon an overrelaxation factor omega. It is shown that for the standard model problem of Poisson's equation on a rectangle, the optimal omega and corresponding convergence rate can be rigorously obtained by Fourier analysis. The trick is to tilt the space-time grid so that the SOR stencil becomes symmetrical. The tilted grid also gives insight into the relation between convergence rates of several variants.
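
    The result is easy to reproduce numerically. The sketch below (Python) applies SOR with the classical optimal relaxation factor omega = 2/(1 + sin(pi*h)) to the five-point discretization of Poisson's equation on the unit square; the grid size, source term, and tolerance are arbitrary illustrative choices.

```python
# SOR for the 2-D Poisson model problem -laplacian(u) = f on the unit square
# with zero Dirichlet boundary values, using the classical optimal omega.
import numpy as np

n = 33                        # grid points per side, h = 1/(n-1)
h = 1.0 / (n - 1)
omega = 2.0 / (1.0 + np.sin(np.pi * h))

u = np.zeros((n, n))          # solution with zero boundary values
f = np.ones((n, n))           # constant source term

for sweep in range(500):
    max_change = 0.0
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            gs = 0.25 * (u[i + 1, j] + u[i - 1, j] + u[i, j + 1] + u[i, j - 1]
                         + h * h * f[i, j])            # Gauss-Seidel update
            new = (1 - omega) * u[i, j] + omega * gs   # over-relaxation
            max_change = max(max_change, abs(new - u[i, j]))
            u[i, j] = new
    if max_change < 1e-8:
        print(f"converged in {sweep + 1} sweeps with omega = {omega:.4f}")
        break
```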

  19. Control of Structure in Turbulent Flows: Bifurcating and Blooming Jets.

    DTIC Science & Technology

    1987-10-10

    injected through computational boundaries. (2) to satisfy no-slip boundary conditions or (3) during grid refinement when one element may be split...use of fast Poisson solvers on a mesh of M grid points, the operation count for this step can approach O(M log M). Additional required steps are (1...consider three-dimensional perturbations to the Stuart vortices. The linear stability calculations of Pierrehumbert & Widnall [10] are available for

  20. The rotational motion of an earth orbiting gyroscope according to the Einstein theory of general relativity

    NASA Technical Reports Server (NTRS)

    Hoots, F. R.; Fitzpatrick, P. M.

    1979-01-01

    The classical Poisson equations of rotational motion are used to study the attitude motions of an earth orbiting, rapidly spinning gyroscope perturbed by the effects of general relativity (Einstein theory). The center of mass of the gyroscope is assumed to move about a rotating oblate earth in an evolving elliptic orbit which includes all first-order oblateness effects produced by the earth. A method of averaging is used to obtain a transformation of variables, for the nonresonance case, which significantly simplifies the Poisson differential equations of motion of the gyroscope. Long-term solutions are obtained by an exact analytical integration of the simplified transformed equations. These solutions may be used to predict both the orientation of the gyroscope and the motion of its rotational angular momentum vector as viewed from its center of mass. The results are valid for all eccentricities and all inclinations not near the critical inclination.

  1. Stochastic foundations of undulatory transport phenomena: generalized Poisson-Kac processes—part III extensions and applications to kinetic theory and transport

    NASA Astrophysics Data System (ADS)

    Giona, Massimiliano; Brasiello, Antonio; Crescitelli, Silvestro

    2017-08-01

    This third part extends the theory of Generalized Poisson-Kac (GPK) processes to nonlinear stochastic models and to a continuum of states. Nonlinearity is treated in two ways: (i) as a dependence of the parameters (intensity of the stochastic velocity, transition rates) of the stochastic perturbation on the state variable, similarly to the case of nonlinear Langevin equations, and (ii) as the dependence of the stochastic microdynamic equations of motion on the statistical description of the process itself (nonlinear Fokker-Planck-Kac models). Several numerical and physical examples illustrate the theory. Combining nonlinearity and a continuum of states, GPK theory provides a stochastic derivation of the nonlinear Boltzmann equation, furnishing a positive answer to Kac's program in kinetic theory. The transition from stochastic microdynamics to transport theory within the framework of the GPK paradigm is also addressed.

  2. Ionization effects and linear stability in a coaxial plasma device

    NASA Astrophysics Data System (ADS)

    Kurt, Erol; Kurt, Hilal; Bayhan, Ulku

    2009-03-01

    A 2-D computer simulation of a coaxial plasma device depending on the conservation equations of electrons, ions and excited atoms together with the Poisson equation for a plasma gun is carried out. Some characteristics of the plasma focus device (PF), such as critical wave numbers a_c and voltages U_c in the cases of various pressures P, are estimated in order to satisfy the necessary conditions of traveling particle densities (i.e. plasma patterns) via a linear analysis. Oscillatory solutions are characterized by a nonzero imaginary part of the growth rate Im(σ) for all cases. The model also predicts the minimal voltage ranges of the system for certain pressure intervals.

  3. Unobtrusive Detection of Mild Cognitive Impairment in Older Adults Through Home Monitoring*

    PubMed Central

    Akl, Ahmad; Snoek, Jasper; Mihailidis, Alex

    2016-01-01

    The early detection of dementias such as Alzheimer’s disease can in some cases reverse, stop or slow cognitive decline and in general greatly reduce the burden of care. This is of increasing significance as demographic studies are warning of an aging population in North America and worldwide. Various smart homes and systems have been developed to detect cognitive decline through continuous monitoring of high risk individuals. However, the majority of these smart homes and systems use a number of predefined heuristics to detect changes in cognition, which has been demonstrated to focus on the idiosyncratic nuances of the individual subjects and thus does not generalize. In this paper, we address this problem by building generalized linear models of home activity of subjects monitored using unobtrusive sensing technologies. We use inhomogeneous Poisson processes to model the presence of subjects within different rooms throughout the day. We employ an information theoretic approach to compare the activity distributions learned, and we observe significant statistical differences between the cognitively intact and impaired subjects. Using a simple thresholding approach, we were able to detect mild cognitive impairment in older adults with an average area under the ROC curve of 0.716 and an average area under the precision-recall curve of 0.706 using distributions estimated over time windows of 12 weeks. PMID:26841424
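
    The modelling idea, an inhomogeneous Poisson process with a piecewise-constant daily rate per room compared across subjects with an information-theoretic measure, can be sketched as follows (Python/NumPy); the timestamps, exposure window, and the symmetrised Kullback-Leibler comparison are illustrative assumptions rather than the paper's exact pipeline.

```python
# Sketch with fabricated timestamps: hourly-rate estimates for an inhomogeneous
# Poisson process and a symmetrised KL comparison of two activity profiles.
import numpy as np

rng = np.random.default_rng(5)

def hourly_rates(event_hours, n_days):
    """Maximum-likelihood rate per hour-of-day for a piecewise-constant
    inhomogeneous Poisson process: events in each bin divided by exposure (days)."""
    counts, _ = np.histogram(event_hours, bins=np.arange(25))
    return counts / n_days

# Toy data: subject A active mid-day, subject B more active at night
a = rng.normal(13, 3, 400) % 24
b = rng.normal(2, 4, 400) % 24
ra, rb = hourly_rates(a, 30) + 1e-9, hourly_rates(b, 30) + 1e-9

pa, pb = ra / ra.sum(), rb / rb.sum()
sym_kl = 0.5 * (np.sum(pa * np.log(pa / pb)) + np.sum(pb * np.log(pb / pa)))
print("symmetrised KL divergence between activity profiles:", sym_kl)
```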

  4. Zero-truncated negative binomial - Erlang distribution

    NASA Astrophysics Data System (ADS)

    Bodhisuwan, Winai; Pudprommarat, Chookait; Bodhisuwan, Rujira; Saothayanun, Luckhana

    2017-11-01

    The zero-truncated negative binomial-Erlang distribution is introduced. It is developed from the negative binomial-Erlang distribution. In this work, the probability mass function is derived and some properties are included. The parameters of the zero-truncated negative binomial-Erlang distribution are estimated by using maximum likelihood estimation. Finally, the proposed distribution is applied to real methamphetamine count data from Bangkok, Thailand. Based on the results, it shows that the zero-truncated negative binomial-Erlang distribution provides a better fit than the zero-truncated Poisson, zero-truncated negative binomial, zero-truncated generalized negative binomial and zero-truncated Poisson-Lindley distributions for these data.
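
    As an illustration of the simplest baseline compared against above, here is a minimal maximum-likelihood fit of a zero-truncated Poisson distribution (Python/SciPy) on simulated counts; the proposed negative binomial-Erlang mixture itself is not implemented here.

```python
# Zero-truncated Poisson MLE: P(Y=k | Y>=1) = exp(-lam) lam^k / (k! (1 - exp(-lam))).
# The count data are simulated toys.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

rng = np.random.default_rng(6)
full = rng.poisson(1.8, 2000)
y = full[full > 0]                       # observed counts are truncated at zero

def neg_loglik(lam):
    # log P(Y=k | Y>=1) = k log lam - lam - log(k!) - log(1 - exp(-lam))
    return -np.sum(y * np.log(lam) - lam - gammaln(y + 1)
                   - np.log(1.0 - np.exp(-lam)))

res = minimize_scalar(neg_loglik, bounds=(1e-6, 20.0), method="bounded")
print("zero-truncated Poisson MLE of lambda:", res.x)
```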

  5. A Legendre–Fourier spectral method with exact conservation laws for the Vlasov–Poisson system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manzini, Gianmarco; Delzanno, Gian Luca; Vencels, Juris

    In this study, we present the design and implementation of an L2-stable spectral method for the discretization of the Vlasov–Poisson model of a collisionless plasma in one space and velocity dimension. The velocity and space dependence of the Vlasov equation are resolved through a truncated spectral expansion based on Legendre and Fourier basis functions, respectively. The Poisson equation, which is coupled to the Vlasov equation, is also resolved through a Fourier expansion. The resulting system of ordinary differential equations is discretized by the implicit second-order accurate Crank–Nicolson time discretization. The non-linear dependence between the Vlasov and Poisson equations is iteratively solved at any time cycle by a Jacobian-Free Newton–Krylov method. In this work we analyze the structure of the main conservation laws of the resulting Legendre–Fourier model, e.g., mass, momentum, and energy, and prove that they are exactly satisfied in the semi-discrete and discrete setting. The L2-stability of the method is ensured by discretizing the boundary conditions of the distribution function at the boundaries of the velocity domain by a suitable penalty term. The impact of the penalty term on the conservation properties is investigated theoretically and numerically. An implementation of the penalty term that does not affect the conservation of mass, momentum and energy is also proposed and studied. A collisional term is introduced in the discrete model to control the filamentation effect, but does not affect the conservation properties of the system. Numerical results on a set of standard test problems illustrate the performance of the method.

  6. A Legendre–Fourier spectral method with exact conservation laws for the Vlasov–Poisson system

    DOE PAGES

    Manzini, Gianmarco; Delzanno, Gian Luca; Vencels, Juris; ...

    2016-04-22

    In this study, we present the design and implementation of an L2-stable spectral method for the discretization of the Vlasov–Poisson model of a collisionless plasma in one space and velocity dimension. The velocity and space dependence of the Vlasov equation are resolved through a truncated spectral expansion based on Legendre and Fourier basis functions, respectively. The Poisson equation, which is coupled to the Vlasov equation, is also resolved through a Fourier expansion. The resulting system of ordinary differential equations is discretized by the implicit second-order accurate Crank–Nicolson time discretization. The non-linear dependence between the Vlasov and Poisson equations is iteratively solved at any time cycle by a Jacobian-Free Newton–Krylov method. In this work we analyze the structure of the main conservation laws of the resulting Legendre–Fourier model, e.g., mass, momentum, and energy, and prove that they are exactly satisfied in the semi-discrete and discrete setting. The L2-stability of the method is ensured by discretizing the boundary conditions of the distribution function at the boundaries of the velocity domain by a suitable penalty term. The impact of the penalty term on the conservation properties is investigated theoretically and numerically. An implementation of the penalty term that does not affect the conservation of mass, momentum and energy is also proposed and studied. A collisional term is introduced in the discrete model to control the filamentation effect, but does not affect the conservation properties of the system. Numerical results on a set of standard test problems illustrate the performance of the method.

  7. Differential expression analysis for RNAseq using Poisson mixed models

    PubMed Central

    Sun, Shiquan; Hood, Michelle; Scott, Laura; Peng, Qinke; Mukherjee, Sayan; Tung, Jenny

    2017-01-01

    Identifying differentially expressed (DE) genes from RNA sequencing (RNAseq) studies is among the most common analyses in genomics. However, RNAseq DE analysis presents several statistical and computational challenges, including over-dispersed read counts and, in some settings, sample non-independence. Previous count-based methods rely on simple hierarchical Poisson models (e.g. negative binomial) to model independent over-dispersion, but do not account for sample non-independence due to relatedness, population structure and/or hidden confounders. Here, we present a Poisson mixed model with two random effects terms that account for both independent over-dispersion and sample non-independence. We also develop a scalable sampling-based inference algorithm using a latent variable representation of the Poisson distribution. With simulations, we show that our method properly controls for type I error and is generally more powerful than other widely used approaches, except in small samples (n < 15) with other unfavorable properties (e.g. small effect sizes). We also apply our method to three real datasets that contain related individuals, population stratification or hidden confounders. Our results show that our method increases power in all three datasets compared to other approaches, though the power gain is smallest in the smallest sample (n = 6). Our method is implemented in MACAU, freely available at www.xzlab.org/software.html. PMID:28369632
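
    A toy illustration of why the random-effect term matters (not the MACAU algorithm itself): adding a log-normal random effect to a Poisson rate produces counts whose variance exceeds their mean, which is the over-dispersion the mixed model is designed to absorb. The rate and random-effect standard deviation below are arbitrary.

```python
# Over-dispersion from a Poisson-lognormal mixture vs. a pure Poisson;
# all parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(7)
n, mu, sigma = 50000, 10.0, 0.5

pure = rng.poisson(mu, n)
random_effect = rng.normal(0.0, sigma, n)                 # per-sample log-normal effect
mixed = rng.poisson(mu * np.exp(random_effect - sigma ** 2 / 2))

print("pure Poisson      mean/var:", pure.mean(), pure.var())
print("Poisson-lognormal mean/var:", mixed.mean(), mixed.var())  # variance >> mean
```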

  8. Casimir meets Poisson: improved quark/gluon discrimination with counting observables

    DOE PAGES

    Frye, Christopher; Larkoski, Andrew J.; Thaler, Jesse; ...

    2017-09-19

    Charged track multiplicity is among the most powerful observables for discriminating quark- from gluon-initiated jets. Despite its utility, it is not infrared and collinear (IRC) safe, so perturbative calculations are limited to studying the energy evolution of multiplicity moments. While IRC-safe observables, like jet mass, are perturbatively calculable, their distributions often exhibit Casimir scaling, such that their quark/gluon discrimination power is limited by the ratio of quark to gluon color factors. In this paper, we introduce new IRC-safe counting observables whose discrimination performance exceeds that of jet mass and approaches that of track multiplicity. The key observation is that track multiplicity is approximately Poisson distributed, with more suppressed tails than the Sudakov peak structure from jet mass. By using an iterated version of the soft drop jet grooming algorithm, we can define a “soft drop multiplicity” which is Poisson distributed at leading-logarithmic accuracy. In addition, we calculate the next-to-leading-logarithmic corrections to this Poisson structure. If we allow the soft drop groomer to proceed to the end of the jet branching history, we can define a collinear-unsafe (but still infrared-safe) counting observable. Exploiting the universality of the collinear limit, we define generalized fragmentation functions to study the perturbative energy evolution of collinear-unsafe multiplicity.

  9. Casimir meets Poisson: improved quark/gluon discrimination with counting observables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frye, Christopher; Larkoski, Andrew J.; Thaler, Jesse

    Charged track multiplicity is among the most powerful observables for discriminating quark- from gluon-initiated jets. Despite its utility, it is not infrared and collinear (IRC) safe, so perturbative calculations are limited to studying the energy evolution of multiplicity moments. While IRC-safe observables, like jet mass, are perturbatively calculable, their distributions often exhibit Casimir scaling, such that their quark/gluon discrimination power is limited by the ratio of quark to gluon color factors. In this paper, we introduce new IRC-safe counting observables whose discrimination performance exceeds that of jet mass and approaches that of track multiplicity. The key observation is that track multiplicity is approximately Poisson distributed, with more suppressed tails than the Sudakov peak structure from jet mass. By using an iterated version of the soft drop jet grooming algorithm, we can define a “soft drop multiplicity” which is Poisson distributed at leading-logarithmic accuracy. In addition, we calculate the next-to-leading-logarithmic corrections to this Poisson structure. If we allow the soft drop groomer to proceed to the end of the jet branching history, we can define a collinear-unsafe (but still infrared-safe) counting observable. Exploiting the universality of the collinear limit, we define generalized fragmentation functions to study the perturbative energy evolution of collinear-unsafe multiplicity.

  10. Fractional Poisson-Nernst-Planck Model for Ion Channels I: Basic Formulations and Algorithms.

    PubMed

    Chen, Duan

    2017-11-01

    In this work, we propose a fractional Poisson-Nernst-Planck model to describe ion permeation in gated ion channels. Due to the intrinsic conformational changes, crowdedness in narrow channel pores, binding and trapping introduced by functioning units of channel proteins, ionic transport in the channel exhibits a power-law-like anomalous diffusion dynamics. We start from continuous-time random walk model for a single ion and use a long-tailed density distribution function for the particle jump waiting time, to derive the fractional Fokker-Planck equation. Then, it is generalized to the macroscopic fractional Poisson-Nernst-Planck model for ionic concentrations. Necessary computational algorithms are designed to implement numerical simulations for the proposed model, and the dynamics of gating current is investigated. Numerical simulations show that the fractional PNP model provides a more qualitatively reasonable match to the profile of gating currents from experimental observations. Meanwhile, the proposed model motivates new challenges in terms of mathematical modeling and computations.

  11. Linear Mechanisms and Pressure Fluctuations in Wall Turbulence

    NASA Astrophysics Data System (ADS)

    Septham, Kamthon; Morrison, Jonathan

    2014-11-01

    Full-domain, linear feedback control of turbulent channel flow at Reτ <= 400 via vU' at low wavenumbers is an effective method for attenuating the flow such that it is relaminarised. The passivity-based control approach is adopted and explained by the conservative characteristics of the nonlinear terms contributing to the Reynolds-Orr equation (Sharma et al., Phys. Fluids, 2011). The linear forcing acts on the wall-normal velocity field and thus the pressure field via the linear (rapid) source term of the Poisson equation for pressure fluctuations, 2U'∂v/∂x. The minimum required spanwise wavelength resolution without losing control is constant at λz+ = 125, based on the wall friction velocity at t = 0. The result shows that the maximum forcing is located at y+ ~ 20, corresponding to the location of the maximum in the mean-square pressure gradient. The effectiveness of linear control is qualitatively explained by Landahl's theory for timescales, in that the control proceeds via the shear interaction timescale, which is much shorter than both the nonlinear and viscous timescales. The response of the rapid (linear) and slow (nonlinear) pressure fluctuations to the linear control is examined and discussed.

  12. Hamiltonian description and quantization of dissipative systems

    NASA Astrophysics Data System (ADS)

    Enz, Charles P.

    1994-09-01

    Dissipative systems are described by a Hamiltonian, combined with a “dynamical matrix” which generalizes the symplectic form of the equations of motion. Criteria for dissipation are given and the examples of a particle with friction and of the Lotka-Volterra model are presented. Quantization is first introduced by translating generalized Poisson brackets into commutators and anticommutators. Then a generalized Schrödinger equation expressed by a dynamical matrix is constructed and discussed.

  13. Beyond the spectral theorem: Spectrally decomposing arbitrary functions of nondiagonalizable operators

    NASA Astrophysics Data System (ADS)

    Riechers, Paul M.; Crutchfield, James P.

    2018-06-01

    Nonlinearities in finite dimensions can be linearized by projecting them into infinite dimensions. Unfortunately, the familiar linear operator techniques that one would then hope to use often fail since the operators cannot be diagonalized. The curse of nondiagonalizability also plays an important role even in finite-dimensional linear operators, leading to analytical impediments that occur across many scientific domains. We show how to circumvent it via two tracks. First, using the well-known holomorphic functional calculus, we develop new practical results about spectral projection operators and the relationship between left and right generalized eigenvectors. Second, we generalize the holomorphic calculus to a meromorphic functional calculus that can decompose arbitrary functions of nondiagonalizable linear operators in terms of their eigenvalues and projection operators. This simultaneously simplifies and generalizes functional calculus so that it is readily applicable to analyzing complex physical systems. Together, these results extend the spectral theorem of normal operators to a much wider class, including circumstances in which poles and zeros of the function coincide with the operator spectrum. By allowing the direct manipulation of individual eigenspaces of nonnormal and nondiagonalizable operators, the new theory avoids spurious divergences. As such, it yields novel insights and closed-form expressions across several areas of physics in which nondiagonalizable dynamics arise, including memoryful stochastic processes, open nonunitary quantum systems, and far-from-equilibrium thermodynamics. The technical contributions include the first full treatment of arbitrary powers of an operator, highlighting the special role of the zero eigenvalue. Furthermore, we show that the Drazin inverse, previously only defined axiomatically, can be derived as the negative-one power of singular operators within the meromorphic functional calculus and we give a new general method to construct it. We provide new formulae for constructing spectral projection operators and delineate the relations among projection operators, eigenvectors, and left and right generalized eigenvectors. By way of illustrating its application, we explore several, rather distinct examples. First, we analyze stochastic transition operators in discrete and continuous time. Second, we show that nondiagonalizability can be a robust feature of a stochastic process, induced even by simple counting. As a result, we directly derive distributions of the time-dependent Poisson process and point out that nondiagonalizability is intrinsic to it and the broad class of hidden semi-Markov processes. Third, we show that the Drazin inverse arises naturally in stochastic thermodynamics and that applying the meromorphic functional calculus provides closed-form solutions for the dynamics of key thermodynamic observables. Finally, we draw connections to the Ruelle-Frobenius-Perron and Koopman operators for chaotic dynamical systems and propose how to extract eigenvalues from a time-series.

  14. General Theorems about Homogeneous Ellipsoidal Inclusions

    ERIC Educational Resources Information Center

    Korringa, J.; And Others

    1978-01-01

    Mathematical theorems about the properties of ellipsoids are developed. Included are Poisson's theorem concerning the magnetization of a homogeneous body of ellipsoidal shape, the polarization of a dielectric, the transport of heat or electricity through an ellipsoid, and other problems. (BB)

  15. Hamiltonian structure of the Lotka-Volterra equations

    NASA Astrophysics Data System (ADS)

    Nutku, Y.

    1990-03-01

    The Lotka-Volterra equations governing predator-prey relations are shown to admit Hamiltonian structure with respect to a generalized Poisson bracket. These equations provide an example of a system for which the naive criterion for the existence of Hamiltonian structure fails. We show further that there is a three-component generalization of the Lotka-Volterra equations which is a bi-Hamiltonian system.
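
    The Hamiltonian structure implies a conserved quantity along trajectories of the planar system dx/dt = x(a - b y), dy/dt = y(-c + d x), namely H = d x - c ln x + b y - a ln y. The short check below (Python/SciPy, with arbitrary parameter values) integrates the system and verifies that H drifts only at the level of the integrator tolerance.

```python
# Numerical check of the conserved quantity of the planar Lotka-Volterra system.
import numpy as np
from scipy.integrate import solve_ivp

a, b, c, d = 1.0, 0.5, 0.8, 0.4   # arbitrary positive parameters

def rhs(t, z):
    x, y = z
    return [x * (a - b * y), y * (-c + d * x)]

def H(x, y):
    # Conserved along solutions: dH/dt = 0 for the system above
    return d * x - c * np.log(x) + b * y - a * np.log(y)

sol = solve_ivp(rhs, (0.0, 50.0), [2.0, 1.0], rtol=1e-10, atol=1e-12)
h = H(sol.y[0], sol.y[1])
print("max drift of the conserved quantity:", np.max(np.abs(h - h[0])))
```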

  16. Zeroth Poisson Homology, Foliated Cohomology and Perfect Poisson Manifolds

    NASA Astrophysics Data System (ADS)

    Martínez-Torres, David; Miranda, Eva

    2018-01-01

    We prove that, for compact regular Poisson manifolds, the zeroth homology group is isomorphic to the top foliated cohomology group, and we give some applications. In particular, we show that, for regular unimodular Poisson manifolds, top Poisson and foliated cohomology groups are isomorphic. Inspired by the symplectic setting, we define what a perfect Poisson manifold is. We use these Poisson homology computations to provide families of perfect Poisson manifolds.

  17. Fitting monthly Peninsula Malaysian rainfall using Tweedie distribution

    NASA Astrophysics Data System (ADS)

    Yunus, R. M.; Hasan, M. M.; Zubairi, Y. Z.

    2017-09-01

    In this study, the Tweedie distribution was used to fit monthly rainfall data from 24 monitoring stations of Peninsula Malaysia for the period from January 2008 to April 2015. The aim of the study is to determine whether the distributions within the Tweedie family fit the monthly Malaysian rainfall data well. Within the Tweedie family, the gamma distribution is generally used for fitting rainfall totals; however, the Poisson-gamma distribution is more useful for describing two important features of rainfall pattern, the occurrences (dry months) and the amount (wet months). First, the appropriate distribution of the monthly rainfall was identified within the Tweedie family for each station. Then, the Tweedie Generalised Linear Model (GLM) with no explanatory variable was used to model the monthly rainfall data. Graphical representation was used to assess model appropriateness. The QQ plots of quantile residuals show that the Tweedie models fit the monthly rainfall data better for the majority of the stations in the west coast and midland than for those in the east coast of the Peninsula. This significant finding suggests that the best fitted distribution depends on the geographical location of the monitoring station. In this paper, a simple model is developed for generating synthetic rainfall data for use in various areas, including agriculture and irrigation. We have shown that the data simulated using the Tweedie distribution have a fairly similar frequency histogram to that of the actual data. Both the mean number of rainfall events and the mean amount of rain for a month were estimated simultaneously in the case where the Poisson-gamma distribution fits the data reasonably well. Thus, this work complements previous studies that fit the rainfall amount and the occurrence of rainfall events separately, each to a different distribution.
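
    A hedged sketch of an intercept-only Tweedie GLM of the type described above is given below (Python/statsmodels); the monthly totals are simulated from a toy Poisson-gamma (compound Poisson) mechanism, and the variance power of 1.5 is an assumed value rather than one estimated from the Malaysian data.

```python
# Intercept-only Tweedie GLM for monthly rainfall totals that include exact
# zeros for dry months; all data and parameters are simulated toys.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n_months = 120
n_events = rng.poisson(3.0, n_months)                 # rain events per month
rain = np.array([rng.gamma(2.0, 20.0, k).sum() for k in n_events])  # Poisson-gamma totals

X = np.ones((n_months, 1))                            # "no explanatory variable" model
fit = sm.GLM(rain, X, family=sm.families.Tweedie(var_power=1.5)).fit()
print(fit.summary())
# Default Tweedie link is log, so the fitted mean is exp(intercept)
print("fitted mean monthly rainfall:", np.exp(fit.params[0]))
```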

  18. No control genes required: Bayesian analysis of qRT-PCR data.

    PubMed

    Matz, Mikhail V; Wright, Rachel M; Scott, James G

    2013-01-01

    Model-based analysis of data from quantitative reverse-transcription PCR (qRT-PCR) is potentially more powerful and versatile than traditional methods. Yet existing model-based approaches cannot properly deal with the higher sampling variances associated with low-abundant targets, nor do they provide a natural way to incorporate assumptions about the stability of control genes directly into the model-fitting process. In our method, raw qPCR data are represented as molecule counts, and described using generalized linear mixed models under Poisson-lognormal error. A Markov Chain Monte Carlo (MCMC) algorithm is used to sample from the joint posterior distribution over all model parameters, thereby estimating the effects of all experimental factors on the expression of every gene. The Poisson-based model allows for the correct specification of the mean-variance relationship of the PCR amplification process, and can also glean information from instances of no amplification (zero counts). Our method is very flexible with respect to control genes: any prior knowledge about the expected degree of their stability can be directly incorporated into the model. Yet the method provides sensible answers without such assumptions, or even in the complete absence of control genes. We also present a natural Bayesian analogue of the "classic" analysis, which uses standard data pre-processing steps (logarithmic transformation and multi-gene normalization) but estimates all gene expression changes jointly within a single model. The new methods are considerably more flexible and powerful than the standard delta-delta Ct analysis based on pairwise t-tests. Our methodology expands the applicability of the relative-quantification analysis protocol all the way to the lowest-abundance targets, and provides a novel opportunity to analyze qRT-PCR data without making any assumptions concerning target stability. These procedures have been implemented as the MCMC.qpcr package in R.

  19. Integrability and Poisson Structures of Three Dimensional Dynamical Systems and Equations of Hydrodynamic Type

    NASA Astrophysics Data System (ADS)

    Gumral, Hasan

    Poisson structure of completely integrable 3 dimensional dynamical systems can be defined in terms of an integrable 1-form. We take advantage of this fact and use the theory of foliations in discussing the geometrical structure underlying complete and partial integrability. We show that the Halphen system can be formulated in terms of a flat SL(2,R)-valued connection and belongs to a non-trivial Godbillon-Vey class. On the other hand, for the Euler top and a special case of 3-species Lotka-Volterra equations which are contained in the Halphen system as limiting cases, this structure degenerates into the form of globally integrable bi-Hamiltonian structures. The globally integrable bi-Hamiltonian case is a linear and the sl_2 structure is a quadratic unfolding of an integrable 1-form in 3 + 1 dimensions. We complete the discussion of the Hamiltonian structure of 2-component equations of hydrodynamic type by presenting the Hamiltonian operators for Euler's equation and a continuum limit of Toda lattice. We present further infinite sequences of conserved quantities for shallow water equations and show that their generalizations by Kodama admit bi-Hamiltonian structure. We present a simple way of constructing the second Hamiltonian operators for N-component equations admitting some scaling properties. The Kodama reduction of the dispersionless-Boussinesq equations and the Lax reduction of the Benney moment equations are shown to be equivalent by a symmetry transformation. They can be cast into the form of a triplet of conservation laws which enable us to recognize a non-trivial scaling symmetry. The resulting bi-Hamiltonian structure generates three infinite sequences of conserved densities.

  20. A Focused Fundamental Study of Predicting Materials Degradation & Fatigue. Volume 1

    DTIC Science & Technology

    1997-05-31

    physical properties are: bulk modulus, shear strength, coefficient of friction, modulus of elasticity/rigidity and Poisson’s ratio. Each of these physical...acting on a subsurface crack when abrasive motion occurs on the surface using linear elastic fracture mechanics theory. Both mechanisms involve a...The body of the scattering cell was a 4-way Swagelok (Crawford Fitting Co., Solon, OH) connector with a 1.5 mm hole drilled in the top for

  1. A minimally-resolved immersed boundary model for reaction-diffusion problems

    NASA Astrophysics Data System (ADS)

    Pal Singh Bhalla, Amneet; Griffith, Boyce E.; Patankar, Neelesh A.; Donev, Aleksandar

    2013-12-01

    We develop an immersed boundary approach to modeling reaction-diffusion processes in dispersions of reactive spherical particles, from the diffusion-limited to the reaction-limited setting. We represent each reactive particle with a minimally-resolved "blob" using many fewer degrees of freedom per particle than standard discretization approaches. More complicated or more highly resolved particle shapes can be built out of a collection of reactive blobs. We demonstrate numerically that the blob model can provide an accurate representation at low to moderate packing densities of the reactive particles, at a cost not much larger than solving a Poisson equation in the same domain. Unlike multipole expansion methods, our method does not require analytically computed Green's functions, but rather, computes regularized discrete Green's functions on the fly by using a standard grid-based discretization of the Poisson equation. This allows for great flexibility in implementing different boundary conditions, coupling to fluid flow or thermal transport, and the inclusion of other effects such as temporal evolution and even nonlinearities. We develop multigrid-based preconditioners for solving the linear systems that arise when using implicit temporal discretizations or studying steady states. In the diffusion-limited case the resulting linear system is a saddle-point problem, the efficient solution of which remains a challenge for suspensions of many particles. We validate our method by comparing to published results on reaction-diffusion in ordered and disordered suspensions of reactive spheres.

  2. Handling nonnormality and variance heterogeneity for quantitative sublethal toxicity tests.

    PubMed

    Ritz, Christian; Van der Vliet, Leana

    2009-09-01

    The advantages of using regression-based techniques to derive endpoints from environmental toxicity data are clear, and slowly, this superior analytical technique is gaining acceptance. As use of regression-based analysis becomes more widespread, some of the associated nuances and potential problems come into sharper focus. Looking at data sets that cover a broad spectrum of standard test species, we noticed that some model fits to data failed to meet two key assumptions-variance homogeneity and normality-that are necessary for correct statistical analysis via regression-based techniques. Failure to meet these assumptions often is caused by reduced variance at the concentrations showing severe adverse effects. Although commonly used with linear regression analysis, transformation of the response variable only is not appropriate when fitting data using nonlinear regression techniques. Through analysis of sample data sets, including Lemna minor, Eisenia andrei (terrestrial earthworm), and algae, we show that both the so-called Box-Cox transformation and use of the Poisson distribution can help to correct variance heterogeneity and nonnormality and so allow nonlinear regression analysis to be implemented. Both the Box-Cox transformation and the Poisson distribution can be readily implemented into existing protocols for statistical analysis. By correcting for nonnormality and variance heterogeneity, these two statistical tools can be used to encourage the transition to regression-based analysis and the depreciation of less-desirable and less-flexible analytical techniques, such as linear interpolation.
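
    Both remedies mentioned above are available in standard libraries. The short sketch below applies the Box-Cox transformation (Python/SciPy) to simulated concentration-response data whose spread shrinks with the mean, the situation described in the abstract; the concentrations, response curve, and error model are illustrative assumptions.

```python
# Box-Cox variance stabilisation on toy concentration-response data whose
# standard deviation is proportional to the mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
conc = np.repeat([1.0, 3.0, 10.0, 30.0, 100.0], 20)
mean = 100.0 / (1.0 + (conc / 10.0) ** 1.5)            # logistic-type decline
resp = rng.normal(mean, 0.15 * mean)                   # spread shrinks with the mean
resp = np.clip(resp, 1e-3, None)                       # Box-Cox requires positive data

transformed, lam = stats.boxcox(resp)
print("estimated Box-Cox lambda:", lam)
# Group variances are far more homogeneous on the transformed scale
for c in np.unique(conc):
    print(c, resp[conc == c].var().round(2), transformed[conc == c].var().round(3))
```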

  3. Models for forecasting the flowering of Cornicabra olive groves.

    PubMed

    Rojo, Jesús; Pérez-Badia, Rosa

    2015-11-01

    This study examined the impact of weather-related variables on flowering phenology in the Cornicabra olive tree and constructed models based on linear and Poisson regression to forecast the onset and length of the pre-flowering and flowering phenophases. Spain is the world's leading olive oil producer, and the Cornicabra variety is the second largest Spanish variety in terms of surface area. However, there has been little phenological research into this variety. Phenological observations were made over a 5-year period (2009-2013) at four sampling sites in the province of Toledo (central Spain). Results showed that the onset of the pre-flowering phase is governed largely by temperature, displaying a positive correlation with temperature at the start of dormancy (November) and a negative correlation with temperature during the months prior to budburst (January, February and March). A similar relationship was recorded for the onset of flowering. Other weather-related variables, including solar radiation and rainfall, also influenced the succession of olive flowering phenophases. Linear models proved the most suitable for forecasting the onset and length of the pre-flowering period and the onset of flowering. The onset and length of pre-flowering can be predicted up to 1 or 2 months prior to budburst, whilst the onset of flowering can be forecast up to 3 months beforehand. By contrast, a nonlinear model using Poisson regression was best suited to predicting the length of the flowering period.

  4. Study of non-linear deformation of vocal folds in simulations of human phonation

    NASA Astrophysics Data System (ADS)

    Saurabh, Shakti; Bodony, Daniel

    2014-11-01

    Direct numerical simulation is performed on a two-dimensional compressible, viscous fluid interacting with a non-linear, viscoelastic solid as a model for the generation of the human voice. The vocal fold (VF) tissues are modeled as multi-layered with varying stiffness in each layer and using a finite-strain Standard Linear Solid (SLS) constitutive model implemented in a quadratic finite element code and coupled to a high-order compressible Navier-Stokes solver through a boundary-fitted fluid-solid interface. The large non-linear mesh deformation is handled using an elliptic/Poisson smoothing technique. The supra-glottal flow exhibits asymmetry, which in turn couples back to the motion of the VF. The fully compressible simulations give direct insight into the sound produced as pressure distributions, and the vocal fold deformation helps in studying the unsteady vortical flow resulting from the fluid-structure interaction along the full phonation cycle. Supported by the National Science Foundation (CAREER Award Number 1150439).

  5. Fully decoupled monolithic projection method for natural convection problems

    NASA Astrophysics Data System (ADS)

    Pan, Xiaomin; Kim, Kyoungyoun; Lee, Changhoon; Choi, Jung-Il

    2017-04-01

    To solve time-dependent natural convection problems, we propose a fully decoupled monolithic projection method. The proposed method applies the Crank-Nicolson scheme in time and the second-order central finite difference in space. To obtain a non-iterative monolithic method from the fully discretized nonlinear system, we first adopt linearizations of the nonlinear convection terms and the general buoyancy term, incurring only second-order errors in time. Approximate block lower-upper decompositions, along with an approximate factorization technique, are additionally applied to the global linearly coupled system, which leads to several decoupled subsystems, i.e., a fully decoupled monolithic procedure. We establish global error estimates to verify the second-order temporal accuracy of the proposed method for velocity, pressure, and temperature in terms of a discrete l2-norm. Moreover, according to the energy evolution, the proposed method is proved to be stable if the time step is less than or equal to a constant. In addition, we provide numerical simulations of two-dimensional Rayleigh-Bénard convection and periodic forced flow. The results demonstrate that the proposed method significantly mitigates the time step limitation, reduces the computational cost because only one Poisson equation is required to be solved, and preserves the second-order temporal accuracy for velocity, pressure, and temperature. Finally, the proposed method reasonably predicts a three-dimensional Rayleigh-Bénard convection for different Rayleigh numbers.

  6. Electrophoresis of a polarizable charged colloid with hydrophobic surface: A numerical study

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Somnath; Majee, Partha Sarathi

    2017-04-01

    We consider the electrophoresis of a charged colloid for a generalized situation in which the particle is considered to be polarizable and the surface exhibits hydrophobicity. The dielectric polarization of the particle creates a nonlinear dependence of the electrophoretic velocity on the applied electric field, and the core hydrophobicity amplifies the fluid convection in the Debye layer. Thus, a linear analysis is no longer applicable for this situation. The present analysis is based on the numerical solution of the nonlinear electrokinetic equations based on the Navier-Stokes-Nernst-Planck-Poisson equations coupled with the Laplace equation for the electric field within the dielectric particle. The hydrophobicity of the particle may influence its electric polarization by enhancing the convective transport of ions. The nonlinear effects, such as double-layer polarization and relaxation, are also influenced by the hydrophobicity of the particle surface. The present results compare well for a lower range of the applied electric field and surface charge density with the existing results for a perfectly dielectric particle with a hydrophobic surface based on the first-order perturbation analysis due to Khair and Squires [Phys. Fluids 21, 042001 (2009), 10.1063/1.3116664]. Dielectric polarization creates a reduction in particle electrophoretic velocity, and its impact is strong for a moderate range of Debye length. A quantitative measure of the nonlinear effects is demonstrated by comparing the electrophoretic velocity with an existing linear model.

  7. Theory of linear sweep voltammetry with diffuse charge: Unsupported electrolytes, thin films, and leaky membranes

    NASA Astrophysics Data System (ADS)

    Yan, David; Bazant, Martin Z.; Biesheuvel, P. M.; Pugh, Mary C.; Dawson, Francis P.

    2017-03-01

    Linear sweep and cyclic voltammetry techniques are important tools for electrochemists and have a variety of applications in engineering. Voltammetry has classically been treated with the Randles-Sevcik equation, which assumes an electroneutral supported electrolyte. In this paper, we provide a comprehensive mathematical theory of voltammetry in electrochemical cells with unsupported electrolytes and for other situations where diffuse charge effects play a role, and present analytical and simulated solutions of the time-dependent Poisson-Nernst-Planck equations with generalized Frumkin-Butler-Volmer boundary conditions for a 1:1 electrolyte and a simple reaction. Using these solutions, we construct theoretical and simulated current-voltage curves for liquid and solid thin films, membranes with fixed background charge, and cells with blocking electrodes. The full range of dimensionless parameters is considered, including the dimensionless Debye screening length (scaled to the electrode separation), Damkohler number (ratio of characteristic diffusion and reaction times), and dimensionless sweep rate (scaled to the thermal voltage per diffusion time). The analysis focuses on the coupling of Faradaic reactions and diffuse charge dynamics, although capacitive charging of the electrical double layers is also studied, for early time transients at reactive electrodes and for nonreactive blocking electrodes. Our work highlights cases where diffuse charge effects are important in the context of voltammetry, and illustrates which regimes can be approximated using simple analytical expressions and which require more careful consideration.
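
    The abstract contrasts its diffuse-charge theory with the classical Randles-Sevcik treatment. As a point of reference, below is a short worked computation of the classical Randles-Sevcik peak current for a reversible couple in a supported electrolyte at 25 °C; all numerical inputs are illustrative assumptions, not values from the paper.

        import numpy as np
        from scipy import constants

        n = 1                       # electrons transferred
        A = 0.071e-4                # electrode area, m^2 (3 mm diameter disc, assumed)
        C = 1.0                     # bulk concentration, mol/m^3 (= 1 mM, assumed)
        D = 7.0e-10                 # diffusion coefficient, m^2/s (assumed)
        v = 0.1                     # sweep rate, V/s (assumed)
        F, R, T = constants.value("Faraday constant"), constants.R, 298.15

        # Classical Randles-Sevcik peak current: i_p = 0.4463 n F A C sqrt(n F v D / (R T))
        i_p = 0.4463 * n * F * A * C * np.sqrt(n * F * v * D / (R * T))
        print(f"Randles-Sevcik peak current: {i_p * 1e6:.1f} uA")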

  8. Impact of temperature variation between adjacent days on childhood hand, foot and mouth disease during April and July in urban and rural Hefei, China

    NASA Astrophysics Data System (ADS)

    Cheng, Jian; Zhu, Rui; Xu, Zhiwei; Wu, Jinju; Wang, Xu; Li, Kesheng; Wen, Liying; Yang, Huihui; Su, Hong

    2016-06-01

    Previous studies have found that both high temperature and low temperature increase the risk of childhood hand, foot and mouth disease (HFMD). However, little is known about whether temperature variation between neighboring days has any effects on childhood HFMD. A Poisson generalized linear regression model, combined with a distributed lag non-linear model, was applied to examine the relationship between temperature change and childhood HFMD in Hefei, China, from 1st January 2010 to 31st December 2012. Temperature change was defined as the difference between the current day's mean temperature and the previous day's mean temperature. Late spring and early summer (April-July) were chosen as the main study period because they have the highest childhood HFMD incidence. There was a statistical association between temperature change between neighboring days and childhood HFMD. The effects of temperature change on childhood HFMD increased below a temperature change of 0 °C (temperature drop). Temperature change had the greatest adverse effect on childhood HFMD at a lag of 7 days, with a 4 % (95 % confidence interval 2-7 %) increase per 3 °C drop in temperature. Male children and urban children appeared to be more vulnerable to the effects of temperature change. Temperature change between adjacent days might be an alternative temperature indicator for exploring the temperature-HFMD relationship.

  9. Impact of temperature variation between adjacent days on childhood hand, foot and mouth disease during April and July in urban and rural Hefei, China.

    PubMed

    Cheng, Jian; Zhu, Rui; Xu, Zhiwei; Wu, Jinju; Wang, Xu; Li, Kesheng; Wen, Liying; Yang, Huihui; Su, Hong

    2016-06-01

    Previous studies have found that both high temperature and low temperature increase the risk of childhood hand, foot and mouth disease (HFMD). However, little is known about whether temperature variation between neighboring days has any effects on childhood HFMD. A Poisson generalized linear regression model, combined with a distributed lag non-linear model, was applied to examine the relationship between temperature change and childhood HFMD in Hefei, China, from 1st January 2010 to 31st December 2012. Temperature change was defined as the difference between the current day's mean temperature and the previous day's mean temperature. Late spring and early summer (April-July) were chosen as the main study period because they have the highest childhood HFMD incidence. There was a statistical association between temperature change between neighboring days and childhood HFMD. The effects of temperature change on childhood HFMD increased below a temperature change of 0 °C (temperature drop). Temperature change had the greatest adverse effect on childhood HFMD at a lag of 7 days, with a 4 % (95 % confidence interval 2-7 %) increase per 3 °C drop in temperature. Male children and urban children appeared to be more vulnerable to the effects of temperature change. Temperature change between adjacent days might be an alternative temperature indicator for exploring the temperature-HFMD relationship.
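
    A minimal sketch of the modelling idea in the two records above: a Poisson generalized linear model for daily case counts with lagged temperature-change covariates. It does not reproduce the authors' distributed lag non-linear model or the Hefei data; the series below is simulated and the lag structure is simplified to a few plain lagged terms.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n_days = 365
        temp = 20 + 8 * np.sin(2 * np.pi * np.arange(n_days) / 365) + rng.normal(0, 2, n_days)
        dtemp = np.diff(temp, prepend=temp[0])                          # temperature change vs. previous day
        lam = np.exp(1.5 - 0.05 * pd.Series(dtemp).shift(7).fillna(0))  # assumed: drops raise risk ~1 week later
        cases = rng.poisson(lam.to_numpy())

        df = pd.DataFrame({"cases": cases, "dtemp": dtemp})
        for lag in (1, 3, 7):                                           # discrete lags instead of a DLNM basis
            df[f"dtemp_lag{lag}"] = df["dtemp"].shift(lag)
        df["dow"] = np.arange(n_days) % 7                               # crude day-of-week control
        df = df.dropna()

        X = sm.add_constant(pd.get_dummies(df[["dtemp_lag1", "dtemp_lag3", "dtemp_lag7", "dow"]],
                                           columns=["dow"], drop_first=True).astype(float))
        fit = sm.GLM(df["cases"], X, family=sm.families.Poisson()).fit()
        # exp(coefficient) is the rate ratio per 1 degC day-to-day temperature change at each lag
        print(np.exp(fit.params.filter(like="dtemp")))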

  10. Assessing weather effects on dengue disease in Malaysia.

    PubMed

    Cheong, Yoon Ling; Burkart, Katrin; Leitão, Pedro J; Lakes, Tobia

    2013-11-26

    The number of dengue cases has been increasing on a global level in recent years, and particularly so in Malaysia, yet little is known about the effects of weather for identifying the short-term risk of dengue for the population. The aim of this paper is to estimate the weather effects on dengue disease accounting for non-linear temporal effects in Selangor, Kuala Lumpur and Putrajaya, Malaysia, from 2008 to 2010. We selected the weather parameters with a Poisson generalized additive model, and then assessed the effects of minimum temperature, bi-weekly accumulated rainfall and wind speed on dengue cases using a distributed non-linear lag model while adjusting for trend, day-of-week and week of the year. We found that the relative risk of dengue cases is positively associated with increased minimum temperature at a cumulative percentage change of 11.92% (95% CI: 4.41-32.19), from 25.4 °C to 26.5 °C, with the highest effect delayed by 51 days. Increasing bi-weekly accumulated rainfall had a positively strong effect on dengue cases at a cumulative percentage change of 21.45% (95% CI: 8.96, 51.37), from 215 mm to 302 mm, with the highest effect delayed by 26-28 days. The wind speed is negatively associated with dengue cases. The estimated lagged effects can be adapted in the dengue early warning system to assist in vector control and prevention plan.
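
    A rough stand-in for the Poisson generalized additive model described above: smooth effects of weather variables approximated by B-spline bases inside a Poisson GLM, via the statsmodels formula interface and patsy's bs(). The data are simulated, the variable names are assumptions, and the paper's distributed non-linear lag structure is omitted.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(42)
        weeks = 156
        df = pd.DataFrame({
            "tmin": rng.normal(25.5, 1.0, weeks),          # weekly minimum temperature (degC)
            "rain": rng.gamma(4.0, 60.0, weeks),           # bi-weekly accumulated rainfall (mm)
            "wind": rng.gamma(2.0, 1.0, weeks),            # wind speed (m/s)
            "week": np.arange(weeks),
        })
        mu = np.exp(2.0 + 0.08 * (df.tmin - 25.5) + 0.002 * (df.rain - 240) - 0.10 * df.wind)
        df["cases"] = rng.poisson(mu.to_numpy())

        # B-spline terms stand in for the GAM's smooths; bs(week, ...) is a crude trend control.
        model = smf.glm(
            "cases ~ bs(tmin, df=4) + bs(rain, df=4) + bs(wind, df=4) + bs(week, df=6)",
            data=df, family=sm.families.Poisson())
        fit = model.fit()
        print(fit.summary().tables[0])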

  11. Flexible and structured survival model for a simultaneous estimation of non-linear and non-proportional effects and complex interactions between continuous variables: Performance of this multidimensional penalized spline approach in net survival trend analysis.

    PubMed

    Remontet, Laurent; Uhry, Zoé; Bossard, Nadine; Iwaz, Jean; Belot, Aurélien; Danieli, Coraline; Charvat, Hadrien; Roche, Laurent

    2018-01-01

    Cancer survival trend analyses are essential to describe accurately the way medical practices impact patients' survival according to the year of diagnosis. To this end, survival models should be able to account simultaneously for non-linear and non-proportional effects and for complex interactions between continuous variables. However, in the statistical literature, there is no consensus yet on how to build such models that should be flexible but still provide smooth estimates of survival. In this article, we tackle this challenge by smoothing the complex hypersurface (time since diagnosis, age at diagnosis, year of diagnosis, and mortality hazard) using a multidimensional penalized spline built from the tensor product of the marginal bases of time, age, and year. Considering this penalized survival model as a Poisson model, we assess the performance of this approach in estimating the net survival with a comprehensive simulation study that reflects simple and complex realistic survival trends. The bias was generally small and the root mean squared error was good and often similar to that of the true model that generated the data. This parametric approach offers many advantages and interesting prospects (such as forecasting) that make it an attractive and efficient tool for survival trend analyses.
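
    A minimal sketch of the device mentioned in the abstract, namely treating a survival model as a Poisson model: follow-up time is split into intervals and the event indicator is regressed with log person-time as an offset. The data are simulated, and this is ordinary piecewise-constant hazard regression, not the authors' multidimensional penalized spline.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        n = 500
        age = rng.uniform(40, 80, n)
        t = rng.exponential(1.0 / (0.05 * np.exp(0.03 * (age - 60))), n)   # true hazard rises with age
        cens = rng.uniform(0, 10, n)
        time, event = np.minimum(t, cens), (t <= cens).astype(int)

        cuts = np.array([0, 1, 2, 4, 6, 10])                               # follow-up intervals (years)
        rows = []
        for ti, ei, ai in zip(time, event, age):
            for lo, hi in zip(cuts[:-1], cuts[1:]):
                if ti <= lo:
                    break
                rows.append({"interval": f"[{lo},{hi})", "age": ai,
                             "pt": min(ti, hi) - lo,                       # person-time in this interval
                             "d": int(ei and ti <= hi)})                   # event fell in this interval
        split = pd.DataFrame(rows)

        X = pd.get_dummies(split[["age", "interval"]], columns=["interval"], drop_first=True).astype(float)
        X = sm.add_constant(X)
        fit = sm.GLM(split["d"], X, family=sm.families.Poisson(),
                     offset=np.log(split["pt"])).fit()
        print(np.exp(fit.params["age"]))                                   # hazard ratio per year of age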

  12. A {1,2}-Order Plate Theory Accounting for Three-Dimensional Thermoelastic Deformations in Thick Composite and Sandwich Laminates

    NASA Technical Reports Server (NTRS)

    Tessler, A.; Annett, M. S.; Gendron, G.

    2001-01-01

    A {1,2}-order theory for laminated composite and sandwich plates is extended to include thermoelastic effects. The theory incorporates all three-dimensional strains and stresses. Mixed-field assumptions are introduced which include linear in-plane displacements, parabolic transverse displacement and shear strains, and a cubic distribution of the transverse normal stress. Least squares strain compatibility conditions and exact traction boundary conditions are enforced to yield higher polynomial degree distributions for the transverse shear strains and transverse normal stress through the plate thickness. The principle of virtual work is used to derive a 10th-order system of equilibrium equations and associated Poisson boundary conditions. The predictive capability of the theory is demonstrated using a closed-form analytic solution for a simply-supported rectangular plate subjected to a linearly varying temperature field across the thickness. Several thin and moderately thick laminated composite and sandwich plates are analyzed. Numerical comparisons are made with corresponding solutions of the first-order shear deformation theory and three-dimensional elasticity theory. These results, which closely approximate the three-dimensional elasticity solutions, demonstrate that through-the-thickness deformations, even in relatively thin and especially in thick composite and sandwich laminates, can be significant under severe thermal gradients. The {1,2}-order kinematic assumptions ensure an overall accurate theory that is in general superior and, in some cases, equivalent to the first-order theory.

  13. Ion strength limit of computed excess functions based on the linearized Poisson-Boltzmann equation.

    PubMed

    Fraenkel, Dan

    2015-12-05

    The linearized Poisson-Boltzmann (L-PB) equation is examined for its κ-range of validity (κ, Debye reciprocal length). This is done for the Debye-Hückel (DH) theory, i.e., using a single ion size, and for the SiS treatment (D. Fraenkel, Mol. Phys. 2010, 108, 1435), which extends the DH theory to the case of ion-size dissimilarity (therefore dubbed DH-SiS). The linearization of the PB equation has been claimed responsible for the DH theory's failure to fit with experiment at >0.1 m; but DH-SiS fits with data of the mean ionic activity coefficient, γ± (molal), against m, even at m > 1 (κ > 0.33 Å⁻¹). The SiS expressions combine the overall extra-electrostatic potential energy of the smaller ion, as central ion, Ψa>b(κ), with that of the larger ion, as central ion, Ψb>a(κ); a and b are, respectively, the counterion and co-ion distances of closest approach. Ψa>b and Ψb>a are derived from the L-PB equation, which appears to conflict with their being effective up to moderate electrolyte concentrations (≈1 m). However, the L-PB equation can be valid up to κ ≥ 1.3 Å⁻¹ if one abandons the 1/κ criterion for its effectiveness and, instead, uses as a criterion the mean-field electrostatic interaction potential of the central ion with its ion cloud, at a radial distance dividing the cloud charge into two equal parts. The DH theory's failure is, thus, not because of using the L-PB equation; the lethal approximation is assigning a single size to the positive and negative ions. © 2015 Wiley Periodicals, Inc.
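
    The validity discussion above is framed in terms of κ, the Debye reciprocal length. Below is a short worked computation of κ for a 1:1 aqueous electrolyte, together with the screened-Coulomb potential that the linearized PB (Debye-Hückel) equation yields around a central ion; the relative permittivity and temperature are assumed values, not taken from the paper.

        import numpy as np
        from scipy import constants as k

        def kappa(conc_molar, eps_r=78.5, T=298.15):
            """Debye reciprocal length (1/m) for a 1:1 electrolyte at molar concentration."""
            n = conc_molar * 1000 * k.N_A                     # number density of each ion species, 1/m^3
            return np.sqrt(2 * n * k.e**2 / (eps_r * k.epsilon_0 * k.k * T))

        for c in (0.1, 1.0):
            kap = kappa(c)
            print(f"c = {c:4.1f} M : kappa = {kap * 1e-10:.2f} 1/Angstrom, "
                  f"Debye length = {1e9 / kap:.2f} nm")

        # Linearized-PB (Debye-Hueckel) potential around a central ion of charge q, for r > a:
        #   psi(r) = q * exp(-kappa * (r - a)) / (4 * pi * eps_r * eps0 * (1 + kappa * a) * r)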

  14. Evolution of deep gray matter volume across the human lifespan.

    PubMed

    Narvacan, Karl; Treit, Sarah; Camicioli, Richard; Martin, Wayne; Beaulieu, Christian

    2017-08-01

    Magnetic resonance imaging of subcortical gray matter structures, which mediate behavior, cognition and the pathophysiology of several diseases, is crucial for establishing typical maturation patterns across the human lifespan. This single site study examines T1-weighted MPRAGE images of 3 healthy cohorts: (i) a cross-sectional cohort of 406 subjects aged 5-83 years; (ii) a longitudinal neurodevelopment cohort of 84 subjects scanned twice approximately 4 years apart, aged 5-27 years at first scan; and (iii) a longitudinal aging cohort of 55 subjects scanned twice approximately 3 years apart, aged 46-83 years at first scan. First scans from longitudinal subjects were included in the cross-sectional analysis. Age-dependent changes in thalamus, caudate, putamen, globus pallidus, nucleus accumbens, hippocampus, and amygdala volumes were tested with Poisson, quadratic, and linear models in the cross-sectional cohort, and quadratic and linear models in the longitudinal cohorts. Most deep gray matter structures best fit to Poisson regressions in the cross-sectional cohort and quadratic curves in the young longitudinal cohort, whereas the volume of all structures except the caudate and globus pallidus decreased linearly in the longitudinal aging cohort. Males had larger volumes than females for all subcortical structures, but sex differences in trajectories of change with age were not significant. Within subject analysis showed that 65%-80% of 13-17 year olds underwent a longitudinal decrease in volume between scans (∼4 years apart) for the putamen, globus pallidus, and hippocampus, suggesting unique developmental processes during adolescence. This lifespan study of healthy participants will form a basis for comparison to neurological and psychiatric disorders. Hum Brain Mapp 38:3771-3790, 2017. © 2017 Wiley Periodicals, Inc. © 2017 Wiley Periodicals, Inc.

  15. Smart materials systems through mesoscale patterning

    NASA Astrophysics Data System (ADS)

    Aksay, Ilhan A.; Groves, John T.; Gruner, Sol M.; Lee, P. C. Y.; Prud'homme, Robert K.; Shih, Wei-Heng; Torquato, Salvatore; Whitesides, George M.

    1996-02-01

    We report work on the fabrication of smart materials with two unique strategies: (1) self-assembly and (2) laser stereolithography. Both methods are akin to the processes used by biological systems. The first one is ideal for pattern development and the fabrication of miniaturized units in the submicron range and the second one in the 10 micrometer to 1 mm size range. By using these miniaturized units as building blocks, one can also produce smart material systems that can be used at larger length scales such as smart structural components. We have chosen to focus on two novel piezoceramic systems: (1) high-displacement piezoelectric actuators, and (2) piezoceramic hydrophone composites possessing negative Poisson ratio matrices. High-displacement actuators are essential in such applications as linear motors, pumps, switches, loud speakers, variable-focus mirrors, and laser deflectors. Arrays of such units can potentially be used for active vibration control of helicopter rotors as well as the fabrication of adaptive rotors. In the case of piezoceramic hydrophone composites, we utilize matrices having a negative Poisson's ratio in order to produce highly sensitive, miniaturized sensors. We envision such devices having promising new application areas such as the implantation of hydrophones in small blood vessels to monitor blood pressure. Negative Poisson ratio materials have promise as robust shock absorbers, air filters, and fasteners, and hence, can be used in aircraft and land vehicles.

  16. Incorporating signal-dependent noise for hyperspectral target detection

    NASA Astrophysics Data System (ADS)

    Morman, Christopher J.; Meola, Joseph

    2015-05-01

    The majority of hyperspectral target detection algorithms are developed from statistical data models employing stationary background statistics or white Gaussian noise models. Stationary background models are inaccurate as a result of two separate physical processes. First, varying background classes often exist in the imagery that possess different clutter statistics. Many algorithms can account for this variability through the use of subspaces or clustering techniques. The second physical process, which is often ignored, is a signal-dependent sensor noise term. For photon counting sensors that are often used in hyperspectral imaging systems, sensor noise increases as the measured signal level increases as a result of Poisson random processes. This work investigates the impact of this sensor noise on target detection performance. A linear noise model is developed describing sensor noise variance as a linear function of signal level. The linear noise model is then incorporated for detection of targets using data collected at Wright Patterson Air Force Base.
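
    A sketch of the premise used in this work: for a photon-counting sensor the measured noise variance grows linearly with the signal level (Poisson shot noise on top of a fixed read-noise floor). The gain and read-noise values below are illustrative assumptions, not sensor parameters from the paper.

        import numpy as np

        rng = np.random.default_rng(7)
        gain, read_sigma = 2.5, 12.0                       # DN per photoelectron, DN of read noise (assumed)
        signal_levels = np.linspace(50, 4000, 40)          # mean signal in DN

        means, variances = [], []
        for s in signal_levels:
            photons = rng.poisson(s / gain, size=20000)    # shot noise in photoelectrons
            measured = gain * photons + rng.normal(0, read_sigma, size=20000)
            means.append(measured.mean())
            variances.append(measured.var())

        # Fit variance = a * mean + b; expect a ~ gain and b ~ read_sigma**2
        a, b = np.polyfit(means, variances, 1)
        print(f"fitted slope {a:.2f} (gain {gain}), intercept {b:.0f} (read variance {read_sigma**2:.0f})")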

  17. Investigating adolescents' sweetened beverage consumption and Western fast food restaurant visits in China, 2006-2011.

    PubMed

    Lee, Yen-Han; Chiang, Timothy C; Liu, Ching-Ti; Chang, Yen-Chang

    2018-05-25

    Background: China has undergone rapid Westernization and dramatic social reforms since the early 21st century. However, emerging health issues have created challenges in the lives of Chinese residents. Western fast food and sweetened beverages, two food options associated with chronic diseases and obesity, have played key roles in altering adolescents' dietary patterns. This study aims to examine the association between adolescents' visits to Western fast food restaurants and sweetened beverage consumption. Methods: Applying three waves of the China Health and Nutrition Study (CHNS) between 2006 and 2011 (n = 1063), we used generalized Poisson regression (GPR) to investigate the association between adolescents' Western fast food restaurant visits and sweetened beverage consumption, as the popularity of fast food and sweetened beverages has skyrocketed among adolescents in contemporary China. A linear-by-linear association test was used as a trend test to study general patterns between sweetened beverage consumption and Western fast food restaurant visits. We adjusted all models for sweetened beverage consumption frequency, four food preferences (fast food, salty snacks, fruits and vegetables), school status, gross household income, province, rural/urban region, age and gender. Results: The trend test showed that frequent sweetened beverage consumption was highly associated with more Western fast food restaurant visits among Chinese adolescents in the three waves (p < 0.001). Furthermore, adolescents who consumed sweetened beverages less than monthly, or not at all, were much less likely to visit Western fast food restaurants (p < 0.05) than daily consumers. Conclusion: Adolescents' sweetened beverage consumption was highly associated with Western fast food restaurant visits in contemporary China. Further actions are needed from the Chinese central government to create a healthier dietary environment for adolescents.
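
    A hedged sketch of a generalized Poisson regression of the kind the study reports, using statsmodels' GeneralizedPoisson on simulated visit counts. The covariates and data below are made up; this is not the CHNS analysis, and the last fitted parameter is the model's dispersion term in statsmodels' parameterization.

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.discrete.discrete_model import GeneralizedPoisson

        rng = np.random.default_rng(11)
        n_obs = 800
        sweet_freq = rng.integers(0, 4, n_obs)             # 0 = never ... 3 = daily sweetened drinks
        urban = rng.integers(0, 2, n_obs)
        age = rng.integers(12, 19, n_obs)

        mu = np.exp(-0.5 + 0.35 * sweet_freq + 0.2 * urban)
        visits = rng.poisson(mu)                           # monthly fast-food visits (simulated)

        X = sm.add_constant(np.column_stack([sweet_freq, urban, age]).astype(float))
        gp_fit = GeneralizedPoisson(visits, X).fit(disp=0)
        print(gp_fit.params)                               # const, sweet_freq, urban, age, dispersion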

  18. Statistical Modeling of Fire Occurrence Using Data from the Tōhoku, Japan Earthquake and Tsunami.

    PubMed

    Anderson, Dana; Davidson, Rachel A; Himoto, Keisuke; Scawthorn, Charles

    2016-02-01

    In this article, we develop statistical models to predict the number and geographic distribution of fires caused by earthquake ground motion and tsunami inundation in Japan. Using new, uniquely large, and consistent data sets from the 2011 Tōhoku earthquake and tsunami, we fitted three types of models: generalized linear models (GLMs), generalized additive models (GAMs), and boosted regression trees (BRTs). This is the first time the latter two have been used in this application. A simple conceptual framework guided identification of candidate covariates. Models were then compared based on their out-of-sample predictive power, goodness of fit to the data, ease of implementation, and relative importance of the framework concepts. For the ground motion data set, we recommend a Poisson GAM; for the tsunami data set, a negative binomial (NB) GLM or NB GAM. The best models generate out-of-sample predictions of the total number of ignitions in the region within one or two. Prefecture-level prediction errors average approximately three. All models demonstrate predictive power far superior to that of four models from the literature that were also tested. A nonlinear relationship is apparent between ignitions and ground motion, so for GLMs, which assume a linear response-covariate relationship, instrumental intensity was the preferred ground motion covariate because it captures part of that nonlinearity. Measures of commercial exposure were preferred over measures of residential exposure for both ground motion and tsunami ignition models. This may vary in other regions, but nevertheless highlights the value of testing alternative measures for each concept. Models with the best predictive power included two or three covariates. © 2015 Society for Risk Analysis.

  19. A big data approach to the development of mixed-effects models for seizure count data.

    PubMed

    Tharayil, Joseph J; Chiang, Sharon; Moss, Robert; Stern, John M; Theodore, William H; Goldenholz, Daniel M

    2017-05-01

    Our objective was to develop a generalized linear mixed model for predicting seizure count that is useful in the design and analysis of clinical trials. This model also may benefit the design and interpretation of seizure-recording paradigms. Most existing seizure count models do not include children, and there is currently no consensus regarding the most suitable model that can be applied to children and adults. Therefore, an additional objective was to develop a model that accounts for both adult and pediatric epilepsy. Using data from SeizureTracker.com, a patient-reported seizure diary tool with >1.2 million recorded seizures across 8 years, we evaluated the appropriateness of Poisson, negative binomial, zero-inflated negative binomial, and modified negative binomial models for seizure count data based on minimization of the Bayesian information criterion. Generalized linear mixed-effects models were used to account for demographic and etiologic covariates and for autocorrelation structure. Holdout cross-validation was used to evaluate predictive accuracy in simulating seizure frequencies. For both adults and children, we found that a negative binomial model with autocorrelation over 1 day was optimal. Using holdout cross-validation, the proposed model was found to provide accurate simulation of seizure counts for patients with up to four seizures per day. The optimal model can be used to generate more realistic simulated patient data with very few input parameters. The availability of a parsimonious, realistic virtual patient model can be of great utility in simulations of phase II/III clinical trials, epilepsy monitoring units, outpatient biosensors, and mobile Health (mHealth) applications. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.
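
    A sketch of the model-selection step described above: choosing among count distributions by minimizing the Bayesian information criterion. Only a Poisson versus negative binomial comparison on simulated overdispersed daily counts is shown; the paper's mixed effects, covariates and one-day autocorrelation are omitted.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(5)
        n_obs = 2000
        treated = rng.integers(0, 2, n_obs)
        mu = np.exp(0.4 - 0.5 * treated)
        y = rng.negative_binomial(2.0, 2.0 / (2.0 + mu))          # overdispersed counts with mean mu

        X = sm.add_constant(treated.astype(float))
        pois = sm.Poisson(y, X).fit(disp=0)
        nb = sm.NegativeBinomial(y, X).fit(disp=0)                # estimates the dispersion by ML

        print(f"Poisson BIC: {pois.bic:.1f}   NB BIC: {nb.bic:.1f}")
        print("preferred:", "negative binomial" if nb.bic < pois.bic else "Poisson")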

  20. Predicting stem borer density in maize using RapidEye data and generalized linear models

    NASA Astrophysics Data System (ADS)

    Abdel-Rahman, Elfatih M.; Landmann, Tobias; Kyalo, Richard; Ong'amo, George; Mwalusepo, Sizah; Sulieman, Saad; Ru, Bruno Le

    2017-05-01

    Average maize yield in eastern Africa is 2.03 t ha⁻¹, compared to a global average of 6.06 t ha⁻¹, due to biotic and abiotic constraints. Amongst the biotic production constraints in Africa, stem borers are the most injurious. In eastern Africa, maize yield losses due to stem borers are currently estimated between 12% and 21% of the total production. The objective of the present study was to explore the possibility of using RapidEye spectral data to assess stem borer larva densities in maize fields at two study sites in Kenya. RapidEye images were acquired for the Bomet (western Kenya) test site on the 9th of December 2014 and on the 27th of January 2015, and for Machakos (eastern Kenya) a RapidEye image was acquired on the 3rd of January 2015. Five RapidEye spectral bands as well as 30 spectral vegetation indices (SVIs) were utilized to predict per-field maize stem borer larva densities using generalized linear models (GLMs), assuming Poisson ('Po') and negative binomial ('NB') distributions. Root mean square error (RMSE) and ratio of prediction to deviation (RPD) statistics were used to assess model performance using a leave-one-out cross-validation approach. The zero-inflated NB ('ZINB') models outperformed the 'NB' models, and stem borer larva densities could only be predicted during the mid growing season in December and early January in the two study sites, respectively (RMSE = 0.69-1.06 and RPD = 8.25-19.57). Overall, all models performed similarly whether all 30 SVIs (non-nested) or only the significant (nested) SVIs were used. The models developed could improve decision making regarding the control of maize stem borers within integrated pest management (IPM) interventions.
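
    A sketch of the validation loop used above: leave-one-out cross-validation of a Poisson GLM linking a spectral vegetation index to larva counts, summarised by RMSE and RPD (here taken as the standard deviation of the observations divided by the RMSE). The fields and index values are simulated, not RapidEye data.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(9)
        n_fields = 60
        ndvi = rng.uniform(0.3, 0.9, n_fields)                    # one illustrative vegetation index
        larvae = rng.poisson(np.exp(3.0 - 2.5 * ndvi))            # per-field larva density (simulated)

        preds = np.empty(n_fields)
        for i in range(n_fields):
            train = np.arange(n_fields) != i
            X_tr = sm.add_constant(ndvi[train])
            fit = sm.GLM(larvae[train], X_tr, family=sm.families.Poisson()).fit()
            preds[i] = fit.predict(np.array([[1.0, ndvi[i]]]))[0] # held-out prediction

        rmse = np.sqrt(np.mean((larvae - preds) ** 2))
        rpd = larvae.std(ddof=1) / rmse
        print(f"LOOCV RMSE = {rmse:.2f}, RPD = {rpd:.2f}")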

  1. Effect of temperature rise and hydrostatic pressure on microbending loss and refractive index change in double-coated optical fiber

    NASA Astrophysics Data System (ADS)

    Seraji, Faramarz E.; Toutian, Golnoosh

    This paper presents an analysis of the effect of temperature rise and hydrostatic pressure on the microbending loss, refractive index change, and stress components of a double-coated optical fiber, considering coating material parameters such as Young's modulus and the Poisson ratio. It is shown that, when temperature rises, the microbending loss and refractive index change first decrease with increasing thickness of the primary coating layer and then increase after passing through a minimum. Increasing the thickness of the secondary coating layer causes the microbending loss and refractive index change to decrease. We have shown that a temperature rise makes the microbending loss and refractive index change decrease linearly. At a particular temperature, the microbending loss takes negative values, due to the tensile pressure applied on the fiber. An increase of Young's modulus and the Poisson ratio of the primary coating lowers the microbending loss and refractive index change, whereas in the secondary coating layer the condition reverses.

  2. Surface instability of an imperfectly bonded thin elastic film under surface van der Waals forces

    NASA Astrophysics Data System (ADS)

    Wang, Xu; Jing, Rong

    2017-02-01

    This paper studies surface instability of a thin elastic film imperfectly bonded to a rigid substrate interacting with a rigid contactor through van der Waals forces under plane strain conditions. The film-substrate interface is modeled as a linear spring with vanishing thickness described in terms of the normal and tangential interface parameters. Depending on the ratio of the two imperfect interface parameters, the critical value of the Poisson's ratio for the occurrence of surface wrinkling in the absence of surface energy can be greater than, equal to, or smaller than 0.25, which is the critical Poisson's ratio for a perfect film-substrate interface. The critical surface energy for the inhibition of the surface wrinkling is also obtained. Finally, we propose a very simple and effective method to study the surface instability of a multilayered elastic film with imperfect interfaces interacting with a rigid contactor or with another multilayered elastic film (or a multilayered simply supported plate) with imperfect interfaces.

  3. Optimal linear reconstruction of dark matter from halo catalogues

    DOE PAGES

    Cai, Yan-Chuan; Bernstein, Gary; Sheth, Ravi K.

    2011-04-01

    The dark matter lumps (or "halos") that contain galaxies have locations in the Universe that are to some extent random with respect to the overall matter distributions. We investigate how best to estimate the total matter distribution from the locations of the halos. We derive the weight function w(M) to apply to dark-matter haloes that minimizes the stochasticity between the weighted halo distribution and its underlying mass density field. The optimal w(M) depends on the range of masses of halos being used. While the standard biased-Poisson model of the halo distribution predicts that bias weighting is optimal, the simple fact that the mass is comprised of haloes implies that the optimal w(M) will be a mixture of mass-weighting and bias-weighting. In N-body simulations, the Poisson estimator is up to 15× noisier than the optimal. Optimal weighting could make cosmological tests based on the matter power spectrum or cross-correlations much more powerful and/or cost effective.

  4. Calculation of the Maxwell stress tensor and the Poisson-Boltzmann force on a solvated molecular surface using hypersingular boundary integrals

    NASA Astrophysics Data System (ADS)

    Lu, Benzhuo; Cheng, Xiaolin; Hou, Tingjun; McCammon, J. Andrew

    2005-08-01

    The electrostatic interaction among molecules solvated in ionic solution is governed by the Poisson-Boltzmann equation (PBE). Here the hypersingular integral technique is used in a boundary element method (BEM) for the three-dimensional (3D) linear PBE to calculate the Maxwell stress tensor on the solvated molecular surface, and then the PB forces and torques can be obtained from the stress tensor. Compared with the variational method (also in a BEM frame) that we proposed recently, this method provides an even more efficient way to calculate the full intermolecular electrostatic interaction force, especially for macromolecular systems. Thus, it may be more suitable for the application of Brownian dynamics methods to study the dynamics of protein/protein docking as well as the assembly of large 3D architectures involving many diffusing subunits. The method has been tested on two simple cases to demonstrate its reliability and efficiency, and also compared with our previous variational method used in BEM.

  5. Application of the Hotelling and ideal observers to detection and localization of exoplanets.

    PubMed

    Caucci, Luca; Barrett, Harrison H; Devaney, Nicholas; Rodríguez, Jeffrey J

    2007-12-01

    The ideal linear discriminant or Hotelling observer is widely used for detection tasks and image-quality assessment in medical imaging, but it has had little application in other imaging fields. We apply it to detection of planets outside of our solar system with long-exposure images obtained from ground-based or space-based telescopes. The statistical limitations in this problem include Poisson noise arising mainly from the host star, electronic noise in the image detector, randomness or uncertainty in the point-spread function (PSF) of the telescope, and possibly a random background. PSF randomness is reduced but not eliminated by the use of adaptive optics. We concentrate here on the effects of Poisson and electronic noise, but we also show how to extend the calculation to include a random PSF. For the case where the PSF is known exactly, we compare the Hotelling observer to other observers commonly used for planet detection; comparison is based on receiver operating characteristic (ROC) and localization ROC (LROC) curves.

  6. Application of the Hotelling and ideal observers to detection and localization of exoplanets

    PubMed Central

    Caucci, Luca; Barrett, Harrison H.; Devaney, Nicholas; Rodríguez, Jeffrey J.

    2008-01-01

    The ideal linear discriminant or Hotelling observer is widely used for detection tasks and image-quality assessment in medical imaging, but it has had little application in other imaging fields. We apply it to detection of planets outside of our solar system with long-exposure images obtained from ground-based or space-based telescopes. The statistical limitations in this problem include Poisson noise arising mainly from the host star, electronic noise in the image detector, randomness or uncertainty in the point-spread function (PSF) of the telescope, and possibly a random background. PSF randomness is reduced but not eliminated by the use of adaptive optics. We concentrate here on the effects of Poisson and electronic noise, but we also show how to extend the calculation to include a random PSF. For the case where the PSF is known exactly, we compare the Hotelling observer to other observers commonly used for planet detection; comparison is based on receiver operating characteristic (ROC) and localization ROC (LROC) curves. PMID:18059905
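
    A sketch of the Hotelling (ideal linear) observer applied in the two records above: the template is t = K⁻¹ s̄ and the detectability is SNR² = s̄ᵀ K⁻¹ s̄, here for a known PSF and Poisson-plus-Gaussian noise. The image size, star and planet intensities and read-noise level are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(2)
        npix = 32 * 32
        star = 5000.0 * np.exp(-(np.arange(npix) % 32) / 4.0)        # crude stand-in for a stellar PSF halo
        planet_signal = np.zeros(npix)
        planet_signal[200] = 40.0                                     # faint point source at a known pixel
        read_sigma = 5.0

        # Covariance of the planet-absent data: Poisson variance of the star plus independent
        # Gaussian read noise, diagonal because the noise is pixelwise independent here.
        K = np.diag(star + read_sigma ** 2)
        w = np.linalg.solve(K, planet_signal)                         # Hotelling template
        snr2 = planet_signal @ w                                      # Hotelling detectability SNR^2
        print(f"Hotelling SNR = {np.sqrt(snr2):.2f}")

        # Apply the template to noisy realisations with and without the planet:
        def frame(with_planet):
            mean = star + (planet_signal if with_planet else 0.0)
            return rng.poisson(mean) + rng.normal(0, read_sigma, npix)

        t_signal = [w @ frame(True) for _ in range(1000)]
        t_noise = [w @ frame(False) for _ in range(1000)]
        print("empirical separation:",
              (np.mean(t_signal) - np.mean(t_noise)) / np.std(t_noise))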

  7. The Euler-Poisson-Darboux equation for relativists

    NASA Astrophysics Data System (ADS)

    Stewart, John M.

    2009-09-01

    The Euler-Poisson-Darboux (EPD) equation is the simplest linear hyperbolic equation in two independent variables whose coefficients exhibit singularities, and as such must be of interest as a paradigm to relativists. Sadly it receives scant treatment in the textbooks. The first half of this review is didactic in nature. It discusses in the simplest terms possible the nature of solutions of the EPD equation for the timelike and spacelike singularity cases. Also covered is the Riemann representation of solutions of the characteristic initial value problem, which is hard to find in the literature. The second half examines a few of the possible applications, ranging from explicit computation of the leading terms in the far-field backscatter from predominantly outgoing radiation in a Schwarzschild space-time, to computing explicitly the leading terms in the matter-induced singularities in plane symmetric space-times. There are of course many other applications and the aim of this article is to encourage relativists to investigate this underrated paradigm.

  8. Predicting Hospital Admissions With Poisson Regression Analysis

    DTIC Science & Technology

    2009-06-01

    East and Four West. Four East is where bariatric, general, neurologic, otolaryngology (ENT), ophthalmologic, orthopedic, and plastic surgery ...where care is provided for cardiovascular, thoracic, and vascular surgery patients. Figure 1 shows a bar graph for each unit, giving the proportion of...provided at NMCSD, or a study could be conducted on the amount of time that patients generally wait for elective surgeries. There is also the

  9. Lindley frailty model for a class of compound Poisson processes

    NASA Astrophysics Data System (ADS)

    Kadilar, Gamze Özel; Ata, Nihal

    2013-10-01

    The Lindley distribution gains importance in survival analysis because of its similarity to the exponential distribution and its allowance for different shapes of the hazard function. Frailty models provide an alternative to the proportional hazards model where misspecified or omitted covariates are described by an unobservable random variable. Although the frailty distribution is generally assumed to be continuous, it is appropriate to consider discrete frailty distributions in some circumstances. In this paper, frailty models with a discrete compound Poisson process for Lindley-distributed failure times are introduced. Survival functions are derived and maximum likelihood estimation procedures for the parameters are studied. Then, the fit of the models to the earthquake data set of Turkey is examined.

  10. DISCRETE COMPOUND POISSON PROCESSES AND TABLES OF THE GEOMETRIC POISSON DISTRIBUTION.

    DTIC Science & Technology

    A concise summary of the salient properties of discrete Poisson processes, with emphasis on comparing the geometric and logarithmic Poisson processes. Tables of the geometric Poisson process are given for 176 sets of parameter values. New discrete compound Poisson processes are also introduced. These...processes have properties that are particularly relevant when the summation of several different Poisson processes is to be analyzed. This study provides the

  11. Error analysis of finite element method for Poisson–Nernst–Planck equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yuzhou; Sun, Pengtao; Zheng, Bin

    A priori error estimates of the finite element method for the time-dependent Poisson-Nernst-Planck equations are studied in this work. We obtain optimal error estimates in the L∞(H¹) and L²(H¹) norms and suboptimal error estimates in the L∞(L²) norm with linear elements, and optimal error estimates in the L∞(L²) norm with quadratic or higher-order elements, for both semi- and fully discrete finite element approximations. Numerical experiments are also given to validate the theoretical results.

  12. Markov and semi-Markov processes as a failure rate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grabski, Franciszek

    2016-06-08

    In this paper the reliability function is defined by a stochastic failure rate process with nonnegative and right-continuous trajectories. Equations for the conditional reliability functions of an object, under the assumption that the failure rate is a semi-Markov process with an at most countable state space, are derived. An appropriate theorem is presented. The linear systems of equations for the corresponding Laplace transforms allow the reliability functions to be found for the alternating, the Poisson and the Furry-Yule failure rate processes.

  13. Concentration Dependent Physical Properties of Ge1-xSnx Solid Solution

    NASA Astrophysics Data System (ADS)

    Jivani, A. R.; Jani, A. R.

    2011-12-01

    Our proposed potential is used to investigate a few physical properties, such as the total energy, bulk modulus, pressure derivative of the bulk modulus, elastic constants, pressure derivatives of the elastic constants, Poisson's ratio and Young's modulus, of the Ge1-xSnx solid solution, where x is the atomic concentration of α-Sn. The potential combines linear plus quadratic types of electron-ion interaction. For the first time, the screening function proposed by Sarkar et al. is used to investigate the properties of the Ge-Sn solid solution system.

  14. Effects of adaptive refinement on the inverse EEG solution

    NASA Astrophysics Data System (ADS)

    Weinstein, David M.; Johnson, Christopher R.; Schmidt, John A.

    1995-10-01

    One of the fundamental problems in electroencephalography can be characterized by an inverse problem. Given a subset of electrostatic potentials measured on the surface of the scalp and the geometry and conductivity properties within the head, calculate the current vectors and potential fields within the cerebrum. Mathematically the generalized EEG problem can be stated as solving Poisson's equation of electrical conduction for the primary current sources. The resulting problem is mathematically ill-posed, i.e., the solution does not depend continuously on the data, such that small errors in the measurement of the voltages on the scalp can yield unbounded errors in the solution, and, for the general treatment of a solution of Poisson's equation, the solution is non-unique. However, if accurate solutions to such problems could be obtained, neurologists would gain noninvasive access to patient-specific cortical activity. Access to such data would ultimately increase the number of patients who could be effectively treated for pathological cortical conditions such as temporal lobe epilepsy. In this paper, we present the effects of spatial adaptive refinement on the inverse EEG problem and show that the use of adaptive methods allows for significantly better estimates of electric and potential fields within the brain through an inverse procedure. To test these methods, we have constructed several finite element head models from magnetic resonance images of a patient. The finite element meshes ranged in size from 2724 nodes and 12,812 elements to 5224 nodes and 29,135 tetrahedral elements, depending on the level of discretization. We show that an adaptive meshing algorithm minimizes the error in the forward problem due to spatial discretization and thus increases the accuracy of the inverse solution.
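
    The forward problem named above is a Poisson equation for the electric potential produced by a current source. Below is a minimal finite-difference sketch of such a forward solve on a uniform 2-D grid with homogeneous conductivity and a zero-potential boundary; this is far simpler than the patient-specific adaptive finite element meshes described in the record, and all numerical values are illustrative.

        import numpy as np
        from scipy.sparse import lil_matrix
        from scipy.sparse.linalg import spsolve

        n, h, sigma = 41, 0.005, 0.33                      # grid points per side, spacing (m), conductivity (S/m)
        src = np.zeros((n, n))
        src[12, 20], src[28, 20] = +1e-6, -1e-6            # a current dipole (A), illustrative placement

        A = lil_matrix((n * n, n * n))
        b = np.zeros(n * n)
        idx = lambda i, j: i * n + j
        for i in range(n):
            for j in range(n):
                k = idx(i, j)
                if i in (0, n - 1) or j in (0, n - 1):
                    A[k, k] = 1.0                          # Dirichlet boundary: potential fixed to 0
                    continue
                A[k, k] = -4.0 * sigma / h ** 2            # 5-point Laplacian: sigma * lap(phi) = -source density
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    A[k, idx(i + di, j + dj)] = sigma / h ** 2
                b[k] = -src[i, j] / h ** 2                 # point current converted to an area density

        phi = spsolve(A.tocsr(), b).reshape(n, n)
        print("potential extrema (V):", phi.min(), phi.max())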

  15. Reconstructing Information in Large-Scale Structure via Logarithmic Mapping

    NASA Astrophysics Data System (ADS)

    Szapudi, Istvan

    We propose to develop a new method to extract information from large-scale structure data combining two-point statistics and non-linear transformations; before, this information was available only with substantially more complex higher-order statistical methods. Initially, most of the cosmological information in large-scale structure lies in two-point statistics. With non-linear evolution, some of that useful information leaks into higher-order statistics. The PI and group have shown in a series of theoretical investigations how that leakage occurs, and explained the Fisher information plateau at smaller scales. This plateau means that even as more modes are added to the measurement of the power spectrum, the total cumulative information (loosely speaking the inverse error bar) is not increasing. Recently we have shown in Neyrinck et al. (2009, 2010) that a logarithmic (and a related Gaussianization or Box-Cox) transformation on the non-linear Dark Matter or galaxy field reconstructs a surprisingly large fraction of this missing Fisher information of the initial conditions. This was predicted by the earlier wave mechanical formulation of gravitational dynamics by Szapudi & Kaiser (2003). The present proposal is focused on working out the theoretical underpinning of the method to a point that it can be used in practice to analyze data. In particular, one needs to deal with the usual real-life issues of galaxy surveys, such as complex geometry, discrete sampling (Poisson or sub-Poisson noise), bias (linear or non-linear, deterministic or stochastic), redshift distortions, projection effects for 2D samples, and the effects of photometric redshift errors. We will develop methods for weak lensing and Sunyaev-Zeldovich power spectra as well, the latter specifically targeting Planck. In addition, we plan to investigate the question of residual higher-order information after the non-linear mapping, and possible applications for cosmology. Our aim will be to work out practical methods, with the ultimate goal of cosmological parameter estimation. We will quantify with standard MCMC and Fisher methods (including the DETF Figure of Merit when applicable) the efficiency of our estimators, comparing with the conventional method that uses the un-transformed field. Preliminary results indicate that the increase for NASA's WFIRST in the DETF Figure of Merit would be 1.5-4.2 using a range of pessimistic to optimistic assumptions, respectively.

  16. An efficient parallel sampling technique for Multivariate Poisson-Lognormal model: Analysis with two crash count datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhan, Xianyuan; Aziz, H. M. Abdul; Ukkusuri, Satish V.

    Our study investigates the Multivariate Poisson-lognormal (MVPLN) model that jointly models crash frequency and severity accounting for correlations. The ordinary univariate count models analyze crashes of different severity levels separately, ignoring the correlations among severity levels. The MVPLN model is capable of incorporating the general correlation structure and accounts for the overdispersion in the data, which leads to a superior data fitting. However, the traditional estimation approach for the MVPLN model is computationally expensive, which often limits the use of the MVPLN model in practice. In this work, a parallel sampling scheme is introduced to improve the original Markov Chain Monte Carlo (MCMC) estimation approach of the MVPLN model, which significantly reduces the model estimation time. Two MVPLN models are developed using the pedestrian vehicle crash data collected in New York City from 2002 to 2006, and the highway-injury data from Washington State (5-year data from 1990 to 1994). The Deviance Information Criterion (DIC) is used to evaluate the model fitting. The estimation results show that the MVPLN models provide a superior fit over univariate Poisson-lognormal (PLN), univariate Poisson, and Negative Binomial models. Moreover, the correlations among the latent effects of different severity levels are found significant in both datasets, which justifies the importance of jointly modeling crash frequency and severity accounting for correlations.

  17. Receiver design for SPAD-based VLC systems under Poisson-Gaussian mixed noise model.

    PubMed

    Mao, Tianqi; Wang, Zhaocheng; Wang, Qi

    2017-01-23

    Single-photon avalanche diode (SPAD) is a promising photosensor because of its high sensitivity to optical signals in weak-illuminance environments. Recently, it has drawn much attention from researchers in visible light communications (VLC). However, existing literature only deals with a simplified channel model, which considers the effects of Poisson noise introduced by the SPAD but neglects other noise sources. Specifically, when an analog SPAD detector is applied, there exists Gaussian thermal noise generated by the transimpedance amplifier (TIA) and the digital-to-analog converter (D/A). Therefore, in this paper, we propose an SPAD-based VLC system with pulse-amplitude-modulation (PAM) under a Poisson-Gaussian mixed noise model, where Gaussian-distributed thermal noise at the receiver is also investigated. The closed-form conditional likelihood of received signals is derived using the Laplace transform and the saddle-point approximation method, and the corresponding quasi-maximum-likelihood (quasi-ML) detector is proposed. Furthermore, the Poisson-Gaussian-distributed signals are converted to Gaussian variables with the aid of the generalized Anscombe transform (GAT), leading to an equivalent additive white Gaussian noise (AWGN) channel, and a hard-decision-based detector is invoked. Simulation results demonstrate that the proposed GAT-based detector can reduce the computational complexity with marginal performance loss compared with the proposed quasi-ML detector, and both detectors are capable of accurately demodulating the SPAD-based PAM signals.
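
    A sketch of the variance-stabilisation idea mentioned above: the generalized Anscombe transform (GAT) maps Poisson-plus-Gaussian observations to approximately unit-variance Gaussian variables. The commonly quoted unit-gain, zero-mean-Gaussian form f(x) = 2*sqrt(x + 3/8 + sigma^2) is assumed here, and the detector parameters are illustrative rather than taken from the paper.

        import numpy as np

        rng = np.random.default_rng(4)
        sigma = 2.0                                    # std of the Gaussian (thermal) noise component

        def gat(x, sigma):
            return 2.0 * np.sqrt(np.maximum(x + 3.0 / 8.0 + sigma ** 2, 0.0))

        for lam in (5.0, 20.0, 80.0):                  # mean detected photon counts per symbol
            z = rng.poisson(lam, 200000) + rng.normal(0.0, sigma, 200000)
            print(f"lambda = {lam:5.1f}: raw variance = {z.var():6.2f}, "
                  f"GAT variance = {gat(z, sigma).var():5.2f}")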

  18. Differential expression analysis for RNAseq using Poisson mixed models.

    PubMed

    Sun, Shiquan; Hood, Michelle; Scott, Laura; Peng, Qinke; Mukherjee, Sayan; Tung, Jenny; Zhou, Xiang

    2017-06-20

    Identifying differentially expressed (DE) genes from RNA sequencing (RNAseq) studies is among the most common analyses in genomics. However, RNAseq DE analysis presents several statistical and computational challenges, including over-dispersed read counts and, in some settings, sample non-independence. Previous count-based methods rely on simple hierarchical Poisson models (e.g. negative binomial) to model independent over-dispersion, but do not account for sample non-independence due to relatedness, population structure and/or hidden confounders. Here, we present a Poisson mixed model with two random effects terms that account for both independent over-dispersion and sample non-independence. We also develop a scalable sampling-based inference algorithm using a latent variable representation of the Poisson distribution. With simulations, we show that our method properly controls for type I error and is generally more powerful than other widely used approaches, except in small samples (n < 15) with other unfavorable properties (e.g. small effect sizes). We also apply our method to three real datasets that contain related individuals, population stratification or hidden confounders. Our results show that our method increases power in all three datasets compared to other approaches, though the power gain is smallest in the smallest sample (n = 6). Our method is implemented in MACAU, freely available at www.xzlab.org/software.html. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  19. An efficient parallel sampling technique for Multivariate Poisson-Lognormal model: Analysis with two crash count datasets

    DOE PAGES

    Zhan, Xianyuan; Aziz, H. M. Abdul; Ukkusuri, Satish V.

    2015-11-19

    Our study investigates the Multivariate Poisson-lognormal (MVPLN) model that jointly models crash frequency and severity accounting for correlations. The ordinary univariate count models analyze crashes of different severity levels separately, ignoring the correlations among severity levels. The MVPLN model is capable of incorporating the general correlation structure and accounts for the overdispersion in the data, which leads to a superior data fitting. However, the traditional estimation approach for the MVPLN model is computationally expensive, which often limits the use of the MVPLN model in practice. In this work, a parallel sampling scheme is introduced to improve the original Markov Chain Monte Carlo (MCMC) estimation approach of the MVPLN model, which significantly reduces the model estimation time. Two MVPLN models are developed using the pedestrian vehicle crash data collected in New York City from 2002 to 2006, and the highway-injury data from Washington State (5-year data from 1990 to 1994). The Deviance Information Criterion (DIC) is used to evaluate the model fitting. The estimation results show that the MVPLN models provide a superior fit over univariate Poisson-lognormal (PLN), univariate Poisson, and Negative Binomial models. Moreover, the correlations among the latent effects of different severity levels are found significant in both datasets, which justifies the importance of jointly modeling crash frequency and severity accounting for correlations.

  20. Stress-controlled Poisson ratio of a crystalline membrane: Application to graphene

    NASA Astrophysics Data System (ADS)

    Burmistrov, I. S.; Gornyi, I. V.; Kachorovskii, V. Yu.; Katsnelson, M. I.; Los, J. H.; Mirlin, A. D.

    2018-03-01

    We demonstrate that a key elastic parameter of a suspended crystalline membrane—the Poisson ratio (PR) ν—is a nontrivial function of the applied stress σ and of the system size L, i.e., ν = ν_L(σ). We consider a generic two-dimensional membrane embedded into space of dimensionality 2 + d_c. (The physical situation corresponds to d_c = 1.) A particularly important application of our results is to freestanding graphene. We find that at a very low stress, when the membrane exhibits linear response, the PR ν_L(0) decreases with increasing system size L and saturates for L → ∞ at a value which depends on the boundary conditions and is essentially different from the value ν = -1/3 previously predicted by the membrane theory within a self-consistent scaling analysis. By increasing σ, one drives a sufficiently large membrane (with the length L much larger than the Ginzburg length) into a nonlinear regime characterized by a universal value of PR that depends solely on d_c, in close connection with the critical index η controlling the renormalization of bending rigidity. This universal nonlinear PR acquires its minimum value ν_min = -1 in the limit d_c → ∞, when η → 0. With the further increase of σ, the PR changes sign and finally saturates at a positive nonuniversal value prescribed by the conventional elasticity theory. We also show that one should distinguish between the absolute and differential PR (ν and ν_diff, respectively). While coinciding in the limits of very low and very high stress, they differ in general: ν ≠ ν_diff. In particular, in the nonlinear universal regime, ν_diff takes a universal value which, similarly to the absolute PR, is a function solely of d_c (or, equivalently, of η) but is different from the universal value of ν. In the limit of infinite dimensionality of the embedding space, d_c → ∞ (i.e., η → 0), the universal value of ν_diff tends to -1/3, at variance with the limiting value -1 of ν. Finally, we briefly discuss generalization of these results to a disordered membrane.

  1. Guidelines for Use of the Approximate Beta-Poisson Dose-Response Model.

    PubMed

    Xie, Gang; Roiko, Anne; Stratton, Helen; Lemckert, Charles; Dunn, Peter K; Mengersen, Kerrie

    2017-07-01

    For dose-response analysis in quantitative microbial risk assessment (QMRA), the exact beta-Poisson model is a two-parameter mechanistic dose-response model with parameters α>0 and β>0, which involves the Kummer confluent hypergeometric function. Evaluation of a hypergeometric function is a computational challenge. Denoting P_I(d) as the probability of infection at a given mean dose d, the widely used dose-response model P_I(d) = 1 - (1 + d/β)^(-α) is an approximate formula for the exact beta-Poisson model. Notwithstanding the required conditions α ≪ β and β ≫ 1, issues related to the validity and approximation accuracy of this approximate formula have remained largely ignored in practice, partly because these conditions are too general to provide clear guidance. Consequently, this study proposes a probability measure Pr(0 < r < 1 | α̂, β̂) as a validity measure (r is a random variable that follows a gamma distribution; α̂ and β̂ are the maximum likelihood estimates of α and β in the approximate model); and the constraint condition β̂ > (22α̂)^0.50 for 0.02 < α̂ < 2 as a rule of thumb to ensure an accurate approximation (e.g., Pr(0 < r < 1 | α̂, β̂) > 0.99). This validity measure and rule of thumb were validated by application to all the completed beta-Poisson models (related to 85 data sets) from the QMRA community portal (QMRA Wiki). The results showed that the higher the probability Pr(0 < r < 1 | α̂, β̂), the better the approximation. The results further showed that, among the total 85 models examined, 68 models were identified as valid approximate model applications, all of which had a near perfect match to the corresponding exact beta-Poisson model dose-response curve. © 2016 Society for Risk Analysis.
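
    A minimal sketch of the approximate dose-response formula and the proposed validity measure is given below. The gamma parameterization of r (shape α̂, scale 1/β̂) is an assumption made for illustration and should be checked against the original paper; the numerical values of α̂ and β̂ are invented.

```python
import numpy as np
from scipy import stats

def approx_beta_poisson(dose, alpha, beta):
    """Approximate beta-Poisson infection probability P_I(d) = 1 - (1 + d/beta)**(-alpha)."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

def validity_measure(alpha_hat, beta_hat):
    """Pr(0 < r < 1) for a gamma-distributed r; shape = alpha_hat, scale = 1/beta_hat
    is an assumed parameterization -- verify against the original paper."""
    return stats.gamma.cdf(1.0, a=alpha_hat, scale=1.0 / beta_hat)

alpha_hat, beta_hat = 0.15, 48.0           # illustrative estimates
doses = np.array([1.0, 10.0, 100.0, 1e4])
print(approx_beta_poisson(doses, alpha_hat, beta_hat).round(4))
print("rule of thumb beta_hat > (22*alpha_hat)**0.5:", beta_hat > (22 * alpha_hat) ** 0.5)
print("Pr(0 < r < 1):", validity_measure(alpha_hat, beta_hat).round(4))
```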

  2. Method for resonant measurement

    DOEpatents

    Rhodes, G.W.; Migliori, A.; Dixon, R.D.

    1996-03-05

    A method of measurement of objects to determine object flaws, Poisson's ratio (σ) and shear modulus (μ) is shown and described. First, the frequency for expected degenerate responses is determined for one or more input frequencies, and then splitting of degenerate resonant modes is observed to identify the presence of flaws in the object. Poisson's ratio and the shear modulus can be determined by identification of resonances dependent only on the shear modulus, and then using that shear modulus to find Poisson's ratio using other modes dependent on both the shear modulus and Poisson's ratio. 1 fig.

  3. Effects of weather variability and air pollutants on emergency admissions for cardiovascular and cerebrovascular diseases.

    PubMed

    Hori, Aya; Hashizume, Masahiro; Tsuda, Yoko; Tsukahara, Teruomi; Nomiyama, Tetsuo

    2012-01-01

    We examined the effect of ambient temperature, air pressure and air pollutants on daily emergency admissions by identifying the cause of admission for each type of stroke and cardiovascular disease, using generalized linear Poisson regression models allowing for overdispersion and controlling for seasonal and inter-annual variations, day of the week, public holidays, and levels of influenza and respiratory syncytial viruses. Every 1°C decrease in mean temperature was associated with an increase in the daily number of emergency admissions by 7.83% (95% CI 2.06-13.25) for acute coronary syndrome (ACS) and heart failure, by 35.57% (95% CI 15.59-59.02) for intracerebral haemorrhage (ICH) and by 11.71% (95% CI 4.1-19.89) for cerebral infarction. Increases in emergency admissions due to ICH (3.25%; 95% CI 0.94-5.51) and heart failure (3.56%; 95% CI 1.09-5.96) were observed for every 1 hPa decrease in air pressure from the previous day. We found a stronger detrimental effect of cold on stroke than on cardiovascular disease.
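
    The sketch below illustrates the general type of analysis described (an overdispersed, quasi-Poisson time-series regression), not the authors' actual model: simulated daily admission counts are regressed on temperature and day of week, the dispersion is estimated from the Pearson chi-square, and the temperature coefficient is converted to a percent change per 1°C decrease. All data and coefficients are simulated.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_days = 1000
temp = 15 + 10 * np.sin(2 * np.pi * np.arange(n_days) / 365) + rng.normal(0, 3, n_days)
dow = np.arange(n_days) % 7

# Simulated admissions: risk rises as temperature falls (illustrative coefficients)
lam = np.exp(1.5 - 0.05 * temp + 0.05 * (dow == 0))
y = rng.negative_binomial(5, 5 / (5 + lam))   # over-dispersed counts

X = pd.get_dummies(pd.DataFrame({"temp": temp, "dow": dow.astype(str)}),
                   drop_first=True, dtype=float)
X = sm.add_constant(X)

# Quasi-Poisson: Poisson GLM with the scale estimated from the Pearson chi-square
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(scale="X2")
pct_per_1C_decrease = (np.exp(-fit.params["temp"]) - 1) * 100
print(f"estimated % increase in admissions per 1 deg C decrease: {pct_per_1C_decrease:.1f}%")
```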

  4. Differential Covariance: A New Class of Methods to Estimate Sparse Connectivity from Neural Recordings

    PubMed Central

    Lin, Tiger W.; Das, Anup; Krishnan, Giri P.; Bazhenov, Maxim; Sejnowski, Terrence J.

    2017-01-01

    With our ability to record more neurons simultaneously, making sense of these data is a challenge. Functional connectivity is one popular way to study the relationship of multiple neural signals. Correlation-based methods are a set of currently well-used techniques for functional connectivity estimation. However, due to explaining away and unobserved common inputs (Stevenson, Rebesco, Miller, & Körding, 2008), they produce spurious connections. The general linear model (GLM), which models spike trains as Poisson processes (Okatan, Wilson, & Brown, 2005; Truccolo, Eden, Fellows, Donoghue, & Brown, 2005; Pillow et al., 2008), avoids these confounds. We develop here a new class of methods by using differential signals based on simulated intracellular voltage recordings. It is equivalent to a regularized AR(2) model. We also expand the method to simulated local field potential recordings and calcium imaging. In all of our simulated data, the differential covariance-based methods achieved performance better than or similar to the GLM method and required fewer data samples. This new class of methods provides alternative ways to analyze neural signals. PMID:28777719

  5. Differential Covariance: A New Class of Methods to Estimate Sparse Connectivity from Neural Recordings.

    PubMed

    Lin, Tiger W; Das, Anup; Krishnan, Giri P; Bazhenov, Maxim; Sejnowski, Terrence J

    2017-10-01

    With our ability to record more neurons simultaneously, making sense of these data is a challenge. Functional connectivity is one popular way to study the relationship of multiple neural signals. Correlation-based methods are a set of currently well-used techniques for functional connectivity estimation. However, due to explaining away and unobserved common inputs (Stevenson, Rebesco, Miller, & Körding, 2008 ), they produce spurious connections. The general linear model (GLM), which models spike trains as Poisson processes (Okatan, Wilson, & Brown, 2005 ; Truccolo, Eden, Fellows, Donoghue, & Brown, 2005 ; Pillow et al., 2008 ), avoids these confounds. We develop here a new class of methods by using differential signals based on simulated intracellular voltage recordings. It is equivalent to a regularized AR(2) model. We also expand the method to simulated local field potential recordings and calcium imaging. In all of our simulated data, the differential covariance-based methods achieved performance better than or similar to the GLM method and required fewer data samples. This new class of methods provides alternative ways to analyze neural signals.

  6. Effects of a 2009 Illinois Alcohol Tax Increase on Fatal Motor Vehicle Crashes.

    PubMed

    Wagenaar, Alexander C; Livingston, Melvin D; Staras, Stephanie S

    2015-09-01

    We examined the effects of a 2009 increase in alcohol taxes in Illinois on alcohol-related fatal motor vehicle crashes. We used an interrupted time-series design, with intrastate and cross-state comparisons and measurement derived from driver alcohol test results, for 104 months before and 28 months after enactment. Our analyses used autoregressive moving average and generalized linear mixed Poisson models. We examined both population-wide effects and stratifications by alcohol level, age, gender, and race. Fatal alcohol-related motor vehicle crashes declined 9.9 per month after the tax increase, a 26% reduction. The effect was similar for alcohol-impaired drivers with positive alcohol levels lower than 0.15 grams per deciliter (-22%) and drivers with very high alcohol levels of 0.15 or more (-25%). Drivers younger than 30 years showed larger declines (-37%) than those aged 30 years and older (-23%), but gender and race stratifications did not significantly differ. Increases in alcohol excise taxes, such as the 2009 Illinois act, could save thousands of lives yearly across the United States as part of a comprehensive strategy to reduce alcohol-impaired driving.
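
    As a hedged sketch of an interrupted time-series Poisson regression (a simplification of the autoregressive moving average and mixed Poisson models actually used in the study), the example below fits a trend plus a post-intervention step to simulated monthly crash counts; the 104/28-month split mirrors the study design, but the counts and coefficients are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
months = np.arange(132)                    # 104 pre + 28 post, as in the study design
post = (months >= 104).astype(float)

# Simulated monthly fatal-crash counts with a step reduction after the policy
lam = np.exp(3.6 + 0.001 * months - 0.30 * post)
y = rng.poisson(lam)

X = sm.add_constant(np.column_stack([months, post]))
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
step_change_pct = (np.exp(fit.params[2]) - 1) * 100   # coefficient of the post indicator
print(f"estimated post-intervention change: {step_change_pct:.1f}%")
```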

  7. Small-Scale Drop-Size Variability: Empirical Models for Drop-Size-Dependent Clustering in Clouds

    NASA Technical Reports Server (NTRS)

    Marshak, Alexander; Knyazikhin, Yuri; Larsen, Michael L.; Wiscombe, Warren J.

    2005-01-01

    By analyzing aircraft measurements of individual drop sizes in clouds, it has been shown in a companion paper that the probability of finding a drop of radius r at a linear scale l decreases as l^D(r), where 0 ≤ D(r) ≤ 1. This paper shows striking examples of the spatial distribution of large cloud drops using models that simulate the observed power laws. In contrast to currently used models that assume homogeneity and a Poisson distribution of cloud drops, these models illustrate strong drop clustering, especially with larger drops. The degree of clustering is determined by the observed exponents D(r). The strong clustering of large drops arises naturally from the observed power-law statistics. This clustering has vital consequences for rain physics, including how fast rain can form. For radiative transfer theory, clustering of large drops enhances their impact on the cloud optical path. The clustering phenomenon also helps explain why remotely sensed cloud drop size is generally larger than that measured in situ.

  8. Small area estimation for estimating the number of infant mortality in West Java, Indonesia

    NASA Astrophysics Data System (ADS)

    Anggreyani, Arie; Indahwati, Kurnia, Anang

    2016-02-01

    Demographic and Health Survey Indonesia (DHSI) is a nationally designed survey to provide information regarding birth rate, mortality rate, family planning and health. DHSI was conducted by BPS in cooperation with the National Population and Family Planning Institution (BKKBN), the Indonesia Ministry of Health (KEMENKES) and USAID. Based on the publication of DHSI 2012, the infant mortality rate for the five-year period before the survey was 32 per 1000 live births. In this paper, Small Area Estimation (SAE) is used to estimate the number of infant deaths in districts of West Java. SAE is a special model of Generalized Linear Mixed Models (GLMM). In this case, the incidence of infant mortality follows a Poisson distribution, which carries an equidispersion assumption. The methods used to handle overdispersion are the negative binomial and quasi-likelihood models. Based on the results of the analysis, the quasi-likelihood model is the best model to overcome the overdispersion problem. The small area estimation uses the basic area-level model. Mean squared error (MSE), based on a resampling method, is used to measure the accuracy of the small area estimates.

  9. Theoretical interpretation of Warburg's impedance in unsupported electrolytic cells.

    PubMed

    Barbero, G

    2017-12-13

    We discuss the origin of Warburg's impedance in unsupported electrolytic cells containing only one group of positive and one group of negative ions. Our analysis is based on the Poisson-Nernst-Planck model, where the generation-recombination phenomenon is neglected. We show that to observe a Warburg-like impedance, the diffusion coefficient of the positive ions has to differ from that of the negative ones, and furthermore the electrodes must not be blocking. We assume that the non-blocking properties of the electrodes can be described by means of an Ohmic model, where the charge exchange between the cell and the external circuit is described by means of an electrode conductivity. For simplicity we consider a symmetric cell. However, our analysis can be easily generalized to more complicated situations, where the cell is not symmetric and the charge exchange is described by the Chang-Jaffe model, or by a linearized version of the Butler-Volmer equation. Our analysis allows justification of the expression for Warburg's impedance proposed previously by several groups, based on incorrect assumptions.

  10. Stock volatility and stroke mortality in a Chinese population.

    PubMed

    Zhang, Yuhao; Wang, Xin; Xu, Xiaohui; Chen, Renjie; Kan, Haidong

    2013-09-01

    This work was done to study the relationship between stock volatility and stroke mortality in Shanghai, China. Daily stroke death numbers and stock performance data from 1 January 2006 to 31 December 2008 in Shanghai were collected from the Shanghai Center for Disease Control and Prevention and Shanghai Stock Exchange (SSE), respectively. Data were analysed with overdispersed generalized linear Poisson models, controlling for long-term and seasonal trends of stroke mortality and weather conditions with natural smooth functions, as well as Index closing value, air pollution levels and day of the week. We observed a U-shaped relationship between the Index change and stroke deaths: both rising and falling of the Index were associated with more deaths, and the fewest deaths coincided with little or no change of the Index. We also examined the absolute daily change of the Index in relation to stroke deaths: each 100-point Index change corresponded to 3.22% [95% confidence interval (CI) 0.45-5.49] increase of stroke deaths. We found that stroke deaths fluctuated with daily stock changes in Shanghai, suggesting that stock volatility may adversely affect cerebrovascular health.

  11. Multidisciplinary perspective intervention with community involvement to decrease antibiotic sales in village groceries in Thailand.

    PubMed

    Arparsrithongsagul, Somsak; Kulsomboon, Vithaya; Zuckerman, Ilene H

    2015-03-01

    In Thailand, antibiotics are rampantly available in village groceries, despite the fact that it is illegal to sell antibiotics without a pharmacy license. This study implemented a multidisciplinary perspectives intervention with community involvement (MPI&CI), which was developed based on information obtained from focus groups that included multidisciplinary stakeholders. Community leaders in the intervention group were trained to implement MPI&CI in their villages. A quasi-experiment with a pretest-posttest design was conducted. Data were collected from 20 villages in Mahasarakham Province (intervention group) along with another 20 villages (comparison group). Using a generalized linear mixed model Poisson regression with repeated measures, groceries in the intervention group had 87% fewer antibiotics available at postintervention compared with preintervention (relative rate = 0.13; 95% confidence interval = 0.07-0.23), whereas the control group had only an 8% reduction in antibiotic availability (relative rate = 0.92; 95% confidence interval = 0.88-0.97) between the 2 time periods. Further study should be made to assess the sustainability and long-term effectiveness of MPI&CI. © 2013 APJPH.

  12. Climate change and temperature rise: implications on food- and water-borne diseases.

    PubMed

    El-Fadel, Mutasem; Ghanimeh, Sophia; Maroun, Rania; Alameddine, Ibrahim

    2012-10-15

    This study attempts to quantify climate-induced increases in morbidity rates associated with food- and water-borne illnesses in the context of an urban coastal city, taking Beirut-Lebanon as a study area. A Poisson generalized linear model was developed to assess the impacts of temperature on the morbidity rate. The model was used with four climatic scenarios to simulate a broad spectrum of driving forces and potential social, economic and technologic evolutions. The correlation established in this study exhibits a decrease in the number of illnesses with increasing temperature until reaching a threshold of 19.2 °C, beyond which the number of morbidity cases increases with temperature. By 2050, the results show a substantial increase in food- and water-borne related morbidity of 16 to 28% that can reach up to 42% by the end of the century under A1FI (fossil fuel intensive development) or can be reversed to ~0% under B1 (lowest emissions trajectory), highlighting the need for early mitigation and adaptation measures. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. Expediting support for the pregnant mothers to obtain antenatal care at public health facilities in rural areas of Balochistan province, Pakistan.

    PubMed

    Ghaffar, Abdul; Pongpanich, Sathirakorn; Ghaffar, Najma; Chapman, Robert Sedgwick; Mureed, Sheh

    2015-01-01

    To identify, and compare the relative importance of, factors associated with antenatal care (ANC) utilization in rural Balochistan, toward framing a policy to increase such utilization. This cross-sectional study was conducted among 513 pregnant women in Jhal Magsi District, Balochistan, in 2011. A standardized interviewer-administered questionnaire was used. Predisposing, enabling, and reinforcing factors were evaluated with generalized linear models (Poisson distribution and log link). Prevalence of any ANC was only 14.4%. Predisposing, enabling, and reinforcing factors were all important determinants of ANC utilization. Reinforcing factors were clearly the most important; the husband's support for ANC was more important than support from other community members. Among predisposing factors, higher income, education, occupation, and better knowledge regarding the benefits of ANC were positively and statistically significantly associated with ANC. However, an increased number of children showed a negative association. Among enabling factors, a complication-free pregnancy showed a positive, significant association with ANC at a public health facility. It is very important to increase antenatal care utilization in the study area and similar areas. Policy to achieve this should focus on enhancing support from the husband.
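
    The abstract describes Poisson-distribution, log-link generalized linear models applied to ANC utilization; one common way to set this up for a binary outcome is a modified Poisson regression with robust standard errors, sketched below on simulated data. The robust-variance choice and all covariates and values are assumptions for illustration, not details taken from the study.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 513
husband_support = rng.binomial(1, 0.3, n)
education = rng.binomial(1, 0.4, n)

# Simulated binary outcome: any antenatal care (ANC) use, overall prevalence kept low
p = np.clip(0.05 * np.exp(1.2 * husband_support + 0.5 * education), 0, 1)
anc = rng.binomial(1, p)

X = sm.add_constant(np.column_stack([husband_support, education]))
# Modified Poisson regression: the log link gives prevalence ratios; robust (HC) errors
# are used here because the outcome is binary (a common choice, assumed, not stated).
fit = sm.GLM(anc, X, family=sm.families.Poisson()).fit(cov_type="HC1")
print("prevalence ratios:", np.exp(fit.params[1:]).round(2))
```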

  14. Comparison of RF spectrum prediction methods for dynamic spectrum access

    NASA Astrophysics Data System (ADS)

    Kovarskiy, Jacob A.; Martone, Anthony F.; Gallagher, Kyle A.; Sherbondy, Kelly D.; Narayanan, Ram M.

    2017-05-01

    Dynamic spectrum access (DSA) refers to the adaptive utilization of today's busy electromagnetic spectrum. Cognitive radio/radar technologies require DSA to intelligently transmit and receive information in changing environments. Predicting radio frequency (RF) activity reduces sensing time and energy consumption for identifying usable spectrum. Typical spectrum prediction methods involve modeling spectral statistics with Hidden Markov Models (HMM) or various neural network structures. HMMs describe the time-varying state probabilities of Markov processes as a dynamic Bayesian network. Neural Networks model biological brain neuron connections to perform a wide range of complex and often non-linear computations. This work compares HMM, Multilayer Perceptron (MLP), and Recurrent Neural Network (RNN) algorithms and their ability to perform RF channel state prediction. Monte Carlo simulations on both measured and simulated spectrum data evaluate the performance of these algorithms. Generalizing spectrum occupancy as an alternating renewal process allows Poisson random variables to generate simulated data while energy detection determines the occupancy state of measured RF spectrum data for testing. The results suggest that neural networks achieve better prediction accuracy and prove more adaptable to changing spectral statistics than HMMs given sufficient training data.
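
    A minimal sketch of the simulation idea mentioned above, treating spectrum occupancy as an alternating renewal process, is shown below, with exponentially distributed busy and idle durations sampled onto a unit time grid. The mean durations are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_occupancy(n_slots, mean_busy=5.0, mean_idle=8.0):
    """Alternating renewal occupancy: exponential busy/idle holding times,
    sampled on a unit time grid (1 = channel occupied)."""
    state, occ = 0, []
    while len(occ) < n_slots:
        dur = rng.exponential(mean_busy if state else mean_idle)
        occ.extend([state] * max(1, int(round(dur))))
        state = 1 - state
    return np.array(occ[:n_slots])

channel = simulate_occupancy(2000)
print("fraction of time occupied:", channel.mean().round(3))
```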

  15. Modeling Creep Processes in Aging Polymers

    NASA Astrophysics Data System (ADS)

    Olali, N. V.; Voitovich, L. V.; Zazimko, N. N.; Malezhik, M. P.

    2016-03-01

    The photoelastic method is generalized to creep in hereditary aging materials. Optical-creep curves and mechanical-creep or optical-relaxation curves are used to interpret fringe patterns. For materials with constant Poisson's ratio, it is sufficient to use mechanical- or optical-creep curves for this purpose.

  16. Electromagnetic gyrokinetic simulation in GTS

    NASA Astrophysics Data System (ADS)

    Ma, Chenhao; Wang, Weixing; Startsev, Edward; Lee, W. W.; Ethier, Stephane

    2017-10-01

    We report the recent development in electromagnetic simulations for general toroidal geometry based on the particle-in-cell gyrokinetic code GTS. Because of the cancellation problem, the EM gyrokinetic simulation has numerical difficulties in the MHD limit where k⊥ρi → 0 and/or β > me/mi. Recently, several approaches have been developed to circumvent this problem: (1) a p∥ formulation with the analytical skin term iteratively approximated by simulation particles (Yang Chen); (2) a modified p∥ formulation with ∫ dt E∥ used in place of A∥ (Mishchenko); (3) a conservative scheme where the electron density perturbation for the Poisson equation is calculated from an electron continuity equation (Bao); (4) a double-split-weight scheme with two weights, one for the Poisson equation and one for the time derivative of Ampère's law, each with different splits designed to remove large terms from the Vlasov equation (Startsev). These algorithms are being implemented into the GTS framework for general toroidal geometry. The performance of these different algorithms will be compared for various EM modes.

  17. Determination of elastic constants of a generally orthotropic plate by modal analysis

    NASA Astrophysics Data System (ADS)

    Lai, T. C.; Lau, T. C.

    1993-01-01

    This paper describes a method of finding the elastic constants of a generally orthotropic composite thin plate through modal analysis based on a Rayleigh-Ritz formulation. The natural frequencies and mode shapes for a plate with free-free boundary conditions are obtained with chirp excitation. Based on the eigenvalue equation and the constitutive equations of the plate, an iteration scheme is derived using the experimentally determined natural frequencies to arrive at a set of converged values for the elastic constants. Four sets of experimental data are required for the four independent constants: namely the two Young's moduli E1 and E2, the in-plane shear modulus G12, and one Poisson's ratio nu12. The other Poisson's ratio nu21 can then be determined from the relationship among the constants. Comparison with static test results indicates good agreement. Choosing the right combinations of natural modes together with a set of reasonable initial estimates for the constants to start the iteration has been found to be crucial in achieving convergence.
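
    The "relationship among the constants" referred to above is the standard reciprocity relation for an orthotropic lamina, nu21/E2 = nu12/E1, so the minor Poisson's ratio follows directly once E1, E2 and nu12 are known. A tiny numerical illustration (with invented moduli) follows.

```python
# Reciprocity relation for an orthotropic lamina: nu21 / E2 = nu12 / E1
E1, E2 = 140.0e9, 10.0e9      # illustrative Young's moduli, Pa
nu12 = 0.30                   # measured major Poisson's ratio
nu21 = nu12 * E2 / E1         # minor Poisson's ratio follows from reciprocity
print(f"nu21 = {nu21:.4f}")
```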

  18. A Fast and Robust Poisson-Boltzmann Solver Based on Adaptive Cartesian Grids

    PubMed Central

    Boschitsch, Alexander H.; Fenley, Marcia O.

    2011-01-01

    An adaptive Cartesian grid (ACG) concept is presented for the fast and robust numerical solution of the 3D Poisson-Boltzmann Equation (PBE) governing the electrostatic interactions of large-scale biomolecules and highly charged multi-biomolecular assemblies such as ribosomes and viruses. The ACG offers numerous advantages over competing grid topologies such as regular 3D lattices and unstructured grids. For very large biological molecules and multi-biomolecule assemblies, the total number of grid-points is several orders of magnitude less than that required in a conventional lattice grid used in the current PBE solvers thus allowing the end user to obtain accurate and stable nonlinear PBE solutions on a desktop computer. Compared to tetrahedral-based unstructured grids, ACG offers a simpler hierarchical grid structure, which is naturally suited to multigrid, relieves indirect addressing requirements and uses fewer neighboring nodes in the finite difference stencils. Construction of the ACG and determination of the dielectric/ionic maps are straightforward, fast and require minimal user intervention. Charge singularities are eliminated by reformulating the problem to produce the reaction field potential in the molecular interior and the total electrostatic potential in the exterior ionic solvent region. This approach minimizes grid-dependency and alleviates the need for fine grid spacing near atomic charge sites. The technical portion of this paper contains three parts. First, the ACG and its construction for general biomolecular geometries are described. Next, a discrete approximation to the PBE upon this mesh is derived. Finally, the overall solution procedure and multigrid implementation are summarized. Results obtained with the ACG-based PBE solver are presented for: (i) a low dielectric spherical cavity, containing interior point charges, embedded in a high dielectric ionic solvent – analytical solutions are available for this case, thus allowing rigorous assessment of the solution accuracy; (ii) a pair of low dielectric charged spheres embedded in a ionic solvent to compute electrostatic interaction free energies as a function of the distance between sphere centers; (iii) surface potentials of proteins, nucleic acids and their larger-scale assemblies such as ribosomes; and (iv) electrostatic solvation free energies and their salt sensitivities – obtained with both linear and nonlinear Poisson-Boltzmann equation – for a large set of proteins. These latter results along with timings can serve as benchmarks for comparing the performance of different PBE solvers. PMID:21984876

  19. Stability of Poisson Equilibria and Hamiltonian Relative Equilibria by Energy Methods

    NASA Astrophysics Data System (ADS)

    Patrick, George W.; Roberts, Mark; Wulff, Claudia

    2004-12-01

    We develop a general stability theory for equilibrium points of Poisson dynamical systems and relative equilibria of Hamiltonian systems with symmetries, including several generalisations of the Energy-Casimir and Energy-Momentum Methods. Using a topological generalisation of Lyapunov’s result that an extremal critical point of a conserved quantity is stable, we show that a Poisson equilibrium is stable if it is an isolated point in the intersection of a level set of a conserved function with a subset of the phase space that is related to the topology of the symplectic leaf space at that point. This criterion is applied to generalise the energy-momentum method to Hamiltonian systems which are invariant under non-compact symmetry groups for which the coadjoint orbit space is not Hausdorff. We also show that a G-stable relative equilibrium satisfies the stronger condition of being A-stable, where A is a specific group-theoretically defined subset of G which contains the momentum isotropy subgroup of the relative equilibrium. The results are illustrated by an application to the stability of a rigid body in an ideal irrotational fluid.

  20. Compound Poisson Law for Hitting Times to Periodic Orbits in Two-Dimensional Hyperbolic Systems

    NASA Astrophysics Data System (ADS)

    Carney, Meagan; Nicol, Matthew; Zhang, Hong-Kun

    2017-11-01

    We show that a compound Poisson distribution holds for scaled exceedances of observables φ uniquely maximized at a periodic point ζ in a variety of two-dimensional hyperbolic dynamical systems with singularities (M, T, μ), including the billiard maps of Sinai dispersing billiards in both the finite and infinite horizon case. The observable we consider is of the form φ(z) = -ln d(z, ζ), where d is a metric defined in terms of the stable and unstable foliation. The compound Poisson process we obtain is a Pólya-Aeppli distribution of index θ. We calculate θ in terms of the derivative of the map T. Furthermore, if we define M_n = max{φ, ..., φ ∘ T^n} and u_n(τ) by lim_{n→∞} n μ(φ > u_n(τ)) = τ, the maximal process satisfies an extreme value law of the form μ(M_n ≤ u_n) = e^{-θτ}. These results generalize to a broader class of functions maximized at ζ, though the formulas regarding the parameters in the distribution need to be modified.

  1. Hierarchical dose response of E. coli O157:H7 from human outbreaks incorporating heterogeneity in exposure.

    PubMed

    Teunis, P F M; Ogden, I D; Strachan, N J C

    2008-06-01

    The infectivity of pathogenic microorganisms is a key factor in the transmission of an infectious disease in a susceptible population. Microbial infectivity is generally estimated from dose-response studies in human volunteers. This can only be done with mildly pathogenic organisms. Here a hierarchical Beta-Poisson dose-response model is developed utilizing data from human outbreaks. On the lowest level each outbreak is modelled separately and these are then combined at a second level to produce a group dose-response relation. The distribution of foodborne pathogens often shows strong heterogeneity and this is incorporated by introducing an additional parameter to the dose-response model, accounting for the degree of overdispersion relative to the Poisson distribution. It was found that heterogeneity considerably influences the shape of the dose-response relationship and increases uncertainty in predicted risk. This uncertainty is greater than in previously reported surrogate and outbreak models using a single level of analysis. Monte Carlo parameter samples (alpha, beta of the Beta-Poisson model) can be readily incorporated in risk assessment models built using tools such as S-plus and @Risk.

  2. Integrable nonlinear Schrödinger system on a lattice with three structural elements in the unit cell

    NASA Astrophysics Data System (ADS)

    Vakhnenko, Oleksiy O.

    2018-05-01

    Developing the idea of increasing the number of structural elements in the unit cell of a quasi-one-dimensional lattice as applied to the semi-discrete integrable systems of nonlinear Schrödinger type, we construct the zero-curvature representation for the general integrable nonlinear system on a lattice with three structural elements in the unit cell. The integrability of the obtained general system permits to find explicitly a number of local conservation laws responsible for the main features of system dynamics and in particular for the so-called natural constraints separating the field variables into the basic and the concomitant ones. Thus, considering the reduction to the semi-discrete integrable system of nonlinear Schrödinger type, we revealed the essentially nontrivial impact of concomitant fields on the Poisson structure and on the whole Hamiltonian formulation of system dynamics caused by the nonzero background values of these fields. On the other hand, the zero-curvature representation of a general nonlinear system serves as an indispensable key to the dressing procedure of system integration based upon the Darboux transformation of the auxiliary linear problem and the implicit Bäcklund transformation of field variables. Due to the symmetries inherent to the six-component semi-discrete integrable nonlinear Schrödinger system with attractive-type nonlinearities, the Darboux-Bäcklund dressing scheme is shown to be simplified considerably, giving rise to the appropriately parameterized multi-component soliton solution consisting of six basic and four concomitant components.

  3. Properties of the Bivariate Delayed Poisson Process

    DTIC Science & Technology

    1974-07-01

    and Lewis (1972) in their Berkeley Symposium paper and here their analysis of the bivariate Poisson processes (without Poisson noise) is carried... Poisson processes. They cannot, however, be independent Poisson processes because their events are associated in pairs by the displacement centres...process because its marginal processes for events of each type are themselves (univariate) Poisson processes. Cox and Lewis (1972) assumed a

  4. Long-term statistics of extreme tsunami height at Crescent City

    NASA Astrophysics Data System (ADS)

    Dong, Sheng; Zhai, Jinjin; Tao, Shanshan

    2017-06-01

    Historically, Crescent City is one of the most vulnerable communities impacted by tsunamis along the west coast of the United States, largely attributed to its offshore geography. Trans-ocean tsunamis usually produce large wave runup at Crescent Harbor resulting in catastrophic damages, property loss and human death. How to determine the return values of tsunami height using relatively short-term observation data is of great significance to assess the tsunami hazards and improve engineering design along the coast of Crescent City. In the present study, the extreme tsunami heights observed along the coast of Crescent City from 1938 to 2015 are fitted using six different probabilistic distributions, namely, the Gumbel distribution, the Weibull distribution, the maximum entropy distribution, the lognormal distribution, the generalized extreme value distribution and the generalized Pareto distribution. The maximum likelihood method is applied to estimate the parameters of all above distributions. Both Kolmogorov-Smirnov test and root mean square error method are utilized for goodness-of-fit test and the better fitting distribution is selected. Assuming that the occurrence frequency of tsunami in each year follows the Poisson distribution, the Poisson compound extreme value distribution can be used to fit the annual maximum tsunami amplitude, and then the point and interval estimations of return tsunami heights are calculated for structural design. The results show that the Poisson compound extreme value distribution fits tsunami heights very well and is suitable to determine the return tsunami heights for coastal disaster prevention.
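
    A small illustration of the compound model described above: if the number of tsunami events per year is Poisson with rate λ and individual event heights follow a distribution G, the annual maximum has CDF F(x) = exp(-λ(1 - G(x))), which can be inverted for return levels. The Gumbel choice for G and all parameter values below are illustrative assumptions, not fitted values from the study.

```python
import numpy as np
from scipy import stats

lam = 1.8                                      # mean number of events per year (illustrative)
gumbel = stats.gumbel_r(loc=0.5, scale=0.4)    # event-height distribution G (illustrative)

def compound_poisson_cdf(x):
    """Annual-maximum CDF under the compound model: F(x) = exp(-lam * (1 - G(x)))."""
    return np.exp(-lam * (1.0 - gumbel.cdf(x)))

def return_level(T):
    """Height exceeded on average once every T years: solve F(x) = 1 - 1/T for x."""
    target_G = 1.0 + np.log(1.0 - 1.0 / T) / lam
    return gumbel.ppf(target_G)

for T in (10, 50, 100):
    print(f"{T:4d}-year tsunami height: {return_level(T):.2f} m")
```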

  5. Conservative regularization of compressible dissipationless two-fluid plasmas

    NASA Astrophysics Data System (ADS)

    Krishnaswami, Govind S.; Sachdev, Sonakshi; Thyagaraja, A.

    2018-02-01

    This paper extends our earlier approach [cf. A. Thyagaraja, Phys. Plasmas 17, 032503 (2010) and Krishnaswami et al., Phys. Plasmas 23, 022308 (2016)] to obtaining a priori bounds on enstrophy in neutral fluids and ideal magnetohydrodynamics. This results in a far-reaching local, three-dimensional, non-linear, dispersive generalization of a KdV-type regularization to compressible/incompressible dissipationless 2-fluid plasmas and models derived therefrom (quasi-neutral, Hall, and ideal MHD). It involves the introduction of vortical and magnetic "twirl" terms λ_l² (w_l + (q_l/m_l) B) × (∇ × w_l) in the ion/electron velocity equations (l = i, e), where w_l are vorticities. The cut-off lengths λ_l and number densities n_l must satisfy λ_l² n_l = C_l, where C_l are constants. A novel feature is that the "flow" current ∑_l q_l n_l v_l in Ampère's law is augmented by a solenoidal "twirl" current ∑_l ∇ × ∇ × (λ_l² j_flow,l). The resulting equations imply conserved linear and angular momenta and a positive definite swirl energy density E* which includes an enstrophic contribution ∑_l (1/2) λ_l² ρ_l w_l². It is shown that the equations admit a Hamiltonian-Poisson bracket formulation. Furthermore, singularities in ∇ × B are conservatively regularized by adding (λ_B²/2μ_0)(∇ × B)² to E*. Finally, it is proved that among regularizations that admit a Hamiltonian formulation and preserve the continuity equations along with the symmetries of the ideal model, the twirl term is unique and minimal in non-linearity and space derivatives of velocities.

  6. A simple model for electrical charge in globular macromolecules and linear polyelectrolytes in solution

    NASA Astrophysics Data System (ADS)

    Krishnan, M.

    2017-05-01

    We present a model for calculating the net and effective electrical charge of globular macromolecules and linear polyelectrolytes such as proteins and DNA, given the concentration of monovalent salt and pH in solution. The calculation is based on a numerical solution of the non-linear Poisson-Boltzmann equation using a finite element discretized continuum approach. The model simultaneously addresses the phenomena of charge regulation and renormalization, both of which underpin the electrostatics of biomolecules in solution. We show that while charge regulation addresses the true electrical charge of a molecule arising from the acid-base equilibria of its ionizable groups, charge renormalization finds relevance in the context of a molecule's interaction with another charged entity. Writing this electrostatic interaction free energy in terms of a local electrical potential, we obtain an "interaction charge" for the molecule which we demonstrate agrees closely with the "effective charge" discussed in charge renormalization and counterion-condensation theories. The predictions of this model agree well with direct high-precision measurements of effective electrical charge of polyelectrolytes such as nucleic acids and disordered proteins in solution, without tunable parameters. Including the effective interior dielectric constant for compactly folded molecules as a tunable parameter, the model captures measurements of effective charge as well as published trends of pKa shifts in globular proteins. Our results suggest a straightforward general framework to model electrostatics in biomolecules in solution. In offering a platform that directly links theory and experiment, these calculations could foster a systematic understanding of the interrelationship between molecular 3D structure and conformation, electrical charge and electrostatic interactions in solution. The model could find particular relevance in situations where molecular crystal structures are not available or rapid, reliable predictions are desired.

  7. Short-term effects of meteorological factors on hand, foot and mouth disease among children in Shenzhen, China: Non-linearity, threshold and interaction.

    PubMed

    Zhang, Zhen; Xie, Xu; Chen, Xiliang; Li, Yuan; Lu, Yan; Mei, Shujiang; Liao, Yuxue; Lin, Hualiang

    2016-01-01

    Various meteorological factors have been associated with hand, foot and mouth disease (HFMD) among children; however, fewer studies have examined the non-linearity and interaction among the meteorological factors. A generalized additive model with a log link allowing Poisson auto-regression and over-dispersion was applied to investigate the short-term effects of daily meteorological factors on childhood HFMD with adjustment for potential confounding factors. We found positive effects of mean temperature and wind speed: the excess relative risk (ERR) was 2.75% (95% CI: 1.98%, 3.53%) for a one degree increase in daily mean temperature on lag day 6, and 3.93% (95% CI: 2.16% to 5.73%) for a 1 m/s increase in wind speed on lag day 3. We found a non-linear effect of relative humidity with thresholds, the low threshold at 45% and the high threshold at 85%, within which there was a positive effect; the ERR was 1.06% (95% CI: 0.85% to 1.27%) for a 1 percent increase in relative humidity on lag day 5. No significant effect was observed for rainfall or sunshine duration. For the interactive effects, we found a weak additive interaction between mean temperature and relative humidity, and slightly antagonistic interactions between mean temperature and wind speed, and between relative humidity and wind speed in the additive models, but the interactions were not statistically significant. This study suggests that mean temperature, relative humidity and wind speed might be risk factors for childhood HFMD in Shenzhen, and the interaction analysis indicates that these meteorological factors might have played their roles individually. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. The clustering of galaxies in the completed SDSS-III Baryon Oscillation Spectroscopic Survey: cosmic flows and cosmic web from luminous red galaxies

    NASA Astrophysics Data System (ADS)

    Ata, Metin; Kitaura, Francisco-Shu; Chuang, Chia-Hsun; Rodríguez-Torres, Sergio; Angulo, Raul E.; Ferraro, Simone; Gil-Marín, Hector; McDonald, Patrick; Hernández Monteagudo, Carlos; Müller, Volker; Yepes, Gustavo; Autefage, Mathieu; Baumgarten, Falk; Beutler, Florian; Brownstein, Joel R.; Burden, Angela; Eisenstein, Daniel J.; Guo, Hong; Ho, Shirley; McBride, Cameron; Neyrinck, Mark; Olmstead, Matthew D.; Padmanabhan, Nikhil; Percival, Will J.; Prada, Francisco; Rossi, Graziano; Sánchez, Ariel G.; Schlegel, David; Schneider, Donald P.; Seo, Hee-Jong; Streblyanska, Alina; Tinker, Jeremy; Tojeiro, Rita; Vargas-Magana, Mariana

    2017-06-01

    We present a Bayesian phase-space reconstruction of the cosmic large-scale matter density and velocity fields from the Sloan Digital Sky Survey-III Baryon Oscillations Spectroscopic Survey Data Release 12 CMASS galaxy clustering catalogue. We rely on a given Λ cold dark matter cosmology, a mesh resolution in the range of 6-10 h-1 Mpc, and a lognormal-Poisson model with a redshift-dependent non-linear bias. The bias parameters are derived from the data and a general renormalized perturbation theory approach. We use combined Gibbs and Hamiltonian sampling, implemented in the argo code, to iteratively reconstruct the dark matter density field and the coherent peculiar velocities of individual galaxies, correcting hereby for coherent redshift space distortions. Our tests relying on accurate N-body-based mock galaxy catalogues show unbiased real space power spectra of the non-linear density field up to k ˜ 0.2 h Mpc-1, and vanishing quadrupoles down to r ˜ 20 h-1 Mpc. We also demonstrate that the non-linear cosmic web can be obtained from the tidal field tensor based on the Gaussian component of the reconstructed density field. We find that the reconstructed velocities have a statistical correlation coefficient compared to the true velocities of each individual light-cone mock galaxy of r ˜ 0.68 including about 10 per cent of satellite galaxies with virial motions (about r = 0.75 without satellites). The power spectra of the velocity divergence agree well with theoretical predictions up to k ˜ 0.2 h Mpc-1. This work will be especially useful to improve, for example, baryon acoustic oscillation reconstructions, kinematic Sunyaev-Zeldovich, integrated Sachs-Wolfe measurements or environmental studies.

  9. On-Orbit Collision Hazard Analysis in Low Earth Orbit Using the Poisson Probability Distribution (Version 1.0)

    DOT National Transportation Integrated Search

    1992-08-26

    This document provides the basic information needed to estimate a general : probability of collision in Low Earth Orbit (LEO). Although the method : described in this primer is a first order approximation, its results are : reasonable. Furthermore, t...
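
    The abstract above is truncated in this record, but the standard first-order approach it refers to treats encounters as a Poisson process, so the collision probability over a mission is P = 1 - exp(-N), with N the expected number of encounters. The sketch below uses invented, order-of-magnitude inputs purely for illustration.

```python
import numpy as np

# First-order Poisson estimate of on-orbit collision probability:
#   expected encounters N = spatial_density * collision_area * relative_speed * time
#   P(collision) = 1 - exp(-N)
spatial_density = 1e-8                    # objects per km^3 at the altitude of interest (illustrative)
collision_area  = 1e-5                    # combined cross-section, km^2 (~10 m^2)
relative_speed  = 10.0                    # km/s, a typical LEO encounter speed
mission_time    = 5 * 365.25 * 86400.0    # 5 years in seconds

expected_hits = spatial_density * collision_area * relative_speed * mission_time
p_collision = 1.0 - np.exp(-expected_hits)
print(f"expected encounters: {expected_hits:.2e},  P(collision): {p_collision:.2e}")
```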

  10. Yes, the GIGP Really Does Work--And Is Workable!

    ERIC Educational Resources Information Center

    Burrell, Quentin L.; Fenton, Michael R.

    1993-01-01

    Discusses the generalized inverse Gaussian-Poisson (GIGP) process for informetric modeling. Negative binomial distribution is discussed, construction of the GIGP process is explained, zero-truncated GIGP is considered, and applications of the process with journals, library circulation statistics, and database index terms are described. (50…

  11. Twisted Quantum Lax Equations

    NASA Astrophysics Data System (ADS)

    Jurčo, Branislav; Schupp, Peter

    We show the construction of twisted quantum Lax equations associated with quantum groups, and solve these equations using factorization properties of the corresponding quantum groups. Our construction generalizes in many respects the AKS construction for Lie groups and the construction of M. A. Semenov-Tian-Shansky for the Lie-Poisson case.

  12. Crash data modeling with a generalized estimator.

    PubMed

    Ye, Zhirui; Xu, Yueru; Lord, Dominique

    2018-08-01

    The investigation of relationships between traffic crashes and relevant factors is important in traffic safety management. Various methods have been developed for modeling crash data. In real world scenarios, crash data often display the characteristics of over-dispersion. However, on occasions, some crash datasets have exhibited under-dispersion, especially in cases where the data are conditioned upon the mean. The commonly used models (such as the Poisson and the NB regression models) have limitations in coping with various degrees of dispersion. In light of this, a generalized event count (GEC) model, which can be generally used to handle over-, equi-, and under-dispersed data, is proposed in this study. This model was first applied to case studies using data from Toronto, characterized by over-dispersion, and then to crash data from railway-highway crossings in Korea, characterized by under-dispersion. The results from the GEC model were compared with those from the Negative binomial and the hyper-Poisson models. The case studies show that the proposed model provides good performance for crash data characterized by over- and under-dispersion. Moreover, the proposed model simplifies the modeling process and the prediction of crash data. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. ColDICE: A parallel Vlasov–Poisson solver using moving adaptive simplicial tessellation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sousbie, Thierry, E-mail: tsousbie@gmail.com; Department of Physics, The University of Tokyo, Tokyo 113-0033; Research Center for the Early Universe, School of Science, The University of Tokyo, Tokyo 113-0033

    2016-09-15

    Resolving numerically Vlasov–Poisson equations for initially cold systems can be reduced to following the evolution of a three-dimensional sheet evolving in six-dimensional phase-space. We describe a public parallel numerical algorithm consisting in representing the phase-space sheet with a conforming, self-adaptive simplicial tessellation of which the vertices follow the Lagrangian equations of motion. The algorithm is implemented both in six- and four-dimensional phase-space. Refinement of the tessellation mesh is performed using the bisection method and a local representation of the phase-space sheet at second order relying on additional tracers created when needed at runtime. In order to preserve in the best way the Hamiltonian nature of the system, refinement is anisotropic and constrained by measurements of local Poincaré invariants. Resolution of the Poisson equation is performed using the fast Fourier method on a regular rectangular grid, similarly to particle-in-cell codes. To compute the density projected onto this grid, the intersection of the tessellation and the grid is calculated using the method of Franklin and Kankanhalli [65–67] generalised to linear order. As preliminary tests of the code, we study in four-dimensional phase-space the evolution of an initially small patch in a chaotic potential and the cosmological collapse of a fluctuation composed of two sinusoidal waves. We also perform a “warm” dark matter simulation in six-dimensional phase-space that we use to check the parallel scaling of the code.
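
    The Poisson solve mentioned above (fast Fourier method on a regular grid, as in particle-in-cell codes) can be sketched in a few lines; the example below solves ∇²φ = ρ on a periodic 2D grid with numpy FFTs and checks it against an analytic mode. It is a generic illustration, not the ColDICE implementation.

```python
import numpy as np

def solve_poisson_fft(rho, box_size=1.0):
    """Solve laplacian(phi) = rho on a periodic square grid using FFTs."""
    n = rho.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    rho_hat = np.fft.fft2(rho)
    phi_hat = np.zeros_like(rho_hat)
    nonzero = k2 > 0
    phi_hat[nonzero] = -rho_hat[nonzero] / k2[nonzero]   # zero-mean solution at k = 0
    return np.real(np.fft.ifft2(phi_hat))

# Quick check against an analytic mode: rho = sin(2*pi*x) has phi = -rho / (2*pi)**2
n = 64
x = np.linspace(0.0, 1.0, n, endpoint=False)
rho = np.sin(2 * np.pi * x)[:, None] * np.ones(n)[None, :]
phi = solve_poisson_fft(rho)
print("max error:", np.max(np.abs(phi + rho / (2 * np.pi) ** 2)))
```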

  14. Anisotropic norm-oriented mesh adaptation for a Poisson problem

    NASA Astrophysics Data System (ADS)

    Brèthes, Gautier; Dervieux, Alain

    2016-10-01

    We present a novel formulation for the mesh adaptation of the approximation of a Partial Differential Equation (PDE). The discussion is restricted to a Poisson problem. The proposed norm-oriented formulation extends the goal-oriented formulation since it is equation-based and uses an adjoint. At the same time, the norm-oriented formulation somewhat supersedes the goal-oriented one since it is basically a solution-convergent method. Indeed, goal-oriented methods rely on the reduction of the error in evaluating a chosen scalar output with the consequence that, as mesh size is increased (more degrees of freedom), only this output is proven to tend to its continuous analog while the solution field itself may not converge. A remarkable quality of goal-oriented metric-based adaptation is the mathematical formulation of the mesh adaptation problem under the form of the optimization, in the well-identified set of metrics, of a well-defined functional. In the new proposed formulation, we amplify this advantage. We search, in the same well-identified set of metrics, the minimum of a norm of the approximation error. The norm is prescribed by the user and the method allows addressing the case of multi-objective adaptation like, for example in aerodynamics, adapting the mesh for drag, lift and moment in one shot. In this work, we consider the basic linear finite-element approximation and restrict our study to the L2 norm in order to enjoy second-order convergence. Numerical examples for the Poisson problem are computed.

  15. Zero adjusted models with applications to analysing helminths count data.

    PubMed

    Chipeta, Michael G; Ngwira, Bagrey M; Simoonga, Christopher; Kazembe, Lawrence N

    2014-11-27

    It is common in public health and epidemiology that the outcome of interest is a count of event occurrences. Analysing these data using classical linear models is mostly inappropriate, even after transformation of the outcome variables, because of overdispersion. Zero-adjusted mixture count models such as zero-inflated and hurdle count models are applied to count data when over-dispersion and excess zeros exist. The main objective of the current paper is to apply such models to analyse risk factors associated with human helminths (S. haematobium), particularly in a case where there is a high proportion of zero counts. The data were collected during a community-based randomised control trial assessing the impact of mass drug administration (MDA) with praziquantel in Malawi, and a school-based cross-sectional epidemiology survey in Zambia. Count data models including traditional (Poisson and negative binomial) models, zero-modified models (zero-inflated Poisson and zero-inflated negative binomial) and hurdle models (Poisson logit hurdle and negative binomial logit hurdle) were fitted and compared. Using the Akaike information criterion (AIC), the negative binomial logit hurdle (NBLH) and zero-inflated negative binomial (ZINB) models showed the best performance in both datasets. With regard to capturing zero counts, these models performed better than the other models. This paper showed that the zero-modified NBLH and ZINB models are more appropriate methods for the analysis of data with excess zeros. The choice between hurdle and zero-inflated models should be based on the aim and endpoints of the study.
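
    A hedged sketch of the kind of model comparison described above is given below, using the zero-inflated Poisson and zero-inflated negative binomial classes available in statsmodels on simulated egg-count data with structural zeros; the data-generating values, the single covariate and the intercept-only inflation model are assumptions for illustration (hurdle models are omitted for brevity).

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import (ZeroInflatedPoisson,
                                              ZeroInflatedNegativeBinomialP)

rng = np.random.default_rng(6)
n = 1000
age = rng.uniform(5, 15, n)

# Simulated egg counts: a structural-zero process plus an over-dispersed count process
p_zero = 0.6                                  # probability of a structural zero
lam = np.exp(0.2 + 0.1 * age)
counts = np.where(rng.random(n) < p_zero, 0, rng.negative_binomial(2, 2 / (2 + lam)))

X = sm.add_constant(age)
infl = np.ones((n, 1))                        # intercept-only zero-inflation component
zip_fit = ZeroInflatedPoisson(counts, X, exog_infl=infl).fit(maxiter=200, disp=False)
zinb_fit = ZeroInflatedNegativeBinomialP(counts, X, exog_infl=infl).fit(maxiter=200, disp=False)
print("AIC  ZIP :", round(zip_fit.aic, 1))
print("AIC  ZINB:", round(zinb_fit.aic, 1))
```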

  16. Fitting a defect non-linear model with or without prior, distinguishing nuclear reaction products as an example.

    PubMed

    Helgesson, P; Sjöstrand, H

    2017-11-01

    Fitting a parametrized function to data is important for many researchers and scientists. If the model is non-linear and/or defect, it is not trivial to do correctly and to include an adequate uncertainty analysis. This work presents how the Levenberg-Marquardt algorithm for non-linear generalized least squares fitting can be used with a prior distribution for the parameters and how it can be combined with Gaussian processes to treat model defects. An example, where three peaks in a histogram are to be distinguished, is carefully studied. In particular, the probability r1 for a nuclear reaction to end up in one out of two overlapping peaks is studied. Synthetic data are used to investigate effects of linearizations and other assumptions. For perfect Gaussian peaks, it is seen that the estimated parameters are distributed close to the truth with good covariance estimates. This assumes that the method is applied correctly; for example, prior knowledge should be implemented using a prior distribution and not by assuming that some parameters are perfectly known (if they are not). It is also important to update the data covariance matrix using the fit if the uncertainties depend on the expected value of the data (e.g., for Poisson counting statistics or relative uncertainties). If a model defect is added to the peaks, such that their shape is unknown, a fit which assumes perfect Gaussian peaks becomes unable to reproduce the data, and the results for r1 become biased. It is, however, seen that it is possible to treat the model defect with a Gaussian process with a covariance function tailored for the situation, with hyper-parameters determined by leave-one-out cross validation. The resulting estimates for r1 are virtually unbiased, and the uncertainty estimates agree very well with the underlying uncertainty.
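
    One simple way to combine a Levenberg-Marquardt least-squares fit with a prior, as discussed above, is to append standardized prior residuals to the standardized data residuals; the sketch below does this with scipy.optimize.least_squares for a two-peak example (the Gaussian-process treatment of model defects is not reproduced). Peak shapes, prior values and uncertainties are all illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(7)
x = np.linspace(0.0, 10.0, 200)

def peaks(theta, x):
    a1, mu1, a2, mu2, width = theta
    return (a1 * np.exp(-0.5 * ((x - mu1) / width) ** 2)
            + a2 * np.exp(-0.5 * ((x - mu2) / width) ** 2))

truth = np.array([120.0, 4.2, 80.0, 5.3, 0.6])
y = rng.poisson(peaks(truth, x))                 # Poisson counting statistics
sigma_y = np.sqrt(np.maximum(y, 1.0))            # data uncertainties

prior_mean = np.array([100.0, 4.0, 100.0, 5.5, 0.7])    # illustrative prior knowledge
prior_sigma = np.array([50.0, 0.5, 50.0, 0.5, 0.2])

def residuals(theta):
    # Augment data residuals with prior residuals: generalized least squares with a prior
    r_data = (y - peaks(theta, x)) / sigma_y
    r_prior = (theta - prior_mean) / prior_sigma
    return np.concatenate([r_data, r_prior])

fit = least_squares(residuals, prior_mean, method="lm")
print("estimated parameters:", fit.x.round(3))
```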

  17. Fitting a defect non-linear model with or without prior, distinguishing nuclear reaction products as an example

    NASA Astrophysics Data System (ADS)

    Helgesson, P.; Sjöstrand, H.

    2017-11-01

    Fitting a parametrized function to data is important for many researchers and scientists. If the model is non-linear and/or defect, it is not trivial to do correctly and to include an adequate uncertainty analysis. This work presents how the Levenberg-Marquardt algorithm for non-linear generalized least squares fitting can be used with a prior distribution for the parameters and how it can be combined with Gaussian processes to treat model defects. An example, where three peaks in a histogram are to be distinguished, is carefully studied. In particular, the probability r1 for a nuclear reaction to end up in one out of two overlapping peaks is studied. Synthetic data are used to investigate effects of linearizations and other assumptions. For perfect Gaussian peaks, it is seen that the estimated parameters are distributed close to the truth with good covariance estimates. This assumes that the method is applied correctly; for example, prior knowledge should be implemented using a prior distribution and not by assuming that some parameters are perfectly known (if they are not). It is also important to update the data covariance matrix using the fit if the uncertainties depend on the expected value of the data (e.g., for Poisson counting statistics or relative uncertainties). If a model defect is added to the peaks, such that their shape is unknown, a fit which assumes perfect Gaussian peaks becomes unable to reproduce the data, and the results for r1 become biased. It is, however, seen that it is possible to treat the model defect with a Gaussian process with a covariance function tailored for the situation, with hyper-parameters determined by leave-one-out cross validation. The resulting estimates for r1 are virtually unbiased, and the uncertainty estimates agree very well with the underlying uncertainty.

  18. Cardiovascular diseases and air pollution in Novi Sad, Serbia.

    PubMed

    Jevtić, Marija; Dragić, Nataša; Bijelović, Sanja; Popović, Milka

    2014-04-01

    A large body of evidence has documented that air pollutants have adverse effects on human health as well as on the environment. The aim of this study was to determine whether there was an association between outdoor concentrations of sulfur dioxide (SO2) and nitrogen dioxide (NO2) and the daily number of hospital admissions due to cardiovascular diseases (CVD) in Novi Sad, Serbia among patients aged above 18. The investigation was carried out over a 3-year period (from January 1, 2007 to December 31, 2009) in the area of Novi Sad. The number (N = 10 469) of daily CVD (ICD-10: I00-I99) hospital admissions was collected according to patients' addresses. Daily mean levels of NO2 and SO2, measured in the ambient air of Novi Sad via a network of fixed samplers, were used to represent outdoor air pollution. Associations between air pollutants and hospital admissions were first analyzed using linear regression in a single-pollutant model, and then through single- and multi-pollutant adjusted generalized linear Poisson models. The single-pollutant model (without confounding factors) indicated that there was a linear increase in the number of hospital admissions due to CVD in relation to the linear increase in concentrations of SO2 (p = 0.015; 95% confidence interval (95% CI): 0.144-1.329, R(2) = 0.005) and NO2 (p = 0.007; 95% CI: 0.214-1.361, R(2) = 0.007). However, the single- and multi-pollutant adjusted models revealed that only NO2 was associated with CVD (p = 0.016, relative risk (RR) = 1.049, 95% CI: 1.009-1.091 and p = 0.022, RR = 1.047, 95% CI: 1.007-1.089, respectively). This study shows a significant positive association between hospital admissions due to CVD and outdoor NO2 concentrations in the area of Novi Sad, Serbia.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dechant, Lawrence J.

    Wave packet analysis provides a connection between linear small-disturbance theory and subsequent nonlinear turbulent-spot flow behavior. The traditional association between linear stability analysis and nonlinear wave form is developed via the method of stationary phase, whereby asymptotic (simplified) mean flow solutions are used to estimate dispersion behavior and stationary phase approximations are used to invert the associated Fourier transform. The resulting process typically requires inverting nonlinear algebraic equations, which is best performed numerically and partially diminishes the value of the approximation compared with more complete approaches, e.g., DNS or linear/nonlinear adjoint methods. To obtain a simpler, closed-form analytical result, the complete packet solution is modeled via approximate amplitude (linear convected kinematic wave initial value problem) and local sinusoidal (wave equation) expressions. Significantly, the initial value for the kinematic wave transport expression follows from a separable variable-coefficient approximation to the linearized pressure fluctuation Poisson expression. The resulting amplitude solution, while approximate in nature, nonetheless appears to mimic many of the global features, e.g. transitional flow intermittency and pressure fluctuation magnitude behavior. A low-wave-number wave packet model also recovers meaningful auto-correlation and low-frequency spectral behaviors.

  20. Statistical mapping of count survey data

    USGS Publications Warehouse

    Royle, J. Andrew; Link, W.A.; Sauer, J.R.; Scott, J. Michael; Heglund, Patricia J.; Morrison, Michael L.; Haufler, Jonathan B.; Wall, William A.

    2002-01-01

    We apply a Poisson mixed model to the problem of mapping (or predicting) bird relative abundance from counts collected from the North American Breeding Bird Survey (BBS). The model expresses the logarithm of the Poisson mean as a sum of a fixed term (which may depend on habitat variables) and a random effect which accounts for remaining unexplained variation. The random effect is assumed to be spatially correlated, thus providing a more general model than the traditional Poisson regression approach. Consequently, the model is capable of improved prediction when data are autocorrelated. Moreover, formulation of the mapping problem in terms of a statistical model facilitates a wide variety of inference problems which are cumbersome or even impossible using standard methods of mapping. For example, assessment of prediction uncertainty, including the formal comparison of predictions at different locations, or through time, using the model-based prediction variance is straightforward under the Poisson model (not so with many nominally model-free methods). Also, ecologists may generally be interested in quantifying the response of a species to particular habitat covariates or other landscape attributes. Proper accounting for the uncertainty in these estimated effects is crucially dependent on specification of a meaningful statistical model. Finally, the model may be used to aid in sampling design, by modifying the existing sampling plan in a manner which minimizes some variance-based criterion. Model fitting under this model is carried out using a simulation technique known as Markov Chain Monte Carlo. Application of the model is illustrated using Mourning Dove (Zenaida macroura) counts from Pennsylvania BBS routes. We produce both a model-based map depicting relative abundance, and the corresponding map of prediction uncertainty. We briefly address the issue of spatial sampling design under this model. Finally, we close with some discussion of mapping in relation to habitat structure. Although our models were fit in the absence of habitat information, the resulting predictions show a strong inverse relation with a map of forest cover in the state, as expected. Consequently, the results suggest that the correlated random effect in the model is broadly representing ecological variation, and that BBS data may be generally useful for studying bird-habitat relationships, even in the presence of observer errors and other widely recognized deficiencies of the BBS.
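
    A small simulation sketch (covariance choice, parameter values, and names are assumptions) of the model structure described here, a log-linear Poisson mean combining a habitat covariate with a spatially correlated Gaussian random effect; fitting such a model would proceed by Markov Chain Monte Carlo as in the paper:

        import numpy as np

        rng = np.random.default_rng(2)

        n = 100
        coords = rng.uniform(0.0, 50.0, size=(n, 2))       # hypothetical survey-route locations
        forest = rng.uniform(0.0, 1.0, n)                  # hypothetical habitat covariate

        # exponentially decaying spatial covariance for the random effect
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        sigma2, rho = 0.5, 10.0
        cov = sigma2 * np.exp(-d / rho) + 1e-8 * np.eye(n)
        u = rng.multivariate_normal(np.zeros(n), cov)

        beta0, beta1 = 1.0, -1.5                           # assumed intercept and forest-cover effect
        log_mu = beta0 + beta1 * forest + u                # fixed term plus correlated random effect
        counts = rng.poisson(np.exp(log_mu))               # simulated route counts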

  1. Fractional models of seismoacoustic and electromagnetic activity

    NASA Astrophysics Data System (ADS)

    Shevtsov, Boris; Sheremetyeva, Olga

    2017-10-01

    Statistical models of seismoacoustic and electromagnetic activity caused by deformation disturbances are considered on the basis of the compound Poisson process and its fractional generalizations. Wave representations of these processes are also used. Five regimes of deformation activity are discussed, along with their role in understanding the nature of earthquake precursors.

  2. Exploring Term Dependences in Probabilistic Information Retrieval Model.

    ERIC Educational Resources Information Center

    Cho, Bong-Hyun; Lee, Changki; Lee, Gary Geunbae

    2003-01-01

    Describes a theoretic process to apply Bahadur-Lazarsfeld expansion (BLE) to general probabilistic models and the state-of-the-art 2-Poisson model. Through experiments on two standard document collections, one in Korean and one in English, it is demonstrated that incorporation of term dependences using BLE significantly contributes to performance…

  3. On the Bayesian Nonparametric Generalization of IRT-Type Models

    ERIC Educational Resources Information Center

    San Martin, Ernesto; Jara, Alejandro; Rolin, Jean-Marie; Mouchart, Michel

    2011-01-01

    We study the identification and consistency of Bayesian semiparametric IRT-type models, where the uncertainty on the abilities' distribution is modeled using a prior distribution on the space of probability measures. We show that for the semiparametric Rasch Poisson counts model, simple restrictions ensure the identification of a general…

  4. Classifying next-generation sequencing data using a zero-inflated Poisson model.

    PubMed

    Zhou, Yan; Wan, Xiang; Zhang, Baoxue; Tong, Tiejun

    2018-04-15

    With the development of high-throughput techniques, RNA-sequencing (RNA-seq) is becoming increasingly popular as an alternative for gene expression analysis, such as RNA profiling and classification. Identifying which type of disease a new patient belongs to from RNA-seq data has been recognized as a vital problem in medical research. As RNA-seq data are discrete, statistical methods developed for classifying microarray data cannot be readily applied to RNA-seq data classification. In 2011, Witten proposed a Poisson linear discriminant analysis (PLDA) to classify RNA-seq data. Note, however, that count datasets are frequently characterized by excess zeros in real RNA-seq or microRNA sequence data (e.g., when the sequencing depth is insufficient or for small RNAs 18-30 nucleotides in length). Therefore, it is desirable to develop a new model to analyze RNA-seq data with an excess of zeros. In this paper, we propose a Zero-Inflated Poisson Logistic Discriminant Analysis (ZIPLDA) for RNA-seq data with an excess of zeros. The new method assumes that the data come from a mixture of two distributions: one is a point mass at zero, and the other follows a Poisson distribution. We then consider a logistic relation between the probability of observing zeros and the mean of the genes and the sequencing depth in the model. Simulation studies show that the proposed method performs better than, or at least as well as, the existing methods in a wide range of settings. Two real datasets, including a breast cancer RNA-seq dataset and a microRNA-seq dataset, are also analyzed, and the results agree with the simulations in showing that the proposed method outperforms the existing competitors. The software is available at http://www.math.hkbu.edu.hk/∼tongt. xwan@comp.hkbu.edu.hk or tongt@hkbu.edu.hk. Supplementary data are available at Bioinformatics online.
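
    The following sketch (not the ZIPLDA software) shows the zero-inflated Poisson log-likelihood that underlies this kind of classifier, together with a toy two-class assignment rule; the class parameters are illustrative assumptions:

        import numpy as np
        from scipy.stats import poisson

        def zip_loglik(x, lam, pi):
            # zero-inflated Poisson: with probability pi the count is a structural zero,
            # otherwise it follows Poisson(lam)
            x = np.asarray(x)
            ll = np.where(
                x == 0,
                np.log(pi + (1.0 - pi) * np.exp(-lam)),
                np.log(1.0 - pi) + poisson.logpmf(x, lam),
            )
            return ll.sum()

        # toy rule: assign a new sample to the class with the larger ZIP log-likelihood
        x_new = np.array([0, 0, 3, 1, 0, 7])
        score_a = zip_loglik(x_new, lam=2.0, pi=0.4)   # hypothetical class-A parameters
        score_b = zip_loglik(x_new, lam=5.0, pi=0.1)   # hypothetical class-B parameters
        print("class A" if score_a > score_b else "class B")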

  5. On a Poisson homogeneous space of bilinear forms with a Poisson-Lie action

    NASA Astrophysics Data System (ADS)

    Chekhov, L. O.; Mazzocco, M.

    2017-12-01

    Let 𝒜 be the space of bilinear forms on C^N with defining matrices A, endowed with a quadratic Poisson structure of reflection equation type. The paper begins with a short description of previous studies of this structure, which is then extended to systems of bilinear forms whose dynamics is governed by the natural action A ↦ B A Bᵀ of the Poisson-Lie group GL_N on 𝒜. A classification is given of all possible quadratic brackets on (B, A) ∈ GL_N × 𝒜 preserving the Poisson property of the action, thus endowing 𝒜 with the structure of a Poisson homogeneous space. Besides the product Poisson structure on GL_N × 𝒜, there are two other (mutually dual) structures, which (unlike the product Poisson structure) admit reductions by the Dirac procedure to a space of bilinear forms with block upper triangular defining matrices. Further generalisations of this construction are considered, to triples (B, C, A) ∈ GL_N × GL_N × 𝒜 with the Poisson action A ↦ B A Cᵀ, and it is shown that 𝒜 then acquires the structure of a Poisson symmetric space. Generalisations to chains of transformations and to the quantum and quantum affine algebras are investigated, as well as the relations between constructions of Poisson symmetric spaces and the Poisson groupoid. Bibliography: 30 titles.

  6. Effects of diurnal temperature range on mortality in Hefei city, China

    NASA Astrophysics Data System (ADS)

    Tang, Jing; Xiao, Chang-chun; Li, Yu-rong; Zhang, Jun-qing; Zhai, Hao-yuan; Geng, Xi-ya; Ding, Rui; Zhai, Jin-xia

    2017-12-01

    Although several studies have indicated an association between diurnal temperature range (DTR) and mortality, the results regarding effect modifiers are inconsistent, and few studies have been conducted in developing inland countries. This study aims to evaluate the effects of DTR on cause-specific mortality and whether season, gender, or age might modify any association in Hefei city, China, during 2007-2016. Quasi-Poisson generalized linear regression models combined with a distributed lag non-linear model (DLNM) were applied to evaluate the relationships between DTR and non-accidental, cardiovascular, and respiratory mortality. We observed a J-shaped relationship between DTR and cause-specific mortality. With a DTR of 8.3 °C as the reference, the cumulative effects of extremely high DTR were significantly higher for all types of mortality than the effects of lower or moderate DTR over the full year. When stratified by season, extremely high DTR in spring had a greater impact on all cause-specific mortality than in the other three seasons. Males and the elderly (≥ 65 years) were consistently more susceptible to the extremely high DTR effect than females and the young (< 65 years) for non-accidental and cardiovascular mortality. By contrast, females and the young were more susceptible to the extremely high DTR effect than males and the elderly for respiratory mortality. The study suggests that extremely high DTR is a potential trigger for non-accidental mortality in Hefei city, China. Our findings also highlight the importance of protecting susceptible groups from extremely high DTR, especially in the spring.

  7. Impact of temperature variability on childhood hand, foot and mouth disease in Huainan, China.

    PubMed

    Xu, J; Zhao, D; Su, H; Xie, M; Cheng, J; Wang, X; Li, K; Yang, H; Wen, L; Wang, B

    2016-05-01

    Short-term temperature variation has been shown to be significantly associated with human health. However, little is known about whether temperature change between neighbouring days (TCN) and diurnal temperature range (DTR) have any effect on childhood hand, foot and mouth disease (HFMD). This study aims to explore whether temperature variability has any effect on childhood HFMD. Ecological study. The association between meteorological variables and HFMD cases in Huainan, China, from January 1st 2012 to December 31st 2014 was analysed using Poisson generalized linear regression combined with a distributed lag non-linear model (DLNM), after controlling for long-term trend and seasonality, mean temperature and relative humidity. An adverse effect of TCN on childhood HFMD was observed, and the impact of TCN was greatest at a lag of five days, with a 10% (95% CI: 4%-15%) increase in the daily number of HFMD cases per 3 °C (10th percentile) decrease in TCN. Male children, children aged 0-5 years, scattered children and children in high-risk areas appeared to be more vulnerable to the TCN effect than others. However, there was no significant association between DTR and childhood HFMD. Our findings indicate that TCN drops may increase the incidence of childhood HFMD in Huainan, highlighting the importance of protecting children from forthcoming TCN drops, particularly for those who are male, young, scattered and from high-risk areas. Copyright © 2015 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

  8. Seasonality in trauma admissions - Are daylight and weather variables better predictors than general cyclic effects?

    PubMed

    Røislien, Jo; Søvik, Signe; Eken, Torsten

    2018-01-01

    Trauma is a leading global cause of death, and predicting the burden of trauma admissions is vital for good planning of trauma care. Seasonality in trauma admissions has been found in several studies. Seasonal fluctuations in daylight hours, temperature and weather affect social and cultural practices but also individual neuroendocrine rhythms that may ultimately modify behaviour and potentially predispose to trauma. The aim of the present study was to explore to what extent the observed seasonality in daily trauma admissions could be explained by changes in daylight and weather variables throughout the year. Retrospective registry study on trauma admissions in the 10-year period 2001-2010 at Oslo University Hospital, Ullevål, Norway, where the amount of daylight varies from less than 6 hours to almost 19 hours per day throughout the year. Daily number of admissions was analysed by fitting non-linear Poisson time series regression models, simultaneously adjusting for several layers of temporal patterns, including a non-linear long-term trend and both seasonal and weekly cyclic effects. Five daylight and weather variables were explored, including hours of daylight and amount of precipitation. Models were compared using Akaike's Information Criterion (AIC). A regression model including daylight and weather variables significantly outperformed a traditional seasonality model in terms of AIC. A cyclic week effect was significant in all models. Daylight and weather variables are better predictors of seasonality in daily trauma admissions than mere information on day-of-year.
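
    As a sketch of the model comparison described here, the snippet below fits two Poisson regressions to synthetic daily counts, one with harmonic season terms and one with a daylight covariate, and compares them by AIC; the data, variable names, and coefficients are invented for illustration:

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        t = np.arange(3650)                                        # ten hypothetical years of days
        daylight = 12.0 + 6.5 * np.sin(2 * np.pi * (t - 80) / 365.25)
        admissions = rng.poisson(np.exp(1.2 + 0.03 * daylight))    # counts driven by daylight here

        df = pd.DataFrame({
            "y": admissions,
            "daylight": daylight,
            "sin1": np.sin(2 * np.pi * t / 365.25),
            "cos1": np.cos(2 * np.pi * t / 365.25),
        })

        m_season = sm.GLM(df["y"], sm.add_constant(df[["sin1", "cos1"]]),
                          family=sm.families.Poisson()).fit()
        m_daylight = sm.GLM(df["y"], sm.add_constant(df[["daylight"]]),
                            family=sm.families.Poisson()).fit()
        print("AIC, cyclic-season model:", m_season.aic)
        print("AIC, daylight model:     ", m_daylight.aic)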

  9. Assessing Weather Effects on Dengue Disease in Malaysia

    PubMed Central

    Cheong, Yoon Ling; Burkart, Katrin; Leitão, Pedro J.; Lakes, Tobia

    2013-01-01

    The number of dengue cases has been increasing on a global level in recent years, and particularly so in Malaysia, yet little is known about the effects of weather for identifying the short-term risk of dengue for the population. The aim of this paper is to estimate the weather effects on dengue disease, accounting for non-linear temporal effects, in Selangor, Kuala Lumpur and Putrajaya, Malaysia, from 2008 to 2010. We selected the weather parameters with a Poisson generalized additive model, and then assessed the effects of minimum temperature, bi-weekly accumulated rainfall and wind speed on dengue cases using a distributed non-linear lag model while adjusting for trend, day-of-week and week of the year. We found that the relative risk of dengue cases is positively associated with increased minimum temperature, at a cumulative percentage change of 11.92% (95% CI: 4.41-32.19) from 25.4 °C to 26.5 °C, with the highest effect delayed by 51 days. Increasing bi-weekly accumulated rainfall had a strong positive effect on dengue cases, at a cumulative percentage change of 21.45% (95% CI: 8.96-51.37) from 215 mm to 302 mm, with the highest effect delayed by 26-28 days. Wind speed is negatively associated with dengue cases. The estimated lagged effects can be incorporated into dengue early warning systems to assist in vector control and prevention planning. PMID:24287855

  10. Effects of diurnal temperature range on mortality in Hefei city, China

    NASA Astrophysics Data System (ADS)

    Tang, Jing; Xiao, Chang-chun; Li, Yu-rong; Zhang, Jun-qing; Zhai, Hao-yuan; Geng, Xi-ya; Ding, Rui; Zhai, Jin-xia

    2018-05-01

    Although several studies have indicated an association between diurnal temperature range (DTR) and mortality, the results regarding effect modifiers are inconsistent, and few studies have been conducted in developing inland countries. This study aims to evaluate the effects of DTR on cause-specific mortality and whether season, gender, or age might modify any association in Hefei city, China, during 2007-2016. Quasi-Poisson generalized linear regression models combined with a distributed lag non-linear model (DLNM) were applied to evaluate the relationships between DTR and non-accidental, cardiovascular, and respiratory mortality. We observed a J-shaped relationship between DTR and cause-specific mortality. With a DTR of 8.3 °C as the reference, the cumulative effects of extremely high DTR were significantly higher for all types of mortality than the effects of lower or moderate DTR over the full year. When stratified by season, extremely high DTR in spring had a greater impact on all cause-specific mortality than in the other three seasons. Males and the elderly (≥ 65 years) were consistently more susceptible to the extremely high DTR effect than females and the young (< 65 years) for non-accidental and cardiovascular mortality. By contrast, females and the young were more susceptible to the extremely high DTR effect than males and the elderly for respiratory mortality. The study suggests that extremely high DTR is a potential trigger for non-accidental mortality in Hefei city, China. Our findings also highlight the importance of protecting susceptible groups from extremely high DTR, especially in the spring.

  11. SHARP: A Spatially Higher-order, Relativistic Particle-in-cell Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shalaby, Mohamad; Broderick, Avery E.; Chang, Philip

    Numerical heating in particle-in-cell (PIC) codes currently precludes the accurate simulation of cold, relativistic plasma over long periods, severely limiting their applications in astrophysical environments. We present a spatially higher-order accurate relativistic PIC algorithm in one spatial dimension, which conserves charge and momentum exactly. We utilize the smoothness implied by the usage of higher-order interpolation functions to achieve a spatially higher-order accurate algorithm (up to the fifth order). We validate our algorithm against several test problems—thermal stability of stationary plasma, stability of linear plasma waves, and two-stream instability in the relativistic and non-relativistic regimes. Comparing our simulations to exact solutions of the dispersion relations, we demonstrate that SHARP can quantitatively reproduce important kinetic features of the linear regime. Our simulations have a superior ability to control energy non-conservation and avoid numerical heating in comparison to common second-order schemes. We provide a natural definition for convergence of a general PIC algorithm: the complement of physical modes captured by the simulation, i.e., those that lie above the Poisson noise, must grow commensurately with the resolution. This implies that it is necessary to simultaneously increase the number of particles per cell and decrease the cell size. We demonstrate that traditional ways for testing for convergence fail, leading to plateauing of the energy error. This new PIC code enables us to faithfully study the long-term evolution of plasma problems that require absolute control of the energy and momentum conservation.

  12. Explanation of the Reaction of Monoclonal Antibodies with Candida Albicans Cell Surface in Terms of Compound Poisson Process

    NASA Astrophysics Data System (ADS)

    Dudek, Mirosław R.; Mleczko, Józef

    Surprisingly, still very little is known about the mathematical modeling of peaks in the binding affinities distribution function. In general, it is believed that the peaks represent antibodies directed towards single epitopes. In this paper, we refer to fluorescence flow cytometry experiments and show that even monoclonal antibodies can display multi-modal histograms of the affinity distribution. This occurs when some obstacles appear in the paratope-epitope reaction such that the process of reaching the specific epitope ceases to be a point Poisson process. A typical example is a large area of the cell surface that is unreachable by antibodies, leading to heterogeneity of the cell surface repletion. In this case the affinity of cells to bind the antibodies should be described by a more complex process than the pure Poisson point process. We suggest using a doubly stochastic Poisson process, where the points are replaced by a binomial point process, resulting in the Neyman distribution. The distribution can have a strongly multimodal character, with the number of modes depending on the concentration of antibodies and epitopes. All this means that there is a possibility to go beyond the simplified theory of one response towards one epitope. As a consequence, our description provides perspectives for describing antigen-antibody reactions, both qualitatively and quantitatively, even in the case when some peaks result from more than one binding mechanism.
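
    A brief simulation sketch (parameters purely illustrative) contrasting a pure Poisson count with a doubly stochastic, clustered count of the kind discussed here, whose histogram can develop several modes:

        import numpy as np

        rng = np.random.default_rng(4)

        def clustered_counts(lam, nu, size):
            # doubly stochastic Poisson: a Poisson(lam) number of clusters,
            # each contributing a Poisson(nu) number of bound antibodies (Neyman-type count)
            n_clusters = rng.poisson(lam, size)
            return np.array([rng.poisson(nu, k).sum() for k in n_clusters])

        plain = rng.poisson(8.0, 10_000)                # pure Poisson point process
        clustered = clustered_counts(2.0, 4.0, 10_000)  # illustrative cluster parameters

        # the clustered counts show excess zeros and secondary modes near multiples of nu
        hist_plain = np.bincount(plain)
        hist_clustered = np.bincount(clustered)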

  13. The classical dynamic symmetry for the U(1) -Kepler problems

    NASA Astrophysics Data System (ADS)

    Bouarroudj, Sofiane; Meng, Guowu

    2018-01-01

    For the Jordan algebra of hermitian matrices of order n ≥ 2, we let X be its submanifold consisting of rank-one semi-positive definite elements. The composition of the cotangent bundle map π_X: T*X → X with the canonical map X → CP^{n-1} (i.e., the map that sends a given hermitian matrix to its column space) pulls back the Kähler form of the Fubini-Study metric on CP^{n-1} to a real closed differential two-form ω_K on T*X. Let ω_X be the canonical symplectic form on T*X and μ a real number. A standard fact says that ω_μ := ω_X + 2μ ω_K turns T*X into a symplectic manifold, hence a Poisson manifold with Poisson bracket {,}_μ. In this article we exhibit a Poisson realization of the simple real Lie algebra su(n, n) on the Poisson manifold (T*X, {,}_μ), i.e., a Lie algebra homomorphism from su(n, n) to (C^∞(T*X, R), {,}_μ). Consequently one obtains the Laplace-Runge-Lenz vector for the classical U(1)-Kepler problem of level n and magnetic charge μ. Since the McIntosh-Cisneros-Zwanziger-Kepler problems (MICZ-Kepler problems) are the U(1)-Kepler problems of level 2, the work presented here is a direct generalization of the work by A. Barut and G. Bornzin (1971) on the classical dynamic symmetry for the MICZ-Kepler problems.

  14. Stern potential and Debye length measurements in dilute ionic solutions with electrostatic force microscopy.

    PubMed

    Kumar, Bharat; Crittenden, Scott R

    2013-11-01

    We demonstrate the ability to measure Stern potential and Debye length in dilute ionic solution with atomic force microscopy. We develop an analytic expression for the second harmonic force component of the capacitive force in an ionic solution from the linearized Poisson-Boltzmann equation. This allows us to calibrate the AFM tip potential and, further, obtain the Stern potential of sample surfaces. In addition, the measured capacitive force is independent of van der Waals and double layer forces, thus providing a more accurate measure of Debye length.
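
    For orientation, the Debye length that sets the decay of the screened potential in the linearized Poisson-Boltzmann treatment can be computed from standard constants; the sketch below assumes a symmetric z:z electrolyte and illustrative conditions:

        import numpy as np

        eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
        kB = 1.380649e-23         # Boltzmann constant, J/K
        e = 1.602176634e-19       # elementary charge, C
        NA = 6.02214076e23        # Avogadro constant, 1/mol

        def debye_length(conc_mol_per_L, eps_r=78.5, T=298.15, z=1):
            # Debye length of a symmetric z:z electrolyte (linearized Poisson-Boltzmann)
            n = conc_mol_per_L * 1e3 * NA                       # number density, 1/m^3
            kappa2 = 2.0 * n * (z * e) ** 2 / (eps_r * eps0 * kB * T)
            return 1.0 / np.sqrt(kappa2)

        print(debye_length(1e-3) * 1e9, "nm")   # roughly 9.6 nm for a 1 mM 1:1 salt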

  15. Influence diagnostics for count data under AB-BA crossover trials.

    PubMed

    Hao, Chengcheng; von Rosen, Dietrich; von Rosen, Tatjana

    2017-12-01

    This paper aims to develop diagnostic measures to assess the influence of data perturbations on estimates in AB-BA crossover studies with a Poisson distributed response. Generalised mixed linear models with normally distributed random effects are utilised. We show that in this special case, the model can be decomposed into two independent sub-models which allow to derive closed-form expressions to evaluate the changes in the maximum likelihood estimates under several perturbation schemes. The performance of the new influence measures is illustrated by simulation studies and the analysis of a real dataset.

  16. A finite-difference method for the variable coefficient Poisson equation on hierarchical Cartesian meshes

    NASA Astrophysics Data System (ADS)

    Raeli, Alice; Bergmann, Michel; Iollo, Angelo

    2018-02-01

    We consider problems governed by a linear elliptic equation with varying coefficients across internal interfaces. The solution and its normal derivative can undergo significant variations through these internal boundaries. We present a compact finite-difference scheme on a tree-based adaptive grid that can be efficiently solved using a natively parallel data structure. The main idea is to optimize the truncation error of the discretization scheme as a function of the local grid configuration to achieve second-order accuracy. Numerical illustrations are presented in two and three-dimensional configurations.
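
    A one-dimensional, uniform-grid analogue (not the authors' hierarchical scheme) of a conservative second-order discretization of a variable-coefficient Poisson problem, with the coefficient sampled at cell faces; the grid size and coefficient jump are assumptions for illustration:

        import numpy as np

        def solve_variable_poisson_1d(k_faces, f, h):
            # second-order finite differences for -(k(x) u')' = f on a uniform grid,
            # homogeneous Dirichlet boundaries; k_faces holds k at the n+1 cell faces
            n = len(f)
            A = np.zeros((n, n))
            for i in range(n):
                A[i, i] = (k_faces[i] + k_faces[i + 1]) / h**2
                if i > 0:
                    A[i, i - 1] = -k_faces[i] / h**2
                if i < n - 1:
                    A[i, i + 1] = -k_faces[i + 1] / h**2
            return np.linalg.solve(A, f)

        n = 99
        h = 1.0 / (n + 1)
        faces = (np.arange(n + 1) + 0.5) * h               # face midpoints between nodes
        k_faces = np.where(faces < 0.5, 1.0, 10.0)         # coefficient jump across x = 0.5
        u = solve_variable_poisson_1d(k_faces, np.ones(n), h)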

  17. Threshold detection in an on-off binary communications channel with atmospheric scintillation

    NASA Technical Reports Server (NTRS)

    Webb, W. E.; Marino, J. T., Jr.

    1974-01-01

    The optimum detection threshold in an on-off binary optical communications system operating in the presence of atmospheric turbulence was investigated assuming a Poisson detection process and log-normal scintillation. The dependence of the probability of bit error on log-amplitude variance and received signal strength was analyzed, and semi-empirical relationships to predict the optimum detection threshold were derived. On the basis of this analysis, a piecewise linear model for an adaptive threshold detection system is presented. Bit error probabilities for non-optimum threshold detection systems were also investigated.
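
    A simple sketch of the threshold-selection problem for Poisson detection statistics, ignoring scintillation for brevity: scan candidate count thresholds and pick the one minimizing the bit-error probability (the signal and background means are illustrative assumptions):

        import numpy as np
        from scipy.stats import poisson

        def bit_error_probability(threshold, mean_on, mean_off):
            # equally likely bits: miss if counts < threshold when 'on',
            # false alarm if counts >= threshold when 'off'
            p_miss = poisson.cdf(threshold - 1, mean_on)
            p_false = 1.0 - poisson.cdf(threshold - 1, mean_off)
            return 0.5 * (p_miss + p_false)

        mean_on, mean_off = 50.0, 5.0          # illustrative signal and background count means
        thresholds = np.arange(1, 60)
        pe = np.array([bit_error_probability(t, mean_on, mean_off) for t in thresholds])
        best = thresholds[np.argmin(pe)]
        print("optimum threshold:", best, "minimum bit-error probability:", pe.min())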

  18. Threshold detection in an on-off binary communications channel with atmospheric scintillation

    NASA Technical Reports Server (NTRS)

    Webb, W. E.

    1975-01-01

    The optimum detection threshold in an on-off binary optical communications system operating in the presence of atmospheric turbulence was investigated assuming a Poisson detection process and log-normal scintillation. The dependence of the probability of bit error on log-amplitude variance and received signal strength was analyzed, and semi-empirical relationships to predict the optimum detection threshold were derived. On the basis of this analysis, a piecewise linear model for an adaptive threshold detection system is presented. The bit error probabilities for non-optimum threshold detection systems were also investigated.

  19. Polycrystalline gamma plutonium's elastic moduli versus temperature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Migliori, Albert; Betts, J; Trugman, A

    2009-01-01

    Resonant ultrasound spectroscopy was used to measure the elastic properties of pure polycrystalline ²³⁹Pu in the γ phase. Shear and longitudinal elastic moduli were measured simultaneously and the bulk modulus was computed from them. A smooth, linear, and large decrease of all elastic moduli with increasing temperature was observed. The Poisson ratio was calculated and found to increase from 0.242 at 519 K to 0.252 at 571 K. These measurements on extremely well-characterized pure Pu are in agreement with other reported results where overlap occurs.

  20. On the Singularity of the Vlasov-Poisson System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qin, Hong; Zheng, Jian

    2013-04-26

    The Vlasov-Poisson system can be viewed as the collisionless limit of the corresponding Fokker-Planck-Poisson system. It is reasonable to expect that the result of Landau damping can also be obtained from the Fokker-Planck-Poisson system when the collision frequency ν approaches zero. However, we show that the collisionless Vlasov-Poisson system is a singular limit of the collisional Fokker-Planck-Poisson system, and Landau's result can be recovered only as ν approaches zero from the positive side.

  1. On the singularity of the Vlasov-Poisson system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Jian; Qin, Hong; Plasma Physics Laboratory, Princeton University, Princeton, New Jersey 08550

    2013-09-15

    The Vlasov-Poisson system can be viewed as the collisionless limit of the corresponding Fokker-Planck-Poisson system. It is reasonable to expect that the result of Landau damping can also be obtained from the Fokker-Planck-Poisson system when the collision frequency ν approaches zero. However, we show that the collisionless Vlasov-Poisson system is a singular limit of the collisional Fokker-Planck-Poisson system, and Landau's result can be recovered only as ν approaches zero from the positive side.

  2. Stroke and the "stroke belt" in dialysis: contribution of patient characteristics to ischemic stroke rate and its geographic variation.

    PubMed

    Wetmore, James B; Ellerbeck, Edward F; Mahnken, Jonathan D; Phadnis, Milind A; Rigler, Sally K; Spertus, John A; Zhou, Xinhua; Mukhopadhyay, Purna; Shireman, Theresa I

    2013-12-01

    Geographic variation in stroke rates is well established in the general population, with higher rates in the South than in other areas of the United States. ESRD is a potent risk factor for stroke, but whether regional variations in stroke risk exist among dialysis patients is unknown. Medicare claims from 2000 to 2005 were used to ascertain ischemic stroke events in a large cohort of 265,685 incident dialysis patients. A Poisson generalized linear mixed model was generated to determine factors associated with stroke and to ascertain state-by-state geographic variability in stroke rates by generating observed-to-expected (O/E) adjusted rate ratios for stroke. Older age, female sex, African American race and Hispanic ethnicity, unemployed status, diabetes, hypertension, history of stroke, and permanent atrial fibrillation were positively associated with ischemic stroke, whereas body mass index >30 kg/m² was inversely associated with stroke (P<0.001 for each). After full multivariable adjustment, the three states with O/E rate ratios >1.0 were all in the South: North Carolina, Mississippi, and Oklahoma. Regional efforts to increase primary prevention in the "stroke belt" or to better educate dialysis patients on the signs of stroke so that they may promptly seek care may improve stroke care and outcomes in dialysis patients.

  3. Do Stress Trajectories Predict Mortality in Older Men? Longitudinal Findings from the VA Normative Aging Study

    PubMed Central

    Aldwin, Carolyn M.; Molitor, Nuoo-Ting; Avron, Spiro; Levenson, Michael R.; Molitor, John; Igarashi, Heidi

    2011-01-01

    We examined long-term patterns of stressful life events (SLE) and their impact on mortality contrasting two theoretical models: allostatic load (linear relationship) and hormesis (inverted U relationship) in 1443 NAS men (aged 41–87 in 1985; M = 60.30, SD = 7.3) with at least two reports of SLEs over 18 years (total observations = 7,634). Using a zero-inflated Poisson growth mixture model, we identified four patterns of SLE trajectories, three showing linear decreases over time with low, medium, and high intercepts, respectively, and one an inverted U, peaking at age 70. Repeating the analysis omitting two health-related SLEs yielded only the first three linear patterns. Compared to the low-stress group, both the moderate and the high-stress groups showed excess mortality, controlling for demographics and health behavior habits, HRs = 1.42 and 1.37, ps <.01 and <.05. The relationship between stress trajectories and mortality was complex and not easily explained by either theoretical model. PMID:21961066

  4. HgCdTe APD-based linear-mode photon counting components and ladar receivers

    NASA Astrophysics Data System (ADS)

    Jack, Michael; Wehner, Justin; Edwards, John; Chapman, George; Hall, Donald N. B.; Jacobson, Shane M.

    2011-05-01

    Linear mode photon counting (LMPC) provides significant advantages in comparison with Geiger mode (GM) photon counting, including the absence of after-pulsing, nanosecond pulse-to-pulse temporal resolution, and robust operation in the presence of high-density obscurants or variable-reflectivity objects. For this reason Raytheon has developed and previously reported on unique linear mode photon counting components and modules based on combining advanced APDs and advanced high gain circuits. By using HgCdTe APDs we enable Poisson-number-preserving photon counting. Key metrics of photon counting technology are the dark count rate and the detection probability. In this paper we report on a performance breakthrough resulting from improvements in design, process and readout operation, enabling a >10x reduction in dark count rate to ~10,000 cps and a >10^4x reduction in surface dark current, enabling long 10 ms integration times. Our analysis of key dark current contributors suggests that a substantial further reduction in DCR to ~1/sec or less can be achieved by optimizing wavelength, operating voltage and temperature.

  5. Numerical solution of the Hele-Shaw equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitaker, N.

    1987-04-01

    An algorithm is presented for approximating the motion of the interface between two immiscible fluids in a Hele-Shaw cell. The interface is represented by a set of volume fractions. We use the Simple Line Interface Calculation method along with the method of fractional steps to transport the interface. The equation of continuity leads to a Poisson equation for the pressure. The Poisson equation is discretized. Near the interface where the velocity field is discontinuous, the discretization is based on a weak formulation of the continuity equation. Interpolation is used on each side of the interface to increase the accuracy of the algorithm. The weak formulation as well as the interpolation are based on the computed volume fractions. This treatment of the interface is new. The discretized equations are solved by a modified conjugate gradient method. Surface tension is included and the curvature is computed through the use of osculating circles. For perturbations of small amplitude, a surprisingly good agreement is found between the numerical results and linearized perturbation theory. Numerical results are presented for the finite amplitude growth of unstable fingers. 62 refs., 13 figs.

  6. Elastic properties of graphene: A pseudo-beam model with modified internal bending moment and its application

    NASA Astrophysics Data System (ADS)

    Xia, Z. M.; Wang, C. G.; Tan, H. F.

    2018-04-01

    A pseudo-beam model with modified internal bending moment is presented to predict elastic properties of graphene, including the Young's modulus and Poisson's ratio. In order to overcome a drawback in existing molecular structural mechanics models, which only account for pure bending (constant bending moment), the presented model accounts for linear bending moments deduced from the balance equations. Based on this pseudo-beam model, an analytical prediction of the Young's modulus and Poisson's ratio of graphene is derived from the strain energies by using Castigliano's second theorem. The elastic properties of graphene are then calculated and compared with results available in the literature, which verifies the feasibility of the pseudo-beam model. Finally, the pseudo-beam model is utilized to study the twisting wrinkling characteristics of annular graphene. Due to the modifications of the internal bending moment, the wrinkling behaviors of the graphene sheet are predicted accurately. The obtained results show that the pseudo-beam model has a good ability to predict the elastic properties of graphene accurately, especially the out-of-plane deformation behavior.

  7. Strong and weak adsorptions of polyelectrolyte chains onto oppositely charged spheres

    NASA Astrophysics Data System (ADS)

    Cherstvy, A. G.; Winkler, R. G.

    2006-08-01

    We investigate the complexation of long thin polyelectrolyte (PE) chains with oppositely charged spheres. In the limit of strong adsorption, when strongly charged PE chains adapt a definite wrapped conformation on the sphere surface, we analytically solve the linear Poisson-Boltzmann equation and calculate the electrostatic potential and the energy of the complex. We discuss some biological applications of the obtained results. For weak adsorption, when a flexible weakly charged PE chain is localized next to the sphere in solution, we solve the Edwards equation for PE conformations in the Hulthén potential, which is used as an approximation for the screened Debye-Hückel potential of the sphere. We predict the critical conditions for PE adsorption. We find that the critical sphere charge density exhibits a distinctively different dependence on the Debye screening length than for PE adsorption onto a flat surface. We compare our findings with experimental measurements on complexation of various PEs with oppositely charged colloidal particles. We also present some numerical results of the coupled Poisson-Boltzmann and self-consistent field equation for PE adsorption in an assembly of oppositely charged spheres.

  8. SELF-GRAVITATIONAL FORCE CALCULATION OF SECOND-ORDER ACCURACY FOR INFINITESIMALLY THIN GASEOUS DISKS IN POLAR COORDINATES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hsiang-Hsu; Taam, Ronald E.; Yen, David C. C., E-mail: yen@math.fju.edu.tw

    Investigating the evolution of disk galaxies and the dynamics of proto-stellar disks can involve the use of both a hydrodynamical and a Poisson solver. These systems are usually approximated as infinitesimally thin disks using two-dimensional Cartesian or polar coordinates. In Cartesian coordinates, the calculations of the hydrodynamics and self-gravitational forces are relatively straightforward for attaining second-order accuracy. However, in polar coordinates, a second-order calculation of self-gravitational forces is required for matching the second-order accuracy of hydrodynamical schemes. We present a direct algorithm for calculating self-gravitational forces with second-order accuracy without artificial boundary conditions. The Poisson integral in polar coordinates is expressed in a convolution form and the corresponding numerical complexity is nearly linear using a fast Fourier transform. Examples with analytic solutions are used to verify that the truncation error of this algorithm is of second order. The kernel integral around the singularity is applied to modify the particle method. The use of a softening length is avoided and the accuracy of the particle method is significantly improved.

  9. Planar screening by charge polydisperse counterions

    NASA Astrophysics Data System (ADS)

    Trulsson, M.; Trizac, E.; Šamaj, L.

    2018-01-01

    We study how a neutralising cloud of counterions screens the electric field of a uniformly charged planar membrane (plate) when the counterions are characterised by a distribution of charges (or valences), n(q). We work out analytically the one-plate and two-plate cases, at the level of non-linear Poisson-Boltzmann theory. The (essentially asymptotic) predictions are successfully compared to numerical solutions of the full Poisson-Boltzmann theory, but also to Monte Carlo simulations. The counterions with smallest valence control the long-distance features of interactions, and may qualitatively change the results pertaining to the classic monodisperse case where all counterions have the same charge. Emphasis is put on continuous distributions n(q), for which new power laws can be evidenced, be it for the ionic density or the pressure, in the one- and two-plate situations respectively. We show that for discrete distributions, more relevant for experiments, these scaling laws persist in an intermediate but yet observable range. Furthermore, it appears that from a practical point of view, hallmarks of the continuous n(q) behaviour are already featured by discrete mixtures with a relatively small number of constituents.

  10. Statistical guides to estimating the number of undiscovered mineral deposits: an example with porphyry copper deposits

    USGS Publications Warehouse

    Singer, Donald A.; Menzie, W.D.; Cheng, Qiuming; Bonham-Carter, G. F.

    2005-01-01

    Estimating numbers of undiscovered mineral deposits is a fundamental part of assessing mineral resources. Some statistical tools can act as guides to low variance, unbiased estimates of the number of deposits. The primary guide is that the estimates must be consistent with the grade and tonnage models. Another statistical guide is the deposit density (i.e., the number of deposits per unit area of permissive rock in well-explored control areas). Preliminary estimates and confidence limits of the number of undiscovered deposits in a tract of given area may be calculated using linear regression and refined using frequency distributions with appropriate parameters. A Poisson distribution leads to estimates having lower relative variances than the regression estimates and implies a random distribution of deposits. Coefficients of variation are used to compare uncertainties of negative binomial, Poisson, or MARK3 empirical distributions that have the same expected number of deposits as the deposit density. Statistical guides presented here allow simple yet robust estimation of the number of undiscovered deposits in permissive terranes. 
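
    A quick sketch of the comparison mentioned here: coefficients of variation for a Poisson and a negative binomial distribution sharing the same expected number of deposits (the dispersion parameter is an assumption for illustration):

        import numpy as np

        def cv_poisson(mean):
            # Poisson variance equals the mean, so CV = 1 / sqrt(mean)
            return 1.0 / np.sqrt(mean)

        def cv_negative_binomial(mean, dispersion_r):
            # negative binomial with mean m and size r has variance m + m^2 / r
            variance = mean + mean**2 / dispersion_r
            return np.sqrt(variance) / mean

        expected_deposits = 4.0   # illustrative density-based expected number of deposits
        print("Poisson CV:", cv_poisson(expected_deposits))
        print("Negative binomial CV (r = 2):", cv_negative_binomial(expected_deposits, 2.0))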

  11. On the fractal characterization of Paretian Poisson processes

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo I.; Sokolov, Igor M.

    2012-06-01

    Paretian Poisson processes are Poisson processes which are defined on the positive half-line, have maximal points, and are quantified by power-law intensities. Paretian Poisson processes are elemental in statistical physics, and are the bedrock of a host of power-law statistics ranging from Pareto's law to anomalous diffusion. In this paper we establish evenness-based fractal characterizations of Paretian Poisson processes. Considering an array of socioeconomic evenness-based measures of statistical heterogeneity, we show that: amongst the realm of Poisson processes which are defined on the positive half-line, and have maximal points, Paretian Poisson processes are the unique class of 'fractal processes' exhibiting scale-invariance. The results established in this paper are diametric to previous results asserting that the scale-invariance of Poisson processes, with respect to physical randomness-based measures of statistical heterogeneity, is characterized by exponential Poissonian intensities.

  12. Assessment of inappropriate antibiotic prescribing among a large cohort of general dentists in the United States.

    PubMed

    Durkin, Michael J; Feng, Qianxi; Warren, Kyle; Lockhart, Peter B; Thornhill, Martin H; Munshi, Kiraat D; Henderson, Rochelle R; Hsueh, Kevin; Fraser, Victoria J

    2018-05-01

    The purpose of this study was to assess dental antibiotic prescribing trends over time, to quantify the number and types of antibiotics dentists prescribe inappropriately, and to estimate the excess health care costs of inappropriate antibiotic prescribing with the use of a large cohort of general dentists in the United States. We used a quasi-Poisson regression model to analyze antibiotic prescriptions trends by general dentists between January 1, 2013, and December 31, 2015, with the use of data from Express Scripts Holding Company, a large pharmacy benefits manager. We evaluated antibiotic duration and appropriateness for general dentists. Appropriateness was evaluated by reviewing the antibiotic prescribed and the duration of the prescription. Overall, the number and rate of antibiotic prescriptions prescribed by general dentists remained stable in our cohort. During the 3-year study period, approximately 14% of antibiotic prescriptions were deemed inappropriate, based on the antibiotic prescribed, antibiotic treatment duration, or both indicators. The quasi-Poisson regression model, which adjusted for number of beneficiaries covered, revealed a small but statistically significant decrease in the monthly rate of inappropriate antibiotic prescriptions by 0.32% (95% confidence interval, 0.14% to 0.50%; P = .001). Overall antibiotic prescribing practices among general dentists in this cohort remained stable over time. The rate of inappropriate antibiotic prescriptions by general dentists decreased slightly over time. From these authors' definition of appropriate antibiotic prescription choice and duration, inappropriate antibiotic prescriptions are common (14% of all antibiotic prescriptions) among general dentists. Further analyses with the use of chart review, administrative data sets, or other approaches are needed to better evaluate antibiotic prescribing practices among dentists. Copyright © 2018 American Dental Association. Published by Elsevier Inc. All rights reserved.

  13. SIERRA - A 3-D device simulator for reliability modeling

    NASA Astrophysics Data System (ADS)

    Chern, Jue-Hsien; Arledge, Lawrence A., Jr.; Yang, Ping; Maeda, John T.

    1989-05-01

    SIERRA is a three-dimensional general-purpose semiconductor-device simulation program which serves as a foundation for investigating integrated-circuit (IC) device and reliability issues. This program solves the Poisson and continuity equations in silicon under dc, transient, and small-signal conditions. Executing on a vector/parallel minisupercomputer, SIERRA utilizes a matrix solver which uses an incomplete LU (ILU) preconditioned conjugate gradient square (CGS, BCG) method. The ILU-CGS method provides a good compromise between memory size and convergence rate. The authors have observed a 5x to 7x speedup over standard direct methods in simulations of transient problems containing highly coupled Poisson and continuity equations such as those found in reliability-oriented simulations. The application of SIERRA to parasitic CMOS latchup and dynamic random-access memory single-event-upset studies is described.

  14. Determining X-ray source intensity and confidence bounds in crowded fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Primini, F. A.; Kashyap, V. L., E-mail: fap@head.cfa.harvard.edu

    We present a rigorous description of the general problem of aperture photometry in high-energy astrophysics photon-count images, in which the statistical noise model is Poisson, not Gaussian. We compute the full posterior probability density function for the expected source intensity for various cases of interest, including the important cases in which both source and background apertures contain contributions from the source, and when multiple source apertures partially overlap. A Bayesian approach offers the advantages of allowing one to (1) include explicit prior information on source intensities, (2) propagate posterior distributions as priors for future observations, and (3) use Poisson likelihoods, making the treatment valid in the low-counts regime. Elements of this approach have been implemented in the Chandra Source Catalog.
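
    A grid-based sketch, with flat priors and invented counts, of the kind of Poisson-likelihood posterior for a source intensity when the source aperture also collects a fraction of a background estimated from a separate aperture (exposure terms and aperture corrections are simplified away here):

        import numpy as np
        from scipy.stats import poisson

        n_src_ap, n_bkg_ap = 12, 40        # hypothetical counts in source and background apertures
        area_ratio = 0.1                   # background expected in the source aperture = area_ratio * b

        s_grid = np.linspace(0.0, 40.0, 400)     # candidate source intensities (counts)
        b_grid = np.linspace(0.1, 120.0, 600)    # candidate background intensities (counts)
        S, B = np.meshgrid(s_grid, b_grid, indexing="ij")

        # Poisson likelihoods for both apertures, flat priors assumed
        log_like = (poisson.logpmf(n_src_ap, S + area_ratio * B)
                    + poisson.logpmf(n_bkg_ap, B))
        post = np.exp(log_like - log_like.max())
        post_s = post.sum(axis=1)                # marginalize over the background intensity
        post_s /= post_s.sum()
        mean_s = (s_grid * post_s).sum()
        print("posterior mean source intensity:", mean_s)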

  15. Possible Statistics of Two Coupled Random Fields: Application to Passive Scalar

    NASA Technical Reports Server (NTRS)

    Dubrulle, B.; He, Guo-Wei; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    We use the relativity postulate of scale invariance to derive the similarity transformations between two coupled scale-invariant random fields at different scales. We find the equations leading to the scaling exponents. This formulation is applied to the case of passive scalars advected i) by a random Gaussian velocity field; and ii) by a turbulent velocity field. In the Gaussian case, we show that the passive scalar increments follow a log-Levy distribution generalizing Kraichnan's solution and, in an appropriate limit, a log-normal distribution. In the turbulent case, we show that when the velocity increments follow a log-Poisson statistics, the passive scalar increments follow a statistics close to log-Poisson. This result explains the experimental observations of Ruiz et al. about the temperature increments.

  16. Monitoring Poisson's Ratio Degradation of FRP Composites under Fatigue Loading Using Biaxially Embedded FBG Sensors.

    PubMed

    Akay, Erdem; Yilmaz, Cagatay; Kocaman, Esat S; Turkmen, Halit S; Yildiz, Mehmet

    2016-09-19

    The significance of strain measurement is obvious for the analysis of Fiber-Reinforced Polymer (FRP) composites. Conventional strain measurement methods are sufficient for static testing in general. Nevertheless, if the requirements exceed the capabilities of these conventional methods, more sophisticated techniques are necessary to obtain strain data. Fiber Bragg Grating (FBG) sensors have many advantages for strain measurement over conventional ones. Thus, the present paper suggests a novel method for biaxial strain measurement using embedded FBG sensors during the fatigue testing of FRP composites. Poisson's ratio and its reduction were monitored for each cyclic loading by using embedded FBG sensors for a given specimen and correlated with the fatigue stages determined based on the variations of the applied fatigue loading and temperature due to the autogenous heating to predict an oncoming failure of the continuous fiber-reinforced epoxy matrix composite specimens under fatigue loading. The results show that FBG sensor technology has a remarkable potential for monitoring the evolution of Poisson's ratio on a cycle-by-cycle basis, which can reliably be used towards tracking the fatigue stages of composite for structural health monitoring purposes.

  17. Noncommutative spherically symmetric spacetimes at semiclassical order

    NASA Astrophysics Data System (ADS)

    Fritz, Christopher; Majid, Shahn

    2017-07-01

    Working within the recent formalism of Poisson-Riemannian geometry, we completely solve the case of generic spherically symmetric metric and spherically symmetric Poisson bracket to find a unique answer for the quantum differential calculus, quantum metric and quantum Levi-Civita connection at semiclassical order O(λ). Here λ is the deformation parameter, plausibly the Planck scale. We find that r, t, dr, dt are all forced to be central, i.e. undeformed at order λ, while for each value of r, t we are forced to have a fuzzy sphere of radius r with a unique differential calculus which is necessarily nonassociative at order λ². We give the spherically symmetric quantisation of the FLRW cosmology in detail and also recover a previous analysis for the Schwarzschild black hole, now showing that the quantum Ricci tensor for the latter vanishes at order λ. The quantum Laplace-Beltrami operator for spherically symmetric models turns out to be undeformed at order λ, while more generally in Poisson-Riemannian geometry we show that it deforms to □f + (λ/2) ω^{αβ}(Ric^γ_α − S^γ_α)(∇̂_β df)_γ + O(λ²) in terms of the classical Levi-Civita connection ∇̂.

  18. On time-dependent Hamiltonian realizations of planar and nonplanar systems

    NASA Astrophysics Data System (ADS)

    Esen, Oğul; Guha, Partha

    2018-04-01

    In this paper, we elucidate the key role played by cosymplectic geometry in the theory of time-dependent Hamiltonian systems in 2D. We generalize the cosymplectic structures to time-dependent Nambu-Poisson Hamiltonian systems and the corresponding Jacobi's last multiplier for 3D systems. We illustrate our constructions with various examples.

  19. C1 finite elements on non-tensor-product 2d and 3d manifolds.

    PubMed

    Nguyen, Thien; Karčiauskas, Kęstutis; Peters, Jörg

    2016-01-01

    Geometrically continuous (G^k) constructions naturally yield families of finite elements for isogeometric analysis (IGA) that are C^k also for non-tensor-product layouts. This paper describes and analyzes one such concrete C^1 geometrically generalized IGA element (short: gIGA element) that generalizes bi-quadratic splines to quad meshes with irregularities. The new gIGA element is based on a recently developed G^1 surface construction that recommends itself by its B-spline-like control net, low (least) polynomial degree, good shape properties and reproduction of quadratics at irregular (extraordinary) points. Remarkably, for Poisson's equation on the disk using interior vertices of valence 3 and symmetric layout, we observe O(h^3) convergence in the L^∞ norm for this family of elements. Numerical experiments confirm the elements to be effective for solving the trivariate Poisson equation on the solid cylinder, deformations thereof (a turbine blade), modeling and computing geodesics on smooth free-form surfaces via the heat equation, for solving the biharmonic equation on the disk, and for Koiter-type thin-shell analysis.

  20. A new Green's function Monte Carlo algorithm for the solution of the two-dimensional nonlinear Poisson-Boltzmann equation: Application to the modeling of the communication breakdown problem in space vehicles during re-entry

    NASA Astrophysics Data System (ADS)

    Chatterjee, Kausik; Roadcap, John R.; Singh, Surendra

    2014-11-01

    The objective of this paper is the exposition of a recently developed, novel Green's function Monte Carlo (GFMC) algorithm for the solution of nonlinear partial differential equations and its application to the modeling of the plasma sheath region around a cylindrical conducting object, carrying a potential and moving at low speeds through an otherwise neutral medium. The plasma sheath is modeled in equilibrium through the GFMC solution of the nonlinear Poisson-Boltzmann (NPB) equation. The traditional Monte Carlo based approaches for the solution of nonlinear equations are iterative in nature, involving branching stochastic processes which are used to calculate linear functionals of the solution of nonlinear integral equations. Over the last several years, one of the authors of this paper, K. Chatterjee, has been developing a philosophically different approach, where the linearization of the equation of interest is not required and hence there is no need for iteration and the simulation of branching processes. Instead, an approximate expression for the Green's function is obtained using perturbation theory, which is used to formulate the random walk equations within the problem sub-domains where the random walker makes its walks. However, as a trade-off, the dimensions of these sub-domains have to be restricted by the limitations imposed by perturbation theory. The greatest advantage of this approach is the ease and simplicity of parallelization stemming from the lack of the need for iteration, as a result of which the parallelization procedure is identical to the parallelization procedure for the GFMC solution of a linear problem. The application area of interest is in the modeling of the communication breakdown problem during a space vehicle's re-entry into the atmosphere. However, additional application areas are being explored in the modeling of electromagnetic propagation through the atmosphere/ionosphere in UHF/GPS applications.
