Science.gov

Sample records for additive poisson models

  1. Relaxed Poisson cure rate models.

    PubMed

    Rodrigues, Josemar; Cordeiro, Gauss M; Cancho, Vicente G; Balakrishnan, N

    2016-03-01

    The purpose of this article is to make the standard promotion cure rate model (Yakovlev and Tsodikov) more flexible by assuming that the number of lesions or altered cells after a treatment follows a fractional Poisson distribution (Laskin). It is proved that the well-known Mittag-Leffler relaxation function (Berberan-Santos) is a simple way to obtain a new cure rate model that is a compromise between the promotion and geometric cure rate models allowing for superdispersion. So, the relaxed cure rate model developed here can be considered as a natural and less restrictive extension of the popular Poisson cure rate model at the cost of an additional parameter, but a competitor to negative-binomial cure rate models (Rodrigues et al.). Some mathematical properties of a proper relaxed Poisson density are explored. A simulation study and an illustration of the proposed cure rate model from the Bayesian point of view are finally presented.

  2. Branes in Poisson sigma models

    SciTech Connect

    Falceto, Fernando

    2010-07-28

    In this review we discuss possible boundary conditions (branes) for the Poisson sigma model. We show how to carry out the perturbative quantization in the presence of a general pre-Poisson brane and how this is related to the deformation quantization of Poisson structures. We conclude with an open problem: the perturbative quantization of the system when the boundary has several connected components and we use a different pre-Poisson brane in every component.

  3. Modelling of filariasis in East Java with Poisson regression and generalized Poisson regression models

    NASA Astrophysics Data System (ADS)

    Darnah

    2016-04-01

    Poisson regression is used when the response variable is count data based on the Poisson distribution. The Poisson distribution assumes equal dispersion. In practice, count data are often overdispersed or underdispersed, making Poisson regression inappropriate: it may underestimate the standard errors and overstate the significance of the regression parameters, and consequently give misleading inference about them. This paper suggests the generalized Poisson regression model to handle overdispersion and underdispersion in the Poisson regression model. The Poisson regression model and the generalized Poisson regression model are applied to the number of filariasis cases in East Java. Based on the Poisson regression model, the factors influencing filariasis are the percentage of families who do not practice clean and healthy living and the percentage of families who do not have a healthy house. The Poisson regression model exhibits overdispersion, so generalized Poisson regression is used instead. The best generalized Poisson regression model shows that the influential factor for filariasis is the percentage of families who do not have a healthy house. The interpretation of the model is that each additional 1 percent of families without a healthy house will add one filariasis patient.
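
    For a sense of what the dispersion check in this record involves, here is a minimal sketch in Python using statsmodels, with synthetic covariates and coefficients assumed purely for illustration (this is not the East Java filariasis data). A Pearson chi-square statistic per degree of freedom well above 1 indicates overdispersion, the situation where the generalized Poisson model is preferred.

        # Fit a Poisson GLM and check the equidispersion assumption.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 200
        x = rng.uniform(0, 1, size=(n, 2))           # hypothetical covariates
        X = sm.add_constant(x)
        mu = np.exp(0.5 + 1.0 * x[:, 0] - 0.5 * x[:, 1])
        y = rng.negative_binomial(5, 5 / (5 + mu))   # deliberately overdispersed counts

        fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
        dispersion = fit.pearson_chi2 / fit.df_resid
        print(fit.params, dispersion)                # dispersion > 1: Poisson SEs too small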

  4. Estimation of count data using mixed Poisson, generalized Poisson and finite Poisson mixture regression models

    NASA Astrophysics Data System (ADS)

    Zamani, Hossein; Faroughi, Pouya; Ismail, Noriszura

    2014-06-01

    This study relates the Poisson, mixed Poisson (MP), generalized Poisson (GP) and finite Poisson mixture (FPM) regression models through the mean-variance relationship, and suggests the application of these models for overdispersed count data. As an illustration, the regression models are fitted to the US skin care count data. The results indicate that the FPM regression model is the best model since it provides the largest log likelihood and the smallest AIC, followed by the Poisson-Inverse Gaussian (PIG), GP and negative binomial (NB) regression models. The results also show that the NB, PIG and GP regression models provide similar results.

  5. Extensions of Rasch's Multiplicative Poisson Model.

    ERIC Educational Resources Information Center

    Jansen, Margo G. H.; van Duijn, Marijtje A. J.

    1992-01-01

    A model developed by G. Rasch that assumes scores on some attainment tests can be realizations of a Poisson process is explained and expanded by assuming a prior distribution, with fixed but unknown parameters, for the subject parameters. How additional between-subject and within-subject factors can be incorporated is discussed. (SLD)

  6. Evaluating the double Poisson generalized linear model.

    PubMed

    Zou, Yaotian; Geedipally, Srinivas Reddy; Lord, Dominique

    2013-10-01

    The objectives of this study are to: (1) examine the applicability of the double Poisson (DP) generalized linear model (GLM) for analyzing motor vehicle crash data characterized by over- and under-dispersion and (2) compare the performance of the DP GLM with the Conway-Maxwell-Poisson (COM-Poisson) GLM in terms of goodness-of-fit and theoretical soundness. The DP distribution has seldom been investigated and applied since its first introduction two decades ago. The hurdle for applying the DP is related to its normalizing constant (or multiplicative constant) which is not available in closed form. This study proposed a new method to approximate the normalizing constant of the DP with high accuracy and reliability. The DP GLM and COM-Poisson GLM were developed using two observed over-dispersed datasets and one observed under-dispersed dataset. The modeling results indicate that the DP GLM with its normalizing constant approximated by the new method can handle crash data characterized by over- and under-dispersion. Its performance is comparable to the COM-Poisson GLM in terms of goodness-of-fit (GOF), although COM-Poisson GLM provides a slightly better fit. For the over-dispersed data, the DP GLM performs similar to the NB GLM. Considering the fact that the DP GLM can be easily estimated with inexpensive computation and that it is simpler to interpret coefficients, it offers a flexible and efficient alternative for researchers to model count data.
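
    As a point of reference for the normalizing-constant issue described here, the sketch below evaluates Efron's double Poisson pmf with the constant obtained by brute-force truncated summation; this is a numerical baseline, not the approximation method proposed in the paper.

        import numpy as np
        from scipy.special import gammaln

        def dp_log_kernel(y, mu, theta):
            # log of the unnormalized double Poisson density
            y = np.asarray(y, dtype=float)
            ysafe = np.where(y > 0, y, 1.0)          # avoids log(0); y = 0 terms vanish
            return (0.5 * np.log(theta) - theta * mu
                    - y + y * np.log(ysafe) - gammaln(y + 1.0)
                    + theta * y * (1.0 + np.log(mu / ysafe)))

        def dp_pmf(y, mu, theta, ymax=1000):
            grid = np.arange(ymax + 1)
            logk = dp_log_kernel(grid, mu, theta)
            logZ = np.log(np.sum(np.exp(logk - logk.max()))) + logk.max()
            return np.exp(dp_log_kernel(y, mu, theta) - logZ)

        p = dp_pmf(np.arange(30), mu=4.0, theta=0.7)   # theta < 1: overdispersion
        print(p.sum())                                  # mass captured by y <= 29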

  7. Loop coproducts, Gaudin models and Poisson coalgebras

    NASA Astrophysics Data System (ADS)

    Musso, F.

    2010-10-01

    In this paper we show that if A is a Poisson algebra equipped with a set of maps Δ_λ^(i): A → A^(⊗N) satisfying suitable conditions, then the images of the Casimir functions of A under the maps Δ_λ^(i) (that we call 'loop coproducts') are in involution. Rational, trigonometric and elliptic Gaudin models can be recovered as particular cases of this construction, and we show that the same happens for the integrable (or partially integrable) models that can be obtained through the so-called coproduct method. On the other hand, we show that the loop coproduct approach provides a natural generalization of the Gaudin algebras from the Lie-Poisson to the generic Poisson algebra context and, hopefully, can lead to the definition of new integrable models.

  8. Poisson's ratio of arterial wall - Inconsistency of constitutive models with experimental data.

    PubMed

    Skacel, Pavel; Bursa, Jiri

    2016-02-01

    Poisson's ratio of fibrous soft tissues is analyzed in this paper on the basis of constitutive models and experimental data. Three different up-to-date constitutive models accounting for the dispersion of fibre orientations are analyzed. Their predictions of the anisotropic Poisson's ratios are investigated under finite strain conditions, together with the effects of specific orientation distribution functions and of other parameters. The applied constitutive models predict a tendency towards lower (or even negative) out-of-plane Poisson's ratios. New experimental data on a porcine arterial layer under uniaxial tension in orthogonal directions are also presented and compared with the theoretical predictions and other literature data. The results point out the typical features of recent constitutive models with fibres concentrated in the circumferential-axial plane of arterial layers and their potential inconsistency with some experimental data. The volumetric (in)compressibility of arterial tissues is also discussed as a possible and significant factor influencing this inconsistency.

  9. The Poisson-Helmholtz-Boltzmann model.

    PubMed

    Bohinc, K; Shrestha, A; May, S

    2011-10-01

    We present a mean-field model of a one-component electrolyte solution where the mobile ions interact not only via Coulomb interactions but also through a repulsive non-electrostatic Yukawa potential. Our choice of the Yukawa potential represents a simple model for solvent-mediated interactions between ions. We employ a local formulation of the mean-field free energy through the use of two auxiliary potentials, an electrostatic and a non-electrostatic potential. Functional minimization of the mean-field free energy leads to two coupled local differential equations, the Poisson-Boltzmann equation and the Helmholtz-Boltzmann equation. Their boundary conditions account for the sources of both the electrostatic and non-electrostatic interactions on the surface of all macroions that reside in the solution. We analyze a specific example, two like-charged planar surfaces with their mobile counterions forming the electrolyte solution. For this system we calculate the pressure between the two surfaces, and we analyze its dependence on the strength of the Yukawa potential and on the non-electrostatic interactions of the mobile ions with the planar macroion surfaces. In addition, we demonstrate that our mean-field model is consistent with the contact theorem, and we outline its generalization to arbitrary interaction potentials through the use of a Laplace transformation.
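
    The planar two-plate example described at the end of this abstract can be sketched numerically. The snippet below solves only the classical Poisson-Boltzmann half of the model (the Yukawa/Helmholtz channel is omitted) for a 1:1 electrolyte between two like-charged walls, in dimensionless Gouy-Chapman units; the surface charge and separation values are assumed.

        import numpy as np
        from scipy.integrate import solve_bvp

        sigma = 2.0   # dimensionless surface charge density (assumed)
        half = 1.5    # midplane distance in Debye lengths (assumed)

        def rhs(x, y):
            # y[0] = potential psi, y[1] = psi'; PB equation: psi'' = sinh(psi)
            return np.vstack([y[1], np.sinh(y[0])])

        def bc(ya, yb):
            return np.array([ya[1] + sigma,   # psi'(0) = -sigma at the charged wall
                             yb[1]])          # psi'(half) = 0 by midplane symmetry

        x = np.linspace(0.0, half, 101)
        sol = solve_bvp(rhs, bc, x, np.zeros((2, x.size)))
        # contact-value route to the osmotic pressure between the plates,
        # in units of 2*c0*kT: cosh(psi at midplane) - 1
        print(sol.status, np.cosh(sol.y[0, -1]) - 1.0)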

  10. Application of Poisson random effect models for highway network screening.

    PubMed

    Jiang, Ximiao; Abdel-Aty, Mohamed; Alamili, Samer

    2014-02-01

    In recent years, Bayesian random effect models that account for the temporal and spatial correlations of crash data have become popular in traffic safety research. This study employs random effect Poisson Log-Normal models for crash risk hotspot identification. Both the temporal and spatial correlations of crash data were considered. The Potential for Safety Improvement (PSI) was adopted as a measure of crash risk. Using the fatal and injury crashes that occurred on urban 4-lane divided arterials from 2006 to 2009 in the Central Florida area, the random effect approaches were compared to the traditional Empirical Bayesian (EB) method and the conventional Bayesian Poisson Log-Normal model. A series of method examination tests were conducted to evaluate the performance of the different approaches, including the previously developed site consistency test, method consistency test, total rank difference test, and modified total score test, as well as the newly proposed total safety performance measure difference test. Results show that the Bayesian Poisson model accounting for both temporal and spatial random effects (PTSRE) outperforms the model with only a temporal random effect, and both are superior to the conventional Poisson Log-Normal model (PLN) and the EB model in fitting the crash data. Additionally, the method evaluation tests indicate that the PTSRE model is significantly superior to the PLN model and the EB model in consistently identifying hotspots during successive time periods. The results suggest that the PTSRE model is a superior alternative for road site crash risk hotspot identification.

  11. Poisson Mixture Regression Models for Heart Disease Prediction.

    PubMed

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary generalized linear Poisson regression model, due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction over all models, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise within the clusters obtained. It is deduced that heart disease prediction can be done effectively by identifying the major risks componentwise using a Poisson mixture regression model.
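
    A toy illustration of the clustering idea in this record: a plain two-component Poisson mixture fitted by EM (not the mixture-of-regressions model of the paper), on synthetic data with assumed rates.

        import numpy as np
        from scipy.stats import poisson

        rng = np.random.default_rng(1)
        y = np.concatenate([rng.poisson(2.0, 300), rng.poisson(9.0, 100)])

        pi, lam = 0.5, np.array([1.0, 5.0])             # initial guesses
        for _ in range(200):
            # E-step: responsibility of the low-rate component
            w1 = pi * poisson.pmf(y, lam[0])
            w2 = (1 - pi) * poisson.pmf(y, lam[1])
            r = w1 / (w1 + w2)
            # M-step: update the mixing weight and the two rates
            pi = r.mean()
            lam = np.array([np.sum(r * y) / np.sum(r),
                            np.sum((1 - r) * y) / np.sum(1 - r)])
        print(pi, lam)   # roughly 0.75 and rates near 2 and 9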

  12. Modelling of nonlinear filtering Poisson time series

    NASA Astrophysics Data System (ADS)

    Bochkarev, Vladimir V.; Belashova, Inna A.

    2016-08-01

    In this article, algorithms for non-linear filtering of Poisson time series are tested using statistical modelling. The objective is to find a representation of a time series as a wavelet series with a small number of non-zero coefficients, which allows distinguishing statistically significant details. There are well-known efficient algorithms of non-linear wavelet filtering for the case when the values of a time series have a normal distribution. However, if the distribution is not normal, good results can be expected using maximum likelihood estimation. Filtering based on the maximum likelihood criterion is studied here using Poisson time series as an example. For direct optimisation of the likelihood function, different stochastic (genetic algorithms, the annealing method) and deterministic optimisation algorithms are used. Testing of the algorithms using both simulated series and empirical data (series of rare-word frequencies from the Google Books Ngram data) showed that filtering based on the maximum likelihood criterion has a great advantage over the well-known algorithms in the case of Poisson series. The most promising optimisation methods for this problem were also identified.

  13. Nonlocal Poisson-Fermi model for ionic solvent.

    PubMed

    Xie, Dexuan; Liu, Jinn-Liang; Eisenberg, Bob

    2016-07-01

    We propose a nonlocal Poisson-Fermi model for ionic solvent that includes ion size effects and polarization correlations among water molecules in the calculation of electrostatic potential. It includes the previous Poisson-Fermi models as special cases, and its solution is the convolution of a solution of the corresponding nonlocal Poisson dielectric model with a Yukawa-like kernel function. The Fermi distribution is shown to be a set of optimal ionic concentration functions in the sense of minimizing an electrostatic potential free energy. Numerical results are reported to show the difference between a Poisson-Fermi solution and a corresponding Poisson solution.

  14. Nonlocal Poisson-Fermi model for ionic solvent

    NASA Astrophysics Data System (ADS)

    Xie, Dexuan; Liu, Jinn-Liang; Eisenberg, Bob

    2016-07-01

    We propose a nonlocal Poisson-Fermi model for ionic solvent that includes ion size effects and polarization correlations among water molecules in the calculation of electrostatic potential. It includes the previous Poisson-Fermi models as special cases, and its solution is the convolution of a solution of the corresponding nonlocal Poisson dielectric model with a Yukawa-like kernel function. The Fermi distribution is shown to be a set of optimal ionic concentration functions in the sense of minimizing an electrostatic potential free energy. Numerical results are reported to show the difference between a Poisson-Fermi solution and a corresponding Poisson solution.

  15. A Poisson model for random multigraphs

    PubMed Central

    Ranola, John M. O.; Ahn, Sangtae; Sehl, Mary; Smith, Desmond J.; Lange, Kenneth

    2010-01-01

    Motivation: Biological networks are often modeled by random graphs. A better modeling vehicle is a multigraph where each pair of nodes is connected by a Poisson number of edges. In the current model, the mean number of edges equals the product of two propensities, one for each node. In this context it is possible to construct a simple and effective algorithm for rapid maximum likelihood estimation of all propensities. Given estimated propensities, it is then possible to test statistically for functionally connected nodes that show an excess of observed edges over expected edges. The model extends readily to directed multigraphs. Here, propensities are replaced by outgoing and incoming propensities. Results: The theory is applied to real data on neuronal connections, interacting genes in radiation hybrids, interacting proteins in a literature-curated database, and letter and word pairs in seven Shakespearean plays. Availability: All data used are fully available online from their respective sites. Source code and software are available from http://code.google.com/p/poisson-multigraph/ Contact: klange@ucla.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20554690
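
    The propensity model here has a particularly tractable likelihood. A naive fixed-point sketch of the maximum likelihood estimation (not necessarily the authors' algorithm): with edge counts X_ij ~ Poisson(p_i p_j), setting the score to zero gives p_i = (sum_j x_ij) / (sum_{j != i} p_j), which can be iterated to convergence.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 30
        p_true = rng.uniform(0.5, 2.0, n)
        x = rng.poisson(np.outer(p_true, p_true))
        x = np.triu(x, 1)
        x = x + x.T                                  # symmetric counts, zero diagonal

        p = np.ones(n)
        for _ in range(500):
            p_new = x.sum(axis=1) / (p.sum() - p)    # sum over j != i in the denominator
            if np.max(np.abs(p_new - p)) < 1e-10:
                break
            p = p_new
        print(np.corrcoef(p_new, p_true)[0, 1])      # near 1 on this synthetic graph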

  16. Poisson-Boltzmann-Nernst-Planck model

    SciTech Connect

    Zheng Qiong; Wei Guowei

    2011-05-21

    The Poisson-Nernst-Planck (PNP) model is based on a mean-field approximation of ion interactions and continuum descriptions of concentration and electrostatic potential. It provides qualitative explanation and increasingly quantitative predictions of experimental measurements for the ion transport problems in many areas such as semiconductor devices, nanofluidic systems, and biological systems, despite many limitations. While the PNP model gives a good prediction of the ion transport phenomenon for chemical, physical, and biological systems, the number of equations to be solved and the number of diffusion coefficient profiles to be determined for the calculation directly depend on the number of ion species in the system, since each ion species corresponds to one Nernst-Planck equation and one position-dependent diffusion coefficient profile. In a complex system with multiple ion species, the PNP can be computationally expensive and parameter demanding, as experimental measurements of diffusion coefficient profiles are generally quite limited for most confined regions such as ion channels, nanostructures and nanopores. We propose an alternative model to reduce the number of Nernst-Planck equations to be solved in complex chemical and biological systems with multiple ion species by substituting Nernst-Planck equations with Boltzmann distributions of ion concentrations. As such, we solve the coupled Poisson-Boltzmann and Nernst-Planck (PBNP) equations, instead of the PNP equations. The proposed PBNP equations are derived from a total energy functional by using the variational principle. We design a number of computational techniques, including the Dirichlet to Neumann mapping, the matched interface and boundary, and relaxation based iterative procedure, to ensure efficient solution of the proposed PBNP equations. Two protein molecules, cytochrome c551 and Gramicidin A, are employed to validate the proposed model under a wide range of bulk ion concentrations and external

  17. Poisson-Boltzmann-Nernst-Planck model

    NASA Astrophysics Data System (ADS)

    Zheng, Qiong; Wei, Guo-Wei

    2011-05-01

    The Poisson-Nernst-Planck (PNP) model is based on a mean-field approximation of ion interactions and continuum descriptions of concentration and electrostatic potential. It provides qualitative explanation and increasingly quantitative predictions of experimental measurements for the ion transport problems in many areas such as semiconductor devices, nanofluidic systems, and biological systems, despite many limitations. While the PNP model gives a good prediction of the ion transport phenomenon for chemical, physical, and biological systems, the number of equations to be solved and the number of diffusion coefficient profiles to be determined for the calculation directly depend on the number of ion species in the system, since each ion species corresponds to one Nernst-Planck equation and one position-dependent diffusion coefficient profile. In a complex system with multiple ion species, the PNP can be computationally expensive and parameter demanding, as experimental measurements of diffusion coefficient profiles are generally quite limited for most confined regions such as ion channels, nanostructures and nanopores. We propose an alternative model to reduce the number of Nernst-Planck equations to be solved in complex chemical and biological systems with multiple ion species by substituting Nernst-Planck equations with Boltzmann distributions of ion concentrations. As such, we solve the coupled Poisson-Boltzmann and Nernst-Planck (PBNP) equations, instead of the PNP equations. The proposed PBNP equations are derived from a total energy functional by using the variational principle. We design a number of computational techniques, including the Dirichlet to Neumann mapping, the matched interface and boundary, and relaxation based iterative procedure, to ensure efficient solution of the proposed PBNP equations. Two protein molecules, cytochrome c551 and Gramicidin A, are employed to validate the proposed model under a wide range of bulk ion concentrations and external

  18. Testing approaches for overdispersion in Poisson regression versus the generalized Poisson model.

    PubMed

    Yang, Zhao; Hardin, James W; Addy, Cheryl L; Vuong, Quang H

    2007-08-01

    Overdispersion is a common phenomenon in Poisson modeling, and the negative binomial (NB) model is frequently used to account for overdispersion. Testing approaches (the Wald test, likelihood ratio test (LRT), and score test) for overdispersion in Poisson regression versus the NB model are available. Because the generalized Poisson (GP) model is similar to the NB model, we consider the former as an alternative model for overdispersed count data. The score test has an advantage over the LRT and the Wald test in that it only requires the parameter of interest to be estimated under the null hypothesis. This paper proposes a score test for overdispersion based on the GP model and compares the power of the test with the LRT and Wald tests. A simulation study indicates that the score test based on the asymptotic standard normal distribution is more appropriate in practical applications because of its higher empirical power; however, it underestimates the nominal significance level, especially in small samples. Examples illustrate the results of comparing the candidate tests between the Poisson and GP models. A bootstrap test is also proposed to adjust the underestimation of the nominal level in the score statistic when the sample size is small. The simulation study indicates that the bootstrap test has a significance level closer to the nominal size and has uniformly greater power than the score test based on the asymptotic standard normal distribution. From a practical perspective, we suggest that, if the score test gives even a weak indication that the Poisson model is inappropriate, say at the 0.10 significance level, the more accurate bootstrap procedure is a better test for assessing whether the GP model is more appropriate than the Poisson model. Finally, the Vuong test is illustrated as a way to choose between the GP and NB2 models for the same dataset.
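
    The GP-based score statistic itself is not reproduced here, but the flavor of a score test for overdispersion can be seen in the widely used auxiliary-regression form of the Cameron-Trivedi test, which tests Var(y) = mu against Var(y) = mu + alpha*mu^2; a sketch on synthetic data:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        n = 500
        X = sm.add_constant(rng.normal(size=(n, 1)))
        mu = np.exp(0.3 + 0.7 * X[:, 1])
        y = rng.negative_binomial(2, 2 / (2 + mu))   # overdispersed by construction

        mu_hat = sm.GLM(y, X, family=sm.families.Poisson()).fit().fittedvalues
        z = ((y - mu_hat) ** 2 - y) / mu_hat         # auxiliary response
        aux = sm.OLS(z, mu_hat).fit()                # regression through the origin
        print(aux.tvalues[0], aux.pvalues[0])        # large t: reject equidispersion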

  19. Poisson Group Testing: A Probabilistic Model for Boolean Compressed Sensing

    NASA Astrophysics Data System (ADS)

    Emad, Amin; Milenkovic, Olgica

    2015-08-01

    We introduce a novel probabilistic group testing framework, termed Poisson group testing, in which the number of defectives follows a right-truncated Poisson distribution. The Poisson model has a number of new applications, including dynamic testing with diminishing relative rates of defectives. We consider both nonadaptive and semi-adaptive identification methods. For nonadaptive methods, we derive a lower bound on the number of tests required to identify the defectives with a probability of error that asymptotically converges to zero; in addition, we propose test matrix constructions for which the number of tests closely matches the lower bound. For semi-adaptive methods, we describe a lower bound on the expected number of tests required to identify the defectives with zero error probability. In addition, we propose a stage-wise reconstruction algorithm for which the expected number of tests is only a constant factor away from the lower bound. The methods rely only on an estimate of the average number of defectives, rather than on the individual probabilities of subjects being defective.

  20. Comparing Poisson Sigma Model with A-model

    NASA Astrophysics Data System (ADS)

    Bonechi, F.; Cattaneo, A. S.; Iraso, R.

    2016-10-01

    We discuss the A-model as a gauge fixing of the Poisson Sigma Model with target a symplectic structure. We complete the discussion in [4], where a gauge fixing defined by a compatible complex structure was introduced, by showing how to recover the A-model hierarchy of observables in terms of the AKSZ observables. Moreover, we discuss the off-shell supersymmetry of the A-model as a residual BV symmetry of the gauge fixed PSM action.

  1. Improved Denoising via Poisson Mixture Modeling of Image Sensor Noise.

    PubMed

    Zhang, Jiachao; Hirakawa, Keigo

    2017-04-01

    This paper describes a study aimed at comparing the real image sensor noise distribution to the models of noise often assumed in image denoising designs. A quantile analysis in the pixel, wavelet transform, and variance stabilization domains reveals that the tails of the Poisson, signal-dependent Gaussian, and Poisson-Gaussian models are too short to capture real sensor noise behavior. A new Poisson mixture noise model is proposed to correct the mismatch of tail behavior. Based on the fact that noise model mismatch results in image denoising that undersmoothes real sensor data, we propose a mixture-of-Poisson denoising method to remove the denoising artifacts without affecting image details, such as edges and textures. Experiments with real sensor data verify that denoising for real image sensor data is indeed improved by this new technique.
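
    The variance stabilization domain referred to above is typically reached with the Anscombe transform, which maps Poisson counts to data with variance close to one; heavy tails that survive this transform are the kind of mismatch the paper reports. A small self-check:

        import numpy as np

        def anscombe(x):
            return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

        rng = np.random.default_rng(4)
        counts = rng.poisson(20.0, size=100_000)
        print(anscombe(counts).var())   # close to 1 when the data really are Poisson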

  2. Modeling laser velocimeter signals as triply stochastic Poisson processes

    NASA Technical Reports Server (NTRS)

    Mayo, W. T., Jr.

    1976-01-01

    Previous models of laser Doppler velocimeter (LDV) systems have not adequately described dual-scatter signals in a manner useful for analysis and simulation of low-level photon-limited signals. At low photon rates, an LDV signal at the output of a photomultiplier tube is a compound nonhomogeneous filtered Poisson process, whose intensity function is another (slower) Poisson process with the nonstationary rate and frequency parameters controlled by a random flow (slowest) process. In the present paper, generalized Poisson shot noise models are developed for low-level LDV signals. Theoretical results useful in detection error analysis and simulation are presented, along with measurements of burst amplitude statistics. Computer generated simulations illustrate the difference between Gaussian and Poisson models of low-level signals.
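
    The nonhomogeneous (rate-modulated) layer of such models can be simulated with Lewis-Shedler thinning; a sketch with a hypothetical burst-like rate profile:

        import numpy as np

        def thin_nhpp(rate_fn, rate_max, t_end, rng):
            # candidates from a homogeneous process at rate_max, each kept
            # with probability rate_fn(t) / rate_max
            t, events = 0.0, []
            while True:
                t += rng.exponential(1.0 / rate_max)
                if t > t_end:
                    return np.array(events)
                if rng.uniform() < rate_fn(t) / rate_max:
                    events.append(t)

        rng = np.random.default_rng(5)
        rate = lambda t: 1.0 + 50.0 * np.exp(-0.5 * (t - 5.0) ** 2)  # a "burst"
        print(len(thin_nhpp(rate, 51.0, 10.0, rng)))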

  3. Validation of the Poisson Stochastic Radiative Transfer Model

    NASA Technical Reports Server (NTRS)

    Zhuravleva, Tatiana; Marshak, Alexander

    2004-01-01

    A new approach to validation of the Poisson stochastic radiative transfer method is proposed. In contrast to other validations of stochastic models, the main parameter of the Poisson model responsible for cloud geometrical structure - the cloud aspect ratio - is determined entirely by matching measurements and calculations of the direct solar radiation. If measurements of the direct solar radiation are unavailable, it is shown that there is a range of aspect ratios that allows the stochastic model to accurately approximate the average measurements of surface downward and cloud-top upward fluxes. Realizations of the fractionally integrated cascade model are taken as a prototype of real measurements.

  4. Markov modulated Poisson process models incorporating covariates for rainfall intensity.

    PubMed

    Thayakaran, R; Ramesh, N I

    2013-01-01

    Time series of rainfall bucket tip times at the Beaufort Park station, Bracknell, in the UK are modelled by a class of Markov modulated Poisson processes (MMPP) which may be thought of as a generalization of the Poisson process. Our main focus in this paper is to investigate the effects of including covariate information into the MMPP model framework on statistical properties. In particular, we look at three types of time-varying covariates namely temperature, sea level pressure, and relative humidity that are thought to be affecting the rainfall arrival process. Maximum likelihood estimation is used to obtain the parameter estimates, and likelihood ratio tests are employed in model comparison. Simulated data from the fitted model are used to make statistical inferences about the accumulated rainfall in the discrete time interval. Variability of the daily Poisson arrival rates is studied.
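
    A minimal simulation sketch of a two-state MMPP of the kind fitted here: a continuous-time Markov chain alternates between a dry and a wet state, and tips arrive as a Poisson process whose rate depends on the current state (all rate values assumed):

        import numpy as np

        rng = np.random.default_rng(6)
        q = np.array([0.1, 0.5])      # rate of leaving state 0 (dry) and 1 (wet)
        lam = np.array([0.02, 2.0])   # tip arrival rate within each state

        t, state, t_end, tips = 0.0, 0, 1000.0, []
        while t < t_end:
            seg_end = min(t + rng.exponential(1.0 / q[state]), t_end)
            # arrivals within one sojourn: homogeneous Poisson at rate lam[state]
            n = rng.poisson(lam[state] * (seg_end - t))
            tips.extend(np.sort(rng.uniform(t, seg_end, n)))
            t, state = seg_end, 1 - state
        print(len(tips))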

  5. Regression models for mixed Poisson and continuous longitudinal data.

    PubMed

    Yang, Ying; Kang, Jian; Mao, Kai; Zhang, Jie

    2007-09-10

    In this article we develop regression models that are flexible in two respects: they evaluate the influence of covariates on mixed Poisson and continuous responses, and they evaluate how the correlation between the Poisson response and the continuous response changes over time. A scenario is proposed for dealing with regression models of mixed continuous and Poisson responses when heterogeneous variance and correlation that change over time are present. Our general approach is first to build a joint marginal model and to check whether the variance and correlation change over time via a likelihood ratio test. If they do, we apply a suitable data transformation to properly evaluate the influence of the covariates on the mixed responses. The proposed methods are applied to the Interstitial Cystitis Data Base (ICDB) cohort study, where we find that the positive correlations change significantly over time, which suggests that heterogeneous variances should not be ignored in modelling and inference.

  6. The Poisson-Lognormal Model for Bibliometric/Scientometric Distributions.

    ERIC Educational Resources Information Center

    Stewart, John A.

    1994-01-01

    Illustrates that the Poisson-lognormal model provides good fits to a diverse set of distributions commonly studied in bibliometrics and scientometrics. Topics discussed include applications to the empirical data sets related to the laws of Lotka, Bradford, and Zipf; causal processes that could generate lognormal distributions; and implications for…

  7. Wide-area traffic: The failure of Poisson modeling

    SciTech Connect

    Paxson, V.; Floyd, S.

    1994-08-01

    Network arrivals are often modeled as Poisson processes for analytic simplicity, even though a number of traffic studies have shown that packet interarrivals are not exponentially distributed. The authors evaluate 21 wide-area traces, investigating a number of wide-area TCP arrival processes (session and connection arrivals, FTPDATA connection arrivals within FTP sessions, and TELNET packet arrivals) to determine the error introduced by modeling them using Poisson processes. The authors find that user-initiated TCP session arrivals, such as remote-login and file-transfer, are well-modeled as Poisson processes with fixed hourly rates, but that other connection arrivals deviate considerably from Poisson; that modeling TELNET packet interarrivals as exponential grievously underestimates the burstiness of TELNET traffic, but using the empirical Tcplib[DJCME92] interarrivals preserves burstiness over many time scales; and that FTPDATA connection arrivals within FTP sessions come bunched into "connection bursts", the largest of which are so large that they completely dominate FTPDATA traffic. Finally, they offer some preliminary results regarding how the findings relate to the possible self-similarity of wide-area traffic.
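
    The kind of check behind these conclusions is simple to reproduce: test whether interarrival times are exponential (the Poisson-process hypothesis) with a Kolmogorov-Smirnov test. In this sketch, synthetic log-normal gaps stand in for bursty packet traffic; the test rejects the fitted exponential decisively.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        gaps = rng.lognormal(mean=0.0, sigma=1.5, size=2000)  # bursty, non-exponential
        scale = gaps.mean()                                   # fitted exponential scale
        print(stats.kstest(gaps, "expon", args=(0.0, scale))) # tiny p-value: reject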

  8. Translated Poisson Mixture Model for Stratification Learning (PREPRINT)

    DTIC Science & Technology

    2007-09-01

    Translated Poisson Mixture Model for Stratification Learning. Gloria Haro, Dept. Teoria ... The approach addresses stratification learning in high dimensional data analysis in general, and computer vision and image analysis in particular.

  9. Studying Resist Stochastics with the Multivariate Poisson Propagation Model

    DOE PAGES

    Naulleau, Patrick; Anderson, Christopher; Chao, Weilun; ...

    2014-01-01

    Progress in the ultimate performance of extreme ultraviolet resists has arguably decelerated in recent years, suggesting an approach to stochastic limits both in photon counts and in material parameters. Here we report on the performance of a variety of leading extreme ultraviolet resists, both with and without chemical amplification. The measured performance is compared to stochastic modeling results using the Multivariate Poisson Propagation Model. The results show that the best materials are indeed nearing modeled performance limits.

  10. Mixed Poisson distributions in exact solutions of stochastic autoregulation models.

    PubMed

    Iyer-Biswas, Srividya; Jayaprakash, C

    2014-11-01

    In this paper we study the interplay between stochastic gene expression and system design using simple stochastic models of autoactivation and autoinhibition. Using the Poisson representation, a technique whose particular usefulness in the context of nonlinear gene regulation models we elucidate, we find exact results for these feedback models in the steady state. Further, we exploit this representation to analyze the parameter spaces of each model, determine which dimensionless combinations of rates are the shape determinants for each distribution, and thus demarcate where in the parameter space qualitatively different behaviors arise. These behaviors include power-law-tailed distributions, bimodal distributions, and sub-Poisson distributions. We also show how these distribution shapes change when the strength of the feedback is tuned. Using our results, we reexamine how well the autoinhibition and autoactivation models serve their conventionally assumed roles as paradigms for noise suppression and noise exploitation, respectively.

  11. A Generalized QMRA Beta-Poisson Dose-Response Model.

    PubMed

    Xie, Gang; Roiko, Anne; Stratton, Helen; Lemckert, Charles; Dunn, Peter K; Mengersen, Kerrie

    2016-10-01

    Quantitative microbial risk assessment (QMRA) is widely accepted for characterizing the microbial risks associated with food, water, and wastewater. Single-hit dose-response models are the most commonly used dose-response models in QMRA. Denoting PI(d) as the probability of infection at a given mean dose d, a three-parameter generalized QMRA beta-Poisson dose-response model, PI(d|α,β,r*), is proposed in which the minimum number of organisms required for causing infection, Kmin, is not fixed, but a random variable following a geometric distribution with parameter 0 < r* ≤ 1. The single-hit beta-Poisson model, PI(d|α,β), is a special case of the generalized model with Kmin = 1 (which implies r* = 1). The generalized beta-Poisson model is based on a conceptual model with greater detail in the dose-response mechanism. Since a maximum likelihood solution is not easily available, a likelihood-free approximate Bayesian computation (ABC) algorithm is employed for parameter estimation. By fitting the generalized model to four experimental data sets from the literature, this study reveals that the posterior median r* estimates produced fall short of meeting the required condition of r* = 1 for the single-hit assumption. However, three out of four data sets fitted by the generalized models could not achieve an improvement in goodness of fit. These combined results imply that, at least in some cases, a single-hit assumption for characterizing the dose-response process may not be appropriate, but that the more complex models may be difficult to support, especially if the sample size is small. The three-parameter generalized model provides a possibility to investigate the mechanism of a dose-response process in greater detail than is possible under a single-hit model.
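
    For orientation, the familiar approximate form of the two-parameter single-hit beta-Poisson curve that the paper generalizes is PI(d) = 1 - (1 + d/beta)^(-alpha); the three-parameter model and its ABC fit are not reproduced here, and the parameter values below are assumed.

        import numpy as np

        def beta_poisson(dose, alpha, beta):
            return 1.0 - (1.0 + dose / beta) ** (-alpha)

        dose = np.logspace(-1, 4, 6)
        print(beta_poisson(dose, alpha=0.25, beta=40.0))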

  12. A Poisson common factor model for projecting mortality and life expectancy jointly for females and males.

    PubMed

    Li, Jackie

    2013-01-01

    We examine the application of a Poisson common factor model for the projection of mortality jointly for females and males. The model structure is an extension of the classical Lee-Carter method in which there is a common factor for the aggregate population, while a number of additional sex-specific factors can also be incorporated. The Poisson distribution is a natural choice for modelling the number of deaths, and its use provides a formal statistical framework for model selection, parameter estimation, and data analysis. Our results for Australian data show that this model leads to projected life expectancy values similar to those produced by the separate projection of mortality for females and males, but possesses the additional advantage of ensuring that the projected male-to-female ratio for death rates at each age converges to a constant. Moreover, the randomness of the corresponding residuals indicates that the model fit is satisfactory.
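
    For context, the classical Lee-Carter decomposition that this model extends fits log m(x,t) = a(x) + b(x) k(t) from the leading singular vectors of the centered log-rate matrix; the Poisson common factor version replaces the SVD fit with Poisson likelihood estimation and shares k(t) across sexes, a step omitted in this sketch on synthetic rates.

        import numpy as np

        rng = np.random.default_rng(8)
        ages, years = 10, 40
        k_true = np.linspace(1.0, -1.0, years)
        b_true = rng.dirichlet(np.ones(ages))
        log_m = -4.0 + np.outer(b_true, k_true) + 0.01 * rng.normal(size=(ages, years))

        a = log_m.mean(axis=1)
        U, s, Vt = np.linalg.svd(log_m - a[:, None], full_matrices=False)
        b, k = U[:, 0], s[0] * Vt[0]
        b, k = b / b.sum(), k * b.sum()             # usual constraint: sum(b) = 1
        print(abs(np.corrcoef(k, k_true)[0, 1]))    # sign of the factor is arbitrary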

  13. Lindley frailty model for a class of compound Poisson processes

    NASA Astrophysics Data System (ADS)

    Kadilar, Gamze Özel; Ata, Nihal

    2013-10-01

    The Lindley distribution has gained importance in survival analysis for its similarity to the exponential distribution and its allowance for different shapes of the hazard function. Frailty models provide an alternative to the proportional hazards model in which misspecified or omitted covariates are described by an unobservable random variable. Although the frailty distribution is generally assumed to be continuous, in some circumstances it is appropriate to consider discrete frailty distributions. In this paper, frailty models with a discrete compound Poisson process for Lindley-distributed failure times are introduced. Survival functions are derived and maximum likelihood estimation procedures for the parameters are studied. The fit of the models to an earthquake data set from Turkey is then examined.

  14. Application of the Hyper-Poisson Generalized Linear Model for Analyzing Motor Vehicle Crashes.

    PubMed

    Khazraee, S Hadi; Sáez-Castillo, Antonio Jose; Geedipally, Srinivas Reddy; Lord, Dominique

    2015-05-01

    The hyper-Poisson distribution can handle both over- and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation-specific and dependent on model covariates. This study's objective is to examine the potential applicability of a newly proposed generalized linear model framework for the hyper-Poisson distribution in analyzing motor vehicle crash count data. The hyper-Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway-highway crossings in Korea, characterized by underdispersion. The results of this study are promising. When fitted to the Toronto data set, the goodness-of-fit measures indicated that the hyper-Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper-Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway-Maxwell-Poisson model previously developed for the same data set. The advantages of the hyper-Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulties in handling underdispersed data, the hyper-Poisson model can handle both over- and underdispersed crash data. Although not a major issue for the Conway-Maxwell-Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the case of this new model.
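
    For reference, the hyper-Poisson pmf underlying the GLM is P(Y = y) = lambda^y / ((gamma)_y 1F1(1; gamma; lambda)), with (gamma)_y the Pochhammer symbol; gamma = 1 recovers the Poisson, gamma > 1 gives overdispersion and gamma < 1 underdispersion. A sketch of the variance-to-mean ratio:

        import numpy as np
        from scipy.special import hyp1f1, poch

        def hyper_poisson_pmf(y, lam, gamma):
            y = np.asarray(y)
            return lam ** y / (poch(gamma, y) * hyp1f1(1.0, gamma, lam))

        y = np.arange(40)
        for g in (0.5, 1.0, 2.0):
            p = hyper_poisson_pmf(y, lam=5.0, gamma=g)
            m = np.sum(y * p)
            print(g, round((np.sum(y ** 2 * p) - m ** 2) / m, 3))  # brackets 1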

  15. Modeling the number of car theft using Poisson regression

    NASA Astrophysics Data System (ADS)

    Zulkifli, Malina; Ling, Agnes Beh Yen; Kasim, Maznah Mat; Ismail, Noriszura

    2016-10-01

    Regression analysis is the most popular statistical method for expressing the relationship between a response variable and covariates. The aim of this paper is to evaluate the factors that influence the number of car thefts using a Poisson regression model. The paper focuses on the car thefts that occurred in districts of Peninsular Malaysia. Two groups of factors are considered, namely district descriptive factors and socio-demographic factors. The results show that Bumiputera composition, Chinese composition, other ethnic composition, foreign migration, the number of residents aged between 25 and 64, the number of employed persons and the number of unemployed persons are the factors that most influence car theft cases. This information is very useful for law enforcement departments, insurance companies and car owners seeking to reduce and limit car theft cases in Peninsular Malaysia.

  16. Linear-Nonlinear-Poisson Models of Primate Choice Dynamics

    PubMed Central

    Corrado, Greg S; Sugrue, Leo P; Sebastian Seung, H; Newsome, William T

    2005-01-01

    The equilibrium phenomenon of matching behavior traditionally has been studied in stationary environments. Here we attempt to uncover the local mechanism of choice that gives rise to matching by studying behavior in a highly dynamic foraging environment. In our experiments, 2 rhesus monkeys (Macaca mulatta) foraged for juice rewards by making eye movements to one of two colored icons presented on a computer monitor, each rewarded on dynamic variable-interval schedules. Using a generalization of Wiener kernel analysis, we recover a compact mechanistic description of the impact of past reward on future choice in the form of a Linear-Nonlinear-Poisson model. We validate this model through rigorous predictive and generative testing. Compared to our earlier work with this same data set, this model proves to be a better description of choice behavior and is more tightly correlated with putative neural value signals. Refinements over previous models include hyperbolic (as opposed to exponential) temporal discounting of past rewards, and differential (as opposed to fractional) comparisons of option value. Through numerical simulation we find that within this class of strategies, the model parameters employed by animals are very close to those that maximize reward harvesting efficiency. PMID:16596981
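
    A toy sketch of the Linear-Nonlinear-Poisson pipeline described here: past rewards pass through a linear filter (exponential below, where the paper argues for hyperbolic discounting), a static nonlinearity maps the filtered value difference to a choice probability, and choices are emitted stochastically (the discrete-trial analogue of the Poisson stage). Schedules and parameters are assumed.

        import numpy as np

        rng = np.random.default_rng(9)
        trials, span = 500, 50
        kernel = np.exp(-np.arange(span) / 10.0)       # linear stage (assumed form)
        rewards = np.zeros((2, trials))
        choices = np.zeros(trials, dtype=int)

        for t in range(trials):
            lo = max(0, t - span)
            v = [np.dot(rewards[i, lo:t][::-1], kernel[: t - lo]) for i in (0, 1)]
            p1 = 1.0 / (1.0 + np.exp(-(v[1] - v[0])))  # nonlinear stage
            c = int(rng.uniform() < p1)                # stochastic choice stage
            choices[t] = c
            rewards[c, t] = rng.uniform() < (0.4 if c == 1 else 0.2)  # assumed schedule
        print(choices.mean())   # drifts toward the richer option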

  17. Extension of the application of Conway-Maxwell-Poisson models: analyzing traffic crash data exhibiting underdispersion.

    PubMed

    Lord, Dominique; Geedipally, Srinivas Reddy; Guikema, Seth D

    2010-08-01

    The objective of this article is to evaluate the performance of the COM-Poisson GLM for analyzing crash data exhibiting underdispersion (when conditional on the mean). The COM-Poisson distribution, originally developed in 1962, has recently been reintroduced by statisticians for analyzing count data subjected to either over- or underdispersion. Over the last year, the COM-Poisson GLM has been evaluated in the context of crash data analysis and it has been shown that the model performs as well as the Poisson-gamma model for crash data exhibiting overdispersion. To accomplish the objective of this study, several COM-Poisson models were estimated using crash data collected at 162 railway-highway crossings in South Korea between 1998 and 2002. This data set has been shown to exhibit underdispersion when models linking crash data to various explanatory variables are estimated. The modeling results were compared to those produced from the Poisson and gamma probability models documented in a previous published study. The results of this research show that the COM-Poisson GLM can handle crash data when the modeling output shows signs of underdispersion. Finally, they also show that the model proposed in this study provides better statistical performance than the gamma probability and the traditional Poisson models, at least for this data set.
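
    For reference, the COM-Poisson pmf behind the GLM is P(Y = y) = lambda^y / ((y!)^nu Z(lambda, nu)), with the normalizing constant evaluated here by truncated summation; nu > 1 yields underdispersion (the situation in the Korean crossing data), nu < 1 overdispersion, and nu = 1 the Poisson.

        import numpy as np
        from scipy.special import gammaln

        def com_poisson_pmf(y, lam, nu, ymax=500):
            grid = np.arange(ymax + 1)
            logw = grid * np.log(lam) - nu * gammaln(grid + 1.0)
            logZ = np.log(np.exp(logw - logw.max()).sum()) + logw.max()
            y = np.asarray(y)
            return np.exp(y * np.log(lam) - nu * gammaln(y + 1.0) - logZ)

        y = np.arange(20)
        p = com_poisson_pmf(y, lam=3.0, nu=1.5)
        m = np.sum(y * p)
        print((np.sum(y ** 2 * p) - m ** 2) / m)   # variance/mean < 1: underdispersed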

  18. Understanding Poisson regression.

    PubMed

    Hayat, Matthew J; Higgins, Melinda

    2014-04-01

    Nurse investigators often collect study data in the form of counts. Traditional methods of data analysis have historically approached analysis of count data either as if the count data were continuous and normally distributed or with dichotomization of the counts into the categories of occurred or did not occur. These outdated methods for analyzing count data have been replaced with more appropriate statistical methods that make use of the Poisson probability distribution, which is useful for analyzing count data. The purpose of this article is to provide an overview of the Poisson distribution and its use in Poisson regression. Assumption violations for the standard Poisson regression model are addressed with alternative approaches, including addition of an overdispersion parameter or negative binomial regression. An illustrative example is presented with an application from the ENSPIRE study, and regression modeling of comorbidity data is included for illustrative purposes.

  19. Receiver design for SPAD-based VLC systems under Poisson-Gaussian mixed noise model.

    PubMed

    Mao, Tianqi; Wang, Zhaocheng; Wang, Qi

    2017-01-23

    The single-photon avalanche diode (SPAD) is a promising photosensor because of its high sensitivity to optical signals in weak-illumination environments. Recently, it has drawn much attention from researchers in visible light communications (VLC). However, the existing literature deals only with a simplified channel model that considers the Poisson noise introduced by the SPAD but neglects other noise sources. Specifically, when an analog SPAD detector is applied, there exists Gaussian thermal noise generated by the transimpedance amplifier (TIA) and the digital-to-analog converter (D/A). Therefore, in this paper, we propose an SPAD-based VLC system with pulse-amplitude-modulation (PAM) under a Poisson-Gaussian mixed noise model, where Gaussian-distributed thermal noise at the receiver is also investigated. The closed-form conditional likelihood of the received signals is derived using the Laplace transform and the saddle-point approximation method, and the corresponding quasi-maximum-likelihood (quasi-ML) detector is proposed. Furthermore, the Poisson-Gaussian-distributed signals are converted to Gaussian variables with the aid of the generalized Anscombe transform (GAT), leading to an equivalent additive white Gaussian noise (AWGN) channel, and a hard-decision-based detector is invoked. Simulation results demonstrate that the proposed GAT-based detector can reduce the computational complexity with marginal performance loss compared with the proposed quasi-ML detector, and that both detectors are capable of accurately demodulating the SPAD-based PAM signals.
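
    The GAT step mentioned above can be sketched in its standard Murtagh-Starck-Bijaoui form for data z = a*p + n, with p Poisson and n Gaussian with mean g and standard deviation sigma; after the transform the noise is approximately Gaussian with unit variance, which is what licenses the AWGN-style hard-decision detector.

        import numpy as np

        def gat(z, a=1.0, sigma=0.0, g=0.0):
            arg = a * z + 0.375 * a ** 2 + sigma ** 2 - a * g
            return (2.0 / a) * np.sqrt(np.maximum(arg, 0.0))

        rng = np.random.default_rng(10)
        z = rng.poisson(30.0, 100_000) + rng.normal(0.0, 2.0, 100_000)
        print(gat(z, a=1.0, sigma=2.0).var())   # close to 1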

  20. Characterizing the performance of the Conway-Maxwell Poisson generalized linear model.

    PubMed

    Francis, Royce A; Geedipally, Srinivas Reddy; Guikema, Seth D; Dhavala, Soma Sekhar; Lord, Dominique; LaRocca, Sarah

    2012-01-01

    Count data are pervasive in many areas of risk analysis; deaths, adverse health outcomes, infrastructure system failures, and traffic accidents are all recorded as count events, for example. Risk analysts often wish to estimate the probability distribution for the number of discrete events as part of doing a risk assessment. Traditional count data regression models of the type often used in risk assessment for this problem suffer from limitations due to the assumed variance structure. A more flexible model based on the Conway-Maxwell Poisson (COM-Poisson) distribution was recently proposed, a model that has the potential to overcome the limitations of the traditional model. However, the statistical performance of this new model has not yet been fully characterized. This article assesses the performance of a maximum likelihood estimation method for fitting the COM-Poisson generalized linear model (GLM). The objectives of this article are to (1) characterize the parameter estimation accuracy of the MLE implementation of the COM-Poisson GLM, and (2) estimate the prediction accuracy of the COM-Poisson GLM using simulated data sets. The results of the study indicate that the COM-Poisson GLM is flexible enough to model under-, equi-, and overdispersed data sets with different sample mean values. The results also show that the COM-Poisson GLM yields accurate parameter estimates. The COM-Poisson GLM provides a promising and flexible approach for performing count data regression.

  1. The Dependent Poisson Race Model and Modeling Dependence in Conjoint Choice Experiments

    ERIC Educational Resources Information Center

    Ruan, Shiling; MacEachern, Steven N.; Otter, Thomas; Dean, Angela M.

    2008-01-01

    Conjoint choice experiments are used widely in marketing to study consumer preferences amongst alternative products. We develop a class of choice models, belonging to the class of Poisson race models, that describe a "random utility" which lends itself to a process-based description of choice. The models incorporate a dependence structure which…

  2. Comparison of a hydrogel model to the Poisson-Boltzmann cell model

    NASA Astrophysics Data System (ADS)

    Claudio, Gil C.; Kremer, Kurt; Holm, Christian

    2009-09-01

    We have investigated a single charged microgel in aqueous solution using a combined simulation model and Poisson-Boltzmann theory. In the simulations we use a coarse-grained charged bead-spring model in a dielectric continuum, with explicit counterions and full electrostatic interactions under periodic and nonperiodic boundary conditions. The Poisson-Boltzmann hydrogel model is that of a single charged colloid confined to a spherical cell where the counterions are allowed to enter the uniformly charged sphere. In order to investigate the origin of the differences these two models may give, we performed a variety of simulations of different hydrogel models designed to test for the influence of charge correlations, excluded volume interactions, the arrangement of charges along the polymer chains, and thermal fluctuations in the chains of the gel. These intermediate models systematically allow us to connect the Poisson-Boltzmann cell model to the bead-spring hydrogel model in a stepwise manner, thereby testing various approximations. Overall, the simulation results of all these hydrogel models are in good agreement, especially for the number of confined counterions within the gel. Our results support the applicability of the Poisson-Boltzmann cell model for studying the ionic properties of hydrogels under dilute conditions.

  3. A note on robust inference from a conditional Poisson model.

    PubMed

    Solís-Trápala, Ivonne L; Farewell, Vernon T

    2006-02-01

    A randomised controlled trial to evaluate a training programme for physician-patient communication required the analysis of paired count data. The impact of departures from the Poisson assumption when paired count data are analysed through use of a conditional likelihood is illustrated. A simple approach to providing robust inference is outlined and illustrated.
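
    The conditioning trick that underlies such a model is worth spelling out: if Y1 ~ Poisson(mu1) and Y2 ~ Poisson(mu2) are independent, then given the pair total n = y1 + y2, Y1 is Binomial(n, mu1/(mu1 + mu2)), so a paired comparison of rates reduces to an exact binomial test of p = 1/2 (the counts below are hypothetical).

        from scipy.stats import binomtest

        y1, y2 = 17, 8                        # hypothetical paired counts
        print(binomtest(y1, y1 + y2, p=0.5).pvalue)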

  4. Fractional Poisson: a simple dose-response model for human norovirus.

    PubMed

    Messner, Michael J; Berger, Philip; Nappier, Sharon P

    2014-10-01

    This study utilizes old and new Norovirus (NoV) human challenge data to model the dose-response relationship for human NoV infection. The combined data set is used to update estimates from a previously published beta-Poisson dose-response model that includes parameters for virus aggregation and for a beta-distribution that describes variable susceptibility among hosts. The quality of the beta-Poisson model is examined and a simpler model is proposed. The new model (fractional Poisson) characterizes hosts as either perfectly susceptible or perfectly immune, requiring a single parameter (the fraction of perfectly susceptible hosts) in place of the two-parameter beta-distribution. A second parameter is included to account for virus aggregation in the same fashion as it is added to the beta-Poisson model. Infection probability is simply the product of the probability of nonzero exposure (at least one virus or aggregate is ingested) and the fraction of susceptible hosts. The model is computationally simple and appears to be well suited to the data from the NoV human challenge studies. The model's deviance is similar to that of the beta-Poisson, but with one parameter, rather than two. As a result, the Akaike information criterion favors the fractional Poisson over the beta-Poisson model. At low, environmentally relevant exposure levels (<100), estimation error is small for the fractional Poisson model; however, caution is advised because no subjects were challenged at such a low dose. New low-dose data would be of great value to further clarify the NoV dose-response relationship and to support improved risk assessment for environmentally relevant exposures.
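
    Reading the description literally, the fractional Poisson curve is the product of the susceptible fraction f and the probability of ingesting at least one virus or aggregate; with Poisson-distributed intake of mean d/a aggregates (a = mean aggregate size, a reading assumed here), that is P(d) = f * (1 - exp(-d/a)).

        import numpy as np

        def fractional_poisson(dose, f, a=1.0):
            return f * (1.0 - np.exp(-dose / a))

        dose = np.logspace(-1, 3, 5)
        print(fractional_poisson(dose, f=0.7, a=1.0))   # saturates at f, not at 1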

  5. Modeling animal-vehicle collisions using diagonal inflated bivariate Poisson regression.

    PubMed

    Lao, Yunteng; Wu, Yao-Jan; Corey, Jonathan; Wang, Yinhai

    2011-01-01

    Two types of animal-vehicle collision (AVC) data are commonly adopted for AVC-related risk analysis research: reported AVC data and carcass removal data. One issue with these two data sets is that previous studies found significant discrepancies between them. In order to model these two types of data together and provide a better understanding of highway AVCs, this study adopts a diagonal inflated bivariate Poisson regression method, an inflated version of the bivariate Poisson regression model, to fit the reported AVC and carcass removal data sets collected in Washington State during 2002-2006. The diagonal inflated bivariate Poisson model can not only model paired data with correlation, but also handle under- or over-dispersed data sets. Compared with three other types of models, double Poisson, bivariate Poisson, and zero-inflated double Poisson, the diagonal inflated bivariate Poisson model demonstrates its capability of fitting two data sets with remarkable overlapping portions resulting from the same stochastic process. Therefore, the diagonal inflated bivariate Poisson model provides researchers a new approach to investigating AVCs from a different perspective involving the three distribution parameters (λ(1), λ(2) and λ(3)). The modeling results show the impacts of traffic elements, geometric design and geographic characteristics on the occurrences of both reported AVCs and carcass removals. It is found that increases in some associated factors, such as speed limit, annual average daily traffic, and shoulder width, will increase the numbers of reported AVCs and carcass removals. Conversely, the presence of some geometric factors, such as rolling and mountainous terrain, will decrease the number of reported AVCs.
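
    The three parameters at the heart of this model come from the standard bivariate Poisson construction, easily checked by simulation: with independent Z1 ~ Poisson(l1), Z2 ~ Poisson(l2), Z3 ~ Poisson(l3), the pair (X1, X2) = (Z1 + Z3, Z2 + Z3) has means l1 + l3 and l2 + l3 and covariance l3 (the diagonal inflation step is omitted here).

        import numpy as np

        rng = np.random.default_rng(11)
        l1, l2, l3 = 2.0, 1.5, 1.0
        z1, z2, z3 = (rng.poisson(l, 100_000) for l in (l1, l2, l3))
        x1, x2 = z1 + z3, z2 + z3
        print(x1.mean(), x2.mean(), np.cov(x1, x2)[0, 1])   # ~3.0, ~2.5, ~1.0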

  6. Bayesian inference on multiscale models for poisson intensity estimation: applications to photon-limited image denoising.

    PubMed

    Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George

    2009-08-01

    We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.

  7. A comparison between Poisson and zero-inflated Poisson regression models with an application to number of black spots in Corriedale sheep.

    PubMed

    Naya, Hugo; Urioste, Jorge I; Chang, Yu-Mei; Rodrigues-Motta, Mariana; Kremer, Roberto; Gianola, Daniel

    2008-01-01

    Dark spots in the fleece area are often associated with dark fibres in wool, which limits its competitiveness with other textile fibres. Field data from a sheep experiment in Uruguay revealed an excess number of zeros for dark spots. We compared the performance of four Poisson and zero-inflated Poisson (ZIP) models under four simulation scenarios. All models performed reasonably well under the same scenario for which the data were simulated. The deviance information criterion favoured a Poisson model with residual, while the ZIP model with a residual gave estimates closer to their true values under all simulation scenarios. Both Poisson and ZIP models with an error term at the regression level performed better than their counterparts without such an error. Field data from Corriedale sheep were analysed with Poisson and ZIP models with residuals. Parameter estimates were similar for both models. Although the posterior distribution of the sire variance was skewed due to a small number of rams in the dataset, the median of this variance suggested a scope for genetic selection. The main environmental factor was the age of the sheep at shearing. In summary, age related processes seem to drive the number of dark spots in this breed of sheep.
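
    As a point of reference for the Poisson-versus-ZIP comparison, a frequentist ZIP fit on simulated data can be sketched with statsmodels (an assumed dependency); this is not the authors' Bayesian sire model, and the covariate and parameter values are invented.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(2)
n = 2000
age = rng.integers(1, 7, n).astype(float)   # e.g., age at shearing, 1-6 years
X = sm.add_constant(age)

# Zero inflation: structural zeros with probability 0.3, otherwise a
# Poisson count whose log-rate depends on age.
lam = np.exp(-0.5 + 0.25 * age)
y = rng.poisson(lam) * (rng.random(n) > 0.3)

zip_fit = ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1))).fit(disp=0)
poi_fit = sm.Poisson(y, X).fit(disp=0)
print(zip_fit.params)                       # inflation logit + count coefficients
print("AIC  ZIP:", zip_fit.aic, " Poisson:", poi_fit.aic)
```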

  9. Zero-inflated Poisson model based likelihood ratio test for drug safety signal detection.

    PubMed

    Huang, Lan; Zheng, Dan; Zalkikar, Jyoti; Tiwari, Ram

    2017-02-01

    In recent decades, numerous methods have been developed for data mining of large drug safety databases, such as the Food and Drug Administration's (FDA's) Adverse Event Reporting System, where data matrices are formed with drugs as columns and adverse events as rows. Often, a large number of cells in these data matrices have zero counts. Some of them are "true zeros," indicating that the drug-adverse event pair cannot occur, while the remaining zeros simply indicate that the pair has not occurred, or has not been reported, yet. In this paper, a zero-inflated Poisson (ZIP) model based likelihood ratio test (LRT) method is proposed to identify drug-adverse event pairs that have disproportionately high reporting rates, which are also called signals. The maximum likelihood estimates of the ZIP model parameters are obtained using the expectation-maximization algorithm. The ZIP-based LRT is also modified to handle stratified analyses for binary and categorical covariates (e.g. gender and age) in the data. The proposed method is shown to asymptotically control the type I error and false discovery rate, and its finite-sample performance for signal detection is evaluated through a simulation study. The simulation results show that the ZIP-based LRT performs similarly to the Poisson-based LRT when the estimated percentage of true zeros in the database is small. Both methods are applied to six selected drugs, from the 2006 to 2011 Adverse Event Reporting System database, with varying percentages of observed zero-count cells.
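
    For intuition, the expectation-maximization update for a zero-inflated Poisson without covariates fits in a few lines; this is the generic textbook EM, not the paper's stratified likelihood ratio test implementation.

```python
import numpy as np

def zip_em(y, n_iter=200):
    """EM for a zero-inflated Poisson: mixing weight pi (structural zeros)
    and Poisson rate lam; intercept-only, no covariates."""
    y = np.asarray(y, dtype=float)
    pi, lam = 0.5, max(y.mean(), 1e-8)            # crude starting values
    for _ in range(n_iter):
        # E-step: posterior probability that an observed zero is structural.
        tau = np.where(y == 0, pi / (pi + (1 - pi) * np.exp(-lam)), 0.0)
        # M-step: weighted updates of the mixing weight and the rate.
        pi = tau.mean()
        lam = ((1 - tau) * y).sum() / (1 - tau).sum()
    return pi, lam

rng = np.random.default_rng(3)
y = rng.poisson(2.0, 5000) * (rng.random(5000) > 0.4)   # 40% structural zeros
print(zip_em(y))                                        # close to (0.4, 2.0)
```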

  10. Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.

    PubMed

    Chatzis, Sotirios P; Andreou, Andreas S

    2015-11-01

    Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics, and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made Bayesian approaches appear unattractive, and thus underdeveloped, in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and exhibit its effectiveness using publicly available benchmark data sets.

  11. Analysis of Blood Transfusion Data Using Bivariate Zero-Inflated Poisson Model: A Bayesian Approach

    PubMed Central

    Mohammadi, Tayeb; Kheiri, Soleiman; Sedehi, Morteza

    2016-01-01

    Recognizing the factors affecting the number of blood donations and blood deferrals has a major impact on blood transfusion services. There is a positive correlation between the variables "number of blood donations" and "number of blood deferrals": as the number of returns for donation increases, so does the number of deferrals. On the other hand, because many donors never return to donate, there is an excess zero frequency for both of the above-mentioned variables. In this study, in order to account for the correlation and to explain the excess zeros, the bivariate zero-inflated Poisson regression model was used for joint modeling of the number of blood donations and the number of blood deferrals. The data were analyzed using the Bayesian approach, applying noninformative priors, in the presence and absence of covariates. The model parameters, that is, the correlation, the zero-inflation parameter, and the regression coefficients, were estimated through MCMC simulation. Finally, the double Poisson, bivariate Poisson, and bivariate zero-inflated Poisson models were fitted to the data and compared using the deviance information criterion (DIC). The results showed that the bivariate zero-inflated Poisson regression model fitted the data better than the other models. PMID:27703493

  13. Simulation on Poisson and negative binomial models of count road accident modeling

    NASA Astrophysics Data System (ADS)

    Sapuan, M. S.; Razali, A. M.; Zamzuri, Z. H.; Ibrahim, K.

    2016-11-01

    Accident count data have often been shown to exhibit overdispersion; in addition, the data may contain excess zero counts. A simulation study was conducted to create scenarios in which accidents happen at a T-junction, with the dependent variable of the generated data following a given distribution, namely the Poisson or negative binomial distribution, for sample sizes ranging from n=30 to n=500. The study objective was accomplished by fitting Poisson regression, negative binomial regression, and hurdle negative binomial models to the simulated data. Model validation showed that, for each sample size, not every model fits the data well, even when the data were generated from that model's own distribution, especially for larger sample sizes. Furthermore, larger sample sizes produced more zero accident counts in the dataset.
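
    A hedged sketch of the kind of comparison the abstract describes, on simulated overdispersed counts rather than the paper's T-junction scenarios; statsmodels is an assumed dependency and the parameter values are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 300
x = rng.normal(size=n)
X = sm.add_constant(x)

# Overdispersed counts: a gamma-mixed Poisson, i.e. negative binomial data.
lam = np.exp(0.5 + 0.6 * x) * rng.gamma(shape=2.0, scale=0.5, size=n)
y = rng.poisson(lam)

poi = sm.Poisson(y, X).fit(disp=0)
nb = sm.NegativeBinomial(y, X).fit(disp=0)
print("AIC  Poisson:", poi.aic, " NegBin:", nb.aic)   # NegBin should win here
```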

  14. Image Deconvolution with Uncertainty Estimates: Hierarchical Multiscale Models for Poisson Images

    NASA Astrophysics Data System (ADS)

    Esch, D. N.; Karovska, M. K.; H-S CfA Astron.; Stat. Working Group Collaboration

    2002-12-01

    Have you ever wished you could obtain error maps for image deconvolutions? The work described here, currently under development, provides a method for doing exactly this. Also, the procedures described here can effectively restore point or extended sources, and there is little tuning necessary on the part of the user. We will first survey the currently used methods for image restoration. Our method models images as Poisson processes, with pixel intensities equal to the true image intensities convolved with the PSFs. The true image intensities are modeled as a mixture of point sources and a Haar wavelet decomposition of the remaining image. The point sources are modeled as small circular Gaussian densities with fixed locations assigned by the user. The particular wavelet decomposition of the remaining image is the only one that allows the Poisson likelihood to be factored into separate parts corresponding to the wavelet basis, ranging from coarse to fine resolution. Each of these factors in the likelihood can be reparametrized as a split of the intensity from the previous, coarser factor. We assign a prior to these splits, which can be viewed as smoothing parameters, and then fit the model using Markov chain Monte Carlo (MCMC) methods. This fitting method allows for lower levels of smoothing on the image, and is desirable for our model because we are trying to effectively summarize, not simply maximize, the density. Our method largely automates the choice of tuning parameters in the model, and therefore makes the procedure largely user-independent. It also produces information about the certainty of the estimates, which can be summarized with error maps or multiple images showing the variability of the posterior distribution. Our procedure has an additional strength in that it can effectively handle extended sources without shrinking them down to a few localized points. Simulations and examples using real data will be presented and compared with other methods.

  15. Prediction of forest fires occurrences with area-level Poisson mixed models.

    PubMed

    Boubeta, Miguel; Lombardía, María José; Marey-Pérez, Manuel Francisco; Morales, Domingo

    2015-05-01

    The number of fires in forest areas of Galicia (north-west Spain) during the summer period is quite high. Local authorities are interested in analyzing the factors that explain this phenomenon. Poisson regression models are good tools for describing and predicting the number of fires per forest area. This work employs area-level Poisson mixed models to analyze real data on fires in forest areas. A parametric bootstrap method is applied for estimating the mean squared errors of the fire predictors. The developed methodology and software are applied to a real data set of fires in forest areas of Galicia.

  16. Poisson regression for modeling count and frequency outcomes in trauma research.

    PubMed

    Gagnon, David R; Doron-LaMarca, Susan; Bell, Margret; O'Farrell, Timothy J; Taft, Casey T

    2008-10-01

    The authors describe how the Poisson regression method for analyzing count or frequency outcome variables can be applied in trauma studies. The outcome of interest in trauma research may represent a count of the number of incidents of behavior occurring in a given time interval, such as acts of physical aggression or substance abuse. Traditional linear regression approaches assume a normally distributed outcome variable with equal variances over the range of predictor variables, and may not be optimal for modeling count outcomes. An application of Poisson regression is presented using data from a study of intimate partner aggression among male patients in an alcohol treatment program and their female partners. Results of Poisson regression and linear regression models are compared.

  17. Group Sparse Additive Models

    PubMed Central

    Yin, Junming; Chen, Xi; Xing, Eric P.

    2016-01-01

    We consider the problem of sparse variable selection in nonparametric additive models, with the prior knowledge of the structure among the covariates to encourage those variables within a group to be selected jointly. Previous works either study the group sparsity in the parametric setting (e.g., group lasso), or address the problem in the nonparametric setting without exploiting the structural information (e.g., sparse additive models). In this paper, we present a new method, called group sparse additive models (GroupSpAM), which can handle group sparsity in additive models. We generalize the ℓ1/ℓ2 norm to Hilbert spaces as the sparsity-inducing penalty in GroupSpAM. Moreover, we derive a novel thresholding condition for identifying the functional sparsity at the group level, and propose an efficient block coordinate descent algorithm for constructing the estimate. We demonstrate by simulation that GroupSpAM substantially outperforms the competing methods in terms of support recovery and prediction accuracy in additive models, and also conduct a comparative experiment on a real breast cancer dataset.

  18. Functional Generalized Additive Models.

    PubMed

    McLean, Mathew W; Hooker, Giles; Staicu, Ana-Maria; Scheipl, Fabian; Ruppert, David

    2014-01-01

    We introduce the functional generalized additive model (FGAM), a novel regression model for association studies between a scalar response and a functional predictor. We model the link-transformed mean response as the integral with respect to t of F{X(t), t} where F(·,·) is an unknown regression function and X(t) is a functional covariate. Rather than having an additive model in a finite number of principal components as in Müller and Yao (2008), our model incorporates the functional predictor directly and thus our model can be viewed as the natural functional extension of generalized additive models. We estimate F(·,·) using tensor-product B-splines with roughness penalties. A pointwise quantile transformation of the functional predictor is also considered to ensure each tensor-product B-spline has observed data on its support. The methods are evaluated using simulated data and their predictive performance is compared with other competing scalar-on-function regression alternatives. We illustrate the usefulness of our approach through an application to brain tractography, where X(t) is a signal from diffusion tensor imaging at position, t, along a tract in the brain. In one example, the response is disease-status (case or control) and in a second example, it is the score on a cognitive test. R code for performing the simulations and fitting the FGAM can be found in supplemental materials available online.

  19. A Conway-Maxwell-Poisson (CMP) model to address data dispersion on positron emission tomography.

    PubMed

    Santarelli, Maria Filomena; Della Latta, Daniele; Scipioni, Michele; Positano, Vincenzo; Landini, Luigi

    2016-10-01

    Positron emission tomography (PET) in medicine exploits the properties of positron-emitting unstable nuclei. The pairs of γ-rays emitted after annihilation are revealed by coincidence detectors and stored as projections in a sinogram. It is well known that radioactive decay follows a Poisson distribution; however, deviation from Poisson statistics occurs on PET projection data prior to reconstruction due to physical effects, measurement errors, correction of deadtime, scatter, and random coincidences. A model that describes the statistical behavior of measured and corrected PET data can aid in understanding the statistical nature of the data: it is a prerequisite to develop efficient reconstruction and processing methods and to reduce noise. The deviation from Poisson statistics in PET data could be described by the Conway-Maxwell-Poisson (CMP) distribution model, which is characterized by the centring parameter λ and the dispersion parameter ν, the latter quantifying the deviation from a Poisson distribution model. In particular, the parameter ν allows quantifying over-dispersion (ν<1) or under-dispersion (ν>1) of data. A simple and efficient method for λ and ν parameters estimation is introduced and assessed using Monte Carlo simulation for a wide range of activity values. The application of the method to simulated and experimental PET phantom data demonstrated that the CMP distribution parameters could detect deviation from the Poisson distribution both in raw and corrected PET data. It may be usefully implemented in image reconstruction algorithms and quantitative PET data analysis, especially in low counting emission data, as in dynamic PET data, where the method demonstrated the best accuracy.
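
    The CMP probability mass function is simple enough to evaluate directly with a truncated normalizer, which makes the dispersion behavior of ν easy to verify numerically; the λ and ν values below are arbitrary.

```python
import numpy as np
from scipy.special import gammaln

def cmp_pmf(k, lam, nu, kmax=200):
    """Conway-Maxwell-Poisson pmf: P(X=k) proportional to lam**k / (k!)**nu,
    with the normalizer truncated at kmax terms."""
    ks = np.arange(kmax)
    logw = ks * np.log(lam) - nu * gammaln(ks + 1)
    logZ = np.logaddexp.reduce(logw)
    return np.exp(k * np.log(lam) - nu * gammaln(k + 1) - logZ)

k = np.arange(200)
for nu in (0.7, 1.0, 1.5):            # over-, equi-, and under-dispersed
    p = cmp_pmf(k, lam=5.0, nu=nu)
    m = (k * p).sum()
    v = ((k - m) ** 2 * p).sum()
    print(f"nu={nu}: mean={m:.2f}, var/mean={v / m:.2f}")
```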

  20. Poisson Growth Mixture Modeling of Intensive Longitudinal Data: An Application to Smoking Cessation Behavior

    ERIC Educational Resources Information Center

    Shiyko, Mariya P.; Li, Yuelin; Rindskopf, David

    2012-01-01

    Intensive longitudinal data (ILD) have become increasingly common in the social and behavioral sciences; count variables, such as the number of daily smoked cigarettes, are frequently used outcomes in many ILD studies. We demonstrate a generalized extension of growth mixture modeling (GMM) to Poisson-distributed ILD for identifying qualitatively…

  1. Poisson-Helmholtz-Boltzmann model of the electric double layer: analysis of monovalent ionic mixtures.

    PubMed

    Bohinc, Klemen; Shrestha, Ahis; Brumen, Milan; May, Sylvio

    2012-03-01

    In the classical mean-field description of the electric double layer, known as the Poisson-Boltzmann model, ions interact exclusively through their Coulomb potential. Ion specificity can arise through solvent-mediated, nonelectrostatic interactions between ions. We employ the Yukawa pair potential to model the presence of nonelectrostatic interactions. The combination of Yukawa and Coulomb potential on the mean-field level leads to the Poisson-Helmholtz-Boltzmann model, which employs two auxiliary potentials: one electrostatic and the other nonelectrostatic. In the present work we apply the Poisson-Helmholtz-Boltzmann model to ionic mixtures, consisting of monovalent cations and anions that exhibit different Yukawa interaction strengths. As a specific example we consider a single charged surface in contact with a symmetric monovalent electrolyte. From the minimization of the mean-field free energy we derive the Poisson-Boltzmann and Helmholtz-Boltzmann equations. These nonlinear equations can be solved analytically in the weak perturbation limit. This together with numerical solutions in the nonlinear regime suggests an intricate interplay between electrostatic and nonelectrostatic interactions. The structure and free energy of the electric double layer depends sensitively on the Yukawa interaction strengths between the different ion types and on the nonelectrostatic interactions of the mobile ions with the surface.

  2. Poisson-Based Inference for Perturbation Models in Adaptive Spelling Training

    ERIC Educational Resources Information Center

    Baschera, Gian-Marco; Gross, Markus

    2010-01-01

    We present an inference algorithm for perturbation models based on Poisson regression. The algorithm is designed to handle unclassified input with multiple errors described by independent mal-rules. This knowledge representation provides an intelligent tutoring system with local and global information about a student, such as error classification…

  3. Fused Lasso Additive Model

    PubMed Central

    Petersen, Ashley; Witten, Daniela; Simon, Noah

    2016-01-01

    We consider the problem of predicting an outcome variable using p covariates that are measured on n independent observations, in a setting in which additive, flexible, and interpretable fits are desired. We propose the fused lasso additive model (FLAM), in which each additive function is estimated to be piecewise constant with a small number of adaptively-chosen knots. FLAM is the solution to a convex optimization problem, for which a simple algorithm with guaranteed convergence to a global optimum is provided. FLAM is shown to be consistent in high dimensions, and an unbiased estimator of its degrees of freedom is proposed. We evaluate the performance of FLAM in a simulation study and on two data sets. Supplemental materials are available online, and the R package flam is available on CRAN. PMID:28239246

  4. The Poisson model limits in NBA basketball: Complexity in team sports

    NASA Astrophysics Data System (ADS)

    Martín-González, Juan Manuel; de Saá Guerra, Yves; García-Manso, Juan Manuel; Arriaza, Enrique; Valverde-Estévez, Teresa

    2016-12-01

    Team sports are frequently studied by researchers. There is a presumption that scoring in basketball is a random process that can be described by the Poisson model. Basketball is a collaboration-opposition sport, where the non-linear local interactions among players are reflected in the evolution of the score that ultimately determines the winner. In the NBA, the outcomes of close games are often decided in the last minute, where fouls play a main role. We examined 6130 NBA games in order to analyze the time intervals between baskets and the scoring dynamics. The numbers of baskets (n) over most time intervals (ΔT) follow a Poisson distribution, but some (e.g., ΔT = 10 s, n > 3) behave as a power law. The Poisson distribution includes most baskets in any game, in most game situations, but in close games in the last minute the numbers of events are distributed following a power law. The number of events can be adjusted by a mixture of the two distributions. In close games, both teams try to maintain their advantage solely in order to reach the last minute: a completely different game. For this reason, we propose to use the Poisson model as a reference; the complex dynamics emerge at the limits of this model.

  5. How does Poisson kriging compare to the popular BYM model for mapping disease risks?

    PubMed Central

    Goovaerts, Pierre; Gebreab, Samson

    2008-01-01

    Background Geostatistical techniques are now available to account for spatially varying population sizes and spatial patterns in the mapping of disease rates. At first glance, Poisson kriging represents an attractive alternative to increasingly popular Bayesian spatial models in that: 1) it is easier to implement and less CPU intensive, and 2) it accounts for the size and shape of geographical units, avoiding the limitations of conditional auto-regressive (CAR) models commonly used in Bayesian algorithms while allowing for the creation of isopleth risk maps. The two approaches, however, have never been compared in simulation studies, and there is a need to better understand their merits in terms of accuracy and precision of disease risk estimates. Results The Besag, York and Mollié (BYM) model and Poisson kriging (point and area-to-area implementations) were applied to age-adjusted lung and cervix cancer mortality rates recorded for white females in two contrasted county geographies: 1) the state of Indiana, which consists of 92 counties of fairly similar size and shape, and 2) four states in the Western US (Arizona, California, Nevada and Utah) forming a set of 118 counties that are vastly different geographical units. The spatial support (i.e. point versus area) has a much smaller impact on the results than the statistical methodology (i.e. geostatistical versus Bayesian models). Differences between methods are particularly pronounced in the Western US dataset: the BYM model yields a smoother risk surface and a prediction variance that changes mainly as a function of the predicted risk, while the Poisson kriging variance increases in large sparsely populated counties. Simulation studies showed that the geostatistical approach yields smaller prediction errors, more precise and accurate probability intervals, and allows a better discrimination between counties with high and low mortality risks. The benefit of area-to-area Poisson kriging increases as the county geography becomes more heterogeneous.

  6. Guidelines for Use of the Approximate Beta-Poisson Dose-Response Model.

    PubMed

    Xie, Gang; Roiko, Anne; Stratton, Helen; Lemckert, Charles; Dunn, Peter K; Mengersen, Kerrie

    2016-10-05

    For dose-response analysis in quantitative microbial risk assessment (QMRA), the exact beta-Poisson model is a two-parameter mechanistic dose-response model with parameters α > 0 and β > 0, which involves the Kummer confluent hypergeometric function. Evaluation of a hypergeometric function is a computational challenge. Denoting P_I(d) as the probability of infection at a given mean dose d, the widely used dose-response model P_I(d) = 1 − (1 + d/β)^(−α) is an approximate formula for the exact beta-Poisson model. Notwithstanding the required conditions α ≪ β and β ≫ 1, issues related to the validity and approximation accuracy of this approximate formula have remained largely ignored in practice, partly because these conditions are too general to provide clear guidance. Consequently, this study proposes a probability measure Pr(0 < r < 1 | α̂, β̂) as a validity measure (r is a random variable that follows a gamma distribution; α̂ and β̂ are the maximum likelihood estimates of α and β in the approximate model); and the constraint condition β̂ > (22α̂)^0.50 for 0.02 < α̂ < 2 as a rule of thumb to ensure an accurate approximation (e.g., Pr(0 < r < 1 | α̂, β̂) > 0.99). This validity measure and rule of thumb were validated by application to all the completed beta-Poisson models (related to 85 data sets) from the QMRA community portal (QMRA Wiki). The results showed that the higher the probability Pr(0 < r < 1 | α̂, β̂), the better the approximation. The results further showed that, among the total 85 models examined, 68 models were identified as valid approximate model applications, which all had a near perfect match to the corresponding exact beta-Poisson model dose-response curve.
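
    Both the approximate dose-response formula and the proposed validity checks are easy to compute. In the sketch below the gamma parameterization of r (shape α̂, rate β̂) is an assumption, and the parameter values are hypothetical.

```python
import numpy as np
from scipy.stats import gamma

def approx_beta_poisson(d, alpha, beta):
    """Approximate beta-Poisson dose-response: P_I(d) = 1 - (1 + d/beta)**(-alpha)."""
    return 1.0 - (1.0 + d / beta) ** (-alpha)

def validity_measure(alpha_hat, beta_hat):
    """Pr(0 < r < 1 | alpha_hat, beta_hat), assuming r ~ Gamma(shape=alpha_hat,
    rate=beta_hat); this parameterization is an assumption here."""
    return gamma.cdf(1.0, a=alpha_hat, scale=1.0 / beta_hat)

def rule_of_thumb(alpha_hat, beta_hat):
    """Accuracy heuristic from the abstract: beta_hat > (22*alpha_hat)**0.5
    for 0.02 < alpha_hat < 2."""
    return 0.02 < alpha_hat < 2 and beta_hat > (22 * alpha_hat) ** 0.5

alpha_hat, beta_hat = 0.25, 16.0      # hypothetical maximum likelihood estimates
print(approx_beta_poisson(np.array([1.0, 10.0, 100.0]), alpha_hat, beta_hat))
print(validity_measure(alpha_hat, beta_hat), rule_of_thumb(alpha_hat, beta_hat))
```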

  7. An Application of the Poisson Race Model to Confidence Calibration

    ERIC Educational Resources Information Center

    Merkle, Edgar C.; Van Zandt, Trisha

    2006-01-01

    In tasks as diverse as stock market predictions and jury deliberations, a person's feelings of confidence in the appropriateness of different choices often impact that person's final choice. The current study examines the mathematical modeling of confidence calibration in a simple dual-choice task. Experiments are motivated by an accumulator…

  8. Study of non-Hodgkin's lymphoma mortality associated with industrial pollution in Spain, using Poisson models

    PubMed Central

    Ramis, Rebeca; Vidal, Enrique; García-Pérez, Javier; Lope, Virginia; Aragonés, Nuria; Pérez-Gómez, Beatriz; Pollán, Marina; López-Abente, Gonzalo

    2009-01-01

    Background Non-Hodgkin's lymphomas (NHLs) have been linked to proximity to industrial areas, but evidence regarding the health risk posed by residence near pollutant industries is very limited. The European Pollutant Emission Register (EPER) is a public register that furnishes valuable information on industries that release pollutants to air and water, along with their geographical location. This study sought to explore the relationship between NHL mortality in small areas in Spain and environmental exposure to pollutant emissions from EPER-registered industries, using three Poisson-regression-based mathematical models. Methods Observed cases were drawn from mortality registries in Spain for the period 1994-2003. Industries were grouped into the following sectors: energy; metal; mineral; organic chemicals; waste; paper; food; and use of solvents. Populations having an industry within a radius of 1, 1.5, or 2 kilometres from the municipal centroid were deemed to be exposed. Municipalities outside those radii were considered as reference populations. The relative risks (RRs) associated with proximity to pollutant industries were estimated using the following methods: Poisson regression; mixed Poisson model with random provincial effect; and spatial autoregressive modelling (BYM model). Results Only proximity of paper industries to population centres (≤2 km) could be associated with a greater risk of NHL mortality (mixed model: RR:1.24, 95% CI:1.09-1.42; BYM model: RR:1.21, 95% CI:1.01-1.45; Poisson model: RR:1.16, 95% CI:1.06-1.27). Spatial models yielded higher estimates. Conclusion The reported association between exposure to air pollution from the paper, pulp and board industry and NHL mortality is independent of the model used. Inclusion of spatial random effects terms in the risk estimate improves the study of associations between environmental exposures and mortality. The EPER could be of great utility when studying the effects of industrial pollution.

  9. A marginalized zero-inflated Poisson regression model with overall exposure effects.

    PubMed

    Long, D Leann; Preisser, John S; Herring, Amy H; Golin, Carol E

    2014-12-20

    The zero-inflated Poisson (ZIP) regression model is often employed in public health research to examine the relationships between exposures of interest and a count outcome exhibiting many zeros, in excess of the amount expected under sampling from a Poisson distribution. The regression coefficients of the ZIP model have latent class interpretations, which correspond to a susceptible subpopulation at risk for the condition with counts generated from a Poisson distribution and a non-susceptible subpopulation that provides the extra or excess zeros. The ZIP model parameters, however, are not well suited for inference targeted at marginal means, specifically, in quantifying the effect of an explanatory variable in the overall mixture population. We develop a marginalized ZIP model approach for independent responses to model the population mean count directly, allowing straightforward inference for overall exposure effects and empirical robust variance estimation for overall log-incidence density ratios. Through simulation studies, the performance of maximum likelihood estimation of the marginalized ZIP model is assessed and compared with other methods of estimating overall exposure effects. The marginalized ZIP model is applied to a recent study of a motivational interviewing-based safer sex counseling intervention, designed to reduce unprotected sexual act counts.

  10. The limiting problem of the drift-diffusion-Poisson model with discontinuous p-n-junctions

    NASA Astrophysics Data System (ADS)

    Lian, Songzhe; Yuan, Hongjun; Cao, Chunling; Gao, Wenjie

    2008-11-01

    In this paper, the authors consider the limiting problem of the drift-diffusion-Poisson model for semiconductors. Unlike previous papers, the model considered involves special doping profiles D that are allowed to have jump discontinuities and to change sign, while D² is required to be Lipschitz continuous. The existence, uniqueness, and large-time asymptotic behavior of the global (in time) solutions are given.

  11. Full Bayes Poisson gamma, Poisson lognormal, and zero inflated random effects models: Comparing the precision of crash frequency estimates.

    PubMed

    Aguero-Valverde, Jonathan

    2013-01-01

    In recent years, complex statistical modeling approaches have been proposed to handle the unobserved heterogeneity and the excess of zeros frequently found in crash data, including random effects and zero inflated models. This research compares random effects, zero inflated, and zero inflated random effects models using a full Bayes hierarchical approach. The models are compared not just in terms of goodness-of-fit measures but also in terms of the precision of posterior crash frequency estimates, since the precision of these estimates is vital for ranking sites for engineering improvement. Fixed-over-time random effects models are also compared to independent-over-time random effects models. For the crash dataset analyzed, it was found that once the random effects are included in the zero inflated models, the probability of being in the zero state is drastically reduced, and the zero inflated models degenerate to their non-zero-inflated counterparts. Also, fixing the random effects over time significantly increases the fit of the models and the precision of the crash frequency estimates. The rankings of the fixed-over-time random effects models were found to be very consistent among themselves. In addition, the results show that by fixing the random effects over time, the standard errors of the crash frequency estimates are significantly reduced for the majority of the segments at the top of the ranking.

  12. Bivariate Poisson models with varying offsets: an application to the paired mitochondrial DNA dataset.

    PubMed

    Su, Pei-Fang; Mau, Yu-Lin; Guo, Yan; Li, Chung-I; Liu, Qi; Boice, John D; Shyr, Yu

    2017-03-01

    To assess the effect of chemotherapy on mitochondrial genome mutations in cancer survivors and their offspring, a study sequenced the full mitochondrial genome and determined the mitochondrial DNA (mtDNA) heteroplasmic mutation rate. To build a model for counts of heteroplasmic mutations in mothers and their offspring, bivariate Poisson regression was used to examine the relationship between mutation count and clinical information while accounting for the paired correlation. However, if the sequencing depth is not adequate, only a limited fraction of the mtDNA will be available for variant calling. The classical bivariate Poisson regression model treats the offset term as equal within pairs; thus, it cannot be applied directly. In this research, we propose an extended bivariate Poisson regression model with a more general offset term to adjust for the length of the accessible genome for each observation. We evaluate the performance of the proposed method with comprehensive simulations, and the results show that the regression model provides unbiased parameter estimates. The use of the model is also demonstrated using the paired mtDNA dataset.

  13. Investigation of time and weather effects on crash types using full Bayesian multivariate Poisson lognormal models.

    PubMed

    El-Basyouny, Karim; Barua, Sudip; Islam, Md Tazul

    2014-12-01

    Previous research shows that various weather elements have significant effects on crash occurrence and risk; however, little is known about how these elements affect different crash types. Consequently, this study investigates the impact of weather elements and sudden extreme snow or rain weather changes on crash type. Multivariate models were used for seven crash types using five years of daily weather and crash data collected for the entire City of Edmonton. In addition, the yearly trend and random variation of parameters across the years were analyzed by using four different modeling formulations. The proposed models were estimated in a full Bayesian context via Markov Chain Monte Carlo simulation. The multivariate Poisson lognormal model with yearly varying coefficients provided the best fit for the data according to Deviance Information Criteria. Overall, results showed that temperature and snowfall were statistically significant with intuitive signs (crashes decrease with increasing temperature; crashes increase as snowfall intensity increases) for all crash types, while rainfall was mostly insignificant. Previous snow showed mixed results, being statistically significant and positively related to certain crash types, while negatively related or insignificant in other cases. Maximum wind gust speed was found mostly insignificant with a few exceptions that were positively related to crash type. Major snow or rain events following a dry weather condition were highly significant and positively related to three crash types: Follow-Too-Close, Stop-Sign-Violation, and Ran-Off-Road crashes. The day-of-the-week dummy variables were statistically significant, indicating a possible weekly variation in exposure. Transportation authorities might use the above results to improve road safety by providing drivers with information regarding the risk of certain crash types for a particular weather condition.

  14. Three-dimensional morphological modelling of concrete using multiscale Poisson polyhedra.

    PubMed

    Escoda, J; Jeulin, D; Willot, F; Toulemonde, C

    2015-04-01

    This paper aims at developing a random morphological model for concrete microstructures. A 3D image of concrete is obtained by microtomography and is used in conjunction with the concrete formulation to build and validate the model through morphological measurements. The morphological model is made up of two phases, corresponding to the matrix, or cement paste and to the aggregates. The set of aggregates in the sample is modelled as a combination of Poisson polyhedra of different scales. An algorithm is introduced to generate polyhedra packings in the continuum space. The latter is validated with morphological measurements.

  15. Scaling the Poisson Distribution

    ERIC Educational Resources Information Center

    Farnsworth, David L.

    2014-01-01

    We derive the additive property of Poisson random variables directly from the probability mass function. An important application of the additive property to quality testing of computer chips is presented.
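
    The additive property in question is that independent Poisson variables sum to a Poisson variable with the summed rate; the convolution argument is short (a standard derivation, not quoted from the article):

```latex
% X ~ Poisson(lambda) and Y ~ Poisson(mu) independent => X + Y ~ Poisson(lambda + mu)
\begin{aligned}
P(X+Y=k) &= \sum_{j=0}^{k} P(X=j)\,P(Y=k-j)
          = \sum_{j=0}^{k} \frac{e^{-\lambda}\lambda^{j}}{j!}\cdot
            \frac{e^{-\mu}\mu^{k-j}}{(k-j)!} \\
         &= \frac{e^{-(\lambda+\mu)}}{k!}\sum_{j=0}^{k}\binom{k}{j}\lambda^{j}\mu^{k-j}
          = \frac{e^{-(\lambda+\mu)}(\lambda+\mu)^{k}}{k!},
\end{aligned}
```

    using the binomial theorem in the last step.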

  16. A Spatial Poisson Hurdle Model for Exploring Geographic Variation in Emergency Department Visits

    PubMed Central

    Neelon, Brian; Ghosh, Pulak; Loebs, Patrick F.

    2012-01-01

    We develop a spatial Poisson hurdle model to explore geographic variation in emergency department (ED) visits while accounting for zero inflation. The model consists of two components: a Bernoulli component that models the probability of any ED use (i.e., at least one ED visit per year), and a truncated Poisson component that models the number of ED visits given use. Together, these components address both the abundance of zeros and the right-skewed nature of the nonzero counts. The model has a hierarchical structure that incorporates patient- and area-level covariates, as well as spatially correlated random effects for each areal unit. Because regions with high rates of ED use are likely to have high expected counts among users, we model the spatial random effects via a bivariate conditionally autoregressive (CAR) prior, which introduces dependence between the components and provides spatial smoothing and sharing of information across neighboring regions. Using a simulation study, we show that modeling the between-component correlation reduces bias in parameter estimates. We adopt a Bayesian estimation approach, and the model can be fit using standard Bayesian software. We apply the model to a study of patient and neighborhood factors influencing emergency department use in Durham County, North Carolina. PMID:23543242
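
    A minimal non-spatial sketch of the two hurdle components on simulated data; the paper's patient- and area-level covariates, CAR spatial random effects, and Bayesian fitting are all omitted, and the parameter values are invented.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

rng = np.random.default_rng(5)

# Simulate hurdle data: Bernoulli "any use", truncated Poisson "visits given use".
n, p_use, lam_true = 5000, 0.35, 1.8
use = rng.random(n) < p_use
y = np.zeros(n, dtype=int)
k = use.sum()
draws = rng.poisson(lam_true, (k, 50))             # rejection-sample y >= 1
y[use] = draws[np.arange(k), (draws > 0).argmax(axis=1)]

# Component 1: probability of any use (Bernoulli MLE).
print("p_use MLE:", (y > 0).mean())

# Component 2: zero-truncated Poisson MLE for the positive counts.
pos = y[y > 0]
def neg_loglik(lam):
    return -(pos * np.log(lam) - lam
             - np.log(1 - np.exp(-lam)) - gammaln(pos + 1)).sum()
res = minimize_scalar(neg_loglik, bounds=(1e-6, 50), method="bounded")
print("lambda MLE:", res.x)                        # close to 1.8
```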

  18. A combined Poisson cluster-cascade stochastic model for temporal precipitation

    NASA Astrophysics Data System (ADS)

    Paschalis, A.; Molnar, P.; Fatichi, S.; Burlando, P.

    2011-12-01

    Stochastic precipitation simulation is a fundamental tool in hydrology to obtain high resolution time series of precipitation for ungauged basins, or for sites where data records are short or of coarse temporal resolution. Different stochastic modeling tools have been developed in the last decades in order to simulate precipitation time series that satisfactorily reproduce observed statistical properties. The two most widely used classes of models in hydrology are Poisson cluster processes (e.g. Neyman-Scott, Bartlett-Lewis models) and multiplicative random cascades (MRC). It has been recognized that these two classes of models behave differently across time scales. The Poisson cluster models are generally more suitable for coarser time scales (typically larger than one hour) since they reproduce the clustering nature of precipitation events. However, due to their construction they are unable to capture small-scale within-storm variability. On the other hand, MRCs have been widely used as disaggregation tools due to their ability to capture small-scale features of precipitation through the self-similar cascading structure across scales, which phenomenologically resembles the energy cascade in turbulence. For precipitation this self-similar behavior breaks at coarser temporal scales (typically larger than one day), which is a limitation for MRC models. A combined Poisson cluster-cascade stochastic model is presented to simulate point precipitation across a wide range of temporal scales, from annual down to a few minutes. The model attempts to exploit the strengths of both modeling methods. It consists of a Poisson cluster model as an external process for coarser temporal scales, coupled with an MRC model used as a downscaling procedure to capture variability at high temporal resolutions of hydrological interest (i.e. on the order of minutes). First we investigate the performance of the two classes of models across scales in terms of marginal intensity distributions.
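
    A toy simulation of the external Poisson cluster (Neyman-Scott style) layer is shown below; the multiplicative cascade disaggregation step is not included, and all rate parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(6)

def neyman_scott(t_max, storm_rate, cells_mean, cell_lag_scale):
    """Cell arrival times of a Neyman-Scott cluster process: storm origins
    arrive as a Poisson process; each storm spawns a Poisson number of rain
    cells at exponentially distributed lags after the origin."""
    origins = rng.uniform(0, t_max, rng.poisson(storm_rate * t_max))
    cells = [t0 + rng.exponential(cell_lag_scale, rng.poisson(cells_mean))
             for t0 in origins]
    times = np.sort(np.concatenate(cells)) if cells else np.array([])
    return times[times < t_max]

cells = neyman_scott(t_max=1000.0, storm_rate=0.05, cells_mean=4.0,
                     cell_lag_scale=2.0)
# Clustering shows up as overdispersed cell counts per fixed window.
counts, _ = np.histogram(cells, bins=np.arange(0.0, 1001.0, 10.0))
print(counts.mean(), counts.var())    # variance well above the mean
```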

  19. Simulation of high tensile Poisson's ratios of articular cartilage with a finite element fibril-reinforced hyperelastic model.

    PubMed

    García, José Jaime

    2008-06-01

    Analyses with a finite element fibril-reinforced hyperelastic model were undertaken in this study to simulate the high tensile Poisson's ratios that have been consistently documented in experimental studies of articular cartilage. The solid phase was represented by an isotropic matrix reinforced with four sets of fibrils: two aligned in orthogonal directions and two oblique fibrils in a symmetric configuration with respect to the orthogonal axes. Two distinct hyperelastic functions were used to represent the matrix and the fibrils. Results of the analyses showed that only by considering non-orthogonal fibrils was it possible to represent Poisson's ratios higher than one. Constraints in the grips and finite deformations played a minor role in the calculated Poisson's ratio. This study also showed that the model with oblique fibrils at 45 degrees was able to represent the significant differences in Poisson's ratios near 1 documented in experimental studies. However, even considering constraints in the grips, this model was not capable of simulating the Poisson's ratios near 2 that have been reported in other studies. The study also confirmed that only with a high ratio between the stiffness of the fibrils and that of the matrix was it possible to obtain high Poisson's ratios for the tissue. The results suggest that analytical models with a finite number of fibrils are appropriate for representing the main mechanical effects of articular cartilage.

  20. Application of spatial Poisson process models to air mass thunderstorm rainfall

    NASA Technical Reports Server (NTRS)

    Eagleson, P. S.; Fennessy, N. M.; Wang, Qinliang; Rodriguez-Iturbe, I.

    1987-01-01

    Eight years of summer storm rainfall observations from 93 stations in and around the 154 sq km Walnut Gulch catchment of the Agricultural Research Service, U.S. Department of Agriculture, in Arizona are processed to yield the total station depths of 428 storms. Statistical analysis of these random fields yields the first two moments, the spatial correlation and variance functions, and the spatial distribution of total rainfall for each storm. The absolute and relative worth of three Poisson models is evaluated by comparing their predictions of the spatial distribution of storm rainfall with observations from the second half of the sample. The effect of interstorm parameter variation is examined.

  1. Kinetic models in n -dimensional Euclidean spaces: From the Maxwellian to the Poisson kernel

    NASA Astrophysics Data System (ADS)

    Zadehgol, Abed

    2015-06-01

    In this work, minimal kinetic theories based on unconventional entropy functions, H ∼ ln f (Burg entropy) for 2D and H ∼ f^(1−2/n) (Tsallis entropy) for nD with n ≥ 3, are studied. These entropy functions were originally derived by Boghosian et al. [Phys. Rev. E 68, 025103 (2003)] as a basis for discrete-velocity and lattice Boltzmann models for incompressible fluid dynamics. The present paper extends the entropic models of Boghosian et al. and shows that the explicit form of the equilibrium distribution function (EDF) of their models, in the continuous-velocity limit, can be identified with the Poisson kernel of the Poisson integral formula. The conservation and Navier-Stokes equations are recovered at low Mach numbers, and it is shown that rest particles can be used to rectify the speed of sound of the extended models. Fourier series expansion of the EDF is used to evaluate the discretization errors of the model. It is shown that the expansion coefficients of the Fourier series coincide with the velocity moments of the model. Employing two-, three-, and four-dimensional (2D, 3D, and 4D) complex systems, the real velocity space is mapped into hypercomplex spaces and it is shown that the velocity moments can be evaluated, using the Poisson integral formula, in the hypercomplex space. For practical applications, a 3D projection of the 4D model is presented, and the existence of an H theorem for the discrete model is investigated. The theoretical results have been verified by simulating the following benchmark problems: (1) the Kelvin-Helmholtz instability of thin shear layers in a doubly periodic domain and (2) the 3D flow of incompressible fluid in a lid-driven cubic cavity. The present results are in agreement with previous works, while they show better stability of the proposed kinetic model, as compared with BGK-type (single relaxation time) lattice Boltzmann models.

  2. Integrate-and-fire vs Poisson models of LGN input to V1 cortex: noisier inputs reduce orientation selectivity.

    PubMed

    Lin, I-Chun; Xing, Dajun; Shapley, Robert

    2012-12-01

    One of the reasons the visual cortex has attracted the interest of computational neuroscience is that it has well-defined inputs. The lateral geniculate nucleus (LGN) of the thalamus is the source of visual signals to the primary visual cortex (V1). Most large-scale cortical network models approximate the spike trains of LGN neurons as simple Poisson point processes. However, many studies have shown that neurons in the early visual pathway are capable of spiking with high temporal precision and their discharges are not Poisson-like. To gain an understanding of how response variability in the LGN influences the behavior of V1, we study response properties of model V1 neurons that receive purely feedforward inputs from LGN cells modeled either as noisy leaky integrate-and-fire (NLIF) neurons or as inhomogeneous Poisson processes. We first demonstrate that the NLIF model is capable of reproducing many experimentally observed statistical properties of LGN neurons. Then we show that a V1 model in which the LGN input to a V1 neuron is modeled as a group of NLIF neurons produces higher orientation selectivity than the one with Poisson LGN input. The second result implies that statistical characteristics of LGN spike trains are important for V1's function. We conclude that physiologically motivated models of V1 need to include more realistic LGN spike trains that are less noisy than inhomogeneous Poisson processes.

  4. The Gamma-Poisson model as a statistical method to determine if micro-organisms are randomly distributed in a food matrix.

    PubMed

    Toft, Nils; Innocent, Giles T; Mellor, Dominic J; Reid, Stuart W J

    2006-02-01

    The Gamma-Poisson model, i.e., a Poisson distribution whose parameter λ is Gamma distributed, has been suggested as a statistical method for determining whether or not micro-organisms are randomly distributed in a food matrix. In this study, we analyse the Gamma-Poisson model to explore some of its properties left unexplored by the previous study. The conclusion of our analysis is that the Gamma-Poisson model distinguishes poorly between variation at the Poisson level and at the Gamma level. Estimated parameter values from simulated data sets showed large variation around the true values, even for moderate sample sizes (n=100). Furthermore, at these sample sizes the likelihood ratio is not a good test statistic for discriminating between the Gamma-Poisson distribution and the Poisson distribution. Hence, to determine whether data are randomly distributed, i.e., Poisson distributed, the Gamma-Poisson distribution is not a good choice. However, the ratio between variation at the Poisson level and at the Gamma level does provide a measure of the amount of overdispersion.
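
    The identifiability problem the abstract reports has a simple source: marginally, a Gamma-mixed Poisson is exactly a negative binomial, so the two layers of variation enter only through the overdispersion. A quick numerical check with arbitrary parameters:

```python
import numpy as np
from scipy.stats import nbinom

rng = np.random.default_rng(7)
shape, rate = 2.0, 0.5                  # Gamma parameters for lambda

# Gamma-Poisson draws: lambda_i ~ Gamma(shape, rate), y_i ~ Poisson(lambda_i).
lam = rng.gamma(shape, 1.0 / rate, 100_000)
y = rng.poisson(lam)

# Marginally y is negative binomial with n = shape and p = rate / (rate + 1).
p = rate / (rate + 1.0)
for k in range(5):
    print(k, (y == k).mean(), nbinom.pmf(k, shape, p))
```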

  5. Poisson Coordinates.

    PubMed

    Li, Xian-Ying; Hu, Shi-Min

    2013-02-01

    Harmonic functions are the critical points of a Dirichlet energy functional, the linear projections of conformal maps. They play an important role in computer graphics, particularly for gradient-domain image processing and shape-preserving geometric computation. We propose Poisson coordinates, a novel transfinite interpolation scheme based on the Poisson integral formula, as a rapid way to estimate a harmonic function on a certain domain with desired boundary values. Poisson coordinates are an extension of the Mean Value coordinates (MVCs) which inherit their linear precision, smoothness, and kernel positivity. We give explicit formulas for Poisson coordinates in both continuous and 2D discrete forms. Superior to MVCs, Poisson coordinates are proved to be pseudoharmonic (i.e., they reproduce harmonic functions on n-dimensional balls). Our experimental results show that Poisson coordinates have lower Dirichlet energies than MVCs on a number of typical 2D domains (particularly convex domains). As well as presenting a formula, our approach provides useful insights for further studies on coordinates-based interpolation and fast estimation of harmonic functions.

  6. Assessment of Poisson, probit and linear models for genetic analysis of presence and number of black spots in Corriedale sheep.

    PubMed

    Peñagaricano, F; Urioste, J I; Naya, H; de los Campos, G; Gianola, D

    2011-04-01

    Black skin spots are associated with pigmented fibres in wool, an important quality fault. Our objective was to assess alternative models for the genetic analysis of presence (BINBS) and number (NUMBS) of black spots in Corriedale sheep. During 2002-08, 5624 records from 2839 animals in two flocks, aged 1 through 6 years, were taken at shearing. Four models were considered: linear and probit for BINBS, and linear and Poisson for NUMBS. All models included flock-year and age as fixed effects, and animal and permanent environmental effects as random. Models were fitted to the whole data set and were also compared based on their predictive ability in cross-validation. Estimates of heritability ranged from 0.154 to 0.230 for BINBS and from 0.269 to 0.474 for NUMBS. For BINBS, the probit model fitted the data slightly better than the linear model. Predictions of random effects from these models were highly correlated, and both models exhibited similar predictive ability. For NUMBS, the Poisson model, with a residual term to account for overdispersion, performed better than the linear model in goodness of fit and predictive ability. Predictions of random effects from the Poisson model were more strongly correlated with those from the BINBS models than those from the linear model. Overall, the use of probit or linear models for BINBS and of a Poisson model with a residual for NUMBS seems a reasonable choice for genetic selection purposes in Corriedale sheep.

  7. Extension of the modified Poisson regression model to prospective studies with correlated binary data.

    PubMed

    Zou, G Y; Donner, Allan

    2013-12-01

    The Poisson regression model using a sandwich variance estimator has become a viable alternative to the logistic regression model for the analysis of prospective studies with independent binary outcomes. The primary advantage of this approach is that it readily provides covariate-adjusted risk ratios and associated standard errors. In this article, the model is extended to studies with correlated binary outcomes as arise in longitudinal or cluster randomization studies. The key step involves a cluster-level grouping strategy for the computation of the middle term in the sandwich estimator. For a single binary exposure variable without covariate adjustment, this approach results in risk ratio estimates and standard errors that are identical to those found in the survey sampling literature. Simulation results suggest that it is reliable for studies with correlated binary data, provided the total number of clusters is at least 50. Data from observational and cluster randomized studies are used to illustrate the methods.
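
    A minimal sketch of the modified Poisson approach on simulated correlated binary data, using a cluster-robust sandwich covariance in statsmodels; the simulated design and effect sizes are our assumptions, and the article's specific grouping derivation is not reproduced.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    n_clusters, m = 60, 8                                  # 60 clusters of 8 subjects
    groups = np.repeat(np.arange(n_clusters), m)
    x = rng.integers(0, 2, size=n_clusters * m)            # binary exposure
    u = rng.normal(0.0, 0.3, size=n_clusters)[groups]      # shared cluster effect
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-1.0 + 0.5 * x + u))))

    # A Poisson working model with log link yields log risk ratios directly; the
    # cluster-robust sandwich corrects the misspecified Poisson variance.
    fit = sm.GLM(y, sm.add_constant(x), family=sm.families.Poisson()).fit(
        cov_type="cluster", cov_kwds={"groups": groups})
    print("estimated risk ratio:", np.exp(fit.params[1]))
    print("sandwich standard errors:", fit.bse)
    ```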

  8. The Allan variance in the presence of a compound Poisson process modelling clock frequency jumps

    NASA Astrophysics Data System (ADS)

    Formichella, Valerio

    2016-12-01

    Atomic clocks can be affected by frequency jumps occurring at random times and with a random amplitude. The frequency jumps degrade the clock stability and this is captured by the Allan variance. In this work we assume that the random jumps can be modelled by a compound Poisson process, independent of the other stochastic and deterministic processes affecting the clock stability. Then, we derive the analytical expression of the Allan variance of a jumping clock. We find that the analytical Allan variance does not depend on the actual shape of the jumps amplitude distribution, but only on its first and second moments, and its final form is the same as for a clock with a random walk of frequency and a frequency drift. We conclude that the Allan variance cannot distinguish between a compound Poisson process and a Wiener process, hence it may not be sufficient to correctly identify the fundamental noise processes affecting a clock. The result is general and applicable to any oscillator, whose frequency is affected by a jump process with the described statistics.
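
    A rough numerical illustration under our own assumptions about noise levels and jump statistics: simulate fractional-frequency data affected by white noise plus compound-Poisson frequency jumps, then compute the overlapping Allan variance; at long averaging times the jump component dominates and the Allan variance grows, as it would for a random walk of frequency.

    ```python
    import numpy as np

    def allan_variance(y, m):
        """Overlapping Allan variance of fractional-frequency data y at an
        averaging time of m samples."""
        avg = np.convolve(y, np.ones(m) / m, mode="valid")  # m-sample averages
        d = avg[m:] - avg[:-m]                              # adjacent-average differences
        return 0.5 * np.mean(d**2)

    rng = np.random.default_rng(0)
    N = 200_000
    y = rng.normal(0.0, 1e-12, N)                      # white frequency noise
    for t in np.flatnonzero(rng.random(N) < 1e-4):     # Poisson jump epochs
        y[t:] += rng.normal(0.0, 5e-13)                # random, persistent jump amplitude
    for m in (1, 10, 100, 1000, 10_000):
        print(m, allan_variance(y, m))
    ```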

  9. WAITING TIME DISTRIBUTION OF SOLAR ENERGETIC PARTICLE EVENTS MODELED WITH A NON-STATIONARY POISSON PROCESS

    SciTech Connect

    Li, C.; Su, W.; Fang, C.; Zhong, S. J.; Wang, L.

    2014-09-10

    We present a study of the waiting time distributions (WTDs) of solar energetic particle (SEP) events observed with the spacecraft WIND and GOES. The WTDs of both solar electron events (SEEs) and solar proton events (SPEs) display a power-law tail of ∼Δt^(−γ). The SEEs display a broken power-law WTD. The power-law index is γ₁ = 0.99 for short waiting times (<70 hr) and γ₂ = 1.92 for large waiting times (>100 hr). The break of the WTD of SEEs is probably due to the modulation of the corotating interaction regions. The power-law index γ ∼ 1.82 is derived for the WTD of the SPEs, which is consistent with the WTD of type II radio bursts, indicating a close relationship between the shock wave and the production of energetic protons. The WTDs of SEP events can be modeled with a non-stationary Poisson process, which was proposed to understand the waiting time statistics of solar flares. We generalize the method and find that, if the SEP event rate λ = 1/Δt varies with the rate distribution f(λ) = Aλ^(−α) exp(−βλ), the time-dependent Poisson distribution can produce a power-law tail WTD of ∼Δt^(α−3), where 0 ≤ α < 2.
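
    A sketch of the quoted result under our own sampling assumptions: since waiting times are recorded per event, the rate governing an observed interval is drawn with weight proportional to λf(λ); for f(λ) ∝ λ^(−α) exp(−βλ) this weighted density is a Gamma distribution with shape 2−α, and the resulting waiting-time tail should then fall off roughly as Δt^(α−3).

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    alpha, beta = 0.5, 1.0
    # Event-weighted rate density ~ lambda * f(lambda) = Gamma(shape=2-alpha, scale=1/beta)
    lam = rng.gamma(2.0 - alpha, 1.0 / beta, size=1_000_000)
    dt = rng.exponential(1.0 / lam)               # waiting time given the local rate

    # Fit the tail exponent over 1.5 decades; theory predicts alpha - 3 = -2.5
    hist, edges = np.histogram(dt, bins=np.logspace(1, 2.5, 20), density=True)
    mids = np.sqrt(edges[:-1] * edges[1:])
    slope = np.polyfit(np.log(mids), np.log(hist), 1)[0]
    print("fitted tail slope:", slope)            # roughly -2.5
    ```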

  10. Incorporating headgroup structure into the Poisson-Boltzmann model of charged lipid membranes

    NASA Astrophysics Data System (ADS)

    Wang, Muyang; Chen, Er-Qiang; Yang, Shuang; May, Sylvio

    2013-07-01

    Charged lipids often possess a complex headgroup structure with several spatially separated charges and internal conformational degrees of freedom. We propose a headgroup model consisting of two rod-like segments of the same length that form a flexible joint, with three charges of arbitrary sign and valence located at the joint and the two terminal positions. One terminal charge is firmly anchored at the polar-apolar interface of the lipid layer whereas the other two benefit from the orientational degrees of freedom of the two headgroup segments. This headgroup model is incorporated into the mean-field continuum Poisson-Boltzmann formalism of the electric double layer. For sufficiently small lengths of the two rod-like segments a closed-form expression of the charging free energy is calculated. For three specific examples—a zwitterionic headgroup with conformational freedom and two headgroups that carry an excess charge—we analyze and discuss conformational properties and electrostatic free energies.

  11. Comparing INLA and OpenBUGS for hierarchical Poisson modeling in disease mapping

    PubMed Central

    Carroll, R; Lawson, AB; Faes, C; Kirby, RS; Aregay, M; Watjou, K

    2015-01-01

    The recently developed R package INLA (Integrated Nested Laplace Approximation) is becoming a more widely used package for Bayesian inference. The INLA software has been promoted as a fast alternative to MCMC for disease mapping applications. Here, we compare the INLA package to the MCMC approach by way of the BRugs package in R, which calls OpenBUGS. We focus on the Poisson data model commonly used for disease mapping. Ultimately, INLA is a computationally efficient way of implementing Bayesian methods and returns nearly identical estimates for fixed parameters in comparison to OpenBUGS, but falls short in recovering the true estimates for the random effects, their precisions, and model goodness of fit measures under the default settings. We assumed default settings for ground truth parameters, and through altering these default settings in our simulation study, we were able to recover estimates comparable to those produced in OpenBUGS under the same assumptions. PMID:26530822

  13. Zero-Inflated Poisson Modeling of Fall Risk Factors in Community-Dwelling Older Adults.

    PubMed

    Jung, Dukyoo; Kang, Younhee; Kim, Mi Young; Ma, Rye-Won; Bhandari, Pratibha

    2016-02-01

    The aim of this study was to identify risk factors for falls among community-dwelling older adults. The study used a cross-sectional descriptive design. Self-report questionnaires were used to collect data from 658 community-dwelling older adults and were analyzed using logistic and zero-inflated Poisson (ZIP) regression. Perceived health status was a significant factor in the count model, and fall efficacy emerged as a significant predictor in the logistic models. The findings suggest that fall efficacy is important for predicting not only faller and nonfaller status but also fall counts in older adults who may or may not have experienced a previous fall. The fall predictors identified in this study--perceived health status and fall efficacy--indicate the need for fall-prevention programs tailored to address both the physical and psychological issues unique to older adults.
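
    A sketch of a ZIP fit on simulated data using statsmodels, which implements the zero-inflated Poisson likelihood with a logit inflation component; the covariate names and coefficients below are invented stand-ins for the study's measures, not its data.

    ```python
    import numpy as np
    from statsmodels.discrete.count_model import ZeroInflatedPoisson

    rng = np.random.default_rng(42)
    n = 658
    health = rng.normal(0, 1, n)      # stand-in for perceived health status
    efficacy = rng.normal(0, 1, n)    # stand-in for fall efficacy
    p_zero = 1.0 / (1.0 + np.exp(-(0.5 + 0.8 * efficacy)))   # structural-zero prob.
    falls = np.where(rng.random(n) < p_zero, 0,
                     rng.poisson(np.exp(0.2 - 0.4 * health)))

    exog = np.column_stack([np.ones(n), health])             # count-model part
    exog_infl = np.column_stack([np.ones(n), efficacy])      # inflation (logit) part
    fit = ZeroInflatedPoisson(falls, exog, exog_infl=exog_infl,
                              inflation="logit").fit(maxiter=200, disp=0)
    print(fit.summary())
    ```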

  14. Elastic-plastic cube model for ultrasonic friction reduction via Poisson's effect.

    PubMed

    Dong, Sheng; Dapino, Marcelo J

    2014-01-01

    Ultrasonic friction reduction has been studied experimentally and theoretically. This paper presents a new elastic-plastic cube model which can be applied to various ultrasonic lubrication cases. A cube is used to represent all the contacting asperities of two surfaces. Friction force is considered as the product of the tangential contact stiffness and the deformation of the cube. Ultrasonic vibrations are projected onto three orthogonal directions, separately changing contact parameters and deformations, and hence the overall friction force. Experiments are conducted to examine ultrasonic friction reduction using different materials under normal loads that vary from 40 N to 240 N. Ultrasonic vibrations are generated both in longitudinal and vertical (out-of-plane) directions by way of the Poisson effect. The tests show up to 60% friction reduction; model simulations describe the trends observed experimentally.

  15. Labour and residential accessibility: a Bayesian analysis based on Poisson gravity models with spatial effects

    NASA Astrophysics Data System (ADS)

    Alonso, M. P.; Beamonte, M. A.; Gargallo, P.; Salvador, M. J.

    2014-10-01

    In this study, we measure jointly the labour and the residential accessibility of a basic spatial unit using a Bayesian Poisson gravity model with spatial effects. The accessibility measures are broken down into two components: the attractiveness component, which is related to its socio-economic and demographic characteristics, and the impedance component, which reflects the ease of communication within and between basic spatial units. For illustration purposes, the methodology is applied to a data set containing information about commuters from the Spanish region of Aragón. We identify the areas with better labour and residential accessibility, and we also analyse the attractiveness and the impedance components of a set of chosen localities which allows us to better understand their mobility patterns.

  16. Poisson-Nernst-Planck model of ion current rectification through a nanofluidic diode.

    PubMed

    Constantin, Dragoş; Siwy, Zuzanna S

    2007-10-01

    We have investigated ion current rectification properties of a recently prepared bipolar nanofluidic diode. This device is based on a single conically shaped nanopore in a polymer film whose pore walls contain a sharp boundary between positively and negatively charged regions. A semiquantitative model that employs Poisson and Nernst-Planck equations predicts current-voltage curves as well as ionic concentrations and electric potential distributions in this system. We show that under certain conditions the rectification degree, defined as a ratio of currents recorded at the same voltage but opposite polarities, can reach values of over 1000 over the voltage range −2 V to +2 V. The role of thickness and position of the transition zone on the ion current rectification is discussed as well. We also show that the rectification degree scales with the applied voltage.

  17. Testing a Poisson Counter Model for Visual Identification of Briefly Presented, Mutually Confusable Single Stimuli in Pure Accuracy Tasks

    ERIC Educational Resources Information Center

    Kyllingsbaek, Soren; Markussen, Bo; Bundesen, Claus

    2012-01-01

    The authors propose and test a simple model of the time course of visual identification of briefly presented, mutually confusable single stimuli in pure accuracy tasks. The model implies that during stimulus analysis, tentative categorizations that stimulus i belongs to category j are made at a constant Poisson rate, v(i, j). The analysis is…
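
    A minimal simulation of the proposed mechanism with illustrative rates of our own choosing: each category accumulates tentative categorizations as an independent Poisson counter, and the response is the category whose counter is largest when stimulus analysis ends.

    ```python
    import numpy as np

    def simulate_trial(rates, exposure, rng):
        """One trial of a Poisson counter model: evidence for category j accrues
        at constant rate v(i, j), so after an analysis period of length t the
        counter is Poisson(v * t); respond with the largest counter, breaking
        ties (including the all-zero case) at random."""
        counts = rng.poisson(np.asarray(rates) * exposure)
        winners = np.flatnonzero(counts == counts.max())
        return rng.choice(winners)

    rng = np.random.default_rng(5)
    v = [3.0, 1.0, 1.0]   # stimulus 0 shown: the correct category accrues fastest
    hits = [simulate_trial(v, exposure=0.2, rng=rng) == 0 for _ in range(10_000)]
    print("proportion correct at this exposure:", np.mean(hits))
    ```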

  18. Near Earth Object Detection Using a Poisson Statistical Model for Detection on Images Modeled from the Panoramic Survey Telescope and Rapid Response System

    DTIC Science & Technology

    2012-03-01

    The aim is to detect a previously undetectable asteroid or comet on a collision course with Earth early enough to establish an effective plan of action to save millions of lives (report no. AFIT/GE/ENG/12-33).

  19. Semiparametric bivariate zero-inflated Poisson models with application to studies of abundance for multiple species

    USGS Publications Warehouse

    Arab, Ali; Holan, Scott H.; Wikle, Christopher K.; Wildhaber, Mark L.

    2012-01-01

    Ecological studies involving counts of abundance, presence–absence or occupancy rates often produce data having a substantial proportion of zeros. Furthermore, these types of processes are typically multivariate and only adequately described by complex nonlinear relationships involving externally measured covariates. Ignoring these aspects of the data and implementing standard approaches can lead to models that fail to provide adequate scientific understanding of the underlying ecological processes, possibly resulting in a loss of inferential power. One method of dealing with data having excess zeros is to consider the class of univariate zero-inflated generalized linear models. However, this class of models fails to address the multivariate and nonlinear aspects associated with the data usually encountered in practice. Therefore, we propose a semiparametric bivariate zero-inflated Poisson model that takes into account both of these data attributes. The general modeling framework is hierarchical Bayes and is suitable for a broad range of applications. We demonstrate the effectiveness of our model through a motivating example on modeling catch per unit area for multiple species using data from the Missouri River Benthic Fishes Study, implemented by the United States Geological Survey.

  20. Probabilistic prediction of cyanobacteria abundance in a Korean reservoir using a Bayesian Poisson model

    NASA Astrophysics Data System (ADS)

    Cha, YoonKyung; Park, Seok Soon; Kim, Kyunghyun; Byeon, Myeongseop; Stow, Craig A.

    2014-03-01

    There have been increasing reports of harmful algal blooms (HABs) worldwide. However, the factors that influence cyanobacteria dominance and HAB formation can be site-specific and idiosyncratic, making prediction challenging. The drivers of cyanobacteria blooms in Lake Paldang, South Korea, the summer climate of which is strongly affected by the East Asian monsoon, may differ from those in well-studied North American lakes. Using the observational data sampled during the growing season in 2007-2011, a Bayesian hurdle Poisson model was developed to predict cyanobacteria abundance in the lake. The model allowed cyanobacteria absence (zero count) and nonzero cyanobacteria counts to be modeled as functions of different environmental factors. The model predictions demonstrated that the principal factor determining the success of cyanobacteria was temperature. Combined with high temperature, increased residence time indicated by low outflow rates appeared to increase the probability of cyanobacteria occurrence. A stable water column, represented by low suspended solids, and high temperature were the requirements for high abundance of cyanobacteria. Our model results have management implications; the model can be used to probabilistically forecast cyanobacteria watch or alert levels and to develop mitigation strategies for cyanobacteria blooms.

  1. Poisson-Boltzmann theory of charged colloids: limits of the cell model for salty suspensions

    NASA Astrophysics Data System (ADS)

    Denton, A. R.

    2010-09-01

    Thermodynamic properties of charge-stabilized colloidal suspensions and polyelectrolyte solutions are commonly modelled by implementing the mean-field Poisson-Boltzmann (PB) theory within a cell model. This approach models a bulk system by a single macroion, together with counterions and salt ions, confined to a symmetrically shaped, electroneutral cell. While easing numerical solution of the nonlinear PB equation, the cell model neglects microion-induced interactions and correlations between macroions, precluding modelling of macroion ordering phenomena. An alternative approach, which avoids the artificial constraints of cell geometry, exploits the mapping of a macroion-microion mixture onto a one-component model of pseudo-macroions governed by effective interparticle interactions. In practice, effective-interaction models are usually based on linear-screening approximations, which can accurately describe strong nonlinear screening only by incorporating an effective (renormalized) macroion charge. Combining charge renormalization and linearized PB theories, in both the cell model and an effective-interaction (cell-free) model, we compute osmotic pressures of highly charged colloids and monovalent microions, in Donnan equilibrium with a salt reservoir, over a range of concentrations. By comparing predictions with primitive model simulation data for salt-free suspensions, and with predictions from nonlinear PB theory for salty suspensions, we chart the limits of both the cell model and linear-screening approximations in modelling bulk thermodynamic properties. Up to moderately strong electrostatic couplings, the cell model proves accurate for predicting osmotic pressures of deionized (counterion-dominated) suspensions. With increasing salt concentration, however, the relative contribution of macroion interactions to the osmotic pressure grows, leading predictions from the cell and effective-interaction models to deviate. No evidence is found for a liquid

  2. The application of a Poisson model to the annual distribution of daily mortality at six Montreal hospitals.

    PubMed Central

    Zweig, J P; Csank, J Z

    1978-01-01

    The daily distributions of annual mortality for varying numbers of years between 1965 and 1975 were investigated in three geriatric hospitals and three general hospitals in the Montreal area. Nearly all the observed mortality distributions were found to mimic the classical Poisson distribution, with little departure. In two of the larger hospitals, the matching of the daily mortality distributions with their Poisson models met stringent statistical criteria. In one of them it was even possible to predict the expected mortality frequencies merely from a knowledge of the annual totals. The remaining four hospitals, which included the three geriatric institutions, also exhibited mortalities regarded as highly suggestive of Poisson distributions, although in one of the geriatric hospitals the mortality distribution tended to be somewhat erratic in this respect. PMID:711981
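
    A sketch of the kind of goodness-of-fit check described, on synthetic data (the hospital records themselves are not reproduced here): the expected daily-mortality frequencies are predicted from the annual total alone and compared with the observed frequencies by a chi-square test.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)
    deaths_per_day = rng.poisson(2.3, size=365)    # hypothetical one-year record

    # Predict expected frequencies from the annual total alone:
    lam = deaths_per_day.sum() / 365.0
    cats = np.minimum(deaths_per_day, 6)           # pool counts >= 6 into one cell
    observed = np.bincount(cats, minlength=7)
    probs = stats.poisson.pmf(np.arange(7), lam)
    probs[6] = 1.0 - probs[:6].sum()               # P(X >= 6) for the pooled cell
    chi2, p = stats.chisquare(observed, 365.0 * probs, ddof=1)  # lambda was estimated
    print(f"lambda = {lam:.2f}, chi2 = {chi2:.1f}, p = {p:.3f}")
    ```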

  3. Multivariate Poisson lognormal modeling of crashes by type and severity on rural two-lane highways.

    PubMed

    Wang, Kai; Ivan, John N; Ravishanker, Nalini; Jackson, Eric

    2017-02-01

    In an effort to improve traffic safety, there has been considerable interest in estimating crash prediction models and identifying factors contributing to crashes. To account for crash frequency variations among crash types and severities, crash prediction models have been estimated by type and severity. The univariate crash count models have been used by researchers to estimate crashes by crash type or severity, in which the crash counts by type or severity are assumed to be independent of one another and modelled separately. When considering crash types and severities simultaneously, this may neglect the potential correlations between crash counts due to the presence of shared unobserved factors across crash types or severities for a specific roadway intersection or segment, and might lead to biased parameter estimation and reduce model accuracy. The focus of this study is to estimate crashes by both crash type and crash severity using the Integrated Nested Laplace Approximation (INLA) Multivariate Poisson Lognormal (MVPLN) model, and identify the different effects of contributing factors on different crash type and severity counts on rural two-lane highways. The INLA MVPLN model can simultaneously model crash counts by crash type and crash severity by accounting for the potential correlations among them and significantly decreases the computational time compared with a fully Bayesian fitting of the MVPLN model using the Markov chain Monte Carlo (MCMC) method. This paper describes estimation of MVPLN models for three-way stop controlled (3ST) intersections, four-way stop controlled (4ST) intersections, four-way signalized (4SG) intersections, and roadway segments on rural two-lane highways. Annual Average Daily Traffic (AADT) and variables describing roadway conditions (including presence of lighting, presence of left-turn/right-turn lane, lane width and shoulder width) were used as predictors. A Univariate Poisson Lognormal (UPLN) model was estimated by crash type and

  4. Analytical solutions of nonlocal Poisson dielectric models with multiple point charges inside a dielectric sphere.

    PubMed

    Xie, Dexuan; Volkmer, Hans W; Ying, Jinyong

    2016-04-01

    The nonlocal dielectric approach has led to new models and solvers for predicting electrostatics of proteins (or other biomolecules), but how to validate and compare them remains a challenge. To promote such a study, in this paper, two typical nonlocal dielectric models are revisited. Their analytical solutions are then found in the expressions of simple series for a dielectric sphere containing any number of point charges. As a special case, the analytical solution of the corresponding Poisson dielectric model is also derived in simple series, which significantly improves the well-known Kirkwood double series expansion. Furthermore, a convolution of one nonlocal dielectric solution with a commonly used nonlocal kernel function is obtained, along with the reaction parts of these local and nonlocal solutions. To turn these new series solutions into a valuable research tool, they are programmed as a free Fortran software package, which can input point charge data directly from a Protein Data Bank file. Consequently, different validation tests can be quickly done on different proteins. Finally, a test example for a protein with 488 atomic charges is reported to demonstrate the differences between the local and nonlocal models as well as the importance of using the reaction parts to develop local and nonlocal dielectric solvers.

  5. Optimization of 3D Poisson-Nernst-Planck model for fast evaluation of diverse protein channels.

    PubMed

    Dyrka, Witold; Bartuzel, Maciej M; Kotulska, Malgorzata

    2013-10-01

    We show the accuracy and applicability of our fast algorithmic implementation of a three-dimensional Poisson-Nernst-Planck (3D-PNP) flow model for characterizing different protein channels. Due to its high computational efficiency, our model can predict the full current-voltage characteristics of a channel within minutes, based on the experimental 3D structure of the channel or its computational model structure. Compared with other methods, such as Brownian dynamics, which currently needs a few weeks of computational time, or even much more demanding molecular dynamics modeling, 3D-PNP is the only available method for a function-based evaluation of very numerous tentative structural channel models. Flow model tests of our algorithm and its optimal parametrization are provided for five native channels whose experimental structures are available in the Protein Data Bank (PDB) in an open conductive state, and whose experimental current-voltage characteristics have been published. The channels represent very different geometric and structural properties, which makes this the widest test to date of the accuracy of 3D-PNP on real channels. We test whether the channel conductance, rectification, and charge selectivity obtained from the flow model could be sufficiently sensitive to single-point mutations related to insignificant changes in the channel structure. Our results show that the classical 3D-PNP model, under proper parametrization, is able to achieve a qualitative agreement with experimental data for a majority of the tested characteristics and channels, including channels with narrow and irregular conductivity pores. We propose that although the standard PNP model cannot provide insight into complex physical phenomena due to its intrinsic limitations, its semiquantitative agreement is achievable for rectification and selectivity at a level sufficient for the bioinformatical purpose of selecting the best structural models with a great advantage of a very short

  7. An efficient parallel sampling technique for Multivariate Poisson-Lognormal model: Analysis with two crash count datasets

    SciTech Connect

    Zhan, Xianyuan; Aziz, H. M. Abdul; Ukkusuri, Satish V.

    2015-11-19

    Our study investigates the Multivariate Poisson-lognormal (MVPLN) model that jointly models crash frequency and severity accounting for correlations. The ordinary univariate count models analyze crashes of different severity levels separately, ignoring the correlations among severity levels. The MVPLN model is capable of incorporating the general correlation structure and takes account of the overdispersion in the data, which leads to a superior data fitting. But the traditional estimation approach for the MVPLN model is computationally expensive, which often limits the use of the MVPLN model in practice. In this work, a parallel sampling scheme is introduced to improve the original Markov Chain Monte Carlo (MCMC) estimation approach of the MVPLN model, which significantly reduces the model estimation time. Two MVPLN models are developed using the pedestrian vehicle crash data collected in New York City from 2002 to 2006, and the highway-injury data from Washington State (5-year data from 1990 to 1994). The Deviance Information Criterion (DIC) is used to evaluate the model fitting. The estimation results show that the MVPLN models provide a superior fit over univariate Poisson-lognormal (PLN), univariate Poisson, and Negative Binomial models. Moreover, the correlations among the latent effects of different severity levels are found significant in both datasets, which justifies the importance of jointly modeling crash frequency and severity accounting for correlations.

  8. Modeling the numbers of paucibacillary and multibacillary leprosy patients using bivariate Poisson regression

    NASA Astrophysics Data System (ADS)

    Winahju, W. S.; Mukarromah, A.; Putri, S.

    2015-03-01

    Leprosy is a chronic infectious disease caused by the leprosy bacterium (Mycobacterium leprae). Leprosy has become an important issue in Indonesia because its morbidity is quite high. Based on WHO data from 2014, in 2012 Indonesia had the highest number of new leprosy patients after India and Brazil, contributing 18,994 cases (8.7% of the world total). This figure makes Indonesia the country with the highest leprosy morbidity among ASEAN countries. The province that contributes most to the number of leprosy patients in Indonesia is East Java. There are two kinds of leprosy: paucibacillary and multibacillary. The morbidity of multibacillary leprosy is higher than that of paucibacillary leprosy. This paper discusses modeling the numbers of multibacillary and paucibacillary leprosy patients as response variables. These responses are count variables, so modeling is conducted using the bivariate Poisson regression method. The observational units are located in East Java, and the predictors involved are environment, demography, and poverty. The model uses data from 2012, and the results indicate that all predictors are statistically significant.
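
    A brief simulation of one standard bivariate Poisson construction (the common-shock, or trivariate reduction, form used widely in this literature; the abstract does not state which form the paper adopts), with invented counts and coefficients.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 38                              # hypothetical number of observational units
    x = rng.normal(0, 1, n)             # a stand-in covariate (e.g., a poverty index)

    # Common-shock construction: X1 = Y1 + Z and X2 = Y2 + Z share the Poisson
    # shock Z, which induces the positive correlation between the two counts.
    lam1, lam2, lam0 = np.exp(1.0 + 0.3 * x), np.exp(1.5 + 0.5 * x), 2.0
    y1, y2, z = rng.poisson(lam1), rng.poisson(lam2), rng.poisson(lam0, n)
    paucibacillary, multibacillary = y1 + z, y2 + z
    # Cov(X1, X2) = lam0, so the shared-shock mean controls the dependence.
    print("corr:", np.corrcoef(paucibacillary, multibacillary)[0, 1])
    ```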

  9. Relative risk estimation of Chikungunya disease in Malaysia: An analysis based on Poisson-gamma model

    NASA Astrophysics Data System (ADS)

    Samat, N. A.; Ma'arof, S. H. Mohd Imam

    2015-05-01

    Disease mapping is a method to display the geographical distribution of disease occurrence, which generally involves the usage and interpretation of a map to show the incidence of certain diseases. Relative risk (RR) estimation is one of the most important issues in disease mapping. This paper begins by providing a brief overview of Chikungunya disease. This is followed by a review of the classical model used in disease mapping, based on the standardized morbidity ratio (SMR), which we then apply to our Chikungunya data. We then fit an extension of the classical model, which we refer to as a Poisson-Gamma model, when prior distributions for the relative risks are assumed known. Both results are displayed and compared using maps and we reveal a smoother map with fewer extremes values of estimated relative risk. The extensions of this paper will consider other methods that are relevant to overcome the drawbacks of the existing methods, in order to inform and direct government strategy for monitoring and controlling Chikungunya disease.
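
    A minimal sketch of the Poisson-Gamma smoothing described here, with hypothetical counts: under a Gamma(a, b) prior on the relative risk, the posterior is again Gamma in closed form, and its mean shrinks extreme SMRs toward the prior mean.

    ```python
    import numpy as np

    # Hypothetical area-level data: observed cases y and expected counts E
    y = np.array([12, 3, 25, 7, 0, 18])
    E = np.array([10.0, 5.5, 14.2, 9.1, 2.3, 12.8])

    smr = y / E                     # classical SMR estimate of relative risk

    # Poisson-Gamma model: y_i ~ Poisson(E_i * r_i) with r_i ~ Gamma(a, b),
    # so the posterior is r_i | y_i ~ Gamma(a + y_i, b + E_i).
    a, b = 2.0, 2.0                 # prior mean a/b = 1, i.e. no excess risk
    smoothed = (a + y) / (b + E)    # posterior mean of the relative risk
    print("SMR:         ", np.round(smr, 2))
    print("smoothed RR: ", np.round(smoothed, 2))
    # Shrinkage tames the extreme SMRs of small areas (note the area with
    # y = 0), producing the smoother risk map described in the abstract.
    ```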

  10. Assessment of Poisson, logit, and linear models for genetic analysis of clinical mastitis in Norwegian Red cows.

    PubMed

    Vazquez, A I; Gianola, D; Bates, D; Weigel, K A; Heringstad, B

    2009-02-01

    Clinical mastitis is typically coded as presence/absence during some period of exposure, and records are analyzed with linear or binary data models. Because presence includes cows with multiple episodes, there is loss of information when a count is treated as a binary response. The Poisson model is designed for counting random variables, and although it is used extensively in the epidemiology of mastitis, it has rarely been used for studying the genetics of mastitis. Many models have been proposed for genetic analysis of mastitis, but they have not been formally compared. The main goal of this study was to compare linear (Gaussian), Bernoulli (with logit link), and Poisson models for the purpose of genetic evaluation of sires for mastitis in dairy cattle. The response variables were clinical mastitis (CM; 0, 1) and number of CM cases (NCM; 0, 1, 2, ...). Data consisted of records on 36,178 first-lactation daughters of 245 Norwegian Red sires distributed over 5,286 herds. Predictive ability of models was assessed via a 3-fold cross-validation using mean squared error of prediction (MSEP) as the end-point. Between-sire variance estimates for NCM were 0.065 in the Poisson model and 0.007 in the linear model. For CM the between-sire variance was 0.093 in the logit model and 0.003 in the linear model. The ratio between herd and sire variances was 4.6 for the Poisson model and 3.5 for the linear model with NCM response, and 3.7 for both the logit and linear models with CM response. The MSEP for all cows was similar. However, within healthy animals, MSEP was 0.085 (Poisson), 0.090 (linear for NCM), 0.053 (logit), and 0.056 (linear for CM). For mastitic animals the MSEP values were 1.206 (Poisson), 1.185 (linear for NCM response), 1.333 (logit), and 1.319 (linear for CM response). The models for count variables had a better performance when predicting diseased animals and also had a similar performance between them. Logit and linear models for CM had better predictive ability for healthy

  11. ADAPTIVE FINITE ELEMENT MODELING TECHNIQUES FOR THE POISSON-BOLTZMANN EQUATION.

    PubMed

    Holst, Michael; McCammon, James Andrew; Yu, Zeyun; Zhou, Youngcheng; Zhu, Yunrong

    2012-01-01

    We consider the design of an effective and reliable adaptive finite element method (AFEM) for the nonlinear Poisson-Boltzmann equation (PBE). We first examine the two-term regularization technique for the continuous problem recently proposed by Chen, Holst, and Xu based on the removal of the singular electrostatic potential inside biomolecules; this technique made possible the development of the first complete solution and approximation theory for the Poisson-Boltzmann equation, the first provably convergent discretization, and also allowed for the development of a provably convergent AFEM. However, in practical implementation, this two-term regularization exhibits numerical instability. Therefore, we examine a variation of this regularization technique which can be shown to be less susceptible to such instability. We establish a priori estimates and other basic results for the continuous regularized problem, as well as for Galerkin finite element approximations. We show that the new approach produces regularized continuous and discrete problems with the same mathematical advantages of the original regularization. We then design an AFEM scheme for the new regularized problem, and show that the resulting AFEM scheme is accurate and reliable, by proving a contraction result for the error. This result, which is one of the first results of this type for nonlinear elliptic problems, is based on using continuous and discrete a priori L∞ estimates to establish quasi-orthogonality. To provide a high-quality geometric model as input to the AFEM algorithm, we also describe a class of feature-preserving adaptive mesh generation algorithms designed specifically for constructing meshes of biomolecular structures, based on the intrinsic local structure tensor of the molecular surface. All of the algorithms described in the article are implemented in the Finite Element Toolkit (FETK), developed and maintained at UCSD. The stability advantages of the new regularization scheme

  13. Effect of air pollution on lung cancer: A poisson regression model based on vital statistics

    SciTech Connect

    Tango, Toshiro

    1994-11-01

    This article describes a Poisson regression model for time trends of mortality to detect the long-term effects of common levels of air pollution on lung cancer, in which the adjustment for cigarette smoking is not always necessary. The main hypothesis to be tested in the model is that if the long-term and common-level air pollution had an effect on lung cancer, the death rate from lung cancer could be expected to increase gradually at a higher rate in the region with relatively high levels of air pollution than in the region with low levels, and that this trend would not be expected for other control diseases in which cigarette smoking is a risk factor. Using this approach, we analyzed the trend of mortality in females aged 40 to 79, from lung cancer and two control diseases, ischemic heart disease and cerebrovascular disease, based on vital statistics in 23 wards of the Tokyo metropolitan area for 1972 to 1988. Ward-specific mean levels per day of SO₂ and NO₂ from 1974 through 1976 estimated by Makino (1978) were used as the ward-specific exposure measure of air pollution. No data on tobacco consumption in each ward are available. Our analysis supported the existence of long-term effects of air pollution on lung cancer.

  14. Poisson-Boltzmann model of electrolytes containing uniformly charged spherical nanoparticles.

    PubMed

    Bohinc, Klemen; Volpe Bossa, Guilherme; Gavryushov, Sergei; May, Sylvio

    2016-12-21

    Like-charged macromolecules typically repel each other in aqueous solutions that contain small mobile ions. The interaction tends to turn attractive if mobile ions with spatially extended charge distributions are added. Such systems can be modeled within the mean-field Poisson-Boltzmann formalism by explicitly accounting for charge-charge correlations within the spatially extended ions. We consider an aqueous solution that contains a mixture of spherical nanoparticles with uniform surface charge density and small mobile salt ions, sandwiched between two like-charged planar surfaces. We perform the minimization of an appropriate free energy functional, which leads to a non-linear integral-differential equation for the electrostatic potential that we solve numerically and compare with predictions from Monte Carlo simulations. Nanoparticles with uniform surface charge density are contrasted with nanoparticles that have all their charges relocated at the center. Our mean-field model predicts that only the former (especially when the particles are large and highly charged) but not the latter are able to mediate attractive interactions between like-charged planar surfaces. We also demonstrate that at high salt concentration attractive interactions between like-charged planar surfaces turn into repulsion.

  15. On the Linear Stability of Crystals in the Schrödinger-Poisson Model.

    PubMed

    Komech, A; Kopylova, E

    2016-01-01

    We consider the Schrödinger-Poisson-Newton equations for crystals with one ion per cell. We linearize this dynamics at the periodic minimizers of energy per cell and introduce a novel class of the ion charge densities that ensures the stability of the linearized dynamics. Our main result is the energy positivity for the Bloch generators of the linearized dynamics under a Wiener-type condition on the ion charge density. We also adopt an additional 'Jellium' condition which cancels the negative contribution caused by the electrostatic instability and provides the 'Jellium' periodic minimizers and the optimality of the lattice: the energy per cell of the periodic minimizer attains the global minimum among all possible lattices. We show that the energy positivity can fail if the Jellium condition is violated, while the Wiener condition holds. The proof of the energy positivity relies on a novel factorization of the corresponding Hamilton functional. The Bloch generators are nonselfadjoint (and even nonsymmetric) Hamilton operators. We diagonalize these generators using our theory of spectral resolution of the Hamilton operators with positive definite energy (Komech and Kopylova, J Stat Phys 154(1-2):503-521, 2014; J Spectral Theory 5(2):331-361, 2015). The stability of the linearized crystal dynamics is established using this spectral resolution.

  16. Survival analysis of clinical mastitis data using a nested frailty Cox model fit as a mixed-effects Poisson model.

    PubMed

    Elghafghuf, Adel; Dufour, Simon; Reyher, Kristen; Dohoo, Ian; Stryhn, Henrik

    2014-12-01

    Mastitis is a complex disease affecting dairy cows and is considered to be the most costly disease of dairy herds. The hazard of mastitis is a function of many factors, both managerial and environmental, making its control a difficult issue to milk producers. Observational studies of clinical mastitis (CM) often generate datasets with a number of characteristics which influence the analysis of those data: the outcome of interest may be the time to occurrence of a case of mastitis, predictors may change over time (time-dependent predictors), the effects of factors may change over time (time-dependent effects), there are usually multiple hierarchical levels, and datasets may be very large. Analysis of such data often requires expansion of the data into the counting-process format - leading to larger datasets - thus complicating the analysis and requiring excessive computing time. In this study, a nested frailty Cox model with time-dependent predictors and effects was applied to Canadian Bovine Mastitis Research Network data in which 10,831 lactations of 8035 cows from 69 herds were followed through lactation until the first occurrence of CM. The model was fit to the data as a Poisson model with nested normally distributed random effects at the cow and herd levels. Risk factors associated with the hazard of CM during the lactation were identified, such as parity, calving season, herd somatic cell score, pasture access, fore-stripping, and proportion of treated cases of CM in a herd. The analysis showed that most of the predictors had a strong effect early in lactation and also demonstrated substantial variation in the baseline hazard among cows and between herds. A small simulation study for a setting similar to the real data was conducted to evaluate the Poisson maximum likelihood estimation approach with both Gaussian quadrature method and Laplace approximation. Further, the performance of the two methods was compared with the performance of a widely used estimation

  17. Bayesian semi-parametric analysis of Poisson change-point regression models: application to policy making in Cali, Colombia

    PubMed Central

    Park, Taeyoung; Krafty, Robert T.; Sánchez, Alvaro I.

    2012-01-01

    A Poisson regression model with an offset assumes a constant baseline rate after accounting for measured covariates, which may lead to biased estimates of coefficients in an inhomogeneous Poisson process. To correctly estimate the effect of time-dependent covariates, we propose a Poisson change-point regression model with an offset that allows a time-varying baseline rate. When the nonconstant pattern of a log baseline rate is modeled with a nonparametric step function, the resulting semi-parametric model involves a model component of varying dimension and thus requires a sophisticated varying-dimensional inference to obtain correct estimates of model parameters of fixed dimension. To fit the proposed varying-dimensional model, we devise a state-of-the-art MCMC-type algorithm based on partial collapse. The proposed model and methods are used to investigate an association between daily homicide rates in Cali, Colombia and policies that restrict the hours during which the legal sale of alcoholic beverages is permitted. While simultaneously identifying the latent changes in the baseline homicide rate which correspond to the incidence of sociopolitical events, we explore the effect of policies governing the sale of alcohol on homicide rates and seek a policy that balances the economic and cultural dependencies on alcohol sales to the health of the public. PMID:23393408

  18. Poisson type models and descriptive statistics of computer network information flows

    SciTech Connect

    Downing, D.; Fedorov, V.; Dunigan, T.; Batsell, S.

    1997-08-01

    Many contemporary publications on network traffic gravitate to ideas of self-similarity and long-range dependence. The corresponding elegant and parsimonious mathematical techniques proved to be efficient for the description of a wide class of aggregated processes. Sharing the enthusiasm about the above ideas the authors also believe that whenever it is possible any problem must be considered at the most basic level in an attempt to understand the driving forces of the processes under analysis. Consequently the authors try to show that some behavioral patterns of descriptive statistics which are typical for long-memory processes (a particular case of long-range dependence) can also be explained in the framework of the traditional Poisson process paradigm. Applying the concepts of inhomogeneity, compoundness and double stochasticity they propose a simple and intuitively transparent approach of explaining the expected shape of the observed histograms of counts and the expected behavior of the sample covariance functions. Matching the images of these two descriptive statistics allows them to infer the presence of trends or double stochasticity in analyzed time series. They considered only statistics which are based on counts. A similar approach may be applied to waiting or inter-arrival time sequences and will be discussed in other publications. They hope that combining the reported results with the statistical methods based on aggregated models may lead to computationally affordable on-line techniques of compact and visualized data analysis of network flows.

  19. Modelling the influence of temperature and rainfall on malaria incidence in four endemic provinces of Zambia using semiparametric Poisson regression.

    PubMed

    Shimaponda-Mataa, Nzooma M; Tembo-Mwase, Enala; Gebreslasie, Michael; Achia, Thomas N O; Mukaratirwa, Samson

    2017-02-01

    Although malaria morbidity and mortality have been greatly reduced globally owing to intensive control efforts, the disease remains a major public health burden. In Zambia, all provinces are malaria endemic. However, the transmission intensities vary, depending mainly on environmental factors as they interact with the vectors. Generally in Africa, possibly due to the varying perspectives and methods used, there is variation in the relative importance attributed to malaria risk determinants. In Zambia, the role climatic factors play in malaria case rates has not been determined jointly over space and time using robust modelling methods. This is critical considering the reversal in malaria reduction after the year 2010 and the variation by transmission zones. Using a geoadditive or structured additive semiparametric Poisson regression model, we determined the influence of climatic factors on malaria incidence in four endemic provinces of Zambia. We demonstrate a strong positive association between malaria incidence and precipitation as well as minimum temperature. The risk of malaria was 95% lower in Lusaka (ARR=0.05, 95% CI=0.04-0.06) and 68% lower in the Western Province (ARR=0.31, 95% CI=0.25-0.41) compared to Luapula Province. North-western Province did not vary from Luapula Province. The effects of geographical region are clearly demonstrated by the unique behaviour and effects of minimum and maximum temperatures in the four provinces. Environmental factors such as landscape in urbanised places may also be playing a role.

  20. Ionic screening of charged impurities in electrolytically gated graphene: A partially linearized Poisson-Boltzmann model.

    PubMed

    Sharma, P; Mišković, Z L

    2015-10-07

    We present a model describing the electrostatic interactions across a structure that consists of a single layer of graphene with large area, lying above an oxide substrate of finite thickness, with its surface exposed to a thick layer of liquid electrolyte containing salt ions. Our goal is to analyze the co-operative screening of the potential fluctuation in a doped graphene due to randomness in the positions of fixed charged impurities in the oxide by the charge carriers in graphene and by the mobile ions in the diffuse layer of the electrolyte. In order to account for a possibly large potential drop in the diffuse layer that may arise in an electrolytically gated graphene, we use a partially linearized Poisson-Boltzmann (PB) model of the electrolyte, in which we solve a fully nonlinear PB equation for the surface average of the potential in one dimension, whereas the lateral fluctuations of the potential in graphene are tackled by linearizing the PB equation about the average potential. In this way, we are able to describe the regime of equilibrium doping of graphene to large densities for arbitrary values of the ion concentration without restrictions to the potential drop in the electrolyte. We evaluate the electrostatic Green's function for the partially linearized PB model, which is used to express the screening contributions of the graphene layer and the nearby electrolyte by means of an effective dielectric function. We find that, while the screened potential of a single charged impurity at large in-graphene distances exhibits a strong dependence on the ion concentration in the electrolyte and on the doping density in graphene, in the case of a spatially correlated two-dimensional ensemble of impurities, this dependence is largely suppressed in the autocovariance of the fluctuating potential.

  1. Poisson-Gaussian Noise Reduction Using the Hidden Markov Model in Contourlet Domain for Fluorescence Microscopy Images.

    PubMed

    Yang, Sejung; Lee, Byung-Uk

    2015-01-01

    In certain image acquisition processes, such as fluorescence microscopy or astronomy, only a limited number of photons can be collected due to various physical constraints. The resulting images suffer from signal-dependent noise, which can be modeled as a Poisson distribution, and a low signal-to-noise ratio. However, the majority of research on noise reduction algorithms focuses on signal-independent Gaussian noise. In this paper, we model noise as a combination of Poisson and Gaussian probability distributions to construct a more accurate model and adopt the contourlet transform, which provides a sparse representation of the directional components in images. We also apply hidden Markov models with a framework that neatly describes the spatial and interscale dependencies that are characteristic of the transform coefficients of natural images. In this paper, an effective denoising algorithm for Poisson-Gaussian noise is proposed using the contourlet transform, hidden Markov models and noise estimation in the transform domain. We supplement the algorithm by cycle spinning and Wiener filtering for further improvements. We finally show experimental results with simulations and fluorescence microscopy images which demonstrate the improved performance of the proposed approach.
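
    The contourlet-HMM pipeline itself is involved; as a simpler, widely used baseline for the same Poisson-Gaussian noise model, the sketch below applies a generalized Anscombe transform (GAT) so that the stabilized data can be handled by any Gaussian denoiser. The parameter names are ours, and the crude algebraic inverse shown is a simplification.

    ```python
    import numpy as np

    def gat(x, sigma, gain=1.0):
        """Generalized Anscombe transform: approximately stabilizes the variance
        of Poisson-Gaussian data x = gain*Poisson + N(0, sigma^2) to about 1."""
        arg = gain * x + 0.375 * gain**2 + sigma**2
        return (2.0 / gain) * np.sqrt(np.maximum(arg, 0.0))

    def inverse_gat(y, sigma, gain=1.0):
        """Algebraic inverse (an exact unbiased inverse, e.g. Makitalo-Foi, is
        preferable at very low counts)."""
        return ((gain * y / 2.0) ** 2 - 0.375 * gain**2 - sigma**2) / gain

    rng = np.random.default_rng(9)
    truth = 20.0 * np.ones(100_000)
    noisy = rng.poisson(truth) + rng.normal(0.0, 2.0, truth.shape)
    stab = gat(noisy, sigma=2.0)
    print("stabilized std (should be near 1):", stab.std())
    print("round-trip error:", np.abs(inverse_gat(stab, 2.0) - noisy).max())
    ```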

  2. A Poisson equation formulation for pressure calculations in penalty finite element models for viscous incompressible flows

    NASA Technical Reports Server (NTRS)

    Sohn, J. L.; Heinrich, J. C.

    1990-01-01

    The calculation of pressures when the penalty-function approximation is used in finite-element solutions of laminar incompressible flows is addressed. A Poisson equation for the pressure is formulated that involves third derivatives of the velocity field. The second derivatives appearing in the weak formulation of the Poisson equation are calculated from the C0 velocity approximation using a least-squares method. The present scheme is shown to be efficient, free of spurious oscillations, and accurate. Examples of applications are given and compared with results obtained using mixed formulations.
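
    A minimal finite-difference sketch of solving a Poisson equation for the pressure (Jacobi iteration on a uniform grid with a manufactured source and homogeneous boundary values), not the paper's penalty finite-element, least-squares formulation; the names and the test problem are ours.

    ```python
    import numpy as np

    def solve_poisson(f, h, iters=10_000):
        """Jacobi iteration for -laplacian(p) = f with p = 0 on the boundary of a
        uniform 2D grid of spacing h (slow but simple; CG or multigrid is the
        practical choice)."""
        p = np.zeros_like(f)
        for _ in range(iters):
            p[1:-1, 1:-1] = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] +
                                    p[1:-1, :-2] + p[1:-1, 2:] +
                                    h * h * f[1:-1, 1:-1])
        return p

    n, h = 65, 1.0 / 64
    xx, yy = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
    f = 2.0 * np.pi**2 * np.sin(np.pi * xx) * np.sin(np.pi * yy)  # manufactured source
    p = solve_poisson(f, h)
    exact = np.sin(np.pi * xx) * np.sin(np.pi * yy)               # known solution
    print("max error:", np.abs(p - exact).max())
    ```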

  3. Poisson-Nernst-Planck-Fermi theory for modeling biological ion channels.

    PubMed

    Liu, Jinn-Liang; Eisenberg, Bob

    2014-12-14

    A Poisson-Nernst-Planck-Fermi (PNPF) theory is developed for studying ionic transport through biological ion channels. Our goal is to deal with the finite size of particles using a Fermi-like distribution without calculating the forces between the particles, because such forces are both expensive and tricky to compute. We include the steric effect of ions and water molecules with nonuniform sizes and interstitial voids, the correlation effect of crowded ions with different valences, and the screening effect of water molecules in an inhomogeneous aqueous electrolyte. Including the finite volume of water and the voids between particles is an important new part of the theory presented here. Fermi-like distributions of all particle species are derived from the volume exclusion of classical particles. Volume exclusion and the resulting saturation phenomena are especially important to describe the binding and permeation mechanisms of ions in a narrow channel pore. The Gibbs free energy of the Fermi distribution reduces to that of a Boltzmann distribution when these effects are not considered. The classical Gibbs entropy is extended to a new entropy form - called Gibbs-Fermi entropy - that describes mixing configurations of all finite size particles and voids in a thermodynamic system where microstates do not have equal probabilities. The PNPF model describes the dynamic flow of ions, water molecules, as well as voids with electric fields and protein charges. The model also provides a quantitative mean-field description of the charge/space competition mechanism of particles within the highly charged and crowded channel pore. The PNPF results are in good accord with experimental currents recorded in a 10⁸-fold range of Ca²⁺ concentrations. The results illustrate the anomalous mole fraction effect, a signature of L-type calcium channels. Moreover, numerical results concerning water density, dielectric permittivity, void volume, and steric energy provide useful details to study

  4. Relative age and birthplace effect in Japanese professional sports: a quantitative evaluation using a Bayesian hierarchical Poisson model.

    PubMed

    Ishigami, Hideaki

    2016-01-01

    Relative age effect (RAE) in sports has been well documented. Recent studies investigate the effect of birthplace in addition to the RAE. The first objective of this study was to show the magnitude of the RAE in two major professional sports in Japan, baseball and soccer. Second, we examined the birthplace effect and compared its magnitude with that of the RAE. The effect sizes were estimated using a Bayesian hierarchical Poisson model with the number of players as dependent variable. The RAEs were 9.0% and 7.7% per month for soccer and baseball, respectively. These estimates imply that children born in the first month of a school year have about three times greater chance of becoming a professional player than those born in the last month of the year. Over half of the difference in likelihoods of becoming a professional player between birthplaces was accounted for by weather conditions, with the likelihood decreasing by 1% per snow day. An effect of population size was not detected in the data. By investigating different samples, we demonstrated that using quarterly data leads to underestimation and that the age range of sampled athletes should be set carefully.

  5. Numerical methods for a Poisson-Nernst-Planck-Fermi model of biological ion channels.

    PubMed

    Liu, Jinn-Liang; Eisenberg, Bob

    2015-07-01

    Numerical methods are proposed for an advanced Poisson-Nernst-Planck-Fermi (PNPF) model for studying ion transport through biological ion channels. PNPF contains many more correlations than most models and simulations of channels, because it includes water and calculates dielectric properties consistently as outputs. This model accounts for the steric effect of ions and water molecules with different sizes and interstitial voids, the correlation effect of crowded ions with different valences, and the screening effect of polarized water molecules in an inhomogeneous aqueous electrolyte. The steric energy is shown to be comparable to the electrical energy under physiological conditions, demonstrating the crucial role of the excluded volume of particles and the voids in the natural function of channel proteins. Water is shown to play a critical role in both correlation and steric effects in the model. We extend the classical Scharfetter-Gummel (SG) method for semiconductor devices to include the steric potential for ion channels, which is a fundamental physical property not present in semiconductors. Together with a simplified matched interface and boundary (SMIB) method for treating molecular surfaces and singular charges of channel proteins, the extended SG method is shown to exhibit important features in flow simulations such as optimal convergence, efficient nonlinear iterations, and physical conservation. The generalized SG stability condition shows why the standard discretization (without SG exponential fitting) of NP equations may fail and that divalent Ca²⁺ may cause more unstable discrete Ca²⁺ fluxes than those of monovalent Na⁺. Two different methods, called the SMIB and multiscale methods, are proposed for two different types of channels, namely, the gramicidin A channel and an L-type calcium channel, depending on whether water is allowed to pass through the channel. Numerical methods are first validated with constructed models whose exact solutions are
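
    The classical Scharfetter-Gummel flux that the paper extends can be stated compactly. A sketch for one edge of a 1D Nernst-Planck discretization, using the Bernoulli function; the steric potential described in the abstract would enter as an extra term in the potential drop (not shown).

```python
import numpy as np

def bernoulli(x):
    """B(x) = x / (exp(x) - 1), patched at the removable singularity x = 0."""
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-8
    safe = np.where(small, 1.0, x)
    return np.where(small, 1.0 - x / 2.0, safe / np.expm1(safe))

def sg_flux(n_left, n_right, dpsi, D, h):
    """Classical Scharfetter-Gummel flux across one cell edge.

    dpsi is the potential drop psi_right - psi_left in thermal units
    (z*e/kT included), for an ion of positive valence; D is the
    diffusivity and h the mesh spacing.
    """
    return (D / h) * (bernoulli(dpsi) * n_left - bernoulli(-dpsi) * n_right)

# dpsi = 0 reduces to pure diffusion: -D*(n_R - n_L)/h
print(sg_flux(1.0, 2.0, 0.0, 1.0, 0.1))            # -10.0
# Boltzmann-equilibrium densities give (numerically) zero flux:
print(sg_flux(1.0, np.exp(-2.0), 2.0, 1.0, 0.1))   # ~0.0
```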

  6. Numerical methods for a Poisson-Nernst-Planck-Fermi model of biological ion channels

    NASA Astrophysics Data System (ADS)

    Liu, Jinn-Liang; Eisenberg, Bob

    2015-07-01

    Numerical methods are proposed for an advanced Poisson-Nernst-Planck-Fermi (PNPF) model for studying ion transport through biological ion channels. PNPF contains many more correlations than most models and simulations of channels, because it includes water and calculates dielectric properties consistently as outputs. This model accounts for the steric effect of ions and water molecules with different sizes and interstitial voids, the correlation effect of crowded ions with different valences, and the screening effect of polarized water molecules in an inhomogeneous aqueous electrolyte. The steric energy is shown to be comparable to the electrical energy under physiological conditions, demonstrating the crucial role of the excluded volume of particles and the voids in the natural function of channel proteins. Water is shown to play a critical role in both correlation and steric effects in the model. We extend the classical Scharfetter-Gummel (SG) method for semiconductor devices to include the steric potential for ion channels, which is a fundamental physical property not present in semiconductors. Together with a simplified matched interface and boundary (SMIB) method for treating molecular surfaces and singular charges of channel proteins, the extended SG method is shown to exhibit important features in flow simulations such as optimal convergence, efficient nonlinear iterations, and physical conservation. The generalized SG stability condition shows why the standard discretization (without SG exponential fitting) of NP equations may fail and that divalent Ca²⁺ may cause more unstable discrete Ca²⁺ fluxes than those of monovalent Na⁺. Two different methods, called the SMIB and multiscale methods, are proposed for two different types of channels, namely, the gramicidin A channel and an L-type calcium channel, depending on whether water is allowed to pass through the channel. Numerical methods are first validated with constructed models whose exact solutions are

  7. Effect of sodium perturbations on rat chemoreceptor spike generation: implications for a Poisson model

    PubMed Central

    Donnelly, David F; Panisello, Jose M; Boggs, Dona

    1998-01-01

    The sensitivity of arterial chemoreceptor spike generation to reductions in excitability was examined using rat chemoreceptors in vitro. Axonal excitability was reduced by reducing extracellular sodium concentration ([Na+]o) by 10-40 % or by applying low doses of tetrodotoxin (TTX). In normoxia and in hypoxia, an isosmotic reduction in [Na+]o caused a proportional decrease in single-fibre, spiking nerve activity. For a 20 % reduction in [Na+]o, nerve activity decreased to 54 ± 7 % of control in normoxia and 41 ± 5 % in hypoxia. Low doses of TTX (25-50 nM) caused a similar decrease in spiking frequency, but this response was variable amongst fibres, with some fibres unaffected by TTX. A reduction in [Na+]o by 20 % caused a slowing of conduction velocity, measured using an electrical stimulus delivered to an electrode placed in the carotid body. Threshold current for spike generation was increased by about 2.7 ± 1.4 %. Threshold current increased by 6.5 ± 3.7 % following a 40 % reduction in [Na+]o. The spike generation process was modelled as a Poisson process in which depolarizing events summate and give rise to an action potential. The experimental data were best fitted to a high-order process characterized by a large number of events and a high event threshold. This result is not consistent with depolarization events caused by episodic transmitter release, but suggests that afferent spike generation is an endogenous process in the afferent nerve fibres, perhaps linked to random channel activity or to thermal noise fluctuations. PMID:9679183
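
    A toy version of the proposed spike-generation picture, with invented parameter values: depolarizing events arrive as a Poisson process, summate with decay, and a spike fires when the running sum crosses a high event threshold; raising the threshold mimics the reduced excitability produced by low [Na+]o or TTX.

```python
import numpy as np

rng = np.random.default_rng(1)

def spike_rate(event_rate, threshold, tau=5.0, dt=0.1, t_max=10_000.0):
    """Spikes per second when Poisson depolarizing events summate.

    event_rate: mean events per ms; tau: event decay constant (ms);
    threshold: summated events needed to fire. All values illustrative.
    """
    decay = np.exp(-dt / tau)
    events = rng.poisson(event_rate * dt, size=int(t_max / dt))
    v, spikes = 0.0, 0
    for e in events:
        v = v * decay + e
        if v >= threshold:
            spikes += 1
            v = 0.0                # reset after each spike
    return spikes / (t_max / 1000.0)

control = spike_rate(event_rate=2.0, threshold=14.0)
raised = spike_rate(event_rate=2.0, threshold=15.0)   # ~7% higher threshold
print(f"control: {control:.1f} Hz, raised threshold: {raised:.1f} Hz")
```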

  8. Conditional modeling of antibody titers using a zero-inflated poisson random effects model: application to Fabrazyme.

    PubMed

    Bonate, Peter L; Sung, Crystal; Welch, Karen; Richards, Susan

    2009-10-01

    Patients who are exposed to biotechnology-derived therapeutics often develop antibodies to the therapeutic, the magnitude of which is assessed by measuring antibody titers. A statistical approach for analyzing antibody titer data conditional on seroconversion is presented. The proposed method is to first transform the antibody titer data based on a geometric series using a common ratio of 2 and a scale factor of 50 and then analyze the exponent using a zero-inflated or hurdle model assuming a Poisson or negative binomial distribution with random effects to account for patient heterogeneity. Patient specific covariates can be used to model the probability of developing an antibody response, i.e., seroconversion, as well as the magnitude of the antibody titer itself. The method was illustrated using antibody titer data from 87 male seroconverted Fabry patients receiving Fabrazyme. Titers from five clinical trials were collected over 276 weeks of therapy with anti-Fabrazyme IgG titers ranging from 100 to 409,600 after exclusion of seronegative patients. The best model to explain seroconversion was a zero-inflated Poisson (ZIP) model where cumulative dose (under a constant dose regimen of dosing every 2 weeks) influenced the probability of seroconversion. There was an 80% chance of seroconversion when the cumulative dose reached 210 mg (90% confidence interval: 194-226 mg). No difference in antibody titers was noted between Japanese or Western patients. Once seroconverted, antibody titers did not remain constant but decreased in an exponential manner from an initial magnitude to a new lower steady-state value. The expected titer after the new steady-state titer had been achieved was 870 (90% CI: 630-1109). The half-life to the new steady-state value after seroconversion was 44 weeks (90% CI: 17-70 weeks). Time to seroconversion did not appear to be correlated with titer at the time of seroconversion. The method can be adequately used to model antibody titer data.
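
    A condensed sketch of the titer transform and a zero-inflated Poisson fit by maximum likelihood (the paper additionally includes random effects and covariates; the data and values here are synthetic). Titers are mapped to exponents via titer = 50 * 2^k, and the exponent k is modeled as ZIP.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

rng = np.random.default_rng(2)

# Synthetic titer exponents: titer = 50 * 2**k  =>  k = log2(titer / 50).
# ZIP: with probability pi the exponent is a structural zero,
# otherwise k ~ Poisson(lam).
true_pi, true_lam = 0.3, 4.0
n = 300
k = np.where(rng.random(n) < true_pi, 0, rng.poisson(true_lam, size=n))

def zip_negloglik(params):
    logit_pi, log_lam = params
    pi = 1.0 / (1.0 + np.exp(-logit_pi))
    lam = np.exp(log_lam)
    p_zero = pi + (1.0 - pi) * np.exp(-lam)    # total mass at zero
    ll = np.where(k == 0, np.log(p_zero),
                  np.log(1.0 - pi) + poisson.logpmf(k, lam))
    return -ll.sum()

res = minimize(zip_negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
pi_hat = 1.0 / (1.0 + np.exp(-res.x[0]))
print(f"pi = {pi_hat:.2f} (true {true_pi}), "
      f"lambda = {np.exp(res.x[1]):.2f} (true {true_lam})")
```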

  9. Birth and Death Process Modeling Leads to the Poisson Distribution: A Journey Worth Taking

    ERIC Educational Resources Information Center

    Rash, Agnes M.; Winkel, Brian J.

    2009-01-01

    This paper describes details of development of the general birth and death process from which we can extract the Poisson process as a special case. This general process is appropriate for a number of courses and units in courses and can enrich the study of mathematics for students as it touches and uses a diverse set of mathematical topics, e.g.,…

  10. Poisson-Nernst-Planck-Fermi theory for modeling biological ion channels

    SciTech Connect

    Liu, Jinn-Liang; Eisenberg, Bob

    2014-12-14

    A Poisson-Nernst-Planck-Fermi (PNPF) theory is developed for studying ionic transport through biological ion channels. Our goal is to deal with the finite size of particles using a Fermi-like distribution without calculating the forces between the particles, because they are both expensive and tricky to compute. We include the steric effect of ions and water molecules with nonuniform sizes and interstitial voids, the correlation effect of crowded ions with different valences, and the screening effect of water molecules in an inhomogeneous aqueous electrolyte. Including the finite volume of water and the voids between particles is an important new part of the theory presented here. Fermi-like distributions of all particle species are derived from the volume exclusion of classical particles. Volume exclusion and the resulting saturation phenomena are especially important to describe the binding and permeation mechanisms of ions in a narrow channel pore. The Gibbs free energy of the Fermi distribution reduces to that of a Boltzmann distribution when these effects are not considered. The classical Gibbs entropy is extended to a new entropy form — called Gibbs-Fermi entropy — that describes mixing configurations of all finite-size particles and voids in a thermodynamic system where microstates do not have equal probabilities. The PNPF model describes the dynamic flow of ions and water molecules, as well as voids, with electric fields and protein charges. The model also provides a quantitative mean-field description of the charge/space competition mechanism of particles within the highly charged and crowded channel pore. The PNPF results are in good accord with experimental currents recorded in a 10⁸-fold range of Ca²⁺ concentrations. The results illustrate the anomalous mole fraction effect, a signature of L-type calcium channels. Moreover, numerical results concerning water density, dielectric permittivity, void volume, and steric energy provide useful

  11. Superposition of the Neyman-Scott Rectangular Pulses Model and the Poisson White Noise Model for the Representation of Tropical Rain Rates

    NASA Astrophysics Data System (ADS)

    Morrissey, M. L.

    2009-12-01

    A point process model for tropical rain rates is developed through the derivation of the third moment expression for a combined point process model. The model is a superposition of a Neyman-Scott rectangular pulse model and a Poisson white noise process model. The model is scalable in the temporal dimension. The derivation of the third moment for this model allows the inclusion of the skewness parameter which is necessary to adequately represent rainfall intensity. Analysis of the model fit to tropical tipping bucket raingauge data ranging in temporal scale from 5 minutes to one day indicates that it can adequately produce synthesized rainfall having the statistical characteristics of rain rate over the range of scales tested. Of special interest is the model’s capability to accurately preserve the probability of extreme tropical rain rates at different scales. In addition to various hydrological applications, the model also has many potential uses in the field of meteorology, such as the study and development of radar rain rate algorithms for the tropics which need to parameterize attenuation due to heavy rain.
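
    A bare-bones simulator of the superposed process, with invented parameter values; the paper's actual contribution, fitting via the derived third-moment expression, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

def nsrp_plus_noise(t_max=240.0, dt=1.0, storm_rate=0.02,
                    cells_per_storm=5.0, cell_disp=2.0, cell_dur=3.0,
                    cell_int=4.0, noise_rate=0.05, noise_mag=1.0):
    """Neyman-Scott rectangular pulses superposed with Poisson white noise.
    Times in hours, intensity in mm/h; all parameter values illustrative."""
    t = np.arange(0.0, t_max, dt)
    rain = np.zeros_like(t)
    # Poisson storm origins; each spawns a Poisson number of cells,
    # displaced exponentially, each a rectangular pulse.
    for origin in rng.uniform(0.0, t_max, rng.poisson(storm_rate * t_max)):
        for _ in range(rng.poisson(cells_per_storm)):
            start = origin + rng.exponential(cell_disp)
            stop = start + rng.exponential(cell_dur)
            rain[(t >= start) & (t < stop)] += rng.exponential(cell_int)
    # Poisson white noise: isolated bursts, one time bin wide.
    idx = rng.integers(0, len(t), rng.poisson(noise_rate * t_max))
    np.add.at(rain, idx, rng.exponential(noise_mag, len(idx)))
    return t, rain

t, rain = nsrp_plus_noise()
m = rain.mean()
print(f"mean {m:.2f} mm/h, third central moment {np.mean((rain - m)**3):.2f}")
```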

  12. A stochastic model for the polygonal tundra based on Poisson-Voronoi Diagrams

    NASA Astrophysics Data System (ADS)

    Cresto Aleina, F.; Brovkin, V.; Muster, S.; Boike, J.; Kutzbach, L.; Sachs, T.; Zuyev, S.

    2012-12-01

    Sub-grid and small scale processes occur in various ecosystems and landscapes (e.g., periglacial ecosystems, peatlands and vegetation patterns). These local heterogeneities are often important or even fundamental to better understand general and large scale properties of the system, but they are either ignored or poorly parameterized in regional and global models. Because of their small scale, the underlying generating processes can be well explained and resolved only by local mechanistic models, which, on the other hand, fail to consider the regional or global influences of those features. A challenging problem is then how to deal with these interactions across different spatial scales, and how to improve our understanding of the role played by local soil heterogeneities in the climate system. This is of particular interest in the northern peatlands, because of the huge amount of carbon stored in these regions. Land-atmosphere greenhouse gas fluxes vary dramatically within these environments. Therefore, to correctly estimate the fluxes a description of the small scale soil variability is needed. Applications of statistical physics methods could be useful tools to upscale local features of the landscape, relating them to large-scale properties. To test this approach we considered a case study: the polygonal tundra. Cryogenic polygons, consisting mainly of elevated dry rims and wet low centers, pattern the terrain of many subarctic regions and are generated by complex crack-and-growth processes. Methane, carbon dioxide and water vapor fluxes vary largely within the environment, as an effect of the small scale processes that characterize the landscape. It is then essential to consider the local heterogeneous behavior of the system components, such as the water table level inside the polygon wet centers, or the depth at which frozen soil thaws. We developed a stochastic model for this environment using Poisson-Voronoi diagrams, which is able to upscale statistical
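
    The underlying geometric construction is straightforward to reproduce: sample a homogeneous Poisson point process and take its Voronoi tessellation. A sketch with scipy; the intensity and the per-polygon attribute are illustrative stand-ins.

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(4)

# Homogeneous Poisson point process on a unit square: the number of seeds
# is Poisson(intensity * area); positions are uniform given the count.
intensity = 50.0                     # expected polygons per unit area
n = rng.poisson(intensity)
seeds = rng.uniform(0.0, 1.0, size=(n, 2))
vor = Voronoi(seeds)
print(f"{n} seeds -> {len(vor.regions)} Voronoi regions")

# Attach a per-polygon attribute, e.g. a water-table level for each wet
# center (purely illustrative numbers), to mimic sub-grid heterogeneity.
water_table = rng.normal(loc=-0.1, scale=0.05, size=n)
print(f"mean water-table level: {water_table.mean():.3f} m")
```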

  13. A global spectral element model for Poisson equations and advective flow over a sphere

    NASA Astrophysics Data System (ADS)

    Mei, Huan; Wang, Faming; Zeng, Zhong; Qiu, Zhouhua; Yin, Linmao; Li, Liang

    2016-03-01

    A global spherical Fourier-Legendre spectral element method is proposed to solve Poisson equations and advective flow over a sphere. In the meridional direction, Legendre polynomials are used and the region is divided into several elements. In order to avoid coordinate singularities at the north and south poles in the meridional direction, Legendre-Gauss-Radau points are chosen at the elements involving the two poles. Fourier polynomials are applied in the zonal direction for its periodicity, with only one element. Then, the partial differential equations are solved on the longitude-latitude meshes without coordinate transformation between spherical and Cartesian coordinates. For verification of the proposed method, a few Poisson equations and advective flows are tested. Firstly, the method is found to be valid for test cases with smooth solutions. The results of the Poisson equations demonstrate that the present method exhibits high accuracy and exponential convergence. High-precision solutions are also obtained with near negligible numerical diffusion during the time evolution for advective flow with smooth shape. Secondly, the results of advective flow with non-smooth shape and deformational flow are also shown to be reasonable and effective. As a result, the present method is proved to be capable of solving flow through different types of elements, and is thereby a desirable method with reliability and high accuracy for solving partial differential equations over a sphere.

  14. A multi-Poisson dynamic mixture model to cluster developmental patterns of gene expression by RNA-seq.

    PubMed

    Ye, Meixia; Wang, Zhong; Wang, Yaqun; Wu, Rongling

    2015-03-01

    Dynamic changes of gene expression reflect an intrinsic mechanism of how an organism responds to developmental and environmental signals. With the increasing availability of expression data across a time-space scale by RNA-seq, the classification of genes as per their biological function using RNA-seq data has become one of the most significant challenges in contemporary biology. Here we develop a clustering mixture model to discover distinct groups of genes expressed during a period of organ development. By integrating the density function of multivariate Poisson distribution, the model accommodates the discrete property of read counts characteristic of RNA-seq data. The temporal dependence of gene expression is modeled by the first-order autoregressive process. The model is implemented with the Expectation-Maximization algorithm and model selection to determine the optimal number of gene clusters and obtain the estimates of Poisson parameters that describe the pattern of time-dependent expression of genes from each cluster. The model has been demonstrated by analyzing real data from an experiment aimed at linking the pattern of gene expression to catkin development in white poplar. The usefulness of the model has been validated through computer simulation. The model provides a valuable tool for clustering RNA-seq data, facilitating our global view of expression dynamics and understanding of gene regulation mechanisms.
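
    A minimal EM sketch for clustering count trajectories with a mixture of independent Poissons; the paper's model additionally couples time points through a first-order autoregressive process, which is omitted here, and the data are synthetic.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(5)

# Synthetic RNA-seq-like counts: genes x time points, two temporal patterns.
T, n_per = 6, 60
lam_true = np.array([[5, 8, 14, 22, 30, 35],     # increasing
                     [30, 24, 16, 10, 6, 4]])    # decreasing
X = np.vstack([rng.poisson(lam_true[0], size=(n_per, T)),
               rng.poisson(lam_true[1], size=(n_per, T))])

K = 2
pi = np.full(K, 1.0 / K)
lam = X[rng.choice(len(X), K, replace=False)] + 0.5   # random init

for _ in range(100):
    # E-step: responsibilities from independent-Poisson log-likelihoods.
    log_r = np.log(pi) + np.stack(
        [poisson.logpmf(X, lam[k]).sum(axis=1) for k in range(K)], axis=1)
    log_r -= log_r.max(axis=1, keepdims=True)
    r = np.exp(log_r)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: update mixing weights and per-cluster Poisson rates.
    pi = r.mean(axis=0)
    lam = (r.T @ X) / r.sum(axis=0)[:, None]

print("mixing weights:", np.round(pi, 2))
print("cluster rate profiles:\n", np.round(lam, 1))
```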

  15. The exponential-Poisson model for recurrent event data: an application to a set of data on malaria in Brazil.

    PubMed

    Macera, Márcia A C; Louzada, Francisco; Cancho, Vicente G; Fontes, Cor J F

    2015-03-01

    In this paper, we introduce a new model for recurrent event data characterized by a fully parametric baseline rate function, which is based on the exponential-Poisson distribution. The model arises from a latent competing risk scenario, in the sense that there is no information about which cause was responsible for the event occurrence. Then, the time of each recurrence is given by the minimum lifetime value among all latent causes. The new model has a particular case, which is the classical homogeneous Poisson process. The properties of the proposed model are discussed, including its hazard rate function, survival function, and ordinary moments. The inferential procedure is based on the maximum likelihood approach. We consider an important issue of model selection between the proposed model and its particular case by the likelihood ratio test and score test. Goodness of fit of the recurrent event models is assessed using Cox-Snell residuals. A simulation study evaluates the performance of the estimation procedure in the presence of small and moderate sample sizes. Applications to two real data sets are provided to illustrate the proposed methodology. One of them, first analyzed by our team of researchers, considers the data concerning the recurrence of malaria, which is an infectious disease caused by a protozoan parasite that infects red blood cells.

  16. Sign-tunable Poisson's ratio in semi-fluorinated graphene.

    PubMed

    Qin, Rui; Zheng, Jiaxin; Zhu, Wenjun

    2017-01-07

    Poisson's ratio is a fundamental property of a material which reflects the transverse strain response to the applied axial strain. Negative Poisson's ratio is allowed theoretically, but is rare in nature. Besides the discovery and tailoring of bulk auxetic materials, recent studies have also found a negative Poisson's ratio in nanomaterials, although their negative Poisson's ratio is mainly explained by the same conventional rigid mechanical models used for bulk auxetic materials. In this work, we report the existence of an in-plane negative Poisson's ratio in a two-dimensional convex structure of newly synthesized semi-fluorinated graphene by using first-principles calculations. In addition, the sign of the Poisson's ratio can be tuned by the applied strain. Interestingly, we find that this unconventional negative Poisson's ratio cannot be explained by conventional rigid mechanical models but originates from the enhanced bond angle strain over the bond strain due to chemical functionalization. This new mechanism of auxetics extends the scope of auxetic nanomaterials and can serve as a design principle for the future discovery and design of new auxetic materials.

  17. A Poisson-lognormal conditional-autoregressive model for multivariate spatial analysis of pedestrian crash counts across neighborhoods.

    PubMed

    Wang, Yiyi; Kockelman, Kara M

    2013-11-01

    This work examines the relationship between 3-year pedestrian crash counts across Census tracts in Austin, Texas, and various land use, network, and demographic attributes, such as land use balance, residents' access to commercial land uses, sidewalk density, lane-mile densities (by roadway class), and population and employment densities (by type). The model specification allows for region-specific heterogeneity, correlation across response types, and spatial autocorrelation via a Poisson-based multivariate conditional auto-regressive (CAR) framework and is estimated using Bayesian Markov chain Monte Carlo methods. Least-squares regression estimates of walk-miles traveled per zone serve as the exposure measure. Here, the Poisson-lognormal multivariate CAR model outperforms an aspatial Poisson-lognormal multivariate model and a spatial model (without cross-severity correlation), both in terms of fit and inference. Positive spatial autocorrelation emerges across neighborhoods, as expected (due to latent heterogeneity or missing variables that trend in space, resulting in spatial clustering of crash counts). In comparison, the positive aspatial, bivariate cross correlation of severe (fatal or incapacitating) and non-severe crash rates reflects latent covariates that have impacts across severity levels but are more local in nature (such as lighting conditions and local sight obstructions), along with spatially lagged cross correlation. Results also suggest greater mixing of residences and commercial land uses is associated with higher pedestrian crash risk across different severity levels, ceteris paribus, presumably since such access produces more potential conflicts between pedestrian and vehicle movements. Interestingly, network densities show variable effects, and sidewalk provision is associated with lower severe-crash rates.

  18. On stability of ground states for finite crystals in the Schrödinger-Poisson model

    NASA Astrophysics Data System (ADS)

    Komech, A.; Kopylova, E.

    2017-03-01

    We consider the Schrödinger-Poisson-Newton equations for finite crystals under periodic boundary conditions with one ion per cell of a lattice. The electrons are described by a one-particle Schrödinger equation. Our main results are (i) the global dynamics with moving ions and (ii) the orbital stability of the periodic ground state under novel Jellium and Wiener-type conditions on the ion charge density. Under the Jellium condition, both ionic and electronic charge densities for the ground state are uniform.

  19. Harnessing the theoretical foundations of the exponential and beta-Poisson dose-response models to quantify parameter uncertainty using Markov Chain Monte Carlo.

    PubMed

    Schmidt, Philip J; Pintar, Katarina D M; Fazil, Aamir M; Topp, Edward

    2013-09-01

    Dose-response models are the essential link between exposure assessment and computed risk values in quantitative microbial risk assessment, yet the uncertainty inherent in computed risks, which arises because the dose-response model parameters are estimated from limited epidemiological data, is rarely quantified. Second-order risk characterization approaches incorporating uncertainty in dose-response model parameters can provide more complete information to decision-makers by separating variability and uncertainty to quantify the uncertainty in computed risks. Therefore, the objective of this work is to develop procedures to sample from posterior distributions describing uncertainty in the parameters of exponential and beta-Poisson dose-response models using Bayes's theorem and Markov Chain Monte Carlo (in OpenBUGS). The theoretical origins of the beta-Poisson dose-response model are used to identify a decomposed version of the model that enables Bayesian analysis without the need to evaluate Kummer confluent hypergeometric functions. Herein, it is also established that the beta distribution in the beta-Poisson dose-response model cannot address variation among individual pathogens, criteria to validate use of the conventional approximation to the beta-Poisson model are proposed, and simple algorithms to evaluate actual beta-Poisson probabilities of infection are investigated. The developed MCMC procedures are applied to analysis of a case study data set, and it is demonstrated that an important region of the posterior distribution of the beta-Poisson dose-response model parameters is attributable to the absence of low-dose data. This region includes beta-Poisson models for which the conventional approximation is especially invalid and in which many beta distributions have an extreme shape with questionable plausibility.
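
    The contrast at the center of the paper is easy to compute: the exact beta-Poisson dose-response is a Kummer confluent hypergeometric function, while the familiar closed form is only an approximation (valid roughly when β is much larger than both α and 1). A sketch using scipy, with illustrative parameter values:

```python
import numpy as np
from scipy.special import hyp1f1

def beta_poisson_exact(dose, a, b):
    """Exact single-hit beta-Poisson: P = 1 - 1F1(a, a + b, -dose)."""
    return 1.0 - hyp1f1(a, a + b, -dose)

def beta_poisson_approx(dose, a, b):
    """Conventional approximation: P = 1 - (1 + dose/b)**(-a)."""
    return 1.0 - (1.0 + dose / b) ** (-a)

a, b = 0.2, 10.0                     # illustrative parameter values
for d in np.logspace(-1, 3, 5):
    print(f"dose {d:8.1f}: exact {beta_poisson_exact(d, a, b):.4f}  "
          f"approx {beta_poisson_approx(d, a, b):.4f}")
```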

  20. Integrated analysis of transcriptomic and proteomic data of Desulfovibrio vulgaris: Zero-Inflated Poisson regression models to predict abundance of undetected proteins

    SciTech Connect

    Nie, Lei; Wu, Gang; Brockman, Fred J.; Zhang, Weiwen

    2006-05-04

    Advances in DNA microarray and proteomics technologies have enabled high-throughput measurement of mRNA expression and protein abundance. Parallel profiling of mRNA and protein on a global scale and integrative analysis of these two data types could provide additional insight into the metabolic mechanisms underlying complex biological systems. However, because protein abundance and mRNA expression are affected by many cellular and physical processes, there have been conflicting results on the correlation of these two measurements. In addition, as current proteomic methods can detect only a small fraction of proteins present in cells, no correlation study of these two data types has been done thus far at the whole-genome level. In this study, we describe a novel data-driven statistical model to integrate whole-genome microarray and proteomic data collected from Desulfovibrio vulgaris grown under three different conditions. Based on the Poisson distribution pattern of proteomic data and the fact that a large number of proteins were undetected (excess zeros), zero-inflated Poisson models were used to define the correlation pattern of mRNA and protein abundance. The models assumed that there is a probability mass at zero representing some of the undetected proteins because of technical limitations. The models thus use abundance measurements of transcripts and proteins experimentally detected as input to generate predictions of protein abundances as output for all genes in the genome. We demonstrated the statistical models by comparatively analyzing D. vulgaris grown on lactate-based versus formate-based media. Increased expression of Ech hydrogenase and of the alcohol dehydrogenase (Adh)-periplasmic Fe-only hydrogenase (Hyd) pathway for ATP synthesis was predicted for D. vulgaris grown on formate.

  1. Derivation of Poisson and Nernst-Planck equations in a bath and channel from a molecular model.

    PubMed

    Schuss, Z; Nadler, B; Eisenberg, R S

    2001-09-01

    Permeation of ions from one electrolytic solution to another, through a protein channel, is a biological process of considerable importance. Permeation occurs on a time scale of micro- to milliseconds, far longer than the femtosecond time scales of atomic motion. Direct simulations of atomic dynamics are not yet possible for such long-time scales; thus, averaging is unavoidable. The question is what and how to average. In this paper, we average a Langevin model of ionic motion in a bulk solution and protein channel. The main result is a coupled system of averaged Poisson and Nernst-Planck equations (CPNP) involving conditional and unconditional charge densities and conditional potentials. The resulting NP equations contain the averaged force on a single ion, which is the sum of two components. The first component is the gradient of a conditional electric potential that is the solution of Poisson's equation with conditional and permanent charge densities and boundary conditions of the applied voltage. The second component is the self-induced force on an ion due to surface charges induced only by that ion at dielectric interfaces. The ion induces surface polarization charge that exerts a significant force on the ion itself, not present in earlier PNP equations. The proposed CPNP system is not complete, however, because the electric potential satisfies Poisson's equation with conditional charge densities, conditioned on the location of an ion, while the NP equations contain unconditional densities. The conditional densities are closely related to the well-studied pair-correlation functions of equilibrium statistical mechanics. We examine a specific closure relation, which on the one hand replaces the conditional charge densities by the unconditional ones in the Poisson equation, and on the other hand replaces the self-induced force in the NP equation by an effective self-induced force. This effective self-induced force is nearly zero in the baths but is approximately

  2. Poisson-Riemannian geometry

    NASA Astrophysics Data System (ADS)

    Beggs, Edwin J.; Majid, Shahn

    2017-04-01

    We study noncommutative bundles and Riemannian geometry at the semiclassical level of first order in a deformation parameter λ, using a functorial approach. This leads us to field equations of 'Poisson-Riemannian geometry' between the classical metric, the Poisson bracket and a certain Poisson-compatible connection needed as initial data for the quantisation of the differential structure. We use such data to define a functor Q to O(λ²) from the monoidal category of all classical vector bundles equipped with connections to the monoidal category of bimodules equipped with bimodule connections over the quantised algebra. This is used to 'semiquantise' the wedge product of the exterior algebra and in the Riemannian case, the metric and the Levi-Civita connection in the sense of constructing a noncommutative geometry to O(λ²). We solve our field equations for the Schwarzschild black-hole metric under the assumption of spherical symmetry and classical dimension, finding a unique solution and the necessity of nonassociativity at order λ², which is similar to previous results for quantum groups. The paper also includes a nonassociative hyperboloid, nonassociative fuzzy sphere and our previously algebraic bicrossproduct model.

  3. Determination of Diffusion Coefficients in Cement-Based Materials: An Inverse Problem for the Nernst-Planck and Poisson Models

    NASA Astrophysics Data System (ADS)

    Szyszkiewicz-Warzecha, Krzysztof; Jasielec, Jerzy J.; Fausek, Janusz; Filipek, Robert

    2016-08-01

    Transport properties of ions have a significant impact on the possibility of rebar corrosion; thus, knowledge of the diffusion coefficient is important for reinforced-concrete durability. Numerous tests for the determination of diffusion coefficients have been proposed, but analysis of some of these tests shows that they are too simplistic or even invalid. Hence, more rigorous models to calculate the coefficients should be employed. Here we propose the Nernst-Planck and Poisson equations, which take into account the concentration and electric potential fields. Based on this model a special inverse method is presented for the determination of the chloride diffusion coefficient. It requires the measurement of concentration profiles or flux on the boundary and solution of the NPP model to define the goal function. Finding the global minimum is equivalent to the determination of diffusion coefficients. Typical examples of the application of the presented method are given.
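
    The inverse-problem structure can be shown in miniature: simulate a concentration profile for a trial diffusion coefficient, define a least-squares goal function against the measured profile, and minimize. The sketch below uses a plain Fickian erfc profile as the forward model rather than the full Nernst-Planck-Poisson system, and all values are invented.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import erfc

x = np.linspace(0.0, 0.05, 26)       # depth (m)

def profile(D, t=3.0e6, c0=1.0):
    """Simplified forward model: c(x,t) = c0 * erfc(x / (2*sqrt(D*t)))."""
    return c0 * erfc(x / (2.0 * np.sqrt(D * t)))

rng = np.random.default_rng(6)
D_true = 5.0e-12                     # m^2/s, a typical order of magnitude
measured = profile(D_true) + rng.normal(0.0, 0.01, x.size)

# Goal function: squared misfit; minimizing it recovers the coefficient.
goal = lambda logD: np.sum((profile(10.0 ** logD) - measured) ** 2)
res = minimize_scalar(goal, bounds=(-13.0, -10.0), method="bounded")
print(f"recovered D = {10 ** res.x:.2e} m^2/s (true {D_true:.1e})")
```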

  4. Climatology of Station Storm Rainfall in the Continental United States: Parameters of the Bartlett-Lewis and Poisson Rectangular Pulses Models

    NASA Technical Reports Server (NTRS)

    Hawk, Kelly Lynn; Eagleson, Peter S.

    1992-01-01

    The parameters of two stochastic models of point rainfall, the Bartlett-Lewis model and the Poisson rectangular pulses model, are estimated for each month of the year from the historical records of hourly precipitation at more than seventy first-order stations in the continental United States. The parameters are presented both in tabular form and as isopleths on maps. The Poisson rectangular pulses parameters are useful in implementing models of the land surface water balance. The Bartlett-Lewis parameters are useful in disaggregating precipitation to a time period shorter than that of existing observations. Information is also included on a floppy disk.

  5. Spatio-energetic cross-talks in photon counting detectors: detector model and correlated Poisson data generator

    NASA Astrophysics Data System (ADS)

    Taguchi, Katsuyuki; Polster, Christoph; Lee, Okkyun; Kappler, Steffen

    2016-03-01

    An x-ray photon interacts with photon counting detectors (PCDs) and generates an electron charge cloud or multiple clouds. The clouds (thus, the photon energy) may be split between two adjacent PCD pixels when the interaction occurs near pixel boundaries, producing a count at both of the two pixels. This is called double-counting with charge sharing. The output of an individual PCD pixel is Poisson-distributed integer counts; however, the outputs of adjacent pixels are correlated due to double-counting. Major problems are the lack of a detector noise model for the spatio-energetic crosstalk and the lack of an efficient simulation tool. Monte Carlo simulation can accurately simulate these phenomena and produce noisy data; however, it is not computationally efficient. In this study, we developed a new detector model and implemented it in an efficient software simulator which uses a Poisson random number generator to produce correlated noisy integer counts. The detector model takes the following effects into account: (1) detection efficiency and incomplete charge collection; (2) photoelectric effect with total absorption; (3) photoelectric effect with fluorescence x-ray emission and re-absorption; (4) photoelectric effect with fluorescence x-ray emission which leaves the PCD completely; and (5) electric noise. The model produced a total detector spectrum similar to previous MC simulation data. The model can be used to predict spectrum and correlation with various different settings. The simulated noisy data demonstrated the expected performance: (a) data were integers; (b) the mean and covariance matrix were close to the target values; (c) noisy data generation was very efficient.
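
    One generic way to produce Poisson-marginal counts that are positively correlated between neighboring pixels, as double-counting does, is a shared-component construction; this is a simple illustration, not the authors' detector model.

```python
import numpy as np

rng = np.random.default_rng(7)

def correlated_pixel_counts(lam_private, lam_shared, n_frames):
    """Two adjacent pixels with Poisson marginals and positive correlation.

    Each pixel records private events plus shared boundary events (charge
    split across the pixel border counted by both). The marginals are
    Poisson(lam_private + lam_shared); the covariance equals lam_shared.
    """
    shared = rng.poisson(lam_shared, n_frames)
    a = rng.poisson(lam_private, n_frames) + shared
    b = rng.poisson(lam_private, n_frames) + shared
    return a, b

a, b = correlated_pixel_counts(80.0, 20.0, 200_000)
print(f"mean a = {a.mean():.1f} (target 100.0)")
print(f"cov(a, b) = {np.cov(a, b)[0, 1]:.1f} (target 20.0)")
```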

  6. Predictors of the number of under-five malnourished children in Bangladesh: application of the generalized Poisson regression model

    PubMed Central

    2013-01-01

    Background: Malnutrition is one of the principal causes of child mortality in developing countries including Bangladesh. To our knowledge, most of the available studies that addressed the issue of malnutrition among under-five children considered categorical (dichotomous/polychotomous) outcome variables and applied logistic regression (binary/multinomial) to find their predictors. In this study the malnutrition variable (i.e. outcome) is defined as the number of under-five malnourished children in a family, which is a non-negative count variable. The purposes of the study are (i) to demonstrate the applicability of the generalized Poisson regression (GPR) model as an alternative to other statistical methods and (ii) to find some predictors of this outcome variable. Methods: The data are extracted from the Bangladesh Demographic and Health Survey (BDHS) 2007. Briefly, this survey employs a nationally representative sample based on a two-stage stratified sample of households. A total of 4,460 under-five children are analysed using various statistical techniques, namely the Chi-square test and the GPR model. Results: The GPR model (as compared to the standard Poisson regression and negative binomial regression) is found to be justified for studying the above-mentioned outcome variable because of its under-dispersion (variance < mean) property. Our study also identifies several significant predictors of the outcome variable, namely mother’s education, father’s education, wealth index, sanitation status, source of drinking water, and total number of children ever born to a woman. Conclusions: Consistencies of our findings in light of many other studies suggest that the GPR model is an ideal alternative to other statistical models for analysing the number of under-five malnourished children in a family. Strategies based on significant predictors may improve the nutritional status of children in Bangladesh. PMID:23297699
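
    A sketch of why the generalized Poisson distribution can handle under-dispersion, using Consul's parameterization (an assumption; the record does not state which parameterization is used): a negative dispersion parameter gives variance < mean.

```python
import numpy as np
from math import lgamma, log, exp

def gp_logpmf(y, theta, lam):
    """Consul's generalized Poisson log-pmf:
    P(y) = theta*(theta + lam*y)**(y-1) * exp(-(theta + lam*y)) / y!
    For lam < 0 the support is truncated where theta + lam*y <= 0."""
    if theta + lam * y <= 0:
        return -np.inf
    return (log(theta) + (y - 1) * log(theta + lam * y)
            - (theta + lam * y) - lgamma(y + 1))

theta, lam = 3.0, -0.3               # illustrative under-dispersed case
ys = np.arange(0, 50)
p = np.array([exp(gp_logpmf(y, theta, lam)) for y in ys])
p /= p.sum()                          # renormalize over truncated support
mean = (ys * p).sum()
var = ((ys - mean) ** 2 * p).sum()
print(f"mean = {mean:.2f}, variance = {var:.2f} (under-dispersed)")
print(f"theory: mean {theta/(1-lam):.2f}, variance {theta/(1-lam)**3:.2f}")
```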

  7. SU-E-T-144: Bayesian Inference of Local Relapse Data Using a Poisson-Based Tumour Control Probability Model

    SciTech Connect

    La Russa, D

    2015-06-15

    Purpose: The purpose of this project is to develop a robust method of parameter estimation for a Poisson-based TCP model using Bayesian inference. Methods: Bayesian inference was performed using the PyMC3 probabilistic programming framework written in Python. A Poisson-based TCP regression model that accounts for clonogen proliferation was fit to observed rates of local relapse as a function of equivalent dose in 2 Gy fractions for a population of 623 stage-I non-small-cell lung cancer patients. The Slice Markov Chain Monte Carlo sampling algorithm was used to sample the posterior distributions, and was initiated using the maximum of the posterior distributions found by optimization. The calculation of TCP with each sample step required integration over the free parameter α, which was performed using an adaptive 24-point Gauss-Legendre quadrature. Convergence was verified via inspection of the trace plot and posterior distribution for each of the fit parameters, as well as with comparisons of the most probable parameter values with their respective maximum likelihood estimates. Results: Posterior distributions for α, the standard deviation of α (σ), the average tumour cell-doubling time (Td), and the repopulation delay time (Tk) were generated assuming α/β = 10 Gy and a fixed clonogen density of 10⁷ cm⁻³. Posterior predictive plots generated from samples from these posterior distributions are in excellent agreement with the observed rates of local relapse used in the Bayesian inference. The most probable values of the model parameters also agree well with maximum likelihood estimates. Conclusion: A robust method of performing Bayesian inference of TCP data using a complex TCP model has been established.
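
    The Poisson TCP with clonogen proliferation has a compact form once α is fixed, and the population average integrates over a normal distribution of α. The sketch below uses Gauss-Hermite quadrature instead of the abstract's 24-point Gauss-Legendre rule, and all parameter values are placeholders, not the fitted ones.

```python
import numpy as np

def tcp_given_alpha(alpha, eqd2, N=1e7, T=40.0, Tk=21.0, Td=3.0):
    """Poisson TCP with clonogen repopulation after a delay Tk (days):
    TCP = exp(-N * exp(-alpha*EQD2 + ln2 * max(T - Tk, 0) / Td))."""
    repop = np.log(2.0) * max(T - Tk, 0.0) / Td
    return np.exp(-N * np.exp(-alpha * eqd2 + repop))

def tcp_marginal(eqd2, mu=0.30, sigma=0.07, n_nodes=40):
    """Average TCP over alpha ~ Normal(mu, sigma) via Gauss-Hermite nodes."""
    x, w = np.polynomial.hermite_e.hermegauss(n_nodes)
    alphas = np.maximum(mu + sigma * x, 1e-6)   # keep alpha positive
    vals = np.array([tcp_given_alpha(a, eqd2) for a in alphas])
    return float((w * vals).sum() / w.sum())

for dose in (50.0, 60.0, 70.0, 80.0):
    print(f"EQD2 {dose:.0f} Gy: TCP = {tcp_marginal(dose):.3f}")
```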

  8. Poisson structures for the Aristotelian model of three-body motion

    NASA Astrophysics Data System (ADS)

    Abadoğlu, E.; Gümral, H.

    2011-08-01

    We explicitly present the Poisson structures of a dynamical system with three degrees of freedom introduced and studied by Calogero et al (2005 J. Phys. A: Math. Gen. 38 8873-96). We first show the construction of a formal Hamiltonian structure for a time-dependent Hamiltonian function. We then cast the dynamical equations into the form of a gradient flow by means of a potential function. By reducing the number of equations, we obtain the second time-independent Hamiltonian function which includes all parameters of the system. This extends the result of Calogero et al (2009 J. Phys. A: Math. Theor. 42 015205) for semi-symmetrical motion. We present bi-Hamiltonian structures for two special cases of the cited references. It turns out that the case of three bodies, two of which do not interact with each other but are coupled through the interaction of a third, requires separate treatment. We conclude with a discussion on the generic form of the second time-independent Hamiltonian function.

  9. Assessment of the spatial occurrence of childhood leukaemia mortality using standardized rate ratios with a simple linear Poisson model.

    PubMed

    Aickin, M; Chapin, C A; Flood, T J; Englender, S J; Caldwell, G G

    1992-08-01

    Reports of a suspected cluster of childhood leukaemia cases in West Central Phoenix have led to a number of epidemiological studies in the geographical area. We report here on a death certificate-based mortality study, which indicated an elevated rate ratio of 1.95 during 1966-1986, using the remainder of the Phoenix standard metropolitan statistical area (SMSA) as a comparison region. In the process of analysing the data from this study, a methodology for dealing with denominator variability in a standardized mortality ratio was developed using a simple linear Poisson model. This new approach is seen as being of general use in the analysis of standardized rate ratios (SRR), as well as being particularly appropriate for cluster investigations.

  10. An effective differential expression analysis of deep-sequencing data based on the Poisson log-normal model.

    PubMed

    Wu, Jun; Zhao, Xiaodong; Lin, Zongli; Shao, Zhifeng

    2015-04-01

    A tremendous amount of deep-sequencing data has unprecedentedly improved our understanding of biomedical science through digital sequence reads. To mine useful information from such data, a proper distribution for modeling the full range of count data and accurate parameter estimation are required. In this paper, we propose a method, called "DEPln," for differential expression analysis based on the Poisson log-normal (PLN) distribution with an accurate parameter estimation strategy, which aims to overcome the inconvenience in the mathematical analysis of the traditional PLN distribution. The performance of our proposed method is validated by both synthetic and real data. Experimental results indicate that our method outperforms traditional methods in terms of discrimination ability and results in a good tradeoff between recall rate and precision. Thus, our work provides a new approach for gene expression analysis and has strong potential in deep-sequencing based research.
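
    The mathematical inconvenience referred to here is that the Poisson log-normal pmf has no closed form; each probability is an integral over the latent log-normal rate. A direct numerical version (illustrative parameters):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import poisson, lognorm

def pln_pmf(y, mu, sigma):
    """Poisson log-normal P(Y = y): integrate the Poisson pmf against a
    log-normal density for the latent rate. No closed form exists."""
    f = lambda lam: poisson.pmf(y, lam) * lognorm.pdf(lam, s=sigma,
                                                      scale=np.exp(mu))
    return quad(f, 0.0, np.inf, limit=200)[0]

mu, sigma = 2.0, 0.8                 # illustrative parameters of log(rate)
probs = [pln_pmf(y, mu, sigma) for y in range(60)]
mean = sum(y * p for y, p in enumerate(probs))
print(f"sum over y < 60: {sum(probs):.4f}")
print(f"mean ~ {mean:.2f} (theory exp(mu + sigma^2/2) = "
      f"{np.exp(mu + sigma**2 / 2):.2f})")
```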

  11. Robust estimates of divergence times and selection with a Poisson random field model: a case study of comparative phylogeographic data.

    PubMed

    Amei, Amei; Smith, Brian Tilston

    2014-01-01

    Mutation frequencies can be modeled as a Poisson random field (PRF) to estimate speciation times and the degree of selection on newly arisen mutations. This approach provides a quantitative theory for comparing intraspecific polymorphism with interspecific divergence in the presence of selection and can be used to estimate population genetic parameters. Although the original PRF model has been extended to more general biological settings to make statistical inference about selection and divergence among model organisms, it has not been incorporated into phylogeographic studies that focus on estimating population genetic parameters for nonmodel organisms. Here, we modified a recently developed time-dependent PRF model to independently estimate genetic parameters from a nuclear and mitochondrial DNA data set of 22 sister pairs of birds that have diverged across a biogeographic barrier. We found that species that inhabit humid habitats had more recent divergence times and larger effective population sizes than those that inhabit drier habitats, and divergence times estimated from the PRF model were similar to estimates from a coalescent species-tree approach. Selection coefficients were higher in sister pairs that inhabited drier habitats than in those in humid habitats, but overall the mitochondrial DNA was under weak selection. Our study indicates that PRF models are useful for estimating various population genetic parameters and serve as a framework for incorporating estimates of selection into comparative phylogeographic studies.

  12. Nonlinear Poisson equation for heterogeneous media.

    PubMed

    Hu, Langhua; Wei, Guo-Wei

    2012-08-22

    The Poisson equation is a widely accepted model for electrostatic analysis. However, the Poisson equation is derived based on electric polarizations in a linear, isotropic, and homogeneous dielectric medium. This article introduces a nonlinear Poisson equation to take into consideration hyperpolarization effects due to intensive charges and possibly nonlinear, anisotropic, and heterogeneous media. The variational principle is utilized to derive the nonlinear Poisson model from an electrostatic energy functional. To apply the proposed nonlinear Poisson equation for the solvation analysis, we also construct a nonpolar solvation energy functional based on the nonlinear Poisson equation by using the geometric measure theory. At a fixed temperature, the proposed nonlinear Poisson theory is extensively validated by the electrostatic analysis of the Kirkwood model and a set of 20 proteins, and the solvation analysis of a set of 17 small molecules whose experimental measurements are also available for a comparison. Moreover, the nonlinear Poisson equation is further applied to the solvation analysis of 21 compounds at different temperatures. Numerical results are compared to theoretical prediction, experimental measurements, and those obtained from other theoretical methods in the literature. A good agreement between our results and experimental data as well as theoretical results suggests that the proposed nonlinear Poisson model is a potentially useful model for electrostatic analysis involving hyperpolarization effects.

  13. Nonlinear Poisson Equation for Heterogeneous Media

    PubMed Central

    Hu, Langhua; Wei, Guo-Wei

    2012-01-01

    The Poisson equation is a widely accepted model for electrostatic analysis. However, the Poisson equation is derived based on electric polarizations in a linear, isotropic, and homogeneous dielectric medium. This article introduces a nonlinear Poisson equation to take into consideration hyperpolarization effects due to intensive charges and possibly nonlinear, anisotropic, and heterogeneous media. The variational principle is utilized to derive the nonlinear Poisson model from an electrostatic energy functional. To apply the proposed nonlinear Poisson equation for the solvation analysis, we also construct a nonpolar solvation energy functional based on the nonlinear Poisson equation by using the geometric measure theory. At a fixed temperature, the proposed nonlinear Poisson theory is extensively validated by the electrostatic analysis of the Kirkwood model and a set of 20 proteins, and the solvation analysis of a set of 17 small molecules whose experimental measurements are also available for a comparison. Moreover, the nonlinear Poisson equation is further applied to the solvation analysis of 21 compounds at different temperatures. Numerical results are compared to theoretical prediction, experimental measurements, and those obtained from other theoretical methods in the literature. A good agreement between our results and experimental data as well as theoretical results suggests that the proposed nonlinear Poisson model is a potentially useful model for electrostatic analysis involving hyperpolarization effects. PMID:22947937

  14. Poisson's spot and Gouy phase

    NASA Astrophysics Data System (ADS)

    da Paz, I. G.; Soldati, Rodolfo; Cabral, L. A.; de Oliveira, J. G. G.; Sampaio, Marcos

    2016-12-01

    Recently there have been experimental results on Poisson's spot matter-wave interferometry, followed by theoretical models describing the relative importance of the wave and particle behaviors for the phenomenon. We propose an analytical theoretical model for Poisson's spot with matter waves based on the Babinet principle, in which we use the results for free propagation and single-slit diffraction. We take into account effects of loss of coherence and finite detection area using the propagator for a quantum particle interacting with an environment. We observe that the matter-wave Gouy phase plays a role in the existence of the central peak and thus corroborates the predominantly wavelike character of Poisson's spot. Our model shows remarkable agreement with the experimental data for deuterium (D2) molecules.

  15. Modelling carcinogenesis after radiotherapy using Poisson statistics: implications for IMRT, protons and ions.

    PubMed

    Jones, Bleddyn

    2009-06-01

    Current technical radiotherapy advances aim to (a) better conform the dose contours to cancers and (b) reduce the integral dose exposure and thereby minimise unnecessary dose exposure to normal tissues unaffected by the cancer. Various types of conformal and intensity modulated radiotherapy (IMRT) using x-rays can achieve (a), while charged particle therapy (CPT), using proton and ion beams, can achieve both (a) and (b), but at greater financial cost. Not only is the long-term risk of radiation-related normal tissue complications important, but so is the risk of carcinogenesis. Physical dose distribution plans can be generated to show the differences between the above techniques. IMRT is associated with a dose bath of low to medium dose due to fluence transfer: dose is effectively transferred from designated organs at risk to other areas; thus dose and risk are transferred. Many clinicians are concerned that there may be additional carcinogenesis many years after IMRT. CPT reduces the total energy deposition in the body and offers many potential advantages in terms of the prospects for better quality of life along with cancer cure. With C ions there is a tail of dose beyond the Bragg peaks, due to nuclear fragmentation; this is not found with protons. CPT generally uses higher linear energy transfer (which varies with particle and energy), which carries a higher relative risk of malignant induction, but also of cell death quantified by the relative biological effect concept, so at higher dose levels the frank development of malignancy should be reduced. Standard linear radioprotection models have been used to show a reduction in carcinogenesis risk of between two- and 15-fold depending on the CPT location. But the standard risk models make no allowance for fractionation and some have a dose limit at 4 Gy. Alternatively, tentative application of the linear quadratic model and Poissonian statistics to chromosome breakage and cell kill simultaneously allows estimation of

  16. Misspecified poisson regression models for large-scale registry data: inference for 'large n and small p'.

    PubMed

    Grøn, Randi; Gerds, Thomas A; Andersen, Per K

    2016-03-30

    Poisson regression is an important tool in register-based epidemiology where it is used to study the association between exposure variables and event rates. In this paper, we will discuss the situation with 'large n and small p', where n is the sample size and p is the number of available covariates. Specifically, we are concerned with modeling options when there are time-varying covariates that can have time-varying effects. One problem is that tests of the proportional hazards assumption, of no interactions between exposure and other observed variables, or of other modeling assumptions have large power due to the large sample size and will often indicate statistical significance even for numerically small deviations that are unimportant for the subject matter. Another problem is that information on important confounders may be unavailable. In practice, this situation may lead to simple working models that are then likely misspecified. To support and improve conclusions drawn from such models, we discuss methods for sensitivity analysis, for estimation of average exposure effects using aggregated data, and a semi-parametric bootstrap method to obtain robust standard errors. The methods are illustrated using data from the Danish national registries investigating the diabetes incidence for individuals treated with antipsychotics compared with the general unexposed population.

  17. A two-phase Poisson process model and its application to analysis of cancer mortality among A-bomb survivors.

    PubMed

    Ohtaki, Megu; Tonda, Tetsuji; Aihara, Kazuyuki

    2015-10-01

    We consider a two-phase Poisson process model where only early successive transitions are assumed to be sensitive to exposure. In the case where intensity transitions are low, we derive analytically an approximate formula for the distribution of time to event for the excess hazard ratio (EHR) due to a single point exposure. The formula for EHR is a polynomial in exposure dose. Since the formula for EHR contains no unknown parameters except for the number of total stages, number of exposure-sensitive stages, and a coefficient of exposure effect, it is applicable easily under a variety of situations where there exists a possible latency time from a single point exposure to occurrence of event. Based on the multistage hypothesis of cancer, we formulate a radiation carcinogenesis model in which only some early consecutive stages of the process are sensitive to exposure, whereas later stages are not affected. An illustrative analysis using the proposed model is given for cancer mortality among A-bomb survivors.

  18. Computational Process Modeling for Additive Manufacturing

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2014-01-01

    Computational process and material modeling of powder-bed additive manufacturing of IN 718. The goals are to optimize material build parameters with reduced time and cost through modeling, to increase understanding of build properties, to increase the reliability of builds, to decrease the time to adoption of the process for critical hardware, and potentially to decrease post-build heat treatments. The approach: conduct single-track and coupon builds at various build parameters; record build-parameter information and QM meltpool data; refine the Applied Optimization powder-bed AM process model using these data; report thermal modeling results; conduct metallography of build samples; calibrate STK models using the metallography findings; run STK models using AO thermal profiles and report STK modeling results; and validate the modeling with an additional build. Photodiode intensity measurements were highly linear with power input; melt-pool intensity was highly correlated with melt-pool size; melt-pool size and intensity increase with power. Applied Optimization will use the data to develop a powder-bed additive manufacturing process model.

  19. Fuzzy classifier based support vector regression framework for Poisson ratio determination

    NASA Astrophysics Data System (ADS)

    Asoodeh, Mojtaba; Bagheripour, Parisa

    2013-09-01

    Poisson ratio is considered as one of the most important rock mechanical properties of hydrocarbon reservoirs. Determination of this parameter through laboratory measurement is time-, cost-, and labor-intensive. Furthermore, laboratory measurements do not provide continuous data along the reservoir intervals. Hence, a fast, accurate, and inexpensive way of determining Poisson ratio that produces continuous data over the whole reservoir interval is desirable. For this purpose, support vector regression (SVR) method based on statistical learning theory (SLT) was employed as a supervised learning algorithm to estimate Poisson ratio from conventional well log data. SVR is capable of accurately extracting the implicit knowledge contained in conventional well logs and converting the gained knowledge into Poisson ratio data. Structural risk minimization (SRM) principle which is embedded in the SVR structure in addition to empirical risk minimization (EMR) principle provides a robust model for finding quantitative formulation between conventional well log data and Poisson ratio. Although satisfying results were obtained from an individual SVR model, it had flaws of overestimation in low Poisson ratios and underestimation in high Poisson ratios. These errors were eliminated through implementation of fuzzy classifier based SVR (FCBSVR). The FCBSVR significantly improved accuracy of the final prediction. This strategy was successfully applied to data from carbonate reservoir rocks of an Iranian Oil Field. Results indicated that SVR-predicted Poisson ratio values are in good agreement with measured values.
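
    The regression backbone is standard support vector regression; a condensed sketch with scikit-learn on synthetic stand-ins for conventional well-log inputs (the fuzzy-classifier stage that routes samples to range-specific SVRs is omitted).

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(8)

# Synthetic stand-ins for conventional well logs (sonic travel time,
# density, neutron porosity); the relation and noise level are invented.
n = 400
logs = rng.normal(size=(n, 3))
pr = (0.25 + 0.05 * logs[:, 0] - 0.03 * logs[:, 1]
      + 0.02 * np.tanh(logs[:, 2]) + rng.normal(0.0, 0.01, n))

train, test = slice(0, 300), slice(300, None)
model = make_pipeline(StandardScaler(),
                      SVR(kernel="rbf", C=10.0, epsilon=0.005))
model.fit(logs[train], pr[train])
rmse = np.sqrt(np.mean((model.predict(logs[test]) - pr[test]) ** 2))
print(f"test RMSE on synthetic Poisson ratio: {rmse:.4f}")
```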

  20. A Marked Poisson Process Driven Latent Shape Model for 3D Segmentation of Reflectance Confocal Microscopy Image Stacks of Human Skin.

    PubMed

    Ghanta, Sindhu; Jordan, Michael I; Kose, Kivanc; Brooks, Dana H; Rajadhyaksha, Milind; Dy, Jennifer G

    2016-10-05

    Segmenting objects of interest from 3D datasets is a common problem encountered in biological data. Small field of view and intrinsic biological variability combined with optically subtle changes of intensity, resolution, and low contrast in images make the task of segmentation difficult, especially for microscopy of unstained living or freshly excised thick tissues. Incorporating shape information in addition to the appearance of the object of interest can often help improve segmentation performance. However, shapes of objects in tissue can be highly variable, and design of a flexible shape model that encompasses these variations is challenging. To address such complex segmentation problems, we propose a unified probabilistic framework that can incorporate the uncertainty associated with complex shapes, variable appearance, and unknown locations. The driving application which inspired the development of this framework is a biologically important segmentation problem: the task of automatically detecting and segmenting the dermal-epidermal junction (DEJ) in 3D reflectance confocal microscopy (RCM) images of human skin. RCM imaging allows noninvasive observation of cellular, nuclear, and morphological detail. The DEJ is an important morphological feature as it is where disorder, disease, and cancer usually start. Detecting the DEJ is challenging because it is a 2D surface in a 3D volume which has a strong but highly variable number of irregularly spaced and variably shaped "peaks and valleys". In addition, RCM imaging resolution, contrast, and intensity vary with depth. Thus a prior model needs to incorporate the intrinsic structure while allowing variability in essentially all its parameters. We propose a model which can incorporate objects of interest with complex shapes and variable appearance in an unsupervised setting by utilizing domain knowledge to build appropriate priors of the model. Our novel strategy to model this structure combines a spatial Poisson process with
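
    The spatial Poisson process at the core of the proposed prior is straightforward to sample; a minimal sketch with an invented intensity, where the marks stand in for latent shape parameters:

      import numpy as np

      rng = np.random.default_rng(2)

      # Homogeneous Poisson process on a W x H region with intensity lam:
      # point count is Poisson(lam * area); locations are uniform.
      lam, W, H = 0.02, 100.0, 100.0
      n = rng.poisson(lam * W * H)
      xy = rng.uniform([0.0, 0.0], [W, H], size=(n, 2))

      # Attach a mark to each point, e.g. a size parameter for the latent
      # shape centered there (the gamma mark distribution is hypothetical).
      marks = rng.gamma(shape=2.0, scale=3.0, size=n)
      print(n, "points; first location:", xy[0], "mark:", round(marks[0], 2))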

  1. A Marked Poisson Process Driven Latent Shape Model for 3D Segmentation of Reflectance Confocal Microscopy Image Stacks of Human Skin.

    PubMed

    Ghanta, Sindhu; Jordan, Michael I; Kose, Kivanc; Brooks, Dana H; Rajadhyaksha, Milind; Dy, Jennifer G

    2017-01-01

    Segmenting objects of interest from 3D data sets is a common problem encountered in biological data. Small field of view and intrinsic biological variability combined with optically subtle changes of intensity, resolution, and low contrast in images make the task of segmentation difficult, especially for microscopy of unstained living or freshly excised thick tissues. Incorporating shape information in addition to the appearance of the object of interest can often help improve segmentation performance. However, the shapes of objects in tissue can be highly variable and design of a flexible shape model that encompasses these variations is challenging. To address such complex segmentation problems, we propose a unified probabilistic framework that can incorporate the uncertainty associated with complex shapes, variable appearance, and unknown locations. The driving application that inspired the development of this framework is a biologically important segmentation problem: the task of automatically detecting and segmenting the dermal-epidermal junction (DEJ) in 3D reflectance confocal microscopy (RCM) images of human skin. RCM imaging allows noninvasive observation of cellular, nuclear, and morphological detail. The DEJ is an important morphological feature as it is where disorder, disease, and cancer usually start. Detecting the DEJ is challenging, because it is a 2D surface in a 3D volume which has a strong but highly variable number of irregularly spaced and variably shaped "peaks and valleys." In addition, RCM imaging resolution, contrast, and intensity vary with depth. Thus, a prior model needs to incorporate the intrinsic structure while allowing variability in essentially all its parameters. We propose a model which can incorporate objects of interest with complex shapes and variable appearance in an unsupervised setting by utilizing domain knowledge to build appropriate priors of the model. Our novel strategy to model this structure combines a spatial Poisson

  2. How many dinosaur species were there? Fossil bias and true richness estimated using a Poisson sampling model.

    PubMed

    Starrfelt, Jostein; Liow, Lee Hsiang

    2016-04-05

    The fossil record is a rich source of information about biological diversity in the past. However, the fossil record is not only incomplete but also has inherent biases due to geological, physical, chemical and biological factors. Our knowledge of past life is also biased because of differences in academic and amateur interests and sampling efforts. As a result, not all individuals or species that lived in the past are equally likely to be discovered at any point in time or space. To reconstruct temporal dynamics of diversity using the fossil record, biased sampling must be explicitly taken into account. Here, we introduce an approach that uses the variation in the number of times each species is observed in the fossil record to estimate both sampling bias and true richness. We term our technique TRiPS (True Richness estimated using a Poisson Sampling model) and explore its robustness to violation of its assumptions via simulations. We then venture to estimate sampling bias and absolute species richness of dinosaurs in the geological stages of the Mesozoic. Using TRiPS, we estimate that 1936 (1543-2468) species of dinosaurs roamed the Earth during the Mesozoic. We also present improved estimates of species richness trajectories of the three major dinosaur clades: the sauropodomorphs, ornithischians and theropods, casting doubt on the Jurassic-Cretaceous extinction event and demonstrating that all dinosaur groups are subject to considerable sampling bias throughout the Mesozoic.
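
    The core of a TRiPS-style estimate is a zero-truncated Poisson fit: recover the per-species sampling rate from the positive occurrence counts, then divide observed richness by the implied detection probability. A sketch of that idea on toy counts (this follows the logic of the method, not the authors' code):

      import numpy as np
      from scipy.optimize import brentq

      def trips_estimate(counts):
          # MLE of lam for a zero-truncated Poisson solves
          #   lam / (1 - exp(-lam)) = mean of the positive counts.
          xbar = np.mean(counts)
          lam = brentq(lambda l: l / (1.0 - np.exp(-l)) - xbar, 1e-9, 1e3)
          p_detect = 1.0 - np.exp(-lam)   # P(a species is seen at least once)
          return lam, len(counts) / p_detect

      # occurrences per observed species in one geological stage (toy data)
      lam_hat, richness = trips_estimate([1, 1, 2, 1, 3, 1, 1, 2, 5, 1])
      print(f"sampling rate {lam_hat:.2f}, estimated true richness {richness:.1f}")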

  3. Between algorithm and model: different Molecular Surface definitions for the Poisson-Boltzmann based electrostatic characterization of biomolecules in solution.

    PubMed

    Decherchi, Sergio; Colmenares, José; Catalano, Chiara Eva; Spagnuolo, Michela; Alexov, Emil; Rocchia, Walter

    2013-01-01

    The definition of a molecular surface which is physically sound and computationally efficient is a very interesting and long-standing problem in the implicit solvent continuum modeling of biomolecular systems as well as in the molecular graphics field. In this work, two molecular surfaces are evaluated with respect to their suitability for electrostatic computation as alternatives to the widely used Connolly-Richards surface: the blobby surface, an implicit Gaussian atom-centered surface, and the skin surface. As figures of merit, we considered surface differentiability and surface area continuity with respect to atom positions, and the agreement with explicit solvent simulations. Geometric analysis seems to favor the skin over the blobby surface, and points to an unexpected relationship between the non-connectedness of the surface, caused by interstices in the solute volume, and the dependence of the surface area on atomic centers. In order to assess the ability to reproduce explicit solvent results, specific software tools have been developed to enable the use of the skin surface in Poisson-Boltzmann calculations with the DelPhi solver. Results indicate that the skin and Connolly surfaces have a comparable performance from this last point of view.

  4. Poisson-model analysis of the risk of vaccine-associated paralytic poliomyelitis in Japan between 1971 and 2000.

    PubMed

    Hao, Lixin; Toyokawa, Satoshi; Kobayashi, Yasuki

    2008-03-01

    This study estimates the risk of vaccine-associated paralytic poliomyelitis (VAPP) in Japan between 1971 and 2000. We acquired data regarding the number of VAPP cases from the website of the Ministry of Health, Labour and Welfare, and we estimated the number of oral poliovirus vaccine (OPV) doses administered based on the reported immunization data. Risk was calculated as the ratio between the number of VAPP cases and the number of OPV doses administered. Both the Runs test and the Poisson model were used to analyze the occurrence of VAPP. Thirty-three cases of VAPP were recorded in Japan between 1971 and 2000; approximately one case occurred per year. There were no statistically significant changes in temporal trends in the occurrence of VAPP between 1971 and 2000. The overall risk of VAPP, including both recipient and contact VAPP, was one case per 2.0 million OPV doses administered. The risk of recipient VAPP was one per 3.7 million doses, with the first dose posing a much higher risk (one per 2.3 million) than subsequent doses. These data indicate that the occurrence of VAPP is rare, but the risk has remained constant for as long as OPV has been used in Japan.
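
    A rate of this kind, cases per doses administered, comes with an exact (Garwood) Poisson confidence interval. A sketch using the 33 reported cases and roughly 66 million doses, a total inferred from the quoted overall risk of one per 2.0 million rather than reported directly:

      from scipy.stats import chi2

      def exact_poisson_rate_ci(cases, exposure, alpha=0.05):
          # Exact interval for a Poisson rate via chi-square quantiles
          lo = chi2.ppf(alpha / 2, 2 * cases) / 2 / exposure if cases else 0.0
          hi = chi2.ppf(1 - alpha / 2, 2 * (cases + 1)) / 2 / exposure
          return lo, hi

      cases, doses_millions = 33, 66.0
      lo, hi = exact_poisson_rate_ci(cases, doses_millions)
      print(f"{cases / doses_millions:.2f} per million doses "
            f"(95% CI {lo:.2f}-{hi:.2f})")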

  5. How many dinosaur species were there? Fossil bias and true richness estimated using a Poisson sampling model

    PubMed Central

    Starrfelt, Jostein; Liow, Lee Hsiang

    2016-01-01

    The fossil record is a rich source of information about biological diversity in the past. However, the fossil record is not only incomplete but also has inherent biases due to geological, physical, chemical and biological factors. Our knowledge of past life is also biased because of differences in academic and amateur interests and sampling efforts. As a result, not all individuals or species that lived in the past are equally likely to be discovered at any point in time or space. To reconstruct temporal dynamics of diversity using the fossil record, biased sampling must be explicitly taken into account. Here, we introduce an approach that uses the variation in the number of times each species is observed in the fossil record to estimate both sampling bias and true richness. We term our technique TRiPS (True Richness estimated using a Poisson Sampling model) and explore its robustness to violation of its assumptions via simulations. We then venture to estimate sampling bias and absolute species richness of dinosaurs in the geological stages of the Mesozoic. Using TRiPS, we estimate that 1936 (1543–2468) species of dinosaurs roamed the Earth during the Mesozoic. We also present improved estimates of species richness trajectories of the three major dinosaur clades: the sauropodomorphs, ornithischians and theropods, casting doubt on the Jurassic–Cretaceous extinction event and demonstrating that all dinosaur groups are subject to considerable sampling bias throughout the Mesozoic. PMID:26977060

  6. Between algorithm and model: different Molecular Surface definitions for the Poisson-Boltzmann based electrostatic characterization of biomolecules in solution

    PubMed Central

    Decherchi, Sergio; Colmenares, José; Catalano, Chiara Eva; Spagnuolo, Michela; Alexov, Emil; Rocchia, Walter

    2011-01-01

    The definition of a molecular surface which is physically sound and computationally efficient is a very interesting and long-standing problem in the implicit solvent continuum modeling of biomolecular systems as well as in the molecular graphics field. In this work, two molecular surfaces are evaluated with respect to their suitability for electrostatic computation as alternatives to the widely used Connolly-Richards surface: the blobby surface, an implicit Gaussian atom-centered surface, and the skin surface. As figures of merit, we considered surface differentiability and surface area continuity with respect to atom positions, and the agreement with explicit solvent simulations. Geometric analysis seems to favor the skin over the blobby surface, and points to an unexpected relationship between the non-connectedness of the surface, caused by interstices in the solute volume, and the dependence of the surface area on atomic centers. In order to assess the ability to reproduce explicit solvent results, specific software tools have been developed to enable the use of the skin surface in Poisson-Boltzmann calculations with the DelPhi solver. Results indicate that the skin and Connolly surfaces have a comparable performance from this last point of view. PMID:23519863

  7. Three-Dimensional Polymer Constructs Exhibiting a Tunable Negative Poisson's Ratio.

    PubMed

    Fozdar, David Y; Soman, Pranav; Lee, Jin Woo; Han, Li-Hsin; Chen, Shaochen

    2011-07-22

    Young's modulus and Poisson's ratio of a porous polymeric construct (scaffold) quantitatively describe how it supports and transmits external stresses to its surroundings. While Young's modulus is always non-negative and highly tunable in magnitude, Poisson's ratio can, indeed, take on negative values, despite the fact that it is non-negative for virtually every naturally occurring and artificial material. In some applications, a construct having a tunable negative Poisson's ratio (an auxetic construct) may be more suitable for supporting the external forces imposed upon it by its environment. Here, three-dimensional polyethylene glycol scaffolds with tunable negative Poisson's ratios are fabricated. Digital micromirror device projection printing (DMD-PP) is used to print single-layer constructs composed of cellular structures (pores) with special geometries, arrangements, and deformation mechanisms. The presence of the unit-cellular structures tunes the magnitude and polarity (positive or negative) of Poisson's ratio. Multilayer constructs are fabricated with DMD-PP by stacking the single-layer constructs with alternating layers of vertical connecting posts. The Poisson's ratios of the single- and multilayer constructs are determined from strain experiments, which show (1) that the Poisson's ratios of the constructs are accurately predicted by analytical deformation models and (2) that no slipping occurs between layers in the multilayer constructs and the addition of new layers does not affect Poisson's ratio.

  8. A hierarchical Binomial-Poisson model for the analysis of a crossover design for correlated binary data when the number of trials is dose-dependent.

    PubMed

    Shkedy, Ziv; Molenberghs, Geert; Van Craenendonck, Hansfried; Steckler, Thomas; Bijnens, Luc

    2005-01-01

    The differential reinforcement of low-rate 72-second schedule (DRL-72) is a standard behavioral test procedure for screening potential antidepressant compounds. The data analyzed in this article are binary outcomes from a crossover design for such an experiment. Recently, Shkedy et al. (2004) proposed estimating the treatment effect using either generalized linear mixed models (GLMM) or generalized estimating equations (GEE) for clustered binary data. The models proposed by Shkedy et al. (2004) assume the number of responses at each binomial observation is fixed. This might be an unrealistic assumption for a behavioral experiment such as the DRL-72, because the number of responses (the number of trials in each binomial observation) is expected to be influenced by the administered dose level. In this article, we extend the model proposed by Shkedy et al. (2004) and propose a hierarchical Bayesian binomial-Poisson model, which assumes the number of responses to be a Poisson random variable. The results obtained from the GLMM and the binomial-Poisson models are comparable. However, the latter model allows estimating the correlation between the number of successes and the number of trials.
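
    The extension described here, with the number of trials Poisson rather than fixed, can be written as a two-level generative model; the dose-effect forms and parameter values below are invented for illustration:

      import numpy as np

      rng = np.random.default_rng(3)

      def simulate_drl72(dose, n_subjects=20, base_mu=40.0, base_p=0.15,
                         slope_mu=-0.3, slope_p=0.4):
          # Trials per subject: Poisson with a dose-dependent mean;
          # successes: binomial with a dose-dependent probability.
          mu = base_mu * np.exp(slope_mu * dose)
          p = 1.0 / (1.0 + np.exp(-(np.log(base_p / (1 - base_p))
                                    + slope_p * dose)))
          n_trials = rng.poisson(mu, size=n_subjects)
          successes = rng.binomial(n_trials, p)
          return n_trials, successes

      for dose in (0.0, 1.0, 2.0):
          n, s = simulate_drl72(dose)
          print(dose, n.mean(), (s / np.maximum(n, 1)).mean().round(3))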

  9. Accuracy assessment of the linear Poisson-Boltzmann equation and reparametrization of the OBC generalized Born model for nucleic acids and nucleic acid-protein complexes.

    PubMed

    Fogolari, Federico; Corazza, Alessandra; Esposito, Gennaro

    2015-04-05

    The generalized Born model in the Onufriev, Bashford, and Case (Onufriev et al., Proteins: Struct Funct Genet 2004, 55, 383) implementation has emerged as one of the best compromises between accuracy and speed of computation. For simulations of nucleic acids, however, a number of issues should be addressed: (1) the generalized Born model is based on a linear model, and the linearization of the reference Poisson-Boltzmann equation may be questioned for highly charged systems such as nucleic acids; (2) although much attention has been given to potentials, solvation forces could be much less sensitive to linearization than the potentials; and (3) the accuracy of the Onufriev-Bashford-Case (OBC) model for nucleic acids depends on fine tuning of parameters. Here, we show that the linearization of the Poisson-Boltzmann equation has mild effects on computed forces, and that with an optimal choice of the OBC model parameters, solvation forces, essential for molecular dynamics simulations, agree well with those computed using the reference Poisson-Boltzmann model.

  10. Poisson's ratio of individual metal nanowires.

    PubMed

    McCarthy, Eoin K; Bellew, Allen T; Sader, John E; Boland, John J

    2014-07-07

    The measurement of Poisson's ratio of nanomaterials is extremely challenging. Here we report a lateral atomic force microscope experimental method to electromechanically measure the Poisson's ratio and gauge factor of individual nanowires. Under elastic loading conditions we monitor the four-point resistance of individual metallic nanowires as a function of strain and different levels of electrical stress. We determine the gauge factor of individual wires and directly measure the Poisson's ratio using a model that is independently validated for macroscopic wires. For macroscopic wires and nickel nanowires we find Poisson's ratios that closely correspond to bulk values, whereas for silver nanowires significant deviations from the bulk silver value are observed. Moreover, repeated measurements on individual silver nanowires at different levels of mechanical and electrical stress yield a small spread in Poisson ratio, with a range of mean values for different wires, all of which are distinct from the bulk value.

  11. Adapting Poisson-Boltzmann to the self-consistent mean field theory: Application to protein side-chain modeling

    NASA Astrophysics Data System (ADS)

    Koehl, Patrice; Orland, Henri; Delarue, Marc

    2011-08-01

    We present an extension of the self-consistent mean field theory for protein side-chain modeling in which solvation effects are included based on the Poisson-Boltzmann (PB) theory. In this approach, the protein is represented with multiple copies of its side chains. Each copy is assigned a weight that is refined iteratively, based on the mean field energy generated by the rest of the protein, until self-consistency is reached. At each cycle, the variational free energy of the multi-copy system is computed; this free energy includes the internal energy of the protein, which accounts for vdW and electrostatic interactions, and a solvation free energy term that is computed using the PB equation. The method converges in only a few cycles and takes only minutes of central processing unit time on a commodity personal computer. The predicted conformation of each residue is then set to be its copy with the highest weight after convergence. We have tested this method on a database of one hundred highly refined NMR structures to circumvent the problems of crystal packing inherent to x-ray structures. The use of the PB-derived solvation free energy significantly improves prediction accuracy for surface side chains. For example, the prediction accuracies for χ1 for surface cysteine, serine, and threonine residues improve from 68%, 35%, and 43% to 80%, 53%, and 57%, respectively. A comparison with other side-chain prediction algorithms demonstrates that our approach is consistently better in predicting the conformations of exposed side chains.
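
    The iterative reweighting at the heart of this scheme can be sketched with toy pair energies standing in for the vdW, electrostatic, and PB solvation terms (all sizes and energies below are invented):

      import numpy as np

      rng = np.random.default_rng(7)
      R, K = 10, 4                        # residues, side-chain copies each
      E = rng.normal(size=(R, K, R, K))   # toy copy-copy pair energies
      E = 0.5 * (E + E.transpose(2, 3, 0, 1))   # symmetrize
      for i in range(R):
          E[i, :, i, :] = 0.0             # no self-interaction

      w = np.full((R, K), 1.0 / K)        # uniform starting weights
      beta = 1.0
      for _ in range(200):
          # mean-field energy of each copy given the other residues' weights
          e_mf = np.einsum('iajb,jb->ia', E, w)
          w_new = np.exp(-beta * e_mf)
          w_new /= w_new.sum(axis=1, keepdims=True)
          if np.abs(w_new - w).max() < 1e-10:
              break                       # self-consistency reached
          w = w_new

      print("highest-weight copy per residue:", w.argmax(axis=1))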

  12. Adapting Poisson-Boltzmann to the self-consistent mean field theory: application to protein side-chain modeling.

    PubMed

    Koehl, Patrice; Orland, Henri; Delarue, Marc

    2011-08-07

    We present an extension of the self-consistent mean field theory for protein side-chain modeling in which solvation effects are included based on the Poisson-Boltzmann (PB) theory. In this approach, the protein is represented with multiple copies of its side chains. Each copy is assigned a weight that is refined iteratively, based on the mean field energy generated by the rest of the protein, until self-consistency is reached. At each cycle, the variational free energy of the multi-copy system is computed; this free energy includes the internal energy of the protein, which accounts for vdW and electrostatic interactions, and a solvation free energy term that is computed using the PB equation. The method converges in only a few cycles and takes only minutes of central processing unit time on a commodity personal computer. The predicted conformation of each residue is then set to be its copy with the highest weight after convergence. We have tested this method on a database of one hundred highly refined NMR structures to circumvent the problems of crystal packing inherent to x-ray structures. The use of the PB-derived solvation free energy significantly improves prediction accuracy for surface side chains. For example, the prediction accuracies for χ1 for surface cysteine, serine, and threonine residues improve from 68%, 35%, and 43% to 80%, 53%, and 57%, respectively. A comparison with other side-chain prediction algorithms demonstrates that our approach is consistently better in predicting the conformations of exposed side chains.

  13. Adapting Poisson-Boltzmann to the self-consistent mean field theory: Application to protein side-chain modeling

    PubMed Central

    Koehl, Patrice; Orland, Henri; Delarue, Marc

    2011-01-01

    We present an extension of the self-consistent mean field theory for protein side-chain modeling in which solvation effects are included based on the Poisson-Boltzmann (PB) theory. In this approach, the protein is represented with multiple copies of its side chains. Each copy is assigned a weight that is refined iteratively, based on the mean field energy generated by the rest of the protein, until self-consistency is reached. At each cycle, the variational free energy of the multi-copy system is computed; this free energy includes the internal energy of the protein, which accounts for vdW and electrostatic interactions, and a solvation free energy term that is computed using the PB equation. The method converges in only a few cycles and takes only minutes of central processing unit time on a commodity personal computer. The predicted conformation of each residue is then set to be its copy with the highest weight after convergence. We have tested this method on a database of one hundred highly refined NMR structures to circumvent the problems of crystal packing inherent to x-ray structures. The use of the PB-derived solvation free energy significantly improves prediction accuracy for surface side chains. For example, the prediction accuracies for χ1 for surface cysteine, serine, and threonine residues improve from 68%, 35%, and 43% to 80%, 53%, and 57%, respectively. A comparison with other side-chain prediction algorithms demonstrates that our approach is consistently better in predicting the conformations of exposed side chains. PMID:21823735

  14. Vlasov-Maxwell and Vlasov-Poisson equations as models of a one-dimensional electron plasma

    NASA Technical Reports Server (NTRS)

    Klimas, A. J.; Cooper, J.

    1983-01-01

    The Vlasov-Maxwell and Vlasov-Poisson systems of equations for a one-dimensional electron plasma are defined and discussed. A method for transforming a solution of one system which is periodic over a bounded or unbounded spatial interval to a similar solution of the other is constructed.

  15. Cumulative Poisson Distribution Program

    NASA Technical Reports Server (NTRS)

    Bowerman, Paul N.; Scheuer, Ernest M.; Nolty, Robert

    1990-01-01

    Overflow and underflow in sums prevented. Cumulative Poisson Distribution Program, CUMPOIS, one of two computer programs that make calculations involving cumulative Poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), used independently of one another. CUMPOIS determines cumulative Poisson distribution, used to evaluate cumulative distribution function (cdf) for gamma distributions with integer shape parameters and cdf for chi-square (χ2) distributions with even degrees of freedom. Used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. Written in C.
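
    The gamma and chi-square connections mentioned here are standard identities and easy to check numerically with SciPy:

      from scipy.stats import chi2, gamma, poisson

      k, mu = 7, 4.2   # arbitrary example values

      p1 = poisson.cdf(k, mu)              # P(X <= k), X ~ Poisson(mu)
      p2 = gamma.sf(mu, k + 1)             # gamma tail, integer shape k+1
      p3 = chi2.sf(2 * mu, 2 * (k + 1))    # chi-square tail, 2(k+1) d.o.f.
      print(p1, p2, p3)                    # all three agree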

  16. Effect of Nutritional Habits on Dental Caries in Permanent Dentition among Schoolchildren Aged 10–12 Years: A Zero-Inflated Generalized Poisson Regression Model Approach

    PubMed Central

    ALMASI, Afshin; RAHIMIFOROUSHANI, Abbas; ESHRAGHIAN, Mohammad Reza; MOHAMMAD, Kazem; PASDAR, Yahya; TARRAHI, Mohammad Javad; MOGHIMBEIGI, Abbas; AHMADI JOUYBARI, Touraj

    2016-01-01

    Background: The aim of this study was to assess the associations between nutrition and dental caries in permanent dentition among schoolchildren. Methods: A cross-sectional survey was undertaken on 698 schoolchildren aged 10 to 12 yr from a random sample of primary schools in Kermanshah, western Iran, in 2014. The study was based on data obtained from a questionnaire containing information on nutritional habits and the outcome of the decayed/missing/filled teeth (DMFT) index. The association between predictors and dental caries was modeled using the zero-inflated generalized Poisson (ZIGP) regression model. Results: Fourteen percent of the children were caries-free. The model showed that in female children the odds of being in the caries-susceptible subgroup were 1.23 (95% CI: 1.08–1.51) times higher than in boys (P=0.041). Additionally, the mean caries count in children who consumed fizzy soft beverages and sweet biscuits more than once daily was 1.41 (95% CI: 1.19–1.63) and 1.27 (95% CI: 1.18–1.37) times that of children in the category of less than 3 times a week or never, respectively. Conclusions: Girls were at a higher risk of caries than boys. Since our study showed that nutritional status may have a significant effect on caries in permanent teeth, we recommend that health promotion activities in schools emphasize healthful eating practices, especially limiting sugar-containing beverages to only occasional consumption between meals. PMID:27141498
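
    Recent versions of statsmodels expose a zero-inflated generalized Poisson model, so an analysis of this general shape can be sketched as follows (the toy data and covariates are placeholders, not the study's design):

      import numpy as np
      import statsmodels.api as sm
      from statsmodels.discrete.count_model import (
          ZeroInflatedGeneralizedPoisson)

      rng = np.random.default_rng(4)
      n = 698
      # stand-ins for sex and daily sugary-snack consumption indicators
      X = sm.add_constant(np.column_stack([rng.integers(0, 2, n),
                                           rng.integers(0, 2, n)]).astype(float))
      # toy DMFT-like counts with ~14% structural (caries-free) zeros
      y = rng.poisson(2.0, n) * rng.binomial(1, 0.86, n)

      model = ZeroInflatedGeneralizedPoisson(y, X, exog_infl=X, p=1)
      result = model.fit(method="bfgs", maxiter=500, disp=False)
      print(result.summary())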

  17. Tests of continuum theories as models of ion channels. I. Poisson-Boltzmann theory versus Brownian dynamics.

    PubMed Central

    Moy, G; Corry, B; Kuyucak, S; Chung, S H

    2000-01-01

    Continuum theories of electrolytes are widely used to describe physical processes in various biological systems. Although these are well-established theories in macroscopic situations, it is not clear from the outset that they should work in small systems whose dimensions are comparable to or smaller than the Debye length. Here, we test the validity of the mean-field approximation in Poisson-Boltzmann theory by comparing its predictions with those of Brownian dynamics simulations. For this purpose we use spherical and cylindrical boundaries and a catenary shape similar to that of the acetylcholine receptor channel. The interior region filled with electrolyte is assumed to have a high dielectric constant, and the exterior region, representing the protein, a low one. Comparisons of the force on a test ion obtained with the two methods show that the shielding effect due to counterions is overestimated in Poisson-Boltzmann theory when the ion is within a Debye length of the boundary. As the ion gets closer to the boundary, the discrepancy in force grows rapidly. The implication for membrane channels, whose radii are typically smaller than the Debye length, is that Poisson-Boltzmann theory cannot be used to obtain reliable estimates of the electrostatic potential energy and force on an ion in the channel environment. PMID:10777732

  18. A Poisson model for identifying characteristic size effects in frequency data: Application to frequency-size distributions for global earthquakes, "starquakes", and fault lengths

    NASA Astrophysics Data System (ADS)

    Leonard, Thomas; Papasouliotis, Orestis; Main, Ian G.

    2001-01-01

    The standard Gaussian distribution for incremental frequency data requires a constant variance which is independent of the mean. We develop a more general and appropriate method based on the Poisson distribution, which assumes different unknown variances for the frequencies, equal to the means. We explicitly include "empty bins", and our method is quite insensitive to the choice of bin width. We develop a maximum likelihood technique that minimizes bias in the curve fits, and penalizes additional free parameters by objective information criteria. Various data sets are used to test three different physical models that have been suggested for the density distribution: the power law; the double power law; and the "gamma" distribution. For the CMT catalog of global earthquakes, two peaks in the posterior distribution are observed at moment magnitudes m* = 6.4 and 6.9, implying a bimodal distribution of seismogenic depth at around 15 and 30 km, respectively. A similar break at a characteristic length of 60 km or so is observed in moment-length data, but this does not outperform the simpler power law model. For the earthquake frequency-moment data, the gamma distribution provides the best overall fit, implying a finite correlation length and a system near but below the critical point. In contrast, data from soft gamma ray repeaters show that the power law is the best fit, implying infinite correlation length and a system that is precisely critical. For the fault break data, a significant break of slope is found instead at a characteristic scale of 44 km, implying a typical seismogenic thickness of up to 22 km or so in west central Nevada. The exponent changes from 1.5 to -2.1, too large a change to be accounted for by changes in sampling of an ideal, isotropic fractal set.
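
    The key point, a Poisson likelihood per bin with empty bins retained, amounts to minimizing the sum of (lambda_i - n_i log lambda_i) over all bins; a sketch fitting a single power law to simulated binned counts:

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(5)
      edges = np.arange(5.0, 8.05, 0.1)            # magnitude bin edges
      mids = 0.5 * (edges[:-1] + edges[1:])

      def expected(params):
          a, b = params
          return 10.0 ** (a - b * mids)            # power-law rate per bin

      counts = rng.poisson(expected((6.0, 1.0)))   # toy catalog, zeros kept

      def neg_log_lik(params):
          lam = expected(params)
          # constant log(n_i!) terms dropped; empty bins still add +lam
          return np.sum(lam - counts * np.log(lam))

      fit = minimize(neg_log_lik, x0=(5.0, 0.8), method="Nelder-Mead")
      print("a, b =", fit.x.round(3))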

  19. Poisson Regression Analysis of Illness and Injury Surveillance Data

    SciTech Connect

    Frome E.L., Watkins J.P., Ellis E.D.

    2012-12-12

    The Department of Energy (DOE) uses illness and injury surveillance to monitor morbidity and assess the overall health of the work force. Data collected from each participating site include health events and a roster file with demographic information. The source data files are maintained in a relational data base, and are used to obtain stratified tables of health event counts and person time at risk that serve as the starting point for Poisson regression analysis. The explanatory variables that define these tables are age, gender, occupational group, and time. Typical response variables of interest are the number of absences due to illness or injury, i.e., the response variable is a count. Poisson regression methods are used to describe the effect of the explanatory variables on the health event rates using a log-linear main effects model. Results of fitting the main effects model are summarized in a tabular and graphical form and interpretation of model parameters is provided. An analysis of deviance table is used to evaluate the importance of each of the explanatory variables on the event rate of interest and to determine if interaction terms should be considered in the analysis. Although Poisson regression methods are widely used in the analysis of count data, there are situations in which over-dispersion occurs. This could be due to lack-of-fit of the regression model, extra-Poisson variation, or both. A score test statistic and regression diagnostics are used to identify over-dispersion. A quasi-likelihood method of moments procedure is used to evaluate and adjust for extra-Poisson variation when necessary. Two examples are presented using respiratory disease absence rates at two DOE sites to illustrate the methods and interpretation of the results. In the first example the Poisson main effects model is adequate. In the second example the score test indicates considerable over-dispersion and a more detailed analysis attributes the over-dispersion to extra-Poisson
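
    The main-effects analysis described is a standard Poisson GLM with log person-time as an offset. A minimal statsmodels sketch on invented stratified data, including a crude check for extra-Poisson variation:

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      # toy stratified table: absence counts and person-years at risk
      df = pd.DataFrame({
          "events": [12, 30, 7, 21, 15, 40],
          "pyears": [1000.0, 2400.0, 600.0, 1500.0, 1100.0, 2900.0],
          "age":    ["<40", "<40", "40+", "40+", "<40", "40+"],
          "sex":    ["F", "M", "F", "M", "M", "F"],
      })

      fit = smf.glm("events ~ age + sex", data=df,
                    family=sm.families.Poisson(),
                    offset=np.log(df["pyears"])).fit()
      print(fit.summary())
      # Pearson chi2 / residual d.o.f. near 1 suggests no over-dispersion
      print("dispersion:", fit.pearson_chi2 / fit.df_resid)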

  20. New generalized poisson mixture model for bimodal count data with drug effect: An application to rodent brief‐access taste aversion experiments

    PubMed Central

    Soto, J; Orlu Gul, M; Cortina‐Borja, M; Tuleu, C; Standing, JF

    2016-01-01

    Pharmacodynamic (PD) count data can exhibit bimodality and nonequidispersion, complicating the inclusion of drug effect. The purpose of this study was to explore four different mixture distribution models for bimodal count data by including both drug effect and distribution truncation. An example dataset, which exhibited a bimodal pattern, was from rodent brief-access taste aversion (BATA) experiments to assess the bitterness of ascending concentrations of an aversive tasting drug. The two generalized Poisson mixture models performed best and were flexible enough to explain both under- and overdispersion. A sigmoid maximum effect (Emax) model with logistic transformation was introduced to link the drug effect to the data partition within each distribution. The predicted density-histogram plot is suggested as a model evaluation tool due to its capability to directly compare the model-predicted density with the histogram of the raw data. The modeling approach presented here could form a useful strategy for modeling similar count data types. PMID:27472892

  1. Multiphase semiclassical approximation of an electron in a one-dimensional crystalline lattice - III. From ab initio models to WKB for Schroedinger-Poisson

    SciTech Connect

    Gosse, Laurent

    2006-01-01

    This work is concerned with the semiclassical approximation of the Schroedinger-Poisson equation modeling ballistic transport in a 1D periodic potential by means of WKB techniques. It is derived by considering the mean-field limit of an N-body quantum problem; K-multivalued solutions are then adapted to the treatment of this weakly nonlinear system obtained after homogenization, without taking into account Pauli's exclusion principle. Numerical experiments display the behaviour of self-consistent wave packets and screening effects.

  2. Generalized Poisson distribution: the property of mixture of Poisson and comparison with negative binomial distribution.

    PubMed

    Joe, Harry; Zhu, Rong

    2005-04-01

    We prove that the generalized Poisson distribution GP(θ, η) (η ≥ 0) is a mixture of Poisson distributions; this is a new property for a distribution that is the topic of the book by Consul (1989). Because we find that the fits to count data of the generalized Poisson and negative binomial distributions are often similar, we compare the probability mass functions and skewnesses of the generalized Poisson and negative binomial distributions with the first two moments fixed, in order to understand their differences. They have slight differences in many situations, but their zero-inflated distributions, with masses at zero, means, and variances fixed, can differ more. These probabilistic comparisons are helpful in selecting a better-fitting distribution for modelling count data with long right tails. Through a real example of count data with a large zero fraction, we illustrate how the generalized Poisson and negative binomial distributions, as well as their zero-inflated distributions, can be discriminated.
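
    The comparison can be reproduced numerically from the GP(θ, η) mass function and a negative binomial matched on the first two moments, using the standard GP moment formulas mean = θ/(1−η) and variance = θ/(1−η)³:

      import numpy as np
      from scipy.special import gammaln
      from scipy.stats import nbinom

      def gp_pmf(x, theta, eta):
          # generalized Poisson pmf, valid here for 0 <= eta < 1
          x = np.asarray(x)
          return np.exp(np.log(theta) + (x - 1) * np.log(theta + eta * x)
                        - (theta + eta * x) - gammaln(x + 1))

      theta, eta = 3.0, 0.3
      mean = theta / (1 - eta)
      var = theta / (1 - eta) ** 3           # overdispersed since eta > 0

      # negative binomial with the same first two moments
      n, p = mean ** 2 / (var - mean), mean / var
      x = np.arange(15)
      print(np.round(gp_pmf(x, theta, eta), 4))
      print(np.round(nbinom.pmf(x, n, p), 4))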

  3. Statistical Tests of the PTHA Poisson Assumption for Submarine Landslides

    NASA Astrophysics Data System (ADS)

    Geist, E. L.; Chaytor, J. D.; Parsons, T.; Ten Brink, U. S.

    2012-12-01

    We demonstrate that a sequence of dated mass transport deposits (MTDs) can provide information to statistically test whether or not submarine landslides associated with these deposits conform to a Poisson model of occurrence. Probabilistic tsunami hazard analysis (PTHA) most often assumes Poissonian occurrence for all sources, with an exponential distribution of return times. Using dates that define the bounds of individual MTDs, we first describe likelihood and Monte Carlo methods of parameter estimation for a suite of candidate occurrence models (Poisson, lognormal, gamma, Brownian Passage Time). In addition to age-dating uncertainty, both methods incorporate uncertainty caused by the open time intervals: i.e., before the first event and from the last event to the present. Accounting for these open intervals is critical when there is a small number of observed events. The optimal occurrence model is selected according to both the Akaike Information Criterion (AIC) and Akaike's Bayesian Information Criterion (ABIC). In addition, the likelihood ratio test can be performed on occurrence models from the same family: e.g., the gamma model relative to the exponential model of return time distribution. Parameter estimation, model selection, and hypothesis testing are performed on data from two IODP holes in the northern Gulf of Mexico that penetrated a total of 14 MTDs, some of which are correlated between the two holes. Each of these events has been assigned an age based on microfossil zonations and magnetostratigraphic datums. Results from these sites indicate that the Poisson assumption is likely valid. However, parameter estimation results using the likelihood method for one of the sites suggest that the events may have occurred quasi-periodically. Methods developed in this study provide tools with which one can determine both the rate of occurrence and the statistical validity of the Poisson assumption when submarine landslides are included in PTHA.
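
    Setting aside the open intervals and dating uncertainty that the study handles carefully, the basic model-selection step, exponential versus gamma return times compared by AIC and a likelihood-ratio test, looks like this on toy inter-event times:

      import numpy as np
      from scipy.stats import chi2, expon, gamma

      rng = np.random.default_rng(6)
      waits = rng.gamma(shape=2.5, scale=8.0, size=13)   # toy waiting times

      ll_exp = expon.logpdf(waits, scale=waits.mean()).sum()  # MLE scale
      a, loc, sc = gamma.fit(waits, floc=0.0)
      ll_gam = gamma.logpdf(waits, a, loc, sc).sum()

      aic = lambda ll, k: 2 * k - 2 * ll
      print("AIC exponential:", round(aic(ll_exp, 1), 2))
      print("AIC gamma      :", round(aic(ll_gam, 2), 2))

      # exponential is the gamma model with shape fixed at 1
      lr = 2 * (ll_gam - ll_exp)
      print("LRT p-value:", chi2.sf(lr, df=1))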

  4. Network Reconstruction Using Nonparametric Additive ODE Models

    PubMed Central

    Henderson, James; Michailidis, George

    2014-01-01

    Network representations of biological systems are widespread and reconstructing unknown networks from data is a focal problem for computational biologists. For example, the series of biochemical reactions in a metabolic pathway can be represented as a network, with nodes corresponding to metabolites and edges linking reactants to products. In a different context, regulatory relationships among genes are commonly represented as directed networks with edges pointing from influential genes to their targets. Reconstructing such networks from data is a challenging problem receiving much attention in the literature. There is a particular need for approaches tailored to time-series data and not reliant on direct intervention experiments, as the former are often more readily available. In this paper, we introduce an approach to reconstructing directed networks based on dynamic systems models. Our approach generalizes commonly used ODE models based on linear or nonlinear dynamics by extending the functional class for the functions involved from parametric to nonparametric models. Concomitantly we limit the complexity by imposing an additive structure on the estimated slope functions. Thus the submodel associated with each node is a sum of univariate functions. These univariate component functions form the basis for a novel coupling metric that we define in order to quantify the strength of proposed relationships and hence rank potential edges. We show the utility of the method by reconstructing networks using simulated data from computational models for the glycolytic pathway of Lactococcus lactis and a gene network regulating the pluripotency of mouse embryonic stem cells. For purposes of comparison, we also assess reconstruction performance using gene networks from the DREAM challenges. We compare our method to those that similarly rely on dynamic systems models and use the results to attempt to disentangle the distinct roles of linearity, sparsity, and derivative

  5. Computational Process Modeling for Additive Manufacturing (OSU)

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2015-01-01

    Powder-Bed Additive Manufacturing (AM) through Direct Metal Laser Sintering (DMLS) or Selective Laser Melting (SLM) is being used by NASA and the Aerospace industry to "print" parts that traditionally are very complex, high cost, or long schedule lead items. The process spreads a thin layer of metal powder over a build platform, then melts the powder in a series of welds in a desired shape. The next layer of powder is applied, and the process is repeated until, layer by layer, a very complex part can be built. This reduces cost and schedule by eliminating very complex tooling and processes traditionally used in aerospace component manufacturing. To use the process to print end-use items, NASA seeks to understand SLM material well enough to develop a method of qualifying parts for space flight operation. Traditionally, a new material process takes many years and high investment to generate statistical databases and experiential knowledge, but computational modeling can truncate the schedule and cost: many experiments can be run quickly in a model, which would take years and a high material cost to run empirically. This project seeks to optimize material build parameters with reduced time and cost through modeling.

  6. CREATION OF THE MODEL ADDITIONAL PROTOCOL

    SciTech Connect

    Houck, F.; Rosenthal, M.; Wulf, N.

    2010-05-25

    In 1991, the international nuclear nonproliferation community was dismayed to discover that the implementation of safeguards by the International Atomic Energy Agency (IAEA) under its NPT INFCIRC/153 safeguards agreement with Iraq had failed to detect Iraq's nuclear weapon program. It was now clear that ensuring that states were fulfilling their obligations under the NPT would require not just detecting diversion but also the ability to detect undeclared materials and activities. To achieve this, the IAEA initiated what would turn out to be a five-year effort to reappraise the NPT safeguards system. The effort engaged the IAEA and its Member States and led to agreement in 1997 on a new safeguards agreement, the Model Protocol Additional to the Agreement(s) between States and the International Atomic Energy Agency for the Application of Safeguards. The Model Protocol makes explicit that one IAEA goal is to provide assurance of the absence of undeclared nuclear material and activities. The Model Protocol requires an expanded declaration that identifies a State's nuclear potential, empowers the IAEA to raise questions about the correctness and completeness of the State's declaration, and, if needed, allows IAEA access to locations. The information required and the locations available for access are much broader than those provided for under INFCIRC/153. The negotiation was completed in quite a short time because it started with a relatively complete draft of an agreement prepared by the IAEA Secretariat. This paper describes how the Model Protocol was constructed and reviews key decisions that were made both during the five-year period and in the actual negotiation.

  7. Poisson Structures:. Towards a Classification

    NASA Astrophysics Data System (ADS)

    Grabowski, J.; Marmo, G.; Perelomov, A. M.

    In the present note we give an explicit description of a certain class of Poisson structures. The methods lead to a classification of Poisson structures in low dimensions and suggest a possible approach for higher dimensions.

  8. Testing deviation for a set of serial dilution most probable numbers from a Poisson-binomial model.

    PubMed

    Blodgett, Robert J

    2006-01-01

    A serial dilution experiment estimates the microbial concentration in a broth by inoculating several sets of tubes with various amounts of the broth. The estimation uses the Poisson distribution and the number of tubes in each of these sets that show growth. Several factors, such as interfering microbes, toxins, or disaggregation of adhering microbes, may distort the results of a serial dilution experiment. A mild enough distortion may not raise suspicion with a single outcome. The test introduced here judges whether the entire set of serial dilution outcomes appears unusual. The test forms lists of the possible outcomes: the set of outcomes is declared unusual if any occurrence of an observed outcome is on the first list, or more than one is on the first or second list, etc. A similar test can apply when there are only a finite number of possible outcomes, each outcome has a calculable probability, and few outcomes have tied probabilities.
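
    The underlying Poisson-binomial likelihood, with P(growth) = 1 - exp(-c*v) for a tube receiving volume v of broth at concentration c, yields the usual most probable number by solving the score equation; a sketch for a single 3-tube design with toy outcomes:

      import numpy as np
      from scipy.optimize import brentq

      # per dilution: broth volume (mL), tubes inoculated, tubes showing growth
      vols     = np.array([0.1, 0.01, 0.001])
      n_tubes  = np.array([3, 3, 3])
      positive = np.array([3, 1, 0])

      def score(c):
          # derivative of the log-likelihood in c; its root is the MPN
          p = 1.0 - np.exp(-c * vols)
          return np.sum(positive * vols * np.exp(-c * vols) / p
                        - (n_tubes - positive) * vols)

      mpn = brentq(score, 1e-3, 1e6)
      print(f"MPN estimate: {mpn:.0f} organisms/mL")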

  9. On removal of charge singularity in Poisson-Boltzmann equation.

    PubMed

    Cai, Qin; Wang, Jun; Zhao, Hong-Kai; Luo, Ray

    2009-04-14

    The Poisson-Boltzmann theory has become widely accepted in modeling electrostatic solvation interactions in biomolecular calculations. However the standard practice of atomic point charges in molecular mechanics force fields introduces singularity into the Poisson-Boltzmann equation. The finite-difference/finite-volume discretization approach to the Poisson-Boltzmann equation alleviates the numerical difficulty associated with the charge singularity but introduces discretization error into the electrostatic potential. Decomposition of the electrostatic potential has been explored to remove the charge singularity explicitly to achieve higher numerical accuracy in the solution of the electrostatic potential. In this study, we propose an efficient method to overcome the charge singularity problem. In our framework, two separate equations for two different potentials in two different regions are solved simultaneously, i.e., the reaction field potential in the solute region and the total potential in the solvent region. The proposed method can be readily implemented with typical finite-difference Poisson-Boltzmann solvers and return the singularity-free reaction field potential with a single run. Test runs on 42 small molecules and 4 large proteins show a very high agreement between the reaction field energies computed by the proposed method and those by the classical finite-difference Poisson-Boltzmann method. It is also interesting to note that the proposed method converges faster than the classical method, though additional time is needed to compute Coulombic potential on the dielectric boundary. The higher precision, accuracy, and efficiency of the proposed method will allow for more robust electrostatic calculations in molecular mechanics simulations of complex biomolecular systems.

  10. Finite element model predictions of static deformation from dislocation sources in a subduction zone: Sensitivities to homogeneous, isotropic, Poisson-solid, and half-space assumptions

    USGS Publications Warehouse

    Masterlark, Timothy

    2003-01-01

    Dislocation models can simulate static deformation caused by slip along a fault. These models usually take the form of a dislocation embedded in a homogeneous, isotropic, Poisson-solid half-space (HIPSHS). However, the widely accepted HIPSHS assumptions poorly approximate subduction zone systems of converging oceanic and continental crust. This study uses three-dimensional finite element models (FEMs) that allow for any combination (including none) of the HIPSHS assumptions to compute synthetic Green's functions for displacement. Using the 1995 Mw = 8.0 Jalisco-Colima, Mexico, subduction zone earthquake and associated measurements from a nearby GPS array as an example, FEM-generated synthetic Green's functions are combined with standard linear inverse methods to estimate dislocation distributions along the subduction interface. Loading a forward HIPSHS model with dislocation distributions estimated from FEMs that sequentially relax the HIPSHS assumptions yields the sensitivity of predicted displacements to each of the HIPSHS assumptions. For the subduction zone models tested and the specific field situation considered, sensitivities to the individual Poisson-solid, isotropy, and homogeneity assumptions can be substantially greater than GPS measurement uncertainties. Forward modeling quantifies stress coupling between the Mw = 8.0 earthquake and a nearby Mw = 6.3 earthquake that occurred 63 days later. Coulomb stress changes predicted from static HIPSHS models cannot account for the 63-day lag time between events. Alternatively, an FEM that includes a poroelastic oceanic crust, which allows for postseismic pore fluid pressure recovery, can account for the lag time. The pore fluid pressure recovery rate puts an upper limit of 10⁻¹⁷ m² on the bulk permeability of the oceanic crust. Copyright 2003 by the American Geophysical Union.

  11. Comparison of Multivariate Poisson lognormal spatial and temporal crash models to identify hot spots of intersections based on crash types.

    PubMed

    Cheng, Wen; Gill, Gurdiljot Singh; Dasu, Ravi; Xie, Meiquan; Jia, Xudong; Zhou, Jiao

    2017-02-01

    Most studies focus on general crashes or total crash counts, with considerably less research dedicated to different crash types. This study employs the systemic approach for detection of hotspots and comprehensively cross-validates five multivariate crash-type-based HSID methods which incorporate spatial and temporal random effects. It is anticipated that comparison of the crash estimation results of the five models will identify the impact of varied random effects on the HSID. Data over a ten-year period (2003-2012) were selected for analysis of a total of 137 intersections in the City of Corona, California. The crash types collected in this study include: rear-end, head-on, side-swipe, broad-side, hit object, and others. Statistically significant correlations among crash outcomes for the heterogeneity error term were observed, which clearly demonstrated their multivariate nature. Additionally, the spatial random effects revealed the correlations among neighboring intersections across crash types. Five cross-validation criteria, comprising Residual Sum of Squares, Kappa, Mean Absolute Deviation, Method Consistency Test, and Total Rank Difference, were applied to assess the performance of the five HSID methods at crash estimation. In terms of accumulated results combining all crash types, the model with spatial random effects consistently outperformed the other competing models by a significant margin. However, the inclusion of spatial random effects in temporal models fell short of attaining the expected results. The overall observation from the model fitness and validation results failed to highlight any correlation between better model fitness and superior crash estimation.

  12. Analytical Calculation of Mutual Information between Weakly Coupled Poisson-Spiking Neurons in Models of Dynamically Gated Communication.

    PubMed

    Cannon, Jonathan

    2017-01-01

    Mutual information is a commonly used measure of communication between neurons, but little theory exists describing the relationship between mutual information and the parameters of the underlying neuronal interaction. Such a theory could help us understand how specific physiological changes affect the capacity of neurons to synaptically communicate and, in particular, it could help us characterize the mechanisms by which neuronal dynamics gate the flow of information in the brain. Here we study a pair of linear-nonlinear-Poisson neurons coupled by a weak synapse. We derive an analytical expression describing the mutual information between their spike trains in terms of synapse strength, neuronal activation function, the time course of postsynaptic currents, and the time course of the background input received by the two neurons. This expression allows mutual information calculations that would otherwise be computationally intractable. We use this expression to analytically explore the interaction of excitation, information transmission, and the convexity of the activation function. Then, using this expression to quantify mutual information in simulations, we illustrate the information-gating effects of neural oscillations and oscillatory coherence, which may either increase or decrease the mutual information across the synapse depending on parameters. Finally, we show analytically that our results can quantitatively describe the selection of one information pathway over another when multiple sending neurons project weakly to a single receiving neuron.

  13. A Poisson-Boltzmann dynamics method with nonperiodic boundary condition

    NASA Astrophysics Data System (ADS)

    Lu, Qiang; Luo, Ray

    2003-12-01

    We have developed a well-behaved and efficient finite difference Poisson-Boltzmann dynamics method with a nonperiodic boundary condition. This is made possible, in part, by a rather fine grid spacing used for the finite difference treatment of the reaction field interaction. The stability is also made possible by a new dielectric model that is smooth both over time and over space, an important issue in the application of implicit solvents. In addition, the electrostatic focusing technique facilitates the use of an accurate yet efficient nonperiodic boundary condition: boundary grid potentials computed by the sum of potentials from individual grid charges. Finally, the particle-particle particle-mesh technique is adopted in the computation of the Coulombic interaction to balance accuracy and efficiency in simulations of large biomolecules. Preliminary testing shows that the nonperiodic Poisson-Boltzmann dynamics method is numerically stable in trajectories at least 4 ns long. The new model is also fairly efficient: it is comparable to that of the pairwise generalized Born solvent model, making it a strong candidate for dynamics simulations of biomolecules in dilute aqueous solutions. Note that the current treatment of total electrostatic interactions is with no cutoff, which is important for simulations of biomolecules. Rigorous treatment of the Debye-Hückel screening is also possible within the Poisson-Boltzmann framework: its importance is demonstrated by a simulation of a highly charged protein.

  14. Natural Poisson structures of nonlinear plasma dynamics

    SciTech Connect

    Kaufman, A.N.

    1982-06-01

    Hamiltonian field theories, for models of nonlinear plasma dynamics, require a Poisson bracket structure for functionals of the field variables. These are presented, applied, and derived for several sets of field variables: coherent waves, incoherent waves, particle distributions, and multifluid electrodynamics. Parametric coupling of waves and plasma yields concise expressions for ponderomotive effects (in kinetic and fluid models) and for induced scattering.

  15. Evolutionary inference via the Poisson Indel Process.

    PubMed

    Bouchard-Côté, Alexandre; Jordan, Michael I

    2013-01-22

    We address the problem of the joint statistical inference of phylogenetic trees and multiple sequence alignments from unaligned molecular sequences. This problem is generally formulated in terms of string-valued evolutionary processes along the branches of a phylogenetic tree. The classic evolutionary process, the TKF91 model [Thorne JL, Kishino H, Felsenstein J (1991) J Mol Evol 33(2):114-124] is a continuous-time Markov chain model composed of insertion, deletion, and substitution events. Unfortunately, this model gives rise to an intractable computational problem: The computation of the marginal likelihood under the TKF91 model is exponential in the number of taxa. In this work, we present a stochastic process, the Poisson Indel Process (PIP), in which the complexity of this computation is reduced to linear. The Poisson Indel Process is closely related to the TKF91 model, differing only in its treatment of insertions, but it has a global characterization as a Poisson process on the phylogeny. Standard results for Poisson processes allow key computations to be decoupled, which yields the favorable computational profile of inference under the PIP model. We present illustrative experiments in which Bayesian inference under the PIP model is compared with separate inference of phylogenies and alignments.

  16. A generalized Poisson and Poisson-Boltzmann solver for electrostatic environments.

    PubMed

    Fisicaro, G; Genovese, L; Andreussi, O; Marzari, N; Goedecker, S

    2016-01-07

    The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, taking into account the non-trivial electrostatic screening coming from the solvent and the electrolytes. As a consequence, the electrostatic potential has to be found by solving the generalized Poisson and the Poisson-Boltzmann equations for neutral and ionic solutions, respectively. In the present work, solvers for both problems have been developed. A preconditioned conjugate gradient method has been implemented for the solution of the generalized Poisson equation and the linear regime of the Poisson-Boltzmann equation, allowing the minimization problem to be solved iteratively with some ten iterations of the ordinary Poisson equation solver. In addition, a self-consistent procedure enables us to solve the non-linear Poisson-Boltzmann problem. Both solvers exhibit very high accuracy and parallel efficiency and allow for the treatment of periodic, free, and slab boundary conditions. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes.

  17. Additives

    NASA Technical Reports Server (NTRS)

    Smalheer, C. V.

    1973-01-01

    The chemistry of lubricant additives is discussed to show what the additives are chemically and what functions they perform in the lubrication of various kinds of equipment. Current theories regarding the mode of action of lubricant additives are presented. The additive groups discussed include the following: (1) detergents and dispersants, (2) corrosion inhibitors, (3) antioxidants, (4) viscosity index improvers, (5) pour point depressants, and (6) antifouling agents.

  18. Slope estimation for informatively right censored longitudinal data modelling the number of observations using geometric and Poisson distributions: application to renal transplant cohort.

    PubMed

    Jaffa, Miran A; Lipsitz, Stuart; Woolson, Robert F

    2015-12-01

    Analysis of longitudinal data is often complicated by the presence of informative right censoring. This type of censoring should be accounted for in the analysis so that valid slope estimates are attained. In this study, we developed a new likelihood-based approach wherein the likelihood function is integrated over random effects to obtain a marginal likelihood function. Maximum likelihood estimates for the population slope were acquired by direct maximisation of the marginal likelihood function, and empirical Bayes estimates for the individual slopes were generated using Gaussian quadrature. The performance of the model was assessed using the geometric and Poisson distributions to model the number of observations for every individual subject. Our model generated valid estimates for the slopes under both distributions with minimal bias and mean squared errors. Our sensitivity analysis confirmed the robustness of the model to assumptions pertaining to the underlying distribution and demonstrated its insensitivity to normality assumptions. Moreover, superiority of the model in terms of accuracy of slope estimates was consistently shown across the different levels of censoring in comparison to the naïve and bootstrap approaches. The model was illustrated using a cohort of renal transplant patients, and slope estimates adjusted for informative right censoring were obtained.

  19. CUMPOIS- CUMULATIVE POISSON DISTRIBUTION PROGRAM

    NASA Technical Reports Server (NTRS)

    Bowerman, P. N.

    1994-01-01

    The Cumulative Poisson distribution program, CUMPOIS, is one of two programs which make calculations involving cumulative Poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), can be used independently of one another. CUMPOIS determines the approximate cumulative binomial distribution, evaluates the cumulative distribution function (cdf) for gamma distributions with integer shape parameters, and evaluates the cdf for chi-square distributions with even degrees of freedom. It can be used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. CUMPOIS calculates the probability that n or fewer events (i.e., cumulative) will occur within any unit when the expected number of events is given as lambda. Normally, this probability is calculated by a direct summation, from i=0 to n, of terms involving the exponential function, lambda, and inverse factorials. This approach, however, eventually fails due to underflow for sufficiently large values of n. Additionally, when the exponential term is moved outside of the summation for simplification purposes, there is a risk that the terms remaining within the summation, and the summation itself, will overflow for certain values of i and lambda. CUMPOIS eliminates these possibilities by multiplying an additional exponential factor into the summation terms and the partial sum whenever overflow/underflow situations threaten. The reciprocal of this term is then multiplied into the completed sum, giving the cumulative probability. The CUMPOIS program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly on most C compilers. The program format is interactive, accepting lambda and n as inputs. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CUMPOIS was developed in 1988.
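
    For illustration, the same overflow/underflow-safe idea can be sketched in a few lines of Python (a minimal sketch, not the original C program; an equivalent log-space rescaling replaces the running exponential factor):

```python
import math

def cumpois(n, lam):
    """P(X <= n) for X ~ Poisson(lam), avoiding under- and overflow."""
    if n < 0:
        return 0.0
    if lam == 0.0:
        return 1.0
    # log of term i: -lam + i*log(lam) - log(i!)
    log_terms = [-lam + i * math.log(lam) - math.lgamma(i + 1)
                 for i in range(n + 1)]
    m = max(log_terms)      # plays the role of CUMPOIS's extra exponential factor
    s = sum(math.exp(t - m) for t in log_terms)
    return math.exp(m) * s  # multiply the scale factor back into the finished sum

print(cumpois(5, 3.15))     # about 0.90
```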

  1. Short-Term Effects of Climatic Variables on Hand, Foot, and Mouth Disease in Mainland China, 2008–2013: A Multilevel Spatial Poisson Regression Model Accounting for Overdispersion

    PubMed Central

    Yang, Fang; Yang, Min; Hu, Yuehua; Zhang, Juying

    2016-01-01

    Background Hand, Foot, and Mouth Disease (HFMD) is a worldwide infectious disease. In China, many provinces have reported HFMD cases, especially the south and southwest provinces. Many studies have found a strong association between the incidence of HFMD and climatic factors such as temperature, rainfall, and relative humidity. However, few studies have analyzed cluster effects between various geographical units. Methods The nonlinear relationships and lag effects between weekly HFMD cases and climatic variables were estimated for the period of 2008–2013 using a polynomial distributed lag model. The extra-Poisson multilevel spatial polynomial model was used to model the exact relationship between weekly HFMD incidence and climatic variables after considering cluster effects, the provincial correlated structure of HFMD incidence, and overdispersion. Smoothing spline methods were used to detect threshold effects between climatic factors and HFMD incidence. Results HFMD incidence was spatially heterogeneous across provinces, and the scale measure of overdispersion was 548.077. After controlling for long-term trends, spatial heterogeneity and overdispersion, temperature was highly associated with HFMD incidence. Weekly average temperature and weekly temperature difference showed approximately inverse-“V”-shaped and “V”-shaped relationships with HFMD incidence, respectively. The lag effects for weekly average temperature and weekly temperature difference were 3 weeks and 2 weeks. Highly spatially correlated HFMD incidence was detected in northern, central and southern provinces. Temperature explained most of the variation in HFMD incidence in southern and northeastern provinces. After adjustment for temperature, eastern and northern provinces still showed high variation in HFMD incidence. Conclusion We found a relatively strong association between weekly HFMD incidence and weekly average temperature. The association between the HFMD incidence and climatic
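
    As a rough illustration of handling extra-Poisson variation in a count regression, the sketch below fits a quasi-Poisson model, which rescales the naive Poisson standard errors by an estimated dispersion (the analogue of the overdispersion scale reported above). It assumes the statsmodels package and uses made-up data and column names, not the HFMD data, and it ignores the multilevel spatial structure:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "temp": rng.normal(20, 5, 200),        # weekly average temperature (toy)
    "temp_diff": rng.normal(8, 2, 200),    # weekly temperature difference (toy)
})
# Toy counts with extra-Poisson noise: a gamma frailty inflates the variance.
mu = np.exp(1.0 + 0.05 * df["temp"]) * rng.gamma(0.5, 2.0, 200)
df["cases"] = rng.poisson(mu)

X = sm.add_constant(df[["temp", "temp_diff"]])
fit = sm.GLM(df["cases"], X, family=sm.families.Poisson()).fit()
qfit = sm.GLM(df["cases"], X, family=sm.families.Poisson()).fit(scale="X2")
print(fit.bse)    # naive Poisson standard errors
print(qfit.bse)   # rescaled by the Pearson chi-square dispersion estimate
```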

  2. Supervised Gamma Process Poisson Factorization

    SciTech Connect

    Anderson, Dylan Zachary

    2015-05-01

    This thesis develops the supervised gamma process Poisson factorization (S-GPPF) framework, a novel supervised topic model for joint modeling of count matrices and document labels. S-GPPF is fully generative and nonparametric: document labels and count matrices are modeled under a unified probabilistic framework and the number of latent topics is controlled automatically via a gamma process prior. The framework provides for multi-class classification of documents using a generative max-margin classifier. Several recent data augmentation techniques are leveraged to provide for exact inference using a Gibbs sampling scheme. The first portion of this thesis reviews supervised topic modeling and several key mathematical devices used in the formulation of S-GPPF. The thesis then introduces the S-GPPF generative model and derives the conditional posterior distributions of the latent variables for posterior inference via Gibbs sampling. The S-GPPF is shown to exhibit state-of-the-art performance for joint topic modeling and document classification on a dataset of conference abstracts, beating out competing supervised topic models. The unique properties of S-GPPF along with its competitive performance make it a novel contribution to supervised topic modeling.

  3. Algorithm Calculates Cumulative Poisson Distribution

    NASA Technical Reports Server (NTRS)

    Bowerman, Paul N.; Nolty, Robert C.; Scheuer, Ernest M.

    1992-01-01

    Algorithm calculates accurate values of cumulative Poisson distribution under conditions where other algorithms fail because numbers are so small (underflow) or so large (overflow) that computer cannot process them. Factors inserted temporarily to prevent underflow and overflow. Implemented in CUMPOIS computer program described in "Cumulative Poisson Distribution Program" (NPO-17714).

  4. Poisson Spot with Magnetic Levitation

    ERIC Educational Resources Information Center

    Hoover, Matthew; Everhart, Michael; D'Arruda, Jose

    2010-01-01

    In this paper we describe a unique method for obtaining the famous Poisson spot without adding obstacles to the light path, which could interfere with the effect. A Poisson spot is the interference effect from parallel rays of light diffracting around a solid spherical object, creating a bright spot in the center of the shadow.

  5. Analysis of the Poisson-Nernst-Planck equation in a ball for modeling the Voltage-Current relation in neurobiological microdomains

    NASA Astrophysics Data System (ADS)

    Cartailler, J.; Schuss, Z.; Holcman, D.

    2017-01-01

    The electro-diffusion of ions is often described by the Poisson-Nernst-Planck (PNP) equations, which couple nonlinearly the charge concentration and the electric potential. This model is used, among others, to describe the motion of ions in neuronal micro-compartments. It remains at this time an open question how to determine the relaxation and the steady state distribution of voltage when an initial charge of ions is injected into a domain bounded by an impermeable dielectric membrane. The purpose of this paper is to construct an asymptotic approximation to the solution of the stationary PNP equations in a d-dimensional ball (d = 1, 2, 3) in the limit of large total charge. In this geometry the PNP system reduces to the Liouville-Gelfand-Bratu (LGB) equation, with the difference that the boundary condition is Neumann, not Dirichlet, and there is a minus sign in the exponent of the exponential term. The entire boundary is impermeable to ions and the electric field satisfies the compatibility condition of Poisson's equation. These differences replace attraction by repulsion in the LGB equation, thus completely changing the solution. We find that the voltage is maximal in the center and decreases toward the boundary. We also find that the potential drop between the center and the surface increases logarithmically in the total number of charges and not linearly, as in classical capacitance theory. This logarithmic singularity is obtained for d = 3 from an asymptotic argument and cannot be derived from the analysis of the phase portrait. These results are used to derive the relation between the outward current and the voltage in a dendritic spine, which is idealized as a dielectric sphere connected smoothly to the nerve axon by a narrow neck. This is a fundamental microdomain involved in neuronal communication. We compute the escape rate of an ion from the steady density in a ball, which models a neuronal spine head, to a small absorbing window in the sphere. We

  6. Poisson-Boltzmann model for protein-surface electrostatic interactions and grid-convergence study using the PyGBe code

    NASA Astrophysics Data System (ADS)

    Cooper, Christopher D.; Barba, Lorena A.

    2016-05-01

    Interactions between surfaces and proteins occur in many vital processes and are crucial in biotechnology: the ability to control specific interactions is essential in fields like biomaterials, biomedical implants and biosensors. In the latter case, biosensor sensitivity hinges on ligand proteins adsorbing on bioactive surfaces with a favorable orientation, exposing reaction sites to target molecules. Protein adsorption, being a free-energy-driven process, is difficult to study experimentally. This paper develops and evaluates a computational model to study electrostatic interactions of proteins and charged nanosurfaces, via the Poisson-Boltzmann equation. We extended the implicit-solvent model used in the open-source code PyGBe to include surfaces of imposed charge or potential. This code solves the boundary integral formulation of the Poisson-Boltzmann equation, discretized with surface elements. PyGBe has at its core a treecode-accelerated Krylov iterative solver, resulting in O(N log N) scaling, with further acceleration on hardware via multi-threaded execution on GPUs. It computes solvation and surface free energies, providing a framework for studying the effect of electrostatics on adsorption. We derived an analytical solution for a spherical charged surface interacting with a spherical dielectric cavity, and used it in a grid-convergence study to build evidence on the correctness of our approach. The study showed the error decaying with the average area of the boundary elements, i.e., the method is O(1/N), which is consistent with our previous verification studies using PyGBe. We also studied grid convergence using a real molecular geometry (protein G B1 D4′), in this case using Richardson extrapolation (in the absence of an analytical solution), and confirmed the O(1/N) scaling. With this work, we can now access a completely new family of problems, which no other major bioelectrostatics solver, e.g. APBS, is capable of dealing with. PyGBe is open-source.
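
    The grid-convergence logic used above (Richardson extrapolation when no analytical solution is available) can be sketched in a few lines; the energies below are made-up numbers. With N the number of boundary elements and a method that is O(1/N), the observed order p should come out near 1 when r is the ratio of element counts:

```python
import math

# Solvation energies from three systematically refined meshes (hypothetical).
f_coarse, f_medium, f_fine = -1050.0, -1012.0, -1003.0
r = 4.0   # refinement ratio in N between successive meshes

# Observed order of convergence from three-level Richardson extrapolation.
p = math.log(abs(f_medium - f_coarse) / abs(f_fine - f_medium)) / math.log(r)
f_extrap = f_fine + (f_fine - f_medium) / (r**p - 1)   # extrapolated value
print(f"observed order ~ {p:.2f}, extrapolated energy ~ {f_extrap:.1f}")
```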

  7. Resources allocation in healthcare for cancer: a case study using generalised additive mixed models.

    PubMed

    Musio, Monica; Sauleau, Erik A; Augustin, Nicole H

    2012-11-01

    Our aim is to develop a method to help re-allocate healthcare resources linked to cancer, in order to replan the allocation of providers. Ageing of the population has a considerable impact on the use of health resources, because aged people require more specialised medical care, notably due to cancer. We propose a method for monitoring changes in cancer incidence in space and time, taking into account two age categories according to the general organisation of healthcare. We use generalised additive mixed models with a Poisson response, according to the methodology presented in Wood, Generalised additive models: an introduction with R. Chapman and Hall/CRC, 2006. Besides one-dimensional smooth functions accounting for non-linear effects of covariates, the space-time interaction can be modelled using scale-invariant smoothers. Incidence data collected by a general cancer registry between 1992 and 2007 in a specific area of France are studied. Our best model exhibits a strong increase of the incidence of cancer over time and an obvious spatial pattern for people aged over 70 years, with a higher incidence in the central band of the region. This is a strong argument for re-allocating resources for cancer care for older people in this sub-region.
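
    As a hedged illustration of a generalised additive model with a Poisson response, the sketch below uses the pygam package (an assumption; the study itself follows Wood's R-based methodology) on hypothetical incidence-like data, with a smooth time effect plus a spatial tensor-product surface standing in for the space-time terms:

```python
import numpy as np
from pygam import PoissonGAM, s, te

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(1992, 2007, n),   # year of diagnosis (toy)
    rng.uniform(0, 100, n),       # x spatial coordinate (toy)
    rng.uniform(0, 100, n),       # y spatial coordinate (toy)
])
y = rng.poisson(np.exp(0.05 * (X[:, 0] - 1992)))   # counts rising over time

# Smooth effect of time plus a spatial interaction surface.
gam = PoissonGAM(s(0) + te(1, 2)).fit(X, y)
gam.summary()
```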

  8. Deformation mechanisms in negative Poisson's ratio materials - Structural aspects

    NASA Technical Reports Server (NTRS)

    Lakes, R.

    1991-01-01

    Poisson's ratio in materials is governed by the following aspects of the microstructure: the presence of rotational degrees of freedom, non-affine deformation kinematics, or anisotropic structure. Several structural models are examined. The non-affine kinematics are seen to be essential for the production of negative Poisson's ratios for isotropic materials containing central force linkages of positive stiffness. Non-central forces combined with pre-load can also give rise to a negative Poisson's ratio in isotropic materials. A chiral microstructure with non-central force interaction or non-affine deformation can also exhibit a negative Poisson's ratio. Toughness and damage resistance in these materials may be affected by the Poisson's ratio itself, as well as by generalized continuum aspects associated with the microstructure.

  9. Newton/Poisson-Distribution Program

    NASA Technical Reports Server (NTRS)

    Bowerman, Paul N.; Scheuer, Ernest M.

    1990-01-01

    NEWTPOIS, one of two computer programs making calculations involving cumulative Poisson distributions. NEWTPOIS (NPO-17715) and CUMPOIS (NPO-17714) used independently of one another. NEWTPOIS determines Poisson parameter for given cumulative probability, from which one obtains percentiles for gamma distributions with integer shape parameters and percentiles for chi-square distributions with even degrees of freedom. Used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. Program written in C.

  10. Influence of dispersing additive on asphaltenes aggregation in model system

    NASA Astrophysics Data System (ADS)

    Gorshkov, A. M.; Shishmina, L. V.; Tukhvatullina, A. Z.; Ismailov, Yu R.; Ges, G. A.

    2016-09-01

    This work investigates the influence of a dispersing additive on asphaltene aggregation in the asphaltene-toluene-heptane model system by the photon correlation spectroscopy method. The experimental relationship between the onset point of asphaltenes and their concentration in toluene was obtained. The influence of the model system composition on asphaltene aggregation was investigated. The aggregative and sedimentation stability of asphaltenes was estimated in the model system and in the system with the dispersing additive added.

  11. Background stratified Poisson regression analysis of cohort data.

    PubMed

    Richardson, David B; Langholz, Bryan

    2012-03-01

    Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models.
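
    The profiling idea can be sketched compactly: in a log-linear rate model with one 'nuisance' intercept per background stratum, each intercept has a closed-form maximum given the dose coefficient, so it can be substituted back and only the coefficient of interest optimized. Below is a minimal sketch in that spirit on synthetic data (not the Life Span Study or miner data), assuming numpy and scipy:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
n_strata = 20
strata = rng.integers(0, n_strata, 1000)   # background stratum labels
pt = rng.uniform(1, 10, 1000)              # person-time
dose = rng.uniform(0, 2, 1000)             # exposure of primary interest
alpha = rng.normal(-3, 0.5, n_strata)      # true stratum effects
y = rng.poisson(pt * np.exp(alpha[strata] + 0.4 * dose))

def profiled_nll(beta):
    # Given beta, the MLE of stratum s's intercept satisfies
    # exp(alpha_s) = Y_s / sum_s pt*exp(beta*dose); substituting it back
    # leaves the profile log likelihood below (constants dropped).
    mu0 = pt * np.exp(beta * dose)
    ll = beta * np.sum(y * dose)
    for s_ in range(n_strata):
        m = strata == s_
        ys = y[m].sum()
        if ys > 0:
            ll += ys * np.log(ys / mu0[m].sum())
    return -ll

fit = minimize_scalar(profiled_nll, bounds=(-2, 2), method="bounded")
print("estimated dose coefficient:", fit.x)   # should land near the true 0.4
```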

  12. A proximal iteration for deconvolving Poisson noisy images using sparse representations.

    PubMed

    Dupé, François-Xavier; Fadili, Jalal M; Starck, Jean-Luc

    2009-02-01

    We propose an image deconvolution algorithm when the data is contaminated by Poisson noise. The image to restore is assumed to be sparsely represented in a dictionary of waveforms such as the wavelet or curvelet transforms. Our key contributions are as follows. First, we handle the Poisson noise properly by using the Anscombe variance stabilizing transform leading to a nonlinear degradation equation with additive Gaussian noise. Second, the deconvolution problem is formulated as the minimization of a convex functional with a data-fidelity term reflecting the noise properties, and a nonsmooth sparsity-promoting penalty over the image representation coefficients (e.g., the l1-norm). An additional term is also included in the functional to ensure positivity of the restored image. Third, a fast iterative forward-backward splitting algorithm is proposed to solve the minimization problem. We derive existence and uniqueness conditions of the solution, and establish convergence of the iterative algorithm. Finally, a GCV-based model selection procedure is proposed to objectively select the regularization parameter. Experimental results are carried out to show the striking benefits gained from taking into account the Poisson statistics of the noise. These results also suggest that using sparse-domain regularization may be tractable in many deconvolution applications with Poisson noise such as astronomy and microscopy.
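
    The Anscombe step in the first contribution can be illustrated on its own: the transform maps Poisson data to approximately unit-variance Gaussian data, after which any Gaussian-noise method applies. In the sketch below a plain Gaussian filter stands in for the sparsity-regularized deconvolution, the data are synthetic, and the simple algebraic inverse is used (it is biased at low counts):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def anscombe(x):
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    return (y / 2.0) ** 2 - 3.0 / 8.0   # simple inverse; biased at low counts

rng = np.random.default_rng(0)
truth = 20.0 * np.exp(-((np.arange(128) - 64) ** 2) / 200.0) + 2.0
noisy = rng.poisson(truth).astype(float)

stabilized = anscombe(noisy)                 # variance ~ 1 regardless of mean
denoised = gaussian_filter(stabilized, 2.0)  # stand-in Gaussian-noise denoiser
estimate = inverse_anscombe(denoised)
print(np.mean((noisy - truth) ** 2), np.mean((estimate - truth) ** 2))
```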

  13. Minimum risk wavelet shrinkage operator for Poisson image denoising.

    PubMed

    Cheng, Wu; Hirakawa, Keigo

    2015-05-01

    The pixel values of images taken by an image sensor are said to be corrupted by Poisson noise. To date, multiscale Poisson image denoising techniques have processed Haar frame and wavelet coefficients--the modeling of coefficients is enabled by the Skellam distribution analysis. We extend these results by solving for shrinkage operators for Skellam that minimize the risk functional in the multiscale Poisson image denoising setting. The minimum risk shrinkage operator of this kind effectively produces denoised wavelet coefficients with minimum attainable L2 error.
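
    A small numerical check of the Skellam fact that underlies this modeling: an unnormalized Haar detail coefficient of Poisson data is the difference of two independent Poisson variables, hence Skellam-distributed with mean m1 - m2 and variance m1 + m2:

```python
import numpy as np

rng = np.random.default_rng(0)
m1, m2 = 7.0, 3.0
d = rng.poisson(m1, 100_000) - rng.poisson(m2, 100_000)
print(d.mean(), d.var())   # approximately 4.0 and 10.0
```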

  14. Future-singularity-free accelerating expansion with modified Poisson brackets

    SciTech Connect

    Kim, Wontae; Son, Edwin J.

    2007-01-15

    We show that the second accelerating expansion of the universe appears smoothly from the decelerating phase, which follows the initial inflation, in the two-dimensional soluble semiclassical dilaton gravity along with the modified Poisson brackets with noncommutativity between the relevant fields. This is in contrast to the fact that the ordinary solution of the equations of motion following from the conventional Poisson algebra describes a permanent accelerating universe without any phase change. In this modified model, it turns out that the noncommutative Poisson algebra is responsible for the remarkable phase transition to the second accelerating expansion.

  15. Analysis of Time to Event Outcomes in Randomized Controlled Trials by Generalized Additive Models

    PubMed Central

    Argyropoulos, Christos; Unruh, Mark L.

    2015-01-01

    Background Randomized Controlled Trials almost invariably utilize the hazard ratio (HR) calculated with a Cox proportional hazard model as a treatment efficacy measure. Despite the widespread adoption of HRs, these provide a limited understanding of the treatment effect and may even provide a biased estimate when the assumption of proportional hazards in the Cox model is not verified by the trial data. Additional treatment effect measures on the survival probability or the time scale may be used to supplement HRs, but a framework for the simultaneous generation of these measures is lacking. Methods By splitting follow-up time at the nodes of a Gauss-Lobatto numerical quadrature rule, techniques for Poisson Generalized Additive Models (PGAM) can be adopted for flexible hazard modeling. Straightforward simulation post-estimation transforms PGAM estimates for the log hazard into estimates of the survival function. These in turn were used to calculate relative and absolute risks or even differences in restricted mean survival time between treatment arms. We illustrate our approach with extensive simulations and in two trials: IPASS (in which the proportionality of hazards was violated) and HEMO, a long-duration study conducted under evolving standards of care on a heterogeneous patient population. Findings PGAM can generate estimates of the survival function and the hazard ratio that are essentially identical to those obtained by Kaplan-Meier curve analysis and the Cox model. PGAMs can simultaneously provide multiple measures of treatment efficacy after a single data pass. Furthermore, they supported not only unadjusted (overall treatment effect) but also subgroup and adjusted analyses, while incorporating multiple time scales and accounting for non-proportional hazards in survival data. Conclusions By augmenting the HR conventionally reported, PGAMs have the potential to support the inferential goals of multiple stakeholders involved in the evaluation and appraisal of clinical trial
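
    The PGAM construction rests on the classical equivalence between piecewise-exponential hazard models and Poisson regression with a log-exposure offset. Below is a minimal sketch of that Poisson trick, with equal-width intervals standing in for the Gauss-Lobatto nodes, toy survival data, and statsmodels assumed; it illustrates the device, not the authors' implementation:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
arm = rng.integers(0, 2, n)                                # treatment arm
t = rng.exponential(1.0 / np.where(arm == 1, 0.5, 1.0))    # true event times
obs = np.minimum(t, 2.0)                                   # censor at t = 2
event = (t <= 2.0).astype(int)

rows, edges = [], np.linspace(0, 2.0, 11)
for ti, ei, ai in zip(obs, event, arm):
    for lo, hi in zip(edges[:-1], edges[1:]):
        if ti <= lo:
            break
        rows.append((int(ei and ti <= hi),   # event indicator in the interval
                     min(ti, hi) - lo,       # exposure time in the interval
                     ai, (lo + hi) / 2))
d, expo, a, mid = map(np.array, zip(*rows))

X = sm.add_constant(np.column_stack([a, mid]))   # crude linear time effect
fit = sm.GLM(d, X, family=sm.families.Poisson(), offset=np.log(expo)).fit()
print(fit.params)   # arm coefficient approximates the log hazard ratio (~ -0.7)
```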

  16. NEWTPOIS- NEWTON POISSON DISTRIBUTION PROGRAM

    NASA Technical Reports Server (NTRS)

    Bowerman, P. N.

    1994-01-01

    The Newton Poisson distribution program, NEWTPOIS, is one of two programs which make calculations involving cumulative Poisson distributions. Both programs, NEWTPOIS (NPO-17715) and CUMPOIS (NPO-17714), can be used independently of one another. NEWTPOIS determines percentiles for gamma distributions with integer shape parameters and calculates percentiles for chi-square distributions with even degrees of freedom. It can be used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. NEWTPOIS determines the Poisson parameter (lambda), that is, the mean (or expected) number of events occurring in a given unit of time, area, or space. Given that the user already knows the cumulative probability for a specific number of occurrences (n), it is usually a simple matter of substitution into the Poisson distribution summation to arrive at lambda. However, direct calculation of the Poisson parameter becomes difficult for small positive values of n and unmanageable for large values. NEWTPOIS uses Newton's iteration method to extract lambda from the initial value condition of the Poisson distribution where n=0, taking successive estimations until some user-specified error term (epsilon) is reached. The NEWTPOIS program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly on most C compilers. The program format is interactive, accepting epsilon, n, and the cumulative probability of the occurrence of n as inputs. It has been implemented under DOS 3.2 and has a memory requirement of 30K. NEWTPOIS was developed in 1988.
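
    A minimal Python sketch of the Newton iteration described above (not the original C program): given n and a target cumulative probability p, solve for lambda using the identity d/dlambda P(X <= n; lambda) = -pmf(n; lambda); the n = 0 case is exact, lambda = -log(p):

```python
import math

def poisson_cdf_and_pmf(n, lam):
    pmf = math.exp(-lam)
    cdf = pmf
    for i in range(1, n + 1):
        pmf *= lam / i
        cdf += pmf
    return cdf, pmf

def newtpois(n, p, eps=1e-10):
    if n == 0:
        return -math.log(p)            # exact: P(X <= 0) = exp(-lambda)
    lam = n + 1.0                      # starting guess near the median
    for _ in range(200):
        cdf, pmf = poisson_cdf_and_pmf(n, lam)
        step = (cdf - p) / pmf         # Newton step; d(cdf)/d(lambda) = -pmf
        if abs(step) < eps:
            break
        lam = lam + step if lam + step > 0 else lam / 2.0   # keep lambda > 0
    return lam

print(newtpois(5, 0.9))   # lambda with P(X <= 5) = 0.9, about 3.15
```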

  17. Poisson's ratio over two centuries: challenging hypotheses

    PubMed Central

    Greaves, G. Neville

    2013-01-01

    This article explores Poisson's ratio, starting with the controversy concerning its magnitude and uniqueness in the context of the molecular and continuum hypotheses competing in the development of elasticity theory in the nineteenth century, moving on to its place in the development of materials science and engineering in the twentieth century, and concluding with its recent re-emergence as a universal metric for the mechanical performance of materials on any length scale. During these episodes France lost its scientific pre-eminence as paradigms switched from mathematical to observational, and accurate experiments became the prerequisite for scientific advance. The emergence of the engineering of metals followed, and subsequently the invention of composites—both somewhat separated from the discovery of quantum mechanics and crystallography, and illustrating the bifurcation of technology and science. Nowadays disciplines are reconnecting in the face of new scientific demands. During the past two centuries, though, the shape versus volume concept embedded in Poisson's ratio has remained invariant, but its application has exploded from its origins in describing the elastic response of solids and liquids, into areas such as materials with negative Poisson's ratio, brittleness, glass formation, and a re-evaluation of traditional materials. Moreover, the two contentious hypotheses have been reconciled in their complementarity within the hierarchical structure of materials and through computational modelling. PMID:24687094

  18. Additive and subtractive scrambling in optional randomized response modeling.

    PubMed

    Hussain, Zawar; Al-Sobhi, Mashail M; Al-Zahrani, Bander

    2014-01-01

    This article considers unbiased estimation of the mean, variance and sensitivity level of a sensitive variable via scrambled response modeling. In particular, we focus on estimation of the mean. The idea of using additive and subtractive scrambling has been suggested under a recent scrambled response model. Whether it is estimation of the mean, variance or sensitivity level, the proposed estimation scheme is shown to be relatively more efficient than that recent model. As far as estimation of the mean is concerned, the proposed estimators perform relatively better than estimators based on recent additive scrambling models. Relative efficiency comparisons are also made in order to highlight the performance of the proposed estimators under the suggested scrambling technique.

  19. Complex Modelling Scheme Of An Additive Manufacturing Centre

    NASA Astrophysics Data System (ADS)

    Popescu, Liliana Georgeta

    2015-09-01

    This paper presents a modelling scheme sustaining the development of an additive manufacturing research centre model and its processes. The modelling is performed using IDEF0, with the resulting process model representing the basic processes required to develop such a centre in any university. While the activities presented in this study are those recommended in general, changes may occur in specific existing situations in a research centre.

  20. Tuning the Poisson's Ratio of Biomaterials for Investigating Cellular Response

    PubMed Central

    Meggs, Kyle; Qu, Xin; Chen, Shaochen

    2013-01-01

    Cells sense and respond to mechanical forces, regardless of whether the source is from a normal tissue matrix, an adjacent cell or a synthetic substrate. In recent years, cell response to surface rigidity has been extensively studied by modulating the elastic modulus of poly(ethylene glycol) (PEG)-based hydrogels. In the context of biomaterials, Poisson's ratio, another fundamental material property parameter has not been explored, primarily because of challenges involved in tuning the Poisson's ratio in biological scaffolds. Two-photon polymerization is used to fabricate suspended web structures that exhibit positive and negative Poisson's ratio (NPR), based on analytical models. NPR webs demonstrate biaxial expansion/compression behavior, as one or multiple cells apply local forces and move the structures. Unusual cell division on NPR structures is also demonstrated. This methodology can be used to tune the Poisson's ratio of several photocurable biomaterials and could have potential implications in the field of mechanobiology. PMID:24076754

  1. Comprehensive European dietary exposure model (CEDEM) for food additives.

    PubMed

    Tennant, David R

    2016-05-01

    European methods for assessing dietary exposures to nutrients, additives and other substances in food are limited by the availability of detailed food consumption data for all member states. A proposed comprehensive European dietary exposure model (CEDEM) applies summary data published by the European Food Safety Authority (EFSA) in a deterministic model based on an algorithm from the EFSA intake method for food additives. The proposed approach can predict estimates of food additive exposure provided in previous EFSA scientific opinions that were based on the full European food consumption database.

  2. Graded geometry and Poisson reduction

    SciTech Connect

    Cattaneo, A. S.; Zambon, M.

    2009-02-02

    The main result extends the Marsden-Ratiu reduction theorem in Poisson geometry, and is proven by means of graded geometry. In this note we provide the background material about graded geometry necessary for the proof. Further, we provide an alternative algebraic proof for the main result.

  3. Sparse Poisson noisy image deblurring.

    PubMed

    Carlavan, Mikael; Blanc-Féraud, Laure

    2012-04-01

    Deblurring noisy Poisson images has recently been the subject of an increasing number of works in many areas such as astronomy and biological imaging. In this paper, we focus on confocal microscopy, which is a very popular technique for 3-D imaging of biological living specimens that gives images with very good resolution (several hundred nanometers), although degraded by both blur and Poisson noise. Deconvolution methods have been proposed to reduce these degradations, and in this paper we focus on techniques that promote the introduction of an explicit prior on the solution. One difficulty of these techniques is to set the value of the parameter which weights the tradeoff between the data term and the regularizing term. Only a few works have been devoted to the automatic selection of this regularizing parameter when considering Poisson noise; therefore, it is often set manually such that it gives the best visual results. We present here two recent methods to estimate this regularizing parameter, and we first propose an improvement of these estimators which takes advantage of confocal images. Following these estimators, we then propose to express the problem of the deconvolution of Poisson noisy images as the minimization of a new constrained problem. The proposed constrained formulation is well suited to this application domain since it is directly expressed using the antilog likelihood of the Poisson distribution and therefore does not require any approximation. We show how to solve the unconstrained and constrained problems using the recent alternating-direction technique, and we present results on synthetic and real data using well-known priors, such as total variation and wavelet transforms. Among these wavelet transforms, we especially focus on the dual-tree complex wavelet transform and on the dictionary composed of curvelets and an undecimated wavelet transform.

  4. Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan

    2013-01-01

    The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as nonconstant variance resulting from systematic errors leaking into random errors, and the lack of prediction capability. Therefore, the multiplicative error model is a better choice.
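
    The distinction can be made concrete with a short sketch: under the additive model the residual is obs - truth, while under the multiplicative model it is log(obs) - log(truth). With synthetic rain data generated multiplicatively, only the multiplicative residuals keep a spread that does not grow with rain rate (criterion 2 above):

```python
import numpy as np

rng = np.random.default_rng(0)
truth = rng.gamma(0.5, 10.0, 5000) + 0.1           # skewed daily rain amounts
obs = truth * np.exp(rng.normal(0.1, 0.4, 5000))   # multiplicative error

residuals = {
    "additive": obs - truth,
    "multiplicative": np.log(obs) - np.log(truth),
}
for name, err in residuals.items():
    light = err[truth < np.median(truth)].std()    # residual spread, light rain
    heavy = err[truth >= np.median(truth)].std()   # residual spread, heavy rain
    print(f"{name:>14}: std light = {light:.2f}, std heavy = {heavy:.2f}")
```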

  5. Calculation of the Poisson cumulative distribution function

    NASA Technical Reports Server (NTRS)

    Bowerman, Paul N.; Nolty, Robert G.; Scheuer, Ernest M.

    1990-01-01

    A method for calculating the Poisson cdf (cumulative distribution function) is presented. The method avoids computer underflow and overflow during the process. The computer program uses this technique to calculate the Poisson cdf for arbitrary inputs. An algorithm that determines the Poisson parameter required to yield a specified value of the cdf is presented.

  6. Poisson distribution to analyze near-threshold motor evoked potentials.

    PubMed

    Kaelin-Lang, Alain; Conforto, Adriana B; Z'Graggen, Werner; Hess, Christian W

    2010-11-01

    Motor unit action potentials (MUAPs) evoked by repetitive, low-intensity transcranial magnetic stimulation can be modeled as a Poisson process. A mathematical consequence of such a model is that the ratio of the variance to the mean of the amplitudes of motor evoked potentials (MEPs) should provide an estimate of the mean size of the individual MUAPs that summate to generate each MEP. We found that this is, in fact, the case. Our finding thus supports the use of the Poisson distribution to model MEP generation and indicates that this model enables characterization of the motor unit population that contributes to near-threshold MEPs.
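
    The variance-to-mean consequence of this model is easy to verify by simulation: in a compound-Poisson model where each MEP is the sum of N ~ Poisson(lam) MUAPs of mean size q, var/mean = E[X^2]/E[X], which equals q up to a small correction for MUAP size variability. All numbers below are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, q = 3.0, 50.0                 # mean MUAP count per stimulus and mean size
counts = rng.poisson(lam, 20_000)  # number of recruited units per stimulus
meps = np.array([rng.normal(q, 5.0, c).sum() for c in counts])  # summed MUAPs
print(meps.var() / meps.mean())    # ~ q (slightly above, from size jitter)
```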

  7. Electroacoustics modeling of piezoelectric welders for ultrasonic additive manufacturing processes

    NASA Astrophysics Data System (ADS)

    Hehr, Adam; Dapino, Marcelo J.

    2016-04-01

    Ultrasonic additive manufacturing (UAM) is a recent 3D metal printing technology which utilizes ultrasonic vibrations from high power piezoelectric transducers to additively weld similar and dissimilar metal foils. CNC machining is used intermittently with welding to create internal channels, to embed temperature-sensitive components, sensors, and materials, and to net shape parts. Structural dynamics of the welder and work piece influence the performance of the welder and part quality. To understand the impact of structural dynamics on UAM, a linear time-invariant (LTI) model is used to relate the system inputs of shear force and electric current to the system outputs of welder velocity and voltage. Frequency response measurements are combined with in-situ operating measurements of the welder to identify model parameters and to verify model assumptions. The proposed LTI model can enhance process consistency and performance and guide the development of improved quality monitoring and control strategies.

  8. An Additional Symmetry in the Weinberg-Salam Model

    SciTech Connect

    Bakker, B.L.G.; Veselov, A.I.; Zubkov, M.A.

    2005-06-01

    An additional Z_6 symmetry hidden in the fermion and Higgs sectors of the Standard Model has been found recently. It has a singular nature and is connected to the centers of the SU(3) and SU(2) subgroups of the gauge group. A lattice regularization of the Standard Model was constructed that possesses this symmetry. In this paper, we report our results on the numerical simulation of its electroweak sector.

  9. Modeling uranium transport in acidic contaminated groundwater with base addition.

    PubMed

    Zhang, Fan; Luo, Wensui; Parker, Jack C; Brooks, Scott C; Watson, David B; Jardine, Philip M; Gu, Baohua

    2011-06-15

    This study investigates reactive transport modeling in a column of uranium(VI)-contaminated sediments with base additions in the circulating influent. The groundwater and sediment exhibit oxic conditions with low pH and high concentrations of NO3(-), SO4(2-), U, and various metal cations. Preliminary batch experiments indicate that additions of strong base induce rapid immobilization of U for this material. In the column experiment that is the focus of the present study, effluent groundwater was titrated with NaOH solution in an inflow reservoir before reinjection to gradually increase the solution pH in the column. An equilibrium hydrolysis, precipitation and ion exchange reaction model developed through simulation of the preliminary batch titration experiments predicted faster reduction of aqueous Al than observed in the column experiment. The model was therefore modified to consider reaction kinetics for the precipitation and dissolution processes, which are the major mechanism for Al immobilization. The combined kinetic and equilibrium reaction model adequately described variations in pH, aqueous concentrations of metal cations (Al, Ca, Mg, Sr, Mn, Ni, Co), sulfate and U(VI). The experimental and modeling results indicate that U(VI) can be effectively sequestered with controlled base addition due to sorption by slowly precipitated Al with pH-dependent surface charge. The model may prove useful to predict field-scale U(VI) sequestration and remediation effectiveness.

  10. Generalised additive modelling approach to the fermentation process of glutamate.

    PubMed

    Liu, Chun-Bo; Li, Yun; Pan, Feng; Shi, Zhong-Ping

    2011-03-01

    In this work, generalised additive models (GAMs) were used for the first time to model the fermentation of glutamate (Glu). It was found that three fermentation parameters, fermentation time (T), dissolved oxygen (DO) and oxygen uptake rate (OUR), could capture 97% of the variance in Glu production during the fermentation process through a GAM model calibrated using online data from 15 fermentation experiments. This model was applied to investigate the individual and combined effects of T, DO and OUR on the production of Glu. Conditions to optimize the fermentation process were proposed based on a simulation study with this model. Results suggested that the production of Glu can reach a high level by controlling DO and OUR at the proposed optimal levels during the fermentation process. The GAM approach therefore provides an alternative way to model and optimize the fermentation process of Glu.

  11. Numerical Solution of 3D Poisson-Nernst-Planck Equations Coupled with Classical Density Functional Theory for Modeling Ion and Electron Transport in a Confined Environment

    SciTech Connect

    Meng, Da; Zheng, Bin; Lin, Guang; Sushko, Maria L.

    2014-08-29

    We have developed efficient numerical algorithms for the solution of 3D steady-state Poisson-Nernst-Planck (PNP) equations with excess chemical potentials described by the classical density functional theory (cDFT). The coupled PNP equations are discretized by a finite difference scheme and solved iteratively by the Gummel method with relaxation. The Nernst-Planck equations are transformed into Laplace equations through the Slotboom transformation. An algebraic multigrid method is then applied to efficiently solve the Poisson equation and the transformed Nernst-Planck equations. A novel strategy for calculating excess chemical potentials through fast Fourier transforms is proposed, which reduces the computational complexity from O(N^2) to O(N log N), where N is the number of grid points. Integrals involving the Dirac delta function are evaluated directly by coordinate transformation, which yields more accurate results compared to applying numerical quadrature to an approximated delta function. Numerical results for ion and electron transport in solid electrolyte for Li ion batteries are shown to be in good agreement with the experimental data and the results from previous studies.
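
    For reference, a sketch of the Slotboom transformation mentioned above, written with the dimensionless potential (thermal-voltage units) and generic notation (c concentration, phi potential, z valence, D diffusivity); the paper's own conventions may differ:

```latex
% Steady-state Nernst-Planck flux and conservation law:
J = -D\left(\nabla c + z\,c\,\nabla\phi\right), \qquad \nabla\cdot J = 0 .
% Substituting the Slotboom variable  c = u\,e^{-z\phi}  gives
\nabla c + z\,c\,\nabla\phi = e^{-z\phi}\,\nabla u
\quad\Longrightarrow\quad
\nabla\cdot\!\left(D\,e^{-z\phi}\,\nabla u\right) = 0 ,
% a symmetric, Laplace-type equation in u that standard elliptic machinery
% (e.g., algebraic multigrid, as above) solves efficiently.
```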

  12. Validation of transport models using additive flux minimization technique

    NASA Astrophysics Data System (ADS)

    Pankin, A. Y.; Kruger, S. E.; Groebner, R. J.; Hakim, A.; Kritz, A. H.; Rafiq, T.

    2013-10-01

    A new additive flux minimization technique is proposed for carrying out the verification and validation (V&V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V&V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V&V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile.

  13. Simultaneous measurement of the Young's modulus and the Poisson ratio of thin elastic layers.

    PubMed

    Gross, Wolfgang; Kress, Holger

    2017-02-07

    The behavior of cells and tissue is greatly influenced by the mechanical properties of their environment. For studies on the interactions between cells and soft matrices, especially those applying traction force microscopy, the characterization of the mechanical properties of thin substrate layers is essential. Various techniques to measure the elastic modulus are available. Methods to accurately measure the Poisson ratio of such substrates are rare and often imply either a combination of multiple techniques or additional equipment which is not needed for the actual biological studies. Here we describe a novel technique to measure both parameters, the Young's modulus and the Poisson ratio, in a single experiment. The technique requires only a standard inverted epifluorescence microscope. As a model system, we chose cross-linked polyacrylamide and poly-N-isopropylacrylamide hydrogels, which are known to obey Hooke's law. We place millimeter-sized steel spheres on the substrates, which indent the surface. The data are evaluated using a previously published model which takes finite-thickness effects of the substrate layer into account. We demonstrate experimentally for the first time that the application of the model allows the simultaneous determination of both the Young's modulus and the Poisson ratio. Since the method is easy to adapt and comes without the need for special equipment, we envision the technique becoming a standard tool for the characterization of substrates for a wide range of investigations of cell and tissue behavior in various mechanical environments, as well as other samples, including biological materials.

  14. Time-dependent phenomena in the potential response of ion-selective electrodes treated by the Nernst-Planck-Poisson model. 1. Intramembrane processes and selectivity.

    PubMed

    Lingenfelter, Peter; Bedlechowicz-Sliwakowska, Iwona; Sokalski, Tomasz; Maj-Zurawska, Magdalena; Lewenstam, Andrzej

    2006-10-01

    The variability of selectivity coefficients, resulting from potential changes over time and the concentration ratio of primary to interfering ions, impedes many practical applications of ion-selective electrodes (ISEs). Existing theoretical interpretations of ISE selectivity are restricted by severe assumptions, such as steady state and electroneutrality, which hinder theorizing on this problem. For this reason, for the first time, the Nernst-Planck-Poisson equations are used to predict and visualize the selectivity variability over time and the concentration ratio. Special emphasis is placed on the non-Nernstian response in the measurements with liquid-ion-exchanger- and neutral-carrier-based ISEs. The conditions under which measured selectivity coefficients are true (unbiased) are demonstrated.

  15. Electrodiffusion Models of Neurons and Extracellular Space Using the Poisson-Nernst-Planck Equations—Numerical Simulation of the Intra- and Extracellular Potential for an Axon Model

    PubMed Central

    Pods, Jurgis; Schönke, Johannes; Bastian, Peter

    2013-01-01

    In neurophysiology, extracellular signals—as measured by local field potentials (LFP) or electroencephalography—are of great significance. Their exact biophysical basis is, however, still not fully understood. We present a three-dimensional model exploiting the cylinder symmetry of a single axon in extracellular fluid based on the Poisson-Nernst-Planck equations of electrodiffusion. The propagation of an action potential along the axonal membrane is investigated by means of numerical simulations. Special attention is paid to the Debye layer, the region with strong concentration gradients close to the membrane, which is explicitly resolved by the computational mesh. We focus on the evolution of the extracellular electric potential. A characteristic up-down-up LFP waveform in the far-field is found. Close to the membrane, the potential shows a more intricate shape. A comparison with the widely used line source approximation reveals similarities and demonstrates the strong influence of membrane currents. However, the electrodiffusion model shows another signal component stemming directly from the intracellular electric field, called the action potential echo. Depending on the neuronal configuration, this might have a significant effect on the LFP. In these situations, electrodiffusion models should be used for quantitative comparisons with experimental data. PMID:23823244

  16. Multiscale and Multiphysics Modeling of Additive Manufacturing of Advanced Materials

    NASA Technical Reports Server (NTRS)

    Liou, Frank; Newkirk, Joseph; Fan, Zhiqiang; Sparks, Todd; Chen, Xueyang; Fletcher, Kenneth; Zhang, Jingwei; Zhang, Yunlu; Kumar, Kannan Suresh; Karnati, Sreekar

    2015-01-01

    The objective of this proposed project is to research and develop a prediction tool for advanced additive manufacturing (AAM) processes for advanced materials and to develop experimental methods to provide fundamental properties and establish validation data. Aircraft structures and engines demand materials that are stronger, useable at much higher temperatures, provide less acoustic transmission, and enable more aeroelastic tailoring than those currently used. Significant improvements in properties can only be achieved by processing the materials under nonequilibrium conditions, such as AAM processes. AAM processes encompass a class of processes that use a focused heat source to create a melt pool on a substrate. Examples include Electron Beam Freeform Fabrication and Direct Metal Deposition. These types of additive processes enable fabrication of parts directly from CAD drawings. To achieve the desired material properties and geometries of the final structure, it is necessary to assess the impact of process parameters and to predict optimized conditions with numerical modeling as an effective prediction tool. The processing targets are multiple and span different spatial scales, and the associated physical phenomena are multiphysics and multiscale in nature. In this project, the research work modeled AAM processes in a multiscale and multiphysics approach. A macroscale model was developed to investigate the residual stresses and distortion in AAM processes. A sequentially coupled, thermomechanical, finite element model was developed and validated experimentally. The results showed the temperature distribution, residual stress, and deformation within the formed deposits and substrates. A mesoscale model was developed to include heat transfer, phase change with a mushy zone, incompressible free surface flow, solute redistribution, and surface tension. Because of the excessive computing time needed, a parallel computing approach was also tested. In addition

  17. Addition Table of Colours: Additive and Subtractive Mixtures Described Using a Single Reasoning Model

    ERIC Educational Resources Information Center

    Mota, A. R.; Lopes dos Santos, J. M. B.

    2014-01-01

    Students' misconceptions concerning colour phenomena and the apparent complexity of the underlying concepts--due to the different domains of knowledge involved--make its teaching very difficult. We have developed and tested a teaching device, the addition table of colours (ATC), that encompasses additive and subtractive mixtures in a single…

  18. Sensitivity analysis of geometric errors in additive manufacturing medical models.

    PubMed

    Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian

    2015-03-01

    Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms.

  19. Additive Manufacturing of Medical Models--Applications in Rhinology.

    PubMed

    Raos, Pero; Klapan, Ivica; Galeta, Tomislav

    2015-09-01

    In this paper we introduce guidelines and suggestions for the use of 3D image processing software in head pathology diagnostics, and procedures for obtaining a physical medical model by additive manufacturing/rapid prototyping techniques, bearing in mind the improvement of surgical performance, maximum patient safety, and faster postoperative recovery. This approach has been verified in two case reports. In the treatment we used intelligent classifier schemes for abnormal patterns with a computer-based system for 3D-virtual and endoscopic assistance in rhinology, with appropriate visualization of anatomy and pathology within the nose, paranasal sinuses, and skull base area.

  20. Multiscale Modeling of Powder Bed-Based Additive Manufacturing

    NASA Astrophysics Data System (ADS)

    Markl, Matthias; Körner, Carolin

    2016-07-01

    Powder bed fusion processes are additive manufacturing technologies that are expected to induce the third industrial revolution. Components are built up layer by layer in a powder bed by selectively melting confined areas, according to sliced 3D model data. This technique allows for manufacturing of highly complex geometries hardly machinable with conventional technologies. However, the underlying physical phenomena are sparsely understood and difficult to observe during processing. Therefore, an intensive and expensive trial-and-error principle is applied to produce components with the desired dimensional accuracy, material characteristics, and mechanical properties. This review presents numerical modeling approaches on multiple length scales and timescales to describe different aspects of powder bed fusion processes. In combination with tailored experiments, the numerical results enlarge the process understanding of the underlying physical mechanisms and support the development of suitable process strategies and component topologies.

  1. Phase space reduction and Poisson structure

    NASA Astrophysics Data System (ADS)

    Zaalani, Nadhem

    1999-07-01

    Let (P,π,B,G) be a G-principal fiber bundle. The action of G on the cotangent bundle T*P is free and Hamiltonian. By Libermann and Marle [Symplectic Geometry and Analytical Mechanics (Reidel, Dordrecht, 1987)] and Marsden and Ratiu [Lett. Math. Phys. 11, 161 (1981)] the quotient space T*P/G is a Poisson manifold. We will determine the Poisson bracket on the reduced Poisson manifold T*P/G, and its symplectic leaves.

  2. Additive Functions in Boolean Models of Gene Regulatory Network Modules

    PubMed Central

    Darabos, Christian; Di Cunto, Ferdinando; Tomassini, Marco; Moore, Jason H.; Provero, Paolo; Giacobini, Mario

    2011-01-01

    Gene-on-gene regulations are key components of every living organism. Dynamical abstract models of genetic regulatory networks help explain the genome's evolvability and robustness. These properties can be attributed to the structural topology of the graph formed by genes, as vertices, and regulatory interactions, as edges. Moreover, the actual gene interaction of each gene is believed to play a key role in the stability of the structure. With advances in biology, some effort was devoted to developing update functions in Boolean models that include recent knowledge. We combine real-life gene interaction networks with novel update functions in a Boolean model. We use two sub-networks of biological organisms, the yeast cell-cycle and the mouse embryonic stem cell, as topological support for our system. On these structures, we substitute the original random update functions by a novel threshold-based dynamic function in which the promoting and repressing effect of each interaction is considered. We use a third real-life regulatory network, along with its inferred Boolean update functions, to validate the proposed update function. Results of this validation hint at increased biological plausibility of the threshold-based function. To investigate the dynamical behavior of this new model, we visualized the phase transition between order and chaos into the critical regime using Derrida plots. We complement the qualitative nature of Derrida plots with an alternative measure, the criticality distance, that also allows one to discriminate between regimes in a quantitative way. Simulations on both real-life genetic regulatory networks show that there exists a set of parameters that allows the systems to operate in the critical region. This new model includes experimentally derived biological information and recent discoveries, which makes it potentially useful to guide experimental research. The update function confers additional realism to the model, while reducing the complexity.
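
    A minimal sketch of a threshold-style Boolean update of this general kind is given below. The signed interaction weights, the random wiring and the keep-state-on-tie convention are assumptions of this sketch; the paper's exact rule and tie handling may differ.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 20
    # signed interaction matrix: +1 promoting, -1 repressing, 0 no edge
    W = rng.choice([-1, 0, 0, 1], size=(n, n))

    def step(x, W):
        """Threshold update: a gene turns on if its summed promoting
        inputs exceed the repressing ones, turns off if they fall short,
        and keeps its state on a tie (one common convention)."""
        s = W @ x
        return np.where(s > 0, 1, np.where(s < 0, 0, x))

    x = rng.integers(0, 2, n)          # random initial Boolean state
    for _ in range(30):                # synchronous updates
        x = step(x, W)
    print(x)
    ```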

  4. WATEQ3 geochemical model: thermodynamic data for several additional solids

    SciTech Connect

    Krupka, K.M.; Jenne, E.A.

    1982-09-01

    Geochemical models such as WATEQ3 can be used to model the concentrations of water-soluble pollutants that may result from the disposal of nuclear waste and retorted oil shale. However, for a model to competently deal with these water-soluble pollutants, an adequate thermodynamic data base must be provided that includes elements identified as important in modeling these pollutants. To this end, several minerals and related solid phases were identified that were absent from the thermodynamic data base of WATEQ3. In this study, the thermodynamic data for the identified solids were compiled and selected from several published tabulations of thermodynamic data. For these solids, an accepted Gibbs free energy of formation, ΔG°f,298, was selected for each solid phase based on the recentness of the tabulated data and on considerations of internal consistency with respect to both the published tabulations and the existing data in WATEQ3. For those solids not included in these published tabulations, Gibbs free energies of formation were calculated from published solubility data (e.g., lepidocrocite), or were estimated (e.g., nontronite) using a free-energy summation method described by Mattigod and Sposito (1978). The accepted or estimated free energies were then combined with internally consistent, ancillary thermodynamic data to calculate equilibrium constants for the hydrolysis reactions of these minerals and related solid phases. Including these values in the WATEQ3 data base increased the competency of this geochemical model in applications associated with the disposal of nuclear waste and retorted oil shale. Additional minerals and related solid phases that need to be added to the solubility submodel will be identified as modeling applications continue in these two programs.

  5. Sub-Poisson-binomial light

    NASA Astrophysics Data System (ADS)

    Lee, Changhyoup; Ferrari, Simone; Pernice, Wolfram H. P.; Rockstuhl, Carsten

    2016-11-01

    We introduce a general parameter QPB that provides an experimentally accessible nonclassicality measure for light. The parameter is quantified by the click statistics obtained from on-off detectors in a general multiplexing detection setup. Sub-Poisson-binomial statistics, observed by QPB < 0, indicate that a given state of light is nonclassical. Our parameter replaces the binomial parameter QB for more general cases, where any imbalance among the multiplexed modes is allowed, thus enabling the use of arbitrary multiplexing schemes. The significance of the parameter QPB is theoretically examined in a measurement setup that consists only of a ring resonator and a single on-off detector. The proposed setup exploits minimal experimental resources and is geared towards a fully integrated quantum nanophotonic circuit. The results show that nonclassical features remain noticeable even in the presence of significant losses, rendering our nonclassicality test more practical and sufficiently flexible to be used in various nanophotonic platforms.

  6. Double-Negative Mechanical Metamaterials Displaying Simultaneous Negative Stiffness and Negative Poisson's Ratio Properties.

    PubMed

    Hewage, Trishan A M; Alderson, Kim L; Alderson, Andrew; Scarpa, Fabrizio

    2016-12-01

    A scalable mechanical metamaterial simultaneously displaying negative stiffness and negative Poisson's ratio responses is presented. Interlocking hexagonal subunit assemblies containing three alternative embedded negative stiffness (NS) element types display Poisson's ratio values of -1 and NS values spanning two orders of magnitude (-1.4 N mm⁻¹ to -160 N mm⁻¹), in good agreement with model predictions.

  7. Estimation of propensity scores using generalized additive models.

    PubMed

    Woo, Mi-Ja; Reiter, Jerome P; Karr, Alan F

    2008-08-30

    Propensity score matching is often used in observational studies to create treatment and control groups with similar distributions of observed covariates. Typically, propensity scores are estimated using logistic regressions that assume linearity between the logistic link and the predictors. We evaluate the use of generalized additive models (GAMs) for estimating propensity scores. We compare logistic regressions and GAMs in terms of balancing covariates using simulation studies with artificial and genuine data. We find that, when the distributions of covariates in the treatment and control groups overlap sufficiently, using GAMs can improve overall covariate balance, especially for higher-order moments of distributions. When the distributions in the two groups overlap insufficiently, GAM more clearly reveals this fact than logistic regression does. We also demonstrate via simulation that matching with GAMs can result in larger reductions in bias when estimating treatment effects than matching with logistic regression.
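
    As a rough illustration of the idea, the sketch below estimates propensity scores with a GAM-like model built from spline basis expansions and a logistic link, here via scikit-learn's SplineTransformer rather than the GAM software used in the paper; the data-generating model is invented for the example.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import SplineTransformer
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(42)
    n = 2000
    X = rng.normal(size=(n, 2))
    # treatment assignment depends nonlinearly on the covariates
    logit = 0.5 * X[:, 0] ** 2 - np.sin(2 * X[:, 1])
    t = rng.random(n) < 1 / (1 + np.exp(-logit))

    # GAM-like propensity model: per-covariate spline smooths + logistic link
    gam = make_pipeline(SplineTransformer(degree=3, n_knots=7),
                        LogisticRegression(max_iter=1000))
    gam.fit(X, t)
    e_hat = gam.predict_proba(X)[:, 1]   # estimated propensity scores
    print(e_hat[:5])
    ```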

  8. [Critical of the additive model of the randomized controlled trial].

    PubMed

    Boussageon, Rémy; Gueyffier, François; Bejan-Angoulvant, Theodora; Felden-Dominiak, Géraldine

    2008-01-01

    Randomized, double-blind, placebo-controlled clinical trials are currently the best way to demonstrate the clinical effectiveness of drugs. Its methodology relies on the method of difference (John Stuart Mill), through which the observed difference between two groups (drug vs placebo) can be attributed to the pharmacological effect of the drug being tested. However, this additive model can be questioned in the event of statistical interactions between the pharmacological and the placebo effects. Evidence in different domains has shown that the placebo effect can influence the effect of the active principle. This article evaluates the methodological, clinical and epistemological consequences of this phenomenon. Topics treated include extrapolating results, accounting for heterogeneous results, demonstrating the existence of several factors in the placebo effect, the necessity to take these factors into account for given symptoms or pathologies, as well as the problem of the "specific" effect.

  9. On Quantization of Quadratic Poisson Structures

    NASA Astrophysics Data System (ADS)

    Manchon, D.; Masmoudi, M.; Roux, A.

    Any classical r-matrix on the Lie algebra of linear operators on a real vector space V gives rise to a quadratic Poisson structure on V which admits a deformation quantization stemming from the construction of V. Drinfel'd [Dr], [Gr]. We exhibit in this article an example of quadratic Poisson structure which does not arise this way.

  10. Alternative Derivations for the Poisson Integral Formula

    ERIC Educational Resources Information Center

    Chen, J. T.; Wu, C. S.

    2006-01-01

    The Poisson integral formula is revisited. The kernel in the Poisson integral formula can be derived in a series form through the direct BEM, free of the concept of an image point, by using the null-field integral equation in conjunction with the degenerate kernels. The degenerate kernels for the closed-form Green's function and the series form of Poisson…

  11. A Three-dimensional Polymer Scaffolding Material Exhibiting a Zero Poisson's Ratio.

    PubMed

    Soman, Pranav; Fozdar, David Y; Lee, Jin Woo; Phadke, Ameya; Varghese, Shyni; Chen, Shaochen

    2012-05-14

    Poisson's ratio describes the degree to which a material contracts (expands) transversally when axially strained. A material with a zero Poisson's ratio does not transversally deform in response to an axial strain (stretching). In tissue engineering applications, scaffolding having a zero Poisson's ratio (ZPR) may be more suitable for emulating the behavior of native tissues and accommodating and transmitting forces to the host tissue site during wound healing (or tissue regrowth). For example, scaffolding with a zero Poisson's ratio may be beneficial in the engineering of cartilage, ligament, corneal, and brain tissues, which are known to possess Poisson's ratios of nearly zero. Here, we report a 3D biomaterial constructed from polyethylene glycol (PEG) exhibiting in-plane Poisson's ratios of zero for large values of axial strain. We use digital micro-mirror device projection printing (DMD-PP) to create single- and double-layer scaffolds composed of semi re-entrant pores whose arrangement and deformation mechanisms contribute to the zero Poisson's ratio. Strain experiments confirm the zero-Poisson's-ratio behavior of the scaffolds and show that the addition of layers does not change the Poisson's ratio. Human mesenchymal stem cells (hMSCs) cultured on biomaterials with zero Poisson's ratio demonstrate the feasibility of utilizing these novel materials for biological applications which require little to no transverse deformations resulting from axial strains. Techniques used in this work allow Poisson's ratio to be both scale-independent and independent of the choice of strut material for strains in the elastic regime, and therefore ZPR behavior can be imparted to a variety of photocurable biomaterials.

  12. Negative Poisson's Ratio in Single-Layer Graphene Ribbons.

    PubMed

    Jiang, Jin-Wu; Park, Harold S

    2016-04-13

    The Poisson's ratio characterizes the resultant strain in the lateral direction for a material under longitudinal deformation. Though negative Poisson's ratios (NPR) are theoretically possible within continuum elasticity, they are most frequently observed in engineered materials and structures, as they are not intrinsic to many materials. In this work, we report NPR in single-layer graphene ribbons, which results from the compressive edge stress induced warping of the edges. The effect is robust, as the NPR is observed for graphene ribbons with widths smaller than about 10 nm, and for tensile strains smaller than about 0.5% with NPR values reaching as large as -1.51. The NPR is explained analytically using an inclined plate model, which is able to predict the Poisson's ratio for graphene sheets of arbitrary size. The inclined plate model demonstrates that the NPR is governed by the interplay between the width (a bulk property), and the warping amplitude of the edge (an edge property), which eventually yields a phase diagram determining the sign of the Poisson's ratio as a function of the graphene geometry.

  13. Universality of Poisson indicator and Fano factor of transport event statistics in ion channels and enzyme kinetics.

    PubMed

    Chaudhury, Srabanti; Cao, Jianshu; Sinitsyn, Nikolai A

    2013-01-17

    We consider a generic stochastic model of ion transport through a single channel with arbitrary internal structure and kinetic rates of transitions between internal states. This model is also applicable to describe the kinetics of a class of enzymes in which turnover events correspond to conversion of substrate into product by a single enzyme molecule. We show that measurement of the statistics of single-molecule transition times through the channel contains only restricted information about the internal structure of the channel. In particular, the most accessible flux fluctuation characteristics, such as the Poisson indicator (P) and the Fano factor (F) as functions of solute concentration, depend only on three parameters in addition to the parameters of the Michaelis-Menten curve that characterizes the average current through the channel. Nevertheless, measurement of the Poisson indicator or Fano factor for such renewal processes can discriminate reactions with multiple intermediate steps as well as provide valuable information about the internal kinetic rates.
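
    The flavor of the result can be reproduced numerically: for a renewal process whose waiting time is the sum of two exponential sub-steps (a hypothetical two-intermediate channel), the event counts are sub-Poissonian and the Fano factor reveals the hidden intermediate. The definitions assumed below are F = Var(N)/⟨N⟩ for counts in a window, with the Poisson indicator taken as P = F − 1.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Two-step turnover: each event needs two sequential exponential
    # sub-steps (rates k1, k2) -> a renewal process with CV^2 < 1.
    k1, k2, n_events = 4.0, 6.0, 200_000
    waits = rng.exponential(1/k1, n_events) + rng.exponential(1/k2, n_events)
    arrivals = np.cumsum(waits)

    # count events in fixed windows, then estimate F and P
    T, window = arrivals[-1], 50.0
    counts = np.histogram(arrivals, bins=np.arange(0, T, window))[0]
    F = counts.var() / counts.mean()
    P = F - 1.0    # Poisson indicator, assuming the definition P = F - 1
    print(F, P)    # F < 1: sub-Poissonian, exposing the intermediate step
    ```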

  14. Negative Poisson's ratio in rippled graphene.

    PubMed

    Qin, Huasong; Sun, Yu; Liu, Jefferson Zhe; Li, Mengjie; Liu, Yilun

    2017-03-10

    In this work, we perform molecular dynamics (MD) simulations to study the effect of rippling on the Poisson's ratio of graphene. Due to the atomic-scale thickness of graphene, out-of-plane ripples are generated in free-standing graphene with topological defects (e.g. heptagons and pentagons) to release the in-plane deformation energy. Through MD simulations, we have found that the Poisson's ratio of rippled graphene decreases upon increasing its aspect ratio η (amplitude over wavelength). For the rippled graphene sheet with η = 0.188, a negative Poisson's ratio of -0.38 is observed for a tensile strain up to 8%, while the Poisson's ratio for η = 0.066 is almost zero. During uniaxial tension, the ripples gradually become flat, thus the Poisson's ratio of rippled graphene is determined by the competing factors of the intrinsic positive Poisson's ratio of graphene and the negative Poisson's ratio due to the de-wrinkling effect. In addition, rippled graphene exhibits excellent fracture strength and toughness. With the combination of its auxetic and excellent mechanical properties, rippled graphene may possess potential for application in nano-devices and nanomaterials.

  15. Wavelet-based Poisson rate estimation using the Skellam distribution

    NASA Astrophysics Data System (ADS)

    Hirakawa, Keigo; Baqai, Farhan; Wolfe, Patrick J.

    2009-02-01

    Owing to the stochastic nature of discrete processes such as photon counts in imaging, real-world data measurements often exhibit heteroscedastic behavior. In particular, time series components and other measurements may frequently be assumed to be non-iid Poisson random variables whose rate parameter is proportional to the underlying signal of interest, as witnessed by the literature in digital communications, signal processing, astronomy, and magnetic resonance imaging applications. In this work, we show that certain wavelet and filterbank transform coefficients corresponding to vector-valued measurements of this type are distributed as sums and differences of independent Poisson counts, taking the so-called Skellam distribution. While exact estimates rarely admit analytical forms, we present Skellam mean estimators under both frequentist and Bayes models, as well as computationally efficient approximations and shrinkage rules, that may be interpreted as Poisson rate estimation methods performed in certain wavelet/filterbank transform domains. This indicates a promising potential approach for denoising of Poisson counts in the above-mentioned applications.
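
    The key distributional fact is easy to check numerically: the difference of two independent Poisson counts, which is what a Haar-type detail coefficient of Poisson data amounts to, follows the Skellam law. A minimal check using scipy (the rates are arbitrary):

    ```python
    import numpy as np
    from scipy.stats import skellam

    rng = np.random.default_rng(0)
    lam1, lam2, n = 7.0, 3.0, 100_000

    # Haar-like detail coefficient: difference of independent Poisson counts
    d = rng.poisson(lam1, n) - rng.poisson(lam2, n)

    # empirical law vs. the Skellam pmf with means (lam1, lam2)
    ks = np.arange(-10, 21)
    emp = np.array([(d == k).mean() for k in ks])
    print(np.max(np.abs(emp - skellam.pmf(ks, lam1, lam2))))  # ~ sampling noise
    ```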

  16. Percolation model with an additional source of disorder

    NASA Astrophysics Data System (ADS)

    Kundu, Sumanta; Manna, S. S.

    2016-06-01

    The ranges of transmission of the mobiles in a mobile ad hoc network are not uniform in reality. They are affected by temperature fluctuations in the air, obstruction due to solid objects, even humidity differences in the environment, etc. How the varying transmission range of the individual active elements affects the global connectivity of the network is an important practical question to ask. Here a model of percolation phenomena, with an additional source of disorder, is introduced for a theoretical understanding of this problem. As in ordinary percolation, sites of a square lattice are occupied randomly with probability p. Each occupied site is then assigned a circular disk of random value R for its radius. A bond is defined to be occupied if and only if the radii R1 and R2 of the disks centered at the ends satisfy a certain predefined condition. In a very general formulation, one divides the R1-R2 plane into two regions by an arbitrary closed curve. One defines a point within one region as representing an occupied bond; otherwise it is a vacant bond. The study of three different rules under this general formulation indicates that the percolation threshold always varies continuously. This threshold has two limiting values, one is pc(sq), the percolation threshold for ordinary site percolation on the square lattice, and the other is unity. The approach of the percolation threshold to its limiting values is characterized by two exponents. In a special case, all lattice sites are occupied by disks of random radii R ∈ {0, R0} and a percolation transition is observed with R0 as the control variable, similar to the site occupation probability.
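
    A minimal Monte Carlo sketch of this kind of model is given below. The specific bond rule used (a bond is open when the two disk radii satisfy R1 + R2 ≥ 1) is only one illustrative choice of the closed-curve condition, not necessarily one of the three rules studied in the paper; spanning is tested top-to-bottom with a breadth-first search.

    ```python
    import numpy as np
    from collections import deque

    rng = np.random.default_rng(0)

    def spans(L, p, R0, rule=lambda r1, r2: r1 + r2 >= 1.0):
        """Site percolation on an L x L lattice where each occupied site
        carries a random disk radius; a bond between neighboring occupied
        sites is open iff rule(R1, R2) holds. Returns True if an open
        cluster connects the top row to the bottom row."""
        occ = rng.random((L, L)) < p
        R = rng.random((L, L)) * R0        # disk radii, uniform on [0, R0)
        seen = np.zeros((L, L), bool)
        q = deque((0, j) for j in range(L) if occ[0, j])
        for c in q:
            seen[c] = True
        while q:
            i, j = q.popleft()
            if i == L - 1:
                return True
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < L and 0 <= nj < L and occ[ni, nj] \
                   and not seen[ni, nj] and rule(R[i, j], R[ni, nj]):
                    seen[ni, nj] = True
                    q.append((ni, nj))
        return False

    # crude scan of the spanning probability versus p
    for p in (0.55, 0.65, 0.75, 0.85):
        hits = sum(spans(64, p, R0=1.2) for _ in range(50))
        print(p, hits / 50)
    ```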

  17. Negative Poisson's ratio materials via isotropic interactions.

    PubMed

    Rechtsman, Mikael C; Stillinger, Frank H; Torquato, Salvatore

    2008-08-22

    We show that under tension a classical many-body system with only isotropic pair interactions in a crystalline state can, counterintuitively, have a negative Poisson's ratio, or auxetic behavior. We derive the conditions under which the triangular lattice in two dimensions and lattices with cubic symmetry in three dimensions exhibit a negative Poisson's ratio. In the former case, the simple Lennard-Jones potential can give rise to auxetic behavior. In the latter case, a negative Poisson's ratio can be exhibited even when the material is constrained to be elastically isotropic.

  18. Hyperbolic value addition and general models of animal choice.

    PubMed

    Mazur, J E

    2001-01-01

    Three mathematical models of choice--the contextual-choice model (R. Grace, 1994), delay-reduction theory (N. Squires & E. Fantino, 1971), and a new model called the hyperbolic value-added model--were compared in their ability to predict the results from a wide variety of experiments with animal subjects. When supplied with 2 or 3 free parameters, all 3 models made fairly accurate predictions for a large set of experiments that used concurrent-chain procedures. One advantage of the hyperbolic value-added model is that it is derived from a simpler model that makes accurate predictions for many experiments using discrete-trial adjusting-delay procedures. Some results favor the hyperbolic value-added model and delay-reduction theory over the contextual-choice model, but more data are needed from choice situations for which the models make distinctly different predictions.
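
    The simpler model referred to is Mazur's hyperbolic-decay equation for the value V of a reward of amount A delayed by D, with a free discounting parameter K:

    ```latex
    V = \frac{A}{1 + KD}
    ```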

  19. Soft elasticity of RNA gels and negative Poisson ratio.

    PubMed

    Ahsan, Amir; Rudnick, Joseph; Bruinsma, Robijn

    2007-12-01

    We propose a model for the elastic properties of RNA gels. The model predicts anomalous elastic properties in the form of a negative Poisson ratio and shape instabilities. The anomalous elasticity is generated by the non-Gaussian force-deformation relation of single-stranded RNA. The effect is greatly magnified by broken rotational symmetry produced by double-stranded sequences and the concomitant soft modes of uniaxial elastomers.

  20. Negative Poisson's Ratio in Modern Functional Materials.

    PubMed

    Huang, Chuanwei; Chen, Lang

    2016-10-01

    Materials with a negative Poisson's ratio attract considerable attention due to their underlying intriguing physical properties and numerous promising applications, particularly in stringent environments such as aerospace and defense, because of their unconventional mechanical enhancements. Recent progress in materials with a negative Poisson's ratio is reviewed here, covering the current state of research in both theory and experiment. The inter-relationship between the underlying structure and a negative Poisson's ratio is discussed for functional materials, including macroscopic bulk, low-dimensional nanoscale particles, films, sheets, and tubes. The coexistence of and correlations with other negative indexes (such as negative compressibility and negative thermal expansion) are also addressed. Finally, open questions and future research opportunities are proposed for functional materials with negative Poisson's ratios.

  1. Poisson-weighted Lindley distribution and its application on insurance claim data

    NASA Astrophysics Data System (ADS)

    Manesh, Somayeh Nik; Hamzah, Nor Aishah; Zamani, Hossein

    2014-07-01

    This paper introduces a new two-parameter mixed Poisson distribution, namely the Poisson-weighted Lindley (P-WL), which is obtained by mixing the Poisson with a new class of weighted Lindley distributions. The closed form, the moment generating function and the probability generating function are derived. Parameter estimation by the method of moments and the maximum likelihood procedure is provided. Some simulation studies are conducted to investigate the performance of the P-WL distribution. In addition, the compound P-WL distribution is derived, and some applications to the insurance area, based on observations of the number of claims and of the total amount of claims incurred, are illustrated.
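
    Mixed Poisson sampling is straightforward once the mixing law can be sampled. The sketch below uses the plain (unweighted) Lindley distribution, via its standard two-component mixture representation, as a stand-in for the weighted Lindley of the paper, and shows the resulting overdispersion (variance exceeding the mean):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def rlindley(theta, size):
        """Sample the (unweighted) Lindley distribution via its well-known
        mixture form: Exp(theta) w.p. theta/(1+theta), else Gamma(2, theta).
        The weighted Lindley used for P-WL generalizes this construction."""
        u = rng.random(size)
        expo = rng.exponential(1/theta, size)
        gam = rng.gamma(2.0, 1/theta, size)
        return np.where(u < theta/(1+theta), expo, gam)

    theta, n = 1.5, 200_000
    lam = rlindley(theta, n)             # random mixing rates
    counts = rng.poisson(lam)            # mixed Poisson "claim counts"
    print(counts.mean(), counts.var())   # variance > mean: overdispersion
    ```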

  2. Causal Poisson bracket via deformation quantization

    NASA Astrophysics Data System (ADS)

    Berra-Montiel, Jasel; Molgado, Alberto; Palacios-García, César D.

    2016-06-01

    Starting with the well-defined product of quantum fields at two spacetime points, we explore an associated Poisson structure for classical field theories within the deformation quantization formalism. We realize that the induced star-product is naturally related to the standard Moyal product through appropriate causal Green’s functions connecting points in the space of classical solutions to the equations of motion. Our results resemble the Peierls-DeWitt bracket that has been analyzed in the multisymplectic context. Once our star-product is defined, we are able to apply the Wigner-Weyl map in order to introduce a generalized version of Wick’s theorem. Finally, we include some examples to explicitly test our method: the real scalar field, the bosonic string and a physically motivated nonlinear particle model. For the field theoretic models, we have encountered causal generalizations of the creation/annihilation relations, and also a causal generalization of the Virasoro algebra for the bosonic string. For the nonlinear particle case, we use the approximate solution in terms of the Green’s function, in order to construct a well-behaved causal bracket.
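
    For reference, the standard Moyal star-product on a phase space (q, p), to which the causal star-product of the paper is related, reads:

    ```latex
    (f \star g)(q,p) \;=\; f(q,p)\,
    \exp\!\left[ \frac{i\hbar}{2}
    \left( \overleftarrow{\partial_q}\,\overrightarrow{\partial_p}
         - \overleftarrow{\partial_p}\,\overrightarrow{\partial_q} \right) \right]
    g(q,p)
    ```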

  3. Using Generalized Additive Models to Analyze Single-Case Designs

    ERIC Educational Resources Information Center

    Shadish, William; Sullivan, Kristynn

    2013-01-01

    Many analyses for single-case designs (SCDs)--including nearly all the effect size indicators-- currently assume no trend in the data. Regression and multilevel models allow for trend, but usually test only linear trend and have no principled way of knowing if higher order trends should be represented in the model. This paper shows how Generalized…

  4. Classification of four-dimensional real Lie bialgebras of symplectic type and their Poisson-Lie groups

    NASA Astrophysics Data System (ADS)

    Abedi-Fardad, J.; Rezaei-Aghdam, A.; Haghighatdoost, Gh.

    2017-01-01

    We classify all four-dimensional real Lie bialgebras of symplectic type and obtain the classical r-matrices for these Lie bialgebras and Poisson structures on all the associated four-dimensional Poisson-Lie groups. We obtain some new integrable models where a Poisson-Lie group plays the role of the phase space and its dual Lie group plays the role of the symmetry group of the system.

  5. A dictionary learning approach for Poisson image deblurring.

    PubMed

    Ma, Liyan; Moisan, Lionel; Yu, Jian; Zeng, Tieyong

    2013-07-01

    The restoration of images corrupted by blur and Poisson noise is a key issue in medical and biological image processing. While most existing methods are based on variational models, generally derived from a maximum a posteriori (MAP) formulation, sparse representations of images have recently been shown to be efficient approaches for image recovery. Following this idea, we propose in this paper a model containing three terms: a patch-based sparse representation prior over a learned dictionary, the pixel-based total variation regularization term and a data-fidelity term capturing the statistics of Poisson noise. The resulting optimization problem can be solved by an alternating minimization technique combined with variable splitting. Extensive experimental results suggest that in terms of visual quality, peak signal-to-noise ratio value and the method noise, the proposed algorithm outperforms state-of-the-art methods.
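
    Schematically, a three-term model of the type described combines a Poisson log-likelihood, total variation, and a patch-dictionary prior. In the notation below (which is assumed for illustration, not taken from the paper), H is the blur operator, f the observed counts, R_p the patch-extraction operator, D the learned dictionary and α_p the sparse patch codes:

    ```latex
    \min_{u,\,\alpha}\;
    \sum_i \bigl[ (Hu)_i - f_i \log (Hu)_i \bigr]
    \;+\; \lambda\,\mathrm{TV}(u)
    \;+\; \mu \sum_p \bigl\| R_p u - D\alpha_p \bigr\|_2^2
    \quad\text{s.t.}\quad \|\alpha_p\|_0 \le s
    ```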

  6. Additive Manufacturing of Anatomical Models from Computed Tomography Scan Data.

    PubMed

    Gür, Y

    2014-12-01

    The purpose of the study presented here was to investigate the manufacturability of human anatomical models from Computed Tomography (CT) scan data via a 3D desktop printer which uses fused deposition modelling (FDM) technology. First, Digital Imaging and Communications in Medicine (DICOM) CT scan data were converted to 3D Standard Triangle Language (STL) format by using the InVesalius digital imaging program. Once this STL file is obtained, a 3D physical version of the anatomical model can be fabricated by a desktop 3D FDM printer. As a case study, a patient's skull CT scan data was considered, and a tangible version of the skull was manufactured by a 3D FDM desktop printer. During the 3D printing process, the skull was built using acrylonitrile-butadiene-styrene (ABS) co-polymer plastic. The printed model showed that 3D FDM printing technology is able to fabricate anatomical models with high accuracy. As a result, the skull model can be used for preoperative surgical planning, medical training activities, and implant design and simulation, showing the potential of FDM technology in the medical field. It will also improve communication between medical staff and patients. The current results indicate that a 3D desktop printer which uses FDM technology can be used to obtain accurate anatomical models.
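
    The thresholding-plus-triangulation step of such a pipeline can be sketched with common open-source tools, e.g. scikit-image's marching cubes and the numpy-stl package; the library choices and the synthetic volume below are this sketch's assumptions, not the study's actual toolchain.

    ```python
    import numpy as np
    from skimage import measure
    from stl import mesh   # numpy-stl package

    # volume: a 3D array of CT intensities (here a synthetic ball);
    # in practice it would come from a stack of DICOM slices.
    z, y, x = np.mgrid[-32:32, -32:32, -32:32]
    volume = (np.sqrt(x**2 + y**2 + z**2) < 20).astype(float)

    # segmentation by a global threshold, then surface triangulation
    verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)

    # pack the triangles into an STL mesh and save it
    m = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
    for i, f in enumerate(faces):
        m.vectors[i] = verts[f]
    m.save('model.stl')
    ```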

  7. Universal Poisson Statistics of mRNAs with Complex Decay Pathways.

    PubMed

    Thattai, Mukund

    2016-01-19

    Messenger RNA (mRNA) dynamics in single cells are often modeled as a memoryless birth-death process with a constant probability per unit time that an mRNA molecule is synthesized or degraded. This predicts a Poisson steady-state distribution of mRNA number, in close agreement with experiments. This is surprising, since mRNA decay is known to be a complex process. The paradox is resolved by realizing that the Poisson steady state generalizes to arbitrary mRNA lifetime distributions. A mapping between mRNA dynamics and queueing theory highlights an identifiability problem: a measured Poisson steady state is consistent with a large variety of microscopic models. Here, I provide a rigorous and intuitive explanation for the universality of the Poisson steady state. I show that the mRNA birth-death process and its complex decay variants all take the form of the familiar Poisson law of rare events, under a nonlinear rescaling of time. As a corollary, not only steady-states but also transients are Poisson distributed. Deviations from the Poisson form occur only under two conditions, promoter fluctuations leading to transcriptional bursts or nonindependent degradation of mRNA molecules. These results place severe limits on the power of single-cell experiments to probe microscopic mechanisms, and they highlight the need for single-molecule measurements.
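
    The universality claim is essentially the M/G/∞ queue result, and it is easy to probe by simulation: Poisson synthesis plus independent lifetimes of arbitrary distribution still yields a Poisson copy number. A minimal sketch with lognormal (decidedly non-exponential) lifetimes, parameters arbitrary:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    k, T, n_cells = 5.0, 50.0, 20_000   # synthesis rate, observation time

    counts = np.empty(n_cells, int)
    for c in range(n_cells):
        # Poisson synthesis events on [0, T]
        births = rng.uniform(0, T, rng.poisson(k * T))
        # complex, non-exponential lifetimes (lognormal, mean ~1)
        lives = rng.lognormal(mean=-0.5, sigma=1.0, size=births.size)
        counts[c] = np.sum(births + lives > T)  # molecules alive at time T

    # M/G/infinity: the steady state is Poisson regardless of lifetime law
    print(counts.mean(), counts.var())          # variance ~ mean, Fano ~ 1
    ```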

  8. The mechanical influences of the graded distribution in the cross-sectional shape, the stiffness and Poisson's ratio of palm branches.

    PubMed

    Liu, Wangyu; Wang, Ningling; Jiang, Xiaoyong; Peng, Yujian

    2016-07-01

    The branching system plays an important role in maintaining the survival of palm trees. Due to the nature of monocots, no additional vascular bundles can be added to the palm tree tissue as it ages. Therefore, the changing cross-sectional area of the palm branch creates a graded distribution in the mechanical properties of the tissue. In the present work, this graded distribution in the tissue mechanical properties from sheath to petiole was studied with a multi-scale modeling approach. Then, the entire palm branch was reconstructed and analyzed using finite element methods. The variation of the elastic modulus can lower the level of mechanical stress in the sheath and also allows the branch to exert smaller pressures on the other branches. Under impact loading, the enhanced frictional dissipation at the surfaces of adjacent branches benefits from the large Poisson's ratio of the sheath tissue. These findings can help to link the wind resistance ability of palm trees to the graded materials distribution in their branching system.

  9. Non-additive model for specific heat of electrons

    NASA Astrophysics Data System (ADS)

    Anselmo, D. H. A. L.; Vasconcelos, M. S.; Silva, R.; Mello, V. D.

    2016-10-01

    By using non-additive Tsallis entropy we demonstrate numerically that one-dimensional quasicrystals, whose energy spectra are multifractal Cantor sets, are characterized by an entropic parameter, and calculate the electronic specific heat, where we consider a non-additive entropy Sq. In our method we consider an energy spectrum calculated using the one-dimensional tight-binding Schrödinger equation, with its bands (or levels) scaled onto the [0, 1] interval. The Tsallis formalism is applied to the energy spectra of Fibonacci and double-period one-dimensional quasiperiodic lattices. We analytically obtain an expression for the specific heat that we consider to be more appropriate to calculate this quantity in those quasiperiodic structures.
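
    For reference, the non-additive Tsallis entropy with entropic index q, which recovers the Boltzmann-Gibbs form in the limit q → 1, is

    ```latex
    S_q \;=\; k_B\,\frac{1 - \sum_i p_i^{\,q}}{q - 1},
    \qquad
    \lim_{q \to 1} S_q = -k_B \sum_i p_i \ln p_i
    ```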

  10. Modeling of additive manufacturing processes for metals: Challenges and opportunities

    DOE PAGES

    Francois, Marianne M.; Sun, Amy; King, Wayne E.; ...

    2017-01-09

    With the technology being developed to manufacture metallic parts using increasingly advanced additive manufacturing processes, a new era has opened up for designing novel structural materials, from designing shapes and complex geometries to controlling the microstructure (alloy composition and morphology). The material properties used within specific structural components are also designable in order to meet specific performance requirements that are not imaginable with traditional metal forming and machining (subtractive) techniques.

  11. Events in time: Basic analysis of Poisson data

    SciTech Connect

    Engelhardt, M.E.

    1994-09-01

    The report presents basic statistical methods for analyzing Poisson data, such as the number of events in some period of time. It gives point estimates, confidence intervals, and Bayesian intervals for the rate of occurrence per unit of time. It shows how to compare subsets of the data, both graphically and by statistical tests, and how to look for trends in time. It presents a compound model for when the rate of occurrence varies randomly. Examples and SAS programs are given.
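
    The basic point estimate and the exact (Garwood, chi-square-based) confidence interval for a Poisson rate are simple to compute; here is a sketch in Python rather than SAS, with a made-up event count and observation time:

    ```python
    import numpy as np
    from scipy.stats import chi2

    n, T = 11, 8.2           # events observed over T units of time
    lam_hat = n / T          # point estimate of the occurrence rate

    alpha = 0.05             # exact two-sided 95% interval
    lo = 0.5 * chi2.ppf(alpha / 2, 2 * n) / T          # take 0 when n == 0
    hi = 0.5 * chi2.ppf(1 - alpha / 2, 2 * (n + 1)) / T
    print(lam_hat, (lo, hi))
    ```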

  12. Additional Research Needs to Support the GENII Biosphere Models

    SciTech Connect

    Napier, Bruce A.; Snyder, Sandra F.; Arimescu, Carmen

    2013-11-30

    In the course of evaluating the current parameter needs for the GENII Version 2 code (Snyder et al. 2013), areas of possible improvement for both the data and the underlying models have been identified. As the data review was implemented, PNNL staff identified areas where the models can be improved, both to accommodate the locally significant pathways identified and to incorporate newer models. The areas are general data needs for the existing models and improved formulations for the pathway models. It is recommended that priorities be set by NRC staff to guide selection of the most useful improvements in a cost-effective manner. Suggestions are made based on relatively easy and inexpensive changes, and longer-term more costly studies. In the short term, there are several improved model formulations that could be applied to the GENII suite of codes to make them more generally useful:
    • Implementation of the separation of the translocation and weathering processes
    • Implementation of an improved model for carbon-14 from non-atmospheric sources
    • Implementation of radon exposure pathway models
    • Development of a KML processor for the output report generator module, so that data calculated on a grid could be superimposed upon digital maps for easier presentation and display
    • Implementation of marine mammal models (manatees, seals, walrus, whales, etc.)
    Data needs in the longer term require extensive (and potentially expensive) research. Before picking any one radionuclide or food type, NRC staff should perform an in-house review of current and anticipated environmental analyses to select “dominant” radionuclides of interest to allow setting of cost-effective priorities for radionuclide- and pathway-specific research. These include:
    • soil-to-plant uptake studies for oranges and other citrus fruits, and
    • development of models for evaluation of radionuclide concentrations in highly-processed foods such as oils and sugars.
    Finally, renewed

  13. Addition of a Hydrological Cycle to the EPIC Jupiter Model

    NASA Astrophysics Data System (ADS)

    Dowling, T. E.; Palotai, C. J.

    2002-09-01

    We present a progress report on the development of the EPIC atmospheric model to include clouds, moist convection, and precipitation. Two major goals are: i) to study the influence that convective water clouds have on Jupiter's jets and vortices, such as those to the northwest of the Great Red Spot, and ii) to predict ammonia-cloud evolution for direct comparison to visual images (instead of relying on surrogates for clouds like potential vorticity). Data structures in the model are now set up to handle the vapor, liquid, and solid phases of the most common chemical species in planetary atmospheres. We have adapted the Prather conservation of second-order moments advection scheme to the model, which yields high accuracy for dealing with cloud edges. In collaboration with computer scientists H. Dietz and T. Mattox at the U. Kentucky, we have built a dedicated 40-node parallel computer that achieves 34 Gflops (double precision) at 74 cents per Mflop, and have updated the EPIC-model code to use cache-aware memory layouts and other modern optimizations. The latest test-case results of cloud evolution in the model will be presented. This research is funded by NASA's Planetary Atmospheres and EPSCoR programs.

  14. A generalized Poisson solver for first-principles device simulations

    SciTech Connect

    Bani-Hashemian, Mohammad Hossein; VandeVondele, Joost; Brück, Sascha; Luisier, Mathieu

    2016-01-28

    Electronic structure calculations of atomistic systems based on density functional theory involve solving the Poisson equation. In this paper, we present a plane-wave based algorithm for solving the generalized Poisson equation subject to periodic or homogeneous Neumann conditions on the boundaries of the simulation cell and Dirichlet type conditions imposed at arbitrary subdomains. In this way, source, drain, and gate voltages can be imposed across atomistic models of electronic devices. Dirichlet conditions are enforced as constraints in a variational framework giving rise to a saddle point problem. The resulting system of equations is then solved using a stationary iterative method in which the generalized Poisson operator is preconditioned with the standard Laplace operator. The solver can make use of any sufficiently smooth function modelling the dielectric constant, including density dependent dielectric continuum models. For all the boundary conditions, consistent derivatives are available and molecular dynamics simulations can be performed. The convergence behaviour of the scheme is investigated and its capabilities are demonstrated.
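
    A one-dimensional periodic toy version of the Laplace-preconditioned iteration conveys the idea; this is a sketch of the preconditioning strategy only, under invented parameters, not the paper's plane-wave implementation with Dirichlet constraints.

    ```python
    import numpy as np

    # Solve d/dx( eps(x) dphi/dx ) = rho on a periodic domain by
    # Richardson iteration preconditioned with the constant-coefficient
    # Laplacian, mimicking the Laplace-preconditioned scheme.
    N, L = 256, 2 * np.pi
    x = np.linspace(0, L, N, endpoint=False)
    kv = np.fft.fftfreq(N, d=L / N) * 2 * np.pi
    eps = 2.0 + np.cos(x)        # smooth model dielectric function
    rho = np.sin(3 * x)          # zero-mean source term

    def apply_A(phi):            # generalized Poisson operator, spectrally
        dphi = np.fft.ifft(1j * kv * np.fft.fft(phi)).real
        return np.fft.ifft(1j * kv * np.fft.fft(eps * dphi)).real

    def precond(r):              # inverse Laplacian scaled by mean(eps)
        rh = np.fft.fft(r)
        with np.errstate(divide='ignore', invalid='ignore'):
            ph = np.where(kv != 0, rh / (-kv**2), 0.0)
        return np.fft.ifft(ph).real / eps.mean()

    phi = np.zeros(N)
    for it in range(200):
        res = rho - apply_A(phi)
        if np.max(np.abs(res)) < 1e-10:
            break
        phi += precond(res)
    print(it, np.max(np.abs(rho - apply_A(phi))))
    ```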

  15. Holographic study of conventional and negative Poisson's ratio metallic foams - Elasticity, yield and micro-deformation

    NASA Technical Reports Server (NTRS)

    Chen, C. P.; Lakes, R. S.

    1991-01-01

    An experimental study by holographic interferometry is reported of the following material properties of conventional and negative Poisson's ratio copper foams: Young's moduli, Poisson's ratios, yield strengths and characteristic lengths associated with inhomogeneous deformation. The Young's modulus and yield strength of the conventional copper foam were comparable to those predicted by microstructural modeling on the basis of cellular rib bending. The reentrant copper foam exhibited a negative Poisson's ratio, as indicated by the elliptical contour fringes on the specimen surface in the bending tests. Inhomogeneous, non-affine deformation was observed holographically in both foam materials.

  16. Universal Negative Poisson Ratio of Self-Avoiding Fixed-Connectivity Membranes

    SciTech Connect

    Bowick, M.; Cacciuto, A.; Thorleifsson, G.; Travesset, A.

    2001-10-01

    We determine the Poisson ratio of self-avoiding fixed-connectivity membranes, modeled as impenetrable plaquettes, to be sigma = -0.37(6), in statistical agreement with the Poisson ratio of phantom fixed-connectivity membranes, sigma = -0.32(4). Together with the equality of critical exponents, this result implies a unique universality class for fixed-connectivity membranes. Our findings thus establish that physical fixed-connectivity membranes provide a wide class of auxetic (negative Poisson ratio) materials with significant potential applications in materials science.

  18. Generalized Additive Models, Cubic Splines and Penalized Likelihood.

    DTIC Science & Technology

    1987-05-22

  19. Concentration Addition, Independent Action and Generalized Concentration Addition Models for Mixture Effect Prediction of Sex Hormone Synthesis In Vitro

    PubMed Central

    Hadrup, Niels; Taxvig, Camilla; Pedersen, Mikael; Nellemann, Christine; Hass, Ulla; Vinggaard, Anne Marie

    2013-01-01

    Humans are concomitantly exposed to numerous chemicals. An infinite number of combinations and doses thereof can be imagined. For toxicological risk assessment, the mathematical prediction of mixture effects using knowledge on single chemicals is therefore desirable. We investigated the pros and cons of the concentration addition (CA), independent action (IA) and generalized concentration addition (GCA) models. First, we measured the effects of single chemicals and mixtures thereof on steroid synthesis in H295R cells. Then single-chemical data were applied to the models; predictions of mixture effects were calculated and compared to the experimental mixture data. Mixture 1 contained environmental chemicals adjusted in ratio according to human exposure levels. Mixture 2 was a potency-adjusted mixture containing five pesticides. Prediction of testosterone effects coincided with the experimental Mixture 1 data. In contrast, antagonism was observed for effects of Mixture 2 on this hormone. The mixtures contained chemicals exerting only limited maximal effects. This hampered prediction by the CA and IA models, whereas the GCA model could be used to predict a full dose response curve. Regarding effects on progesterone and estradiol, some chemicals had stimulatory effects whereas others had inhibitory effects. The three models were not applicable in this situation and no predictions could be performed. Finally, the expected contributions of single chemicals to the mixture effects were calculated. Prochloraz was the predominant but not sole driver of the mixtures, suggesting that one chemical alone was not responsible for the mixture effects. In conclusion, the GCA model seemed to be superior to the CA and IA models for the prediction of testosterone effects. A situation with chemicals exerting opposing effects, for which the models could not be applied, was identified. In addition, the data indicate that in non-potency-adjusted mixtures the effects cannot always be predicted.
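
    For reference, the two classical prediction rules read as follows, where c_i is the concentration of chemical i in the mixture, EC_{x,i} its concentration producing effect level x alone, and E(c_i) its fractional effect at concentration c_i:

    ```latex
    \text{CA:}\quad \sum_{i=1}^{n} \frac{c_i}{EC_{x,i}} = 1
    \qquad\qquad
    \text{IA:}\quad E(c_{\mathrm{mix}}) = 1 - \prod_{i=1}^{n} \bigl( 1 - E(c_i) \bigr)
    ```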

  20. Theoretical simulation of the ion current rectification (ICR) in nano-pores based on the Poisson-Nernst-Planck (PNP) model.

    PubMed

    Wang, Jingtao; Zhang, Minghui; Zhai, Jin; Jiang, Lei

    2014-01-07

    Ion current rectification is an important phenomenon for diode-like nano-channels, which can give the ion-selectivity of the channel. Using the PNP model for theoretical simulation can help to investigate the ICR properties, as well as to calculate the rectification ratio and profile of ion concentration along the channel. In this review, we will present the main factors that will influence the ICR effect, which are the surface charge density, the electrolyte solution concentration gradient, and the shape or geometry of the nano-pore. The applications of the PNP model used for the theoretical simulation on these factors will also be discussed.

  1. Technical Work Plan for: Additional Multiscale Thermohydrologic Modeling

    SciTech Connect

    B. Kirstein

    2006-08-24

    The primary objective of Revision 04 of the MSTHM report is to provide TSPA with revised repository-wide MSTHM analyses that incorporate updated percolation flux distributions, revised hydrologic properties, updated IEDs, and information pertaining to the emplacement of transport, aging, and disposal (TAD) canisters. The updated design information is primarily related to the incorporation of TAD canisters, but also includes updates related to superseded IEDs describing emplacement drift cross-sectional geometry and layout. The intended use of the results of Revision 04 of the MSTHM report, as described in this TWP, is to predict the evolution of TH conditions (temperature, relative humidity, liquid-phase saturation, and liquid-phase flux) at specified locations within emplacement drifts and in the adjoining near-field host rock along all emplacement drifts throughout the repository. This information directly supports the TSPA for the nominal and seismic scenarios. The revised repository-wide analyses are required to incorporate updated parameters and design information and to extend those analyses out to 1,000,000 years. Note that the previous MSTHM analyses reported in Revision 03 of Multiscale Thermohydrologic Model (BSC 2005 [DIRS 173944]) only extend out to 20,000 years. The updated parameters are the percolation flux distributions, including incorporation of post-10,000-year distributions, and updated calibrated hydrologic property values for the host-rock units. The applied calibrated hydrologic properties will be an updated version of those available in Calibrated Properties Model (BSC 2004 [DIRS 169857]). These updated properties will be documented in an Appendix of Revision 03 of UZ Flow Models and Submodels (BSC 2004 [DIRS 169861]). The updated calibrated properties are applied because they represent the latest available information. The reasonableness of applying the updated calibrated properties to the prediction of near-field in-drift TH conditions

  2. Performance of the modified Poisson regression approach for estimating relative risks from clustered prospective data.

    PubMed

    Yelland, Lisa N; Salter, Amy B; Ryan, Philip

    2011-10-15

    Modified Poisson regression, which combines a log Poisson regression model with robust variance estimation, is a useful alternative to log binomial regression for estimating relative risks. Previous studies have shown both analytically and by simulation that modified Poisson regression is appropriate for independent prospective data. This method is often applied to clustered prospective data, despite a lack of evidence to support its use in this setting. The purpose of this article is to evaluate the performance of the modified Poisson regression approach for estimating relative risks from clustered prospective data, by using generalized estimating equations to account for clustering. A simulation study is conducted to compare log binomial regression and modified Poisson regression for analyzing clustered data from intervention and observational studies. Both methods generally perform well in terms of bias, type I error, and coverage. Unlike log binomial regression, modified Poisson regression is not prone to convergence problems. The methods are contrasted by using example data sets from 2 large studies. The results presented in this article support the use of modified Poisson regression as an alternative to log binomial regression for analyzing clustered prospective data when clustering is taken into account by using generalized estimating equations.
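
    A minimal sketch of the approach using statsmodels: a log-link Poisson GEE with an exchangeable working correlation yields relative-risk estimates with robust standard errors for clustered binary outcomes. The simulated data-generating process below is invented for illustration.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n_clusters, m = 200, 4                   # clusters x members
    cluster = np.repeat(np.arange(n_clusters), m)
    u = np.repeat(rng.normal(0, 0.3, n_clusters), m)  # shared cluster effect
    treat = rng.integers(0, 2, n_clusters * m)
    p = 1 / (1 + np.exp(-(-1.0 + 0.4 * treat + u)))
    y = (rng.random(n_clusters * m) < p).astype(float)  # binary outcome

    # modified Poisson regression via GEE: log-Poisson working model,
    # exchangeable correlation, robust (sandwich) variance by default
    X = sm.add_constant(pd.DataFrame({'treat': treat}))
    res = sm.GEE(y, X, groups=cluster,
                 family=sm.families.Poisson(),
                 cov_struct=sm.cov_struct.Exchangeable()).fit()
    print(np.exp(res.params['treat']))   # estimated relative risk
    print(res.bse)                       # robust standard errors
    ```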

  3. Finite-size effects and percolation properties of Poisson geometries

    NASA Astrophysics Data System (ADS)

    Larmier, C.; Dumonteil, E.; Malvagi, F.; Mazzolo, A.; Zoia, A.

    2016-07-01

    Random tessellations of space represent a class of prototype models of heterogeneous media, which are central in several applications in physics, engineering, and life sciences. In this work, we investigate the statistical properties of d-dimensional isotropic Poisson geometries by resorting to Monte Carlo simulation, with special emphasis on the case d = 3. We first analyze the behavior of the key features of these stochastic geometries as a function of the dimension d and the linear size L of the domain. Then, we consider the case of Poisson binary mixtures, where the polyhedra are assigned two labels with complementary probabilities. For this latter class of random geometries, we numerically characterize the percolation threshold, the strength of the percolating cluster, and the average cluster size.

  4. Non-linear Poisson-Boltzmann theory for swollen clays

    NASA Astrophysics Data System (ADS)

    Leote de Carvalho, R. J. F.; Trizac, E.; Hansen, J.-P.

    1998-08-01

    The non-linear Poisson-Boltzmann (PB) equation for a circular, uniformly charged platelet, confined together with co- and counter-ions to a cylindrical cell, is solved semi-analytically by transforming it into an integral equation and solving the latter iteratively. This method proves efficient and robust, and can be readily generalized to other problems based on cell models, treated within non-linear Poisson-like theory. The solution to the PB equation is computed over a wide range of physical conditions, and the resulting osmotic equation of state is shown to be in semi-quantitative agreement with recent experimental data for Laponite clay suspensions, in the concentrated gel phase.
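
    The iterative strategy can be illustrated on the simpler planar analogue of the cell model, where the reduced potential obeys y'' = sinh(y) with a charged wall at x = 0 and a zero-field condition at the cell boundary. Here scipy's collocation solver stands in for the paper's integral-equation iteration; the geometry and parameter values are this sketch's assumptions.

    ```python
    import numpy as np
    from scipy.integrate import solve_bvp

    sigma_red, L = 1.0, 5.0   # reduced surface charge, half cell width
                              # (lengths in Debye units)

    def odes(x, y):           # y[0] = potential, y[1] = its derivative
        return np.vstack([y[1], np.sinh(y[0])])

    def bcs(ya, yb):
        return np.array([ya[1] + sigma_red,  # y'(0) = -sigma_red (Gauss law)
                         yb[1]])             # y'(L) = 0 at the cell boundary

    x = np.linspace(0, L, 200)
    sol = solve_bvp(odes, bcs, x, np.zeros((2, x.size)))
    print(sol.status, sol.y[0, 0])   # convergence flag, contact potential
    ```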

  5. Software reliability: Additional investigations into modeling with replicated experiments

    NASA Technical Reports Server (NTRS)

    Nagel, P. M.; Schotz, F. M.; Skirvan, J. A.

    1984-01-01

    The effects of programmer experience level, different program usage distributions, and programming languages are explored. All these factors affect performance, and some tentative relational hypotheses are presented. An analytic framework for replicated and non-replicated (traditional) software experiments is presented. A method of obtaining an upper bound on the error rate of the next error is proposed. The method was validated empirically by comparing forecasts with actual data. In all 14 cases the bound exceeded the observed parameter, albeit somewhat conservatively. Two other forecasting methods are proposed and compared to observed results. Although it is demonstrated within this framework that stages are neither independent nor exponentially distributed, empirical estimates show that the exponential assumption is nearly valid for all but the extreme tails of the distribution. Except for the dependence in the stage probabilities, Cox's model approximates to a degree what is being observed.

  6. Magnetostrictive contribution to Poisson ratio of galfenol

    NASA Astrophysics Data System (ADS)

    Paes, V. Z. C.; Mosca, D. H.

    2013-09-01

    In this work we present a detailed study on the magnetostrictive contribution to Poisson ratio for samples under applied mechanical stress. Magnetic contributions to strain and Poisson ratio for cubic materials were derived by accounting elastic and magneto-elastic anisotropy contributions. We apply our theoretical results for a material of interest in magnetomechanics, namely, galfenol (Fe1-xGax). Our results show that there is a non-negligible magnetic contribution in the linear portion of the curve of stress versus strain. The rotation of the magnetization towards [110] crystallographic direction upon application of mechanical stress leads to an auxetic behavior, i.e., exhibiting Poisson ratio with negative values. This magnetic contribution to auxetic behavior provides a novel insight for the discussion of theoretical and experimental developments of materials that display unusual mechanical properties.

  7. The BRST complex of homological Poisson reduction

    NASA Astrophysics Data System (ADS)

    Müller-Lennert, Martin

    2017-02-01

    BRST complexes are differential graded Poisson algebras. They are associated with a coisotropic ideal J of a Poisson algebra P and provide a description of the Poisson algebra (P/J)^J as their cohomology in degree zero. Using the notion of stable equivalence introduced in Felder and Kazhdan (Contemporary Mathematics 610, Perspectives in representation theory, 2014), we prove that any two BRST complexes associated with the same coisotropic ideal are quasi-isomorphic in the case P = R[V] where V is a finite-dimensional symplectic vector space and the bracket on P is induced by the symplectic structure on V. As a corollary, the cohomology of the BRST complexes is canonically associated with the coisotropic ideal J in the symplectic case. We do not require any regularity assumptions on the constraints generating the ideal J. We finally quantize the BRST complex rigorously in the presence of infinitely many ghost variables and discuss the uniqueness of the quantization procedure.

  8. Additional Developments in Atmosphere Revitalization Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Coker, Robert F.; Knox, James C.; Cummings, Ramona; Brooks, Thomas; Schunk, Richard G.; Gomez, Carlos

    2013-01-01

    NASA's Advanced Exploration Systems (AES) program is developing prototype systems, demonstrating key capabilities, and validating operational concepts for future human missions beyond Earth orbit. These forays beyond the confines of earth's gravity will place unprecedented demands on launch systems. They must launch the supplies needed to sustain a crew over longer periods for exploration missions beyond earth's moon. Thus all spacecraft systems, including those for the separation of metabolic carbon dioxide and water from a crewed vehicle, must be minimized with respect to mass, power, and volume. Emphasis is also placed on system robustness both to minimize replacement parts and ensure crew safety when a quick return to earth is not possible. Current efforts are focused on improving the current state-of-the-art systems utilizing fixed beds of sorbent pellets by evaluating structured sorbents, seeking more robust pelletized sorbents, and examining alternate bed configurations to improve system efficiency and reliability. These development efforts combine testing of sub-scale systems and multi-physics computer simulations to evaluate candidate approaches, select the best performing options, and optimize the configuration of the selected approach. This paper describes the continuing development of atmosphere revitalization models and simulations in support of the Atmosphere Revitalization Recovery and Environmental Monitoring (ARREM) project within the AES program.

  10. Time-dependent solutions for a stochastic model of gene expression with molecule production in the form of a compound Poisson process

    NASA Astrophysics Data System (ADS)

    Jedrak, Jakub; Ochab-Marcinek, Anna

    2016-09-01

    We study a stochastic model of gene expression, in which protein production has the form of random bursts whose size distribution is arbitrary, whereas protein decay is a first-order reaction. We find exact analytical expressions for the time evolution of the cumulant-generating function for the most general case when both the burst size probability distribution and the model parameters depend on time in an arbitrary (e.g., oscillatory) manner, and for arbitrary initial conditions. We show that in the case of periodic external activation and constant protein degradation rate, the response of the gene is analogous to the resistor-capacitor low-pass filter, where slow oscillations of the external driving have a greater effect on gene expression than the fast ones. We also demonstrate that the nth cumulant of the protein number distribution depends on the nth moment of the burst size distribution. We use these results to show that different measures of noise (coefficient of variation, Fano factor, fractional change of variance) may vary in time in a different manner. Therefore, any biological hypothesis of evolutionary optimization based on the nonmonotonic dependence of a chosen measure of noise on time must justify why it assumes that biological evolution quantifies noise in that particular way. Finally, we show that not only for exponentially distributed burst sizes but also for a wider class of burst size distributions (e.g., Dirac delta and gamma) the control of gene expression level by burst frequency modulation gives rise to proportional scaling of variance of the protein number distribution to its mean, whereas the control by amplitude modulation implies proportionality of protein number variance to the mean squared.
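
    The scaling claims in the last sentence are easy to check numerically. Below is a minimal sketch (not the paper's analytical machinery) of the bursty-expression process with exponentially distributed burst sizes: bursts arrive as a Poisson process, proteins decay deterministically at rate gamma in between, and stationary samples are collected. All rate names and values are illustrative only.

        import numpy as np

        rng = np.random.default_rng(0)

        def stationary_sample(k, b, gamma, n_bursts=500):
            """One draw from the stationary protein level for burst rate k,
            mean burst size b, and first-order decay rate gamma."""
            n = 0.0
            for _ in range(n_bursts):
                n += rng.exponential(b)            # compound-Poisson burst
                tau = rng.exponential(1.0 / k)     # waiting time to next burst
                n *= np.exp(-gamma * tau)          # first-order decay in between
            return n  # by PASTA, pre-burst levels follow the stationary law

        for k, b in [(5.0, 2.0), (10.0, 2.0), (5.0, 4.0)]:
            s = np.array([stationary_sample(k, b, 1.0) for _ in range(1000)])
            print(k, b, round(s.mean(), 1), round(s.var(), 1))

    Doubling the burst frequency k roughly doubles both mean and variance (variance proportional to mean), while doubling the burst amplitude b scales the variance like the mean squared, matching the frequency- versus amplitude-modulation contrast described above.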

  11. Noise parameter estimation for poisson corrupted images using variance stabilization transforms.

    PubMed

    Jin, Xiaodan; Xu, Zhenyu; Hirakawa, Keigo

    2014-03-01

    Noise is present in all images captured by real-world image sensors. The Poisson distribution models the stochastic nature of the photon arrival process and agrees with the distribution of measured pixel values. We propose a method for estimating unknown noise parameters from Poisson-corrupted images using properties of variance stabilization. With significantly lower computational complexity and improved stability, the proposed estimation technique yields noise parameters that are comparable in accuracy to state-of-the-art methods.
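
    As a concrete illustration of the variance-stabilization idea (a toy sketch, not the authors' estimator): for Poisson data the Anscombe transform 2*sqrt(x + 3/8) has approximately unit variance, so on a flat synthetic patch an unknown detector gain can be recovered as the value that brings the stabilized variance to one. All names and values below are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)

        # flat synthetic patch: pixel = gain * Poisson(photon rate)
        gain_true = 4.0
        img = gain_true * rng.poisson(30.0, size=(256, 256)).astype(float)

        def stabilized_var(img, gain):
            z = 2.0 * np.sqrt(img / gain + 3.0 / 8.0)   # Anscombe transform
            return z.var()

        # the correct gain is the one with stabilized variance closest to 1
        gains = np.linspace(1.0, 8.0, 141)
        est = min(gains, key=lambda g: abs(stabilized_var(img, g) - 1.0))
        print("estimated gain:", est)                    # close to 4.0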

  12. On Poisson's ratio and composition of the Earth's lower mantle

    NASA Astrophysics Data System (ADS)

    Poirier, J. P.

    1987-07-01

    Poisson's ratio of the lower mantle, calculated from recently published values of seismic wave velocities and extrapolated to atmospheric pressure and room temperature, is found to be in the range 0.23 ⩽ ν ⩽ 0.25. These values are compared with the values of Poisson's ratio calculated for binary mixtures of MgSiO3 perovskite and magnesiowüstite with various iron contents. Current values of the experimental error on measured elastic moduli offer little hope of discriminating between pyrolite and chondritic lower mantles: both are acceptable if the shear modulus of perovskite is in the upper range of the estimates of Liebermann et al. A similar calculation using the seismic parameter φ confirms the results obtained by considering Poisson's ratio and further constrains the value of the shear modulus of perovskite to lie between 1600 and 1700 kilobars for current mantle models to remain plausible. Chemical stratification of the mantle is, therefore, possible but not required by seismological data.
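
    For reference, the standard isotropic relation used to obtain Poisson's ratio from seismic velocities (V_P and V_S the compressional and shear wave speeds) is

        \nu = \frac{V_P^{2} - 2V_S^{2}}{2\left(V_P^{2} - V_S^{2}\right)},

    so the quoted range 0.23-0.25 corresponds to V_P/V_S of roughly 1.69-1.73.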

  13. Transferability of regional permafrost disturbance susceptibility modelling using generalized linear and generalized additive models

    NASA Astrophysics Data System (ADS)

    Rudy, Ashley C. A.; Lamoureux, Scott F.; Treitz, Paul; van Ewijk, Karin Y.

    2016-07-01

    To effectively assess and mitigate the risk of permafrost disturbance, disturbance-prone areas can be predicted through the application of susceptibility models. In this study we developed regional susceptibility models for permafrost disturbances using a field disturbance inventory to test the transferability of the model to a broader region in the Canadian High Arctic. The resulting susceptibility maps were then used to explore the effect of terrain variables on the occurrence of disturbances within this region. To account for a large range of landscape characteristics, the model was calibrated using two locations: Sabine Peninsula, Melville Island, NU, and Fosheim Peninsula, Ellesmere Island, NU. Spatial patterns of disturbance were predicted with a generalized linear model (GLM) and a generalized additive model (GAM), each calibrated using disturbed and randomized undisturbed locations from both sites and GIS-derived terrain predictor variables including slope, potential incoming solar radiation, wetness index, topographic position index, elevation, and distance to water. Each model was validated for the Sabine and Fosheim Peninsulas using independent data sets, while the transferability of the model to an independent site was assessed at Cape Bounty, Melville Island, NU. The regional GLM and GAM validated well for both calibration sites (Sabine and Fosheim), with the area under the receiver operating characteristic curve (AUROC) > 0.79. Both models were applied directly to Cape Bounty without calibration and validated equally, with AUROCs of 0.76; however, each model predicted disturbed and undisturbed samples differently. Additionally, the sensitivity of the transferred model was assessed using data sets with different sample sizes. Results indicated that models based on larger sample sizes transferred more consistently and captured the variability within the terrain attributes in the respective study areas. Terrain attributes associated with the initiation of disturbances were
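
    The workflow above (binomial response, terrain covariates, AUROC validation on a held-out site) is easy to sketch in miniature. The snippet below is a toy stand-in, not the study's code: synthetic data replace the field inventory, and a plain logistic GLM stands in for the GLM/GAM pair.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(2)

        def make_site(n):
            # six stand-ins for the GIS-derived predictors named above
            X = rng.normal(size=(n, 6))
            p = 1.0 / (1.0 + np.exp(-(X[:, 0] - 0.5 * X[:, 2])))
            return X, rng.binomial(1, p)

        X_cal, y_cal = make_site(500)                  # pooled calibration sites
        glm = LogisticRegression().fit(X_cal, y_cal)   # binomial GLM, logit link

        X_new, y_new = make_site(200)                  # independent transfer site
        auroc = roc_auc_score(y_new, glm.predict_proba(X_new)[:, 1])
        print("AUROC at transfer site:", round(auroc, 2))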

  14. A novel derivation of a within-batch sampling plan based on a Poisson-gamma model characterising low microbial counts in foods.

    PubMed

    Gonzales-Barron, Ursula; Zwietering, Marcel H; Butler, Francis

    2013-02-01

    This study proposes a novel step-wise methodology for the derivation of a sampling plan by variables for food production systems characterised by relatively low concentrations of the inspected microorganism. After representing the universe of contaminated batches by modelling the between-batch and within-batch variability in microbial counts, a tolerance criterion defining batch acceptability (i.e., up to a tolerance percentage of the food units having microbial concentrations lower than or equal to a critical concentration) is established to delineate a limiting quality contour that separates satisfactory from unsatisfactory batches. The problem then consists of finding the optimum decision criterion - the arithmetic mean of the analytical results (microbiological limit, m_L) and the sample size (n) - that satisfies a pre-defined level of confidence measured on the samples' mean distributions from all possible true within-batch distributions. This is approached by obtaining decision landscape curves representing collectively the conditional and joint producer's and consumer's risks at different microbiological limits, along with confidence intervals representing uncertainty due to the propagated between-batch variability. Whilst the method requires a number of risk management decisions to be made, such as the objective of the sampling plan (GMP-based or risk-based), the modality of derivation, the tolerance criterion or level of safety, and the statistical level of confidence, the proposed method can be used when past monitoring data are available so as to produce statistically sound dynamic sampling plans with optimised efficiency and discriminatory power. For the illustration of Enterobacteriaceae concentrations on Irish sheep carcasses, a sampling regime of n = 10 and m_L = 17.5 CFU/cm^2 is recommended to ensure that the producer has at least a 90% confidence of accepting a satisfactory batch whilst the consumer has at least a 97.5% confidence that a batch will not be

  15. Measuring Poisson Ratios at Low Temperatures

    NASA Technical Reports Server (NTRS)

    Boozon, R. S.; Shepic, J. A.

    1987-01-01

    Simple extensometer ring measures bulges of specimens in compression. New method of measuring Poisson's ratio used on brittle ceramic materials at cryogenic temperatures. Extensometer ring encircles cylindrical specimen. Four strain gauges connected in a fully active Wheatstone bridge are self-temperature-compensating. Used at temperatures as low as that of liquid helium.

  16. Easy Demonstration of the Poisson Spot

    ERIC Educational Resources Information Center

    Gluck, Paul

    2010-01-01

    Many physics teachers have a set of slides of single, double and multiple slits to show their students the phenomena of interference and diffraction. Thomas Young's historic experiments with double slits were indeed a milestone in proving the wave nature of light. But another experiment, namely the Poisson spot, was also important historically and…

  17. On the Determination of Poisson Statistics for Haystack Radar Observations of Orbital Debris

    NASA Technical Reports Server (NTRS)

    Stokely, Christopher L.; Benbrook, James R.; Horstman, Matt

    2007-01-01

    A convenient and powerful method is used to determine whether radar detections of orbital debris are observed according to Poisson statistics. This is done by analyzing the time interval between detection events. For Poisson statistics, the probability distribution of the time interval between events is shown to be an exponential distribution. This distribution is a special case of the Erlang distribution that is used in estimating traffic loads on telecommunication networks. Poisson statistics form the basis of many orbital debris models, but the statistical basis of these models had not been clearly demonstrated empirically until now. Interestingly, during the fiscal year 2003 observations with the Haystack radar in a fixed staring mode, there are no statistically significant deviations from Poisson statistics, either independently of altitude and inclination or as a function of them. One would potentially expect some significant clustering of events in time as a result of satellite breakups, but the presence of Poisson statistics indicates that such debris disperse rapidly with respect to Haystack's very narrow radar beam. An exception to Poisson statistics is observed in the months following the intentional breakup of the Fengyun satellite in January 2007.
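
    The inter-event test described above takes a few lines to reproduce on any list of detection epochs. In the sketch below the epochs are synthetic stand-ins for the radar detection times; note the p-value is only approximate because the exponential mean is estimated from the same data (a Lilliefors-type correction would be needed for exactness).

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)

        # stand-in event epochs; a Poisson process on [0, 1000]
        t = np.sort(rng.uniform(0.0, 1000.0, size=400))
        gaps = np.diff(t)

        # for a homogeneous Poisson process the gaps are exponentially
        # distributed; a KS test flags significant departures (clustering)
        D, p = stats.kstest(gaps, "expon", args=(0.0, gaps.mean()))
        print(f"KS statistic = {D:.3f}, p = {p:.3f}")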

  18. Modeling the cardiovascular system using a nonlinear additive autoregressive model with exogenous input

    NASA Astrophysics Data System (ADS)

    Riedl, M.; Suhrbier, A.; Malberg, H.; Penzel, T.; Bretthauer, G.; Kurths, J.; Wessel, N.

    2008-07-01

    The parameters of heart rate variability and blood pressure variability have proved to be useful analytical tools in cardiovascular physics and medicine. Model-based analysis of these variabilities additionally leads to new prognostic information about mechanisms behind regulation in the cardiovascular system. In this paper, we analyze the complex interaction between heart rate, systolic blood pressure, and respiration using nonparametrically fitted nonlinear additive autoregressive models with external inputs. To this end, we consider measurements from healthy persons and from patients suffering from obstructive sleep apnea syndrome (OSAS), with and without hypertension. It is shown that the proposed nonlinear models are capable of describing short-term fluctuations in heart rate as well as systolic blood pressure significantly better than similar linear ones, which supports the assumption that heart rate and blood pressure are under nonlinear control. Furthermore, the comparison of the nonlinear and linear approaches reveals that the heart rate and blood pressure variability in healthy subjects involves a higher level of noise, as well as stronger nonlinearity, than in patients suffering from OSAS. The residual analysis points to a further source of heart rate and blood pressure variability in healthy subjects, in addition to heart rate, systolic blood pressure, and respiration. Comparison of the nonlinear models within and among the different groups of subjects suggests that the cohorts can be discriminated, which could lead to a stratification of hypertension risk in OSAS patients.

  19. Sample Size Determination for a Three-Arm Equivalence Trial of Poisson and Negative Binomial Responses.

    PubMed

    Chang, Yu-Wei; Tsong, Yi; Zhao, Zhigen

    2016-12-09

    Assessing equivalence or similarity has drawn much attention recently, as many drug products have lost or will lose their patents in the next few years, especially certain best-selling biologics. To claim equivalence between the test treatment and the reference treatment when assay sensitivity is well established from historical data, one has to demonstrate both superiority of the test treatment over placebo and equivalence between the test treatment and the reference treatment. Thus, there is urgency for practitioners to derive a practical way to calculate sample size for a three-arm equivalence trial. The primary endpoints of a clinical trial may not always be continuous, but may be discrete. In this paper, the authors derive the power function and discuss the sample size requirement for a three-arm equivalence trial with Poisson and negative binomial clinical endpoints. In addition, the authors examine the effect of the dispersion parameter on the power and the sample size by varying its coefficient from small to large. In extensive numerical studies, the authors demonstrate that the required sample size depends heavily on the dispersion parameter. Therefore, misusing a Poisson model for negative binomial data may easily cost up to 20% of power, depending on the value of the dispersion parameter.

  20. Theory of multicolor lattice gas - A cellular automaton Poisson solver

    NASA Technical Reports Server (NTRS)

    Chen, H.; Matthaeus, W. H.; Klein, L. W.

    1990-01-01

    In the present class of cellular automaton models, a quiescent hydrodynamic lattice gas carries multiple-valued passive labels termed 'colors'; lattice collisions change individual particle colors while preserving net color. The rigorous proofs of the multicolor lattice gases' essential features are rendered more tractable by an equivalent subparticle representation in which the color is represented by underlying two-state 'spins'. Schemes for the introduction of Dirichlet and Neumann boundary conditions are described, and two illustrative numerical test cases are used to verify the theory. The lattice gas model is thus equivalent to a solver for the Poisson equation.

  1. The solution of large multi-dimensional Poisson problems

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1974-01-01

    The Buneman algorithm for solving Poisson problems can be adapted to solve large Poisson problems on computers with a rotating drum memory so that the computation is done with very little time lost due to rotational latency of the drum.

  2. Polarizable atomic multipole solutes in a Poisson-Boltzmann continuum

    NASA Astrophysics Data System (ADS)

    Schnieders, Michael J.; Baker, Nathan A.; Ren, Pengyu; Ponder, Jay W.

    2007-03-01

    Modeling the change in the electrostatics of organic molecules upon moving from vacuum into solvent, due to polarization, has long been an interesting problem. In vacuum, experimental values for the dipole moments and polarizabilities of small, rigid molecules are known to high accuracy; however, it has generally been difficult to determine these quantities for a polar molecule in water. A theoretical approach introduced by Onsager [J. Am. Chem. Soc. 58, 1486 (1936)] used vacuum properties of small molecules, including polarizability, dipole moment, and size, to predict experimentally known permittivities of neat liquids via the Poisson equation. Since this important advance in understanding the condensed phase, a large number of computational methods have been developed to study solutes embedded in a continuum via numerical solutions to the Poisson-Boltzmann equation. Only recently have the classical force fields used for studying biomolecules begun to include explicit polarization in their functional forms. Here the authors describe the theory underlying a newly developed polarizable multipole Poisson-Boltzmann (PMPB) continuum electrostatics model, which builds on the atomic multipole optimized energetics for biomolecular applications (AMOEBA) force field. As an application of the PMPB methodology, results are presented for several small folded proteins studied by molecular dynamics in explicit water as well as embedded in the PMPB continuum. The dipole moment of each protein increased on average by a factor of 1.27 in explicit AMOEBA water and 1.26 in continuum solvent. The essentially identical electrostatic response in both models suggests that PMPB electrostatics offers an efficient alternative to sampling explicit solvent molecules for a variety of interesting applications, including binding energies, conformational analysis, and pKa prediction. Introduction of 150 mM salt lowered the electrostatic solvation energy between 2 and 13 kcal/mole, depending on

  3. Receiver function studies in the southwestern United States and correlation between stratigraphy and Poisson's ratio, southwestern Washington State

    NASA Astrophysics Data System (ADS)

    Kilbride, Fiona Elizabeth Anne

    2000-10-01

    This dissertation consists of two separate lines of research. The first uses the receiver function technique to estimate crustal thickness and Poisson's ratio for three receiver stations in the southwestern United States. One station is located in El Paso because relatively few geophysical experiments have been conducted in the southern Rio Grande rift. Two stations are located on the Colorado Plateau, in an attempt to resolve an ongoing dispute concerning the crustal thickness of this province. The results of the receiver function studies are used as additional constraints for gravity models along two regional profiles coincident with the much shorter profiles of the Pacific to Arizona Crustal Experiment (PACE) that was led by the U.S. Geological Survey on the Colorado Plateau. Because the profiles extend into adjacent provinces, these models are balanced for isostatic equilibrium and are consistent with elevations predicted by buoyancy calculations. The results are most consistent with a thick (≈50 km) crust for the Colorado Plateau and do not support the presence of large lateral thickness variations within the plateau. The second line of research presented also derives Poisson's ratio, in this case from seismic refraction data. The results are used to interpret a structural cross-section in southwest Washington State and to shed light on a feature of low resistivity (1-5 Ω·m) located in the High Cascades (the Southern Washington Cascades Conductor or SWCC). This feature is delineated by the interpretation of magnetotelluric and seismic reflection profiles and has been interpreted to be largely composed of Lower Eocene marine sedimentary rocks. Both lines of research estimate Poisson's ratio using dissimilar techniques, but have produced results consistent with one another. Poisson's ratio for quartz-rich rocks (such as sandstones and granites) generally lies between 0.23 and 0.26, as exemplified by the upper crust of the Rio Grande rift, and by sedimentary

  4. A Poisson process approximation for generalized K-S confidence regions

    NASA Technical Reports Server (NTRS)

    Arsham, H.; Miller, D. R.

    1982-01-01

    One-sided confidence regions for continuous cumulative distribution functions are constructed using empirical cumulative distribution functions and the generalized Kolmogorov-Smirnov distance. The band width of such regions becomes narrower in the right or left tail of the distribution. To avoid tedious computation of confidence levels and critical values, an approximation based on the Poisson process is introduced. This approximation provides a conservative confidence region; moreover, the approximation error decreases monotonically to 0 as sample size increases. Critical values necessary for implementation are given. Applications are made to the areas of risk analysis, investment modeling, reliability assessment, and analysis of fault tolerant systems.

  5. Poisson approach to clustering analysis of regulatory sequences.

    PubMed

    Wang, Haiying; Zheng, Huiru; Hu, Jinglu

    2008-01-01

    The presence of similar patterns in regulatory sequences may aid users in identifying co-regulated genes or inferring regulatory modules. By modelling pattern occurrences in regulatory regions with Poisson statistics, this paper presents a log likelihood ratio statistics-based distance measure to calculate pair-wise similarities between regulatory sequences. We employed it within three clustering algorithms: hierarchical clustering, Self-Organising Map, and a self-adaptive neural network. The results indicate that, in comparison to traditional clustering algorithms, the incorporation of the log likelihood ratio statistics-based distance into the learning process may offer considerable improvements in the process of regulatory sequence-based classification of genes.
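
    A toy version of the distance measure: treat each pattern's count as Poisson and score, via a likelihood ratio, whether two sequences share the same rates. This sketches the idea only; the paper's exact statistic may differ, and the function and counts below are illustrative.

        import numpy as np

        def llr_distance(x, y, eps=1e-12):
            """Log-likelihood-ratio distance between two pattern-count vectors,
            each count treated as Poisson. H0: one shared rate per pattern."""
            x, y = np.asarray(x, float), np.asarray(y, float)
            m = (x + y) / 2.0                     # pooled rate estimate under H0
            def ll(k, lam):
                lam = np.maximum(lam, eps)
                return k * np.log(lam) - lam      # Poisson log-likelihood (k! cancels)
            return 2.0 * np.sum(ll(x, x) + ll(y, y) - ll(x, m) - ll(y, m))

        # motif counts in the regulatory regions of two genes (hypothetical)
        print(llr_distance([3, 0, 7, 2], [2, 1, 9, 2]))   # small => similar profiles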

  6. A Duflo Star Product for Poisson Groups

    NASA Astrophysics Data System (ADS)

    Brochier, Adrien

    2016-09-01

    Let G be a finite-dimensional Poisson algebraic, Lie or formal group. We show that the center of the quantization of G provided by an Etingof-Kazhdan functor is isomorphic as an algebra to the Poisson center of the algebra of functions on G. This recovers and generalizes Duflo's theorem, which gives an isomorphism between the center of the enveloping algebra of a finite-dimensional Lie algebra a and the subalgebra of ad-invariants in the symmetric algebra of a. As our proof relies on the Etingof-Kazhdan construction, it ultimately depends on the existence of Drinfeld associators, but otherwise it is a fairly simple application of graphical calculus. This sheds some light on the Alekseev-Torossian proof of the Kashiwara-Vergne conjecture, and on the relation observed by Bar-Natan-Le-Thurston between the Duflo isomorphism and the Kontsevich integral of the unknot.

  7. A New Echeloned Poisson Series Processor (EPSP)

    NASA Astrophysics Data System (ADS)

    Ivanova, Tamara

    2001-07-01

    A specialized Echeloned Poisson Series Processor (EPSP) is proposed. It is typical software for the implementation of analytical algorithms of celestial mechanics. EPSP is designed for manipulating long polynomial-trigonometric series with literal divisors. The coefficients of these echeloned series are rational or floating-point numbers. A Keplerian processor and an analytical generator of special celestial mechanics functions based on the EPSP have also been developed.

  8. Poisson filtering of laser ranging data

    NASA Technical Reports Server (NTRS)

    Ricklefs, Randall L.; Shelus, Peter J.

    1993-01-01

    The filtering of data in a high noise, low signal strength environment is a situation encountered routinely in lunar laser ranging (LLR) and, to a lesser extent, in artificial satellite laser ranging (SLR). The use of Poisson statistics as one of the tools for filtering LLR data is described first in a historical context. The more recent application of this statistical technique to noisy SLR data is also described.

  9. Path Selection in a Poisson field

    NASA Astrophysics Data System (ADS)

    Cohen, Yossi; Rothman, Daniel H.

    2016-11-01

    A criterion for path selection for channels growing in a Poisson field is presented. We invoke a generalization of the principle of local symmetry. We then use this criterion to grow channels in a confined geometry. The channel trajectories reveal a self-similar shape as they reach steady state. Analyzing their paths, we identify a cause for branching that may result in a ramified structure in which the golden ratio appears.

  10. Computation of solar perturbations with Poisson series

    NASA Technical Reports Server (NTRS)

    Broucke, R.

    1974-01-01

    Description of a project for computing first-order perturbations of natural or artificial satellites by integrating the equations of motion on a computer with automatic Poisson series expansions. A basic feature of the method of solution is that the classical variation-of-parameters formulation is used rather than rectangular coordinates. However, the variation-of-parameters formulation uses the three rectangular components of the disturbing force rather than the classical disturbing function, so that there is no problem in expanding the disturbing function in series. Another characteristic of the variation-of-parameters formulation employed is that six rather unusual variables are used in order to avoid singularities at zero eccentricity and zero (or 90 deg) inclination. The integration process starts by assuming that all the orbit elements present on the right-hand sides of the equations of motion are constants. These right-hand sides are then simple Poisson series which can be obtained with the use of the Bessel expansions of the two-body problem in conjunction with certain iteration methods. These Poisson series can then be integrated term by term, and a first-order solution is obtained.

  11. First- and second-order Poisson spots

    NASA Astrophysics Data System (ADS)

    Kelly, William R.; Shirley, Eric L.; Migdall, Alan L.; Polyakov, Sergey V.; Hendrix, Kurt

    2009-08-01

    Although Thomas Young is generally given credit for being the first to provide evidence against Newton's corpuscular theory of light, it was Augustin Fresnel who first stated the modern theory of diffraction. We review the history surrounding Fresnel's 1818 paper and the role of the Poisson spot in the associated controversy. We next discuss the boundary-diffraction-wave approach to calculating diffraction effects and show how it can reduce the complexity of calculating diffraction patterns. We briefly discuss a generalization of this approach that reduces the dimensionality of integrals needed to calculate the complete diffraction pattern of any order diffraction effect. We repeat earlier demonstrations of the conventional Poisson spot and discuss an experimental setup for demonstrating an analogous phenomenon that we call a "second-order Poisson spot." Several features of the diffraction pattern can be explained simply by considering the path lengths of singly and doubly bent paths and distinguishing between first- and second-order diffraction effects related to such paths, respectively.

  13. On the singularity of the Vlasov-Poisson system

    SciTech Connect

    Zheng, Jian; Qin, Hong

    2013-09-15

    The Vlasov-Poisson system can be viewed as the collisionless limit of the corresponding Fokker-Planck-Poisson system. It is reasonable to expect that the result of Landau damping can also be obtained from the Fokker-Planck-Poisson system when the collision frequency ν approaches zero. However, we show that the collisionless Vlasov-Poisson system is a singular limit of the collisional Fokker-Planck-Poisson system, and Landau's result can be recovered only as ν approaches zero from the positive side.

  14. Response to selection in finite locus models with non-additive effects.

    PubMed

    Esfandyari, Hadi; Henryon, Mark; Berg, Peer; Thomasen, Jorn Rind; Bijma, Piter; Sørensen, Anders Christian

    2017-01-12

    Under the finite-locus model in the absence of mutation, additive genetic variation is expected to decrease when directional selection acts on a population, according to quantitative-genetic theory. However, some theoretical studies of selection suggest that the level of additive variance can be sustained or even increased when non-additive genetic effects are present. We tested the hypothesis that finite-locus models with both additive and non-additive genetic effects maintain more additive genetic variance (V_A) and realize larger medium-to-long-term genetic gains than models with only additive effects when the trait under selection is subject to truncation selection. Four genetic models that included additive, dominance, and additive-by-additive epistatic effects were simulated. The simulated genome for individuals consisted of 25 chromosomes, each with a length of 1 Morgan. One hundred bi-allelic QTL, four on each chromosome, were considered. In each generation, 100 sires and 100 dams were mated, producing five progeny per mating. The population was selected for a single trait (h^2 = 0.1) for 100 discrete generations with selection on phenotype or BLUP-EBV. V_A decreased with directional truncation selection even in the presence of non-additive genetic effects. Non-additive effects influenced the long-term response to selection, and among the genetic models, additive gene action showed the highest response to selection. In addition, in all genetic models, BLUP-EBV resulted in a greater fixation of favourable and unfavourable alleles and a higher response than phenotypic selection. In conclusion, for the schemes we simulated, the presence of non-additive genetic effects had little effect on the changes in additive variance, and V_A decreased under directional selection.

  15. Additive Manufacturing Modeling and Simulation A Literature Review for Electron Beam Free Form Fabrication

    NASA Technical Reports Server (NTRS)

    Seufzer, William J.

    2014-01-01

    Additive manufacturing is coming into industrial use and has several desirable attributes. Control of the deposition remains a complex challenge, and so this literature review was initiated to capture current modeling efforts in the field of additive manufacturing. This paper summarizes about 10 years of modeling and simulation related to both welding and additive manufacturing. The goals were to learn who is doing what in modeling and simulation, to summarize various approaches taken to create models, and to identify research gaps. Later sections in the report summarize implications for closed-loop-control of the process, implications for local research efforts, and implications for local modeling efforts.

  16. Delineating high-density areas in spatial Poisson fields from strip-transect sampling using indicator geostatistics: application to unexploded ordnance removal.

    PubMed

    Saito, Hirotaka; McKenna, Sean A

    2007-07-01

    An approach for delineating high anomaly density areas within a mixture of two or more spatial Poisson fields based on limited sample data collected along strip transects was developed. All sampled anomalies were transformed to anomaly count data and indicator kriging was used to estimate the probability of exceeding a threshold value derived from the cdf of the background homogeneous Poisson field. The threshold value was determined so that the delineation of high-density areas was optimized. Additionally, a low-pass filter was applied to the transect data to enhance such segmentation. Example calculations were completed using a controlled military model site, in which accurate delineation of clusters of unexploded ordnance (UXO) was required for site cleanup.
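
    The background threshold described above comes straight from the Poisson cdf. A minimal sketch with an assumed background rate (the rate and confidence level are illustrative, not from the study):

        from scipy import stats

        lam_bg = 5.0   # hypothetical background anomaly rate per grid cell

        # count threshold exceeded by background cells only ~1% of the time;
        # indicator kriging then interpolates Prob[count > k_thr] between transects
        k_thr = stats.poisson.ppf(0.99, lam_bg)
        print("indicator threshold:", k_thr)
        print("background exceedance prob:", stats.poisson.sf(k_thr, lam_bg))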

  17. Hydrodynamic limit of Wigner-Poisson kinetic theory: Revisited

    NASA Astrophysics Data System (ADS)

    Akbari-Moghanjoughi, M.

    2015-02-01

    In this paper, we revisit the hydrodynamic limit of the Langmuir wave dispersion relation based on the Wigner-Poisson model in connection with that obtained directly from the original Lindhard dielectric function based on the random-phase approximation. It is observed that the (fourth-order) expansion of the exact Lindhard dielectric constant correctly reduces to the hydrodynamic dispersion relation with an additional fourth-order term, besides that caused by the quantum diffraction effect. It is also revealed that the generalized Lindhard dielectric theory accounts for the recently discovered Shukla-Eliasson attractive potential (SEAP). However, the expansion of the exact Lindhard static dielectric function leads to a k^4 term of different magnitude than that obtained from the linearized quantum hydrodynamics model. It is shown that a correction factor of 1/9 should be included in the term arising from the quantum Bohm potential of the momentum balance equation in the fluid model in order to obtain a correct plasma dielectric response. Finally, it is observed that the long-range oscillatory screening potential (Friedel oscillations) of the type cos(2k_F r)/r^3, which is a consequence of the divergence of the dielectric function at k = 2k_F in a quantum plasma, arises due to the finiteness of the Fermi wavenumber and is smeared out in the limit of very high electron number densities, typical of white dwarfs and neutron stars. In the very low electron number-density regime, typical of semiconductors and metals, where the Friedel oscillation wavelength becomes much larger than the interparticle distances, the SEAP appears with a much deeper potential valley. It is remarked that the fourth-order approximate Lindhard dielectric constant approaches that of the linearized quantum hydrodynamics model in the limit of very high electron number density. By evaluating the imaginary part of the Lindhard dielectric function, it is shown that the Landau

  18. Moho Depth and Poisson's Ratio beneath Eastern-Central China and Its Tectonic Implications

    NASA Astrophysics Data System (ADS)

    Wei, Z.; Chen, L.; Li, Z.; Ling, Y.; Li, J.

    2015-12-01

    Eastern-central China comprises a complex amalgamation of geotectonic blocks of different ages and has undergone significant modification of the lithosphere during Meso-Cenozoic time. To better characterize its deep structure, we estimated the Moho depth and average Poisson's ratio of eastern-central China by H-κ stacking of receiver functions, using teleseismic data collected from 1196 broadband stations. A coexistence of modified and preserved crust was revealed in eastern-central China, which was generally in Airy-type isostatic equilibrium. The crust is obviously thicker to the west of the North-South Gravity Lineament but exhibits complex variations in Poisson's ratio, with an overall felsic to intermediate bulk crustal composition. Moho depth and Poisson's ratio show striking differences compared to the surrounding areas in the rifts and tectonic boundary zones, where earthquakes usually occur. Similarities and differences in the Moho depth and average Poisson's ratio were observed among the Northeast China, North China Craton, South China, and Qinling-Dabie Orogen blocks, as well as among different areas within these blocks, which may result from their different evolutionary histories and strong tectonic-magmatic events since the Mesozoic. In addition, we observed an alteration of Moho depth by ~6 km and of Poisson's ratio by ~0.03, as well as a striking E-W difference, beneath and across the Xuefeng Mountains, suggesting that the Xuefeng Mountains may be a deep tectonic boundary between the eastern Yangtze Craton and western Cathaysia Block.
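
    H-κ stacking returns the crustal thickness H and the velocity ratio κ = Vp/Vs; Poisson's ratio then follows from the standard isotropic relation. A small helper, with illustrative κ values only:

        import numpy as np

        def poisson_from_kappa(kappa):
            """Bulk Poisson's ratio from kappa = Vp/Vs (isotropic relation)."""
            k2 = np.asarray(kappa, dtype=float) ** 2
            return 0.5 * (k2 - 2.0) / (k2 - 1.0)

        # the ~0.03 contrast in Poisson's ratio reported above corresponds to a
        # modest shift in Vp/Vs, e.g. from 1.70 to 1.78:
        print(poisson_from_kappa([1.70, 1.78]))   # -> approx. [0.235, 0.269]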

  19. Reentrant Origami-Based Metamaterials with Negative Poisson's Ratio and Bistability

    NASA Astrophysics Data System (ADS)

    Yasuda, H.; Yang, J.

    2015-05-01

    We investigate the unique mechanical properties of reentrant 3D origami structures based on the Tachi-Miura polyhedron (TMP). We explore the potential usage as mechanical metamaterials that exhibit tunable negative Poisson's ratio and structural bistability simultaneously. We show analytically and experimentally that the Poisson's ratio changes from positive to negative and vice versa during its folding motion. In addition, we verify the bistable mechanism of the reentrant 3D TMP under rigid origami configurations without relying on the buckling motions of planar origami surfaces. This study forms a foundation in designing and constructing TMP-based metamaterials in the form of bellowslike structures for engineering applications.

  20. An introduction to modeling longitudinal data with generalized additive models: applications to single-case designs.

    PubMed

    Sullivan, Kristynn J; Shadish, William R; Steiner, Peter M

    2015-03-01

    Single-case designs (SCDs) are short time series that assess intervention effects by measuring units repeatedly over time in both the presence and absence of treatment. This article introduces a statistical technique for analyzing SCD data that has not been much used in psychological and educational research: generalized additive models (GAMs). In parametric regression, the researcher must choose a functional form to impose on the data, for example, that trend over time is linear. GAMs reverse this process by letting the data inform the choice of functional form. In this article we review the problem that trend poses in SCDs, discuss how current SCD analytic methods approach trend, describe GAMs as a possible solution, suggest a GAM model testing procedure for examining the presence of trend in SCDs, present a small simulation to show the statistical properties of GAMs, and illustrate the procedure on 3 examples of different lengths. Results suggest that GAMs may be very useful both as a form of sensitivity analysis for checking the plausibility of assumptions about trend and as a primary data analysis strategy for testing treatment effects. We conclude with a discussion of some problems with GAMs and some future directions for research on the application of GAMs to SCDs.
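
    The "let the data choose the functional form" idea can be illustrated with any penalized smoother; the sketch below uses a smoothing spline as a stand-in for the article's GAMs (a full GAM fit would use a dedicated package), on a made-up 20-session series.

        import numpy as np
        from scipy.interpolate import UnivariateSpline

        rng = np.random.default_rng(4)

        # hypothetical single-case series: 20 sessions with a nonlinear drift
        t = np.arange(20.0)
        y = 0.3 * np.sqrt(t) + rng.normal(0.0, 0.1, size=t.size)

        # instead of imposing "trend is linear", let the data pick the shape;
        # the smoothing parameter s acts like a GAM's smoothness penalty
        trend = UnivariateSpline(t, y, k=3, s=0.2)
        print(np.round(trend(t), 2))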

  1. The transverse Poisson's ratio of composites.

    NASA Technical Reports Server (NTRS)

    Foye, R. L.

    1972-01-01

    An expression is developed that makes possible the prediction of Poisson's ratio for unidirectional composites with reference to any pair of orthogonal axes that are normal to the direction of the reinforcing fibers. This prediction appears to be a reasonable one in that it follows the trends of the finite element analysis and the bounding estimates, and has the correct limiting value for zero fiber content. It can only be expected to apply to composites containing stiff, circular, isotropic fibers bonded to a soft matrix material.

  2. Testing the ratio of two Poisson rates.

    PubMed

    Gu, Kangxia; Ng, Hon Keung Tony; Tang, Man Lai; Schucany, William R

    2008-04-01

    In this paper we compare the properties of four different general approaches for testing the ratio of two Poisson rates. Asymptotically normal tests, tests based on approximate p-values, exact conditional tests, and a likelihood ratio test are considered. The properties and power performance of these tests are studied by a Monte Carlo simulation experiment. Sample size calculation formulae are given for each of the test procedures and their validities are studied. Some recommendations favoring the likelihood ratio and certain asymptotic tests are based on these simulation results. Finally, all of the test procedures are illustrated with two real life medical examples.
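
    Of the four approaches, the exact conditional test is the easiest to reproduce: conditional on the total count, the first count is binomial. A sketch with made-up counts and exposures (not data from the paper):

        from scipy.stats import binomtest

        # x1 events in exposure t1, x2 events in t2; under H0: lam1/lam2 = rho0,
        # conditional on x1 + x2 the count x1 is Binomial(n, p0)
        x1, t1, x2, t2, rho0 = 14, 100.0, 25, 100.0, 1.0
        p0 = rho0 * t1 / (rho0 * t1 + t2)
        print("exact conditional p-value:", binomtest(x1, x1 + x2, p0).pvalue)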

  3. Product versus additive threshold models for analysis of reproduction outcomes in animal genetics.

    PubMed

    David, I; Bodin, L; Gianola, D; Legarra, A; Manfredi, E; Robert-Granié, C

    2009-08-01

    The phenotypic observation of some reproduction traits (e.g., insemination success, interval from lambing to insemination) is the result of environmental and genetic factors acting on 2 individuals: the male and female involved in a mating couple. In animal genetics, the main approach (called additive model) proposed for studying such traits assumes that the phenotype is linked to a purely additive combination, either on the observed scale for continuous traits or on some underlying scale for discrete traits, of environmental and genetic effects affecting the 2 individuals. Statistical models proposed for studying human fecundability generally consider reproduction outcomes as the product of hypothetical unobservable variables. Taking inspiration from these works, we propose a model (product threshold model) for studying a binary reproduction trait that supposes that the observed phenotype is the product of 2 unobserved phenotypes, 1 for each individual. We developed a Gibbs sampling algorithm for fitting a Bayesian product threshold model including additive genetic effects and showed by simulation that it is feasible and that it provides good estimates of the parameters. We showed that fitting an additive threshold model to data that are simulated under a product threshold model provides biased estimates, especially for individuals with high breeding values. A main advantage of the product threshold model is that, in contrast to the additive model, it provides distinct estimates of fixed effects affecting each of the 2 unobserved phenotypes.
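
    The contrast between the two model classes can be seen directly by simulation. A toy sketch with normally distributed latent liabilities (all means and thresholds hypothetical; the paper's Bayesian Gibbs-sampling machinery is not reproduced here):

        import numpy as np

        rng = np.random.default_rng(5)
        n = 100_000

        # hypothetical latent liabilities for the two mating partners
        male = rng.normal(0.5, 1.0, n)
        female = rng.normal(0.2, 1.0, n)

        # additive threshold model: one combined liability crosses a threshold
        y_add = (male + female > 0.0)

        # product threshold model: each partner's own phenotype must succeed
        y_prod = (male > 0.0) & (female > 0.0)

        print("P(success) additive:", y_add.mean(), " product:", y_prod.mean())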

  4. Simultaneous estimation of Poisson's ratio and Young's modulus using a single indentation: a finite element study

    NASA Astrophysics Data System (ADS)

    Zheng, Y. P.; Choi, A. P. C.; Ling, H. Y.; Huang, Y. P.

    2009-04-01

    Indentation is commonly used to determine the mechanical properties of different kinds of biological tissues and engineering materials. With the force-deformation data obtained from an indentation test, Young's modulus of the tissue can be calculated using a linear elastic indentation model with a known Poisson's ratio. A novel method for simultaneous estimation of Young's modulus and Poisson's ratio of the tissue using a single indentation was proposed in this study. Finite element (FE) analysis using 3D models was first used to establish the relationship between Poisson's ratio and the deformation-dependent indentation stiffness for different aspect ratios (indentor radius/tissue original thickness) in the indentation test. From the FE results, it was found that the deformation-dependent indentation stiffness linearly increased with the deformation. Poisson's ratio could be extracted based on the deformation-dependent indentation stiffness obtained from the force-deformation data. Young's modulus was then further calculated with the estimated Poisson's ratio. The feasibility of this method was demonstrated by using indentation models with different material properties in the FE analysis. The numerical results showed that the percentage errors of the estimated Poisson's ratios and the corresponding Young's moduli ranged from -1.7% to -3.2% and 3.0% to 7.2%, respectively, with the aspect ratio (indentor radius/tissue thickness) larger than 1. It is expected that this novel method can be potentially used for quantitative assessment of various kinds of engineering materials and biological tissues, such as articular cartilage.

  5. Analyzing Seasonal Variations in Suicide With Fourier Poisson Time-Series Regression: A Registry-Based Study From Norway, 1969-2007.

    PubMed

    Bramness, Jørgen G; Walby, Fredrik A; Morken, Gunnar; Røislien, Jo

    2015-08-01

    Seasonal variation in the number of suicides has long been acknowledged. It has been suggested that this seasonality has declined in recent years, but studies have generally used statistical methods incapable of confirming this. We examined all suicides occurring in Norway during 1969-2007 (more than 20,000 suicides in total) to establish whether seasonality decreased over time. Fitting of additive Fourier Poisson time-series regression models allowed for formal testing of a possible linear decrease in seasonality, or a reduction at a specific point in time, while adjusting for a possible smooth nonlinear long-term change without having to categorize time into discrete yearly units. The models were compared using Akaike's Information Criterion and analysis of variance. A model with a seasonal pattern was significantly superior to a model without one. There was a reduction in seasonality during the period. The model assuming a linear decrease in seasonality and the model assuming a change at a specific point in time were both superior to a model assuming constant seasonality, thus confirming by formal statistical testing that the magnitude of the seasonality in suicides has diminished. The additive Fourier Poisson time-series regression model would also be useful for studying other temporal phenomena with seasonal components.
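
    A minimal version of this model family is a Poisson GLM whose log-rate carries a linear trend plus one Fourier (sin/cos) pair; the sketch below fits that reduced model to synthetic monthly counts (all rates illustrative, not the registry data). The article's full models additionally let the seasonal amplitude change smoothly over time and are compared by AIC.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(6)

        # hypothetical monthly counts with a seasonal component in the log-rate
        m = np.arange(468.0)                       # 39 years of months
        lam = np.exp(3.0 + 0.3 * np.sin(2 * np.pi * m / 12.0))
        y = rng.poisson(lam)

        # Fourier Poisson regression: log-rate = intercept + trend + sin/cos pair
        X = sm.add_constant(np.column_stack([
            m,
            np.sin(2 * np.pi * m / 12.0),
            np.cos(2 * np.pi * m / 12.0),
        ]))
        fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
        print(np.round(fit.params, 3))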

  6. Heterogeneous PVA hydrogels with micro-cells of both positive and negative Poisson's ratios.

    PubMed

    Ma, Yanxuan; Zheng, Yudong; Meng, Haoye; Song, Wenhui; Yao, Xuefeng; Lv, Hexiang

    2013-07-01

    Many models describing the deformation of general foam or auxetic materials are based on the assumption of homogeneity and order within the materials. However, non-uniform heterogeneity is often an inherent feature of many porous materials and composites, but is difficult to measure. In this work, inspired by the structures of auxetic materials, porous PVA hydrogels with internal inby-concave pores (IICP) or interconnected pores (ICP) were designed and processed. The deformation of the PVA hydrogels under compression was tested and their Poisson's ratio was characterized. The results indicated that the size, shape, and distribution of the pores in the hydrogel matrix had a strong influence on the local Poisson's ratio, which varied from positive to negative at the micro-scale. The size-dependency of the local Poisson's ratio reflected and quantified the uniformity and heterogeneity of the micro-porous structures in the PVA hydrogels.

  7. A Method of Poisson's Ratio Imaging Within a Material Part

    NASA Technical Reports Server (NTRS)

    Roth, Don J. (Inventor)

    1994-01-01

    The present invention is directed to a method of displaying the Poisson's ratio image of a material part. In the present invention, longitudinal data is produced using a longitudinal wave transducer and shear wave data is produced using a shear wave transducer. The respective data is then used to calculate the Poisson's ratio for the entire material part. The Poisson's ratio approximations are then used to display the data.

  8. Method of Poisson's ratio imaging within a material part

    NASA Technical Reports Server (NTRS)

    Roth, Don J. (Inventor)

    1996-01-01

    The present invention is directed to a method of displaying the Poisson's ratio image of a material part. In the present invention, longitudinal data is produced using a longitudinal wave transducer and shear wave data is produced using a shear wave transducer. The respective data is then used to calculate the Poisson's ratio for the entire material part. The Poisson's ratio approximations are then used to display the image.
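
    For a scan position where both transducers have been read out, the isotropic elasticity relation turns the two wave speeds into a Poisson's ratio estimate. A small helper (the function name and sample speeds are illustrative, not from the patents):

        import numpy as np

        def poisson_from_wavespeeds(v_long, v_shear):
            """Isotropic Poisson's ratio from longitudinal and shear wave speeds."""
            r2 = (np.asarray(v_long, float) / np.asarray(v_shear, float)) ** 2
            return (r2 - 2.0) / (2.0 * (r2 - 1.0))

        # hypothetical aluminum-like speeds (m/s) at one scan position
        print(poisson_from_wavespeeds(6320.0, 3130.0))   # approx. 0.34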

  9. Flux theory for Poisson distributed pores with Gaussian permeability.

    PubMed

    Salinas, Dino G

    2016-01-01

    The mean of the solute flux through membrane pores depends on the random distribution and permeability of the pores. Mathematical models including such randomness factors make it possible to obtain statistical parameters for pore characterization. Here, assuming that pores follow a Poisson distribution in the lipid phase and that their permeabilities follow a Gaussian distribution, a mathematical model for solute dynamics is obtained by applying a general result from a previous work regarding any number of different kinds of randomly distributed pores. The proposed theory is studied using experimental parameters obtained elsewhere, and a method for finding the mean single-pore flux rate from liposome flux assays is suggested. This method is useful for characterizing pores without requiring patch-clamp studies in single cells or single-channel recordings. However, it does not apply in the case of ion-selective channels, in which a more complex flux law combining the concentration and electrical gradient is required.

  10. Shape representation and classification using the poisson equation.

    PubMed

    Gorelick, Lena; Galun, Meirav; Sharon, Eitan; Basri, Ronen; Brandt, Achi

    2006-12-01

    We present a novel approach that allows us to reliably compute many useful properties of a silhouette. Our approach assigns, for every internal point of the silhouette, a value reflecting the mean time required for a random walk beginning at the point to hit the boundaries. This function can be computed by solving Poisson's equation, with the silhouette contours providing boundary conditions. We show how this function can be used to reliably extract various shape properties including part structure and rough skeleton, local orientation and aspect ratio of different parts, and convex and concave sections of the boundaries. In addition to this, we discuss properties of the solution and show how to efficiently compute this solution using multigrid algorithms. We demonstrate the utility of the extracted properties by using them for shape classification and retrieval.
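
    The core computation is a discrete Poisson solve with the silhouette contour as the zero boundary. A toy sketch using Jacobi iteration on a rectangular silhouette (the paper uses multigrid for efficiency; this is only the naive version, and all sizes are illustrative):

        import numpy as np

        def poisson_shape(mask, n_iter=2000):
            """Solve -Laplace(u) = 1 inside a binary silhouette, u = 0 outside;
            u(x) reflects the mean time for a random walk to hit the boundary."""
            u = np.zeros_like(mask, dtype=float)
            for _ in range(n_iter):
                nbrs = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                        np.roll(u, 1, 1) + np.roll(u, -1, 1))
                u = mask * (nbrs + 1.0) / 4.0     # Jacobi update, grid spacing 1
            return u

        # toy rectangular silhouette; interior values peak along the midline,
        # which is how part structure and a rough skeleton are extracted
        mask = np.zeros((32, 64))
        mask[8:24, 8:56] = 1.0
        print(poisson_shape(mask).max())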

  11. Effects of additional food in a delayed predator-prey model.

    PubMed

    Sahoo, Banshidhar; Poria, Swarup

    2015-03-01

    We examine the effects of supplying additional food to the predator in a gestation-delay-induced predator-prey system with habitat complexity. Additional food works in favor of predator growth in our model, and its presence reduces the predatory attack rate on the prey. By supplying additional food, the predator population can be controlled. Taking the time delay as the bifurcation parameter, the stability of the coexisting equilibrium point is analyzed. Hopf bifurcation analysis is done with respect to the time delay in the presence of additional food. The direction of the Hopf bifurcations and the stability of the bifurcated periodic solutions are determined by applying the normal form theory and the center manifold theorem. The qualitative dynamical behavior of the model is simulated using experimental parameter values. It is observed that fluctuations of the population size can be controlled either by supplying additional food suitably or by increasing the degree of habitat complexity. It is pointed out that Hopf bifurcation occurs in the system when the delay crosses some critical value. This critical value of the delay depends strongly on the quality and quantity of the supplied additional food. Therefore, the variation of the predator population significantly affects the dynamics of the model. Model results are compared with experimental results, and biological implications of the analytical findings are discussed in the conclusion section.

  12. Understanding the changes in ductility and Poisson's ratio of metallic glasses during annealing from microscopic dynamics

    NASA Astrophysics Data System (ADS)

    Wang, Z.; Ngai, K. L.; Wang, W. H.

    2015-07-01

    In the paper by K. L. Ngai et al. [J. Chem. Phys. 140, 044511 (2014)], the empirical correlation of ductility with the Poisson's ratio, ν_Poisson, found in metallic glasses was theoretically explained by microscopic dynamic processes which link, on the one hand, ductility, and, on the other hand, the Poisson's ratio. Specifically, the dynamic processes are the primitive relaxation in the Coupling Model, which is the precursor of the Johari-Goldstein β-relaxation, and the caged-atom dynamics characterized by the effective Debye-Waller factor f0 or, equivalently, the nearly constant loss (NCL) in susceptibility. All these processes and the parameters characterizing them are accessible experimentally, except f0 or the NCL of caged atoms; thus, so far, the experimental verification of the explanation of the correlation between ductility and Poisson's ratio is incomplete. In the experimental part of this paper, we report dynamic mechanical measurements of the NCL of the metallic glass La60Ni15Al25 as-cast, and of the changes induced by annealing at temperatures below Tg. The observed monotonic decrease of the NCL with aging time, reflecting the corresponding increase of f0, correlates with the decrease of ν_Poisson. This is an important observation because such measurements, not made before, provide the missing link in confirming by experiment the explanation of the correlation of ductility with ν_Poisson. On aging the metallic glass, the shift of the β-relaxation to higher temperatures and the reduction of its relaxation strength are also observed in the isochronal loss spectra. These concomitant changes of the β-relaxation and the NCL are the root cause of embrittlement on aging the metallic glass. The NCL of caged atoms is terminated by the onset of the primitive relaxation in the Coupling Model, which is generally supported by experiments. From this relation, the monotonic decrease of the NCL with aging time is caused by the slowing down of the primitive relaxation and β-relaxation on annealing, and

  13. The non-equilibrium allele frequency spectrum in a Poisson random field framework.

    PubMed

    Kaj, Ingemar; Mugal, Carina F

    2016-10-01

    In population genetic studies, the allele frequency spectrum (AFS) efficiently summarizes genome-wide polymorphism data and shapes a variety of allele frequency-based summary statistics. While existing theory typically features equilibrium conditions, emerging methodology requires an analytical understanding of the build-up of the allele frequencies over time. In this work, we use the framework of Poisson random fields to derive new representations of the non-equilibrium AFS for the case of a Wright-Fisher population model with selection. In our approach, the AFS is a scaling-limit of the expectation of a Poisson stochastic integral and the representation of the non-equilibrium AFS arises in terms of a fixation time probability distribution. The known duality between the Wright-Fisher diffusion process and a birth and death process generalizing Kingman's coalescent yields an additional representation. The results carry over to the setting of a random sample drawn from the population and provide the non-equilibrium behavior of sample statistics. Our findings are consistent with and extend a previous approach where the non-equilibrium AFS solves a partial differential forward equation with a non-traditional boundary condition. Moreover, we provide a bridge to previous coalescent-based work, and hence tie several frameworks together. Since frequency-based summary statistics are widely used in population genetics, for example, to identify candidate loci of adaptive evolution, to infer the demographic history of a population, or to improve our understanding of the underlying mechanics of speciation events, the presented results are potentially useful for a broad range of topics.

  14. Updating a Classic: "The Poisson Distribution and the Supreme Court" Revisited

    ERIC Educational Resources Information Center

    Cole, Julio H.

    2010-01-01

    W. A. Wallis studied vacancies in the US Supreme Court over a 96-year period (1837-1932) and found that the distribution of the number of vacancies per year could be characterized by a Poisson model. This note updates this classic study.
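    As a quick illustration of the kind of fit Wallis performed, the sketch below estimates a Poisson rate from a table of yearly vacancy counts and compares observed with expected frequencies; the counts used here are invented for illustration, not Wallis's or Cole's data.

```python
import numpy as np
from scipy import stats

# Hypothetical yearly vacancy counts (invented, NOT Wallis's data): number of
# years with k = 0, 1, 2, 3 vacancies over a 96-year span.
observed = np.array([59, 27, 9, 1])
k = np.arange(len(observed))
n_years = observed.sum()

# The maximum-likelihood estimate of the Poisson rate is the sample mean.
lam = (k * observed).sum() / n_years

# Expected number of years with k vacancies under the fitted Poisson model.
expected = n_years * stats.poisson.pmf(k, lam)
for ki, o, e in zip(k, observed, expected):
    print(f"k={ki}: observed {o:3d}, expected {e:6.1f}")
```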

  15. The Cauchy Problem for the 3-D Vlasov-Poisson System with Point Charges

    NASA Astrophysics Data System (ADS)

    Marchioro, Carlo; Miot, Evelyne; Pulvirenti, Mario

    2011-07-01

    In this paper we establish global existence and uniqueness of the solution to the three-dimensional Vlasov-Poisson system in the presence of point charges with repulsive interaction. The present analysis extends an analogous two-dimensional result (Caprino and Marchioro in Kinet. Relat. Models 3(2):241-254, 2010).

  16. Multi-Parameter Linear Least-Squares Fitting to Poisson Data One Count at a Time

    NASA Technical Reports Server (NTRS)

    Wheaton, W.; Dunklee, A.; Jacobson, A.; Ling, J.; Mahoney, W.; Radocinski, R.

    1993-01-01

    A standard problem in gamma-ray astronomy data analysis is the decomposition of a set of observed counts, described by Poisson statistics, according to a given multi-component linear model, with underlying physical count rates or fluxes which are to be estimated from the data.

  17. Testing a Gender Additive Model: The Role of Body Image in Adolescent Depression

    ERIC Educational Resources Information Center

    Bearman, Sarah Kate; Stice, Eric

    2008-01-01

    Despite consistent evidence that adolescent girls are at greater risk of developing depression than adolescent boys, risk factor models that account for this difference have been elusive. The objective of this research was to examine risk factors proposed by the "gender additive" model of depression that attempts to partially explain the increased…

  18. An original traffic additional emission model and numerical simulation on a signalized road

    NASA Astrophysics Data System (ADS)

    Zhu, Wen-Xing; Zhang, Jing-Yu

    2017-02-01

    Based on the VSP (Vehicle Specific Power) model, real traffic emissions were theoretically classified into two parts: basic emission and additional emission. An original additional-emission model was presented to calculate a vehicle's emission due to signal control effects. A car-following model was developed and used to describe traffic behavior, including cruising, accelerating, decelerating and idling at a signalized intersection. Simulations were conducted under two situations: a single intersection and two adjacent intersections, each with its respective control policy. Results are in good agreement with the theoretical analysis. It is also shown that the additional emission model may be used in designing signal control policies for modern traffic systems to help address serious environmental problems.
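    For readers unfamiliar with VSP, the sketch below computes it for the driving regimes mentioned above (idling, cruising, accelerating) using a commonly cited light-duty approximation; the coefficients are assumptions for illustration and are not taken from this paper.

```python
def vsp_light_duty(v, a, grade=0.0):
    """Vehicle Specific Power (kW/tonne) for a light-duty vehicle.

    Coefficients follow a commonly cited approximation (an assumption here,
    not taken from the paper): speed v in m/s, acceleration a in m/s^2,
    road grade as a fraction.
    """
    return v * (1.1 * a + 9.81 * grade + 0.132) + 0.000302 * v**3

print(vsp_light_duty(0.0, 0.0))    # idling at the stop line -> 0 kW/t
print(vsp_light_duty(15.0, 0.0))   # cruising at 54 km/h
print(vsp_light_duty(10.0, 1.5))   # accelerating away from the signal
```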

  19. Application of the sine-Poisson equation in solar magnetostatics

    NASA Technical Reports Server (NTRS)

    Webb, G. M.; Zank, G. P.

    1990-01-01

    Solutions of the sine-Poisson equations are used to construct a class of isothermal magnetostatic atmospheres, with one ignorable coordinate corresponding to a uniform gravitational field in a plane geometry. The distributed current in the model (j) is directed along the x-axis, where x is the horizontal ignorable coordinate; (j) varies as the sine of the magnetostatic potential and falls off exponentially with vertical distance from the base, with an e-folding distance equal to the gravitational scale height. Solutions for the magnetostatic potential A corresponding to the one-soliton, two-soliton, and breather solutions of the sine-Gordon equation are studied. Depending on the values of the free parameters in the soliton solutions, horizontally periodic magnetostatic structures are obtained possessing either a single X-type neutral point, multiple neutral X-points, or solutions without X-points.

  20. Nonstationary elementary-field light randomly triggered by Poisson impulses.

    PubMed

    Fernández-Pousa, Carlos R

    2013-05-01

    A stochastic theory of nonstationary light describing the random emission of elementary pulses is presented. The emission is governed by a nonhomogeneous Poisson point process determined by a time-varying emission rate. The model describes, in the appropriate limits, stationary, cyclostationary, locally stationary, and pulsed radiation, and reduces to a Gaussian theory in the limit of dense emission rate. The first- and second-order coherence theories are solved after the computation of second- and fourth-order correlation functions by use of the characteristic function. The ergodicity of second-order correlations under various types of detectors is explored and a number of observables, including optical spectrum, amplitude, and intensity correlations, are analyzed.
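    A nonhomogeneous Poisson point process of the kind driving this model can be sampled by Lewis-Shedler thinning; the sketch below does so for an illustrative time-varying emission rate (the rate function is an assumption, not the paper's).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_nhpp(rate, t_max, rate_max):
    """Sample event times of a nonhomogeneous Poisson process on [0, t_max]
    by Lewis-Shedler thinning: draw candidates from a homogeneous process of
    intensity rate_max, keep each with probability rate(t) / rate_max."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate_max)
        if t > t_max:
            return np.array(times)
        if rng.random() < rate(t) / rate_max:
            times.append(t)

# Illustrative time-varying emission rate (pulsed source).
rate = lambda t: 25.0 * (1.0 + np.sin(2 * np.pi * t))
emissions = sample_nhpp(rate, t_max=10.0, rate_max=50.0)
print(len(emissions), "elementary pulses emitted")
```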

  1. Estimate of influenza cases using generalized linear, additive and mixed models.

    PubMed

    Oviedo, Manuel; Domínguez, Ángela; Pilar Muñoz, M

    2015-01-01

    We investigated the relationship between reported cases of influenza in Catalonia (Spain) and several covariates: population, age, date of reporting, and health region, during 2010-2014, using data obtained from the SISAP program (Institut Catala de la Salut - Generalitat of Catalonia). Reported cases were first related to the covariates through a descriptive analysis. Generalized Linear Models, Generalized Additive Models and Generalized Additive Mixed Models were then used to estimate the evolution of the transmission of influenza. Additive models can estimate non-linear effects of the covariates by smooth functions, and mixed models can account for data dependence and variability in factor variables using correlation structures and random effects, respectively. The incidence rate of influenza was calculated per 100 000 people. The mean rate was 13.75 (range 0-27.5) in the winter months (December, January, February) and 3.38 (range 0-12.57) in the remaining months. Statistical analysis showed that Generalized Additive Mixed Models were better adapted to the temporal evolution of influenza (serial correlation 0.59) than classical linear models.

  2. Poisson-Boltzmann versus Size-Modified Poisson-Boltzmann Electrostatics Applied to Lipid Bilayers.

    PubMed

    Wang, Nuo; Zhou, Shenggao; Kekenes-Huskey, Peter M; Li, Bo; McCammon, J Andrew

    2014-12-26

    Mean-field methods, such as the Poisson-Boltzmann equation (PBE), are often used to calculate the electrostatic properties of molecular systems. In the past two decades, an enhancement of the PBE, the size-modified Poisson-Boltzmann equation (SMPBE), has been reported. Here, the PBE and the SMPBE are reevaluated for realistic molecular systems, namely, lipid bilayers, under eight different sets of input parameters. The SMPBE appears to reproduce the molecular dynamics simulation results better than the PBE only under specific parameter sets, but in general, it performs no better than the Stern layer correction of the PBE. These results emphasize the need for careful discussions of the accuracy of mean-field calculations on realistic systems with respect to the choice of parameters and call for reconsideration of the cost-efficiency and the significance of the current SMPBE formulation.
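    For orientation, the linearized (Debye-Hueckel) limit of the PBE in one dimension reduces to d^2(phi)/dx^2 = kappa^2 * phi, whose solution decays exponentially from the surface. The sketch below solves it by finite differences and checks against the analytic solution; the parameters are illustrative, not the lipid-bilayer setups studied in the paper.

```python
import numpy as np

# Linearized Poisson-Boltzmann (Debye-Hueckel) screening in 1D:
#   d^2(phi)/dx^2 = kappa^2 * phi,  phi(0) = phi0,  phi(L) = 0.
kappa, phi0 = 1.0, 25.0     # inverse Debye length (1/nm), surface potential (mV)
L, n = 10.0, 1001
x = np.linspace(0.0, L, n)
h = x[1] - x[0]

# Assemble and solve the finite-difference system.
A = np.zeros((n, n))
i = np.arange(1, n - 1)
A[i, i - 1] = 1.0 / h**2
A[i, i + 1] = 1.0 / h**2
A[i, i] = -2.0 / h**2 - kappa**2
A[0, 0] = A[-1, -1] = 1.0   # Dirichlet rows
b = np.zeros(n)
b[0] = phi0
phi = np.linalg.solve(A, b)

# Compare with the analytic decay phi0 * exp(-kappa * x).
print("max deviation:", np.abs(phi - phi0 * np.exp(-kappa * x)).max())
```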

  3. A Cartesian grid embedded boundary method for Poisson's equation on irregular domains

    SciTech Connect

    Johansen, H.; Colella, P.

    1997-01-31

    The authors present a numerical method for solving Poisson's equation, with variable coefficients and Dirichlet boundary conditions, on two-dimensional regions. The approach uses a finite-volume discretization, which embeds the domain in a regular Cartesian grid. They treat the solution as a cell-centered quantity, even when those centers are outside the domain. Cells that contain a portion of the domain boundary use conservation differencing of second-order accurate fluxes on each cell volume. The calculation of the boundary flux ensures that the conditioning of the matrix is relatively unaffected by small cell volumes. This allows them to use multi-grid iterations with a simple point relaxation strategy. They have combined this with an adaptive mesh refinement (AMR) procedure. They provide evidence that the algorithm is second-order accurate on various exact solutions, and compare the adaptive and non-adaptive calculations.
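    The regular-grid baseline that such embedded-boundary methods extend is the classical 5-point discretization with point relaxation. The sketch below implements that baseline only (no cut cells, boundary fluxes, or AMR) on the unit square with a known exact solution.

```python
import numpy as np

# 5-point finite-volume Poisson solve, lap(u) = f, on the unit square with
# homogeneous Dirichlet data; Jacobi point relaxation.
n = 64
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
X, Y = np.meshgrid(x, x, indexing="ij")
f = -2.0 * np.pi**2 * np.sin(np.pi * X) * np.sin(np.pi * Y)  # exact u below

u = np.zeros((n + 2, n + 2))     # ghost ring holds the zero boundary data
for _ in range(20000):           # Jacobi iterations (slow but simple)
    u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                            + u[1:-1, :-2] + u[1:-1, 2:] - h**2 * f)

exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
print("max error:", np.abs(u[1:-1, 1:-1] - exact).max())   # O(h^2)
```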

  4. Poisson cohomology of scalar multidimensional Dubrovin-Novikov brackets

    NASA Astrophysics Data System (ADS)

    Carlet, Guido; Casati, Matteo; Shadrin, Sergey

    2017-04-01

    We compute the Poisson cohomology of a scalar Poisson bracket of Dubrovin-Novikov type with D independent variables. We find that the second and third cohomology groups are generically non-vanishing in D > 1. Hence, in contrast with the D = 1 case, the deformation theory in the multivariable case is non-trivial.

  5. LETTER TO THE EDITOR: New generalized Poisson structures

    NASA Astrophysics Data System (ADS)

    de Azcárraga, J. A.; Perelomov, A. M.; Pérez Bueno, J. C.

    1996-04-01

    New generalized Poisson structures are introduced by using suitable skew-symmetric contravariant tensors of even order. The corresponding `Jacobi identities' are provided by conditions on these tensors, which may be understood as cocycle conditions. As an example, we provide the linear generalized Poisson structures which can be constructed on the dual spaces of simple Lie algebras.

  6. The Schouten - Nijenhuis bracket, cohomology and generalized Poisson structures

    NASA Astrophysics Data System (ADS)

    de Azcárraga, J. A.; Perelomov, A. M.; Pérez Bueno, J. C.

    1996-12-01

    Newly introduced generalized Poisson structures based on suitable skew-symmetric contravariant tensors of even order are discussed in terms of the Schouten - Nijenhuis bracket. The associated `Jacobi identities' are expressed as conditions on these tensors, the cohomological content of which is given. In particular, we determine the linear generalized Poisson structures which can be constructed on the dual spaces of simple Lie algebras.

  7. Low porosity metallic periodic structures with negative Poisson's ratio.

    PubMed

    Taylor, Michael; Francesconi, Luca; Gerendás, Miklós; Shanian, Ali; Carson, Carl; Bertoldi, Katia

    2014-04-16

    Auxetic behavior in low porosity metallic structures is demonstrated via a simple system of orthogonal elliptical voids. In this minimal 2D system, the Poisson's ratio can be effectively controlled by changing the aspect ratio of the voids. In this way, large negative values of Poisson's ratio can be achieved, indicating an effective strategy for designing auxetic structures with desired porosity.

  8. Extreme values of the Poisson's ratio of cubic crystals

    NASA Astrophysics Data System (ADS)

    Epishin, A. I.; Lisovenko, D. S.

    2016-10-01

    The problem of determining the extrema of Poisson's ratio for cubic crystals is considered, and analytical expressions are derived to calculate its extreme values. It follows from the obtained solution that, apart from extreme values at standard orientations, extreme values of Poisson's ratio can also be detected at special orientations deviated from the standard ones. The derived analytical expressions are used to calculate the extreme values of Poisson's ratio for a large number of known cubic crystals. The extremely high values of Poisson's ratio are shown to be characteristic of metastable crystals, such as crystals with the shape memory effect caused by martensitic transformation. These crystals are mainly represented by metallic alloys. For some crystals, the absolute extrema of Poisson's ratio can exceed the standard values, which are −1 for a standard minimum and +2 for a standard maximum.

  9. The Poisson Gamma distribution for wind speed data

    NASA Astrophysics Data System (ADS)

    Ćakmakyapan, Selen; Özel, Gamze

    2016-04-01

    Wind energy is one of the most significant alternative clean energy sources and one of the most rapidly developing renewable energy sources in the world. For the evaluation of wind energy potential, probability density functions (pdfs) are usually used to model wind speed distributions. Selecting an appropriate pdf reduces the wind power estimation error and also captures the characteristics of the wind resource. In the literature, various pdfs have been used to model wind speed data for wind energy applications. In this study, we propose a new probability distribution to model wind speed data. We first define the new probability distribution, named the Poisson-Gamma (PG) distribution, and analyze wind speed data sets obtained from the Turkish State Meteorological Service. We then model the data sets with the Exponential, Weibull, Lomax, 3-parameter Burr, Gumbel, Gamma and Rayleigh distributions, which are commonly used for wind speed data, as well as the PG distribution. Finally, we compare the fitted distributions to select the best model and demonstrate that the PG distribution models the data sets better.
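    One generic way to build a Poisson-Gamma compound is to sum a Poisson-distributed number of Gamma variables; the sketch below samples from such a construction. The paper's exact PG parameterization may differ, so this is an assumption-labeled illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_poisson_gamma(lam, shape, scale, size):
    """Compound Poisson-Gamma draws: X = sum of N iid Gamma(shape, scale)
    terms with N ~ Poisson(lam); X = 0 when N = 0. A generic construction --
    the paper's exact PG parameterization may differ."""
    n = rng.poisson(lam, size)
    # A sum of n iid Gammas with common scale is Gamma(n * shape, scale).
    draws = rng.gamma(np.maximum(n, 1) * shape, scale)
    return np.where(n > 0, draws, 0.0)

wind = sample_poisson_gamma(lam=3.0, shape=2.0, scale=1.2, size=10_000)
print("mean:", wind.mean(), "(theory:", 3.0 * 2.0 * 1.2, ")")
```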

  10. An unbiased risk estimator for image denoising in the presence of mixed poisson-gaussian noise.

    PubMed

    Le Montagner, Yoann; Angelini, Elsa D; Olivo-Marin, Jean-Christophe

    2014-03-01

    The behavior and performance of denoising algorithms are governed by one or several parameters, whose optimal settings depend on the content of the processed image and the characteristics of the noise, and are generally designed to minimize the mean squared error (MSE) between the denoised image returned by the algorithm and a virtual ground truth. In this paper, we introduce a new Poisson-Gaussian unbiased risk estimator (PG-URE) of the MSE applicable to a mixed Poisson-Gaussian noise model that unifies the widely used Gaussian and Poisson noise models in fluorescence bioimaging applications. We propose a stochastic methodology to evaluate this estimator in the case when little is known about the internal machinery of the considered denoising algorithm, and we analyze both theoretically and empirically the characteristics of the PG-URE estimator. Finally, we evaluate the PG-URE-driven parametrization for three standard denoising algorithms, with and without variance stabilizing transforms, and different characteristics of the Poisson-Gaussian noise mixture.
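    The mixed Poisson-Gaussian measurement model referred to here is commonly written as z = alpha * p + n with p ~ Poisson(x/alpha) and n Gaussian; the sketch below simulates it and checks the implied per-pixel variance. Symbols and values are illustrative, not the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Mixed Poisson-Gaussian model: z = alpha * Poisson(x0 / alpha) + N(0, sigma^2),
# where alpha is a detector gain. Values are illustrative assumptions.
x0 = rng.uniform(5.0, 50.0, size=(64, 64))      # noise-free image
alpha, sigma = 2.0, 1.5
z = alpha * rng.poisson(x0 / alpha) + rng.normal(0.0, sigma, x0.shape)

# Per-pixel variance should be ~ alpha * x0 + sigma^2.
print(np.mean((z - x0) ** 2), np.mean(alpha * x0 + sigma**2))
```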

  11. AFMPB: An adaptive fast multipole Poisson-Boltzmann solver for calculating electrostatics in biomolecular systems

    NASA Astrophysics Data System (ADS)

    Lu, Benzhuo; Cheng, Xiaolin; Huang, Jingfang; McCammon, J. Andrew

    2010-06-01

    A Fortran program package is introduced for rapid evaluation of the electrostatic potentials and forces in biomolecular systems modeled by the linearized Poisson-Boltzmann equation. The numerical solver utilizes a well-conditioned boundary integral equation (BIE) formulation, a node-patch discretization scheme, a Krylov subspace iterative solver package with reverse communication protocols, and an adaptive new version of fast multipole method in which the exponential expansions are used to diagonalize the multipole-to-local translations. The program and its full description, as well as several closely related libraries and utility tools are available at http://lsec.cc.ac.cn/~lubz/afmpb.html and a mirror site at http://mccammon.ucsd.edu/. This paper is a brief summary of the program: the algorithms, the implementation and the usage. Program summary: Program title: AFMPB: Adaptive fast multipole Poisson-Boltzmann solver Catalogue identifier: AEGB_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGB_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GPL 2.0 No. of lines in distributed program, including test data, etc.: 453 649 No. of bytes in distributed program, including test data, etc.: 8 764 754 Distribution format: tar.gz Programming language: Fortran Computer: Any Operating system: Any RAM: Depends on the size of the discretized biomolecular system Classification: 3 External routines: Pre- and post-processing tools are required for generating the boundary elements and for visualization. Users can use MSMS ( http://www.scripps.edu/~sanner/html/msms_home.html) for pre-processing, and VMD ( http://www.ks.uiuc.edu/Research/vmd/) for visualization. Sub-programs included: An iterative Krylov subspace solvers package from SPARSKIT by Yousef Saad ( http://www-users.cs.umn.edu/~saad/software/SPARSKIT/sparskit.html), and the fast multipole methods subroutines from FMMSuite ( http

  12. Integrated reservoir characterization: Improvement in heterogeneities stochastic modelling by integration of additional external constraints

    SciTech Connect

    Doligez, B.; Eschard, R.; Geffroy, F.

    1997-08-01

    The classical approach to constructing reservoir models is to start with a fine-scale geological model which is informed with petrophysical properties. Scaling-up techniques then make it possible to obtain a reservoir model which is compatible with fluid flow simulators. Geostatistical modelling techniques are widely used to build the geological models before scaling-up. These methods provide equiprobable images of the area under investigation, which honor the well data and whose variability is the same as the variability computed from the data. At an appraisal phase, when few data are available, or when the wells are insufficient to describe all the heterogeneities and the behavior of the field, additional constraints are needed to obtain a more realistic geological model. For example, seismic data or stratigraphic models can provide average reservoir information with an excellent areal coverage, but with a poor vertical resolution. New advances in modelling techniques now make it possible to integrate this type of additional external information in order to constrain the simulations. In particular, 2D or 3D seismic-derived information grids, or sand-shale ratio maps coming from stratigraphic models, can be used as external drifts to compute the geological image of the reservoir at the fine scale. Examples are presented to illustrate the use of these new tools, their impact on the final reservoir model, and their sensitivity to some key parameters.

  13. Analysis of error-prone survival data under additive hazards models: measurement error effects and adjustments.

    PubMed

    Yan, Ying; Yi, Grace Y

    2016-07-01

    Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights into measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods.

  14. The role of Poisson's binomial distribution in the analysis of TEM images.

    PubMed

    Tejada, Arturo; den Dekker, Arnold J

    2011-11-01

    Frank's observation that a TEM bright-field image acquired under non-stationary conditions can be modeled by the time integral of the standard TEM image model [J. Frank, Nachweis von Objektbewegungen im lichtoptischen Diffraktogramm von elektronenmikroskopischen Aufnahmen, Optik 30 (2) (1969) 171-180] is re-derived here using counting statistics based on Poisson's binomial distribution. The approach yields a statistical image model that is suitable for image analysis and simulation.
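    Poisson's binomial distribution (the number of successes among independent trials with unequal probabilities) has no simple closed-form pmf but is easy to compute by convolution, as in the sketch below; the probabilities used are illustrative.

```python
import numpy as np

def poisson_binomial_pmf(p):
    """PMF of the number of successes among independent Bernoulli trials
    with distinct probabilities p_i (Poisson's binomial distribution),
    built by iterated convolution in O(n^2)."""
    pmf = np.array([1.0])
    for pi in p:
        pmf = np.convolve(pmf, [1.0 - pi, pi])
    return pmf

# Illustrative per-trial probabilities (not from the paper):
p = [0.10, 0.30, 0.25, 0.05]
pmf = poisson_binomial_pmf(p)
print(pmf)            # P(0 successes), P(1), ..., P(4)
print(pmf.sum())      # sums to 1
```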

  15. Langevin-Poisson-EQT: A dipolar solvent based quasi-continuum approach for electric double layers

    NASA Astrophysics Data System (ADS)

    Mashayak, S. Y.; Aluru, N. R.

    2017-01-01

    Water is a highly polar solvent. As a result, electrostatic interactions of interfacial water molecules play a dominant role in determining the distribution of ions in electric double layers (EDLs). Near a surface, an inhomogeneous and anisotropic arrangement of water molecules gives rise to pronounced variations in the electrostatic and hydration energies of ions. Therefore, a detailed description of the structural and dielectric properties of water is important to study EDLs. However, most theoretical models ignore the molecular effects of water and treat water as a background continuum with a uniform dielectric permittivity. Explicit consideration of water polarization and hydration of ions is both theoretically and numerically challenging. In this work, we present an empirical potential-based quasi-continuum theory (EQT) for EDL, which incorporates the polarization and hydration effects of water explicitly. In EQT, water molecules are modeled as Langevin point dipoles and a point dipole based coarse-grained model for water is developed systematically. The space dependence of the dielectric permittivity of water is included in the Poisson equation to compute the electrostatic potential. In addition, to reproduce hydration of ions, ion-water coarse-grained potentials are developed. We demonstrate the EQT framework for EDL by simulating NaCl aqueous electrolyte confined inside slit-like capacitor channels at various ion concentrations and surface charge densities. We show that the ion and water density predictions from EQT agree well with the reference molecular dynamics simulations.
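    A Langevin point dipole of moment p in field E has mean orientation ⟨cos θ⟩ = L(pE/k_BT), where L is the Langevin function; the small sketch below evaluates it, showing the linear weak-field regime and strong-field saturation. This is the textbook function only, not EQT's full machinery.

```python
import numpy as np

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x: the mean cosine of the angle
    between a point dipole and the applied field, with x = p*E/(k_B*T)."""
    x = np.asarray(x, dtype=float)
    safe = np.where(np.abs(x) < 1e-6, 1.0, x)   # avoid 0/0 near the origin
    return np.where(np.abs(x) < 1e-6, x / 3.0, 1.0 / np.tanh(safe) - 1.0 / safe)

# Weak fields polarize linearly (L ~ x/3); strong fields saturate (L -> 1).
print(langevin([0.01, 1.0, 10.0]))
```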

  16. Comparing GWAS Results of Complex Traits Using Full Genetic Model and Additive Models for Revealing Genetic Architecture

    PubMed Central

    Monir, Md. Mamun; Zhu, Jun

    2017-01-01

    Most of the genome-wide association studies (GWASs) for human complex diseases have ignored dominance, epistasis and ethnic interactions. We conducted comparative GWASs for total cholesterol using a full genetic model and additive models, which illustrates the impact of ignoring these genetic effects on analysis results and demonstrates how the genetic effects of multiple loci can differ across ethnic groups. There were 15 quantitative trait loci, comprising 13 individual loci and 3 pairs of epistasis loci, identified by the full model, whereas only 14 loci (9 common loci and 5 different loci) were identified by the multi-locus additive model. Moreover, 4 loci detected by the full model were not detected using the multi-locus additive model. PLINK analysis identified two loci, and GCTA analysis detected only one locus with genome-wide significance. The full model identified three previously reported genes as well as several new genes. Bioinformatics analysis showed that some of the new genes are related to cholesterol-related chemicals and/or diseases. Analyses of the cholesterol data and simulation studies revealed that the full model performs better than the additive model in terms of detection power and unbiased estimation of genetic variants of complex traits. PMID:28079101

  17. Sparse Additive Ordinary Differential Equations for Dynamic Gene Regulatory Network Modeling.

    PubMed

    Wu, Hulin; Lu, Tao; Xue, Hongqi; Liang, Hua

    2014-04-02

    The gene regulation network (GRN) is a high-dimensional complex system, which can be represented by various mathematical or statistical models. The ordinary differential equation (ODE) model is one of the popular dynamic GRN models. High-dimensional linear ODE models have been proposed to identify GRNs, but with a limitation of the linear regulation effect assumption. In this article, we propose a sparse additive ODE (SA-ODE) model, coupled with ODE estimation methods and adaptive group LASSO techniques, to model dynamic GRNs that could flexibly deal with nonlinear regulation effects. The asymptotic properties of the proposed method are established and simulation studies are performed to validate the proposed approach. An application example for identifying the nonlinear dynamic GRN of T-cell activation is used to illustrate the usefulness of the proposed method.

  18. Sparse Additive Ordinary Differential Equations for Dynamic Gene Regulatory Network Modeling

    PubMed Central

    Wu, Hulin; Lu, Tao; Xue, Hongqi; Liang, Hua

    2014-01-01

    Summary: The gene regulation network (GRN) is a high-dimensional complex system, which can be represented by various mathematical or statistical models. The ordinary differential equation (ODE) model is one of the popular dynamic GRN models. High-dimensional linear ODE models have been proposed to identify GRNs, but with a limitation of the linear regulation effect assumption. In this article, we propose a sparse additive ODE (SA-ODE) model, coupled with ODE estimation methods and adaptive group LASSO techniques, to model dynamic GRNs that could flexibly deal with nonlinear regulation effects. The asymptotic properties of the proposed method are established and simulation studies are performed to validate the proposed approach. An application example for identifying the nonlinear dynamic GRN of T-cell activation is used to illustrate the usefulness of the proposed method. PMID:25061254

  19. Parametrically Guided Generalized Additive Models with Application to Mergers and Acquisitions Data.

    PubMed

    Fan, Jianqing; Maity, Arnab; Wang, Yihui; Wu, Yichao

    2013-01-01

    Generalized nonparametric additive models present a flexible way to evaluate the effects of several covariates on a general outcome of interest via a link function. In this modeling framework, one assumes that the effect of each of the covariates is nonparametric and additive. However, in practice, often there is prior information available about the shape of the regression functions, possibly from pilot studies or exploratory analysis. In this paper, we consider such situations and propose an estimation procedure where the prior information is used as a parametric guide to fit the additive model. Specifically, we first posit a parametric family for each of the regression functions using the prior information (parametric guides). After removing these parametric trends, we then estimate the remainder of the nonparametric functions using a nonparametric generalized additive model, and form the final estimates by adding back the parametric trend. We investigate the asymptotic properties of the estimates and show that when a good guide is chosen, the asymptotic variance of the estimates can be reduced significantly while keeping the asymptotic variance same as the unguided estimator. We observe the performance of our method via a simulation study and demonstrate our method by applying to a real data set on mergers and acquisitions.

  20. Representational Flexibility and Problem-Solving Ability in Fraction and Decimal Number Addition: A Structural Model

    ERIC Educational Resources Information Center

    Deliyianni, Eleni; Gagatsis, Athanasios; Elia, Iliada; Panaoura, Areti

    2016-01-01

    The aim of this study was to propose and validate a structural model in fraction and decimal number addition, which is founded primarily on a synthesis of major theoretical approaches in the field of representations in Mathematics and also on previous research on the learning of fractions and decimals. The study was conducted among 1,701 primary…

  1. Measuring Children's Proportional Reasoning, The "Tendency" for an Additive Strategy and The Effect of Models

    ERIC Educational Resources Information Center

    Misailadou, Christina; Williams, Julian

    2003-01-01

    We report a study of 10-14 year old children's use of additive strategies while solving ratio and proportion tasks. Rasch methodology was used to develop a diagnostic instrument that reveals children's misconceptions. Two versions of this instrument, one with "models" thought to facilitate proportional reasoning and one without were…

  2. Parametrically Guided Generalized Additive Models with Application to Mergers and Acquisitions Data

    PubMed Central

    Fan, Jianqing; Maity, Arnab; Wang, Yihui; Wu, Yichao

    2012-01-01

    Generalized nonparametric additive models present a flexible way to evaluate the effects of several covariates on a general outcome of interest via a link function. In this modeling framework, one assumes that the effect of each of the covariates is nonparametric and additive. However, in practice, often there is prior information available about the shape of the regression functions, possibly from pilot studies or exploratory analysis. In this paper, we consider such situations and propose an estimation procedure where the prior information is used as a parametric guide to fit the additive model. Specifically, we first posit a parametric family for each of the regression functions using the prior information (parametric guides). After removing these parametric trends, we then estimate the remainder of the nonparametric functions using a nonparametric generalized additive model, and form the final estimates by adding back the parametric trend. We investigate the asymptotic properties of the estimates and show that when a good guide is chosen, the asymptotic variance of the estimates can be reduced significantly while keeping the asymptotic variance same as the unguided estimator. We observe the performance of our method via a simulation study and demonstrate our method by applying to a real data set on mergers and acquisitions. PMID:23645976

  3. Generalized HPC method for the Poisson equation

    NASA Astrophysics Data System (ADS)

    Bardazzi, A.; Lugni, C.; Antuono, M.; Graziani, G.; Faltinsen, O. M.

    2015-10-01

    An efficient and innovative numerical algorithm based on the use of Harmonic Polynomials on each Cell of the computational domain (HPC method) was recently proposed by Shao and Faltinsen (2014) [1] to solve boundary value problems governed by the Laplace equation. Here, we extend the HPC method to the solution of non-homogeneous elliptic boundary value problems. The homogeneous solution, i.e. the Laplace equation, is represented through a polynomial function with harmonic polynomials, while the particular solution of the Poisson equation is provided by a bi-quadratic function. This scheme has been called the generalized HPC method. The present algorithm, accurate up to 4th order, proved to be efficient, i.e. easy to implement and computationally cheap, for the solution of two-dimensional elliptic boundary value problems. Furthermore, it provides an analytical representation of the solution within each computational stencil, which allows its coupling with existing numerical algorithms within an efficient domain-decomposition strategy or within an adaptive mesh refinement algorithm.

  4. Integer lattice dynamics for Vlasov-Poisson

    NASA Astrophysics Data System (ADS)

    Mocz, Philip; Succi, Sauro

    2017-03-01

    We revisit the integer lattice (IL) method to numerically solve the Vlasov-Poisson equations, and show that a slight variant of the method is a very easy, viable, and efficient numerical approach to study the dynamics of self-gravitating, collisionless systems. The distribution function lives in a discretized lattice phase-space, and each time-step in the simulation corresponds to a simple permutation of the lattice sites. Hence, the method is Lagrangian, conservative, and fully time-reversible. IL complements other existing methods, such as N-body/particle mesh (computationally efficient, but affected by Monte Carlo sampling noise and two-body relaxation) and finite volume (FV) direct integration schemes (expensive, accurate but diffusive). We also present improvements to the FV scheme, using a moving-mesh approach inspired by IL, to reduce numerical diffusion and the time-step criterion. Being a direct integration scheme like FV, IL is memory limited (memory requirement for a full 3D problem scales as N^6, where N is the resolution per linear phase-space dimension). However, we describe a new technique for achieving N^4 scaling. The method offers promise for investigating the full 6D phase-space of collisionless systems of stars and dark matter.

  5. Does the model of additive effect in placebo research still hold true? A narrative review

    PubMed Central

    Berger, Bettina; Weger, Ulrich; Heusser, Peter

    2017-01-01

    Personalised and contextualised care has become a major demand of people involved in healthcare, suggesting a move toward person-centred medicine. The assessment of person-centred medicine can be most effectively achieved if treatments are investigated using ‘with versus without’ person-centredness or integrative study designs. However, this assumes that the components of an integrative or person-centred intervention have an additive relationship to produce the total effect. Beecher’s model of additivity assumes an additive relation between placebo and drug effects and thus presents an arithmetic summation. So far, no review has been carried out assessing the validity of the additive model, which is questioned and more closely investigated in this review. Initial searches for primary studies were undertaken in July 2016 using Pubmed and Google Scholar. In order to find matching publications of similar magnitude for the comparison part of this review, corresponding matches for all included reviews were sought. A total of 22 reviews and 3 clinical and experimental studies fulfilled the inclusion criteria. The results pointed to the following factors actively questioning the additive model: interactions of various effects, trial design, conditioning, context effects and factors, neurobiological factors, mechanism of action, statistical factors, intervention-specific factors (alcohol, caffeine), side-effects and type of intervention. All but one of the closely assessed publications questioned the additive model. A closer examination of study design is necessary. A more systematic approach geared towards solutions would be a suggestion for future research in this field. PMID:28321318

  6. Formation and reduction of carcinogenic furan in various model systems containing food additives.

    PubMed

    Kim, Jin-Sil; Her, Jae-Young; Lee, Kwang-Geun

    2015-12-15

    The aim of this study was to analyse and reduce furan in various model systems. Furan model systems consisting of monosaccharides (0.5M glucose and ribose), amino acids (0.5M alanine and serine) and/or 1.0M ascorbic acid were heated at 121°C for 25 min. The effects of food additives (each 0.1M) such as metal ions (iron sulphate, magnesium sulphate, zinc sulphate and calcium sulphate), antioxidants (BHT and BHA), and sodium sulphite on the formation of furan were measured. The level of furan formed in the model systems was 6.8-527.3 ng/ml. The level of furan in the model systems of glucose/serine and glucose/alanine increased 7-674% when food additives were added. In contrast, the level of furan decreased by 18-51% in the Maillard reaction model systems that included ribose and alanine/serine with food additives except zinc sulphate.

  7. Modeling Longitudinal Data with Generalized Additive Models: Applications to Single-Case Designs

    ERIC Educational Resources Information Center

    Sullivan, Kristynn J.; Shadish, William R.

    2013-01-01

    Single case designs (SCDs) are short time series that assess intervention effects by measuring units repeatedly over time both in the presence and absence of treatment. For a variety of reasons, interest in the statistical analysis and meta-analysis of these designs has been growing in recent years. This paper proposes modeling SCD data with…

  8. NB-PLC channel modelling with cyclostationary noise addition & OFDM implementation for smart grid

    NASA Astrophysics Data System (ADS)

    Thomas, Togis; Gupta, K. K.

    2016-03-01

    Power line communication (PLC) technology can be a viable solution for future ubiquitous networks because it provides a cheaper alternative to other wired technologies currently being used for communication. In the smart grid, PLC is used to support low-rate communication on the low voltage (LV) distribution network. In this paper, we propose a channel model for narrowband (NB) PLC in the frequency range 5 kHz to 500 kHz by using ABCD parameters with cyclostationary noise addition. The behaviour of the channel was studied by adding an 11 kV/230 V transformer and by varying the load and its location. Bit error rate (BER) versus signal-to-noise ratio (SNR) was plotted for the proposed model by employing OFDM. Our simulation results based on the proposed channel model show an acceptable performance in terms of bit error rate versus signal-to-noise ratio, which enables the communication required for smart grid applications.
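    The ABCD-parameter approach represents each line section and tapped load as a 2×2 transmission matrix, so a cascade is just a matrix product; the sketch below computes a source-to-load transfer function this way. All line and load values are illustrative assumptions, not the paper's measured NB-PLC parameters.

```python
import numpy as np

# Two-port ABCD (transmission) matrices: cascading line sections and tapped
# loads is a matrix product. All parameter values are illustrative.
f = 100e3                       # 100 kHz, inside the 5-500 kHz NB-PLC band
w = 2.0 * np.pi * f
Zs, Zl = 50.0, 50.0             # source and load impedances (ohms)

def line_section(R, L, G, C, length):
    """ABCD matrix of a uniform RLGC transmission-line section."""
    gamma = np.sqrt((R + 1j * w * L) * (G + 1j * w * C))  # propagation const.
    Z0 = np.sqrt((R + 1j * w * L) / (G + 1j * w * C))     # characteristic Z
    gl = gamma * length
    return np.array([[np.cosh(gl), Z0 * np.sinh(gl)],
                     [np.sinh(gl) / Z0, np.cosh(gl)]])

def shunt_load(Z):
    """ABCD matrix of a shunt impedance (e.g., a tapped household load)."""
    return np.array([[1.0, 0.0], [1.0 / Z, 1.0]])

# Cascade: 200 m of line, a 100-ohm tap, then 300 m of line.
M = (line_section(0.1, 0.6e-6, 1e-9, 50e-12, 200.0)
     @ shunt_load(100.0)
     @ line_section(0.1, 0.6e-6, 1e-9, 50e-12, 300.0))

A, B, C, D = M.ravel()
H = Zl / (A * Zl + B + Zs * (C * Zl + D))   # source-to-load voltage transfer
print(20.0 * np.log10(abs(H)), "dB")
```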

  9. The Zero-truncated Poisson with Right Censoring: an Application to Translational Breast Cancer Research.

    PubMed

    Yeh, Hung-Wen; Gajewski, Byron; Mukhopadhyay, Purna; Behbod, Fariba

    2012-08-30

    We propose to analyze positive count data with right censoring from Behbod et al. (2009) using the censored zero-truncated Poisson model (CZTP). The comparison of truncated means across subgroups in each cell line is carried out through a log-linear model that links the un-truncated Poisson parameter and regression covariates. We also perform simulations to evaluate the performance of the CZTP model for finite and large sample sizes. In general, the CZTP model provides accurate and precise estimates. However, for data with small means and small sample sizes, it may be more appropriate to make inference based on the mean counts rather than on the regression coefficients. For small sample sizes and moderate means, the likelihood ratio test is more reliable than the Wald test. We also demonstrate how power analysis can be used to justify and/or guide the choice of censoring thresholds in study design. A SAS macro is provided in the Appendix for readers' reference.
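    Without the censoring component, the zero-truncated Poisson building block is straightforward to fit by maximum likelihood, as the sketch below shows on invented counts; the full CZTP model in the paper additionally handles right censoring.

```python
import numpy as np
from scipy import stats, optimize

def ztp_logpmf(k, mu):
    """Zero-truncated Poisson: a Poisson(mu) count conditioned on k >= 1."""
    return stats.poisson.logpmf(k, mu) - np.log1p(-np.exp(-mu))

counts = np.array([1, 1, 2, 3, 1, 4, 2, 2, 1, 5])   # illustrative data
nll = lambda mu: -ztp_logpmf(counts, mu).sum()
res = optimize.minimize_scalar(nll, bounds=(1e-6, 50.0), method="bounded")

mu_hat = res.x
print("un-truncated Poisson mean:", mu_hat)
print("implied truncated mean   :", mu_hat / (1.0 - np.exp(-mu_hat)))
```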

  10. Dynamics of a prey-predator system under Poisson white noise excitation

    NASA Astrophysics Data System (ADS)

    Pan, Shan-Shan; Zhu, Wei-Qiu

    2014-10-01

    The classical Lotka-Volterra (LV) model is a well-known mathematical model for prey-predator ecosystems. In the present paper, the pulse-type version of the stochastic LV model, in which the effect of a random natural environment is modeled as Poisson white noise, is investigated by using the stochastic averaging method. The averaged generalized Itô stochastic differential equation and Fokker-Planck-Kolmogorov (FPK) equation are derived for a prey-predator ecosystem driven by Poisson white noise. An approximate stationary solution for the averaged generalized FPK equation is obtained by using the perturbation method. The effect of the prey self-competition parameter ε2s on ecosystem behavior is evaluated. The analytical result is confirmed by corresponding Monte Carlo (MC) simulation.

  11. Poisson's Ratios and Volume Changes for Plastically Orthotropic Material

    NASA Technical Reports Server (NTRS)

    Stowell, Elbridge Z; Pride, Richard A

    1956-01-01

    Measurements of Poisson's ratios have been made in three orthogonal directions on aluminum-alloy blocks in compression and on stainless-steel sheet in both tension and compression. These measurements, as well as those obtained by density determinations, show that there is no permanent plastic change in volume within the accuracy of observation. A method is suggested whereby a correlation may be effected between the measured individual values of the Poisson's ratios and the stress-strain curves for the material. Allowance must be made for the difference between the stress-strain curves in tension and compression; this difference, wherever it appears, is accompanied by significant changes in the Poisson's ratios.

  12. Bayesian Inference and Online Learning in Poisson Neuronal Networks.

    PubMed

    Huang, Yanping; Rao, Rajesh P N

    2016-08-01

    Motivated by the growing evidence for Bayesian computation in the brain, we show how a two-layer recurrent network of Poisson neurons can perform both approximate Bayesian inference and learning for any hidden Markov model. The lower-layer sensory neurons receive noisy measurements of hidden world states. The higher-layer neurons infer a posterior distribution over world states via Bayesian inference from inputs generated by sensory neurons. We demonstrate how such a neuronal network with synaptic plasticity can implement a form of Bayesian inference similar to Monte Carlo methods such as particle filtering. Each spike in a higher-layer neuron represents a sample of a particular hidden world state. The spiking activity across the neural population approximates the posterior distribution over hidden states. In this model, variability in spiking is regarded not as a nuisance but as an integral feature that provides the variability necessary for sampling during inference. We demonstrate how the network can learn the likelihood model, as well as the transition probabilities underlying the dynamics, using a Hebbian learning rule. We present results illustrating the ability of the network to perform inference and learning for arbitrary hidden Markov models.
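    The core computation, a posterior over hidden world states given Poisson spike counts, can be illustrated in a few lines; the toy rates below are assumptions, and the network and plasticity machinery of the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two hidden world states, five sensory neurons with state-dependent Poisson
# firing rates (illustrative values).
rates = np.array([[2.0, 8.0, 1.0, 5.0, 3.0],    # state 0
                  [6.0, 1.0, 4.0, 2.0, 7.0]])   # state 1
prior = np.array([0.5, 0.5])

true_state = 1
spikes = rng.poisson(rates[true_state])          # counts in one time window

# Bayes' rule with a Poisson likelihood (the k! term cancels across states).
loglik = (spikes * np.log(rates) - rates).sum(axis=1)
post = prior * np.exp(loglik - loglik.max())
post /= post.sum()
print("P(state | spikes) =", post)
```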

  13. Predicting the occurrence of wildfires with binary structured additive regression models.

    PubMed

    Ríos-Pena, Laura; Kneib, Thomas; Cadarso-Suárez, Carmen; Marey-Pérez, Manuel

    2017-02-01

    Wildfires are one of the main environmental problems facing societies today, and in the case of Galicia (north-west Spain) they are the main cause of forest destruction. This paper used binary structured additive regression (STAR) for modelling the occurrence of wildfires in Galicia. Binary STAR models are a recent contribution to classical logistic regression and binary generalized additive models. Their main advantage lies in their flexibility for modelling non-linear effects, while simultaneously incorporating spatial and temporal variables directly, thereby making it possible to reveal possible relationships among the variables considered. The results showed that the occurrence of wildfires depends on many covariates which display variable behaviour across space and time, and which largely determine the likelihood of ignition of a fire. The possibility of working at a spatial resolution of 1 × 1 km cells and of mapping predictions on a colour scale makes STAR models a useful tool for plotting and predicting wildfire occurrence. Lastly, this will facilitate the development of fire behaviour models, which can be invaluable when it comes to drawing up fire-prevention and firefighting plans.

  14. The effect of tailor-made additives on crystal growth of methyl paraben: Experiments and modelling

    NASA Astrophysics Data System (ADS)

    Cai, Zhihui; Liu, Yong; Song, Yang; Guan, Guoqiang; Jiang, Yanbin

    2017-03-01

    In this study, methyl paraben (MP) was selected as the model component, and acetaminophen (APAP), p-methyl acetanilide (PMAA) and acetanilide (ACET), which share a similar molecular structure with MP, were selected as three tailor-made additives to study their effect on the crystal growth of MP. HPLC results indicated that the MP crystals induced by the three additives contained MP only. Photographs of the prepared single crystals indicated that the morphology of the MP crystals was greatly changed by the additives, but PXRD and single-crystal diffraction results showed that the MP crystals were the same polymorph, only with different crystal habits; no new crystal form was found compared with other references. To investigate the effect of the additives on crystal growth, the interaction between additives and facets was discussed in detail using DFT methods and MD simulations. The results showed that APAP, PMAA and ACET are selectively adsorbed on the growth surfaces of the crystal facets, which induces the change in MP crystal habits.

  15. Regulatory network reconstruction using an integral additive model with flexible kernel functions

    PubMed Central

    Novikov, Eugene; Barillot, Emmanuel

    2008-01-01

    Background: Reconstruction of regulatory networks is one of the most challenging tasks of systems biology. A limited amount of experimental data and little prior knowledge make the problem difficult to solve. Although models that are currently used for inferring regulatory networks are sometimes able to make useful predictions about the structures and mechanisms of molecular interactions, there is still a strong demand to develop increasingly universal and accurate approaches for network reconstruction. Results: The additive regulation model is represented by a set of differential equations and is frequently used for network inference from time series data. Here we generalize this model by converting differential equations into integral equations with adjustable kernel functions. These kernel functions can be selected based on prior knowledge or defined through iterative improvement in data analysis. This makes the integral model very flexible and thus capable of covering a broad range of biological systems more adequately and specifically than previous models. Conclusion: We reconstructed network structures from artificial and real experimental data using differential and integral inference models. The artificial data were simulated using mathematical models implemented in JDesigner. The real data were publicly available yeast cell cycle microarray time series. The integral model outperformed the differential one for all cases. In the integral model, we tested the zero-degree polynomial and single exponential kernels. Further improvements could be expected if the kernel were selected more specifically depending on the system. PMID:18218091

  16. Wavelet-based Poisson solver for use in particle-in-cell simulations.

    PubMed

    Terzić, Balsa; Pogorelov, Ilya V

    2005-06-01

    We report on a successful implementation of a wavelet-based Poisson solver for use in three-dimensional particle-in-cell simulations. Our method harnesses advantages afforded by the wavelet formulation, such as sparsity of operators and data sets, the existence of effective preconditioners, and the ability to simultaneously remove numerical noise and further compress relevant data sets. We present and discuss preliminary results relating to the application of the new solver to test problems in accelerator physics and astrophysics.

  17. Poisson noise obscures hypometabolic lesions in PET.

    PubMed

    Kerr, Wesley T; Lau, Edward P

    2012-12-01

    The technology of fluoro-deoxyglucose positron emission tomography (PET) has drastically increased our ability to visualize the metabolic processes of numerous neurological diseases. The relationship between the methodological noise sources inherent to PET technology and the resulting noise in the reconstructed image is complex. In this study, we use Monte Carlo simulations to examine the effect of Poisson noise in the PET signal on the noise in reconstructed space for two pervasive reconstruction algorithms: the historical filtered back-projection (FBP) and the more modern expectation maximization (EM). We confirm previous observations that FBP reconstruction biases all intensity values toward the mean, likely due to spatial spreading of high intensity voxels. However, we demonstrate that in both algorithms the variance from high intensity voxels spreads to low intensity voxels and obliterates their signal to noise ratio. This finding has profound impacts on the clinical interpretation of hypometabolic lesions. Our results suggest that PET is relatively insensitive when it comes to detecting and quantifying changes in hypometabolic tissue. Further, the images reconstructed with EM visually match the original images more closely, but more detailed analysis reveals as much as a 40 percent decrease in the signal to noise ratio for high intensity voxels relative to the FBP. This suggests that even though the apparent spatial resolution of EM outperforms FBP, the signal to noise ratio of the intensity of each voxel may be higher in the FBP. Therefore, EM may be most appropriate for manual visualization of pathology, but FBP should be used when analyzing quantitative markers of the PET signal. This suggestion that different reconstruction algorithms should be used for quantification versus visualization represents a major paradigm shift in the analysis and interpretation of PET images.
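    The raw-signal effect underlying this result is that Poisson counts have SNR = sqrt(λ), so relative noise grows as intensity falls; the sketch below verifies this empirically (it does not simulate the reconstruction step itself).

```python
import numpy as np

rng = np.random.default_rng(4)

# Poisson counts have SNR = mean/std = sqrt(lam): relative noise explodes
# as intensity falls, which is why low-count (hypometabolic) voxels suffer.
for lam in [10000, 1000, 100, 10]:
    counts = rng.poisson(lam, size=100_000)
    print(f"lam={lam:6d}  empirical SNR={counts.mean() / counts.std():8.2f}"
          f"  sqrt(lam)={np.sqrt(lam):8.2f}")
```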

  18. Test of the Additivity Principle for Current Fluctuations in a Model of Heat Conduction

    NASA Astrophysics Data System (ADS)

    Hurtado, Pablo I.; Garrido, Pedro L.

    2009-06-01

    The additivity principle allows one to compute the current distribution in many one-dimensional (1D) nonequilibrium systems. Using simulations, we confirm this conjecture in the 1D Kipnis-Marchioro-Presutti model of heat conduction for a wide current interval. The current distribution shows both Gaussian and non-Gaussian regimes, and obeys the Gallavotti-Cohen fluctuation theorem. We verify the existence of a well-defined temperature profile associated with a given current fluctuation. This profile is independent of the sign of the current, and this symmetry extends to higher-order profiles and spatial correlations. We also show that finite-time joint fluctuations of the current and the profile are described by the additivity functional. These results suggest the additivity hypothesis as a general and powerful tool to compute current distributions in many nonequilibrium systems.

  19. Test of the additivity principle for current fluctuations in a model of heat conduction.

    PubMed

    Hurtado, Pablo I; Garrido, Pedro L

    2009-06-26

    The additivity principle allows one to compute the current distribution in many one-dimensional (1D) nonequilibrium systems. Using simulations, we confirm this conjecture in the 1D Kipnis-Marchioro-Presutti model of heat conduction for a wide current interval. The current distribution shows both Gaussian and non-Gaussian regimes, and obeys the Gallavotti-Cohen fluctuation theorem. We verify the existence of a well-defined temperature profile associated with a given current fluctuation. This profile is independent of the sign of the current, and this symmetry extends to higher-order profiles and spatial correlations. We also show that finite-time joint fluctuations of the current and the profile are described by the additivity functional. These results suggest the additivity hypothesis as a general and powerful tool to compute current distributions in many nonequilibrium systems.

  20. Goodness-of-fit methods for additive-risk models in tumorigenicity experiments.

    PubMed

    Ghosh, Debashis

    2003-09-01

    In tumorigenicity experiments, a complication is that the time to event is generally not observed, so that the time to tumor is subject to interval censoring. One of the goals in these studies is to properly model the effect of dose on risk. Thus, it is important to have goodness-of-fit procedures available for assessing the model fit. While several estimation procedures have been developed for current-status data, relatively little work has been done on model-checking techniques. In this article, we propose numerical and graphical methods for the analysis of current-status data using the additive-risk model, primarily focusing on the situation where the monitoring times are dependent. The finite-sample properties of the proposed methodology are examined through numerical studies. The methods are then illustrated with data from a tumorigenicity experiment.

  1. Generalized Additive Mixed-Models for Pharmacology Using Integrated Discrete Multiple Organ Co-Culture.

    PubMed

    Ingersoll, Thomas; Cole, Stephanie; Madren-Whalley, Janna; Booker, Lamont; Dorsey, Russell; Li, Albert; Salem, Harry

    2016-01-01

    Integrated Discrete Multiple Organ Co-culture (IDMOC) is emerging as an in-vitro alternative to in-vivo animal models for pharmacology studies. IDMOC allows dose-response relationships to be investigated at the tissue and organoid levels, yet, these relationships often exhibit responses that are far more complex than the binary responses often measured in whole animals. To accommodate departure from binary endpoints, IDMOC requires an expansion of analytic techniques beyond simple linear probit and logistic models familiar in toxicology. IDMOC dose-responses may be measured at continuous scales, exhibit significant non-linearity such as local maxima or minima, and may include non-independent measures. Generalized additive mixed-modeling (GAMM) provides an alternative description of dose-response that relaxes assumptions of independence and linearity. We compared GAMMs to traditional linear models for describing dose-response in IDMOC pharmacology studies.

  2. Generalized Additive Mixed-Models for Pharmacology Using Integrated Discrete Multiple Organ Co-Culture

    PubMed Central

    Ingersoll, Thomas; Cole, Stephanie; Madren-Whalley, Janna; Booker, Lamont; Dorsey, Russell; Li, Albert; Salem, Harry

    2016-01-01

    Integrated Discrete Multiple Organ Co-culture (IDMOC) is emerging as an in-vitro alternative to in-vivo animal models for pharmacology studies. IDMOC allows dose-response relationships to be investigated at the tissue and organoid levels, yet, these relationships often exhibit responses that are far more complex than the binary responses often measured in whole animals. To accommodate departure from binary endpoints, IDMOC requires an expansion of analytic techniques beyond simple linear probit and logistic models familiar in toxicology. IDMOC dose-responses may be measured at continuous scales, exhibit significant non-linearity such as local maxima or minima, and may include non-independent measures. Generalized additive mixed-modeling (GAMM) provides an alternative description of dose-response that relaxes assumptions of independence and linearity. We compared GAMMs to traditional linear models for describing dose-response in IDMOC pharmacology studies. PMID:27110941

  3. Use of additive technologies for practical working with complex models for foundry technologies

    NASA Astrophysics Data System (ADS)

    Olkhovik, E.; Butsanets, A. A.; Ageeva, A. A.

    2016-07-01

    The article presents the results of research on the application of additive technology (3D printing) for developing geometrically complex models of cast parts. Investment casting is a well-known and widely used technology for the production of complex parts. The work proposes the use of 3D printing for manufacturing model parts, which are removed by thermal destruction. Traditional methods of tooling production for investment casting involve manual labor, which has problems with dimensional accuracy, or CNC machining, which is less commonly used; such schemes have low productivity and demand considerable time. We have offered an alternative method which consists in printing the main components with a 3D printer (in PLA and ABS) and subsequently producing casting models from them. In this article, the main technological methods are considered and their problems are discussed. The dimensional accuracy of the models in comparison with investment casting technology is considered as the main aspect.

  4. Evidence of thermal additivity during short laser pulses in an in vitro retinal model

    NASA Astrophysics Data System (ADS)

    Denton, Michael L.; Tijerina, Amanda J.; Dyer, Phillip N.; Oian, Chad A.; Noojin, Gary D.; Rickman, John M.; Shingledecker, Aurora D.; Clark, Clifton D.; Castellanos, Cherry C.; Thomas, Robert J.; Rockwell, Benjamin A.

    2015-03-01

    Laser damage thresholds were determined for exposure to 2.5-ms 532-nm pulses in an established in vitro retinal model. Single and multiple pulses (10, 100, 1000) were delivered to the cultured cells at three different pulse repetition frequency (PRF) values, and overt damage (membrane breach) was scored 1 hr post laser exposure. Trends in the damage data within and across the PRF range identified significant thermal additivity as PRF was increased, as evidenced by drastically reduced threshold values (< 40% of single-pulse value). Microthermography data that were collected in real time during each exposure also provided evidence of thermal additivity between successive laser pulses. Using thermal profiles simulated at high temporal resolution, damage threshold values were predicted by an in-house computational model. Our simulated ED50 value for a single 2.5-ms pulse was in very good agreement with experimental results, but ED50 predictions for multiple-pulse trains will require more refinement.

  5. Describing long-term trends in precipitation using generalized additive models

    NASA Astrophysics Data System (ADS)

    Underwood, Fiona M.

    2009-01-01

    With the current concern over climate change, descriptions of how rainfall patterns are changing over time can be useful. Observations of daily rainfall data over the last few decades provide information on these trends. Generalized linear models are typically used to model patterns in the occurrence and intensity of rainfall. These models describe rainfall patterns for an average year but are more limited when describing long-term trends, particularly when these are potentially non-linear. Generalized additive models (GAMs) provide a framework for modelling non-linear relationships by fitting smooth functions to the data. This paper describes how GAMs can extend the flexibility of models to describe seasonal patterns and long-term trends in the occurrence and intensity of daily rainfall, using data from Mauritius from 1962 to 2001. Smoothed estimates from the models provide useful graphical descriptions of changing rainfall patterns over the last 40 years at this location. GAMs are particularly helpful when exploring non-linear relationships in the data. Care is needed to ensure the choice of smooth functions is appropriate for the data and modelling objectives.
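
    A minimal sketch of this kind of model, assuming daily data with day-of-year and year as covariates (simulated stand-ins, not the Mauritius records): occurrence of rain is modeled with a logistic GAM combining a cyclic seasonal smooth and a long-term trend smooth.

    ```python
    import numpy as np
    from pygam import LogisticGAM, s

    rng = np.random.default_rng(1)
    n = 5000
    doy = rng.integers(1, 366, n)             # day of year (seasonal cycle)
    year = rng.uniform(1962, 2001, n)         # axis for the long-term trend
    p = 1.0 / (1.0 + np.exp(0.5 - np.sin(2 * np.pi * doy / 365)))
    wet = rng.binomial(1, p)                  # 1 = rain occurred that day

    X = np.column_stack([doy, year])
    # cyclic P-spline for the season, ordinary smooth for the trend
    gam = LogisticGAM(s(0, basis='cp') + s(1)).fit(X, wet)
    ```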

  6. Boosted structured additive regression for Escherichia coli fed-batch fermentation modeling.

    PubMed

    Melcher, Michael; Scharl, Theresa; Luchner, Markus; Striedner, Gerald; Leisch, Friedrich

    2017-02-01

    The quality of biopharmaceuticals and patients' safety are of the highest priority, and there are tremendous efforts to replace empirical production process designs with knowledge-based approaches. The main challenge in this context is that real-time access to process variables related to product quality and quantity is severely limited. To date, comprehensive on- and offline monitoring platforms are used to generate process data sets that allow for the development of mechanistic and/or data-driven models for real-time prediction of these important quantities. The ultimate goal is to implement model-based feedback control loops that facilitate online control of product quality. In this contribution, we explore structured additive regression (STAR) models in combination with boosting as a variable selection tool for modeling the cell dry mass, product concentration, and optical density on the basis of online-available process variables and two-dimensional fluorescence spectroscopic data. STAR models are powerful extensions of linear models allowing for the inclusion of smooth effects or interactions between predictors. Boosting constructs the final model in a stepwise manner and provides a variable importance measure via predictor selection frequencies. Our results show that the cell dry mass can be modeled with a relative error of about ±3%, the optical density with ±6%, the soluble protein with ±16%, and the insoluble product with an accuracy of ±12%. Biotechnol. Bioeng. 2017;114: 321-334. © 2016 Wiley Periodicals, Inc.
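
    The componentwise boosting idea, stepwise selection of one base learner at a time with selection frequencies as a variable-importance measure, can be sketched in a few lines. The paper combines boosting with STAR models (smooth base learners); this toy version, on hypothetical data, uses simple linear base learners instead:

    ```python
    import numpy as np

    def componentwise_l2_boost(X, y, n_steps=200, nu=0.1):
        """Componentwise L2 boosting: at each step, fit every predictor
        separately to the current residuals and update only the best one.
        Selection counts act as a variable-importance measure.
        Assumes roughly centered predictors."""
        n, p = X.shape
        coef, counts = np.zeros(p), np.zeros(p, dtype=int)
        intercept = y.mean()
        resid = y - intercept
        for _ in range(n_steps):
            betas = X.T @ resid / (X ** 2).sum(axis=0)     # univariate LS fits
            sse = ((resid[:, None] - X * betas) ** 2).sum(axis=0)
            j = int(np.argmin(sse))                        # best single predictor
            coef[j] += nu * betas[j]                       # small step on it only
            resid -= nu * betas[j] * X[:, j]
            counts[j] += 1
        return intercept, coef, counts
    ```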

  7. Addition of a 5/cm Spectral Resolution Band Model Option to LOWTRAN5.

    DTIC Science & Technology

    1980-10-01

    (Report documentation page garbled in the source; report number ARI-RR-232.) The modifications to LOWTRAN5 include (1) the addition of the 5/cm spectral resolution band model option, (2) the addition of temperature-dependent molecular absorption coefficients, and (3) the use of a multi-parameter Lorentz band model; the report also compares LOWTRAN5 model predictions to measurements.

  8. Model for Assembly Line Re-Balancing Considering Additional Capacity and Outsourcing to Face Demand Fluctuations

    NASA Astrophysics Data System (ADS)

    Samadhi, TMAA; Sumihartati, Atin

    2016-02-01

    The most critical stage in a garment industry is the sewing process, because it generally consists of a number of operations and a large number of sewing machines for each operation. Therefore, it requires a balancing method that can assign tasks to work stations with balanced workloads. Many studies on assembly line balancing assume a new assembly line but, in reality, re-balancing is needed due to demand fluctuations and demand increases. To cope with those fluctuating demand changes, additional capacity can be provided by investing in spare sewing machines and by paying for sewing services through outsourcing. This study develops an assembly line balancing (ALB) model for an existing line to cope with fluctuating demand changes. Capacity redesign is decided if the fluctuating demand exceeds the available capacity, through a combination of investment in new machines and outsourcing, while minimizing the cost of idle capacity in the future. The objective of the model is to minimize the total cost of the assembly line, which consists of operating costs, machine costs, added-capacity costs, loss costs due to idle capacity, and outsourcing costs. The model developed is an integer programming model. The model is tested on a set of data for one year of demand with an existing fleet of 41 sewing machines. The result shows that an additional capacity of up to 76 machines is required when there is an increase of 60% over the average demand, at the given cost parameters.
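
    The capacity-expansion trade-off at the heart of the model can be illustrated with a toy integer program in PuLP (all cost and capacity figures below are invented, and the full model additionally assigns tasks to work stations):

    ```python
    from pulp import LpProblem, LpMinimize, LpVariable, value

    demand = 5000            # units per period (hypothetical)
    cap_per_machine = 60     # units per machine per period
    machines_now = 41
    cost_machine = 1200.0    # amortized cost of one extra machine
    cost_outsource = 3.5     # cost per outsourced unit
    cost_idle = 0.8          # penalty per unit of idle capacity

    prob = LpProblem("rebalance", LpMinimize)
    buy = LpVariable("extra_machines", lowBound=0, cat="Integer")
    out = LpVariable("outsourced_units", lowBound=0)
    idle = LpVariable("idle_capacity", lowBound=0)

    capacity = cap_per_machine * (machines_now + buy)
    prob += cost_machine * buy + cost_outsource * out + cost_idle * idle
    prob += capacity + out >= demand              # meet demand
    prob += idle >= capacity + out - demand       # surplus counts as idle
    prob.solve()
    print(value(buy), "machines to add,", value(out), "units outsourced")
    ```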

  9. Patient-specific in vitro models for hemodynamic analysis of congenital heart disease - Additive manufacturing approach.

    PubMed

    Medero, Rafael; García-Rodríguez, Sylvana; François, Christopher J; Roldán-Alzate, Alejandro

    2017-03-21

    Non-invasive hemodynamic assessment of the total cavopulmonary connection (TCPC) is challenging due to the complex anatomy. Additive manufacturing (AM) is a suitable alternative for creating patient-specific in vitro models for flow measurements using four-dimensional (4D) Flow MRI. These in vitro systems have the potential to serve as validation for computational fluid dynamics (CFD) simulating different physiological conditions. This study investigated three different AM technologies, stereolithography (SLA), selective laser sintering (SLS), and fused deposition modeling (FDM), to determine differences in hemodynamics when measuring flow using 4D Flow MRI. The models were created using patient-specific MRI data from an extracardiac TCPC. These models were connected to a perfusion pump circulating water at three different flow rates. Data were processed for visualization and quantification of velocity, flow distribution, vorticity, and kinetic energy, and these results were compared between the models. In addition, the flow distribution obtained in vitro was compared to that in vivo. The results showed significant differences in the velocities measured at the outlets of the models that required internal support material when printing. Furthermore, an ultrasound flow sensor was used to validate flow measurements at the inlets and outlets of the in vitro models; these results were highly correlated with those measured with 4D Flow MRI. This study showed that commercially available AM technologies can be used to create patient-specific vascular models for in vitro hemodynamic studies at reasonable cost. However, technologies that do not require internal supports during manufacturing allow smoother internal surfaces, which makes them better suited for flow analyses.

  10. Negative Poisson's ratios for extreme states of matter

    PubMed

    Baughman; Dantas; Stafstrom; Zakhidov; Mitchell; Dubin

    2000-06-16

    Negative Poisson's ratios are predicted for body-centered-cubic phases that likely exist in white dwarf cores and neutron star outer crusts, as well as those found for vacuumlike ion crystals, plasma dust crystals, and colloidal crystals (including certain virus crystals). The existence of this counterintuitive property, which means that a material laterally expands when stretched, is experimentally demonstrated for very low density crystals of trapped ions. At very high densities, the large predicted negative and positive Poisson's ratios might be important for understanding the asteroseismology of neutron stars and white dwarfs and the effect of stellar stresses on nuclear reaction rates. Giant Poisson's ratios are both predicted and observed for highly strained coulombic photonic crystals, suggesting possible applications of large, tunable Poisson's ratios for photonic crystal devices.

  11. Information transmission using non-poisson regular firing.

    PubMed

    Koyama, Shinsuke; Omi, Takahiro; Kass, Robert E; Shinomoto, Shigeru

    2013-04-01

    In many cortical areas, neural spike trains do not follow a Poisson process. In this study, we investigate a possible benefit of non-Poisson spiking for information transmission by studying the minimal rate fluctuation that can be detected by a Bayesian estimator. The idea is that an inhomogeneous Poisson process may make it difficult for downstream decoders to resolve subtle changes in rate fluctuation, but by using a more regular non-Poisson process, the nervous system can make rate fluctuations easier to detect. We evaluate the degree to which regular firing reduces the rate fluctuation detection threshold. We find that the threshold for detection is reduced in proportion to the coefficient of variation of interspike intervals.
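
    The regularity at issue is usually summarized by the coefficient of variation (CV) of the interspike intervals; a gamma renewal process with shape k has CV = 1/sqrt(k), recovering Poisson firing at k = 1. A short illustrative simulation:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    rate, k, n = 20.0, 4.0, 100_000    # k = 1 is Poisson; k > 1 is more regular

    # gamma-renewal interspike intervals with mean 1/rate
    isi = rng.gamma(shape=k, scale=1.0 / (k * rate), size=n)
    cv = isi.std() / isi.mean()
    print(f"CV = {cv:.3f} (theory: {1 / np.sqrt(k):.3f})")
    ```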

  12. Negative poisson's ratio in single-layer black phosphorus.

    PubMed

    Jiang, Jin-Wu; Park, Harold S

    2014-08-18

    The Poisson's ratio is a fundamental mechanical property that relates the resulting lateral strain to applied axial strain. Although this value can theoretically be negative, it is positive for nearly all materials, though negative values have been observed in so-called auxetic structures. However, nearly all auxetic materials are bulk materials whose microstructure has been specifically engineered to generate a negative Poisson's ratio. Here we report using first-principles calculations the existence of a negative Poisson's ratio in a single-layer, two-dimensional material, black phosphorus. In contrast to engineered bulk auxetics, this behaviour is intrinsic for single-layer black phosphorus, and originates from its puckered structure, where the pucker can be regarded as a re-entrant structure that is comprised of two coupled orthogonal hinges. As a result of this atomic structure, a negative Poisson's ratio is observed in the out-of-plane direction under uniaxial deformation in the direction parallel to the pucker.

  13. Use of anatomical and kinetic models in the evaluation of human food additive safety.

    PubMed

    Roth, William L

    2005-09-22

    Toxicological testing in animals is relied upon as a surrogate for clinical testing of most food additives. Both animal and human clinical test results are generally available for direct additives when high levels of exposure are expected. Limited animal studies or in vitro test results may be the only sources of toxicological data available when low levels of exposure (microg/person/day) are expected and where no effects of the additive on the food itself are desired. Safety assessment of such materials for humans requires mathematical extrapolation from any effects observed in test animals to arrive at acceptable daily intakes (ADIs) for humans. Models of anatomy may be used to estimate tissue and organ weights where that information is missing and necessary for evaluation of a data set. The effect of growth on target tissue exposure during critical phases of organ development can be more accurately assessed when models of growth and known physiological changes are combined with pharmacokinetic results for test species. Kinetic models, when combined with limited chemical property, kinetic, and distribution data, can often be used to predict steady-state plasma and tissue levels of a test material over the range of doses employed in chronic studies to aid in interpretation of effects that are often nonlinear with respect to delivered dose. A better understanding of the reasons for nonlinearity of effects in animals improves our confidence in extrapolation to humans.
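
    As a point of reference for the kind of prediction mentioned above, the simplest one-compartment kinetic model with first-order elimination gives the average steady-state plasma concentration directly from the intake rate and clearance; this textbook relation (with invented numbers) is far simpler than the physiologically based models the paper discusses:

    ```python
    def steady_state_conc(dose_rate, clearance, bioavailability=1.0):
        """Average steady-state concentration for a one-compartment model
        with first-order elimination: Css = F * dose rate / CL."""
        return bioavailability * dose_rate / clearance

    # e.g. 100 ug/day intake and a clearance of 50 L/day give 2 ug/L
    print(steady_state_conc(100.0, 50.0))
    ```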

  14. Rain water transport and storage in a model sandy soil with hydrogel particle additives.

    PubMed

    Wei, Y; Durian, D J

    2014-10-01

    We study rain water infiltration and drainage in a dry model sandy soil with superabsorbent hydrogel particle additives by measuring the mass of retained water for non-ponding rainfall using a self-built 3D laboratory set-up. In the pure model sandy soil, the retained water curve measurements indicate that instead of a stable horizontal wetting front that grows downward uniformly, a narrow fingered flow forms under the top layer of water-saturated soil. This rain water channelization phenomenon not only further reduces the available rain water in the plant root zone, but also affects the efficiency of soil additives, such as superabsorbent hydrogel particles. Our studies show that the shape of the retained water curve for a soil packing with hydrogel particle additives strongly depends on the location and the concentration of the hydrogel particles in the model sandy soil. By carefully choosing the particle size and distribution methods, we may use the swollen hydrogel particles to modify the soil pore structure, to clog or extend the water channels in sandy soils, or to build water reservoirs in the plant root zone.

  15. Can ligand addition to soil enhance Cd phytoextraction? A mechanistic model study.

    PubMed

    Lin, Zhongbing; Schneider, André; Nguyen, Christophe; Sterckeman, Thibault

    2014-11-01

    Phytoextraction is a potential method for cleaning Cd-polluted soils. Ligand addition to soil is expected to enhance Cd phytoextraction. However, experimental results show that this addition has contradictory effects on plant Cd uptake. A mechanistic model simulating the reaction kinetics (adsorption on solid phase, complexation in solution), transport (convection, diffusion) and root absorption (symplastic, apoplastic) of Cd and its complexes in soil was developed. This was used to calculate plant Cd uptake with and without ligand addition in a great number of combinations of soil, ligand and plant characteristics, varying the parameters within defined domains. Ligand addition generally strongly reduced hydrated Cd (Cd(2+)) concentration in soil solution through Cd complexation. Dissociation of Cd complex ([Formula: see text]) could not compensate for this reduction, which greatly lowered Cd(2+) symplastic uptake by roots. The apoplastic uptake of [Formula: see text] was not sufficient to compensate for the decrease in symplastic uptake. This explained why in the majority of the cases, ligand addition resulted in the reduction of the simulated Cd phytoextraction. A few results showed an enhanced phytoextraction in very particular conditions (strong plant transpiration with high apoplastic Cd uptake capacity), but this enhancement was very limited, making chelant-enhanced phytoextraction poorly efficient for Cd.

  16. Spectral prediction model for color prints on paper with fluorescent additives.

    PubMed

    Hersch, Roger David

    2008-12-20

    I propose a model for predicting the total reflectance of color halftones printed on paper incorporating fluorescent brighteners. The total reflectance is modeled as the additive superposition of the relative fluorescent emission and the pure reflectance of the color print. The fluorescent emission prediction model accounts for both the attenuation of light by the halftone within the excitation wavelength range and for the attenuation of the fluorescent emission by the same halftone within the emission wavelength range. The model's calibration relies on reflectance measurements of the optically brightened paper and of the solid colorant patches with two illuminants, one including and one excluding the UV components. The part of the model predicting the pure reflectance relies on an ink-spreading extended Clapper-Yule model. On uniformly distributed surface coverages of cyan, magenta, and yellow halftone patches, the proposed model predicts the relative fluorescent emission with a high accuracy (mean ΔE94 = 0.42 under a D65 standard illuminant). For optically brightened paper exhibiting a moderate fluorescence, the total reflectance prediction improves the spectral reflectance prediction mainly for highlight color halftones, comprising a proportion of paper white above 12%. Applications include the creation of improved printer characterization tables for color management purposes and the prediction of color gamuts for new combinations of optically brightened papers and inks.

  17. Generalized linear and generalized additive models in studies of species distributions: Setting the scene

    USGS Publications Warehouse

    Guisan, A.; Edwards, T.C.; Hastie, T.

    2002-01-01

    An important statistical development of the last 30 years has been the advance in regression analysis provided by generalized linear models (GLMs) and generalized additive models (GAMs). Here we introduce a series of papers prepared within the framework of an international workshop entitled: Advances in GLMs/GAMs modeling: from species distribution to environmental management, held in Riederalp, Switzerland, 6-11 August 2001. We first discuss some general uses of statistical models in ecology, as well as provide a short review of several key examples of the use of GLMs and GAMs in ecological modeling efforts. We next present an overview of GLMs and GAMs, and discuss some of their related statistics used for predictor selection, model diagnostics, and evaluation. Included is a discussion of several new approaches applicable to GLMs and GAMs, such as ridge regression, an alternative to stepwise selection of predictors, and methods for the identification of interactions by a combined use of regression trees and several other approaches. We close with an overview of the papers and how we feel they advance our understanding of their application to ecological modeling. © 2002 Elsevier Science B.V. All rights reserved.

  18. Measurement of Poisson's ratio of dental composite restorative materials.

    PubMed

    Chung, Sew Meng; Yap, Adrian U Jin; Koh, Wee Kiat; Tsai, Kuo Tsing; Lim, Chwee Teck

    2004-06-01

    The aim of this study was to determine the Poisson's ratio of resin-based dental composites using a static tensile test method. Materials used in this investigation were from the same manufacturer (3M ESPE) and included microfill (A110), minifill (Z100 and Filtek Z250), polyacid-modified (F2000), and flowable (Filtek Flowable [FF]) composites. The Poisson's ratio of the materials was determined after 1 week of conditioning in water at 37 degrees C. The tensile test was performed using a uniaxial testing system at a crosshead speed of 0.5 mm/min. Data were analysed using one-way ANOVA/post hoc Scheffé's test and Pearson's correlation test at a significance level of 0.05. The mean Poisson's ratio (n=8) ranged from 0.302 to 0.393. The Poisson's ratio of FF was significantly higher than that of all other composites evaluated, and the Poisson's ratio of A110 was higher than those of Z100, Z250, and F2000. The Poisson's ratio is higher for materials with a lower filler volume fraction.

  19. Quantum-chemical model evaluations of thermodynamics and kinetics of oxygen atom additions to narrow nanotubes.

    PubMed

    Slanina, Zdenĕk; Stobinski, Leszek; Tomasik, Piotr; Lin, Hong-Ming; Adamowicz, Ludwik

    2003-01-01

    This paper reports a computational study of oxygen additions to narrow nanotubes, a problem frequently studied with fullerenes. In fact, fullerene oxides were the first observed fullerene derivatives, and they have naturally attracted the attention of both experiment and theory. C60O had represented a long-standing case of experiment-theory disagreement, and there has been a similar problem with C60O2. The disagreement has been explained by kinetic rather than thermodynamic control. In this paper a similar computational approach is applied to narrow nanotubes. Recently, very narrow nanotubes have been observed with a diameter of 5 A and even with a diameter of 4 A. It has been supposed that the narrow nanotubes are closed by fragments of small fullerenes like C36 or C20. In this report we perform calculations for oxygen additions to such model nanotubes capped by fragments of D2d C36, D4d C32, and Ih C20 fullerenic cages (though the computational models have to be rather short). The three models have the following carbon contents: C84, C80, and C80. Both thermodynamic enthalpy changes and kinetic activation barriers for oxygen addition to six selected bonds are computed and analyzed. The lowest isomer (thermodynamically the most stable) is never of the 6/6 type, that is, the enthalpically favored structures are produced by oxygen additions to the nanotube tips. Interestingly enough, the lowest energy isomer has, for the D2d C36 and D4d C32 cases, the lowest kinetic activation barrier as well.

  20. Mixed-effects Poisson regression analysis of adverse event reports

    PubMed Central

    Gibbons, Robert D.; Segawa, Eisuke; Karabatsos, George; Amatya, Anup K.; Bhaumik, Dulal K.; Brown, C. Hendricks; Kapur, Kush; Marcus, Sue M.; Hur, Kwan; Mann, J. John

    2008-01-01

    SUMMARY A new statistical methodology is developed for the analysis of spontaneous adverse event (AE) reports from post-marketing drug surveillance data. The method involves both empirical Bayes (EB) and fully Bayes estimation of rate multipliers for each drug within a class of drugs, for a particular AE, based on a mixed-effects Poisson regression model. Both parametric and semiparametric models for the random-effect distribution are examined. The method is applied to data from Food and Drug Administration (FDA)’s Adverse Event Reporting System (AERS) on the relationship between antidepressants and suicide. We obtain point estimates and 95 per cent confidence (posterior) intervals for the rate multiplier for each drug (e.g. antidepressants), which can be used to determine whether a particular drug has an increased risk of association with a particular AE (e.g. suicide). Confidence (posterior) intervals that do not include 1.0 provide evidence for either significant protective or harmful associations of the drug and the adverse effect. We also examine EB, parametric Bayes, and semiparametric Bayes estimators of the rate multipliers and associated confidence (posterior) intervals. Results of our analysis of the FDA AERS data revealed that newer antidepressants are associated with lower rates of suicide adverse event reports compared with older antidepressants. We recommend improvements to the existing AERS system, which are likely to improve its public health value as an early warning system. PMID:18404622
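
    The core model class, Poisson regression with a drug-level random effect, can be sketched with statsmodels' Bayesian mixed GLM on simulated data. This is only a rough analogue of the paper's EB/fully Bayes estimation of rate multipliers, and every name and value below is invented:

    ```python
    import numpy as np
    from statsmodels.genmod.bayes_mixed_glm import PoissonBayesMixedGLM

    rng = np.random.default_rng(3)
    n_drugs, n_per = 20, 50
    drug = np.repeat(np.arange(n_drugs), n_per)
    u = rng.normal(0.0, 0.5, n_drugs)        # drug-level log rate multipliers
    x = rng.normal(size=n_drugs * n_per)     # one fixed-effect covariate
    y = rng.poisson(np.exp(0.2 + 0.3 * x + u[drug]))

    exog = np.column_stack([np.ones_like(x), x])   # intercept + covariate
    exog_vc = np.eye(n_drugs)[drug]                # one indicator column per drug
    ident = np.zeros(n_drugs, dtype=int)           # one shared variance component

    result = PoissonBayesMixedGLM(y, exog, exog_vc, ident).fit_vb()
    print(result.summary())                        # posterior means per drug
    ```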

  1. Testing departure from additivity in Tukey's model using shrinkage: application to a longitudinal setting.

    PubMed

    Ko, Yi-An; Mukherjee, Bhramar; Smith, Jennifer A; Park, Sung Kyun; Kardia, Sharon L R; Allison, Matthew A; Vokonas, Pantel S; Chen, Jinbo; Diez-Roux, Ana V

    2014-12-20

    While there has been extensive research developing gene-environment interaction (GEI) methods in case-control studies, little attention has been given to sparse and efficient modeling of GEI in longitudinal studies. In a two-way table for GEI with rows and columns as categorical variables, a conventional saturated interaction model involves estimation of a specific parameter for each cell, with constraints ensuring identifiability. The estimates are unbiased but are potentially inefficient because the number of parameters to be estimated can grow quickly with increasing categories of row/column factors. On the other hand, Tukey's one-degree-of-freedom model for non-additivity treats the interaction term as a scaled product of row and column main effects. Because of the parsimonious form of interaction, the interaction estimate leads to enhanced efficiency, and the corresponding test could lead to increased power. Unfortunately, Tukey's model gives biased estimates and low power if the model is misspecified. When screening multiple GEIs where each genetic and environmental marker may exhibit a distinct interaction pattern, a robust estimator for interaction is important for GEI detection. We propose a shrinkage estimator for interaction effects that combines estimates from both Tukey's and saturated interaction models and use the corresponding Wald test for testing interaction in a longitudinal setting. The proposed estimator is robust to misspecification of interaction structure. We illustrate the proposed methods using two longitudinal studies: the Normative Aging Study and the Multi-ethnic Study of Atherosclerosis.
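
    For reference, Tukey's one-degree-of-freedom statistic for an r x c table with one observation per cell can be computed directly; this is the standard textbook construction behind the model discussed above, applied here to hypothetical data:

    ```python
    import numpy as np
    from scipy import stats

    def tukey_one_df(y):
        """Tukey's one-df non-additivity test: the interaction in an r x c
        table is modeled as theta * a_i * b_j, a scaled product of the
        row and column main effects."""
        r, c = y.shape
        mu = y.mean()
        a = y.mean(axis=1) - mu                       # row main effects
        b = y.mean(axis=0) - mu                       # column main effects
        prod = np.outer(a, b)
        theta = (prod * y).sum() / (prod ** 2).sum()  # scale of the product term
        ss_nonadd = theta ** 2 * (prod ** 2).sum()    # 1-df sum of squares
        resid = y - mu - a[:, None] - b[None, :]
        df_rem = (r - 1) * (c - 1) - 1
        ss_rem = (resid ** 2).sum() - ss_nonadd
        F = ss_nonadd / (ss_rem / df_rem)
        return theta, F, stats.f.sf(F, 1, df_rem)     # estimate, F, p-value

    table = np.random.default_rng(4).normal(size=(4, 5))  # hypothetical 4 x 5 table
    print(tukey_one_df(table))
    ```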

  2. Testing Departure from Additivity in Tukey’s Model using Shrinkage: Application to a Longitudinal Setting

    PubMed Central

    Ko, Yi-An; Mukherjee, Bhramar; Smith, Jennifer A.; Park, Sung Kyun; Kardia, Sharon L.R.; Allison, Matthew A.; Vokonas, Pantel S.; Chen, Jinbo; Diez-Roux, Ana V.

    2014-01-01

    While there has been extensive research developing gene-environment interaction (GEI) methods in case-control studies, little attention has been given to sparse and efficient modeling of GEI in longitudinal studies. In a two-way table for GEI with rows and columns as categorical variables, a conventional saturated interaction model involves estimation of a specific parameter for each cell, with constraints ensuring identifiability. The estimates are unbiased but are potentially inefficient because the number of parameters to be estimated can grow quickly with increasing categories of row/column factors. On the other hand, Tukey’s one degree of freedom (df) model for non-additivity treats the interaction term as a scaled product of row and column main effects. Due to the parsimonious form of interaction, the interaction estimate leads to enhanced efficiency and the corresponding test could lead to increased power. Unfortunately, Tukey’s model gives biased estimates and low power if the model is misspecified. When screening multiple GEIs where each genetic and environmental marker may exhibit a distinct interaction pattern, a robust estimator for interaction is important for GEI detection. We propose a shrinkage estimator for interaction effects that combines estimates from both Tukey’s and saturated interaction models and use the corresponding Wald test for testing interaction in a longitudinal setting. The proposed estimator is robust to misspecification of interaction structure. We illustrate the proposed methods using two longitudinal studies — the Normative Aging Study and the Multi-Ethnic Study of Atherosclerosis. PMID:25112650

  3. AFMPB: An adaptive fast multipole Poisson-Boltzmann solver for calculating electrostatics in biomolecular systems

    NASA Astrophysics Data System (ADS)

    Lu, Benzhuo; Cheng, Xiaolin; Huang, Jingfang; McCammon, J. Andrew

    2013-11-01

    A Fortran program package is introduced for rapid evaluation of the electrostatic potentials and forces in biomolecular systems modeled by the linearized Poisson-Boltzmann equation. The numerical solver utilizes a well-conditioned boundary integral equation (BIE) formulation, a node-patch discretization scheme, a Krylov subspace iterative solver package with reverse communication protocols, and an adaptive new version of the fast multipole method in which the exponential expansions are used to diagonalize the multipole-to-local translations. The program and its full description, as well as several closely related libraries and utility tools are available at http://lsec.cc.ac.cn/~lubz/afmpb.html and a mirror site at http://mccammon.ucsd.edu/. This paper is a brief summary of the program: the algorithms, the implementation and the usage. Restrictions: Only three or six significant digits options are provided in this version. Unusual features: Most of the codes are in Fortran77 style. Memory allocation functions from Fortran90 and above are used in a few subroutines. Additional comments: The current version of the codes is designed and written for single core/processor desktop machines. Check http://lsec.cc.ac.cn/lubz/afmpb.html for updates and changes. Running time: The running time varies with the number of discretized elements (N) in the system and their distributions. In most cases, it scales linearly as a function of N.

  4. Criticality in a Vlasov-Poisson system: a fermioniclike universality class.

    PubMed

    Ivanov, A V; Vladimirov, S V; Robinson, P A

    2005-05-01

    A model Vlasov-Poisson system is simulated close to the point of marginal stability, thus assuming only the wave-particle resonant interactions are responsible for saturation, and shown to obey the power-law scaling of a second-order phase transition. The set of critical exponents analogous to those of the Ising universality class is calculated and shown to obey the Widom and Rushbrooke scaling and Josephson's hyperscaling relations at the formal dimensionality d=5 below the critical point at nonzero order parameter. However, the two-point correlation function does not correspond to the propagator of Euclidean quantum field theory, which is the Gaussian model for the Ising universality class. Instead, it corresponds to the propagator for the fermionic vector field and to the upper critical dimensionality d(c) = 2. This suggests criticality of collisionless Vlasov-Poisson systems corresponds to a universality class analogous to that of critical phenomena of a fermionic quantum field description.

  5. PB-AM: An open-source, fully analytical linear Poisson-Boltzmann solver.

    PubMed

    Felberg, Lisa E; Brookes, David H; Yap, Eng-Hui; Jurrus, Elizabeth; Baker, Nathan A; Head-Gordon, Teresa

    2016-11-02

    We present the open-source distributed software package Poisson-Boltzmann Analytical Method (PB-AM), a fully analytical solution to the linearized PB equation for molecules represented as non-overlapping spherical cavities. The PB-AM software package includes the generation of output files appropriate for visualization using Visual Molecular Dynamics (VMD), a Brownian dynamics scheme that uses periodic boundary conditions to simulate dynamics, the ability to specify docking criteria, and two different kinetics schemes to evaluate biomolecular association rate constants. Given that PB-AM defines mutual polarization completely and accurately, it can be refactored as a many-body expansion to explore 2- and 3-body polarization. Additionally, the software has been integrated into the Adaptive Poisson-Boltzmann Solver (APBS) software package to make it more accessible to the large group of scientists, educators, and students who are familiar with the APBS framework. © 2016 Wiley Periodicals, Inc.

  6. Reduction of carcinogenic 4(5)-methylimidazole in a caramel model system: influence of food additives.

    PubMed

    Seo, Seulgi; Ka, Mi-Hyun; Lee, Kwang-Geun

    2014-07-09

    The effect of various food additives on the formation of carcinogenic 4(5)-methylimidazole (4-MI) in a caramel model system was investigated, and the relationship between the levels of 4-MI and various pyrazines was studied. When glucose and ammonium hydroxide were heated, the amount of 4-MI was 556 ± 1.3 μg/mL, which increased to 583 ± 2.6 μg/mL upon the addition of 0.1 M sodium sulfite. When various food additives, such as 0.1 M iron sulfate, magnesium sulfate, zinc sulfate, tryptophan, and cysteine, were added, the amount of 4-MI was reduced to 110 ± 0.7, 483 ± 2.0, 460 ± 2.0, 409 ± 4.4, and 397 ± 1.7 μg/mL, respectively. The greatest reduction, 80%, occurred with the addition of iron sulfate. Among the 12 pyrazines, 2-ethyl-6-methylpyrazine showed the strongest correlation with 4-MI (r = -0.8239).

  7. Marginal regression approach for additive hazards models with clustered current status data.

    PubMed

    Su, Pei-Fang; Chi, Yunchan

    2014-01-15

    Current status data arise naturally from tumorigenicity experiments, epidemiology studies, biomedicine, econometrics, and demographic and sociological studies. Moreover, clustered current status data may occur with animals from the same litter in tumorigenicity experiments or with subjects from the same family in epidemiology studies. Because the only information extracted from current status data is whether the survival times are before or after the monitoring or censoring times, the nonparametric maximum likelihood estimator of the survival function converges at a rate of n^(1/3) to a complicated limiting distribution. Hence, semiparametric regression models such as the additive hazards model have been extended for independent current status data to derive test statistics, whose distributions converge at a rate of n^(1/2), for testing the regression parameters. However, a straightforward application of these statistical methods to clustered current status data is not appropriate because intracluster correlation needs to be taken into account. Therefore, this paper proposes two estimating functions for estimating the parameters in the additive hazards model for clustered current status data. The comparative results from simulation studies are presented, and the application of the proposed estimating functions to one real data set is illustrated.

  8. Phase-shifted reflective coherent gradient sensor for measuring Young's modulus and Poisson's ratio of polished alloys

    NASA Astrophysics Data System (ADS)

    Ma, Kang; Xie, Huimin; Fan, Bozhao

    2017-02-01

    In this study, the Young's modulus and Poisson's ratio of a Ni-Cr alloy are measured using a phase-shifted reflective coherent gradient sensing (CGS) method. A three-point bending experiment is used to obtain the Young's modulus by measuring the out-of-plane displacement slopes of the specimen. A bending experiment on a circular plate with fixed edges, loaded by a centric concentrated force, is used to obtain the bending stiffness of the specimen. The Poisson's ratio is then obtained by combining the measured bending stiffness with the Young's modulus. The results show that the phase-shifted reflective CGS method is valid for measuring the Young's modulus and Poisson's ratio of metals and alloys. In addition, the reflective specimen surfaces are obtained with precision finishing operations, and the polishing parameters are optimized for CGS measurement. This method is more effective than the reflecting-film transfer method, which was widely used in previous studies.

  9. Topsoil organic carbon content of Europe, a new map based on a generalised additive model

    NASA Astrophysics Data System (ADS)

    de Brogniez, Delphine; Ballabio, Cristiano; Stevens, Antoine; Jones, Robert J. A.; Montanarella, Luca; van Wesemael, Bas

    2014-05-01

    There is an increasing demand for up-to-date spatially continuous organic carbon (OC) data for global environment and climatic modeling. Whilst the current map of topsoil organic carbon content for Europe (Jones et al., 2005) was produced by applying expert-knowledge based pedo-transfer rules on large soil mapping units, the aim of this study was to replace it by applying digital soil mapping techniques on the first European harmonised geo-referenced topsoil (0-20 cm) database, which arises from the LUCAS (land use/cover area frame statistical survey) survey. A generalized additive model (GAM) was calibrated on 85% of the dataset (ca. 17 000 soil samples) and a backward stepwise approach selected slope, land cover, temperature, net primary productivity, latitude and longitude as environmental covariates (500 m resolution). The validation of the model (applied on 15% of the dataset) gave an R2 of 0.27. We observed that most organic soils were under-predicted by the model and that soils of Scandinavia were also poorly predicted. The model showed an RMSE of 42 g kg-1 for mineral soils and of 287 g kg-1 for organic soils. The map of predicted OC content showed the lowest values in Mediterranean countries and in croplands across Europe, whereas the highest OC contents were predicted in wetlands, woodlands and in mountainous areas. The map of standard error of the OC model predictions showed high values in northern latitudes, wetlands, moors and heathlands, whereas low uncertainty was mostly found in croplands. A comparison of our results with the map of Jones et al. (2005) showed a general agreement on the prediction of mineral soils' OC content, most probably because the models use some common covariates, namely land cover and temperature. Our model, however, failed to predict values of OC content greater than 200 g kg-1, which we explain by the imposed unimodal distribution of our model, whose mean is tilted towards the majority of soils, which are mineral. Finally, average

  10. Improving the predictive accuracy of hurricane power outage forecasts using generalized additive models.

    PubMed

    Han, Seung-Ryong; Guikema, Seth D; Quiring, Steven M

    2009-10-01

    Electric power is a critical infrastructure service after hurricanes, and rapid restoration of electric power is important in order to minimize losses in the impacted areas. However, rapid restoration of electric power after a hurricane depends on obtaining the necessary resources, primarily repair crews and materials, before the hurricane makes landfall and then appropriately deploying these resources as soon as possible after the hurricane. This, in turn, depends on having sound estimates of both the overall severity of the storm and the relative risk of power outages in different areas. Past studies have developed statistical, regression-based approaches for estimating the number of power outages in advance of an approaching hurricane. However, these approaches have either not been applicable for future events or have had lower predictive accuracy than desired. This article shows that a different type of regression model, a generalized additive model (GAM), can outperform the types of models used previously. This is done by developing and validating a GAM based on power outage data during past hurricanes in the Gulf Coast region and comparing the results from this model to the previously used generalized linear models.
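
    A sketch of a count-data GAM of the kind described, with hypothetical storm covariates standing in for the hurricane data:

    ```python
    import numpy as np
    from pygam import PoissonGAM, s

    rng = np.random.default_rng(5)
    n = 500
    wind = rng.uniform(10, 60, n)     # maximum gust speed, m/s (hypothetical)
    rain = rng.uniform(0, 300, n)     # storm-total rainfall, mm (hypothetical)
    mu = np.exp(-2.0 + 0.002 * wind ** 1.5 + 0.004 * rain)
    outages = rng.poisson(mu)         # simulated outage counts per area

    # smooth, possibly non-linear effects of each covariate on outage counts
    gam = PoissonGAM(s(0) + s(1)).fit(np.column_stack([wind, rain]), outages)
    gam.summary()
    ```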

  11. Predicting the Survival Time for Bladder Cancer Using an Additive Hazards Model in Microarray Data

    PubMed Central

    TAPAK, Leili; MAHJUB, Hossein; SADEGHIFAR, Majid; SAIDIJAM, Massoud; POOROLAJAL, Jalal

    2016-01-01

    Background: One substantial part of microarray studies is to predict patients’ survival based on their gene expression profile. Variable selection techniques are powerful tools to handle high dimensionality in analysis of microarray data. However, these techniques have not been investigated in a competing risks setting. This study aimed to investigate the performance of four sparse variable selection methods in estimating the survival time. Methods: The data included 1381 gene expression measurements and clinical information from 301 patients with bladder cancer operated on between 1987 and 2000 in hospitals in Denmark, Sweden, Spain, France, and England. Four methods, the least absolute shrinkage and selection operator, smoothly clipped absolute deviation, the smooth integration of counting and absolute deviation, and the elastic net, were utilized for simultaneous variable selection and estimation under an additive hazards model. The criteria of area under the ROC curve, Brier score, and C-index were used to compare the methods. Results: The median follow-up time for all patients was 47 months. The elastic net approach was found to outperform the other methods. The elastic net had the lowest integrated Brier score (0.137±0.07) and the greatest median of the over-time AUC and C-index (0.803±0.06 and 0.779±0.13, respectively). Five of the 19 genes selected by the elastic net were significant (P<0.05) under an additive hazards model. It was indicated that the expression of RTN4, SON, IGF1R and CDC20 decreases the survival time, while the expression of SMARCAD1 increases it. Conclusion: The elastic net had a higher capability than the other methods for predicting survival time in patients with bladder cancer in the presence of competing risks, based on the additive hazards model. PMID:27114989

  12. Comparison of prosthetic models produced by traditional and additive manufacturing methods

    PubMed Central

    Park, Jin-Young; Kim, Hae-Young; Kim, Ji-Hwan; Kim, Jae-Hong

    2015-01-01

    PURPOSE The purpose of this study was to verify the clinical-feasibility of additive manufacturing by comparing the accuracy of four different manufacturing methods for metal coping: the conventional lost wax technique (CLWT); subtractive methods with wax blank milling (WBM); and two additive methods, multi jet modeling (MJM), and micro-stereolithography (Micro-SLA). MATERIALS AND METHODS Thirty study models were created using an acrylic model with the maxillary upper right canine, first premolar, and first molar teeth. Based on the scan files from a non-contact blue light scanner (Identica; Medit Co. Ltd., Seoul, Korea), thirty cores were produced using the WBM, MJM, and Micro-SLA methods, respectively, and another thirty frameworks were produced using the CLWT method. To measure the marginal and internal gap, the silicone replica method was adopted, and the silicone images obtained were evaluated using a digital microscope (KH-7700; Hirox, Tokyo, Japan) at 140X magnification. Analyses were performed using two-way analysis of variance (ANOVA) and Tukey post hoc test (α=.05). RESULTS The mean marginal gaps and internal gaps showed significant differences according to tooth type (P<.001 and P<.001, respectively) and manufacturing method (P<.037 and P<.001, respectively). Micro-SLA did not show any significant difference from CLWT regarding mean marginal gap compared to the WBM and MJM methods. CONCLUSION The mean values of gaps resulting from the four different manufacturing methods were within a clinically allowable range, and, thus, the clinical use of additive manufacturing methods is acceptable as an alternative to the traditional lost wax-technique and subtractive manufacturing. PMID:26330976

  13. Thermodynamic network model for predicting effects of substrate addition and other perturbations on subsurface microbial communities

    SciTech Connect

    Jack Istok; Melora Park; James McKinley; Chongxuan Liu; Lee Krumholz; Anne Spain; Aaron Peacock; Brett Baldwin

    2007-04-19

    The overall goal of this project is to develop and test a thermodynamic network model for predicting the effects of substrate additions and environmental perturbations on microbial growth, community composition and system geochemistry. The hypothesis is that a thermodynamic analysis of the energy-yielding growth reactions performed by defined groups of microorganisms can be used to make quantitative and testable predictions of the change in microbial community composition that will occur when a substrate is added to the subsurface or when environmental conditions change.

  14. Understanding the changes in ductility and Poisson's ratio of metallic glasses during annealing from microscopic dynamics

    SciTech Connect

    Wang, Z.; Ngai, K. L.; Wang, W. H.

    2015-07-21

    In the paper by K. L. Ngai et al. [J. Chem. Phys. 140, 044511 (2014)], the empirical correlation of ductility with the Poisson's ratio, ν_Poisson, found in metallic glasses was theoretically explained by microscopic dynamic processes which link, on the one hand, ductility, and on the other hand, the Poisson's ratio. Specifically, the dynamic processes are the primitive relaxation in the Coupling Model, which is the precursor of the Johari–Goldstein β-relaxation, and the caged-atom dynamics characterized by the effective Debye–Waller factor f_0 or, equivalently, the nearly constant loss (NCL) in susceptibility. All these processes and the parameters characterizing them are accessible experimentally, except f_0 or the NCL of caged atoms; thus, so far, the experimental verification of the explanation of the correlation between ductility and Poisson's ratio is incomplete. In the experimental part of this paper, we report dynamic mechanical measurements of the NCL of the metallic glass La60Ni15Al25 as-cast, and the changes caused by annealing at temperatures below T_g. The observed monotonic decrease of the NCL with aging time, reflecting the corresponding increase of f_0, correlates with the decrease of ν_Poisson. This is an important observation because such measurements, not made before, provide the missing link in confirming by experiment the explanation of the correlation of ductility with ν_Poisson. On aging the metallic glass, also observed in the isochronal loss spectra is the shift of the β-relaxation to higher temperatures and the reduction of its relaxation strength. These concomitant changes of the β-relaxation and the NCL are the root cause of embrittlement by aging the metallic glass. The NCL of caged atoms is terminated by the onset of the primitive relaxation in the Coupling Model, which is generally supported by experiments. From this relation, the monotonic decrease of the NCL with aging time is caused by the slowing down

  15. Semiclassical Limits of Ore Extensions and a Poisson Generalized Weyl Algebra

    NASA Astrophysics Data System (ADS)

    Cho, Eun-Hee; Oh, Sei-Qwon

    2016-07-01

    We observe [Launois and Lecoutre, Trans. Am. Math. Soc. 368:755-785, 2016, Proposition 4.1] that Poisson polynomial extensions appear as semiclassical limits of a class of Ore extensions. As an application, a Poisson generalized Weyl algebra A_1, considered as a Poisson version of the quantum generalized Weyl algebra, is constructed and its Poisson structures are studied. In particular, a necessary and sufficient condition for A_1 to be Poisson simple is obtained, and it is established that the Poisson endomorphisms of A_1 are Poisson analogues of the endomorphisms of the quantum generalized Weyl algebra.

  16. Generalized additive models and Lucilia sericata growth: assessing confidence intervals and error rates in forensic entomology.

    PubMed

    Tarone, Aaron M; Foran, David R

    2008-07-01

    Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.

  17. Phase-Field Modeling of Microstructure Evolution in Electron Beam Additive Manufacturing

    NASA Astrophysics Data System (ADS)

    Gong, Xibing; Chou, Kevin

    2015-05-01

    In this study, the microstructure evolution in the powder-bed electron beam additive manufacturing (EBAM) process is studied using phase-field modeling. In essence, EBAM involves a rapid solidification process and the properties of a build partly depend on the solidification behavior as well as the microstructure of the build material. Thus, the prediction of microstructure evolution in EBAM is of importance for its process optimization. Phase-field modeling was applied to study the microstructure evolution and solute concentration of the Ti-6Al-4V alloy in the EBAM process. The effect of undercooling was investigated through the simulations; the greater the undercooling, the faster the dendrite grows. The microstructure simulations show multiple columnar-grain growths, comparable with experimental results for the tested range.

  18. Robust estimation of mean and dispersion functions in extended generalized additive models.

    PubMed

    Croux, Christophe; Gijbels, Irène; Prosdocimi, Ilaria

    2012-03-01

    Generalized linear models are a widely used method to obtain parametric estimates for the mean function. They have been further extended to allow the relationship between the mean function and the covariates to be more flexible via generalized additive models. However, the fixed variance structure can in many cases be too restrictive. The extended quasilikelihood (EQL) framework allows for estimation of both the mean and the dispersion/variance as functions of covariates. As for other maximum likelihood methods though, EQL estimates are not resistant to outliers: we need methods to obtain robust estimates for both the mean and the dispersion function. In this article, we obtain functional estimates for the mean and the dispersion that are both robust and smooth. The performance of the proposed method is illustrated via a simulation study and some real data examples.

  19. Observations and model calculations of an additional layer in the topside ionosphere above Fortaleza, Brazil

    NASA Astrophysics Data System (ADS)

    Jenkins, B.; Bailey, G. J.; Abdu, M. A.; Batista, I. S.; Balan, N.

    1997-06-01

    Calculations using the Sheffield University plasmasphere ionosphere model have shown that under certain conditions an additional layer can form in the low latitude topside ionosphere. This layer (the F3 layer) has subsequently been observed in ionograms recorded at Fortaleza in Brazil. It has not been observed in ionograms recorded at the neighbouring station São Luis. Model calculations have shown that the F3 layer is most likely to form in summer at Fortaleza due to a combination of the neutral wind and the E×B drift acting to raise the plasma. At the location of São Luis, almost on the geomagnetic equator, the neutral wind has a smaller vertical component so the F3 layer does not form.

  20. Continental crust composition constrained by measurements of crustal Poisson's ratio

    NASA Astrophysics Data System (ADS)

    Zandt, George; Ammon, Charles J.

    1995-03-01

    Deciphering the geological evolution of the Earth's continental crust requires knowledge of its bulk composition and global variability. The main uncertainties are associated with the composition of the lower crust. Seismic measurements probe the elastic properties of the crust at depth, from which composition can be inferred. Of particular note is Poisson's ratio, σ; this elastic parameter can be determined uniquely from the ratio of P- to S-wave seismic velocity, and provides a better diagnostic of crustal composition than either P- or S-wave velocity alone [1]. Previous attempts to measure σ have been limited by difficulties in obtaining coincident P- and S-wave data sampling the entire crust [2]. Here we report 76 new estimates of crustal σ spanning all of the continents except Antarctica. We find that, on average, σ increases with the age of the crust. Our results strongly support the presence of a mafic lower crust beneath cratons, and suggest either a uniformitarian craton formation process involving delamination of the lower crust during continental collisions, followed by magmatic underplating, or a model in which crust formation processes have changed since the Precambrian era.
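
    For reference, σ follows from the P- to S-wave velocity ratio through the standard elastic relation σ = (Vp^2 - 2Vs^2) / (2(Vp^2 - Vs^2)); a minimal helper with illustrative values:

    ```python
    def poissons_ratio(vp, vs):
        """Poisson's ratio from P- and S-wave velocities."""
        r2 = (vp / vs) ** 2
        return (r2 - 2.0) / (2.0 * (r2 - 1.0))

    # a typical felsic crustal ratio vp/vs ~ 1.73 gives sigma ~ 0.25
    print(poissons_ratio(6.0, 6.0 / 1.73))
    ```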

  1. High-resolution regional gravity field recovery from Poisson wavelets using heterogeneous observational techniques

    NASA Astrophysics Data System (ADS)

    Wu, Yihao; Luo, Zhicai; Chen, Wu; Chen, Yongqi

    2017-02-01

    We adopt Poisson wavelets for regional gravity field recovery using data acquired from various observational techniques; the method combines data of different spatial resolutions and coverage, and various spectral contents and noise levels. For managing the ill-conditioned system, the performances of the zero- and first-order Tikhonov regularization approaches are investigated. Moreover, a direct approach is proposed to properly combine Global Positioning System (GPS)/leveling data with the gravimetric quasi-geoid/geoid, where GPS/leveling data are treated as an additional observation group to form a new functional model. In this manner, the quasi-geoid/geoid that fits the local leveling system can be computed in one step, and no post-processing (e.g., corrector surface or least squares collocation) procedures are needed. As a case study, we model a new reference surface over Hong Kong. The results show that solutions with first-order regularization are better than those obtained from zero-order regularization, which indicates the former may be preferable for regional gravity field modeling. The numerical results also demonstrate that the gravimetric quasi-geoid/geoid and GPS/leveling data can be combined properly using this direct approach, where no systematic errors exist between these two data sets. A comparison with 61 independent GPS/leveling points shows the accuracy of the new geoid, HKGEOID-2016, is around 1.1 cm. Further evaluation demonstrates that the new geoid improves significantly on the original model, HKGEOID-2000, and the standard deviation for the differences between the observed and computed geoidal heights at all GPS/leveling points is reduced from 2.4 to 0.6 cm. Finally, we conclude that HKGEOID-2016 can replace HKGEOID-2000 for engineering purposes and geophysical investigations in Hong Kong.
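
    The two regularization variants compared above differ only in the penalty operator L; a generic dense-matrix sketch of the Tikhonov solution (not the authors' Poisson-wavelet solver):

    ```python
    import numpy as np

    def tikhonov(A, b, lam, order=0):
        """Solve min ||Ax - b||^2 + lam * ||Lx||^2 via the normal equations,
        with L = I for zero-order and a first-difference matrix for
        first-order regularization."""
        n = A.shape[1]
        L = np.eye(n) if order == 0 else np.diff(np.eye(n), axis=0)
        return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)

    # ill-conditioned toy system: first-order smoothing of the solution
    rng = np.random.default_rng(6)
    A = rng.normal(size=(80, 40)) @ np.diag(1.0 / np.arange(1, 41) ** 2)
    x_hat = tikhonov(A, rng.normal(size=80), lam=1e-3, order=1)
    ```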

  2. Guarana provides additional stimulation over caffeine alone in the planarian model.

    PubMed

    Moustakas, Dimitrios; Mezzio, Michael; Rodriguez, Branden R; Constable, Mic Andre; Mulligan, Margaret E; Voura, Evelyn B

    2015-01-01

    The stimulant effect of energy drinks is primarily attributed to the caffeine they contain. Many energy drinks also contain other ingredients that might enhance the tonic effects of these caffeinated beverages. One of these additives is guarana. Guarana is a climbing plant native to the Amazon whose seeds contain approximately four times the amount of caffeine found in coffee beans. The mix of other natural chemicals contained in guarana seeds is thought to heighten the stimulant effects of guarana over caffeine alone. Yet, despite the growing use of guarana as an additive in energy drinks, and a burgeoning market for it as a nutritional supplement, the science examining guarana and how it affects other dietary ingredients is lacking. To appreciate the stimulant effects of guarana and other natural products, a straightforward model to investigate their physiological properties is needed. The planarian provides such a system. The locomotor activity and convulsive response of planarians with substance exposure has been shown to provide an excellent system to measure the effects of drug stimulation, addiction and withdrawal. To gauge the stimulant effects of guarana we studied how it altered the locomotor activity of the planarian species Dugesia tigrina. We report evidence that guarana seeds provide additional stimulation over caffeine alone, and document the changes to this stimulation in the context of both caffeine and glucose.

  3. Guarana Provides Additional Stimulation over Caffeine Alone in the Planarian Model

    PubMed Central

    Moustakas, Dimitrios; Mezzio, Michael; Rodriguez, Branden R.; Constable, Mic Andre; Mulligan, Margaret E.; Voura, Evelyn B.

    2015-01-01

    The stimulant effect of energy drinks is primarily attributed to the caffeine they contain. Many energy drinks also contain other ingredients that might enhance the tonic effects of these caffeinated beverages. One of these additives is guarana. Guarana is a climbing plant native to the Amazon whose seeds contain approximately four times the amount of caffeine found in coffee beans. The mix of other natural chemicals contained in guarana seeds is thought to heighten the stimulant effects of guarana over caffeine alone. Yet, despite the growing use of guarana as an additive in energy drinks, and a burgeoning market for it as a nutritional supplement, the science examining guarana and how it affects other dietary ingredients is lacking. To appreciate the stimulant effects of guarana and other natural products, a straightforward model to investigate their physiological properties is needed. The planarian provides such a system. The locomotor activity and convulsive response of planarians with substance exposure has been shown to provide an excellent system to measure the effects of drug stimulation, addiction and withdrawal. To gauge the stimulant effects of guarana we studied how it altered the locomotor activity of the planarian species Dugesia tigrina. We report evidence that guarana seeds provide additional stimulation over caffeine alone, and document the changes to this stimulation in the context of both caffeine and glucose. PMID:25880065

  4. Analysis and Modeling of soil hydrology under different soil additives in artificial runoff plots

    NASA Astrophysics Data System (ADS)

    Ruidisch, M.; Arnhold, S.; Kettering, J.; Huwe, B.; Kuzyakov, Y.; Ok, Y.; Tenhunen, J. D.

    2009-12-01

    The impact of monsoon events during June and July in the Korean project region, the Haean Basin, located in the northeastern part of South Korea, plays a key role in erosion, leaching, and the risk of groundwater pollution by agrochemicals. Therefore, the project investigates the main hydrological processes in agricultural soils under field and laboratory conditions on different scales (plot, hillslope and catchment). Soil hydrological parameters were analysed depending on different soil additives, which are known to prevent soil erosion and nutrient loss as well as to increase water infiltration, aggregate stability and soil fertility. Hence, synthetic water-soluble polyacrylamide (PAM), biochar (black carbon mixed with organic fertilizer), and a combination of PAM and biochar were applied to runoff plots at three agricultural field sites. Additionally, a control subplot was set up without any additives. The field sites were selected in areas with similar hillslope gradients and with emphasis on the dominant land management form of dryland farming in Haean, which is characterised by row planting and row covering with foil. Hydrological parameters such as saturated hydraulic conductivity, matric potential and water content were analysed by infiltration experiments, continuous tensiometer measurements and time domain reflectometry, as well as pressure plates to identify the characteristic water retention curve of each horizon. Weather data were recorded by three weather stations next to the runoff plots. The measured data also provide the input for modeling water transport in the unsaturated zone of the runoff plots with HYDRUS 1D/2D/3D and SWAT (Soil & Water Assessment Tool).

  5. Skill of Generalized Additive Model to Detect PM2.5 Health ...

    EPA Pesticide Factsheets

    Summary: Measures of health outcomes are collinear with meteorology and air quality, making analysis of connections between human health and air quality difficult. The purpose of this analysis was to determine time scales and periods shared by the variables of interest (and by implication scales and periods that are not shared). Hospital admissions, meteorology (temperature and relative humidity), and air quality (PM2.5 and daily maximum ozone) for New York City during the period 2000-2006 were decomposed into temporal scales ranging from 2 days to greater than two years using a complex wavelet transform. Health effects were modeled as functions of the wavelet components of meteorology and air quality using the generalized additive model (GAM) framework. This simulation study showed that GAM is extremely successful at extracting and estimating a health effect embedded in a dataset. It also shows that, if the objective in mind is to estimate the health signal but not to fully explain this signal, a simple GAM model with a single confounder (calendar time) whose smooth representation includes a sufficient number of constraints is as good as a more complex model. Introduction: In the context of wavelet regression, confounding occurs when two or more independent variables interact with the dependent variable at the same frequency. Confounding also acts on a variety of time scales, changing the PM2.5 coefficient (magnitude and sign) and its significance ac

  6. Computation of octanol-water partition coefficients by guiding an additive model with knowledge.

    PubMed

    Cheng, Tiejun; Zhao, Yuan; Li, Xun; Lin, Fu; Xu, Yong; Zhang, Xinglong; Li, Yan; Wang, Renxiao; Lai, Luhua

    2007-01-01

    We have developed a new method, XLOGP3, for logP computation. XLOGP3 predicts the logP value of a query compound by using the known logP value of a reference compound as a starting point. The difference in the logP values of the query compound and the reference compound is then estimated by an additive model. The additive model implemented in XLOGP3 uses a total of 87 atom/group types and two correction factors as descriptors. It is calibrated on a training set of 8199 organic compounds with reliable logP data through a multivariate linear regression analysis. For a given query compound, the compound showing the highest structural similarity in the training set will be selected as the reference compound. Structural similarity is quantified based on topological torsion descriptors. XLOGP3 has been tested, along with its predecessor XLOGP2 and several popular logP methods, on two independent test sets: one contains 406 small-molecule drugs approved by the FDA and the other contains 219 oligopeptides. On both test sets, XLOGP3 produces more accurate predictions than most of the other methods, with average unsigned errors of 0.24-0.51 units. Compared to conventional additive methods, XLOGP3 does not rely on an extensive classification of fragments and correction factors to improve accuracy. It is also able to utilize the ever-increasing body of experimentally measured logP data more effectively.
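
    The reference-compound idea admits a compact sketch. In the hypothetical fragment below, a query's logP is predicted as the known logP of its most similar training compound plus additive atom-type contributions weighted by count differences; the atom types and coefficients are invented for illustration and are not the fitted XLOGP3 values:

    ```python
    # Hypothetical atom-type contributions -- NOT the fitted XLOGP3 coefficients
    contrib = {"C.ar": 0.30, "C.3": 0.13, "O.hb": -0.40, "N.am": -0.60}

    def delta_logp(query_counts, ref_counts):
        """Additive correction: contributions weighted by atom-type count differences."""
        types = set(query_counts) | set(ref_counts)
        return sum(contrib.get(t, 0.0) * (query_counts.get(t, 0) - ref_counts.get(t, 0))
                   for t in types)

    ref_logp = 2.13                       # known logP of the most similar training compound
    query = {"C.ar": 6, "O.hb": 1}        # hypothetical atom-type counts of the query
    ref = {"C.ar": 6}
    print(ref_logp + delta_logp(query, ref))  # predicted logP = reference + correction
    ```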

  7. Poisson-like height distribution of Ag nanoislands on Si(111) 7 ×7

    NASA Astrophysics Data System (ADS)

    Chen, Yiyao; Gramlich, M. W.; Hayden, S. T.; Miceli, P. F.

    2017-01-01

    The height distribution of Ag(111) islands grown on Si(111) 7 ×7 was studied using in situ x-ray reflectivity. This noble metal-on-semiconductor system is of particular interest because the islands exhibit an unusual minimum height that is imposed by the quantum confinement of the conduction electrons. For different coverages and temperatures as well as annealing, it was found that the island heights exhibit a variance that is less than the mean by a constant amount. We argue that this behavior is related to Poisson-like statistics with the imposition of the minimum island height. A modified Poisson height distribution model is presented and shown to provide a good description of the experimentally measured island height distributions. The results, which contribute to a better understanding of the nanoscale growth behavior for an important noble metal, are discussed in terms of mobility that leads to taller islands.
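
    One simple way to realize a Poisson-like distribution with an imposed minimum height h_min (a hedged illustration; the paper's modified model may differ in detail) is to shift a Poisson variable of mean λ:

    ```latex
    P(H = h) \;=\; \frac{\lambda^{\,h - h_{\min}}\, e^{-\lambda}}{(h - h_{\min})!},
    \qquad h \ge h_{\min}
    ```

    Then E[H] = h_min + λ while Var[H] = λ, so the variance falls below the mean by exactly h_min, consistent with the constant offset reported above.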

  8. Stationary and Nonstationary Response Probability Density Function of a Beam under Poisson White Noise

    NASA Astrophysics Data System (ADS)

    Vasta, M.; Di Paola, M.

    In this paper, an approximate explicit probability density function is proposed for the analysis of external oscillations of a linear and geometrically nonlinear simply supported beam driven by random pulses. The adopted impulsive loading model is Poisson white noise, that is, a process of Dirac delta impulses with random intensity, distributed in time according to Poisson's law. The response probability density function can be obtained by solving the related Kolmogorov-Feller (KF) integro-differential equation. An approximate solution, using the path integral method, is derived by transforming the KF equation into a first-order partial differential equation. The method of characteristics is then applied to obtain an explicit solution. Different levels of approximation, depending on the physical assumption on the transition probability density function, are found, and the solution for the response density is obtained as a series expansion using convolution integrals.
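
    For a scalar state x(t) with drift f(x) driven by Poisson white noise of arrival rate λ and jump-amplitude density q, the Kolmogorov-Feller equation takes the schematic form below (a one-dimensional illustration; the beam problem yields a vector analogue after modal truncation):

    ```latex
    \frac{\partial p(x,t)}{\partial t}
      = -\frac{\partial}{\partial x}\bigl[f(x)\,p(x,t)\bigr]
        + \lambda \int_{-\infty}^{\infty} p(x - y,\,t)\, q(y)\, dy
        - \lambda\, p(x,t)
    ```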

  9. Poisson image reconstruction with Hessian Schatten-norm regularization.

    PubMed

    Lefkimmiatis, Stamatios; Unser, Michael

    2013-11-01

    Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an l_p norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework.

  10. Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates

    SciTech Connect

    Laurence, T; Chromy, B

    2009-11-10

    Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the Levenberg-Marquardt algorithm, commonly used for nonlinear least squares minimization, for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the nonlinear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, and is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Nonlinear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, this criterion is not easy to satisfy in practice, as it requires a large number of events. It has been well known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper provides extensive characterization of these biases in exponential fitting. The more appropriate measure based on the maximum likelihood estimator (MLE
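
    A minimal sketch of the underlying objective (not the paper's Levenberg-Marquardt variant itself): Poisson maximum likelihood fitting of a counting histogram amounts to minimizing the negative log-likelihood Σ_i [μ_i(p) − y_i log μ_i(p)]. Here a generic simplex minimizer from SciPy stands in for the L-M extension, and the decay model and parameter values are illustrative:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    t = np.arange(64) * 0.25                     # histogram bin centres (ns), illustrative

    def model(p):
        """Single-exponential decay plus constant background."""
        amp, tau, bg = p
        return amp * np.exp(-t / tau) + bg

    rng = np.random.default_rng(1)
    counts = rng.poisson(model([500.0, 2.5, 5.0]))  # synthetic photon counts

    def poisson_nll(p):
        mu = np.clip(model(p), 1e-12, None)
        return np.sum(mu - counts * np.log(mu))  # Poisson NLL up to a data-only constant

    fit = minimize(poisson_nll, x0=[300.0, 1.0, 1.0], method="Nelder-Mead")
    print(fit.x)  # recovered amplitude, lifetime, background
    ```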

  11. The charge conserving Poisson-Boltzmann equations: Existence, uniqueness, and maximum principle

    SciTech Connect

    Lee, Chiun-Chang

    2014-05-15

    The present article is concerned with the charge conserving Poisson-Boltzmann (CCPB) equation in high-dimensional bounded smooth domains. The CCPB equation is a Poisson-Boltzmann type of equation with nonlocal coefficients. First, under the Robin boundary condition, we get the existence of weak solutions to this equation. The main approach is variational, based on minimization of a logarithm-type energy functional. To deal with the regularity of weak solutions, we establish a maximum modulus estimate for the standard Poisson-Boltzmann (PB) equation to show that weak solutions of the CCPB equation are essentially bounded. Then the classical solutions follow from the elliptic regularity theorem. Second, a maximum principle for the CCPB equation is established. In particular, we show that in the case of global electroneutrality, the solution achieves both its maximum and minimum values at the boundary. However, in the case of global non-electroneutrality, the solution may attain its maximum value at an interior point. In addition, under certain conditions on the boundary, we show that the global non-electroneutrality implies pointwise non-electroneutrality.
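
    Schematically, and hedging on the paper's exact nondimensionalization, a charge conserving Poisson-Boltzmann equation for ion species with valences z_i replaces the usual Boltzmann coefficients by nonlocal terms normalized over the domain Ω, so that the total number N_i of ions of each species is conserved:

    ```latex
    -\,\Delta \phi \;=\; \sum_{i} z_i\, N_i\,
       \frac{e^{-z_i \phi}}{\displaystyle\int_{\Omega} e^{-z_i \phi}\, dx}
    ```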

  12. The biobehavioral family model: testing social support as an additional exogenous variable.

    PubMed

    Woods, Sarah B; Priest, Jacob B; Roush, Tara

    2014-12-01

    This study tests the inclusion of social support as a distinct exogenous variable in the Biobehavioral Family Model (BBFM). The BBFM is a biopsychosocial approach to health that proposes that biobehavioral reactivity (anxiety and depression) mediates the relationship between family emotional climate and disease activity. Data for this study included married, English-speaking adult participants (n = 1,321; 55% female; M age = 45.2 years) from the National Comorbidity Survey Replication, a nationally representative epidemiological study of the frequency of mental disorders in the United States. Participants reported their demographics, marital functioning, social support from friends and relatives, anxiety and depression (biobehavioral reactivity), number of chronic health conditions, and number of prescription medications. Confirmatory factor analyses supported the items used in the measures of negative marital interactions, social support, and biobehavioral reactivity, as well as the use of negative marital interactions, friends' social support, and relatives' social support as distinct factors in the model. Structural equation modeling indicated a good fit of the data to the hypothesized model (χ² = 846.04, p = .000, SRMR = .039, CFI = .924, TLI = .914, RMSEA = .043). Negative marital interactions predicted biobehavioral reactivity (β = .38, p < .001), as did relatives' social support, inversely (β = -.16, p < .001). Biobehavioral reactivity predicted disease activity (β = .40, p < .001) and was demonstrated to be a significant mediator through tests of indirect effects. Findings are consistent with previous tests of the BBFM with adult samples, and suggest the important addition of family social support as a predicting factor in the model.

  13. A habitat suitability model for Chinese sturgeon determined using the generalized additive method

    NASA Astrophysics Data System (ADS)

    Yi, Yujun; Sun, Jie; Zhang, Shanghong

    2016-03-01

    The Chinese sturgeon is a type of large anadromous fish that migrates between the ocean and rivers. Because of the construction of dams, this sturgeon's migration path has been cut off, and this species currently is on the verge of extinction. Simulating suitable environmental conditions for spawning followed by repairing or rebuilding its spawning grounds are effective ways to protect this species. Various habitat suitability models based on expert knowledge have been used to evaluate the suitability of spawning habitat. In this study, a two-dimensional hydraulic simulation is used to inform a habitat suitability model based on the generalized additive method (GAM); unlike expert-based approaches, the GAM is fitted to observed field data. The values of water depth and velocity are calculated first via the hydrodynamic model and later applied in the GAM. The final habitat suitability model is validated using the catch per unit effort (CPUE) data of 1999 and 2003. The model results show that a velocity of 1.06-1.56 m/s and a depth of 13.33-20.33 m are highly suitable ranges for the Chinese sturgeon to spawn. The hydraulic habitat suitability indexes (HHSI) for seven discharges (4000; 9000; 12,000; 16,000; 20,000; 30,000; and 40,000 m3/s) are calculated to evaluate integrated habitat suitability. The results show that the integrated habitat suitability reaches its highest value at a discharge of 16,000 m3/s. This study is the first to apply a GAM to evaluate the suitability of spawning grounds for the Chinese sturgeon. The study provides a reference for the identification of potential spawning grounds in the entire basin.
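
    The GAM fitting step can be sketched with the pygam library: one smooth term per covariate (water depth and velocity) is fitted to a suitability response, and the fitted surface is queried on a grid. The data below are synthetic stand-ins with a suitability peak placed near the ranges reported above:

    ```python
    import numpy as np
    from pygam import LinearGAM, s

    # Synthetic stand-in data: suitability vs. depth (m) and velocity (m/s)
    rng = np.random.default_rng(2)
    depth = rng.uniform(5.0, 30.0, 300)
    vel = rng.uniform(0.2, 2.5, 300)
    suit = np.exp(-((depth - 17.0) / 5.0) ** 2) * np.exp(-((vel - 1.3) / 0.4) ** 2)
    y = suit + rng.normal(0.0, 0.05, 300)

    # One smooth term per covariate, mirroring a depth + velocity GAM
    gam = LinearGAM(s(0) + s(1)).fit(np.column_stack([depth, vel]), y)

    # Query the fitted surface along the depth axis at a fixed velocity
    grid = np.column_stack([np.linspace(5.0, 30.0, 100), np.full(100, 1.3)])
    pred = gam.predict(grid)
    print(grid[pred.argmax(), 0])  # depth of maximum predicted suitability
    ```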

  14. Modeling particulate matter concentrations measured through mobile monitoring in a deletion/substitution/addition approach

    NASA Astrophysics Data System (ADS)

    Su, Jason G.; Hopke, Philip K.; Tian, Yilin; Baldwin, Nichole; Thurston, Sally W.; Evans, Kristin; Rich, David Q.

    2015-12-01

    Land use regression (LUR) modeling through local scale circular modeling domains has been used to predict traffic-related air pollution such as nitrogen oxides (NOX). LUR modeling for fine particulate matter (PM), which generally has smaller spatial gradients than NOX, has typically been applied in studies involving multiple study regions. To increase the spatial coverage for fine PM and key constituent concentrations, we designed a mobile monitoring network in Monroe County, New York to measure pollutant concentrations of black carbon (BC, wavelength at 880 nm), ultraviolet black carbon (UVBC, wavelength at 370 nm) and Delta-C (the difference between the UVBC and BC concentrations) using the Clarkson University Mobile Air Pollution Monitoring Laboratory (MAPL). A Deletion/Substitution/Addition (D/S/A) algorithm was applied, which used circular buffers as a basis for statistics. The algorithm maximizes the prediction accuracy for locations without measurements using the V-fold cross-validation technique, and it reduces overfitting compared to other approaches. We found that the D/S/A LUR modeling approach could achieve good results, with prediction powers of 60%, 63%, and 61%, respectively, for BC, UVBC, and Delta-C. The advantage of mobile monitoring is that it can monitor pollutant concentrations at hundreds of spatial points in a region, rather than the fewer than 100 points typical of a fixed-site saturation monitoring network. This research indicates that a mobile saturation sampling network, when combined with proper modeling techniques, can uncover small area variations (e.g., 10 m) in particulate matter concentrations.

  15. Revisiting automated G-protein coupled receptor modeling: the benefit of additional template structures for a neurokinin-1 receptor model.

    PubMed

    Kneissl, Benny; Leonhardt, Bettina; Hildebrandt, Andreas; Tautermann, Christofer S

    2009-05-28

    The feasibility of automated procedures for the modeling of G-protein coupled receptors (GPCR) is investigated using the example of the human neurokinin-1 (NK1) receptor. We use a combined method of homology modeling and molecular docking and analyze the information content of the resulting docking complexes regarding the binding mode for further refinements. Moreover, we explore the impact of different template structures, the bovine rhodopsin structure, the human beta2 adrenergic receptor, and in particular a combination of both templates to include backbone flexibility in the target conformational space. Our results for NK1 modeling demonstrate that model selection from a set of decoys cannot in general rely solely on docking experiments but still requires additional mutagenesis data. However, an enrichment factor of 2.6 in a nearly fully automated approach indicates that reasonable models can be created automatically if both available templates are used for model construction. Thus, the recently resolved GPCR structures open new ways to improve model building fundamentally.

  16. Generalized Additive Models Used to Predict Species Abundance in the Gulf of Mexico: An Ecosystem Modeling Tool

    PubMed Central

    Drexler, Michael; Ainsworth, Cameron H.

    2013-01-01

    Spatially explicit ecosystem models of all types require an initial allocation of biomass, often in areas where fisheries independent abundance estimates do not exist. A generalized additive modelling (GAM) approach is used to describe the abundance of 40 species groups (i.e. functional groups) across the Gulf of Mexico (GoM) using a large fisheries independent data set (SEAMAP) and climate scale oceanographic conditions. Predictor variables included in the model are chlorophyll a, sediment type, dissolved oxygen, temperature, and depth. Despite the presence of a large number of zeros in the data, a single GAM using a negative binomial distribution was suitable to make predictions of abundance for multiple functional groups. We present an example case study using pink shrimp (Farfantepenaeus duorarum) and compare the results to known distributions. The model successfully predicts the known areas of high abundance in the GoM, including those areas where no data were input into the model fitting. Overall, the model reliably captures areas of high and low abundance for the large majority of functional groups observed in SEAMAP. The result of this method allows for the objective setting of spatial distributions for numerous functional groups across a modeling domain, even where abundance data may not exist. PMID:23691223

  17. Impact of an additional chronic BDNF reduction on learning performance in an Alzheimer mouse model

    PubMed Central

    Psotta, Laura; Rockahr, Carolin; Gruss, Michael; Kirches, Elmar; Braun, Katharina; Lessmann, Volkmar; Bock, Jörg; Endres, Thomas

    2015-01-01

    There is increasing evidence that brain-derived neurotrophic factor (BDNF) plays a crucial role in Alzheimer's disease (AD) pathology. A number of studies demonstrated that AD patients exhibit reduced BDNF levels in the brain and blood serum, and in addition, several animal-based studies indicated a potential protective effect of BDNF against Aβ-induced neurotoxicity. In order to further investigate the role of BDNF in the etiology of AD, we created a novel mouse model by crossing a well-established AD mouse model (APP/PS1) with a mouse exhibiting a chronic BDNF deficiency (BDNF+/−). This new triple transgenic mouse model enabled us to further analyze the role of BDNF in AD in vivo. We reasoned that if BDNF has a protective effect against AD pathology, an AD-like phenotype in our new mouse model should occur earlier and/or with greater severity than in the APP/PS1 mice. Indeed, the behavioral analysis revealed that the APP/PS1-BDNF+/−-mice show an earlier onset of learning impairments in a two-way active avoidance task in comparison to APP/PS1- and BDNF+/−-mice. However, in the Morris water maze (MWM) test, we could not observe an overall aggravated impairment in spatial learning, and short-term memory in an object recognition task also remained intact in all tested mouse lines. In addition to the behavioral experiments, we analyzed the amyloid plaque pathology in the APP/PS1 and APP/PS1-BDNF+/−-mice and observed a comparable plaque density in the two genotypes. Moreover, our results revealed a higher plaque density in prefrontal cortical compared to hippocampal brain regions. Our data reveal that higher cognitive tasks requiring the recruitment of cortical networks appear to be more severely affected in our new mouse model than learning tasks requiring mainly sub-cortical networks. Furthermore, our observations of an accelerated impairment in active avoidance learning in APP/PS1-BDNF+/−-mice further supports the hypothesis that BDNF deficiency

  18. Spectral models of additive and modulation noise in speech and phonatory excitation signals

    NASA Astrophysics Data System (ADS)

    Schoentgen, Jean

    2003-01-01

    The article presents spectral models of additive and modulation noise in speech. The purpose is to learn about the causes of noise in the spectra of normal and disordered voices and to gauge whether the spectral properties of the perturbations of the phonatory excitation signal can be inferred from the spectral properties of the speech signal. The approach to modeling consists of deducing the Fourier series of the perturbed speech, assuming that the Fourier series of the noise and of the clean monocycle-periodic excitation are known. The models explain published data, take into account the effects of supraglottal tremor, demonstrate the modulation distortion owing to vocal tract filtering, establish conditions under which noise cues of different speech signals may be compared, and predict the impossibility of inferring the spectral properties of the frequency modulating noise from the spectral properties of the frequency modulation noise (e.g., phonatory jitter and frequency tremor). The general conclusion is that only phonatory frequency modulation noise is spectrally relevant. Other types of noise in speech are either epiphenomenal, or their spectral effects are masked by the spectral effects of frequency modulation noise.

  19. Mental self-government: development of the additional democratic learning style scale using Rasch measurement models.

    PubMed

    Nielsen, Tine; Kreiner, Svend; Styles, Irene

    2007-01-01

    This paper describes the development and validation of a democratic learning style scale intended to fill a gap in Sternberg's theory of mental self-government and the associated learning style inventory (Sternberg, 1988, 1997). The scale was constructed as an 8-item scale with a 7-category response scale. The scale was developed following an adapted version of DeVellis' (2003) guidelines for scale development. The validity of the Democratic Learning Style Scale was assessed by item analysis using graphical loglinear Rasch models (Kreiner and Christensen, 2002, 2004, 2006). The item analysis confirmed that the full 8-item revised Democratic Learning Style Scale fitted a graphical loglinear Rasch model with no differential item functioning but weak to moderate uniform local dependence between two items. In addition, a reduced 6-item version of the scale fitted the pure Rasch model with a rating scale parameterization. The revised Democratic Learning Style Scale can therefore be regarded as a sound measurement scale meeting requirements of both construct validity and objectivity.

  20. A Bayesian additive model for understanding public transport usage in special events.

    PubMed

    Rodrigues, Filipe; Borysov, Stanislav; Ribeiro, Bernardete; Pereira, Francisco

    2016-12-02

    Public special events, like sports games, concerts and festivals, are well known to create disruptions in transportation systems, often catching the operators by surprise. Although these are usually planned well in advance, their impact is difficult to predict, even when organisers and transportation operators coordinate. The problem becomes much harder when several events happen concurrently. To solve these problems, costly processes, heavily reliant on manual search and personal experience, are usual practice in large cities like Singapore, London or Tokyo. This paper presents a Bayesian additive model with Gaussian process components that combines smart card records from public transport with context information about events that is continuously mined from the Web. We develop an efficient approximate inference algorithm using expectation propagation, which allows us to predict the total number of public transportation trips to the special event areas, thereby contributing to a more adaptive transportation system. Furthermore, for multiple concurrent event scenarios, the proposed algorithm is able to disaggregate gross trip counts into their most likely components related to specific events and routine behavior. Using real data from Singapore, we show that the presented model outperforms the best baseline model by up to 26% in R2 and also has explanatory power for its individual components.
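
    The additive Gaussian process idea can be sketched with scikit-learn's exact GP (the paper itself uses custom expectation-propagation inference over smart-card and Web-mined event data; the series below is synthetic): a sum of kernels with different length scales captures a slow routine component plus a sharp event component.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Synthetic daily trip counts: smooth weekly routine plus a narrow event bump
    rng = np.random.default_rng(3)
    t = np.linspace(0.0, 30.0, 200)[:, None]                 # days
    routine = 500.0 + 80.0 * np.sin(2 * np.pi * t.ravel() / 7.0)
    event = 300.0 * np.exp(-0.5 * ((t.ravel() - 18.0) / 0.5) ** 2)
    y = routine + event + rng.normal(0.0, 20.0, t.shape[0])

    # The sum of kernels mirrors the additive structure: slow routine + sharp event
    kernel = RBF(length_scale=3.0) + RBF(length_scale=0.3) + WhiteKernel(noise_level=1.0)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, y)
    mean, std = gp.predict(t, return_std=True)
    print(float(mean.max()), float(std.mean()))  # peak demand and average uncertainty
    ```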

  1. Do bacterial cell numbers follow a theoretical Poisson distribution? Comparison of experimentally obtained numbers of single cells with random number generation via computer simulation.

    PubMed

    Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso; Koseki, Shigenobu

    2016-12-01

    We investigated a bacterial sample preparation procedure for single-cell studies. In the present study, we examined whether single bacterial cells obtained via 10-fold dilution followed a theoretical Poisson distribution. Four serotypes of Salmonella enterica, three serotypes of enterohaemorrhagic Escherichia coli and one serotype of Listeria monocytogenes were used as sample bacteria. An inoculum of each serotype was prepared via a 10-fold dilution series to obtain bacterial cell counts with mean values of one or two. To determine whether the experimentally obtained bacterial cell counts follow a theoretical Poisson distribution, a likelihood ratio test was conducted between the experimentally obtained cell counts and a Poisson distribution whose parameter was estimated by maximum likelihood estimation (MLE). The bacterial cell counts of each serotype followed a Poisson distribution sufficiently well. Furthermore, to examine the validity of the Poisson parameters obtained from the experimental bacterial cell counts, we compared them with the parameters of a Poisson distribution estimated using random number generation via computer simulation. The Poisson distribution parameters experimentally obtained from bacterial cell counts were within the range of the parameters estimated using a computer simulation. These results demonstrate that the bacterial cell counts of each serotype obtained via 10-fold dilution followed a Poisson distribution. The fact that the frequency of bacterial cell counts follows a Poisson distribution at low numbers can be applied to single-cell studies with a few bacterial cells. In particular, the procedure presented in this study enables us to develop an inactivation model at the single-cell level that can estimate the variability of surviving bacterial numbers during the bacterial death process.
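
    The likelihood ratio test against a fitted Poisson distribution can be sketched as follows (synthetic counts; in practice sparse tail categories would be pooled before relying on the chi-squared approximation):

    ```python
    import numpy as np
    from scipy.stats import chi2, poisson

    rng = np.random.default_rng(4)
    counts = rng.poisson(1.5, size=200)           # e.g. cells per aliquot after dilution

    lam = counts.mean()                           # MLE of the Poisson parameter
    vals, obs = np.unique(counts, return_counts=True)
    exp = poisson.pmf(vals, lam) * counts.size    # expected frequencies under the null

    # Likelihood-ratio (G) statistic against the fitted Poisson distribution
    G = 2.0 * np.sum(obs * np.log(obs / exp))
    df = len(vals) - 2                            # categories - 1 - one fitted parameter
    print(G, chi2.sf(G, df))                      # large p-value: Poisson not rejected
    ```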

  2. POISs3: A 3D Poisson smoother of structured grids

    NASA Astrophysics Data System (ADS)

    Lehtimaeki, R.

    Flow solvers based on solving Navier-Stokes or Euler equations generally need a computational grid to represent the domain of the flow. A structured computational grid can be efficiently produced by algebraic methods like transfinite interpolation. Unfortunately, algebraic methods propagate all kinds of unsmoothness of the boundary into the field. Unsmoothness of the grid, in turn, can result in inaccuracy in the flow solver. In the present work a 3D elliptic grid smoother was developed. The smoother is based on solving three Poisson equations, one for each curvilinear direction. The Poisson equations formed in the physical region are first transformed to the computational (rectilinear) region. The resulting equations form a system of three coupled elliptic quasi-linear partial differential equations in the computational domain. A short review of the Poisson method is presented. The regularity of a grid cell is studied and a skewness value is developed.
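
    A heavily simplified sketch of the idea: Laplacian smoothing, i.e., the Poisson system with zero control functions, iterated with Jacobi sweeps over a structured grid, diffuses boundary unsmoothness out of the field. The full smoother instead solves the quasi-linear transformed Poisson equations, but the relaxation structure is similar:

    ```python
    import numpy as np

    def laplace_smooth(x, y, iters=200):
        """Jacobi sweeps of Laplacian grid smoothing: each interior node moves to
        the average of its four neighbours (boundary nodes stay fixed)."""
        for _ in range(iters):
            x[1:-1, 1:-1] = 0.25 * (x[2:, 1:-1] + x[:-2, 1:-1] + x[1:-1, 2:] + x[1:-1, :-2])
            y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1] + y[1:-1, 2:] + y[1:-1, :-2])
        return x, y

    # Algebraic grid over a domain with a wavy lower boundary; transfinite-style
    # interpolation propagates the boundary waviness straight into the field
    xi, eta = np.meshgrid(np.linspace(0, 1, 41), np.linspace(0, 1, 21), indexing="ij")
    x = xi.copy()
    y = eta + (1.0 - eta) * 0.2 * np.sin(3 * np.pi * xi)

    x, y = laplace_smooth(x, y)   # interior grid lines relax toward smooth curves
    print(y[20, 10])
    ```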

  3. A spectral Poisson solver for kinetic plasma simulation

    NASA Astrophysics Data System (ADS)

    Szeremley, Daniel; Obberath, Jens; Brinkmann, Ralf

    2011-10-01

    Plasma resonance spectroscopy is a well established plasma diagnostic method, realized in several designs. One of these designs is the multipole resonance probe (MRP). In its idealized - geometrically simplified - version it consists of two dielectrically shielded, hemispherical electrodes to which an RF signal is applied. A numerical tool is under development which is capable of simulating the dynamics of the plasma surrounding the MRP in electrostatic approximation. In this contribution we concentrate on the specialized Poisson solver for that tool. The plasma is represented by an ensemble of point charges. By expanding both the charge density and the potential into spherical harmonics, a largely analytical solution of the Poisson problem can be employed. For a practical implementation, the expansion must be appropriately truncated. With this spectral solver we are able to efficiently solve the Poisson equation in a kinetic plasma simulation without the need of introducing a spatial discretization.

  4. Blocked Shape Memory Effect in Negative Poisson's Ratio Polymer Metamaterials.

    PubMed

    Boba, Katarzyna; Bianchi, Matteo; McCombe, Greg; Gatt, Ruben; Griffin, Anselm C; Richardson, Robert M; Scarpa, Fabrizio; Hamerton, Ian; Grima, Joseph N

    2016-08-10

    We describe a new class of negative Poisson's ratio (NPR) open cell PU-PE foams produced by blocking the shape memory effect in the polymer. Contrary to classical NPR open cell thermoset and thermoplastic foams, which return to their conventional phase after reheating (and therefore limit their use in technological applications), this new class of cellular solids has a permanent negative Poisson's ratio behavior, generated through multiple shape memory (mSM) treatments that fix the topology of the foam cells. The mSM-NPR foams have Poisson's ratio values similar to those of the auxetic foams prior to their return to the conventional phase, but compressive stress-strain curves similar to those of conventional foams. The results show that by manipulating the shape memory effect in polymer microstructures it is possible to obtain new classes of materials with unusual deformation mechanisms.

  5. Effect of Poisson noise on adiabatic quantum control

    NASA Astrophysics Data System (ADS)

    Kiely, A.; Muga, J. G.; Ruschhaupt, A.

    2017-01-01

    We present a detailed derivation of the master equation describing a general time-dependent quantum system with classical Poisson white noise and outline its various properties. We discuss the limiting cases of Poisson white noise and provide approximations for the different noise strength regimes. We show that using the eigenstates of the noise superoperator as a basis can be a useful way of expressing the master equation. Using this, we simulate various settings to illustrate different effects of Poisson noise. In particular, we show a dip in the fidelity as a function of noise strength, such that high fidelity can occur in the strong-noise regime in some cases. We also investigate recent claims [J. Jing et al., Phys. Rev. A 89, 032110 (2014), 10.1103/PhysRevA.89.032110] that this type of noise may improve rather than destroy adiabaticity.

  6. Design and tuning of standard additive model based fuzzy PID controllers for multivariable process systems.

    PubMed

    Harinath, Eranda; Mann, George K I

    2008-06-01

    This paper describes a design and two-level tuning method for fuzzy proportional-integral-derivative (FPID) controllers for a multivariable process, where the fuzzy inference uses the standard additive model. The proposed method can be used for any n x n multi-input-multi-output process and guarantees closed-loop stability. In the two-level tuning scheme, the tuning follows two steps: low-level tuning followed by high-level tuning. The low-level tuning adjusts apparent linear gains, whereas the high-level tuning changes the nonlinearity in the normalized fuzzy output. In this paper, two types of FPID configurations are considered, and their performances are evaluated by using a real-time multizone temperature control problem having a 3 x 3 process system.

  7. Modeling the flux of metabolites in the juvenile hormone biosynthesis pathway using generalized additive models and ordinary differential equations.

    PubMed

    Martínez-Rincón, Raúl O; Rivera-Pérez, Crisalejandra; Diambra, Luis; Noriega, Fernando G

    2017-01-01

    Juvenile hormone (JH) regulates development and reproductive maturation in insects. The corpora allata (CA) from female adult mosquitoes synthesize fluctuating levels of JH, which have been linked to ovarian development and are influenced by nutritional signals. The rate of JH biosynthesis is controlled by the rate of flux of isoprenoids in the pathway, which is the outcome of a complex interplay of changes in precursor pools and enzyme levels. A comprehensive study of the changes in enzymatic activities and precursor pool sizes has been previously reported for the JH biosynthesis pathway of the mosquito Aedes aegypti. In the present studies, we used two different quantitative approaches to describe and predict how changes in the individual metabolic reactions in the pathway affect JH synthesis. First, we constructed generalized additive models (GAMs) that described the association between changes in specific metabolite concentrations and changes in enzymatic activities and substrate concentrations. Changes in substrate concentrations explained 50% or more of the model deviances in 7 of the 13 metabolic steps analyzed. Adding information on enzymatic activities almost always improved the fit of GAMs built solely on substrate concentrations. GAMs were validated using experimental data that were not included when the model was built. In addition, a system of ordinary differential equations (ODEs) was developed to describe the instantaneous changes in metabolites as a function of the levels of enzymatic catalytic activities. The results demonstrated the ability of the models to predict changes in the flux of metabolites in the JH pathway, and they can be used in the future to design and validate experimental manipulations of JH synthesis.
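
    The ODE component can be sketched as a small flux-balance system: each pool changes at the rate at which the upstream step produces it minus the rate at which the downstream step consumes it. The three-step chain, rate constants and diurnal modulation below are hypothetical, not the fitted Aedes aegypti model:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Hypothetical three-step chain: precursor -> M1 -> M2 -> JH release
    def rates(t):
        k1 = 0.8 + 0.4 * np.sin(2 * np.pi * t / 24.0)  # diurnally varying activity
        return np.array([k1, 0.5, 0.3])

    def flux_model(t, m):
        k = rates(t)
        v = k * m                      # flux through each enzymatic step
        return [1.0 - v[0],            # precursor supply minus first step
                v[0] - v[1],
                v[1] - v[2]]           # v[2] is the instantaneous JH synthesis rate

    sol = solve_ivp(flux_model, (0.0, 72.0), y0=[0.5, 0.2, 0.1], max_step=0.5)
    print(sol.y[:, -1])                # metabolite pool sizes after 72 h
    ```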

  8. Modeling the flux of metabolites in the juvenile hormone biosynthesis pathway using generalized additive models and ordinary differential equations

    PubMed Central

    Martínez-Rincón, Raúl O.; Rivera-Pérez, Crisalejandra; Diambra, Luis; Noriega, Fernando G.

    2017-01-01

    Juvenile hormone (JH) regulates development and reproductive maturation in insects. The corpora allata (CA) from female adult mosquitoes synthesize fluctuating levels of JH, which have been linked to ovarian development and are influenced by nutritional signals. The rate of JH biosynthesis is controlled by the rate of flux of isoprenoids in the pathway, which is the outcome of a complex interplay of changes in precursor pools and enzyme levels. A comprehensive study of the changes in enzymatic activities and precursor pool sizes has been previously reported for the JH biosynthesis pathway of the mosquito Aedes aegypti. In the present studies, we used two different quantitative approaches to describe and predict how changes in the individual metabolic reactions in the pathway affect JH synthesis. First, we constructed generalized additive models (GAMs) that described the association between changes in specific metabolite concentrations and changes in enzymatic activities and substrate concentrations. Changes in substrate concentrations explained 50% or more of the model deviances in 7 of the 13 metabolic steps analyzed. Adding information on enzymatic activities almost always improved the fit of GAMs built solely on substrate concentrations. GAMs were validated using experimental data that were not included when the model was built. In addition, a system of ordinary differential equations (ODEs) was developed to describe the instantaneous changes in metabolites as a function of the levels of enzymatic catalytic activities. The results demonstrated the ability of the models to predict changes in the flux of metabolites in the JH pathway, and they can be used in the future to design and validate experimental manipulations of JH synthesis. PMID:28158248

  9. Use of generalised additive models to categorise continuous variables in clinical prediction

    PubMed Central

    2013-01-01

    Background: In medical practice many, essentially continuous, clinical parameters tend to be categorised by physicians for ease of decision-making. Indeed, categorisation is a common practice both in medical research and in the development of clinical prediction rules, particularly where the ensuing models are to be applied in daily clinical practice to support clinicians in the decision-making process. Since the number of categories into which a continuous predictor must be categorised depends partly on the relationship between the predictor and the outcome, the need for more than two categories must be borne in mind. Methods: We propose a categorisation methodology for clinical-prediction models, using Generalised Additive Models (GAMs) with P-spline smoothers to determine the relationship between the continuous predictor and the outcome. The proposed method consists of creating at least one average-risk category along with high- and low-risk categories based on the GAM smooth function. We applied this methodology to a prospective cohort of patients with exacerbated chronic obstructive pulmonary disease. The predictors selected were respiratory rate and partial pressure of carbon dioxide in the blood (PCO2), and the response variable was poor evolution. An additive logistic regression model was used to show the relationship between the covariates and the dichotomous response variable. The proposed categorisation was compared to the continuous predictor as the best option, using the AIC and AUC evaluation parameters. The sample was divided into derivation (60%) and validation (40%) samples. The first was used to obtain the cut points while the second was used to validate the proposed methodology. Results: The three-category proposal for the respiratory rate was ≤20, (20, 24], and >24, for which the following values were obtained: AIC=314.5 and AUC=0.638. The respective values for the continuous predictor were AIC=317.1 and AUC=0.634, with no statistically

  10. Composite laminates with negative through-the-thickness Poisson's ratios

    NASA Technical Reports Server (NTRS)

    Herakovich, C. T.

    1984-01-01

    A simple analysis using two-dimensional lamination theory combined with the appropriate three-dimensional anisotropic constitutive equation is presented to show some rather surprising results for the range of values of the through-the-thickness effective Poisson's ratio ν_xz for angle-ply laminates. Results for graphite-epoxy show that the through-the-thickness effective Poisson's ratio can range from a high of 0.49 for a 90° laminate to a low of -0.21 for a ±25s laminate. It is shown that negative values of ν_xz are also possible for other laminates.

  11. Composite laminates with negative through-the-thickness Poisson's ratios

    NASA Technical Reports Server (NTRS)

    Herakovich, C. T.

    1984-01-01

    A simple analysis using two-dimensional lamination theory combined with the appropriate three-dimensional anisotropic constitutive equation is presented to show some rather surprising results for the range of values of the through-the-thickness effective Poisson's ratio ν_xz for angle-ply laminates. Results for graphite-epoxy show that the through-the-thickness effective Poisson's ratio can range from a high of 0.49 for a 90° laminate to a low of -0.21 for a ±25s laminate. It is shown that negative values of ν_xz are also possible for other laminates.

  12. A Study of Poisson's Ratio in the Yield Region

    NASA Technical Reports Server (NTRS)

    Gerard, George; Wildhorn, Sorrel

    1952-01-01

    In the yield region of the stress-strain curve the variation in Poisson's ratio from the elastic to the plastic value is most pronounced. This variation was studied experimentally by a systematic series of tests on several aluminum alloys. The tests were conducted under simple tensile and compressive loading along three orthogonal axes. A theoretical variation of Poisson's ratio for an orthotropic solid was obtained from dilatational considerations. The assumptions used in deriving the theory were examined by use of the test data and were found to be in reasonable agreement with experimental evidence.

  13. Spin-probe ESR and molecular modeling studies on calcium carbonate dispersions in overbased detergent additives.

    PubMed

    Montanari, Luciano; Frigerio, Francesco

    2010-08-15

    Oil-soluble calcium carbonate colloids are used as detergent additives in lubricating oils. They are colloidal dispersions of calcium carbonate particles stabilized by different surfactants; in this study alkyl-aryl-sulfonates and sulfurized alkyl-phenates, widely used in the synthesis of these additives, are considered. The physical properties of the surfactant layers surrounding the surfaces of calcium carbonate particles were analyzed by using nitroxide spin-probes (stable free radicals) and observing the corresponding ESR spectra. The spin-probe molecules contain polar groups which tend to tether them to the polar surface of the carbonate particle. They can reach these surfaces only if the surfactant layers are not very compact; hence the relative amount of spin-probe molecules accessing the carbonate surfaces is an index of the compactness of the surfactant core. ESR signals of spin-probe molecules dissolved in oil or "locked" near the carbonate surfaces differ because of the different molecular mobility. Through deconvolution of the ESR spectra, the fraction of spin-probes penetrating the surfactant shells has been calculated, and differences were observed according to the surfactant molecular structures. Moreover, by using specially labeled spin-probes based on stearic acids, functionalized at different separations from the carboxylic acid group, it was possible to interrogate the physical behavior of the surfactant shells at different distances from the carbonate surfaces. Molecular modeling was applied to generate three-dimensional micellar models of the stabilized carbonate particles with different molecular structures of the surfactant. The diffusion of spin-probe molecules into the surfactant shells was studied by applying a starting force to push the molecules towards the carbonate surfaces and then observing the ensuing behavior. The simulations are in accordance with the ESR data and show that the geometrical

  14. Modeling external carbon addition in biological nutrient removal processes with an extension of the international water association activated sludge model.

    PubMed

    Swinarski, M; Makinia, J; Stensel, H D; Czerwionka, K; Drewnowski, J

    2012-08-01

    The aim of this study was to expand the International Water Association Activated Sludge Model No. 2d (ASM2d) to account for a newly defined readily biodegradable substrate that can be consumed by polyphosphate-accumulating organisms (PAOs) under anoxic and aerobic conditions, but not under anaerobic conditions. The model change was to add a new substrate component and process terms for its use by PAOs and other heterotrophic bacteria under anoxic and aerobic conditions. The Gdansk (Poland) wastewater treatment plant (WWTP), which has a modified University of Cape Town (MUCT) process for nutrient removal, provided field data and mixed liquor for batch tests for model evaluation. The original ASM2d was first calibrated under dynamic conditions with the results of batch tests with settled wastewater and mixed liquor, in which nitrate-uptake rates, phosphorus-release rates, and anoxic phosphorus-uptake rates were followed. Model validation was conducted with data from a 96-hour measurement campaign in the full-scale WWTP. The results of similar batch tests with ethanol and fusel oil as the external carbon sources were used to adjust kinetic and stoichiometric coefficients in the expanded ASM2d. Both models were compared based on their predictions of the effect of adding supplemental carbon to the anoxic zone of an MUCT process. In comparison with the ASM2d, the new model better predicted the anoxic behavior of carbonaceous oxygen demand, nitrate-nitrogen (NO3-N), and phosphorus (PO4-P) in batch experiments with ethanol and fusel oil. However, when simulating ethanol addition to the anoxic zone of a full-scale biological nutrient removal facility, both models predicted similar effluent NO3-N concentrations (6.6 to 6.9 g N/m3). For the particular application, effective enhanced biological phosphorus removal was predicted by both models with external carbon addition but, for the new model, the effluent PO4-P concentration was approximately one-half of that found from

  15. An optimal method to segment piecewise Poisson distributed signals with application to sequencing data.

    PubMed

    Duan, Junbo; Soussen, Charles; Brie, David; Idier, Jerome; Wang, Yu-Ping; Wan, Mingxi

    2015-01-01

    To analyze next generation sequencing data, the so-called read depth signal is often segmented with standard segmentation tools. However, these tools usually assume the signal to be piecewise constant and contaminated with zero-mean Gaussian noise, and therefore modeling error occurs. This paper models the read depth signal with a piecewise Poisson distribution, which is more appropriate to the next generation sequencing mechanism. Based on the proposed model, an optimal dynamic programming algorithm with parallel computing is proposed to segment the piecewise signal and, furthermore, to detect copy number variations.
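
    A minimal sketch of such a segmentation (plain optimal-partitioning dynamic programming with a per-changepoint penalty; the paper's algorithm and its parallelization details may differ): the cost of a candidate segment is its negative Poisson log-likelihood with the rate fixed at the segment mean, computable in O(1) from cumulative sums:

    ```python
    import numpy as np

    def seg_cost(cum, i, j):
        """Negative Poisson log-likelihood (up to a data-only constant) of y[i:j],
        with the segment rate fixed at its MLE, the segment mean."""
        s, L = cum[j] - cum[i], j - i
        return s - s * np.log(s / L) if s > 0 else 0.0

    def segment(y, penalty=3.0):
        """O(n^2) optimal-partitioning dynamic program; returns changepoints."""
        n = len(y)
        cum = np.concatenate(([0.0], np.cumsum(y, dtype=float)))
        F = np.full(n + 1, np.inf)
        F[0] = -penalty
        last = np.zeros(n + 1, dtype=int)
        for j in range(1, n + 1):
            for i in range(j):
                c = F[i] + penalty + seg_cost(cum, i, j)
                if c < F[j]:
                    F[j], last[j] = c, i
        cps, j = [], n
        while j > 0:
            cps.append(j)
            j = last[j]
        return cps[::-1]

    rng = np.random.default_rng(5)
    y = np.concatenate([rng.poisson(2, 100), rng.poisson(8, 60), rng.poisson(3, 80)])
    print(segment(y))  # expect breaks near 100 and 160 (the final index 240 closes it)
    ```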

  16. Additive surface complexation modeling of uranium(VI) adsorption onto quartz-sand dominated sediments.

    PubMed

    Dong, Wenming; Wan, Jiamin

    2014-06-17

    Many aquifers contaminated by U(VI)-containing acidic plumes are composed predominantly of quartz-sand sediments. The F-Area of the Savannah River Site (SRS) in South Carolina (USA) is an example. To predict U(VI) mobility and natural attenuation, we conducted U(VI) adsorption experiments using the F-Area plume sediments and reference quartz, goethite, and kaolinite. The sediments are composed of ∼96% quartz-sand and 3-4% fine fractions of kaolinite and goethite. We developed a new humic acid adsorption method for determining the relative surface area abundances of goethite and kaolinite in the fine fractions. This method is expected to be applicable to many other binary mineral pairs, and allows successful application of the component additivity (CA) approach based surface complexation modeling (SCM) at the SRS F-Area and other similar aquifers. Our experimental results indicate that quartz has stronger U(VI) adsorption ability per unit surface area than goethite and kaolinite at pH ≤ 4.0. Our modeling results indicate that the binary (goethite/kaolinite) CA-SCM under-predicts U(VI) adsorption to the quartz-sand dominated sediments at pH ≤ 4.0. The new ternary (quartz/goethite/kaolinite) CA-SCM provides excellent predictions. The contributions of quartz-sand, kaolinite, and goethite to U(VI) adsorption and the potential influences of dissolved Al, Si, and Fe are also discussed.

  17. Modeling and additive manufacturing of bio-inspired composites with tunable fracture mechanical properties.

    PubMed

    Dimas, Leon S; Buehler, Markus J

    2014-07-07

    Flaws, imperfections and cracks are ubiquitous in material systems and are commonly the catalysts of catastrophic material failure. As stresses and strains tend to concentrate around cracks and imperfections, structures tend to fail far before large regions of material have ever been subjected to significant loading. Therefore, a major challenge in material design is to engineer systems that perform on par with pristine structures despite the presence of imperfections. In this work we integrate knowledge of biological systems with computational modeling and state-of-the-art additive manufacturing to synthesize advanced composites with tunable fracture mechanical properties. Supported by extensive mesoscale computer simulations, we demonstrate the design and manufacturing of composites that exhibit deformation mechanisms characteristic of pristine systems, featuring flaw-tolerant properties. We analyze the results by directly comparing strain fields for the synthesized composites, obtained through digital image correlation (DIC), and the computationally tested composites. Moreover, we plot Ashby diagrams for the range of simulated and experimental composites. Our findings show good agreement between simulation and experiment, confirming that the proposed mechanisms have a significant potential for vastly improving the fracture response of composite materials. We elucidate the role of stiffness ratio variations of composite constituents as an important feature in determining the composite properties. Moreover, our work validates the predictive ability of our models, presenting them as useful tools for guiding further material design. This work enables the tailored design and manufacturing of composites assembled from inferior building blocks that attain optimal combinations of stiffness and toughness.

  18. Evaluation of the performance of smoothing functions in generalized additive models for spatial variation in disease.

    PubMed

    Siangphoe, Umaporn; Wheeler, David C

    2015-01-01

    Generalized additive models (GAMs) with bivariate smoothing functions have been applied to estimate spatial variation in risk for many types of cancers. Only a handful of studies have evaluated the performance of smoothing functions applied in GAMs with regard to different geographical areas of elevated risk and different risk levels. This study evaluates the ability of different smoothing functions to detect overall spatial variation of risk and elevated risk in diverse geographical areas at various risk levels using a simulation study. We created five scenarios with different true risk area shapes (circle, triangle, linear) in a square study region. We applied four different smoothing functions in the GAMs, including two types of thin plate regression splines (TPRS) and two versions of locally weighted scatterplot smoothing (loess). We tested the null hypothesis of constant risk and detected areas of elevated risk using analysis of deviance with permutation methods and assessed the performance of the smoothing methods based on the spatial detection rate, sensitivity, accuracy, precision, power, and false-positive rate. The results showed that all methods had a higher sensitivity and a consistently moderate-to-high accuracy rate when the true disease risk was higher. The models generally performed better in detecting elevated risk areas than detecting overall spatial variation. One of the loess methods had the highest precision in detecting overall spatial variation across scenarios and outperformed the other methods in detecting a linear elevated risk area. The TPRS methods outperformed loess in detecting elevated risk in two circular areas.

  19. Evaluation of the Performance of Smoothing Functions in Generalized Additive Models for Spatial Variation in Disease

    PubMed Central

    Siangphoe, Umaporn; Wheeler, David C.

    2015-01-01

    Generalized additive models (GAMs) with bivariate smoothing functions have been applied to estimate spatial variation in risk for many types of cancers. Only a handful of studies have evaluated the performance of smoothing functions applied in GAMs with regard to different geographical areas of elevated risk and different risk levels. This study evaluates the ability of different smoothing functions to detect overall spatial variation of risk and elevated risk in diverse geographical areas at various risk levels using a simulation study. We created five scenarios with different true risk area shapes (circle, triangle, linear) in a square study region. We applied four different smoothing functions in the GAMs, including two types of thin plate regression splines (TPRS) and two versions of locally weighted scatterplot smoothing (loess). We tested the null hypothesis of constant risk and detected areas of elevated risk using analysis of deviance with permutation methods and assessed the performance of the smoothing methods based on the spatial detection rate, sensitivity, accuracy, precision, power, and false-positive rate. The results showed that all methods had a higher sensitivity and a consistently moderate-to-high accuracy rate when the true disease risk was higher. The models generally performed better in detecting elevated risk areas than detecting overall spatial variation. One of the loess methods had the highest precision in detecting overall spatial variation across scenarios and outperformed the other methods in detecting a linear elevated risk area. The TPRS methods outperformed loess in detecting elevated risk in two circular areas. PMID:25983545

  20. Generalized additive models reveal the intrinsic complexity of wood formation dynamics.

    PubMed

    Cuny, Henri E; Rathgeber, Cyrille B K; Kiessé, Tristan Senga; Hartmann, Felix P; Barbeito, Ignacio; Fournier, Meriem

    2013-04-01

    The intra-annual dynamics of wood formation, which involves the passage of newly produced cells through three successive differentiation phases (division, enlargement, and wall thickening) to reach the final functional mature state, has traditionally been described in conifers as three delayed bell-shaped curves followed by an S-shaped curve. Here the classical view represented by the 'Gompertz function (GF) approach' was challenged using two novel approaches based on parametric generalized linear models (GLMs) and 'data-driven' generalized additive models (GAMs). These three approaches (GFs, GLMs, and GAMs) were used to describe seasonal changes in cell numbers in each of the xylem differentiation phases and to calculate the timing of cell development in three conifer species [Picea abies (L.), Pinus sylvestris L., and Abies alba Mill.]. GAMs outperformed GFs and GLMs in describing intra-annual wood formation dynamics, showing two left-skewed bell-shaped curves for division and enlargement, and a right-skewed bimodal curve for thickening. Cell residence times progressively decreased through the season for enlargement, whilst increasing late but rapidly for thickening. These patterns match changes in cell anatomical features within a tree ring, which allows the separation of earlywood and latewood into two distinct cell populations. A novel statistical approach is presented which renews our understanding of xylogenesis, a dynamic biological process in which the rate of cell production interplays with cell residence times in each developmental phase to create complex seasonal patterns.

  1. Collisional effects on the numerical recurrence in Vlasov-Poisson simulations

    SciTech Connect

    Pezzi, Oreste; Valentini, Francesco; Camporeale, Enrico

    2016-02-15

    The initial state recurrence in numerical simulations of the Vlasov-Poisson system is a well-known phenomenon. Here, we study the effect on recurrence of artificial collisions modeled through the Lenard-Bernstein operator [A. Lenard and I. B. Bernstein, Phys. Rev. 112, 1456-1459 (1958)]. By decomposing the linear Vlasov-Poisson system in the Fourier-Hermite space, the recurrence problem is investigated in the linear regime of the damping of a Langmuir wave and of the onset of the bump-on-tail instability. The analysis is then confirmed and extended to the nonlinear regime through an Eulerian collisional Vlasov-Poisson code. It is found that, despite being routinely used, artificial collisionality is not a viable way of preventing recurrence in numerical simulations without compromising the kinetic nature of the solution. Moreover, it is shown how numerical effects associated with the generation of fine velocity scales can modify the physical features of the system evolution even in the nonlinear regime. This means that filamentation-like phenomena, usually associated with low-amplitude fluctuations, can play a role even in the nonlinear regime.
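
    The recurrence phenomenon itself is easy to reproduce without any collision operator: for free streaming on a uniform velocity grid of spacing Δv, each spatial Fourier mode of the density is periodic with period T_R = 2π/(kΔv). A minimal sketch (grid and wavenumber are illustrative, not the paper's):

        import numpy as np

        # Free streaming: f_k(v, t) = f_k(v, 0) exp(-i k v t). On a uniform grid
        # the discrete density rho_k(t) = sum_j f_k(v_j, 0) exp(-i k v_j t) dv
        # recurs exactly at T_R = 2 pi / (k dv).
        k = 0.5
        v = np.linspace(-6.0, 6.0, 121)  # dv = 0.1
        dv = v[1] - v[0]
        f0 = np.exp(-(v ** 2) / 2.0) / np.sqrt(2.0 * np.pi)

        T_R = 2.0 * np.pi / (k * dv)
        for t in (0.0, 0.25 * T_R, 0.5 * T_R, T_R):
            rho = np.sum(f0 * np.exp(-1j * k * v * t)) * dv
            print(f"t/T_R = {t / T_R:4.2f}   |rho_k| = {abs(rho):.6f}")
        # |rho_k| decays by phase mixing, then returns to its t = 0 value at T_R.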

  3. Extremal Properties of an Intermittent Poisson Process Generating 1/f Noise

    NASA Astrophysics Data System (ADS)

    Grüneis, Ferdinand

    2016-08-01

    It is well-known that the total power of a signal exhibiting a pure 1/f shape is divergent. This phenomenon is also called the infrared catastrophe. Mandelbrot claims that the infrared catastrophe can be overcome by stochastic processes which alternate between active and quiescent states. We investigate an intermittent Poisson process (IPP) which belongs to the family of stochastic processes suggested by Mandelbrot. During the intermission δ (quiescent period) the signal is zero. The active period is divided into random intervals of mean length τ0 consisting of a fluctuating number of events, giving rise to so-called clusters. The advantage of our treatment is that the spectral features of the IPP can be derived analytically. Our considerations focus on the case where intermission is only a small disturbance of the Poisson process, i.e., the case δ ≤ τ0. This makes it difficult or even impossible to discriminate a spike train of such an IPP from that of a Poisson process. We investigate the conditions under which a 1/f spectrum can be observed. It is shown that 1/f noise generated by the IPP is accompanied by extreme variance. In agreement with the considerations of Mandelbrot, the IPP avoids the infrared catastrophe. Spectral analysis of the simulated IPP confirms our theoretical results. The IPP is a model for an almost random walk generating both white and 1/f noise and can be applied to the interpretation of 1/f noise in metallic resistors.
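
    A crude simulation along these lines (exponential active and quiescent durations; all parameters invented, and the intra-period clustering of the paper is not modeled) shows how such a spectrum can be estimated:

        import numpy as np

        rng = np.random.default_rng(2)

        # Intermittent Poisson process: active periods of mean length tau0 with
        # unit-rate Poisson events, alternating with intermissions of mean delta.
        tau0, delta, rate, T = 10.0, 5.0, 1.0, 5.0e4
        events, t = [], 0.0
        while t < T:
            t_end = t + rng.exponential(tau0)        # active period
            while True:
                t += rng.exponential(1.0 / rate)
                if t >= min(t_end, T):
                    break
                events.append(t)
            t = t_end + rng.exponential(delta)       # quiescent period

        # Bin the spike train and estimate its power spectrum with a periodogram;
        # a 1/f-like component appears as a low-frequency slope on a log-log plot.
        dt = 1.0
        counts, _ = np.histogram(events, bins=np.arange(0.0, T + dt, dt))
        spec = np.abs(np.fft.rfft(counts - counts.mean())) ** 2 / counts.size
        freq = np.fft.rfftfreq(counts.size, d=dt)
        print(np.round(freq[1:6], 5), np.round(spec[1:6], 2))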

  4. Multinomial additive hazard model to assess the disability burden using cross-sectional data.

    PubMed

    Yokota, Renata T C; Van Oyen, Herman; Looman, Caspar W N; Nusselder, Wilma J; Otava, Martin; Kifle, Yimer Wasihun; Molenberghs, Geert

    2017-03-23

    Population aging is accompanied by the burden of chronic diseases and disability. Chronic diseases are among the main causes of disability, which is associated with poor quality of life and high health care costs in the elderly. The identification of which chronic diseases contribute most to the disability prevalence is important to reduce the burden. Although longitudinal studies can be considered the gold standard to assess the causes of disability, they are costly and often have restricted sample sizes. Thus, the use of cross-sectional data under certain assumptions has become a popular alternative. Among the existing methods based on cross-sectional data, the attribution method, which was originally developed for binary disability outcomes, is an attractive option, as it enables the partition of disability into the additive contribution of chronic diseases, taking into account multimorbidity and that disability can be present even in the absence of disease. In this paper, we propose an extension of the attribution method to multinomial responses, since disability is often measured as a multicategory variable in most surveys, representing different severity levels. The R function constrOptim is used to maximize the multinomial log-likelihood function subject to a linear inequality constraint. Our simulation study indicates overall good performance of the model, without convergence problems. However, the model must be used with care for populations with low marginal disability probabilities and a high sum of conditional probabilities, especially with small sample sizes. For illustration, we apply the model to the data of the Belgian Health Interview Surveys.
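
    The constrained maximization described above translates directly to other optimizers. Below is a toy Python analogue (scipy's trust-constr in place of R's constrOptim); the two-disease data, coefficients, and constraint set are all invented and much simpler than the paper's model:

        import numpy as np
        from scipy.optimize import LinearConstraint, minimize

        rng = np.random.default_rng(3)

        # Additive contributions of a background term and two chronic diseases to
        # the probabilities of two severity levels (category 0 = not disabled).
        X = np.column_stack([np.ones(800),
                             rng.binomial(1, 0.4, 800),
                             rng.binomial(1, 0.3, 800)])
        beta_true = np.array([[0.05, 0.02],   # background  -> (mild, severe)
                              [0.10, 0.05],   # disease 1
                              [0.08, 0.12]])  # disease 2
        P = X @ beta_true
        y = np.array([rng.choice(3, p=[1.0 - p.sum(), *p]) for p in P])

        def negloglik(b):
            # Multinomial log-likelihood with additive category probabilities.
            p = X @ b.reshape(3, 2)
            probs = np.clip(np.column_stack([1.0 - p.sum(axis=1), p]), 1e-9, 1.0)
            return -np.log(probs[np.arange(y.size), y]).sum()

        # Linear inequality constraints: non-negative contributions whose total
        # never pushes the category probabilities above one.
        cons = [LinearConstraint(np.eye(6), 0.0, np.inf),
                LinearConstraint(np.ones((1, 6)), -np.inf, 1.0)]
        fit = minimize(negloglik, x0=np.full(6, 0.05), constraints=cons,
                       method="trust-constr")
        print(np.round(fit.x.reshape(3, 2), 3))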

  5. A Comparative Kirkwood-Buff Study of Aqueous Methanol Solutions Modeled by the CHARMM Additive and Drude Polarizable Force Fields

    PubMed Central

    Lin, Bin; He, Xibing; MacKerell, Alexander D.

    2013-01-01

    A comparative study on aqueous methanol solutions modeled by the CHARMM additive and Drude polarizable force fields was carried out by employing Kirkwood-Buff analysis. It was shown that both models reproduced the experimental Kirkwood-Buff integrals and excess coordination numbers adequately over the entire concentration range. The Drude model showed significant improvement over the additive model in solution densities, partial molar volumes, excess molar volumes, concentration-dependent diffusion constants, and dielectric constants. However, the additive model performed somewhat better than the Drude model in reproducing the activity derivative, excess molar Gibbs energy and excess molar enthalpy of mixing. This is due to the additive model achieving a better balance among solute-solute, solute-solvent, and solvent-solvent interactions, indicating the potential for improvements in the Drude polarizable alcohol model. PMID:23947568
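
    For reference, a Kirkwood-Buff integral is simply a truncated quadrature over the radial distribution function; the g(r) below is a made-up placeholder for one extracted from a simulation trajectory:

        import numpy as np

        # G_ij = 4 pi * integral_0^R (g_ij(r) - 1) r^2 dr, with R chosen where
        # g(r) has decayed to 1 (in practice g comes from trajectory histograms).
        r = np.linspace(0.01, 2.0, 400)  # nm
        g = (1.0 + 0.5 * np.exp(-(r - 0.35) ** 2 / 0.005)
             - 0.9 * np.exp(-(r / 0.25) ** 8))
        G = 4.0 * np.pi * np.trapz((g - 1.0) * r ** 2, r)
        print(f"Kirkwood-Buff integral G = {G:.4f} nm^3")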

  6. Properties of the Bivariate Delayed Poisson Process

    DTIC Science & Technology

    1974-07-01

    Initial Conditions. The purpose and methodology of stationary initial conditions for univariate point processes have been described in Lawrance ... Lawrance, A. J. (1972). Some models for stationary series of univariate events. In Stochastic Point Processes: Statistical Analysis, Theory and

  7. Area-to-Area Poisson Kriging and Spatial Bayesian Analysis in Mapping of Gastric Cancer Incidence in Iran

    PubMed

    Asmarian, Naeimehossadat; Jafari-Koshki, Tohid; Soleimani, Ali; Taghi Ayatollahi, Seyyed Mohammad

    2016-10-01

    Background: In many countries gastric cancer has the highest incidence among the gastrointestinal cancers and is the second most common cancer in Iran. The aim of this study was to identify and map high-risk gastric cancer regions at the county level in Iran. Methods: In this study we analyzed gastric cancer data for Iran in the years 2003-2010. Area-to-area Poisson kriging and Besag, York and Mollié (BYM) spatial models were applied to smooth the standardized incidence ratios of gastric cancer for the 373 counties surveyed in this study. The two methods were compared in terms of accuracy and precision in identifying high-risk regions. Results: The highest smoothed standardized incidence ratio (SIR) according to area-to-area Poisson kriging was in Meshkinshahr county in Ardabil province in north-western Iran (2.4, SD = 0.05), while the highest smoothed SIR according to the BYM model was in Ardabil, the capital of that province (2.9, SD = 0.09). Conclusion: Both mapping methods, area-to-area Poisson kriging and BYM, showed the gastric cancer incidence rate to be highest in north and north-west Iran. However, area-to-area Poisson kriging was more precise than the BYM model and required less smoothing. According to the results obtained, preventive measures and treatment programs should be focused on particular counties of Iran.
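
    Neither kriging nor BYM is needed to see why raw SIRs get smoothed: counties with small expected counts are noisy. A global Poisson-gamma empirical-Bayes shrinkage, a much simpler relative of the two models compared above (the county counts below are invented), illustrates the idea:

        import numpy as np

        # O_i ~ Poisson(E_i * theta_i) with theta_i ~ Gamma(alpha, beta): the
        # posterior mean (O_i + alpha) / (E_i + beta) shrinks each raw SIR
        # toward the global mean.
        O = np.array([12, 3, 45, 7, 19, 2, 30])                # observed cases
        E = np.array([10.0, 4.0, 30.0, 9.0, 15.0, 5.0, 22.0])  # expected cases

        raw = O / E
        m = O.sum() / E.sum()                         # global mean SIR
        v = np.average((raw - m) ** 2, weights=E)     # method-of-moments variance
        alpha, beta = m ** 2 / max(v, 1e-12), m / max(v, 1e-12)
        smoothed = (O + alpha) / (E + beta)
        print("raw SIR:     ", np.round(raw, 2))
        print("smoothed SIR:", np.round(smoothed, 2))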

  8. On covariant Poisson brackets in classical field theory

    NASA Astrophysics Data System (ADS)

    Forger, Michael; Salles, Mário O.

    2015-10-01

    How to give a natural geometric definition of a covariant Poisson bracket in classical field theory has for a long time been an open problem—as testified by the extensive literature on "multisymplectic Poisson brackets," together with the fact that all these proposals suffer from serious defects. On the other hand, the functional approach does provide a good candidate which has come to be known as the Peierls-De Witt bracket and whose construction in a geometrical setting is now well understood. Here, we show how the basic "multisymplectic Poisson bracket" already proposed in the 1970s can be derived from the Peierls-De Witt bracket, applied to a special class of functionals. This relation allows one to trace back most (if not all) of the problems encountered in the past to ambiguities (the relation between differential forms on multiphase space and the functionals they define is not one-to-one) and also to the fact that this class of functionals does not form a Poisson subalgebra.

  9. Vectorized multigrid Poisson solver for the CDC CYBER 205

    NASA Technical Reports Server (NTRS)

    Barkai, D.; Brandt, M. A.

    1984-01-01

    The full multigrid (FMG) method is applied to the two-dimensional Poisson equation with Dirichlet boundary conditions. This has been chosen as a relatively simple test case for examining the efficiency of fully vectorizing the multigrid method. Data structure and programming considerations and techniques are discussed, accompanied by performance details.
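
    The structure of the method is independent of the target machine. A minimal recursive V-cycle for the 1D analogue -u'' = f with homogeneous Dirichlet conditions (weighted-Jacobi smoothing, full-weighting restriction, linear interpolation; the 2D CYBER 205 implementation details are beyond a sketch):

        import numpy as np

        def jacobi(u, f, h, sweeps, omega=2.0 / 3.0):
            # Weighted-Jacobi smoothing for -u'' = f on a uniform grid.
            for _ in range(sweeps):
                u[1:-1] += omega * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
                                    - u[1:-1])
            return u

        def v_cycle(u, f, h):
            n = u.size - 1
            u = jacobi(u, f, h, sweeps=3)                     # pre-smoothing
            if n > 2:
                r = np.zeros_like(u)                          # residual f - Au
                r[1:-1] = f[1:-1] + (u[:-2] - 2.0 * u[1:-1] + u[2:]) / (h * h)
                rc = np.zeros(n // 2 + 1)                     # full weighting
                rc[1:-1] = 0.25 * (r[1:-2:2] + 2.0 * r[2:-1:2] + r[3::2])
                ec = v_cycle(np.zeros(n // 2 + 1), rc, 2.0 * h)
                e = np.zeros_like(u)
                e[::2] = ec                                   # interpolation
                e[1::2] = 0.5 * (ec[:-1] + ec[1:])
                u += e                                        # correction
            return jacobi(u, f, h, sweeps=3)                  # post-smoothing

        n = 128
        x = np.linspace(0.0, 1.0, n + 1)
        f = np.pi ** 2 * np.sin(np.pi * x)                    # exact: sin(pi x)
        u = np.zeros(n + 1)
        for _ in range(8):
            u = v_cycle(u, f, 1.0 / n)
        print("max error:", np.abs(u - np.sin(np.pi * x)).max())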

  10. Indentability of conventional and negative Poisson's ratio foams

    NASA Technical Reports Server (NTRS)

    Lakes, R. S.; Elms, K.

    1992-01-01

    The indentation resistance of foams, both of conventional structure and of reentrant structure giving rise to negative Poisson's ratio, is studied using holographic interferometry. In holographic indentation tests, reentrant foams had higher yield strength and lower stiffness than conventional foams of the same original relative density. Calculated energy absorption for dynamic impact is considerably higher for reentrant foam than conventional foam.

  11. 3D soft metamaterials with negative Poisson's ratio.

    PubMed

    Babaee, Sahab; Shim, Jongmin; Weaver, James C; Chen, Elizabeth R; Patel, Nikita; Bertoldi, Katia

    2013-09-25

    Buckling is exploited to design a new class of three-dimensional metamaterials with negative Poisson's ratio. A library of auxetic building blocks is identified and procedures are defined to guide their selection and assembly. The auxetic properties of these materials are demonstrated both through experiments and finite element simulations and exhibit excellent qualitative and quantitative agreement.

  12. Tailoring graphene to achieve negative Poisson's ratio properties.

    PubMed

    Grima, Joseph N; Winczewski, Szymon; Mizzi, Luke; Grech, Michael C; Cauchi, Reuben; Gatt, Ruben; Attard, Daphne; Wojciechowski, Krzysztof W; Rybicki, Jarosław

    2015-02-25

    Graphene can be made auxetic through the introduction of vacancy defects. This results in the thinnest negative Poisson's ratio material at ambient conditions known so far, an effect achieved via a nanoscale de-wrinkling mechanism that mimics the behavior at the macroscale exhibited by a crumpled sheet of paper when stretched.

  13. Subsonic Flow for the Multidimensional Euler-Poisson System

    NASA Astrophysics Data System (ADS)

    Bae, Myoungjean; Duan, Ben; Xie, Chunjing

    2016-04-01

    We establish the existence and stability of subsonic potential flow for the steady Euler-Poisson system in a multidimensional nozzle of a finite length when prescribing the electric potential difference on a non-insulated boundary from a fixed point at the exit, and prescribing the pressure at the exit of the nozzle. The Euler-Poisson system for subsonic potential flow can be reduced to a nonlinear elliptic system of second order. In this paper, we develop a technique to achieve a priori C^{1,α} estimates of solutions to a quasi-linear second order elliptic system with mixed boundary conditions in a multidimensional domain enclosed by a Lipschitz continuous boundary. In particular, we discovered a special structure of the Euler-Poisson system which enables us to obtain C^{1,α} estimates of the velocity potential and the electric potential functions, and this leads us to establish structural stability of subsonic flows for the Euler-Poisson system under perturbations of various data.
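
    For orientation, one common form of the steady Euler-Poisson system for potential flow reads (a sketch; normalizations and signs vary, and the paper's precise setup includes the boundary conditions described above):

        \nabla \cdot (\rho \nabla \varphi) = 0, \qquad
        \tfrac{1}{2} |\nabla \varphi|^{2} + h(\rho) = \Phi + \mathrm{const}, \qquad
        \Delta \Phi = \rho - b,

    where u = ∇φ is the velocity, h(ρ) the enthalpy, Φ the electric potential, and b a background charge density; the flow is subsonic when |∇φ| stays below the sound speed.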

  14. Enhanced coding in a cochlear-implant model using additive noise: Aperiodic stochastic resonance with tuning

    NASA Astrophysics Data System (ADS)

    Morse, Robert P.; Roper, Peter

    2000-05-01

    Analog electrical stimulation of the cochlear nerve (the nerve of hearing) by a cochlear implant is an effective method of providing functional hearing to profoundly deaf people. Recent physiological and computational experiments have shown that analog cochlear implants are unlikely to convey certain speech cues by the temporal pattern of evoked nerve discharges. However, these experiments have also shown that the optimal addition of noise to cochlear implant signals can enhance the temporal representation of speech cues [R. P. Morse and E. F. Evans, Nature Medicine 2, 928 (1996)]. We present a simple model to explain this enhancement of temporal representation. Our model derives from a rate equation for the mean threshold-crossing rate of an infinite set of parallel discriminators (level-crossing detectors); a system that well describes the time coding of information by a set of nerve fibers. Our results show that the optimal transfer of information occurs when the threshold level of each discriminator is equal to the root-mean-square noise level. The optimal transfer of information by a cochlear implant is therefore expected to occur when the internal root-mean-square noise level of each stimulated fiber is approximately equal to the nerve threshold. When interpreted within the framework of aperiodic stochastic resonance, our results indicate therefore that for an infinite array of discriminators, a tuning of the noise is still necessary for optimal performance. This is in contrast to previous results [Collins, Chow, and Imhoff, Nature 376, 236 (1995); Chialvo, Longtin, and Müller-Gerking, Phys. Rev. E 55, 1798 (1997)] on arrays of FitzHugh-Nagumo neurons.
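
    A quick simulation of the discriminator array conveys the tuning effect; the signal, threshold, and noise levels below are invented, and threshold exceedance per sample stands in for true level crossings:

        import numpy as np

        rng = np.random.default_rng(5)

        # N parallel discriminators: each fires when a subthreshold signal plus
        # its own Gaussian noise exceeds the threshold. The input-output
        # correlation of the population count peaks at intermediate noise.
        N, theta = 50, 1.0
        t = np.linspace(0.0, 1.0, 2000)
        s = 0.4 * np.sin(2.0 * np.pi * 5.0 * t)   # subthreshold: |s| < theta

        for sigma in (0.25, 0.5, 0.75, 1.0, 1.5, 2.5):
            noise = rng.normal(0.0, sigma, size=(N, t.size))
            counts = (noise + s > theta).sum(axis=0)
            rho = np.corrcoef(counts, s)[0, 1]
            print(f"sigma = {sigma:4.2f}   corr(count, signal) = {rho:.3f}")
        # The correlation is largest when sigma is comparable to theta, echoing
        # the abstract's conclusion that the noise must be tuned to threshold.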

  15. Using additive modelling to quantify the effect of chemicals on phytoplankton diversity and biomass.

    PubMed

    Viaene, K P J; De Laender, F; Van den Brink, P J; Janssen, C R

    2013-04-01

    Environmental authorities require the protection of biodiversity and other ecosystem properties such as biomass production. However, the endpoints listed in available ecotoxicological datasets generally do not contain these two ecosystem descriptors. Inferring the effects of chemicals on such descriptors from micro- or mesocosm experiments is often hampered by inherent differences in the initial biodiversity levels between experimental units or by delayed community responses. Here we introduce additive modelling to establish the effects of a chronic application of the herbicide linuron on 10 biodiversity indices and phytoplankton biomass in microcosms. We found that communities with a low (high) initial biodiversity subsequently became more (less) diverse, indicating an equilibrium biodiversity status in the communities considered here. Linuron adversely affected richness and evenness while dominance increased but no biodiversity indices were different from the control treatment at linuron concentrations below 2.4 μg/L. Richness-related indices changed at lower linuron concentrations (effects noticeable from 2.4 μg/L) than other biodiversity indices (effects noticeable from 14.4 μg/L) and, in contrast to the other indices, showed no signs of recovery following chronic exposure. Phytoplankton biomass was unaffected by linuron due to functional redundancy within the phytoplankton community. Comparing thresholds for biodiversity with conventional toxicity test results showed that standard ecological risk assessments also protect biodiversity in the case of linuron.

  16. Inhibition of Ostwald ripening in model beverage emulsions by addition of poorly water soluble triglyceride oils.

    PubMed

    McClements, David Julian; Henson, Lulu; Popplewell, L Michael; Decker, Eric Andrew; Choi, Seung Jun

    2012-01-01

    Beverage emulsions containing flavor oils that have a relatively high water-solubility are unstable to droplet growth due to Ostwald ripening. The aim of this study was to improve the stability of model beverage emulsions to this kind of droplet growth by incorporating poorly water-soluble triglyceride oils. High pressure homogenization was used to prepare a series of 5 wt% oil-in-water emulsions stabilized by modified starch that had different lipid phase compositions (orange oil : corn oil). Emulsions prepared using only orange oil as the lipid phase were highly unstable to droplet growth during storage, which was attributed to Ostwald ripening resulting from the relatively high water-solubility of orange oil. Droplet growth could be effectively inhibited by incorporating ≥ 10% corn oil into the lipid phase prior to homogenization. In addition, creaming was also retarded because the lipid phase density was closer to that of the aqueous phase density. These results illustrate a simple method of improving the physical stability of orange oil emulsions for utilization in the food, beverage, and fragrance industries.

  17. Influence of the heterogeneous reaction HCl + HOCl on an ozone hole model with hydrocarbon additions

    NASA Astrophysics Data System (ADS)

    Elliott, Scott; Cicerone, Ralph J.; Turco, Richard P.; Drdla, Katja; Tabazadeh, Azadeh

    1994-02-01

    Injection of ethane or propane has been suggested as a means for reducing ozone loss within the Antarctic vortex because alkanes can convert active chlorine radicals into hydrochloric acid. In kinetic models of vortex chemistry including as heterogeneous processes only the hydrolysis and HCl reactions of ClONO2 and N2O5, parts per billion by volume levels of the light alkanes counteract ozone depletion by sequestering chlorine atoms. Introduction of the surface reaction of HCl with HOCl causes ethane to deepen baseline ozone holes and generally works to impede any mitigation by hydrocarbons. The increased depletion occurs because HCl + HOCl can be driven by HOx radicals released during organic oxidation. Following initial hydrogen abstraction by chlorine, alkane breakdown leads to a net hydrochloric acid activation as the remaining hydrogen atoms enter the photochemical system. Lowering the rate constant for reactions of organic peroxy radicals with ClO to 10-13 cm3 molecule-1 s-1 does not alter results, and the major conclusions are insensitive to the timing of the ethane additions. Ignoring the organic peroxy radical plus ClO reactions entirely restores remediation capabilities by allowing HOx removal independent of HCl. Remediation also returns if early evaporation of polar stratospheric clouds leaves hydrogen atoms trapped in aldehyde intermediates, but real ozone losses are small in such cases.

  18. Statistical inference for the additive hazards model under outcome-dependent sampling.

    PubMed

    Yu, Jichang; Liu, Yanyan; Sandler, Dale P; Zhou, Haibo

    2015-09-01

    Cost-effective study design and proper inference procedures for data from such designs are always of particular interest to study investigators. In this article, we propose a biased sampling scheme, an outcome-dependent sampling (ODS) design for survival data with right censoring under the additive hazards model. We develop a weighted pseudo-score estimator for the regression parameters for the proposed design and derive the asymptotic properties of the proposed estimator. We also provide some suggestions for using the proposed method by evaluating the relative efficiency of the proposed method against the simple random sampling design and derive the optimal allocation of the subsamples for the proposed design. Simulation studies show that the proposed ODS design is more powerful than other existing designs and the proposed estimator is more efficient than other estimators. We apply our method to analyze a cancer study conducted at NIEHS, the Cancer Incidence and Mortality of Uranium Miners Study, to study the risk of radon exposure to cancer.
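
    For readers unfamiliar with the model class: under the additive hazards model the conditional hazard is λ(t | Z) = λ0(t) + β'Z, and the classical full-cohort pseudo-score of Lin and Ying (1994) solves

        U(\beta) = \sum_{i=1}^{n} \int_{0}^{\tau}
          \{ Z_i - \bar{Z}(t) \}\,
          \{ dN_i(t) - Y_i(t)\, \beta^{\top} Z_i\, dt \} = 0,

    which has a closed-form root. As we read the abstract, the paper's estimator weights each subject's contribution by an inverse sampling probability appropriate to the ODS design; the display above is the standard unweighted form, not the paper's exact weighted equation.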

  19. Regression analysis of mixed recurrent-event and panel-count data with additive rate models.

    PubMed

    Zhu, Liang; Zhao, Hui; Sun, Jianguo; Leisenring, Wendy; Robison, Leslie L

    2015-03-01

    Event-history studies of recurrent events are often conducted in fields such as demography, epidemiology, medicine, and social sciences (Cook and Lawless, 2007, The Statistical Analysis of Recurrent Events. New York: Springer-Verlag; Zhao et al., 2011, Test 20, 1-42). For such analysis, two types of data have been extensively investigated: recurrent-event data and panel-count data. However, in practice, one may face a third type of data, mixed recurrent-event and panel-count data or mixed event-history data. Such data occur if some study subjects are monitored or observed continuously and thus provide recurrent-event data, while the others are observed only at discrete times and hence give only panel-count data. A more general situation is that each subject is observed continuously over certain time periods but only at discrete times over other time periods. There exists little literature on the analysis of such mixed data except that published by Zhu et al. (2013, Statistics in Medicine 32, 1954-1963). In this article, we consider the regression analysis of mixed data using the additive rate model and develop some estimating equation-based approaches to estimate the regression parameters of interest. Both finite sample and asymptotic properties of the resulting estimators are established, and the numerical studies suggest that the proposed methodology works well for practical situations. The approach is applied to a Childhood Cancer Survivor Study that motivated this study.

  20. Enhancement of colour stability of anthocyanins in model beverages by gum arabic addition.

    PubMed

    Chung, Cheryl; Rojanasasithara, Thananunt; Mutilangi, William; McClements, David Julian

    2016-06-15

    This study investigated the potential of gum arabic to improve the stability of anthocyanins that are used in commercial beverages as natural colourants. The degradation of purple carrot anthocyanin in model beverage systems (pH 3.0) containing L-ascorbic acid proceeded with a first-order reaction rate during storage (40 °C for 5 days in light). The addition of gum arabic (0.05-5.0%) significantly enhanced the colour stability of anthocyanin, with the most stable systems observed at intermediate levels (1.5%). A further increase in concentration (>1.5%) reduced its efficacy due to a change in the conformation of the gum arabic molecules that hindered their exposure to the anthocyanins. Fluorescence quenching measurements showed that the anthocyanin could have interacted with the glycoprotein fractions of the gum arabic through hydrogen bonding, resulting in enhanced stability. Overall, this study provides valuable information about enhancing the stability of anthocyanins in beverage systems using natural ingredients.
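
    First-order kinetics of this kind can be quantified with a log-linear fit; the readings below are invented for illustration:

        import numpy as np

        # C(t) = C0 * exp(-k t)  =>  ln C is linear in t with slope -k.
        t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])        # days at 40 C
        C = np.array([1.00, 0.82, 0.68, 0.55, 0.46, 0.38])  # relative colour

        slope, _ = np.polyfit(t, np.log(C), 1)
        k = -slope
        print(f"rate constant k = {k:.3f} per day, "
              f"half-life = {np.log(2.0) / k:.2f} days")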

  2. Detecting Departure From Additivity Along a Fixed-Ratio Mixture Ray With a Piecewise Model for Dose and Interaction Thresholds

    PubMed Central

    Gennings, Chris; Wagner, Elizabeth D.; Simmons, Jane Ellen; Plewa, Michael J.

    2010-01-01

    For mixtures of many chemicals, a ray design based on a relevant, fixed mixing ratio is useful for detecting departure from additivity. Methods for detecting departure involve modeling the response as a function of total dose along the ray. For mixtures with many components, the interaction may be dose dependent. Therefore, we have developed the use of a three-segment model containing both a dose threshold and an interaction threshold. Prior to the dose threshold, the response is that of background; between the dose threshold and the interaction threshold, an additive relationship exists; the model allows for departure from additivity beyond the interaction threshold. With such a model, we can conduct a hypothesis test of additivity, as well as a test for a region of additivity. The methods are illustrated with cytotoxicity data that arise when Chinese hamster ovary cells are exposed to a mixture of nine haloacetic acids. PMID:21359103
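
    One way to write such a three-segment model for the mean response μ along the ray, with dose threshold δ1 and interaction threshold δ2 (a sketch of the general shape, with linear segments standing in for whatever additivity-model curve is used; not necessarily the paper's exact parametrization):

        \mu(d) =
        \begin{cases}
          \beta_0, & d \le \delta_1, \\
          \beta_0 + \beta_1 (d - \delta_1), & \delta_1 < d \le \delta_2, \\
          \beta_0 + \beta_1 (d - \delta_1) + \beta_2 (d - \delta_2), & d > \delta_2,
        \end{cases}

    so that β2 = 0 recovers additivity beyond δ2, giving both the test of additivity and the test for a region of additivity mentioned above.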

  3. Exact momentum conservation laws for the gyrokinetic Vlasov-Poisson equations

    SciTech Connect

    Brizard, Alain J.; Tronko, Natalia

    2011-08-15

    The exact momentum conservation laws for the nonlinear gyrokinetic Vlasov-Poisson equations are derived by applying the Noether method on the gyrokinetic variational principle [A. J. Brizard, Phys. Plasmas 7, 4816 (2000)]. From the gyrokinetic Noether canonical-momentum equation derived by the Noether method, the gyrokinetic parallel momentum equation and other gyrokinetic Vlasov-moment equations are obtained. In addition, an exact gyrokinetic toroidal angular-momentum conservation law is derived in axisymmetric tokamak geometry, where the transport of parallel-toroidal momentum is related to the radial gyrocenter polarization, which includes contributions from the guiding-center and gyrocenter transformations.

  4. Determination of the Poisson's ratio of the cell: recovery properties of chondrocytes after release from complete micropipette aspiration.

    PubMed

    Trickey, Wendy R; Baaijens, Frank P T; Laursen, Tod A; Alexopoulos, Leonidas G; Guilak, Farshid

    2006-01-01

    Chondrocytes in articular cartilage are regularly subjected to compression and recovery due to dynamic loading of the joint. Previous studies have investigated the elastic and viscoelastic properties of chondrocytes using micropipette aspiration techniques, but in order to calculate cell properties, these studies have generally assumed that cells are incompressible with a Poisson's ratio of 0.5. The goal of this study was to measure the Poisson's ratio and recovery properties of the chondrocyte by combining theoretical modeling with experimental measures of complete cellular aspiration and release from a micropipette. Chondrocytes isolated from non-osteoarthritic and osteoarthritic cartilage were fully aspirated into a micropipette and allowed to reach mechanical equilibrium. Cells were then extruded from the micropipette and cell volume and morphology were measured throughout the experiment. This experimental procedure was simulated with finite element analysis, modeling the chondrocyte as either a compressible two-mode viscoelastic solid, or as a biphasic viscoelastic material. By fitting the experimental data to the theoretically predicted cell response, the Poisson's ratio and the viscoelastic recovery properties of the cell were determined. The Poisson's ratio of chondrocytes was found to be 0.38 for non-osteoarthritic cartilage and 0.36 for osteoarthritic chondrocytes (no significant difference). Osteoarthritic chondrocytes showed an increased recovery time following full aspiration. In contrast to previous assumptions, these findings suggest that chondrocytes are compressible, consistent with previous studies showing cell volume changes with compression of the extracellular matrix.

  5. Grain-Size Based Additivity Models for Scaling Multi-rate Uranyl Surface Complexation in Subsurface Sediments

    SciTech Connect

    Zhang, Xiaoying; Liu, Chongxuan; Hu, Bill X.; Hu, Qinhong

    2015-09-28

    This study statistically analyzed a grain-size based additivity model that has been proposed to scale reaction rates and parameters from laboratory to field. The additivity model assumed that reaction properties in a sediment including surface area, reactive site concentration, reaction rate, and extent can be predicted from field-scale grain size distribution by linearly adding reaction properties for individual grain size fractions. This study focused on the statistical analysis of the additivity model with respect to reaction rate constants using multi-rate uranyl (U(VI)) surface complexation reactions in a contaminated sediment as an example. Experimental data of rate-limited U(VI) desorption in a stirred flow-cell reactor were used to estimate the statistical properties of multi-rate parameters for individual grain size fractions. The statistical properties of the rate constants for the individual grain size fractions were then used to analyze the statistical properties of the additivity model to predict rate-limited U(VI) desorption in the composite sediment, and to evaluate the relative importance of individual grain size fractions to the overall U(VI) desorption. The result indicated that the additivity model provided a good prediction of the U(VI) desorption in the composite sediment. However, the rate constants were not directly scalable using the additivity model, and U(VI) desorption in individual grain size fractions has to be simulated in order to apply the additivity model. An approximate additivity model for directly scaling rate constants was subsequently proposed and evaluated. The results showed that the approximate model provided a good prediction of the experimental results within statistical uncertainty. This study also found that a gravel size fraction (2-8 mm), which is often ignored in modeling U(VI) sorption and desorption, is statistically significant to the U(VI) desorption in the sediment.
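
    The additivity idea, and the caveat about rate constants, can be expressed in a few lines; the fractions, site concentrations, and single first-order rates below are invented stand-ins for the paper's multi-rate model:

        import numpy as np

        # Composite-sediment properties as mass-fraction-weighted sums over
        # grain-size fractions; rate constants themselves are NOT additive, so
        # desorption is simulated per fraction and the released amounts summed.
        mass_frac = np.array([0.35, 0.25, 0.20, 0.12, 0.08])  # finest ... gravel
        site_conc = np.array([4.1, 3.2, 2.5, 1.8, 0.9])       # sites, umol/g
        rate_k    = np.array([0.8, 0.5, 0.3, 0.2, 0.1])       # rates, 1/h

        print("composite site concentration:", mass_frac @ site_conc)
        t = np.linspace(0.0, 24.0, 49)                        # hours
        released = (mass_frac[:, None] * site_conc[:, None]
                    * (1.0 - np.exp(-rate_k[:, None] * t))).sum(axis=0)
        print("released U(VI) after 24 h:", round(float(released[-1]), 3))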

  6. Second-order Poisson Nernst-Planck solver for ion channel transport

    PubMed Central

    Zheng, Qiong; Chen, Duan; Wei, Guo-Wei

    2010-01-01

    The Poisson Nernst-Planck (PNP) theory is a simplified continuum model for a wide variety of chemical, physical and biological applications. Its ability of providing quantitative explanation and increasingly qualitative predictions of experimental measurements has earned itself much recognition in the research community. Numerous computational algorithms have been constructed for the solution of the PNP equations. However, in the realistic ion-channel context, no second order convergent PNP algorithm has ever been reported in the literature, due to many numerical obstacles, including discontinuous coefficients, singular charges, geometric singularities, and nonlinear couplings. The present work introduces a number of numerical algorithms to overcome the abovementioned numerical challenges and constructs the first second-order convergent PNP solver in the ion-channel context. First, a Dirichlet to Neumann mapping (DNM) algorithm is designed to alleviate the charge singularity due to the protein structure. Additionally, the matched interface and boundary (MIB) method is reformulated for solving the PNP equations. The MIB method systematically enforces the interface jump conditions and achieves the second order accuracy in the presence of complex geometry and geometric singularities of molecular surfaces. Moreover, two iterative schemes are utilized to deal with the coupled nonlinear equations. Furthermore, extensive and rigorous numerical validations are carried out over a number of geometries, including a sphere, two proteins and an ion channel, to examine the numerical accuracy and convergence order of the present numerical algorithms. Finally, application is considered to a real transmembrane protein, the Gramicidin A channel protein. The performance of the proposed numerical techniques is tested against a number of factors, including mesh sizes, diffusion coefficient profiles, iterative schemes, ion concentrations, and applied voltages. Numerical predictions are
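
    For reference, the steady-state PNP system the solver targets can be written as (a standard form; the paper's treatment adds the interface and singular-charge machinery described above):

        \nabla \cdot \Big[ D_i \Big( \nabla c_i
          + \frac{z_i e}{k_B T}\, c_i \nabla \phi \Big) \Big] = 0,
        \qquad
        -\nabla \cdot ( \epsilon \nabla \phi )
          = \sum_i z_i e\, c_i + \rho_{\mathrm{fixed}},

    with c_i the concentration, z_i the valence, and D_i the diffusion coefficient of species i, and ρ_fixed the protein's fixed charge density.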

  7. Two-sample discrimination of Poisson means

    NASA Technical Reports Server (NTRS)

    Lampton, M.

    1994-01-01

    This paper presents a statistical test for detecting significant differences between two random count accumulations. The null hypothesis is that the two samples share a common random arrival process with a mean count proportional to each sample's exposure. The model represents the partition of N total events into two counts, A and B, as a sequence of N independent Bernoulli trials whose partition fraction, f, is determined by the ratio of the exposures of A and B. The detection of a significant difference is claimed when the background (null) hypothesis is rejected, which occurs when the observed sample falls in a critical region of (A, B) space. The critical region depends on f and the desired significance level, alpha. The model correctly takes into account the fluctuations in both the signals and the background data, including the important case of small numbers of counts in the signal, the background, or both. The significance can be exactly determined from the cumulative binomial distribution, which in turn can be inverted to determine the critical A(B) or B(A) contour. This paper gives efficient implementations of these tests, based on lookup tables. Applications include the detection of clustering of astronomical objects, the detection of faint emission or absorption lines in photon-limited spectroscopy, the detection of faint emitters or absorbers in photon-limited imaging, and dosimetry.
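
    The core of the test fits in a few lines of Python; the counts and exposures below are invented, and scipy's exact binomial test replaces the paper's lookup tables:

        from scipy.stats import binomtest

        # Under the null, the partition of N = A + B events is Binomial(N, f)
        # with f = t_A / (t_A + t_B) set by the two exposures.
        A, B = 34, 18
        t_A, t_B = 1000.0, 800.0
        f = t_A / (t_A + t_B)

        result = binomtest(A, A + B, f, alternative="two-sided")
        print(f"p-value = {result.pvalue:.4f}")  # small p => the rates differ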

  8. Poisson-type inequalities for growth properties of positive superharmonic functions.

    PubMed

    Luan, Kuan; Vieira, John

    2017-01-01

    In this paper, we present new Poisson-type inequalities for Poisson integrals with continuous data on the boundary. The obtained inequalities are used to obtain growth properties at infinity of positive superharmonic functions in a smooth cone.

  9. 78 FR 32224 - Availability of Version 3.1.2 of the Connect America Fund Phase II Cost Model; Additional...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-29

    ...; Additional Discussion Topics in Connect America Cost Model Virtual Workshop AGENCY: Federal Communications... issues in the ongoing virtual workshop. DATES: Comments are due on or before June 18, 2013. If you... comments. Virtual Workshop: In addition to the usual methods for filing electronic comments, the...

  10. A Legendre-Fourier spectral method with exact conservation laws for the Vlasov-Poisson system

    NASA Astrophysics Data System (ADS)

    Manzini, G.; Delzanno, G. L.; Vencels, J.; Markidis, S.

    2016-07-01

    We present the design and implementation of an L2-stable spectral method for the discretization of the Vlasov-Poisson model of a collisionless plasma in one space and velocity dimension. The velocity and space dependence of the Vlasov equation are resolved through a truncated spectral expansion based on Legendre and Fourier basis functions, respectively. The Poisson equation, which is coupled to the Vlasov equation, is also resolved through a Fourier expansion. The resulting system of ordinary differential equations is discretized by the implicit second-order accurate Crank-Nicolson time discretization. The non-linear dependence between the Vlasov and Poisson equations is iteratively solved at any time cycle by a Jacobian-Free Newton-Krylov method. In this work we analyze the structure of the main conservation laws of the resulting Legendre-Fourier model, e.g., mass, momentum, and energy, and prove that they are exactly satisfied in the semi-discrete and discrete setting. The L2-stability of the method is ensured by discretizing the boundary conditions of the distribution function at the boundaries of the velocity domain by a suitable penalty term. The impact of the penalty term on the conservation properties is investigated theoretically and numerically. An implementation of the penalty term that does not affect the conservation of mass, momentum and energy, is also proposed and studied. A collisional term is introduced in the discrete model to control the filamentation effect, but does not affect the conservation properties of the system. Numerical results on a set of standard test problems illustrate the performance of the method.
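
    Schematically, the discretization expands the distribution function as (our notation, suppressing the scaling details of the basis):

        f(x, v, t) \approx \sum_{n=0}^{N_v - 1} \sum_{m=-N_x/2}^{N_x/2}
          C_{n,m}(t)\, \psi_n(v)\, e^{i k m x},

    with ψ_n Legendre polynomials suitably scaled to the truncated velocity interval and k the fundamental spatial wavenumber; the conservation laws are tied to low-order velocity moments and hence to the first few coefficients C_{n,m}.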

  13. Mass-Radius Spirals for Steady State Families of the Vlasov-Poisson System

    NASA Astrophysics Data System (ADS)

    Ramming, Tobias; Rein, Gerhard

    2017-02-01

    We consider spherically symmetric steady states of the Vlasov-Poisson system, which describe equilibrium configurations of galaxies or globular clusters. If the microscopic equation of state, i.e., the dependence of the steady state on the particle energy (and angular momentum), is fixed, a one-parameter family of such states is obtained. In the polytropic case the mass of the state along such a one-parameter family is a monotone function of its radius. We prove that for the King, Woolley-Dickens, and related models this mass-radius relation takes the form of a spiral.

  14. Effect of non-Poisson samples on turbulence spectra from laser velocimetry

    NASA Technical Reports Server (NTRS)

    Sree, Dave; Kjelgaard, Scott O.; Sellers, William L., III

    1994-01-01

    Spectral analysis of laser velocimetry (LV) data plays an important role in characterizing a turbulent flow and in estimating the associated turbulence scales, which can be helpful in validating theoretical and numerical turbulence models. The determination of turbulence scales is critically dependent on the accuracy of the spectral estimates. Spectral estimations from 'individual realization' laser velocimetry data are typically based on the assumption of a Poisson sampling process. What this Note has demonstrated is that the sampling distribution must be considered before spectral estimates are used to infer turbulence scales.

  15. Superposition of many independent spike trains is generally not a Poisson process

    NASA Astrophysics Data System (ADS)

    Lindner, Benjamin

    2006-02-01

    We study the sum of many independent spike trains and ask whether the resulting spike train has Poisson statistics or not. It is shown that even for non-Poissonian statistics of the single spike train, the resulting sum of spikes has an exponential interspike interval (ISI) distribution and vanishing ISI correlations at any finite lag, but exhibits exactly the same power spectrum as the original spike train does. This paradox is resolved by considering what happens to ISI correlations in the limit of an infinite number of superposed trains. Implications of our findings for stochastic models in the neurosciences are briefly discussed.
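
    The effect is easy to reproduce numerically; gamma-renewal trains (shape and rate invented) are non-Poissonian, yet their pooled train has near-unit ISI coefficient of variation while keeping the single-train spectral shape:

        import numpy as np

        rng = np.random.default_rng(4)

        # Superpose N independent gamma-renewal spike trains (non-Poisson ISIs).
        N, shape, rate, T = 50, 4.0, 1.0, 5000.0
        trains = [np.cumsum(rng.gamma(shape, 1.0 / (shape * rate),
                                      int(2 * T * rate)))
                  for _ in range(N)]
        pooled = np.sort(np.concatenate([s[s < T] for s in trains]))

        isi = np.diff(pooled)
        print("pooled ISI CV (1 for a Poisson process):",
              round(isi.std() / isi.mean(), 3))

        # Periodogram of the binned pooled train: apart from a factor N, it
        # keeps the low-frequency suppression of a single gamma train.
        dt = 0.05
        counts, _ = np.histogram(pooled, bins=np.arange(0.0, T, dt))
        spec = np.abs(np.fft.rfft(counts - counts.mean())) ** 2 / counts.size
        print("spectrum, low vs high frequency:",
              round(spec[1:50].mean(), 3), round(spec[-500:].mean(), 3))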

  16. Filling of a Poisson trap by a population of random intermittent searchers.

    PubMed

    Bressloff, Paul C; Newby, Jay M

    2012-03-01

    We extend the continuum theory of random intermittent search processes to the case of N independent searchers looking to deliver cargo to a single hidden target located somewhere on a semi-infinite track. Each searcher randomly switches between a stationary state and either a leftward or rightward constant velocity state. We assume that all of the particles start at one end of the track and realize sample trajectories independently generated from the same underlying stochastic process. The hidden target is treated as a partially absorbing trap in which a particle can only detect the target and deliver its cargo if it is stationary and within range of the target; the particle is removed from the system after delivering its cargo. As a further generalization of previous models, we assume that up to n successive particles can find the target and deliver its cargo. Assuming that the rate of target detection scales as 1/N, we show that there exists a well-defined mean-field limit N→∞, in which the stochastic model reduces to a deterministic system of linear reaction-hyperbolic equations for the concentrations of particles in each of the internal states. These equations decouple from the stochastic process associated with filling the target with cargo. The latter can be modeled as a Poisson process in which the time-dependent rate of filling λ(t) depends on the concentration of stationary particles within the target domain. Hence, we refer to the target as a Poisson trap. We analyze the efficiency of filling the Poisson trap with n particles in terms of the waiting time density f_n(t). The latter is determined by the integrated Poisson rate μ(t) = ∫_0^t λ(s) ds, which in turn depends on the solution to the reaction-hyperbolic equations. We obtain an approximate solution for the particle concentrations by reducing the system of reaction-hyperbolic equations to a scalar advection-diffusion equation using a quasisteady-state analysis. We compare our analytical
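
    The waiting-time density referred to above has the standard inhomogeneous-Poisson form (the density of the nth arrival time, consistent with the abstract's μ(t)):

        f_n(t) = \lambda(t)\, \frac{\mu(t)^{n-1}}{(n-1)!}\, e^{-\mu(t)},
        \qquad
        \mu(t) = \int_0^t \lambda(s)\, ds.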

  17. Self-regulating genes. Exact steady state solution by using Poisson representation

    NASA Astrophysics Data System (ADS)

    Sugár, István P.; Simon, István

    2014-09-01

    Systems biology studies the structure and behavior of complex gene regulatory networks. One of its aims is to develop a quantitative understanding of the modular components that constitute such networks. The self-regulating gene is a type of autoregulatory genetic module which appears in over 40% of known transcription factors in E. coli. In this work, using the technique of the Poisson representation, we are able to provide exact steady state solutions for this feedback model. By using the methods of synthetic biology (P. E. M. Purnick and R. Weiss, Nature Reviews Molecular Cell Biology, 2009, 10: 410-422) one can build the system itself from modules like this.
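
    For reference, the Poisson representation expands the copy-number distribution over Poissonians (Gardiner's form; notation ours):

        P(n) = \int \mathcal{P}(\alpha)\, \frac{e^{-\alpha} \alpha^{n}}{n!}\, d\alpha,

    so that an exact steady-state \mathcal{P}(\alpha) yields the full steady-state distribution P(n) and all of its moments.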

  18. Correlation between supercooled liquid relaxation and glass Poisson's ratio

    NASA Astrophysics Data System (ADS)

    Sun, Qijing; Hu, Lina; Zhou, Chao; Zheng, Haijiao; Yue, Yuanzheng

    2015-10-01

    We report on a correlation between the supercooled liquid (SL) relaxation and glass Poisson's ratio (v) by comparing the activation energy ratio (r) of the α and the slow β relaxations and the v values for both metallic and nonmetallic glasses. Poisson's ratio v generally increases with an increase in the ratio r and this relation can be described by the empirical function v = 0.5 - A*exp(-B*r), where A and B are constants. This correlation might imply that glass plasticity is associated with the competition between the α and the slow β relaxations in SLs. The underlying physics of this correlation lies in the heredity of the structural heterogeneity from liquid to glass. This work gives insights into both the microscopic mechanism of glass deformation through the SL dynamics and the complex structural evolution during liquid-glass transition.

  19. Reference manual for the POISSON/SUPERFISH Group of Codes

    SciTech Connect

    Not Available

    1987-01-01

    The POISSON/SUPERFISH Group codes were set up to solve two separate problems: the design of magnets and the design of rf cavities in a two-dimensional geometry. The first stage of either problem is to describe the layout of the magnet or cavity in a way that can be used as input to solve the generalized Poisson equation for magnets or the Helmholtz equations for cavities. The computer codes require that the problems be discretized by replacing the differentials (dx, dy) by finite differences (Δx, Δy). Instead of defining the function everywhere in a plane, the function is defined only at a finite number of points on a mesh in the plane.
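
    On a uniform square mesh of spacing h, this discretization replaces the Poisson equation at an interior node (i, j) by the familiar five-point formula (the uniform-mesh special case; source-term and unit conventions vary):

        \frac{u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1} - 4 u_{i,j}}{h^{2}}
        = -\,\frac{\rho_{i,j}}{\epsilon_0}.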

  20. Poisson and symplectic structures on Lie algebras. I

    NASA Astrophysics Data System (ADS)

    Alekseevsky, D. V.; Perelomov, A. M.

    1997-06-01

    The purpose of this paper is to describe a new class of Poisson and symplectic structures on Lie algebras. This gives a new class of solutions of the classical Yang-Baxter equation. The class of elementary Lie algebras is defined, and the Poisson and symplectic structures for them are described. An algorithm is given for describing all closed 2-forms and all symplectic structures on any Lie algebra G which decomposes into a semidirect sum of elementary subalgebras. Using these results we obtain the description of closed 2-forms and symplectic forms (if they exist) on the Borel subalgebra B(G) of a semisimple Lie algebra G. As a byproduct, we get a description of the second cohomology group H^2(B(G)).